Understanding guarantees of the improved IPNS over pubsub

I know that in 0.5 go-ipfs landed improved IPNS-over-pubsub, but I am struggling to find information about its mechanism and guarantees. I found some bits in the experimental features docs, but there are still a few unanswered questions.

I am mainly interested in what happens with records of offline nodes.

  1. I know that for the old DHT method you have to republish the IPNS record once in a while; is that also needed for pubsub?
  2. If a record is kept alive “in pubsub”, then it most probably can’t be retrieved using the DHT method, right?
  3. If republishing or something else is needed to keep a record alive in the network, can it be delegated to somebody else? Preferably without giving away the private key?

@adin could you please comment on these?

Sort of. IPNS over PubSub periodically rebroadcasts records as a backup in case any issues arise. Additionally, IPNS records in general must be republished once their “lifetime” has elapsed (i.e. every record has an expiration time in order to allow the publisher to guarantee some degree of freshness about the data, more info in the spec).
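The lifetime rule can be illustrated with a toy sketch (this is not go-ipfs code; per the IPNS spec, records carry their expiration as an RFC3339 EOL timestamp in the Validity field, which is what the string argument here stands in for):

```python
from datetime import datetime, timezone

def is_expired(validity_rfc3339: str) -> bool:
    """IPNS records carry an EOL (end-of-life) validity timestamp;
    once it passes, the record must be republished to stay valid."""
    eol = datetime.fromisoformat(validity_rfc3339.replace("Z", "+00:00"))
    return datetime.now(timezone.utc) >= eol

print(is_expired("2020-01-01T00:00:00Z"))  # → True: this record needs republishing
```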

As described in that doc, go-ipfs will publish to BOTH the DHT and PubSub. However, if for some reason you published IPNS record v7 over PubSub and v8 over the DHT, go-ipfs will not find v8 for you, since it won’t waste time doing a DHT query when the data is already available.

Yes, this is actually achievable now (as long as the record isn’t expired) for both PubSub and the DHT; it’s just that there are unfortunately not yet APIs to make third-party republishing of IPNS records easy.

  • DHT
  • PubSub
    • This will actually work by default if you just keep a third-party node online. However, if that third-party node restarts, it won’t keep rebroadcasting the record.
    • You can manually publish the IPNS record to the channel with the correct name if you want (the pubsub topic name can be computed with the tool above or just by following the spec).

Note about lifetimes: There’s currently a bug in go-ipfs where the republisher will clobber the lifetime you originally set after a republish (https://github.com/ipfs/go-ipfs/issues/7537).


Thanks a lot! I will have to revisit our solution and maybe will come up with some follow-up questions. Thanks again!


> As described in that doc go-ipfs will publish to BOTH the DHT and PubSub

I do see the published values on both on my machine, but I’m having trouble finding my IPNS value on the DHT from other machines.

✅ go-ipfs$ ipfs name resolve // resolves ok
✅ go-ipfs$ ipfs dht get /ipns/Qmhashhhh // shows value ok

BUT, when I go to a go-ipfs node on another machine, I cannot:
❌ go-ipfs-another-machine$ ipfs name resolve // Error: routing: not found

Even though the 2 nodes are connected:
✅ go-ipfs-another-machine$ ipfs swarm connect /ip4/..../multiaddr // connected ok

So I try to see the providers, but:
❌ go-ipfs$ ipfs dht findprovs /ipns/Qmhashhhhh // Error: selected encoding not supported

So, my many questions are:
❓ Any ideas on how to troubleshoot why my published value isn’t showing up elsewhere on the DHT?

❓ What encoding is required for ipfs dht findprovs so I can see where my values are being replicated?

❓ I presume I don’t have to manually run ipfs dht provide, do I? If so, 1) is there a way to do this automatically, and 2) is there any special encoding I need for the key in ipfs dht provide <key>? (I tried ipfs dht provide /ipns/QmHashhhh but the encoding is not supported; is it b64 or b32?)

@DougAnderson444 Without a little more information I cannot tell you why ipfs name resolve /ipns/QmKeyHash isn’t resolving on the other machine. It also seems like this issue has very little to do with IPNS over PubSub and is likely a result of either networking/configuration issues (more likely) or other IPNS issues (also possible).

Below is some more information on IPNS over PubSub that should clarify a bit more how it interacts with the DHT and clear up some of your questions/issues. Lmk if you have any more questions/concerns.

DHT usage by IPNS

DHT FindProvs and IPNS over PubSub

TLDR: ipfs dht findprovs $ipnsDHTRendezvous, where ipnsDHTRendezvous is defined for a given base58-encoded multihash of an IPNS public key QmIPNSKey as multihash(sha256, "floodsub:" + "/record/" + base64url-unpadded("/ipns/" + base58Decode(QmIPNSKey))). Note that while ipfs dht findprovs internally uses multihashes, it takes a CID. The go-libp2p-discovery code deals with this by creating a CIDv1 with the Raw multicodec.
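That TLDR formula can be sketched in stdlib-only Python. This is a rough illustration, not go-ipfs’s actual code (the ipns-utils tool linked below does this properly), and `Qmhashhhh` is just the placeholder key from the post, not a real 34-byte multihash:

```python
import base64
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_decode(s: str) -> bytes:
    """Decode a base58btc string (e.g. a Qm... key) to raw bytes."""
    n = 0
    for ch in s:
        n = n * 58 + B58_ALPHABET.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    # each leading '1' encodes a leading zero byte
    return b"\x00" * (len(s) - len(s.lstrip("1"))) + raw

def ipns_dht_rendezvous(qm_ipns_key: str) -> bytes:
    binary_id = base58_decode(qm_ipns_key)
    # topic = /record/base64url-unpadded("/ipns/" + BINARY_ID)
    topic = "/record/" + base64.urlsafe_b64encode(b"/ipns/" + binary_id).decode().rstrip("=")
    # rendezvous = multihash(sha256, "floodsub:" + topic)
    digest = hashlib.sha256(("floodsub:" + topic).encode()).digest()
    return bytes([0x12, 0x20]) + digest  # 0x12 = sha2-256 code, 0x20 = 32-byte length

print(ipns_dht_rendezvous("Qmhashhhh").hex())
```

To turn that multihash into something `ipfs dht findprovs` accepts, it would still need to be wrapped as a CIDv1 with the Raw multicodec, as noted above.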

Recall IPNS over PubSub is not enabled by default (as of go-ipfs v0.6.0) and requires --enable-namesys-pubsub.

ipfs dht findprovs finds Provider Records (i.e. the multiaddrs of peers who have advertised some interest in a particular key, in this case a multihash). This is used for finding peers who have IPFS content, as well as for finding peers who have expressed interest in an IPNS over PubSub topic.

The key used for IPNS over PubSub provider records is SHA256("floodsub:" + IPNS-over-PubSub Topic Name). The IPNS over PubSub topic name is specified/defined; however, the IPNS over PubSub DHT record isn’t yet in the spec.

The IPNS over PubSub topic name is defined in https://github.com/ipfs/specs/blob/master/naming/pubsub.md#translating-an-ipns-record-name-tofrom-a-pubsub-topic and https://github.com/ipfs/specs/blob/master/IPNS.md as /record/base64url-unpadded("/ipns/BINARY_ID"), where BINARY_ID is the wire representation of the multihash of the IPNS public key.

I’ve added some tools for calculating some of these identifiers to https://github.com/aschmahmann/ipns-utils, and just (as of a few minutes ago) added a new function for calculating the DHT rendezvous record name.

DHT IPNS Records

IPNS records (i.e. the full record as defined in the IPNS spec, containing things like the path it points to, e.g. /ipfs/QmMyData) are published to the DHT using the equivalent of ipfs dht put $key $value, where the value is the IPNS record and the key is /ipns/BINARY_ID (defined above and in the spec).
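The record key construction itself is simple; the point worth illustrating is that BINARY_ID here is the raw multihash bytes, not the base58 "Qm..." text form (a sketch; the 32 zero bytes stand in for a real sha2-256 digest):

```python
def ipns_record_dht_key(binary_id: bytes) -> bytes:
    # the DHT key under which the full IPNS record is stored:
    # "/ipns/" + binary multihash (raw bytes, not base58 text)
    return b"/ipns/" + binary_id

# hypothetical multihash: 0x12 (sha2-256), 0x20 (32-byte length), zero digest
mh = bytes([0x12, 0x20]) + bytes(32)
key = ipns_record_dht_key(mh)
```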
