Unable to resolve IPNS name published from go-ipfs

Hey guys,

Sorry if this has been asked before; I searched a lot and couldn’t find anything of significance. Basically, I have an application that continuously pins new IPFS hashes and also publishes them to IPNS. I have a peer ID that starts with 12D3Koo, and I am able to access it using ipfs.io/ipns/12D3Koo..

Just a note: All the examples in the docs that I’ve seen have the IPNS hash/Peer ID starting with Qm instead of 12 :thinking:

const IPFS = require('ipfs-core')

async function main () {
    // spawn an in-process js-ipfs node
    const ipfs = await IPFS.create()
    // the IPNS name (peer ID) that go-ipfs published under
    const addr = '12D3Koo...'

    // name.resolve yields results as they are found; return the first one
    for await (const name of ipfs.name.resolve(addr)) {
        return name
    }
}

main().then(resolved =>{
    console.log(`resolved name is ${resolved}`)
})

But as soon as I try to resolve it from js-ipfs using the ipfs-core library with the code above, I get

Error: record requested for 12D3Koo was not found in the network

Is it not possible to fetch using ipfs-core what is published using go-ipfs?

Thanks in advance for any help

Whenever you IPNS-publish a CID you get back an object that contains a ‘name’ property (and a ‘value’ property). That name can then be resolved, or used to get the actual data.
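
In js-ipfs terms (which is what you’re using), that looks roughly like this; the /ipfs/ path below is just a placeholder for whatever content you have added:

const IPFS = require('ipfs-core')

async function publishExample () {
    const ipfs = await IPFS.create()
    // placeholder: use whatever /ipfs/<cid> your application produced
    const result = await ipfs.name.publish('/ipfs/Qm...')
    console.log(result.name)   // the IPNS name (this node's peer ID)
    console.log(result.value)  // the /ipfs/ path the name currently points to
}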

Sorry, so is that different from the one I use on the ipfs.io website? That’s the peer ID right? Can I get back the latest published IPFS hash from that ID?

The documentation on IPFS is super sketchy, isn’t it? It’s unfortunate that a seemingly well-funded project like this doesn’t place any priority on creating better docs and examples.

Anyway, here’s a bit of IPNS code I got running, which I know will be alien to you, but may contain enough to clarify how things work, even if you don’t know Java.

All that code does is create a DAG from some text, publish it, verify that it resolves, then publish updated content and verify that it can be read back as well. Frankly, that’s all I’ve ever done with IPFS, because once I learned that a publish can take several MINUTES to complete, I abandoned the idea of using IPNS in the way I had intended.
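
In js-ipfs, that same flow would look roughly like the sketch below (illustrative only, not a translation of the Java; the content strings are placeholders):

const IPFS = require('ipfs-core')

async function roundTrip () {
    const ipfs = await IPFS.create()
    const { id } = await ipfs.id() // this node's peer ID, i.e. its IPNS name

    // publish version 1 (publishing to the public DHT can take minutes)
    const v1 = await ipfs.add('hello, version 1')
    await ipfs.name.publish(`/ipfs/${v1.cid}`)

    // verify the name resolves to it
    for await (const path of ipfs.name.resolve(`/ipns/${id}`)) {
        console.log('resolved to', path)
    }

    // publish version 2 under the same name, then resolve again
    const v2 = await ipfs.add('hello, version 2')
    await ipfs.name.publish(`/ipfs/${v2.cid}`)
    for await (const path of ipfs.name.resolve(`/ipns/${id}`)) {
        console.log('now resolves to', path)
    }
}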

It’s unfortunate indeed. I’m thinking of abandoning IPFS altogether and coming up with some custom solution tailored to my needs. Even when IPNS works through the ipfs.io gateway it is way too slow, even with multiple servers pinning what I need. Perhaps in a few years, when the project is a bit more mature, I’ll take a look at it again.

Thank you for the examples and your time.

Glad to help. Remember that ‘ipfs.io’ is imo just a demo site. It’s not really for production use.

The only way to use IPFS in production is to either rely on some pinning service (like Pinata) to host files for you, or else run an instance of IPFS itself on a server of your own. That’s what I’m doing with my own platform. They provide a Docker image that I use, which makes it very simple (in that same project I linked for you). Good luck, and don’t give up on IPFS too quickly. It’s a great project; they just need better examples and more actual narrative discussion in their docs.
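
For reference, running the official image (ipfs/kubo nowadays, previously ipfs/go-ipfs) is something along these lines; the ports and names here are just the usual defaults:

# 4001 = swarm, 5001 = API (keep it local), 8080 = gateway
docker run -d --name ipfs_node \
  -p 4001:4001 \
  -p 127.0.0.1:5001:5001 \
  -p 127.0.0.1:8080:8080 \
  ipfs/kubo:latest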

EDIT: Oh, yeah, about IPNS: I agree it’s too slow to really use for most use cases. I didn’t need IPNS, so I get along OK without it.

I use ipfs.io as a synonym for a public gateway; if it’s accessible there, it should be accessible anywhere, in my mind. For production use I thought of running a full-blown IPFS node. Weirdly, go-ipfs can resolve the kind of hashes I mentioned (12D3Koo..) with no issues, albeit a bit slowly, and it gets faster as I add more nodes. It’s only js-ipfs that cannot resolve them at all.

I guess you’ve searched for “resolve” on this discussion board? There is another user on here who seems to have a particular machine that simply cannot resolve, and I replied to him just this morning that my current best guess is that it’s network-related or configuration-related somehow. Maybe js-ipfs has some debug logging you can turn on to see where it failed internally, to get more clues than the “not found in the network” error. Good luck.
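
If it helps, js-ipfs logging is usually enabled through the debug module’s environment variable, something like the line below (the script name is a placeholder, and the namespace pattern may need tweaking):

DEBUG='ipfs*,libp2p*' node your-script.js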

Good point. My home network is behind heavy NAT, so I tried the same piece of code on a public server; it yields the same “was not found in the network” error.


Hi,

In order to make IPFS/IPNS useful and reliable, you have to consider how big and consistent your IPFS node swarm is. Every P2P distributed information system is subject to the CAP theorem: https://en.wikipedia.org/wiki/CAP_theorem

So, if you publish to the public IPFS network that you reach through ipfs.io, the DHT is too big to keep consistency up there. You have to “shape your swarm” and accelerate IPNS replication.

If your data changes at a slow pace, you can use the DNSLink technique to maintain accessibility.
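
For example, DNSLink is just a TXT record on a _dnslink subdomain that points at your content or your IPNS name (the domain here is hypothetical):

_dnslink.example.com.  TXT  "dnslink=/ipns/12D3Koo..."

A gateway can then serve it at ipfs.io/ipns/example.com, and you update the record (or the IPNS name behind it) when the content changes.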

If your IPNS updates have to be quick, these are ways to “make it work”:

  1. Use IPFS Cluster, but all of your nodes need to be “equal”.

  2. Activate pubsub to spread IPNS records (a js-ipfs equivalent is sketched just after this list). To do so, enable the option:

  • ipfs config Pubsub.Router gossipsub
  • launch the ipfs daemon with these flags: --enable-pubsub-experiment --enable-namesys-pubsub
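
On the js-ipfs side, the corresponding experimental option is passed when the node is created; a sketch, assuming ipfs-core as in the original post:

const IPFS = require('ipfs-core')

async function createWithIpnsPubsub () {
    // EXPERIMENTAL.ipnsPubsub enables IPNS over pubsub in js-ipfs
    return IPFS.create({
        EXPERIMENTAL: { ipnsPubsub: true }
    })
}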

Then control your swarm… manually, using bootstrap nodes of your own, and even swarm.key encryption for a private network.

If your intention is to make it fully automatic, you have to couple your swarm with a “social/trust” communication layer. I succeeded using Scuttlebutt, a great “off grid” protocol (see: 2 years struggling with IPFS… And how I succeeded using it with ScuttleButt!).

Now I connect it to https://cesium.app wallets and https://gchange.fr, a great libre currency experimentation network.

So the CAP theorem is addressed by swarm control. You have to find the right way of doing it depending on your needs and application.

Hey mate,

I think the problem is isolated to js-ipfs; on the same machine, go-ipfs is able to resolve without issues.

The difference between 12D3… and Qm… peer IDs is the type of key they are using, I think (i.e. Ed25519 vs. RSA).

I don’t think that js-ipfs would have a problem with that (check that you are integrating against the latest library). It may simply be that 12D3 has bad luck and the DHT nodes that should be storing its value are not well connected or responsive.
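
One thing worth trying: explicitly connect the js-ipfs node to the go-ipfs node that published the name before resolving, to at least rule out basic connectivity problems. A rough sketch (the multiaddr and peer ID are placeholders for your own node):

const IPFS = require('ipfs-core')

async function resolveViaDirectPeer () {
    const ipfs = await IPFS.create()

    // placeholder multiaddr of the go-ipfs node that published the name
    await ipfs.swarm.connect('/ip4/203.0.113.10/tcp/4001/p2p/12D3Koo...')

    for await (const name of ipfs.name.resolve('12D3Koo...')) {
        return name
    }
}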

It has been mentioned on other threads: the reason IPNS is so unreliable is the large number of nodes in the network on older versions of IPFS (<0.5.0) that are not dialable. If people upgraded, it would work better. So far IPFS has been cautious about keeping backwards compatibility and not breaking older nodes.

Yes, the derivations of the hashes do seem different indeed; I guess this warrants an update to the docs. I’ve also made sure that the js-ipfs I’m using is the latest, and I’ve added a bunch of go-ipfs nodes to perhaps make the discovery process easier, including trying to add the publishing node as a bootstrap node. Nothing has worked so far.

Old discussion, I know, but this is a new variation I’m trying to understand.

I have 3 kubo servers, all running the same version (0.17.0). All 3 servers list the others in the Peering.Peers section of their config files. I used ipfs key gen to generate a key on 2 of the servers; let’s call those server A and server B.
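
For anyone else reading, the section I mean looks roughly like this in each server’s config (the IDs and addresses below are placeholders):

"Peering": {
  "Peers": [
    { "ID": "12D3Koo...A", "Addrs": ["/ip4/203.0.113.1/tcp/4001"] },
    { "ID": "12D3Koo...B", "Addrs": ["/ip4/203.0.113.2/tcp/4001"] }
  ]
}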

I can successfully do an ipfs name resolve of server A’s key on server B, and resolve server B’s key on server A. However, I cannot resolve either key from server C. I even used ipfs swarm connect to ensure I had a connection to both server A and server B, although that shouldn’t be necessary with them listed in the Peering section of the config file.

On server C the response from ipfs name resolve /ipns/<name> is immediate: Error: could not resolve name. The exact same command works fine on server A or B.

What could possibly be going on here? Server C has over 700 peer connections, including those to servers A and B.


OK, I figured out what was going on, and have resolved the problem I described.

The issue is permissions on server C. I was running the ipfs name resolve command from a different Linux user account than the one the ipfs daemon was running under. Although I had no problems adding or pinning files from the other account, name resolution was blocked.

I figured that out from the clue that the error came back immediately, so I looked at the .ipfs folder contents and noticed the keystore folder was restricted (which makes total sense) to the account owner only. This is a bug, though: even if using ipfs from an account other than the daemon’s is not a valid use case, the error should be reported differently.
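
For anyone hitting the same thing, the giveaway looks something like this (output abbreviated; the user name is just an example):

$ ls -ld ~/.ipfs/keystore
drwx------ ... ipfsuser ipfsuser ... keystore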