Confusion about IPNS

I have read through the various docs and examples and am still a bit confused about IPNS. My use case is pretty standard, I think: I want to publish a webapp and some assets on a private IPFS swarm. Other than using a private swarm, that is exactly the use case for which IPNS is introduced in the samples etc.

However, I have had various problems with this.

  • publishing and resolving were sometimes unbearably slow.
  • publishing does not work at all if a node is not connected to the swarm (has zero peers). I understand why it is done this way, but there should be an option to update the IPNS record just locally and distribute the update on reconnect. A swarm of a single node should still work. Imagine an app on a device that currently has no network connection.
  • I frequently saw mentions that you need to continually republish every 12h or so for IPNS to work. IMHO that defeats the purpose of IPNS, because now I need some central place that is the authority for which hash is associated with which name. But maybe I am just not getting what IPNS is meant for.
  • I saw mention that IPNS is to be replaced with something called IPRS. When will this happen?

Currently I am trying to use IPNS for permanent publishing by just setting both the --lifetime and --ttl parameters to some very high values (years). Will this work? It certainly seems to work now, but that could also be because we are republishing frequently while the app/assets are in development.

Is there some more detailed documentation about IPNS, specifically the exact purpose and the expectations regarding performance that I have missed?

2 Likes

Our DHT is currently very slow. We’re working on ways to fix this by (a) speeding up the DHT and (b) subscribing to IPNS updates over pubsub for a period of time after making an initial lookup (under the assumption that we’re probably interested in that key).

In general, I agree. However, this is really just a fact of DHTs, not IPNS. That is, you have to re-publish the DHT record every 12 hours. Otherwise:

  1. Due to node churn, your IPNS record is likely to disappear anyways (nodes will come and go and forget the record). DHTs are best-effort.
  2. The DHT would grow unbounded. This could also be used to attack the DHT by filling it with bogus records.
  3. You’d be vulnerable to replay attacks. That is, if IPNS records didn’t expire and the network ever forgot your current IPNS record, an attacker would be able to re-publish any of your old records as if they were new.
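The expiry-based replay protection can be sketched in a few lines. This is purely illustrative: the field names (`value`, `sequence`, `expires_at`) are hypothetical, not the actual go-ipfs record format, and signature verification is omitted.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IpnsRecord:
    value: str          # e.g. "/ipfs/Qm..."
    sequence: int       # bumped by the publisher on every update
    expires_at: float   # unix timestamp after which the record is invalid

def accept(stored: Optional[IpnsRecord], incoming: IpnsRecord,
           now: float) -> Optional[IpnsRecord]:
    """Decide which record a DHT node keeps (signature checks omitted)."""
    if incoming.expires_at <= now:
        return stored    # expired: replaying an old signed record fails here
    if stored is None or incoming.sequence > stored.sequence:
        return incoming  # fresher record from the key owner wins
    return stored

# If records never expired, an attacker holding any old signed record
# could re-publish it after the network forgot the current one.
old = IpnsRecord("/ipfs/QmOldHash", sequence=1, expires_at=100.0)
print(accept(None, old, now=200.0))  # None: expired replay is dropped
```

The expiry check is what makes forgetting safe: an attacker can replay an old record, but only while it is still inside its validity window, during which the real publisher's higher sequence number wins anyway.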

Now, there are a few solutions:

  1. Allow semi-trusted third parties to re-publish your IPNS records. However, to get this to work correctly, we’d need two keys: one for publishing, one for the actual record validity.
  2. Use a consensus system (e.g., a blockchain). For example, Namecoin exists for precisely this reason.

IPRS is a generalization of IPNS. That is, IPNS is the name system that allows one to map public keys to values. IPRS is a more general-purpose record system for publishing records with both ordering information (A is newer than B) and validity information (e.g., A is signed by key X). This would also be useful for publishing:

  1. Provider records (peer Y provides block X).
  2. Routing records (peer Y can be found at internet address X).

Etc… Basically, instead of treating every record we store in the DHT (or any other key-value store we end up implementing) as a completely new entity, we’d like to create a general-purpose system for validating and ordering all records stored in the DHT.
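A rough sketch of that idea, with hypothetical names: one abstract record interface that exposes only validity and ordering, so the DHT-side storage logic can stay generic across record types.

```python
from abc import ABC, abstractmethod
from typing import Optional

class Record(ABC):
    """Sketch of an IPRS-style record: validity + ordering, nothing else."""

    @abstractmethod
    def is_valid(self, now: float) -> bool:
        """Validity: e.g. expiry and signature checks."""

    @abstractmethod
    def newer_than(self, other: "Record") -> bool:
        """Ordering: is this record newer than `other`?"""

class IpnsRecord(Record):
    """One concrete record type; provider/routing records would be others."""
    def __init__(self, value: str, sequence: int, expires_at: float):
        self.value, self.sequence, self.expires_at = value, sequence, expires_at

    def is_valid(self, now: float) -> bool:
        return now < self.expires_at   # (real records would also verify a signature)

    def newer_than(self, other: "Record") -> bool:
        return self.sequence > other.sequence

def store(current: Optional[Record], candidate: Record,
          now: float) -> Optional[Record]:
    """Generic DHT-side logic that works for any Record subtype."""
    if not candidate.is_valid(now):
        return current
    if current is None or candidate.newer_than(current):
        return candidate
    return current
```

The point of the abstraction is that `store` never needs to know it is handling an IPNS record, a provider record, or a routing record.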

Currently I am trying to use IPNS for permanent publishing by just setting both the --lifetime and --ttl parameters to some very high values (years). Will this work? It certainly seems to work now, but that could also be because we are republishing frequently while the app/assets are in development.

No. DHT nodes will forget these records eventually. DHT nodes automatically expire all DHT records after at most 24 hours.
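To make that concrete, here is a toy model (not go-ipfs code) of a DHT node’s record store: the node forgets a record within 24h of receiving it, no matter how long a `--lifetime` the publisher asked for.

```python
MAX_RECORD_AGE = 24 * 60 * 60  # DHT nodes drop records after at most 24h

class DhtStore:
    """Toy sketch: node-side expiry is independent of the record's lifetime."""
    def __init__(self):
        self._records = {}  # key -> (value, received_at)

    def put(self, key, value, now):
        self._records[key] = (value, now)

    def get(self, key, now):
        entry = self._records.get(key)
        if entry is None:
            return None
        value, received_at = entry
        if now - received_at > MAX_RECORD_AGE:
            del self._records[key]  # forgotten regardless of --lifetime
            return None
        return value

s = DhtStore()
s.put("k", "/ipfs/Qm...", now=0)
print(s.get("k", now=3600))                 # still there after 1h
print(s.get("k", now=MAX_RECORD_AGE + 1))   # None: the node has forgotten it
```

This is why a multi-year `--lifetime` only helps if someone keeps republishing: the lifetime bounds how long a record *may* be considered valid, while each node independently bounds how long it *will* remember it.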


I hope this answered most of your questions. Unfortunately, I’m unaware of any decent IPNS documentation (although there’s probably some scattered around somewhere if anyone feels like chiming in…).

2 Likes

Thanks for the detailed response. I think I now have a pretty good understanding of what IPNS is. I just don’t know what it’s for. I was originally thinking that it is a distributed replacement for DNS, but with the 12h limitation this is obviously not the case…

How does the 12h limitation solve this? You could write a script that generates a large number of keys and publishes values for those keys as quickly as possible to overload the DHT. I suspect that you could do some real damage with just a single powerful node in 12h.

Couldn’t you do something like renew names whenever they are resolved? Or introduce the concept of “pinning” a name (a node indicates that it is interested in the value of a name and thus willing to share the burden to remember the value)?

Will this still have the 12h limitation? From the mission statement “… span massive distances (>1AU) or suffer long partitions (>1 earth yr) …” it seems not. 12h is not a long time for real interplanetary use cases.

So, with IPRS in place, would I be able to publish an app or a website using just IPFS components?

Let’s say you are a political dissident and want to publish a static website, e.g. a blog, on IPFS. You want to update it every few days when you write a new blog post. You want the blog to stick around as long as people are frequently reading it, even if there is no node anymore that continuously republishes it (all your EC2 instances that do the republishing have been shut down, and your private computer has been confiscated). You don’t want people to have to pass around the hash of the last blog entry.

This would seem to be a very good use case for IPFS, but with the current components (IPFS, IPNS and pubsub) this is not possible, right? Will IPRS allow this?

1 Like

It’s actually a combination of DNS (long term) and HTTP (fast updates). Again, the 12h limitation is a limitation of the DHT, not IPNS specifically. They’re two different systems.

For example, we’re currently working on making IPNS work over pubsub (we’ll likely have even shorter lifetimes for those records) to better cover the HTTP case (faster updates). As I said before, we’d also like some form of blockchain solution (would solve the 12h limit and work well for long-lived records) but that has yet to be implemented or even fully designed, thoughts here:

How does the 12h limitation solve this? You could write a script that generates a large number of keys and publishes values for those keys as quickly as possible to overload the DHT. I suspect that you could do some real damage with just a single powerful node in 12h.

At the moment, you could probably do this. Eventually, we’ll likely need to find a way to protect against DoS attacks (e.g., proof of work, reputation, etc.). However, for now, the 12h limitation means that nobody will expect records to stick around forever so nodes that become overloaded can flush infrequently requested IPNS records (not currently implemented because this isn’t on fire (yet)).

Couldn’t you do something like renew names whenever they are resolved?

As I said above, we’ve considered doing this explicitly: you’d give a republish key to a semi-trusted node to allow it to republish your IPNS records. However, this hasn’t been a high priority. Our current priority with respect to IPNS is making IPNS queries faster (they’re currently absurdly slow).

Or introduce the concept of “pinning” a name (a node indicates that it is interested in the value of a name and thus willing to share the burden to remember the value)?

That wouldn’t work with a DHT. By the nature of how DHTs operate, DHT nodes can’t choose which records they store (they can store them anyway, but it’ll be hard for peers to find those records).

Also, if we simply allowed nodes to serve records indefinitely (no re-signing), we’d still have the replay attack issue.

Will this still have the 12h limitation? From the mission statement “… span massive distances (>1AU) or suffer long partitions (>1 earth yr) …” it seems not. 12h is not a long time for real interplanetary use cases.

The system as a whole is designed to handle these cases; that doesn’t mean the current implementation does. Interplanetary DHTs (well, Kademlia DHTs at least) don’t work at all. We’d probably have separate DHTs (or something better) on every planet and specify timeouts using a clock that takes distance into account (expires = date + distanceFromSource/c).
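For a feel of the numbers in that formula (a sketch; the explicit `lifetime` term is my illustrative addition, since the original expression only shows the light-delay correction):

```python
# Sketch of the distance-aware expiry idea: expires = date + distanceFromSource/c

C = 299_792_458.0      # speed of light, m/s
AU = 1.495978707e11    # one astronomical unit, metres

def expires(published_at: float, lifetime: float, distance_m: float) -> float:
    """Expiry time as judged by a node `distance_m` away from the publisher."""
    return published_at + lifetime + distance_m / C

# Light-travel correction at 1 AU: roughly 499 seconds (~8.3 minutes),
# so a record can never look "expired on arrival" just because of distance.
print(round(AU / C))  # 499
```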

That statement is specifically referring to the fact that the current system (HTTP+SSL+central servers) can’t work on an interplanetary scale (without trusted servers on every planet). As a matter of fact, it already starts to break down on a planetary scale (hence CDNs).

On topic, with respect to IPRS…

No, IPRS is just a generalization of IPNS. It’s completely unrelated to the 12 hour issue; that’s DHT-related. As this is all a bit confusing, here are the three layers:

  1. IPNS - a name system. Maps a cryptographic key to a file in IPFS.
  2. IPRS - a record system. An abstract record that can be validated and queried for. All IPNS records are (well, would be) IPRS records but not all IPRS records would be IPNS records.
  3. DHT, Blockchains, Pubsub, etc. - Ways to distribute these records (and more).

The 12 hour limitation comes from level 3. In order to be able to publish permanent records, we’d need some key-value store with actual consistency guarantees (like a blockchain).

1 Like

Thanks. I will take a look. May I ask what’s the urgency of reducing IPNS latency? Why wouldn’t I use pubsub directly if I have an application that requires quick communication?

In my mind I had this model:

  • IPFS (the merkle tree and distribution mechanism) for large amounts of data
  • IPNS to have a small number of mutable roots pointing into IPFS, e.g. for apps and their state
  • pubsub for when high speed is necessary

The biggest thing I am currently missing is a mechanism to replace DNS that does not have the 12h limit.

I guess given the current limit I will use DNS with the TXT record trick for what I was originally planning to use IPNS for. But it does not feel right…

I don’t get the replay attack problem. Due to the nature of IPNS we know that only the owner of the private key can publish to an IPNS name (hash of the public key, right?). So if the published data contains a wallclock timestamp or sequence number, couldn’t you always determine which is the newer value and discard the old one when storing in the DHT?

So an attacker would first somehow need to make sure that the DHT forgets the newest record (maybe by flooding with random values to force the DHT to discard things), and then flood it with the old record, only to restore an old but legitimate version of the value.
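That scenario can be played out in a few lines (a toy DHT, not the real implementation): sequence numbers do reject stale records, but only for as long as the DHT still remembers the newest one.

```python
dht = {}  # key -> (sequence, value); a toy stand-in for the DHT

def publish(key, record):
    """Store a record unless a higher-sequence record is already known."""
    current = dht.get(key)
    if current is None or record[0] > current[0]:
        dht[key] = record  # a lower sequence number is normally rejected

publish("name", (2, "/ipfs/QmNew"))
publish("name", (1, "/ipfs/QmOld"))   # rejected: stale sequence number
assert dht["name"] == (2, "/ipfs/QmNew")

# Attacker floods the DHT until the node flushes its records...
dht.clear()
# ...then replays the old-but-legitimately-signed record:
publish("name", (1, "/ipfs/QmOld"))
print(dht["name"])  # (1, '/ipfs/QmOld') -- the old value is restored
```

So the sequence number alone is not enough; an expiry window bounds how long such a replayed record can be passed off as current.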

A blockchain is a linear ledger of transactions, so it is not partition tolerant. At least traditional blockchains aren’t. So how would it work over interplanetary distances, when a roundtrip from one part of the network to the other takes hours or days? Or when you have devices that are offline for long periods of time and only infrequently connected to the rest of the internet?

I see that you already have this as a con for the strict consistency option: “It’s not interplanetary. At the very least, we’d need a separate IPNS network per “latency zone”.”. But I would add that this also causes problems in case of normal network partitions, e.g. a device or a network is disconnected from “the internet”, or there is a partition of the internet itself (can happen!).

1 Like