I have these two sites: link1, link2. When I hadn’t visited my sites for a long time (1-2 months), ipfs.io was not able to load them.
The server was up and the ipfs daemon was running. After I ran some diagnostics (for example
ipfs stats bitswap,
ipfs stats bw,
ipfs stats dht), the site loaded.
I don’t think it was about the specific commands I ran, but rather that the IPFS peers started to discover the network for the given files and added the CIDs to their wantlists.
I would like to understand what was happening here. What can cause this? Is it possible that my IPFS peer does not advertise itself automatically if nobody requests anything from it?
These were the results of the diagnostics, by the way:
# ipfs stats bw
TotalIn: 14 GB
TotalOut: 8.8 GB
RateIn: 47 kB/s
RateOut: 20 kB/s
# ipfs stats dht
DHT wan (192 peers):
Bucket 0 (20 peers) - refreshed 16s ago:
Peer last useful last queried Agent Version
QmRKzpFsNGXQg6k1obwrjNGUwpwHRbdirEa4xwD1aj9oMX 0s ago 0s ago go-ipfs/0.4.21/
QmRMkTYMK9iuu3UqpwHhbCPcLPHEMVcaL9BurCGdNLrS8W 1s ago 1s ago go-ipfs/0.8.0/48f94e2
12D3KooWRA5WdCTGsFRatyx7dmRj16SqXacAsNDUyMjBuvVdhY16 1s ago 1s ago go-ipfs/0.10.0/
QmRoGVbUtf3Q4JkWYCyte5fmBdjHYnZdJyS6Qb2qBJgDst 1s ago 1s ago go-ipfs/0.8.0/
QmbEjWRSFtwRDzno8iSjZe2ntAM72ujcrhtRZvpNRffVxh 2s ago 0s ago go-ipfs/0.9.1/
QmTHdMNsDjPkE7fhG71smGSawtq74TeDFzTF8DBDdykpTb 1s ago 1s ago go-ipfs/0.8.0/
QmcvpmY8jrDmVJqsE4LxW91fbFRAgHdz5c5EhNtDr2zXDo 2s ago 2s ago go-ipfs/0.8.0/
12D3KooWJK1QQF1n9JTmMzLU52BWHV9FEUuQCfvD4PUu3XmBMLkK 3s ago 3s ago go-ipfs/0.8.0/16615d7
12D3KooWKo5qEttgaFEgRKUpwMLBDQJaMNpBrFcv2qPLXoTMWmoB 3s ago 3s ago go-ipfs/0.9.1/
QmRXP6S7qwSH4vjSrZeJUGT68ww8rQVhoFWU5Kp7UkVkPN 4s ago 0s ago go-ipfs/0.9.1/
12D3KooWD4S6jky1hfrCj79cNbMbRzxQ3PznoQBYzjPaAHeq2btt 4s ago 4s ago go-ipfs/0.10.0/
12D3KooWCKusV8PGhjcJr6Lo6sR5uC8hcJ32TSc2VBKyu1KnwzFZ 4s ago 4s ago rust-libp2p/0.2.0
QmaPpVkaD5ceri35pB9KgpTQ2PwJdknaXCxJwJxmNXTKCG 4s ago 4s ago go-ipfs/0.8.0/48f94e2
# ipfs stats bitswap
provides buffer: 0 / 256
blocks received: 0
blocks sent: 102
data received: 0
data sent: 324449
dup blocks received: 0
dup data received: 0
wantlist [0 keys]
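In case it helps with debugging: from any node you can check whether the DHT currently has provider records for a CID, and you can force your own node to re-announce one, using the dht subcommands. The `<cid>` below is a placeholder for the site's root CID:

```shell
# List the peers that currently advertise this CID on the DHT;
# an empty result suggests the provider records have expired
ipfs dht findprovs <cid>

# Re-announce from this node that it can serve the CID
ipfs dht provide <cid>
```

If `findprovs` returns nothing after a couple of months of inactivity, that would be consistent with the gateway having to search for a long time before finding your node.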
When using IPNS, the records will expire at some point. You can specify the lifetime at creation.
See → Command-line reference | IPFS Docs
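For example, assuming the default 24-hour lifetime is too short for a rarely updated blog, the record can be published with a longer one (the path below is a placeholder for your site's root):

```shell
# Publish an IPNS record that stays valid for four days instead of
# the default 24 hours (the node still has to be online to reprovide it)
ipfs name publish --lifetime=96h /ipfs/<your-site-root-cid>
```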
Then why did the site come back online? I did upload a new article for one of the sites, which triggers an
ipfs name publish for that site, but not for the other. Also, I don’t think it does an
ipfs name publish for the website itself, only for a JSON file and an articles folder.
UPDATE: I ran
ipfs id and then tried to open
ipfs.io/ipns/<id> before it started to work.
The other thing is that you’re using a gateway. Gateways can go offline or evict your website from their cache.
May I ask, which guide did you follow for setting this up?
I did the ProtoSchool tutorials for sure, and I was also reading the IPFS documentation page (https://docs.ipfs.io/).
So it’s likely that refreshing the gateway page for a longer time would have solved the problem?
This blog is for experimental purposes, by the way, and the experiment is about having a blog that is very resilient. The part I don’t really understand: if a server is running 24/7, wouldn’t that cause the other nodes to have the CIDs in their DHT tables?
If this thing with the DHT doesn’t work the way I thought, then maybe it would be good to run a script that periodically calls a curl command against a gateway, so it knows about the CID even when the website is not visited. But if I understand correctly, this is something that happens automatically.
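A minimal sketch of that idea, assuming a placeholder CID and the ipfs.io gateway (both would need to be replaced with real values):

```shell
#!/bin/sh
# Fetch the site through a public gateway so the gateway looks the
# CID up again; run this from cron, e.g. every few hours.
CID="bafyexamplecid"            # placeholder: your site's root CID
GATEWAY="https://ipfs.io"
URL="$GATEWAY/ipfs/$CID"

# Short timeout for illustration; in cron you may want a much longer
# one, since a cold gateway can take a while to find the content.
if command -v curl >/dev/null 2>&1; then
    curl -s -m 15 -o /dev/null "$URL" || echo "warm-up fetch failed for $URL"
fi
```

A crontab entry like `0 */6 * * * /path/to/warmup.sh` would run it every six hours.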
I’m actually thinking of working on such a project, to automatically repin IPNS records for you on a server so you don’t have to. This is my project, if you are interested in reading more about it: mrodriguez3313/IPNSGoServer (github.com). It is at a very early stage and I’m currently working on it part time, but I will ramp up production soon! If you want to offer ideas or suggestions, or better yet contribute, I am more than happy to hear from you!
That’s an interesting project, but my original idea was that this blog would survive without a central actor. Currently, the refreshing of the content can’t be done solely with ipfs-desktop, as we discussed here: Adding IPNS link to ipfs-desktop. So I wrote this bash script for Linux, and I started creating a Qt app for Windows. It is not compiled for Windows yet, but I think it would work. In the long term, I think there should be some standardized solution for pinning IPNS in ipfs-desktop, because these script solutions only work for tech-savvy people, and/or you need to run additional software.
What I would need now is for the DHT tables to be fresh, not on the systems where the bash script is running, but on the other nodes, especially gateways. I think, for this, I should first understand how the DHT refreshes. I thought that the DHT keeps a record of every CID that exists somewhere in the network, but now I’m not sure, because this two-month inactivity caused behavior that surprised me.
By the way, a problem I have is that an average user couldn’t upload a new blog post, because it requires handling private keys. I think your project could solve this if it were possible to just post with a hash, username, and password, but that would be a single point of failure, since your server would have to store the keys.
AFAIK your node reprovides its list of CIDs every 12 hours.
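For context, the reprovide interval is configurable; a sketch of checking and shortening it (as far as I know, the go-ipfs default is 12 hours, and DHT provider records expire after roughly 24):

```shell
# Show the current reprovide interval (empty output means the default)
ipfs config Reprovider.Interval

# Announce more aggressively, e.g. every hour (restart the daemon after)
ipfs config Reprovider.Interval 1h

# Or trigger a one-off reannouncement of everything right now
ipfs bitswap reprovide
```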
If you’re going for resilience, you can’t expect gateways to work. Use ENS maybe?
Also (shameless plug), my project can help with the blogging part, but there are many roadblocks for normies.
I think I will start learning Yew, your project made me curious.
If you want something to be always available to you, pin it.
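For example, resolving the IPNS name once and pinning whatever it currently points to (the peer ID below is a placeholder):

```shell
# Resolve the current IPNS target and pin it recursively, so this
# node keeps the blocks and keeps announcing them to the DHT
ipfs pin add "$(ipfs resolve -r /ipns/<your-peer-id>)"
```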
I’m not sure anymore that this is related to IPNS. I re-tried 1-2 months later, and my observation is that the site will not load at first, but if I try several times, it will. So I think the IPFS gateway is just that slow at finding content that has not been requested for a long time.