IPFS content unavailable

Hi,

I'm creating this topic about an event that I don't fully understand.
I have a machine running an IPFS instance and a classic web server.
Both expose exactly the same content, and I monitor the availability of both with the Site24x7 service. I had 5 hours of unavailability on the IPFS content, despite the fact that the machine stayed up the whole time. I did nothing to make the content available again; it came back on its own.
My understanding is that the IPFS network serves https://ipfs.io/ipfs/… content from a distributed architecture, and performance is indeed much better through IPFS than from my machine alone. But I also thought that availability was much better by design. 5 hours of unavailability is huge; how is that possible?
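For context, the check is essentially just fetching the same file through the public gateway and through my web server. A simplified sketch of what the monitor does (the real monitoring runs on Site24x7; the CID and origin URL below are placeholders, not my real ones):

```python
# Simplified sketch of the availability check; the real monitoring runs on
# Site24x7. The CID and origin URL are placeholders.
import requests

CID = "QmYourContentCidHere"                 # placeholder CID
GATEWAY_URL = f"https://ipfs.io/ipfs/{CID}"  # content through the public gateway
ORIGIN_URL = "https://example.com/content"   # same content from my web server

def is_up(url, timeout=30):
    """True if the URL answers with HTTP 200 within the timeout."""
    try:
        return requests.get(url, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False

print("gateway:", is_up(GATEWAY_URL))
print("origin: ", is_up(ORIGIN_URL))
```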

Thank you for your advice and comments.

If you use IPNS: it's not super reliable yet, but it should improve soonish with the new work on IPNS over PubSub.

If you're serving static IPFS CIDs, as is usually recommended at the moment, maybe something severe happened in the DHT? Finding your content depends on being able to navigate the DHT to reach the nodes responsible for keeping track of who has certain data (the provider records). Maybe you got unlucky and all the nodes holding the records for your content went offline within a fairly short period, taking knowledge of your content with them. It's tricky.
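If you want to see what the DHT currently knows about your content, you can ask your own node. This is a rough diagnostic sketch, not an authoritative recipe: it assumes a go-ipfs daemon with its HTTP API on the default port 5001, the CID is a placeholder, and the `Type == 4` check relies on go-ipfs labelling provider events with code 4 in its routing query output.

```python
# Rough diagnostic sketch, assuming a local go-ipfs daemon with its HTTP API
# on the default port 5001. The CID is a placeholder.
import json
import requests

API = "http://127.0.0.1:5001/api/v0"
CID = "QmYourContentCidHere"  # placeholder CID

# Stream DHT query events and collect the peers reported as providers.
providers = []
with requests.post(f"{API}/dht/findprovs", params={"arg": CID},
                   stream=True, timeout=120) as resp:
    for line in resp.iter_lines():
        if not line:
            continue
        event = json.loads(line)
        # Type 4 is the "Provider" event in go-ipfs's routing query output.
        if event.get("Type") == 4:
            providers.extend(p["ID"] for p in event.get("Responses", []))

print(f"{len(providers)} provider(s) found:", providers)

# Re-announce the CID so fresh provider records get written to the DHT.
requests.post(f"{API}/dht/provide", params={"arg": CID}, timeout=120)
```

If that returns zero providers even though your node is online and has the data, it's consistent with the provider records having dropped out of the DHT; re-providing should make the content findable again after a while.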

You can improve the availability of your IPFS content by pinning it from multiple machines. Hopefully the coming "cohosting" GUI in the IPFS browser companion will encourage visitors to your web content to pin it and help serve it in the long term. You could also try pinning services like pinata.cloud or temporal.cloud: they'll pin your content to a bunch of high-availability servers around the world for a fee, and on Pinata the first gigabyte is free. More nodes serving your data means more nodes updating the DHT with records saying they provide that content, and more opportunities for the DHT to quickly regain knowledge of available locations, I think.
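As a concrete sketch of the multi-machine idea (hostnames and CID are placeholders, and pinning services like Pinata expose their own APIs instead of this one):

```python
# Minimal sketch: pin the same CID on several nodes you control, via each
# node's HTTP API (assumed reachable on port 5001). Placeholders throughout.
import requests

CID = "QmYourContentCidHere"  # placeholder CID
NODES = [
    "http://127.0.0.1:5001",      # the original machine
    "http://node2.example:5001",  # a second box that should also keep a copy
]

for api in NODES:
    # /api/v0/pin/add fetches the content if needed and pins it locally, so
    # the node keeps a full copy and announces itself as a provider.
    r = requests.post(f"{api}/api/v0/pin/add", params={"arg": CID}, timeout=600)
    print(api, "->", r.status_code, r.text.strip())
```

Each extra node that pins the CID is another independent copy and another set of provider records in the DHT, which is exactly the redundancy you're after.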

I am not an IPFS expert though. Perhaps there are other reasons things could go down like this.


Thank you!
Yes, I had read that availability wasn't perfect, but this test showed me that simply publishing content with the browser extension and a local instance is not reliable enough at the moment. Complementary services are necessary.

Yeah. It should keep getting better quickly. IPFS seems to be picking up momentum and support, with more and more people using it and contributing to it, so I expect issues like this will improve with time; it's still kind of early days. Core features like unixfs are still being worked out, with unixfsv2 coming soon, and resolution speed and accuracy are certainly things people are focused on improving in the near future. :slight_smile: