IPFS gateway timeouts

Hello all!

I’m very new to IPFS. I use it just for pinning my NFTs on the Teia.art marketplace.

THE THING IS… a few weeks ago all the images from Teia.art started appearing broken to me, and recently I’ve noticed that this may be related to the IPFS addresses. None of the links open in my browsers, and even the ipfs.io website does not work here.

I’ve asked on the Teia Discord channel and someone told me it could be an ‘IPFS gateway timeout’ problem.

Could any of you help me start seeing IPFS images again?

Thanks A LOT!!!

Hey there @Blue_Safari,

Sorry to hear you’re having trouble loading the images for your NFTs with IPFS.

This typically happens when the CIDs are not available/reachable on the IPFS network.

To resolve this, you can pin the images to an IPFS node that you run, or use a pinning service (like nft.storage, Pinata, or Infura).

Pinning services are basically IPFS nodes that ensure your CIDs are pinned and available to the IPFS network.
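For reference, recent versions of the Kubo (go-ipfs) CLI can talk to remote pinning services directly. A sketch of re-pinning an existing CID this way — the service name `mysvc`, the Pinata endpoint, `$API_KEY`, and `<CID>` are placeholders you’d replace with your own details:

```shell
# Sketch: pin a CID through a remote pinning service with the Kubo CLI.
# "mysvc", the endpoint, $API_KEY, and <CID> are placeholders.

# Register the pinning service once (Pinata's pinning-service endpoint shown as an example):
ipfs pin remote service add mysvc https://api.pinata.cloud/psa "$API_KEY"

# Ask the service to pin a CID that your node has (or that is findable on the network):
ipfs pin remote add --service=mysvc --name=my-nft-image <CID>

# Check pin status on the service:
ipfs pin remote ls --service=mysvc
```

Once the service reports the pin as complete, the CID is served from the service’s nodes even when your own machine is offline.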

Assuming you have the original image files, you can upload them again and you should get the same CIDs as the ones in your NFTs.

I also recommend checking out this practical explainer for IPFS gateways to understand how IPFS gateways work.

I deployed IPFS for a blockchain project. We need to fetch NFTs (metadata and image) by CID, but our IPFS gateway returns 504 timeouts. I peered our node with the strong peers listed at Peering with content providers | IPFS Docs, but it still times out for some CIDs.
My question is: how do I implement a robust IPFS gateway? The same CIDs get resolved by some of the public IPFS gateways.
Furthermore, I am running a single IPFS node inside EKS, not an IPFS swarm or cluster.

Try our gateway ipfs.4everland.io

Do you have those CIDs that you’re getting timeouts for pinned to the node on EKS?

@danieln thank you for replying,
I am able to get content that is pinned on our own IPFS node, but we have an NFT indexer DB that is updated with NFT metadata and data CIDs.

Below is a list of CIDs that our IPFS node times out on:
QmeMBccZ4XwZExsPNaGANDDPq4KHEghXAQH91cKA2167pJ
QmXaHteiJkmKpqsAS1xCEet8kZqQWuZ3GncjMR5afsrWay/1108.json
QmV8UTwuFQPP4x4pRLKcmWf59uH1dssmvBZrQi3dQMVbz8

but they can be retrieved here without any issue:
https://gateway.ipfs.io/ipfs/QmeMBccZ4XwZExsPNaGANDDPq4KHEghXAQH91cKA2167pJ
https://gateway.ipfs.io/ipfs/QmV8UTwuFQPP4x4pRLKcmWf59uH1dssmvBZrQi3dQMVbz8

I want to know the best practices for deploying IPFS so that data is retrieved quickly.

Some of the things I did are:

  1. Increased the UDP buffer size limit:
sysctl -a | grep net.core.rmem
net.core.rmem_default = 212992
net.core.rmem_max = 2500000
  2. Peered our IPFS node with the peers from Peering with content providers | IPFS Docs
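For reference, actually raising those buffers (rather than just reading them) would look roughly like this — the 2,500,000-byte value is the one the go-ipfs/quic-go warning commonly suggests, but treat it as an assumption to tune:

```shell
# Sketch: raise the UDP receive buffer limits that QUIC (used by IPFS) benefits from.
# 2500000 (~2.5 MB) is the value commonly suggested by the go-ipfs/quic-go warning.
sudo sysctl -w net.core.rmem_max=2500000
sudo sysctl -w net.core.rmem_default=2500000

# Persist across reboots:
echo "net.core.rmem_max=2500000"     | sudo tee -a /etc/sysctl.d/99-ipfs.conf
echo "net.core.rmem_default=2500000" | sudo tee -a /etc/sysctl.d/99-ipfs.conf
```

On Kubernetes these are node-level settings, so they have to be applied on the EKS worker nodes (or via a privileged init container), not inside the IPFS pod itself.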

Do I need to run our IPFS node with the server profile or badgerds? For now it is using the default profile.
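For reference, a sketch of what those two options look like in the Kubo CLI. Note that the server profile mainly disables local-network discovery (usually right for cloud hosts), while the badger datastore has to be chosen when the repo is created:

```shell
# Apply the server profile to an existing repo
# (disables local-interface discovery, suited to data-center hosts):
ipfs config profile apply server

# badgerds has to be selected at init time for a new repo:
ipfs init --profile badgerds,server

# An existing flatfs repo can't be switched in place; migrating it
# requires the separate ipfs-ds-convert tool.
```

Neither setting by itself fixes slow retrieval of third-party content, but the server profile avoids wasted dials on cloud networks.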

I checked those addresses you provided, and, while I was able to retrieve all of them, they each took a long time to find. Meaning, the problem isn’t on your end, it’s just that whoever is providing those blocks isn’t doing a very good job of it. Unfortunately for you, that’s what needs to improve.


Thanks @ylempereur,
Is there any way to increase the IPFS timeout, something I can change in ~/.ipfs/config?
Our user-to-IPFS traffic goes via AWS ALB → Nginx reverse proxy → IPFS pod.

So to conclude, we should expect around 40% of requests to our IPFS gateway to be lost, right?

Usually, the timeout actually comes from the web server/proxy in front of the gateway, not the gateway itself (but I’m sure the gateway has a timeout too).
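If the 504s are coming from that proxy layer, the settings to look at are the Nginx proxy timeouts (and the ALB idle timeout, which defaults to 60 seconds). A sketch of the relevant Nginx directives, assuming the IPFS gateway listens on 127.0.0.1:8080 — the 300s value is an assumption to tune:

```nginx
# Sketch: give the IPFS gateway more time to find content before Nginx returns 504.
# The upstream address and the 300s values are assumptions.
location /ipfs/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_read_timeout    300s;  # wait up to 5 minutes for the gateway to respond
    proxy_send_timeout    300s;
    proxy_connect_timeout 10s;
}
```

The ALB’s idle timeout would need to be raised to at least the same value, otherwise the ALB closes the connection before Nginx gives up.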

I just use the ipfs command on the CLI, which doesn’t time out, and let it run, possibly for hours, until it locates the data. That’s how I got your blocks.

Thanks, but your IPFS GW also gave a timeout.

Oh, that wasn’t me, that was someone else you were talking to.

Yes @ylempereur, that is another user (@4everlandorg) who suggested I use their IPFS GW, which still timed out for one CID.

@ylempereur now I can also get those CIDs that had timeouts; it seems that after you accessed those CIDs, they spread over the IPFS network.

Actually, it just means you got them from my node, as they must still be in the cache, and my node does a really good job of providing. Which doesn’t solve your problem, but demonstrates that this can work really well, if it’s done right.

@ylempereur yes,

Could you please share how you deployed your IPFS GW:

  • in a single pod inside K8s
  • in cluster of pods inside K8s
  • in a single machine (EC2 or…)
  • or in cluster of machines.

And how many resources did you give the service?
And for what purpose do you use it?

  • NFT pinning
  • website hosting
  • other

I’m just running IPFS Desktop on my Mac, nothing special.

The two things you want to make sure of are:

  • make sure your node is reachable by others
  • make sure you are using the accelerated DHT client
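A sketch of how you might check the first point from the node itself, using standard Kubo commands:

```shell
# Sketch: check which addresses your node announces to the network.
# If only private addresses (10.x.x.x, 192.168.x.x) appear under "Addresses",
# other peers can't dial you directly and you need port forwarding or
# an explicit announce address.
ipfs id

# A quick sanity check that you have live connections to other peers:
ipfs swarm peers | head
```

An externally reachable node is what lets other peers fetch your blocks directly instead of waiting for relayed or DHT-discovered paths.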

Thanks @ylempereur,
we will be using this as the core of our NFT service and also for pinning our static websites.

I need to dig into the accelerated DHT client.

Use this to turn it on:

ipfs config --json Experimental.AcceleratedDHTClient true


@ylempereur awesome thank you,

A quick question, please: does your machine have a public IP attached to it so other nodes can find your local IPFS node, or is it using your router’s dynamic public IP assigned by your ISP?

Sorry, I’m in a meeting, I’ll post a description after it ends.


My situation is a little bit complex, but I’ll give you a simplified, more typical version of it:

My ISP assigns one public IP to my router, which then uses NAT to provide connectivity to the various devices in my home. I have set up port forwarding on the router so that TCP 4001 and UDP 4001 forward to the same ports on my Mac, which the IPFS daemon uses for communication.

The daemon will announce my public IP along with the TCP and UDP ports, which is what other nodes will use to establish a connection.

On occasion, the IP address changes, but the daemon is able to notice and announce my new address when it happens.
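On a cloud host (like the EKS setup above), where the node only sees its private IP, the announced addresses can also be pinned down explicitly in the config — a sketch, with 203.0.113.7 standing in for your public IP:

```shell
# Sketch: explicitly announce a public address so peers can dial the node.
# 203.0.113.7 is a placeholder for your actual public IP.
ipfs config --json Addresses.Announce \
  '["/ip4/203.0.113.7/tcp/4001", "/ip4/203.0.113.7/udp/4001/quic"]'

# After restarting the daemon, verify what is actually being announced:
ipfs id
```

The corresponding TCP/UDP 4001 traffic still has to be allowed through to the pod (security groups, service, and any NAT in between) for the announced address to be dialable.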