Folder of 8888 images, each around 13MB, not being discovered by other IPFS services

I am supporting a client’s NFT project, and helping them sort out their image and metadata hosting on IPFS.

Normally, I run my own IPFS node on my local system, and from there I upload a folder of NFT images and then a folder of corresponding NFT metadata JSONs. Then I use a service like Pinata to create an account for the client and pin the folders to their Pinata account by using the pin-by-CID feature.
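Roughly, that workflow looks like the following sketch (the ./images and ./metadata paths are placeholders, not your actual folder names):

```shell
# Add the images folder with CIDv1; kubo prints the root CID on the last line.
ipfs add --recursive --cid-version=1 ./images/

# Add the metadata folder, whose JSON files reference the images root CID.
ipfs add --recursive --cid-version=1 ./metadata/

# The two root CIDs printed above are what you paste into Pinata's
# "pin by CID" feature; Pinata then fetches the content from your node
# over the IPFS network.
```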

My current client just informed me that their images are going to be around 13MB each and there are 8888 of them.

I just did a test using a dummy image of around 10MB and copying it into a folder 8888 times. I can upload it to my IPFS node with no problems. I run ipfs block stat bafyMYHASH and get a size of 487734, so I understand this is below the 1,000,000-byte limit that ensures the file and its contents are discoverable.
My issue is that I have tried accessing the image folder and the underlying images via public gateways and get 504 timeout errors, and when I try pinning the CID to Pinata, it just seems to be stuck searching for this CID…
Is a folder of 8888 13MB images too big for IPFS? Is there anything I am missing to make this folder discoverable and pinnable on a service like Pinata?
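The block-size check described above can be scripted like this (bafyMYHASH is a placeholder for your actual root CID, and the 1,000,000-byte figure is the limit quoted in the post):

```shell
# Placeholder CID; substitute the root CID that 'ipfs add' printed.
CID="bafyMYHASH"

# 'ipfs block stat' prints lines like "Size: 487734"; extract the number.
SIZE=$(ipfs block stat "$CID" | awk '/Size/ {print $2}')

# Blocks should stay below roughly 1 MB so other nodes will transfer them.
if [ "$SIZE" -lt 1000000 ]; then
  echo "OK: block is $SIZE bytes, below the limit"
else
  echo "WARNING: block is $SIZE bytes, above the limit"
fi
```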

What does ipfs-checker say for your CID?

I think you helped me realize that I messed up…
I am running the IPFS node on my local machine’s C drive. I have the folder of images on an external hard drive, and was using PowerShell pointed at that drive to call ipfs add --recursive --progress --cid-version=1 ./images/, which I thought was uploading the folder of images onto my local node.
I just opened the IPFS webui to figure out my multiaddr, and that is when I saw that I don’t even have the folder of images on my node. Which is weird, because I don’t understand what happened when I called ipfs add --recursive --progress --cid-version=1 ./images/ in PowerShell pointed at my external hard drive.
So I am just importing the images to my node via the webui for now and will see what happens when that is done.

ipfs add does add things, but they are not visible in MFS, which is what the IPFS webui shows. You need an additional step like ipfs files cp /ipfs/<cid> /myfiles. See ipfs files --help.
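For example (keeping /myfiles as an arbitrary MFS destination path, and <cid> as the root CID that ipfs add printed):

```shell
# Copy the added content into MFS so the webui's Files tab shows it.
# This copies a reference to the DAG, not the data itself, so it is cheap.
ipfs files cp /ipfs/<cid> /myfiles

# Confirm it now appears in MFS:
ipfs files ls /
```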

btw, this is what I received when using the link you provided:

❌ Could not connect to multiaddr: context deadline exceeded
✔️ Found multiaddrs advertised in the DHT:
✔️ Found multihash advertised in the DHT
❌ There was an error downloading the CID from the peer: could not connect to peer

I looked up the error about not connecting to the multiaddr, which was this:

  • Could not connect to the multiaddr. Machines on the internet cannot talk to your machine. Fix your firewall, add port forwarding, or use a relay.

I don’t understand, though, as I’ve never faced this issue before when uploading things to the IPFS network from my local node, and I haven’t changed anything with my Wi-Fi network.

@hector I appreciate your help so far, and I’m not sure if you might know how I can troubleshoot the issue I’m facing now.
I’ve tried making sure my node is discoverable, tried turning on relay hops, and even tested with my Windows firewall turned off. I’m definitely connected to a lot of peers, and I can see from the webui that I have continual incoming and outgoing traffic on my node. I just don’t understand why I can’t seem to relay data out to the network now, even though I have always done so before.

Being connected to lots of peers means nothing: your node being able to contact other nodes doesn’t mean that other nodes can contact you.

It is very unlikely to have anything to do with the Windows firewall. What is likely happening is an issue with your router’s firewall / NAT.



I see you have a public IP. I guess you just entered a private IP in ipfs-checker (or your UPnP mapping stops working after a while).
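One way to check, assuming a default kubo setup, is to look at the addresses the node actually announces to peers:

```shell
# Print the multiaddrs your node advertises to the network.
ipfs id -f '<addrs>'

# If every address shown is private (127.0.0.1, 10.x.x.x, 192.168.x.x),
# remote peers have nothing public to dial. The default swarm port is 4001,
# so a static TCP/UDP 4001 forward on the router avoids relying on UPnP.
ipfs config Addresses.Swarm
```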

So it did turn out to be my router’s firewall. Once I configured it to allow IPFS through, everything was working like normal again.

Thanks everyone on this forum for saving me once again!