I tried downloading the data after manually connecting to the peerID you listed (`ipfs swarm connect /p2p/QmcfgsJsMtx6qJb74akCw1M24X1zFwgGo11h1cuhwQjtJP`). It took about 23s to get via `ipfs pin add`. Unfortunately, that peer is not advertising your data in the DHT, so it has to be connected to manually (e.g. peering for longer-term connections and `ipfs swarm connect` for shorter-term ones) or sort of bumbled into as mentioned above.
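For the longer-term option, the peering entry in the go-ipfs config would look something like this (assuming go-ipfs 0.6+; the multiaddr below is just a placeholder, swap in the server's real address):

```json
"Peering": {
  "Peers": [
    {
      "ID": "QmcfgsJsMtx6qJb74akCw1M24X1zFwgGo11h1cuhwQjtJP",
      "Addrs": ["/ip4/203.0.113.10/tcp/4001"]
    }
  ]
}
```

With that in place the node will keep trying to maintain a connection to that peer across restarts, instead of relying on a one-off `ipfs swarm connect`.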
Unfortunately, writing the 10k tiny files out to disk, even once the data was cached, took about 40 seconds when I tried it via `ipfs get QmXUUXRSAJeb4u8p4yKHmXN1iAKtAV7jwLHjw35TNm5j`. Untarring data and optimally writing lots of tiny files to disk can be tricky. When I passed the `-a` flag, the data was exported as a tar archive almost instantly; I then manually untarred it with my system utilities, which took around 10 seconds.
I’m not sure how you’re trying to get the data out of go-ipfs, but if you’re using `ipfs get` I’d recommend using the archive flag and the fastest untarring utility your OS provides. At that point it’s no longer an IPFS problem and is between you, your untarring utility, your OS, and your filesystem. If tar utilities are too slow you might need to take a step back and look at your use case and approach.
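Roughly, that looks like the following (the output filename is arbitrary, and the exact tar invocation may differ on your platform):

```sh
# export the whole directory as a single tar archive instead of
# letting go-ipfs write 10k tiny files one by one
ipfs get --archive --output=data.tar QmXUUXRSAJeb4u8p4yKHmXN1iAKtAV7jwLHjw35TNm5j

# unpack it with whatever tar your OS provides
tar -xf data.tar
```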
Having a single server that has and advertises the data in the DHT (i.e. no red X in ipfs-check) should help you out on the discovery times and not require your users to manually peer with a single server. I would highly recommend having your data advertised in the DHT if you can; otherwise, other nodes' ability to find your data is compromised (for example, it may require manual configuration).
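Beyond ipfs-check, a quick spot-check you can run yourself (assuming a reasonably recent go-ipfs) is to ask the DHT who is providing the root CID, and to re-announce it from the server that actually holds the data:

```sh
# list the peers the DHT knows as providers of the root CID
ipfs dht findprovs QmXUUXRSAJeb4u8p4yKHmXN1iAKtAV7jwLHjw35TNm5j

# from the server holding the data: announce it to the DHT now
ipfs dht provide QmXUUXRSAJeb4u8p4yKHmXN1iAKtAV7jwLHjw35TNm5j
```

If `findprovs` comes back empty, other nodes have no way to discover the data short of manually connecting to your server.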
The more nodes that store and advertise your data, the more resilience you have to a single node being out of service, busy, censored, etc.