Are the public gateway timeouts expected for content solely owned by the uploader?

Hi,

From my research on this forum, I can see quite a lot of people facing HTTP timeouts when trying to access their newly uploaded content through the public gateway.

This comment here suggests that the timeouts are expected when the data isn’t mirrored (solely owned by the uploader):

First question: is this confirmed? Is this documented?
I think most first-time IPFS users start their journey with ipfs add [FILE] and straight away expect this file to be served through the public gateway, which won't be the case (in an expected way?).

Running two nodes locally, I can start confirming the above: I add data on NodeA, try to access it through the gateway, and see the HTTP request hang in progress; then I run ipfs get [HASH] on NodeB and the HTTP request resolves instantly. Even an HTTP request from my phone on a different network resolves instantly once a node other than the uploader fetches the data (through the CLI, IPFS Desktop, or even that node's local gateway).
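For reference, the experiment looks roughly like this (a sketch; the CID and filename are placeholders, and ipfs.io stands in for whichever public gateway is used):

```shell
# On NodeA: add a file (CID below is a placeholder)
ipfs add example.bin

# Request it through the public gateway -> hangs / times out
curl -m 60 https://ipfs.io/ipfs/QmExampleCidPlaceholder

# On NodeB: fetch the same content directly over IPFS
ipfs get QmExampleCidPlaceholder

# Retrying the gateway request now resolves almost instantly
curl -m 60 https://ipfs.io/ipfs/QmExampleCidPlaceholder
```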

Inspecting the DHT shows that the provider entry linking the uploader node to the hash of the uploaded data is successfully created. ipfs dht findprovs [FILE_HASH] on any node retrieves the entry as soon as the data is uploaded, while the public gateway is still unable to serve the data.

If the DHT is fine, what is causing the gateway to hang?

Thanks!


The more peers that have the content, the easier it is to find the content. However, in this case, the issue is likely that your peer is behind a NAT and the gateway can’t find/dial you.

IPFS tries to auto-configure NATs to accept inbound connections but this doesn’t always work. For now, you could try enabling AutoRelay (set ipfs config --json Swarm.EnableAutoRelay true).
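The suggested setting, plus a quick check of what the node announces, could look like this (a sketch; restart the daemon afterwards for the config change to take effect):

```shell
# Enable AutoRelay so the node can fall back to relays when it
# cannot be dialed directly
ipfs config --json Swarm.EnableAutoRelay true

# Inspect the addresses the node announces; seeing only private
# (192.168.x.x / 10.x.x.x) addresses usually indicates a NAT problem
ipfs id --format="<addrs>"
```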


My second local node is on the same network (same machine) so a possible NAT in my setup would prevent the gateway from finding it as well, wouldn’t it?

Do you have UPnP activated on your NAT? :thinking:

Additionally, I think you could work around this with static port-forwarding, but I'm not sure how to configure IPFS to announce the public IP in this case, @stebalien? :thinking:

Also, is there a way to announce public DNS entries?

In this case we could add dyndns entries plus static port forwarding, to help traverse the NAT in cases where UPnP isn't available.

@sebastiendan do you have IPv6?

I can check that tomorrow.

My goal is to perform a bunch of stress tests on the IPFS network in order to have a clearer idea of its performance and capabilities.

My projects will most likely involve a very large number of I/O operations over the network with possibly very large files.

Any source/resource available online about IPFS performances?

I’m getting away from my original questions but I’d like to focus on the generic behavior rather than my own setup (which is new to me).

I’ll get the info tomorrow anyway @RubenKelevra.

Thanks guys!

Performance depends mainly on your individual nodes, since connections are made directly, peer to peer.

I would recommend using ZFS underneath the data storage and deactivating 'sync' on it. This allows extremely low write latency, since data doesn't have to be committed to disk before being acknowledged.

ZFS will still maintain a consistent filesystem and might just roll back writes by several seconds on a power outage, which is no issue for an IPFS application.

It's also a good idea to deactivate 'atime'.

ZFS will also guarantee data integrity, so you don't have to activate 'checksum on read' in IPFS. Additionally, it lowers the CPU time spent on integrity checks, since ZFS uses a much more lightweight algorithm: it only cares about bitrot/flipped bits, not integrity against cryptographic attacks.
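The ZFS tuning above could be applied like this (a sketch; 'tank/ipfs' is a placeholder dataset name):

```shell
# Create a dedicated dataset for the IPFS repo (name is a placeholder)
zfs create tank/ipfs

# Disable synchronous writes: much lower write latency, at the cost
# of possibly losing the last few seconds of writes on power loss
zfs set sync=disabled tank/ipfs

# Don't update access times on every read
zfs set atime=off tank/ipfs
```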

On the IPFS side:

If you're writing large 'files' you may want to use the trickle layout instead of the balanced one, provided you're going to consume the 'files' without any random access, e.g. compressed archives (single-threaded), video files, etc.

Random access on trickle-layout files is extremely slow, so when in doubt use balanced.

You can increase the chunk size if you're dealing with large files and don't want to read small chunks of them anyway; the default chunk size is around 256 KiB, and you can increase it up to 1 MiB.

If you're pretty much CPU-bound you might want to use blake2b-256 instead of sha2-256 for your file hashing.
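Putting those suggestions together, an add invocation might look like this (a sketch; the filename is a placeholder):

```shell
# Trickle layout for sequential consumption, 1 MiB chunks,
# and blake2b-256 instead of the default sha2-256
ipfs add --trickle --chunker=size-1048576 --hash=blake2b-256 big-archive.tar.zst
```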

You'll want to disable any relay services, since they might be selected for a connection and are usually pretty slow. So if you have IPv6, or IPv4 with port forwarding, you can safely disable the use of relays.

If you run two or more IPFS clients in the same network, you can either activate mDNS to automatically connect them, or just add the static IPs/DNS entries to the bootstrap entries on both sides. This will lower the delay for an initial connection between both nodes.
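For two nodes on the same LAN, enabling mDNS or connecting them explicitly could look like this (a sketch; the address and peer ID are placeholders):

```shell
# Enable local peer discovery via mDNS (run on both nodes)
ipfs config --json Discovery.MDNS.Enabled true

# Alternatively, connect the peers explicitly
ipfs swarm connect /ip4/192.168.1.20/tcp/4001/p2p/QmPeerIdPlaceholder
```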

If you want to copy a dataset, feel free to use this one:

/ipfs/bafybeibw4t3abehtu4rw3rjlsxpknjr3tz3vvbbd6sce7a3jmva5i32xvm (around 43 GB without the redundancies)

It's just a mirror of the ArchLinux repo; the server holding it is currently not under any load, so the speed should be decent.

The dataset uses sha2-256, so the speed might be CPU bound on your side.

I would suspect that the gateway is just at its limit of open connections, running into the connection limiter. This prevents new data from being fetched directly after the DHT has resolved it.

Note that the gateways are a convenience feature IMHO and should only be used if there's no way to use a local IPFS client or a local HTTP gateway.

IPFS will try to auto-detect this based on IP address observations from peers. However, depending on the port you forward, it might not be able to guess the right port.


We don't use IPv6, but UPnP is on in our router settings.
We also don't currently use port forwarding, as far as I can see.

If you run two or more IPFS clients in the same network, you can either activate mDNS to automatically connect them, or just add the static IPs/DNS entries to the bootstrap entries on both sides. This will lower the delay for an initial connection between both nodes.

My nodes are currently on the same network but that’s only for testing, the end goal is to have global users sharing data between each other.

Note that the gateways are a convenience feature IMHO and should only be used if there's no way to use a local IPFS client or a local HTTP gateway.

I fully agree on that.

The easiest solution would be to get IPv6 working.

Have you checked the router logs/status if UPnP has successfully created dynamic port-forwardings?

IMHO static forwardings make more sense, since UPnP might assign a new port if you restart an IPFS node/daemon. This means the network has to relearn the new port, increasing the impact of such a restart.

From IPFS side, there’s zero difference. :slight_smile:

My nodes are currently on the same network but that’s only for testing, the end goal is to have global users sharing data between each other.

From IPFS side, there’s zero difference. :slight_smile:

What I meant is that my end goal is to have users store/read data using IPFS via our software stack on their own setups.
My project is not a PaaS/SaaS; each user will run the whole software stack and behave as an IPFS node in the background.

Without control over the users' network layer, I don't think it's valuable to test my own network setup any further.

Am I missing something here?

Ah, okay. There are so many different use cases for IPFS that I just assumed the wrong one: »You need to hold and exchange large amounts of data on your servers with IPFS.«

In this case, scrap my earlier recommendations. Just use the plain default settings plus the pubsub stuff (if you need fast IPNS).
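If fast IPNS is what's needed, the pubsub features can be enabled when starting the daemon (a sketch; both were experimental flags in go-ipfs 0.4.x):

```shell
# Enable pubsub and IPNS-over-pubsub (experimental in go-ipfs 0.4.x)
ipfs daemon --enable-pubsub-experiment --enable-namesys-pubsub
```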

I'm using IPFS at home on an LTE connection with double NAT and without UPnP (one carrier-grade NAT, plus one NAT splitting the private LTE IP across multiple devices) and no IPv6. So probably the worst setup possible. Works like a charm. :thinking:

Just to make sure we’re on the same page: You’re using go-ipfs in the latest version 0.4.23?

Yes I am. Most of the time go-ipfs is working perfectly, the issue was only on the public gateway (even though yesterday I noticed a timeout appearing on both).

Right now I’m working on a web app using js-ipfs to benchmark I/O in different use cases.

@sebastiendan alright, just wanted to make sure, since the last version had some fixes.

Which content are you trying to receive? A CID directly, or do you have issues getting an IPNS name resolved?

Also make sure to have these settings on:

"PreferTLS": true,
"QUIC": true,

For the NAT/relay settings, I use the following:

"DisableNatPortMap": false,
"DisableRelay": false,
"EnableAutoNATService": true,
"EnableAutoRelay": true,
"EnableRelayHop": false

@RubenKelevra Yes I’m testing fetching content through its cid directly.

I've just set my two local nodes' config as you listed, and I'm actually facing the same timeout I mentioned in my previous reply.

It seems the whole IPFS network (not only the public gateway) sometimes has performance issues:

  • ipfs2 get QmNNLcUeJ6uQuu7aKjaVWE8S45BsYSZ5TJxxgTTNWNvryy (ipfs2 refers to my second node) hangs right now.
  • ipfs dht findprovs QmNNLcUeJ6uQuu7aKjaVWE8S45BsYSZ5TJxxgTTNWNvryy successfully returns my first node.
  • ipfs2 dht findprovs QmNNLcUeJ6uQuu7aKjaVWE8S45BsYSZ5TJxxgTTNWNvryy hangs as well.

This is the same situation I noticed yesterday: a network issue on a layer apparently below the one where the public gateway timeouts happen.

EDIT: by the time I finished writing this reply, the public gateway was able to resolve the HTTP request for that content while my second node still hangs…

FYI, reverting my second node's config to the initial one makes it able to get the content and find the DHT provider entry again.

The DHT is definitely facing performance issues at the moment. The current 2020 H1 roadmap, and the entire focus of the next several go-ipfs releases, is fixing this. See https://github.com/ipfs/roadmap#2020-priority.

go-ipfs 0.5.0 is slated to land by the end of March and will have a rewritten DHT.

Different test: this hash QmPct44ed5f8XU6ZE2aoDg3vsZ6YivR482wjoYu95jQbau is related to data uploaded by my Tokyo node.

It’s available through the public gateway but not from my Germany node.

EDIT: nevermind this

Check if your German node is connected to public nodes. Maybe your node is not accessible to other nodes.

Regards
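The connectivity check suggested above could look like this (a sketch):

```shell
# List currently connected peers; an empty list means the node
# has no connections into the public network
ipfs swarm peers

# Check whether the node announces any public (non-private-range) addresses
ipfs id --format="<addrs>"
```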

Hey @josselinchevalay, sorry, I removed all my posts except the last one because I reached the limit of operations, went to do something else, and forgot to come back here…

My node in Germany had a pre-existing configuration that was causing my issue.

Thanks anyway :+1: