Getting auto-relay to work

I am currently trying to get the auto-relay feature working so I can transfer files between two NATed nodes.

I add a file with ipfs add on a node behind NAT, get the hash, and try to ipfs cat Qm... on another node, but it never resolves. ipfs dht findprovs Qm... does not resolve either. I do see both nodes connected to other nodes, but I don’t see any relay nodes in ipfs swarm peers (relays occasionally come and go, but never the same one on both nodes, which I think is a prerequisite for auto-relay to work).

Attempting to fetch from a public gateway (e.g. https://cloudflare-ipfs.com/ipfs/Qm...) also does not resolve, but I suspect that’s because my node isn’t connected to a relay node.

I have made some configuration changes that may affect how this works (they are meant to minimize impact on the local network and to avoid firewall blocks/notices):

Swarm.ConnMgr.HighWater = 160
Swarm.ConnMgr.LowWater = 80
Swarm.EnableAutoRelay = true
Discovery.MDNS.Enabled = false
Addresses.Swarm = ["/ip4/127.0.0.1/tcp/4001"]
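For reference, this is how I applied the changes with the go-ipfs CLI (the --json flag is needed for non-string values like numbers, booleans, and arrays):

```shell
# Apply the config changes listed above; --json lets ipfs config
# accept JSON-typed values rather than treating them as strings.
ipfs config --json Swarm.ConnMgr.HighWater 160
ipfs config --json Swarm.ConnMgr.LowWater 80
ipfs config --json Swarm.EnableAutoRelay true
ipfs config --json Discovery.MDNS.Enabled false
ipfs config --json Addresses.Swarm '["/ip4/127.0.0.1/tcp/4001"]'
# Restart the daemon afterwards for the changes to take effect.
```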

Why are you using 127.0.0.1 instead of 0.0.0.0 for your swarm port? I think this means your node’s swarm port will only be reachable from other IPFS nodes on localhost (likely none unless you set up multiple local nodes on the same machine).

My external ports are closed anyway, so there’s no point in listening on an ip/port that is going to both a) be blocked and b) notify the network admin through the firewall. As far as I understand, this still allows the node to make outgoing connections, just not accept incoming ones, which is the case behind NAT anyway.

I think you’re right.

Have you modified the Addresses.Announce section in the config?

edit: you can ignore this question. I think I got myself confused and I don’t think this would affect auto-relay after all.

I think I’ll need to do more testing to try to recreate your issue.

Are these the only changes to the default config? If not, could you share your ipfs config show output?

These are the only changes.

After looking at this some more, I don’t think it’s going to be feasible right now for me to recreate the strict network conditions and the NAT your node is behind.

How are you using the output of ipfs swarm peers to tell which of the nodes you’re connected to are relays and which aren’t? Relay connections should be somewhat protected from being killed by the connection manager, so it’s odd that you’d see them come and go (though I’m not sure how you’re identifying them).

I might be missing something in this thread, but I’m not sure that two NATed nodes should need to be connected to the same relay in order for discovery to work. Besides it taking up to a couple of minutes for NAT detection to figure out that relay connections are required, I’m also not sure yet what would prevent a NATed node from successfully advertising relay addresses.

I’m assuming that ipfs id never shows any public relay addresses for your NATed node, even after several minutes?
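One way to check without eyeballing the full ipfs id dump, assuming a POSIX shell and that advertised relay addresses contain the /p2p-circuit component (the sample addresses below are made up for illustration):

```shell
# Hypothetical sample of `ipfs id --format="<addrs>\n"` output on a NATed
# node; the relay IP and peer ID here are placeholders, not real values.
addrs='/ip4/127.0.0.1/tcp/4001
/ip4/147.75.80.110/tcp/4001/p2p/QmRelayPeer/p2p-circuit'

# A circuit relay address is one containing the /p2p-circuit component;
# count how many of the advertised addresses are relay addresses.
echo "$addrs" | grep -c 'p2p-circuit'
# prints: 1
```

Against a live daemon you’d pipe `ipfs id --format="<addrs>\n"` into the same grep instead of using the sample variable.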

I think I’ve restricted my connection enough that my node shouldn’t be dialable by external nodes. However, it’s hit or miss whether any relay addresses show up in ipfs id, and in some cases it just takes a long time for public relay addresses to appear among my node’s addresses.

Looking at the output of ipfs --debug daemon 2>&1 | grep -E "autonat|autorelay", I think what might be happening is that AutoNAT correctly detects that my node has bad connectivity and I am connecting to relays (some connection attempts fail), but it may be taking a while to connect to a relay that has AutoHop enabled (?).

In one test it took maybe 30 minutes before a public relay address started being advertised for my node. It looks like there are some AutoNAT/AutoRelay improvements coming in v0.4.20, so perhaps this experimental feature will improve in the next release.