IPFS, NAT and k8s

Hi everyone,

I have a k8s cluster running several pods, each with an IPFS node inside. To peer explicitly with a single IPFS container from outside the cluster, we settled on a combination of the node's external IP and the service's NodePort, along with some rules to keep these pods from being rescheduled on restart. Using this information we have successfully peered with the node.

Which looks something like this:

$ ks describe pod ipfs-1 | grep Node:
Node: ip-192-xyz…us-west-2.compute.internal/192-xyz

$ k describe node ip-192-xyz…us-west-2.compute.internal | grep ExternalIP:
ExternalIP: 44.222.33.555

$ ks get svc | grep ipfs-1
ipfs-1   NodePort   10.100.180.231   4001:32639/TCP,5001:32058/TCP   80m
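For reference, a Service along these lines produces output like the above. This is a sketch, not our exact manifest: the names and label selector are illustrative, and the key point is pinning nodePort so the externally reachable port stays stable across restarts:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ipfs-1
spec:
  type: NodePort
  selector:
    app: ipfs-1            # must match the IPFS pod's labels
  ports:
    - name: swarm
      port: 4001           # IPFS swarm port inside the cluster
      targetPort: 4001
      nodePort: 32639      # fixed, so the external multiaddress stays stable
    - name: api
      port: 5001
      targetPort: 5001
      nodePort: 32058
```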

Putting it all together as /ip4/<ExternalIP>/tcp/<NodePort>/ipfs/<PeerID>:

/ip4/44.222.33.555/tcp/32639/ipfs/QmVyKLpva…

Peering with the above multiaddress from outside the cluster works perfectly.

However, propagation to the public gateways is still an issue, since the addresses announced by 'ipfs id' only contain internal values. Explicit attempts to call 'ipfs dht provide <CID>' are still not propagated externally.

Example output of 'ipfs id':
"/ip4/127.0.0.1/tcp/4001/ipfs/QmVyKLpva…",
"/ip4/192.168.3.7/tcp/4001/ipfs/QmVyKLpva…"

Is there a recommended approach for this specific scenario?

You could try setting your "Announce" address to "/ip4/44.222.33.555/tcp/32639/ipfs/QmVyKLpva…" in the Addresses section of the IPFS config file.

Thanks for the help Jim! This actually worked perfectly for what we needed to do!

Minor note: the Announce config doesn't expect the peer ID. What worked was:
"Announce": ["/ip4/44.222.33.555/tcp/32639"]
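For anyone else landing here, this is roughly what the Addresses section of the IPFS config file looks like with the announce address set. The Swarm, API, and Gateway entries shown are the go-ipfs defaults, and the IP/port come from the example above; the daemon needs a restart to pick up the change:

```json
{
  "Addresses": {
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip6/::/tcp/4001"
    ],
    "Announce": [
      "/ip4/44.222.33.555/tcp/32639"
    ],
    "NoAnnounce": [],
    "API": "/ip4/127.0.0.1/tcp/5001",
    "Gateway": "/ip4/127.0.0.1/tcp/8080"
  }
}
```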

I’m trying something similar, but running into an error about “failed to negotiate security protocol”.

First I tried using a GCP LoadBalancer service in order to preserve the 4001 port, and updated the Announce address with the LoadBalancer address:

/ # ipfs swarm addrs local
/ip4/35.223.117.213/tcp/4001

Then, from another node:
$ ipfs dht findpeer Qmb4n3WteiFydckcw8bCpgmi7xXU6F8hweDddUaxwJXeau
/ip4/35.223.117.213/tcp/4001

$ ipfs swarm connect /ip4/35.223.117.213/tcp/4001/ipfs/Qmb4n3WteiFydckcw8bCpgmi7xXU6F8hweDddUaxwJXeau
Error: connect Qmb4n3WteiFydckcw8bCpgmi7xXU6F8hweDddUaxwJXeau failure: failed to dial : all dials failed

  • [/ip4/35.223.117.213/tcp/4001] failed to negotiate security protocol: read tcp4 165.22.38.112:4001->35.223.117.213:4001: read: connection reset by peer
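For context, the LoadBalancer Service was roughly this (the name and label selector are illustrative, not my exact manifest); GCP assigns it the external IP 35.223.117.213 used in the Announce address above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ipfs-swarm
spec:
  type: LoadBalancer
  selector:
    app: ipfs              # must match the IPFS pod's labels
  ports:
    - name: swarm
      port: 4001           # keeps the default swarm port externally
      targetPort: 4001
```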

So I switched to a NodePort and updated the Announce config:

/ # ipfs swarm addrs local
/ip4/104.197.190.12/tcp/30641

Then, from another node:
$ ipfs dht findpeer Qmb4n3WteiFydckcw8bCpgmi7xXU6F8hweDddUaxwJXeau
/ip4/104.197.190.12/tcp/30641

$ ipfs swarm connect /ip4/104.197.190.12/tcp/30641/ipfs/Qmb4n3WteiFydckcw8bCpgmi7xXU6F8hweDddUaxwJXeau
Error: connect Qmb4n3WteiFydckcw8bCpgmi7xXU6F8hweDddUaxwJXeau failure: failed to dial : all dials failed

  • [/ip4/104.197.190.12/tcp/30641] failed to negotiate security protocol: read tcp4 165.22.38.112:4001->104.197.190.12:30641: read: connection reset by peer

Any idea what’s failing to negotiate and why my kubernetes node is resetting the connection?

Thanks!
Ben