Using IPFS over a SOCKS proxy

From @mateon1 on Mon Oct 03 2016 12:51:03 GMT+0000 (UTC)

Recently I’ve been on a network where IPFS couldn’t connect to anything, because of aggressive outgoing port filtering and an inability to punch through the NAT.
The network did allow SSH (port 22 wasn’t filtered), so I’m wondering whether it’s possible to run IPFS over a SOCKS proxy, with an additional reverse-forwarded port for incoming connections.

An example solution would be:

```
ssh -R4001:0.0.0.0:4001 -D1080 user@remote
ipfs daemon --with-socks=localhost:1080
```

and

```
ipfs config Swarm.AdditionalAdvertised '["/ip4/remote/tcp/4001"]'
# and
ipfs swarm advertise /ip4/remote/tcp/4001
# (similarly to how `ipfs swarm filters` works)
```

Something along those lines would send all traffic through the localhost:1080 SOCKS proxy, and advertise /ip4/remote/tcp/4001/ipfs/QmPeerID to the swarm in addition to the regular addresses.
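For illustration, the advertised address above is a multiaddr, readable as a chain of protocol/value pairs. A toy breakdown (not the real go-multiaddr parser; the IP is a placeholder for the remote host):

```python
# Toy multiaddr breakdown -- for illustration only, not the real
# go-multiaddr parser. 203.0.113.7 stands in for the remote host's IP.
def parse_multiaddr(addr):
    """Split /proto/value/... into (protocol, value) pairs."""
    parts = addr.strip("/").split("/")
    return list(zip(parts[0::2], parts[1::2]))

print(parse_multiaddr("/ip4/203.0.113.7/tcp/4001/ipfs/QmPeerID"))
# [('ip4', '203.0.113.7'), ('tcp', '4001'), ('ipfs', 'QmPeerID')]
```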


Copied from original issue: https://github.com/ipfs/faq/issues/185

From @ivar on Sun Oct 23 2016 13:35:00 GMT+0000 (UTC)

This would be interesting to know, as it might facilitate anonymization by running IPFS over Tor.

From @jbenet on Mon Nov 14 2016 18:27:46 GMT+0000 (UTC)

- Since IPFS is just opening and using ports, it is definitely configurable to use proxies or virtual networks with your OS networking stack.
  - I recommend the VPN / cjdns approach – where it’s not a single proxy, but all traffic is routed over a special network.
  - It may or may not make sense for go-ipfs to follow proxy settings from a config – not sure.
  - I like the Swarm.AdditionalAdvertised idea; something like that makes sense. If you’d like to propose it, prototype the whole feature in https://github.com/ipfs/go-ipfs
    - Just an issue in text is fine – just how it would work in full. It can be tweaked and refined once it’s being considered.
  - There will be native Tor support.

It would be great if the ipfs daemon could read a list of pre-existing interfaces, like tun0, tun1, enp4s2… (tun0 and tun1 might be pre-existing OpenVPN connections), and:

- optionally create a SOCKS5 tunnel through some of them, given proxyServerNameOrIP and user+password
- bind to each interface, or to the newly created SOCKS5 tunnel that goes through it
- apply load balancing + failover (I have no idea whether the DHT should run through all interfaces/tunnels, or whether some of them should only serve if/when necessary/convenient for data transfer).
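The failover part of the wish list above could be sketched like this. Everything here is hypothetical: no such feature exists in go-ipfs, and the route format and names are invented for illustration.

```python
# Hypothetical failover across several proxy routes -- a sketch of the
# load-balance + failover idea above, NOT an existing go-ipfs feature.
def dial_with_failover(routes, dial):
    """Try each route in order; return the first successful connection.

    routes: list of route descriptors (e.g. dicts with a "proxy" entry)
    dial:   callable(route) -> connection, raising OSError on failure
    """
    last_err = None
    for route in routes:
        try:
            return dial(route)
        except OSError as err:
            last_err = err  # remember the failure, try the next route
    raise last_err or OSError("no routes configured")

# Simulated dial: the first proxy is down, the second works.
def fake_dial(route):
    if route["proxy"] == "vpn1.example:1080":
        raise OSError("connection refused")
    return f"connected via {route['proxy']}"

routes = [{"proxy": "vpn1.example:1080"}, {"proxy": "vpn2.example:1080"}]
print(dial_with_failover(routes, fake_dial))
# connected via vpn2.example:1080
```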

The list template might be:

```
interface1, optional SOCKS5 proxyServerNameOrIP, loginuser, loginpassword
interface2, optional SOCKS5 proxyServerNameOrIP, loginuser, loginpassword
interface3, optional SOCKS5 proxyServerNameOrIP, loginuser, loginpassword
```

EDIT: or better

```
interface1, optional SOCKS5 proxyServerNameOrIP, loginuser, loginpassword
interface1, optional SOCKS5 proxyServerNameOrIP, loginuser, loginpassword
interface1, optional SOCKS5 proxyServerNameOrIP, loginuser, loginpassword
interface2, optional SOCKS5 proxyServerNameOrIP, loginuser, loginpassword
interface3, optional SOCKS5 proxyServerNameOrIP, loginuser, loginpassword
```

because that would allow, for instance, creating several SOCKS5 tunnels from the same Ethernet port (I have much more bandwidth than any single server of my VPN service allows, but the service does allow simultaneous connections to several servers).

Of course, some lines could carry only the first element.
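Parsing such a list, with the trailing fields optional, could look like this (the list format itself is hypothetical, as are the field names):

```python
# Sketch of parsing the proposed route list: each line is
# "interface[, proxy[, user[, password]]]" -- the format is hypothetical.
def parse_route_line(line):
    fields = [f.strip() for f in line.split(",")]
    keys = ("interface", "proxy", "user", "password")
    # zip() stops at the shorter sequence, so missing fields are simply absent
    return dict(zip(keys, fields))

print(parse_route_line("tun0, vpn1.example, alice@mail.example, s3cret"))
print(parse_route_line("enp4s2"))  # interface only, no proxy
```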

In the meantime, I’ve tried to figure out how to connect the ipfs daemon through a SOCKS5 tunnel using PuTTY and a proxifier called tsocket, but I haven’t managed it yet.

Instead of dynamic port forwarding, static remote port forwarding might be the way to go: for port 4001, or for ports 4001/tcp, 4002/udp and 8081/tcp (and possibly 8080/tcp).

@flyingzumwalt If you were able to solve this, could you please share how?

What I need is to connect to a SOCKS5 proxy server on port 1080, authenticating with a user name and password (no SSH keys), and have all ipfs daemon traffic go through it, the same way qBittorrent does with a SOCKS5 proxy server.

The authentication user name is an e-mail address whose domain is not the same as the proxy’s (which I can reference through its fixed IP). That poses no problem with qBittorrent.
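For what it’s worth, SOCKS5 username/password authentication (the RFC 1928 greeting plus the RFC 1929 sub-negotiation) treats the user name as an opaque byte string of up to 255 bytes, so an e-mail address is a perfectly legal login. A sketch of the wire format:

```python
# What a SOCKS5 username/password login looks like on the wire
# (RFC 1928 greeting + RFC 1929 sub-negotiation). The username is an
# opaque byte string of up to 255 bytes, so an e-mail address is valid.
def socks5_greeting():
    # VER=5, NMETHODS=1, METHOD=0x02 (username/password)
    return b"\x05\x01\x02"

def socks5_userpass(user, password):
    u, p = user.encode(), password.encode()
    assert len(u) <= 255 and len(p) <= 255
    # VER=1, ULEN, UNAME, PLEN, PASSWD (RFC 1929)
    return b"\x01" + bytes([len(u)]) + u + bytes([len(p)]) + p

msg = socks5_userpass("alice@mail.example", "s3cret")
print(msg[:2])  # b'\x01\x12' -- 0x12 == 18, the username length
```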

I’d prefer the data tunnel not to be encrypted, to avoid adding latency and losing bandwidth.


Testing through OpenVPN anyway:

Running `ipfs id`, I can see that the last listed IP address is the VPN server’s.
However, when accessing through some gateways the small JPEG and MP4 files (260 kB – 22 MB) that I’ve added to this node, the results vary greatly and don’t seem to depend much on file size. Sometimes (rarely, alas) the file downloads quickly and completely from the gateway after 5–20 seconds of discovery, but more often the transfer hangs and restarts many seconds later, or never at all.

Could it be related to problems in listening to port 4001?

```
$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
4001/tcp on tun0           ALLOW IN    Anywhere                   # ipfs daemon
4001/tcp                   ALLOW IN    Anywhere                   # ipfs daemon
4001/tcp (v6) on tun0      ALLOW IN    Anywhere (v6)              # ipfs daemon
4001/tcp (v6)              ALLOW IN    Anywhere (v6)              # ipfs daemon
```

I have no better results when disabling the firewall completely.

```
$ ipfs swarm peers | wc -l
550
```

I see 750–880 peers (much faster) when not going through the VPN.
In that case, `ipfs id` shows my public IP, but transfers from my node seem even more problematic. I find it thrilling, however, that they aren’t completely dead; maybe some chunks of data get through because other nodes have "EnableRelayHop": true in their config?


Some non-essential details:

I’ve gone through plenty of articles and tried ssh, nc, ncat, tsocks, two versions of proxychains, and socksify, without success so far (I haven’t given up yet, but this is driving me nuts).

Alas, after a big effort to switch to an ISP with higher bandwidth (nothing fancy anyway) and lower latency, and to have them bring an optical fiber cable here, I learned that as of three months ago they no longer allow opening any ports on their modems for incoming connections. I guess they had suffered some hacks… I insisted that the port number only means anything when the packet reaches its destination, to no avail.
(Damn, my previous ISP was slower, but they had put my modem in bridge mode.)

I need to solve this, not only because I’d like to run an IPFS node myself, but also because I’d like to recruit a few more people to do so, and I guess some of them might face the same situation.

The next step, ideally, would be what I described in my previous post: connecting one ipfs daemon instance to several SOCKS5 proxy servers simultaneously, with load balancing and failover. But being able to connect to just one would already be great.

Thanks for any advice!

Since the main IPFS implementation is written in Go, you should never try torsocks on it. All it gives you is a false sense of security.
Go programs cannot use the LD_PRELOAD trick employed by proxifiers, since they statically link most things. I tried it, and only understood the failure after reading about that particular peculiarity of Go programs. No one had ever told me that IPFS and torsocks don’t get along.
I’ve tried to set up manual tunnels, with listening ncats pointed at known swarm addresses over Tor, to no avail. IPFS just isn’t built this way: every time it connects to a peer, it falls back to plain connections.
I’ve been lurking around issue #37 on GitHub for three years with no progress taking place, and I feel that Tor or I2P support is a non-goal for the IPFS developers; they just won’t ship it broadly. A Tor transport has existed for a long time; an I2P transport exists too. Neither is included, nor available as a separate offshoot. Web 3.0 does not need Tor; all it needs is Cloudflare.
Consider that there won’t be IPFS over non-standard transports anytime soon.