What is happening when I connect to the public gateway?

What is happening when I go to ipfs.io, or when I look up a site on the public gateway (e.g. ipfs.io/ipfs/HASH)?

For instance, when I run traceroute I get a completely different result each time:

# traceroute ipfs.io
traceroute to ipfs.io (147.135.130.181), 30 hops max, 60 byte packets
 1  198.199.64.253 (198.199.64.253)  0.823 ms 197.159.62.154 (197.159.62.154)  0.345 ms  0.340 ms
 2  138.197.248.106 (138.197.248.106)  0.560 ms 138.197.248.104 (138.197.248.104)  0.506 ms 138.197.248.80 (138.197.248.80)  0.522 ms
 3  * 138.197.244.13 (138.197.244.13)  0.649 ms 138.197.244.15 (138.197.244.15)  0.496 ms
 4  nyiix.nyc.ny.us (198.32.160.77)  0.634 ms be100-1298.ldn-5-a9.uk.eu (192.99.146.132)  71.579 ms nyiix.nyc.ny.us (198.32.160.77)  0.641 ms
 5  213.251.128.64 (213.251.128.64)  74.028 ms be8.nwk-20-6k.nj.us (178.32.135.59)  0.989 ms 213.251.128.64 (213.251.128.64)  74.016 ms
 6  * be100-5.nwk-1-a9.nj.us (192.99.146.94)  1.243 ms *
 7  * * *
 8  * * *
 9  * * *
10  * chappy.i.ipfs.io (147.135.130.181)  73.259 ms  73.266 ms

And again:

# traceroute ipfs.io
traceroute to ipfs.io (217.182.195.23), 30 hops max, 60 byte packets
 1  197.159.62.154 (197.159.62.154)  0.257 ms 198.199.64.253 (198.199.64.253)  0.207 ms 197.159.62.154 (197.159.62.154)  0.204 ms
 2  138.197.248.104 (138.197.248.104)  0.418 ms 138.197.248.94 (138.197.248.94)  0.416 ms 138.197.248.108 (138.197.248.108)  0.411 ms
 3  138.197.244.7 (138.197.244.7)  0.681 ms * *
 4  be100-1298.ldn-5-a9.uk.eu (192.99.146.132)  69.852 ms  69.851 ms nyiix.nyc.ny.us (198.32.160.77)  0.586 ms
 5  be8.nwk-20-6k.nj.us (178.32.135.59)  1.291 ms 213.251.128.64 (213.251.128.64)  75.130 ms be100-1295.ldn-1-a9.uk.eu (192.99.146.126)  69.385 ms
 6  be100-5.nwk-5-a9.nj.us (192.99.146.96)  1.182 ms  1.490 ms *
 7  be100-1298.ldn-5-a9.uk.eu (192.99.146.132)  70.144 ms * *
 8  * * *
 9  * * *
10  scrappy.i.ipfs.io (217.182.195.23)  76.107 ms  76.130 ms *

Are these nodes all public gateways? Or are they nodes of people running the ipfs daemon? Why is the final destination different; shouldn’t it go to the same place?

Those are both our public gateways. You’re getting connected to different physical machines because we’re using something called round-robin DNS for load balancing.
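The effect can be sketched in Python: a round-robin DNS server answers successive queries with the next address in its record set, so repeated lookups (and hence traceroutes) land on different machines. This is a simplified sketch, not how the real resolver works; the two addresses below are just the ones that appeared in the traceroutes above, and the real record set for ipfs.io changes over time.

```python
import itertools

# Addresses seen in the two traceroutes above; the real A records
# served for ipfs.io change over time.
A_RECORDS = ["147.135.130.181", "217.182.195.23"]

def round_robin_resolver(records):
    """Yield one address per lookup, cycling through the record set,
    which is roughly what a round-robin DNS server does."""
    return itertools.cycle(records)

lookups = round_robin_resolver(A_RECORDS)
print(next(lookups))  # first query  -> 147.135.130.181 (chappy)
print(next(lookups))  # second query -> 217.182.195.23 (scrappy)
print(next(lookups))  # third query  -> 147.135.130.181 again
```

Each client simply uses whichever address its resolver hands back, so the load spreads across the machines without the client doing anything special.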


Thanks, that’s what I thought. Do you consider having these end-points a risk to the public access of IPFS? I totally understand that it’s not really a priority, since IPFS is optimized for peer-to-peer. I’m wondering because I’d like to share an IPFS website with non-IPFS people via the public gateway, and I’d like to have some idea of the public gateway’s resilience in the case that these two machines go down.

Do you consider having these end-points a risk to the public access of IPFS?

Yes. They’re a stop-gap measure to bridge the non-distributed web and the distributed web. It’s not that bad because you can always use your own gateway (there’s a browser addon that will substitute known gateway domains for one you specify) but it’s definitely not how we envision people using IPFS.
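The addon’s substitution boils down to a URL rewrite. A minimal sketch in Python, assuming a small hard-coded list of known gateway hosts (the real addon maintains its own list) and the default local gateway address of an ipfs daemon:

```python
from urllib.parse import urlparse, urlunparse

# Assumed list of known public gateway hosts; the real addon keeps its own.
KNOWN_GATEWAYS = {"ipfs.io", "gateway.ipfs.io"}
# Default gateway address of a locally running ipfs daemon.
LOCAL_GATEWAY = "127.0.0.1:8080"

def rewrite_to_local(url):
    """If url points at a known public gateway path, point it at the
    local gateway instead; otherwise leave it unchanged."""
    parts = urlparse(url)
    if parts.netloc in KNOWN_GATEWAYS and parts.path.startswith(("/ipfs/", "/ipns/")):
        parts = parts._replace(scheme="http", netloc=LOCAL_GATEWAY)
    return urlunparse(parts)

print(rewrite_to_local("https://ipfs.io/ipfs/QmHASH"))
# -> http://127.0.0.1:8080/ipfs/QmHASH
print(rewrite_to_local("https://example.com/page"))
# -> https://example.com/page (untouched)
```

Because the content is addressed by hash, the rewritten URL fetches exactly the same data; only the machine serving it changes.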

We’ve discussed (and will likely build) a JavaScript-based gateway that loads js-ipfs (our JavaScript IPFS implementation) in the browser and fetches files through it. That way, users can load the webapp once and keep using it even if they can’t connect to any of our gateways.


If you don’t want to run your own instance of IPFS but still want a resilient public gateway, and want to avoid overloading any one server’s bandwidth, you can use this code: https://github.com/VanVan/ipfsProxyHTTP.
It redirects users to a random verified working gateway, dispatching requests across several servers.
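The core idea, picking a random gateway from a pool already verified as working and building the redirect URL, can be sketched like this. The gateway list and helper name are illustrative assumptions, not ipfsProxyHTTP’s actual code:

```python
import random

# Assumed pool of gateways already verified as reachable; ipfsProxyHTTP
# maintains and refreshes such a list itself.
VERIFIED_GATEWAYS = [
    "https://ipfs.io",
    "https://gateway.example.org",  # placeholder entry
    "https://another.example.net",  # placeholder entry
]

def redirect_target(request_path, gateways, rng=random):
    """Pick a random verified gateway and build the redirect URL
    for an incoming /ipfs/... or /ipns/... request path."""
    base = rng.choice(gateways)
    return base.rstrip("/") + request_path

# Seeded RNG so the sketch is reproducible; a real proxy would not seed it.
rng = random.Random(0)
print(redirect_target("/ipfs/QmHASH", VERIFIED_GATEWAYS, rng))
```

Since every gateway serves the same content-addressed data, spreading redirects at random across the pool balances load without any coordination between the servers.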