Using ipfs for p2p distribution of large files

I’m asking because the Leela Chess Zero project has a distributed model where neural network files are being distributed to thousands of contributors by direct download, and this process could perhaps be improved.

Currently, the files are about 70 MB.

Basically my question is: what steps would be involved in making this work? Server side and client side.

I’m not part of the development team directly but I said I would look into the feasibility of this.

The situation is that there is a client running training data generation and once in a while there is a new version of the network file, which it then downloads from the server.

I’m thinking that these neural network files could be put in IPFS and the clients could run an IPFS daemon to share the load of distributing the files.

The client itself is written in Go.

Perhaps the neural network files could be put under an IPNS name, with the current hashes of the networks as file names, and switching would be as easy as using a different URL? The client downloads the files via the same links that are published for them.

This isn’t a comprehensive response, but I’m just sharing my initial thoughts.

I think the client side and server side would be pretty similar. Both would need to initialize IPFS repositories as needed and (as you already thought) have go-ipfs daemons running if you wanted the files to be distributed in a p2p manner.
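As a rough sketch (assuming the stock go-ipfs CLI; exact flags may vary by version), the one-time setup on either side would look something like:

```shell
# Create a local IPFS repository (one-time, per machine).
ipfs init

# Run the daemon so the node joins the swarm and can
# serve blocks to other peers.
ipfs daemon &
```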

In order to publish updates you would just add the directory with the files to IPFS on the “server”, then publish the hash to your IPNS address that you share with the clients.
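Sketching that with the go-ipfs CLI (the directory name here is just an example):

```shell
# Add the directory of network files; -r recurses, -Q prints only
# the final root hash of the directory.
HASH=$(ipfs add -r -Q networks/)

# Publish that hash under this node's IPNS name (its peer ID).
ipfs name publish "/ipfs/$HASH"
```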

If you wanted to just use a domain name to point to the IPNS address (or even the latest IPFS hash), you could create a dnslink record for a subdomain containing the IPNS or IPFS address. That way you wouldn’t need to tell clients about the IPNS address, though you certainly could just use the IPNS address.
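A dnslink is just a DNS TXT record on a `_dnslink.` subdomain; for example (domain and peer ID are placeholders):

```
_dnslink.networks.example.com.  IN  TXT  "dnslink=/ipns/<your-peer-id>"
```

Clients would then fetch via `/ipns/networks.example.com` instead of a raw IPNS address.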

Clients could pull the latest files by polling the latest hash periodically to see when it changes and pinning it to their node when it does change. After the new hash is pinned, then optionally unpin the previous hash (or keep some number of previous versions pinned). To prevent the clients from running out of space eventually, garbage collection would need to be run occasionally on the IPFS repo.
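A sketch of that update cycle on a client, again assuming the go-ipfs CLI (`<name>` stands in for whatever IPNS name or dnslink you publish under):

```shell
# Resolve the currently published hash.
NEW=$(ipfs name resolve "/ipns/<name>")

# If it changed since last time, pin the new version...
ipfs pin add "$NEW"

# ...optionally unpin the previous one...
ipfs pin rm "$OLD"

# ...and reclaim space from unpinned blocks.
ipfs repo gc
```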

Thanks for the reply! I experimented a bit with IPNS but found it to be very slow, so I don’t think relying on that would be optimal. It could be useful for keeping an up-to-date directory of all the hashes though, since people will sometimes want to download them all.

However, I don’t think you need IPNS for the simple scheme I’m imagining, because clients will need to contact the server anyway to get the ids of the nets to download (if any) and a task to perform on them, and that call could easily provide the corresponding IPFS hashes too. IPNS would only be needed if the server goes down for some reason, and then nothing works anyway, since the clients are uploading training games to it.
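For instance, the existing task response could carry the hash alongside the net id; something like the following, where the field names are made up for illustration:

```json
{
  "network_id": "<sha256-of-net>",
  "ipfs_hash": "<ipfs-hash-of-net>",
  "task": "selfplay"
}
```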

These files would be pretty specific to the distribution (nobody else would know the IPFS hashes anyway), so it would make sense to initialize the swarm from the server in some way, so that the clients can quickly find each other. Or is that already taken care of by just adding the server’s IPFS daemon to the bootstrap list?

Since clients only need the last few nets, it makes sense for them to garbage collect (at least have that be an option), although I think many people will also want to keep the data live to help others with downloading and take load off the server.

Yeah, there are currently some performance issues with IPNS that I personally haven’t found solutions to (maybe someone has). There was an option added in v0.4.14 that was supposed to make IPNS faster, but it still seems to be slow for me. The only reason I still mentioned IPNS in its current state is that I had (maybe incorrectly) assumed that a 1 or 2 minute wait to get the most recent hash would be tolerable for clients.

I didn’t realize there was already server to client communication happening that would continue even if they were using IPFS, so it definitely sounds like you already have an existing channel in place that would work for sharing the latest hash.

Any node can act as a bootstrap node, so adding your server’s address as a bootstrap node for the client nodes should work just fine for helping the clients find each other.
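Concretely, that would be something like the following on each client (address and peer ID are placeholders):

```shell
# Add the project's server as an extra bootstrap peer so clients
# discover the swarm, and each other, through it quickly.
ipfs bootstrap add /ip4/<server-ip>/tcp/4001/ipfs/<server-peer-id>
```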