Large files long-term storage

Is there a way to “plug” Minio in as a “cache” for IPFS-cluster long-term (very low access frequency) big-data storage?

While Git LFS (git-lfs.github.com) and other free/decentralized storage mechanisms are appealing, they ultimately don’t scale to terabyte-sized goals such as the digital preservation of satellite imagery. Minio as a cache system for IPFS-cluster seems like a good solution.

Yes, it is; I do this with my project Temporal. This is how we interface with minio, and here is an example of taking data out of minio and putting it into IPFS. You would likely have to modify go-ipfs, or write your own code, to implement a minio S3 datastore within the main ipfs daemon, as I don’t believe that is native functionality. However, I believe the go-ds-s3 library adheres to the datastore interface, so in theory you should be able to fork the go-ipfs codebase and swap out the default datastore for an S3-backed one that connects to minio.
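This isn’t Temporal’s actual code, just a minimal sketch of the minio → IPFS direction, assuming the minio-go client and the go-ipfs-api HTTP client; the endpoint, credentials, bucket, and object names are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"log"

	shell "github.com/ipfs/go-ipfs-api"
	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()

	// Connect to a local minio instance (endpoint and credentials are placeholders).
	mc, err := minio.New("localhost:9000", &minio.Options{
		Creds:  credentials.NewStaticV4("MINIO_ACCESS_KEY", "MINIO_SECRET_KEY", ""),
		Secure: false,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Stream the object out of minio.
	obj, err := mc.GetObject(ctx, "satellite-images", "scene-001.tif", minio.GetObjectOptions{})
	if err != nil {
		log.Fatal(err)
	}
	defer obj.Close()

	// Pipe the stream into the local IPFS daemon over its HTTP API and get back a CID.
	sh := shell.NewShell("localhost:5001")
	cid, err := sh.Add(obj)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("added to IPFS as", cid)
}
```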

What kind of “caching” are you looking for? You could potentially store your data in IPFS, then dump the merkle-dag objects and store them in minio. Since minio is S3-compatible, you could also use something like go-ds-s3 and have minio act as the S3 datastore you talk to.
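As a rough sketch of that dump-and-store idea (not a definitive implementation), reusing the same assumed minio-go and go-ipfs-api clients as above: fetch the raw block behind a CID from the local IPFS node and archive it into a hypothetical "ipfs-archive" bucket. Note this copies only the single block for that CID; a full DAG would need to be walked or exported first.

```go
import (
	"bytes"
	"context"

	shell "github.com/ipfs/go-ipfs-api"
	"github.com/minio/minio-go/v7"
)

// archiveBlock copies the raw merkle-dag block for one CID from IPFS into minio.
func archiveBlock(ctx context.Context, sh *shell.Shell, mc *minio.Client, cid string) error {
	raw, err := sh.BlockGet(cid) // raw bytes of the DAG node
	if err != nil {
		return err
	}
	// Store the block under its CID so it can be looked up later.
	_, err = mc.PutObject(ctx, "ipfs-archive", cid,
		bytes.NewReader(raw), int64(len(raw)), minio.PutObjectOptions{})
	return err
}
```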
