IPFS for files over 100GB

Hi, I’m a Cosmos ecosystem validator. I am attempting to distribute the data folders of Cosmos blockchains using IPFS, and I have run into a problem:

 3.82 GiB / 140.64 GiB [=>-------------------------------------------]   2.71% 13h52m30s

Thing is, the files will get larger, not smaller.

This is on a very fast machine with NVMe disks in a BTRFS RAID 0.

  • 16 current-gen AMD cores
  • 128 GB RAM
  • 4x NVMe drives in RAID 0 for increased read/write bandwidth

Unfortunately, the time to create a CID is currently too long to be viable for my use case unless I dedicate a machine solely to this task (which is under consideration). I am wondering if there are any settings I can tweak to improve the overall CID creation time.
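For reference, a minimal sketch of the add-time knobs Kubo (go-ipfs) exposes for large adds; the chunk size, hash choice, and path below are illustrative assumptions, not recommendations from this thread, and --nocopy requires the experimental filestore:

  # Assumption: the experimental filestore is acceptable here; --nocopy then
  # references the original files on disk instead of copying every block into
  # the datastore.
  ipfs config --json Experimental.FilestoreEnabled true

  # Illustrative values: 1 MiB chunks mean fewer blocks to hash and write,
  # blake2b-256 is typically faster to compute than the default sha2-256,
  # and --pin=false defers pinning work. /data/cosmos is a placeholder path.
  ipfs add --recursive --nocopy --raw-leaves --chunker=size-1048576 \
    --hash=blake2b-256 --pin=false /data/cosmos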

Thank you!

Try the badger datastore…

  • badgerds

Configures the node to use the badger datastore. This is the fastest datastore. Use this datastore if performance, especially when adding many gigabytes of files, is critical. However:

  • This datastore will not properly reclaim space when your datastore is
    smaller than several gigabytes. If you run IPFS with '--enable-gc' (you have
    enabled block-level garbage collection), you plan on storing very little data in
    your IPFS node, and disk usage is more critical than performance, consider using
    flatfs.
  • This datastore uses up to several gigabytes of memory.

This profile may only be applied when first initializing the node.


You’ll need to initialize a new node, since (AFAIK) there’s no way to switch datastore formats after initialization.
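A minimal sketch of what that looks like, assuming a fresh repo directory (the path is just an example):

  # Point IPFS at a fresh repo so the existing node is left untouched.
  export IPFS_PATH=/mnt/nvme/ipfs-badger

  # Initialize the new repo with the badger datastore profile.
  ipfs init --profile badgerds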


Hey, this was 100% a solution to my problem, many thanks!

❤️ 🙏