This question comes up a lot and I'm never quite sure what to answer.
Using the Badger datastore, I know that a single IPFS daemon can store at least a terabyte of data without effort, because I'm doing it myself. My questions are more:
- Is there anyone storing, say, more than 10 TB per node?
- What are the current issues that one can encounter with Badger DB? I have personally not seen many.
- What are the constraints imposed by the go-ipfs peers? Not just storing the data but handling it: I suppose a repo-format upgrade on 10 TB of data will take a while, at least. Advertising millions of keys to the DHT should also become problematic at some point (memory footprint? bandwidth constraints?). See the sketch after this list for a rough feel of the reprovide load.
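To put a rough number on that last point, here is a minimal sketch, assuming go-ipfs reprovides every record on its default `Reprovider.Interval` of 12 hours (that interval is configurable, and the 40M key count is the hypothetical figure from the calculation below):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumption: every record is re-announced once per
	// Reprovider.Interval (12h by default in go-ipfs).
	reprovideInterval := 12 * time.Hour

	// Hypothetical key count for a ~10 TiB repo (see the
	// dirty calculation below).
	const keys = 40_000_000

	// Sustained DHT provide operations per second needed just
	// to keep all records announced.
	providesPerSec := float64(keys) / reprovideInterval.Seconds()
	fmt.Printf("~%.0f provides/sec sustained\n", providesPerSec) // ~926
}
```

Under those assumptions that is on the order of 900+ provide operations per second, continuously, which hints at where the memory and bandwidth questions come from.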
Dirty calculation: 1M keys at 256 KiB per block (the default chunk size) gives around 250 GiB of data. Badger is supposed to handle millions of keys, so 10 TiB would mean around 40 million keys (10 TiB / 256 KiB ≈ 42M).
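The same arithmetic spelled out, for anyone who wants to plug in their own repo size (the 256 KiB block size is the go-ipfs default chunker setting; everything else is just unit conversion):

```go
package main

import "fmt"

func main() {
	const (
		blockSize = 256 << 10 // 256 KiB, the default chunk size
		gib       = 1 << 30   // 1 GiB in bytes
		tib       = 1 << 40   // 1 TiB in bytes
	)

	// 1M keys at 256 KiB per block:
	fmt.Printf("1M keys -> %.0f GiB\n",
		float64(1_000_000*blockSize)/gib) // ~244 GiB

	// Keys needed to fill a 10 TiB repo:
	fmt.Printf("10 TiB  -> %.1fM keys\n",
		float64(10*tib)/blockSize/1e6) // ~41.9M
}
```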