Let’s assume you want to store some very important data for a long time. IPFS has a pretty good foundation for ensuring data integrity: content-addressed storage with secure hashes as identifiers.
So how would I set up a number of IPFS nodes to actually store important data, assuming the nodes are distributed across different AWS availability zones / regions as well as different physical machines?
First of all, IPFS nodes should be configured so that they validate data on lookup, giving a guarantee that you never get wrong data for a hash. I guess this is covered by the HashOnRead config option, correct?
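If I understand the docs correctly, that would be something like this on each node (Kubo / go-ipfs; the option defaults to false and the daemon needs a restart to pick it up):

```sh
# Verify the hash of every block read from disk
# (Datastore.HashOnRead in the Kubo config; costs some extra CPU).
ipfs config --json Datastore.HashOnRead true

# Restart the daemon afterwards so the change takes effect.
```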
Second, there should be some kind of background process that regularly validates blocks, discards invalid ones, and requests them again over the network: basically a background repair mechanism similar to what Cassandra provides. Ideally, corrupted blocks would be stored somewhere (lost+found) in case a block can’t be recovered from the network.
Does such a repair mechanism exist? If so, how would I configure it?
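The closest thing I’ve found so far is `ipfs repo verify`, which checks all blocks in the local repo. A naive scheduled job built on top of it might look like the sketch below. This is just what I have in mind, not something I know works: I’m assuming `ipfs repo verify` exits non-zero when it finds corrupt blocks, and that re-walking a pinned DAG with `ipfs refs -r` causes Bitswap to re-fetch blocks that fail local validation.

```sh
#!/bin/sh
# Hypothetical nightly "repair" job for a single Kubo node (sketch).
# 1. Verify every block in the local repo.
# 2. On failure, re-walk all recursively pinned DAGs so that missing or
#    invalid blocks get requested again from other nodes via Bitswap.

if ! ipfs repo verify; then
  ipfs pin ls --type=recursive --quiet | while read -r cid; do
    # Traversing the DAG forces each block to be read (and, with
    # HashOnRead enabled, validated); the output itself is discarded.
    ipfs refs -r "$cid" > /dev/null
  done
fi
```

Even if that works, it only repairs pinned content and doesn’t cover the lost+found part, so I’d be interested to hear whether something built-in exists.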