Actually, this may be doable with zksnarks if we can use them to prove SHA1 hashes “valid”. Of course, I know next to nothing about zksnarks so this may not be possible…
Backing up a bit, the naive way to do this would be to send chunks in reverse order as follows:
chunks := split(block)
for i := len(chunks) - 1; i > 0; i-- {
    send(SHA1(concat(chunks[:i])), chunks[i])
}
Given that SHA1 is a streaming hash algorithm, the receiver can validate each chunk as follows:
hash := hashOfEntireBlock
for hash != sha1StartingHash {
    hashOfPrefix, chunk := receive()
    assert(Sha1Extend(hashOfPrefix, chunk) == hash)
    hash = hashOfPrefix
}
// All SHA1 hashes start from the same fixed initial value, so reaching
// sha1StartingHash means every chunk of the block has been validated.
Unfortunately, I’m almost positive that it’s entirely possible for an attacker to cook up chunks for some targetHash as follows:
hash := targetHash
for {
    hash, chunk := solveForHashChunkPair(hash)
    send(hash, chunk)
}
Basically, as long as the attacker never tries to “finish” sending the file, they can keep feeding us bogus chunks indefinitely.
However, we may be able to use zksnarks to prove that a sender knows how to finish sending a file (without actually sending it). To do this, we’d need to be able to use zksnarks to prove that, in order to produce a proof P (zksnark) for some SHA1 hash H, the author of P must have known some D such that SHA1(D) == H. This would allow a sender to prove to the receiver that it knows a finite sequence of steps to finish sending the file.
Note: These proofs only need to be produced the first time an object is added to the network, not every time a peer sends an object to another peer. However, they would have to be produced at least once, so adding a large object could take over an hour. The moral of the story is: don’t store 100MiB files in git!