How does IPFS handle redundancy?

The IPFS documentation claims, and I can see the power of it, that if an added file differs from another by even a single byte, it gets an entirely different hash.
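For illustration, here is what I mean (a quick sketch using plain SHA-256 via Python's hashlib, not IPFS's actual multihash/CID encoding):

```python
import hashlib

# Two 1 MiB payloads that differ only in their final byte.
a = b"x" * (1024 * 1024)
b = b"x" * (1024 * 1024 - 1) + b"y"

# The digests share no visible structure, even though the
# inputs are identical except for one byte.
print(hashlib.sha256(a).hexdigest())
print(hashlib.sha256(b).hexdigest())
```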

Of course, the two files may still be almost entirely identical, even though their hashes look nothing alike.

Does IPFS have any method to avoid downloading similar parts twice? IIRC, the file storage method uses some kind of “blocks”, meaning that in theory you could build up two different files from common blocks, each of which exists in only one copy on the local filesystem.
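Something like this toy model is what I have in mind (a minimal sketch of block-level deduplication, not go-ipfs internals): split files into fixed-size blocks, key each block by its hash, and store each unique block once no matter how many files reference it.

```python
import hashlib
import os

BLOCK_SIZE = 256 * 1024  # 256 KiB, the default chunk size of `ipfs add`

block_store = {}  # hash -> block bytes; one copy per unique block

def add_file(data: bytes) -> list[str]:
    """Store a file as blocks; return its manifest (the list of block hashes)."""
    manifest = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)  # skip blocks we already hold
        manifest.append(digest)
    return manifest

prefix = os.urandom(BLOCK_SIZE * 3)                       # three distinct blocks
manifest_a = add_file(prefix)                             # file A: just the prefix
manifest_b = add_file(prefix + os.urandom(BLOCK_SIZE))    # file B: prefix + 1 new block

print(len(manifest_a) + len(manifest_b), "blocks referenced,",
      len(block_store), "actually stored")  # 7 referenced, only 4 stored
```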

Not sure if I’m explaining this well, but I hope the idea comes across.

Does IPFS address this redundancy issue in any way? If not, is there any possible way to do it, or maybe an open issue on GitHub that addresses it?


Yes,
the file is actually divided into smaller pieces (chunks), which should solve this problem :wink:
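One caveat, though: the default chunker splits at fixed 256 KiB offsets, so two files deduplicate only where their shared data happens to be block-aligned; inserting a single byte near the start of a file shifts every later boundary and breaks the sharing. go-ipfs also offers a content-defined chunker (if I remember the flag right, `ipfs add --chunker=rabin`) that picks boundaries from the bytes themselves, so chunks re-synchronize after an edit. Here is a toy illustration of the idea, using a crude polynomial rolling hash rather than a real Rabin fingerprint:

```python
import hashlib
import os

WINDOW = 16            # rolling-hash window, in bytes
BASE, MOD = 257, 1 << 32
POW = pow(BASE, WINDOW, MOD)   # multiplier for expiring the oldest byte
MASK = (1 << 11) - 1   # controls the average chunk size (roughly 2 KiB here)

def chunks(data: bytes):
    """Cut a boundary wherever the rolling hash hits a magic bit pattern."""
    start = rolling = 0
    for i, byte in enumerate(data):
        rolling = (rolling * BASE + byte) % MOD
        if i >= WINDOW:
            rolling = (rolling - data[i - WINDOW] * POW) % MOD
        if i - start >= WINDOW and (rolling & MASK) == MASK:
            yield data[start:i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]

def chunk_hashes(data: bytes) -> list[str]:
    return [hashlib.sha256(c).hexdigest() for c in chunks(data)]

data = os.urandom(100_000)
edited = data[:50] + b"!" + data[50:]   # insert one byte near the start

a, b = chunk_hashes(data), chunk_hashes(edited)
print(len(a), "vs", len(b), "chunks;", len(set(a) & set(b)), "identical")
# Most chunks survive the edit; with fixed-size chunks, none would.
```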

So my guess about “building files from common chunks” nailed what’s actually happening?

Just to make it clear :slight_smile:

I want to add a tutorial about this to the DWeb primer. If you want to encourage it, like the issue in the dweb primer repo.
