How long is "long term", though? I'm concerned because changing SHA256 to BLAKE2b seems like a fairly frivolous change - SHA256 isn't broken or considered weak by any standard. The only justification I could find was a speed increase - nice, but not something worth changing hashes over in a stable network.
Thank you very much for the info, though it seems sparse. For example, I can't find much on how file chunks are wrapped in protobufs?
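The closest thing I've turned up is the dag-pb schema in go-ipfs (merkledag.proto), which I read as each block being a PBNode along these lines - this is my reconstruction from the source, so treat it as a sketch rather than documentation:

```proto
// My reading of go-ipfs's merkledag.proto; field numbers and names
// are from the dag-pb format as I understand it.
message PBLink {
  optional bytes Hash = 1;   // multihash of the linked object
  optional string Name = 2;  // name of the link (e.g. a file name)
  optional uint64 Tsize = 3; // cumulative size of the linked subtree
}

message PBNode {
  repeated PBLink Links = 2; // links to child blocks
  optional bytes Data = 1;   // opaque payload (unixfs metadata + chunk data)
}
```

If that's right, a file chunk ends up as the Data field of a leaf PBNode, with parent nodes linking the chunks together - but an authoritative description of this in the docs would be much appreciated.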
I'm hoping that the design is intended for more than just 'most people', unless the original intent was never really to support such usage. Note that I'm looking at this from an uploader's perspective, not a downloader's, so presumably everything will be pinned.
If 10TB is unusual today, it won't be in 10 years' time. Also, spare CPU or I/O time isn't necessarily free, particularly in a cloud environment where you pay for what you use, or in more complex topologies involving hybrid semi-cold storage (where access can actually be relatively expensive).
To put it another way, I don't think rehashing is an acceptable solution unless there is absolutely no other choice.
A single core of an Atom C2750 - but the C2750 is quite a powerful processor. Consider one of these: a 1GHz dual-core Cortex A9 with 2-6TB of storage, which seems like a very good candidate for IPFS 'seeding'. I'd be surprised if SHA256 ran faster than 50MB/s total across both cores on it.
Perhaps you're thinking of users running high-performance 60+W desktop processors? If that's the case, then yes, CPU is likely not an issue, but on a low-power 2W CPU, which I expect will become increasingly prevalent, the picture can be quite different.
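If anyone wants numbers for their own hardware, here's a rough single-threaded throughput check using Go's crypto/sha256 (just a sketch; the 1GiB workload size is arbitrary):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"time"
)

func main() {
	// Hash 1 GiB of data in 64 MiB writes and report the throughput.
	buf := make([]byte, 64<<20)
	const rounds = 16

	h := sha256.New()
	start := time.Now()
	for i := 0; i < rounds; i++ {
		h.Write(buf)
	}
	h.Sum(nil)
	elapsed := time.Since(start)

	mb := float64(len(buf)) * rounds / (1 << 20)
	fmt.Printf("SHA-256: %.1f MB/s (single core)\n", mb/elapsed.Seconds())
}
```

On a modern desktop core this will report hundreds of MB/s; I'd expect a small fraction of that on the ARM boards above.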
Unfortunately the fragmentation problem I refer to is seen frequently with torrents. A file is shared, then re-shared again and again. Often the original .torrent files are not kept around, so for less frequently accessed content, people end up re-creating .torrent files from the actual content in order to re-share it.
Torrent files are identified by their info hash, but unfortunately this can vary even when the underlying content is identical, for example due to differences in selected piece sizes or in the ordering of files. This causes fragmentation in the network: users accessing the content via the older .torrent file may never see peers accessing it via the newly shared .torrent (or torrents, if multiple versions are distributed). If BitTorrent had defined a strict hashing mechanism - say, a fixed chunk size (possibly in a tree fashion) and an exact ordering of files - this problem wouldn't exist, as the same content would always yield the same hash and the network would stay efficient.
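To make that concrete, here's a toy version of the kind of scheme I mean. The 256KiB piece size, the flat hash-of-piece-hashes, and the lexicographic file ordering are all arbitrary choices - the point is only that they're fixed by the spec, so identical content always produces an identical identifier:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
	"sort"
)

// pieceSize is fixed by the (hypothetical) spec, not chosen per upload.
const pieceSize = 256 << 10

// contentHash hashes a set of files using a fixed piece size and a fixed
// (lexicographic) file order, so the same content always yields the same root.
func contentHash(paths []string) ([]byte, error) {
	sort.Strings(paths) // exact ordering is part of the spec
	root := sha256.New()
	buf := make([]byte, pieceSize)
	for _, p := range paths {
		f, err := os.Open(p)
		if err != nil {
			return nil, err
		}
		for {
			n, err := io.ReadFull(f, buf)
			if n > 0 {
				piece := sha256.Sum256(buf[:n]) // hash each fixed-size piece
				root.Write(piece[:])            // flat hash-of-hashes; a tree works too
			}
			if err == io.EOF || err == io.ErrUnexpectedEOF {
				break
			}
			if err != nil {
				f.Close()
				return nil, err
			}
		}
		f.Close()
	}
	return root.Sum(nil), nil
}

func main() {
	sum, err := contentHash(os.Args[1:])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%x\n", sum)
}
```

With rules like these fixed, two people independently sharing the same files always arrive at the same hash, and their swarms merge instead of fragmenting. (A real spec would also need to pin down file names and metadata; I've left that out for brevity.)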
I was hoping that IPFS would address this issue, and the way the home page is presented led me to believe it would, but thanks to the explanations here, that clearly doesn't seem to be the aim.