Adopting IPFS in today's web

I just started playing with IPFS tonight, and I'm intrigued. Specifically, I like the CDN-like qualities of the swarm distributing my content, but as I've searched around Reddit and looked at other people's content out there, it's raised some questions. I'm hoping someone can confirm the concepts I've got in my head:

Question 1: IPFS's CDN-like qualities only really work if users are also running IPFS locally, right? For example, if I post a song and share /ipfs/hashtaghere through a website link pointing to localhost:8080/ipfs/hashtaghere, then only users with IPFS installed would get the file, and they would ideally join the swarm sharing those blocks. People without IPFS would have to go through a public gateway, like ipfs.io/ipfs/hashtaghere, in which case they'd be of no benefit to the swarm, correct? And if ipfs.io is hard-coded as the source link, then basically everything just downloads from ipfs.io's servers with no CDN-like benefit, right?
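
To make that concrete, here's roughly the fallback logic I imagine a site would need. This is just a sketch assuming the default local gateway port 8080; the CID is a placeholder:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// gatewayURL prefers the local gateway when a daemon is running, so the
// visitor fetches (and can reshare) the content through their own node;
// otherwise it falls back to a public gateway, where the visitor gets the
// file but contributes nothing back to the swarm.
func gatewayURL(cid string) string {
	client := &http.Client{Timeout: 500 * time.Millisecond}
	// Quick probe: if anything answers on the default gateway port,
	// assume a local daemon is available.
	if resp, err := client.Get("http://localhost:8080/"); err == nil {
		resp.Body.Close()
		return "http://localhost:8080/ipfs/" + cid
	}
	return "https://ipfs.io/ipfs/" + cid
}

func main() {
	fmt.Println(gatewayURL("QmPlaceholderHashHere"))
}
```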

Question 2: Do blocks ever get pushed to a swarm peer, or are they only shared when that data is requested?

Last one: I remember a project from about 14 years ago (can't remember the name) that did basically the same kind of thing: breaking up files, hashing the pieces, spreading them around. One of its claims was that if you shared a copyrighted song, lawyers couldn't prove it, because the piece of the file they downloaded from you could technically have been created as a block of a different file, like a family photo. Since IPFS gives back the same hash for identical content blocks, and it breaks files down into 256 KiB chunks, could the same scenario theoretically happen here? Could a picture that is broken down into blocks share some of the same blocks as, say, a movie file?
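
Here's a toy version of what I mean. It's not real IPFS chunking (real blocks are dag-pb nodes addressed by CIDs, so these hashes won't match real block CIDs), just sha256 over fixed-size chunks to show that an identical chunk hashes identically no matter which file it came from:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// chunkSize mirrors the IPFS default of 256 KiB (not 256 bytes).
const chunkSize = 256 * 1024

// chunkHashes splits data into fixed-size chunks and hashes each one --
// a simplified stand-in for how IPFS addresses blocks by content.
func chunkHashes(data []byte) []string {
	var hashes []string
	for off := 0; off < len(data); off += chunkSize {
		end := off + chunkSize
		if end > len(data) {
			end = len(data)
		}
		sum := sha256.Sum256(data[off:end])
		hashes = append(hashes, fmt.Sprintf("%x", sum[:8]))
	}
	return hashes
}

func main() {
	shared := make([]byte, chunkSize) // one full chunk both "files" contain
	fileA := append(append([]byte{}, shared...), "tail of file A"...)
	fileB := append(append([]byte{}, shared...), "tail of file B"...)
	// The first hash is identical for both files: the same block can
	// legitimately belong to more than one file.
	fmt.Println("file A chunks:", chunkHashes(fileA))
	fmt.Println("file B chunks:", chunkHashes(fileB))
}
```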

Thanks in advance! :slight_smile:
Mike

Question 1:

When you're requesting data from ipfs.io, you're still requesting it from the swarm. You're simply requesting it through a public gateway, which handles serving files from IPFS to users who aren't running an IPFS daemon. You can go to any public IPFS gateway and you'll still be requesting data from the swarm, though you'll likely receive it from a different peer. Anyone who wants data from IPFS either needs to run the IPFS daemon themselves or request it through a gateway.
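
If it helps, here are the two retrieval paths sketched side by side with nothing but stdlib HTTP. Treat it as a sketch: it assumes the daemon's default API port 5001 and the /api/v0/cat endpoint, and newer daemons require POST for API calls:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// fetchViaDaemon asks your own node for the content; your node joins the
// swarm, fetches the blocks, and can reshare them afterwards.
func fetchViaDaemon(cid string) ([]byte, error) {
	resp, err := http.Post("http://127.0.0.1:5001/api/v0/cat?arg="+cid, "", nil)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

// fetchViaGateway lets the gateway's node do the swarm work; you receive
// plain HTTP and contribute nothing back to the swarm.
func fetchViaGateway(cid string) ([]byte, error) {
	resp, err := http.Get("https://ipfs.io/ipfs/" + cid)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	cid := "QmPlaceholderHashHere"
	if data, err := fetchViaDaemon(cid); err == nil {
		fmt.Printf("via daemon: %d bytes\n", len(data))
	} else if data, err := fetchViaGateway(cid); err == nil {
		fmt.Printf("via gateway: %d bytes\n", len(data))
	}
}
```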

Question 2:

No, blocks are never pushed to the swarm; that's not how bitswap (the block exchange protocol) works. Instead, bitswap maintains a wantlist that it uses to ask peers for the blocks it wants.
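
You can actually watch that wantlist on a running daemon through its HTTP API. A minimal sketch, assuming the default API address 127.0.0.1:5001 and POST-style API calls:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Dump the daemon's current wantlist: the blocks bitswap is asking
	// its peers for right now.
	resp, err := http.Post("http://127.0.0.1:5001/api/v0/bitswap/wantlist", "", nil)
	if err != nil {
		fmt.Println("is the daemon running?", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // JSON listing the CIDs currently wanted
}
```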

Unfortunately I can’t answer your third question.

Thanks for the response!

To Q1: Isn't the ipfs.io gateway just running a peer behind the scenes? So if I sent an ipfs.io link to 5 friends, it would just be the ipfs.io peer (and me) that has the file blocks, right, even though 5 people downloaded the file through that link? Contrast that with sending an IPFS hash to 5 friends who each run IPFS locally: they would each get a copy of the file blocks. Or am I misunderstanding?

Q2: Got it, thanks again!

Q1: In the case of ipfs.io, yes, it's likely you're all receiving the content from the same peer. And in the case of sending the hash directly to your friends, their IPFS nodes would request it from the closest peer to their node.

Let's say you and your friend have laptops on the same LAN, you host a file, and the ipfs.io gateway hosts the same file. When your friend requests the content, bitswap will go to the closest peer it can find, so if there's connectivity between your two laptops, your friend will receive the data directly from you.

Now if we were to replay the same situation, except your IPFS node wasn't reachable by your friend, then your friend's node would retrieve the content from the ipfs.io gateway's peer instead.
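
If you want to check which situation you're in, you can ask your daemon which peers it's directly connected to and look for your friend's peer ID in the output. Again a minimal sketch, assuming the default API port 5001:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// List the peers this node is directly connected to; if your friend's
	// peer ID shows up here, bitswap can fetch from them directly.
	resp, err := http.Post("http://127.0.0.1:5001/api/v0/swarm/peers", "", nil)
	if err != nil {
		fmt.Println("is the daemon running?", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // JSON listing connected peer multiaddrs
}
```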