That’s basically the concept.
For files that are also available elsewhere, like an ISO of a Linux distribution, another user can add the exact same file this way, and the Content-IDs will match.
This means you don’t have to transfer files from one node to another; you can instead add the same file in different locations to get the same effect.
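A quick sketch of this with the `ipfs` CLI (the filename is just a placeholder, and this assumes both users have byte-identical copies of the file):

```shell
# User A adds the file on their node:
ipfs add linux-distro.iso

# User B, on a completely different node, adds an identical copy:
ipfs add linux-distro.iso

# Both commands print the same Content-ID, because the CID is derived
# from the data itself (given the same chunking settings), not from
# who added it or where.
```

Note that the CIDs only match if both nodes use the same chunker and CID version, which is the case with default settings.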
If you want to ‘hold’ specific content, the command is ‘pin add’ on the CLI. This will fetch any data necessary from the network, according to the Content-ID, and prevents the data from being deleted by the garbage collector.
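For example (the CID below is a placeholder for whatever content you want to keep):

```shell
# Fetch the content (if not already local) and protect it from GC:
ipfs pin add QmYourContentIdHere

# List everything that is currently pinned:
ipfs pin ls --type=recursive

# Release the pin when you no longer need to hold the content:
ipfs pin rm QmYourContentIdHere
```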
Additionally, all data you add in the Files tab of the GUI will be spared from being garbage collected when you run low on disk space.
The default disk space limit is 10 GB; once 9 GB is filled, the garbage collector will try to make room by dropping content you have not pinned.
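These limits live in the node’s config and can be inspected or changed from the CLI, for example:

```shell
# Show the configured maximum repo size (default "10GB"):
ipfs config Datastore.StorageMax

# Show the GC watermark (default 90, i.e. GC kicks in around 9 GB):
ipfs config Datastore.StorageGCWatermark

# Raise the repo size limit, e.g. to 50 GB:
ipfs config Datastore.StorageMax 50GB

# Run garbage collection manually:
ipfs repo gc
```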
So if someone accesses your file, it ends up in their cache, but it might not be pinned, or only partially cached. If that user runs low on disk space, your data might be dropped from their cache again - merely accessing content does not pin it.
Running a cluster, on the other hand, guarantees that a given number of copies is held in the network, allowing for redundancy and parallel downloads of the data.
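A minimal sketch of what that looks like with IPFS Cluster, assuming `ipfs-cluster-ctl` is installed and a set of cluster peers is already running (the CID is again a placeholder):

```shell
# Pin a CID cluster-wide, asking for between 2 and 3 replicas
# spread across the cluster peers:
ipfs-cluster-ctl pin add --replication-min 2 --replication-max 3 QmYourContentIdHere

# Check which peers currently hold a replica:
ipfs-cluster-ctl status QmYourContentIdHere
```

The cluster then re-pins the content on another peer if one of the current holders goes away, which is what provides the redundancy mentioned above.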