That's not entirely true. We may end up importing the files with a different chunking algorithm.
This would benefit from an efficient transport for two ipfs nodes on the same machine (unix sockets or shared memory?).
We'd like this, not only for communication between nodes on a single machine but also between a running daemon and the CLI tool, but we don't have it yet.
One thing: how would you make sure that everything from the main ipfs is synced to the external drive ipfs, not just specific hashes or pinned hashes? I know the ipfs philosophy is to share data only on explicit demand, but in this case the option to share everything might be useful.
Not that I know of and, due to GC, it's probably best not to rely on this. Personally, I recommend either pinning the files you care about or adding them to your local mfs (`ipfs files ...`).
Note: an alternative to all of this is to shut down your local daemon and copy the repo. However, that format changes over time, so it's less likely to be stable.
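Concretely, the repo-copy approach looks something like the sketch below. The scratch directories stand in for the real repo at `$IPFS_PATH` (default `~/.ipfs`), and the repo's internal layout is version-dependent, so treat this as illustration only:

```shell
# Stop the daemon first so the repo isn't written to mid-copy:
#   ipfs shutdown
# A scratch directory stands in for the real repo at $IPFS_PATH (~/.ipfs).
SRC="$(mktemp -d)"
DST="$(mktemp -d)/repo-backup"
echo '{"demo": true}' > "$SRC/config"   # real repos also hold blocks/, datastore/, etc.
cp -r "$SRC" "$DST"                     # with the daemon stopped, a plain recursive copy suffices
cat "$DST/config"
```

The copy is only safe while the daemon is stopped; otherwise the datastore can change under you mid-copy.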
We've discussed having a file format called CAR for exporting/importing merkledags. We have some notes here: https://github.com/ipfs/archive-format. However, they're pretty out of date.
Basically, we want several properties from CARs:
- Seekable/Traversable: It should be possible to traverse a DAG stored in a CAR in one pass (without necessarily reading the entire CAR).
- Simple/Stable: Importing/Exporting should be easy. That repo I linked to mentions things like signatures, metadata, etc. However, that's really a separate concern.
I'm currently writing a proposal, but a CAR will likely be a concatenation of a topological sort of the IPLD DAG. Specifically:

```
magic-number root-cid [object [child-offset]*]*
```
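To make that layout concrete, here is a toy encoder/decoder in that shape. Everything specific here — the magic value, the fixed-width length and offset fields — is an assumption for illustration; the actual proposal hasn't settled an encoding. The point is the two properties above: objects are written parents-before-children (one-pass traversal), and child offsets let a reader seek straight to any object without scanning the whole file.

```python
import io
import struct

MAGIC = b"CAR0"  # hypothetical magic number, not a real constant

def topo_order(root, dag):
    """Parents-before-children ordering via DFS (dag: cid -> (data, children))."""
    order, seen, stack = [], set(), [root]
    while stack:
        cid = stack.pop()
        if cid in seen:
            continue
        seen.add(cid)
        order.append(cid)
        stack.extend(reversed(dag[cid][1]))
    return order

def entry_size(cid, data, children):
    # Assumed entry layout: [u16 cid-len][cid][u32 data-len][data][u16 n-children][u64 offset]*
    return 2 + len(cid) + 4 + len(data) + 2 + 8 * len(children)

def write_car(root, dag):
    order = topo_order(root, dag)
    # First pass: compute each object's byte offset (fixed-width fields make sizes predictable).
    offsets, pos = {}, len(MAGIC) + 2 + len(root)
    for cid in order:
        offsets[cid] = pos
        pos += entry_size(cid, *dag[cid])
    # Second pass: write the header, then every object with its children's offsets.
    buf = io.BytesIO()
    buf.write(MAGIC)
    buf.write(struct.pack(">H", len(root)) + root)
    for cid in order:
        data, children = dag[cid]
        buf.write(struct.pack(">H", len(cid)) + cid)
        buf.write(struct.pack(">I", len(data)) + data)
        buf.write(struct.pack(">H", len(children)))
        for child in children:
            buf.write(struct.pack(">Q", offsets[child]))
    return buf.getvalue(), offsets

def read_object(car, offset):
    """Seek straight to one object; no need to read the entire CAR."""
    f = io.BytesIO(car)
    f.seek(offset)
    (cid_len,) = struct.unpack(">H", f.read(2))
    cid = f.read(cid_len)
    (data_len,) = struct.unpack(">I", f.read(4))
    data = f.read(data_len)
    (n,) = struct.unpack(">H", f.read(2))
    child_offsets = [struct.unpack(">Q", f.read(8))[0] for _ in range(n)]
    return cid, data, child_offsets

# A three-node toy DAG: cid -> (block data, child cids).
dag = {
    b"root": (b"hello", [b"a", b"b"]),
    b"a":    (b"left",  []),
    b"b":    (b"right", []),
}
car, offsets = write_car(b"root", dag)
```

Reading `read_object(car, offsets[b"root"])` returns the root's cid, data, and its children's offsets, and following one of those offsets jumps directly to that child.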