I am wondering if "ipfs get" or some other command can obtain files over FTP/HTTP to retain backward compatibility, in case the hash/file is not available over IPFS.
Use case: a lot of large files are currently hosted on FTP/HTTP but not on IPFS. A user who already has IPFS installed could run a single standard ipfs command that fetches the file (regardless of which protocol is used to fetch it), hashes it, and pins/seeds it on the local IPFS node, after which the file is served to the network. Because this process is just as simple as getting a file over HTTP/FTP/SCP/IPFS, and because IPFS provides the one method that abstracts over all those protocols, more users would migrate to IPFS. And once the file is hashed and seeded, it becomes available to everybody on the network through the native protocol ("ipfs get"), which in turn drives adoption.
Perhaps it doesn't have to be "ipfs get"; it could be "ipfs awesomeGet" or some other command that uses ipfs get/FTP/HTTP or other protocols, and is invoked not with the hash of the desired file but with the hash of another file, one that contains the addresses where the desired file is stored on HTTP/FTP/SCP hosts.
To implement this, I was thinking we would need a meta-file holding the instructions for everywhere the file can be obtained from, for example:
file1.txt (with #hash1):
    FTP:  ftp://ftp.example.com/xyz.tar
    HTTP: http://www.example.com/xyz.tar
    SCP:  user@example.com:xyz.tar
    IPFS: #hash2  (if already available)
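For a concrete encoding, the meta-file could be a small JSON document. Here is a minimal sketch in Go; the Sources type, its field names, and the Parse helper are all hypothetical illustrations, not an existing IPFS format:

```go
package metafile

import "encoding/json"

// Sources lists everywhere a single file can be obtained from.
// The type, its field names, and the JSON layout are illustrative
// assumptions, not an existing IPFS schema.
type Sources struct {
	Name string `json:"name"`           // e.g. "xyz.tar"
	IPFS string `json:"ipfs,omitempty"` // #hash2, if already on IPFS
	HTTP string `json:"http,omitempty"` // http://host/xyz.tar
	FTP  string `json:"ftp,omitempty"`  // ftp://host/xyz.tar
	SCP  string `json:"scp,omitempty"`  // user@host:xyz.tar
}

// Parse decodes a meta-file fetched via "ipfs get #hash1".
func Parse(data []byte) (Sources, error) {
	var s Sources
	err := json.Unmarshal(data, &s)
	return s, err
}
```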
On running "ipfs awesomeGet #hash1", ipfs gets the above file1.txt (the meta-file), parses it, and runs FTP/HTTP/SCP/IPFS to fetch the target file from the specific locations listed in it. If the file was not obtained from the IPFS network itself, ipfs then hashes the file, pins it on the local node, and makes it available to the network.
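Putting it together, a rough sketch of what such a wrapper could look like as a standalone Go program that shells out to the ipfs CLI. Only the "ipfs get"/"ipfs add" invocations are real commands; awesomeGet, fetchHTTP, the meta.json filename, and the JSON schema are assumptions carried over from the sketch above, and only the HTTP fallback is shown (FTP/SCP would follow the same pattern):

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"os/exec"
)

// sources mirrors the hypothetical meta-file schema sketched above.
type sources struct {
	Name string `json:"name"`
	IPFS string `json:"ipfs,omitempty"`
	HTTP string `json:"http,omitempty"`
}

// fetchHTTP downloads url into dest; an FTP or SCP fallback would
// follow the same shape with its own client library.
func fetchHTTP(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

func awesomeGet(metaHash string) error {
	// 1. Fetch the meta-file itself over IPFS.
	if err := exec.Command("ipfs", "get", "-o", "meta.json", metaHash).Run(); err != nil {
		return err
	}
	raw, err := os.ReadFile("meta.json")
	if err != nil {
		return err
	}
	var src sources
	if err := json.Unmarshal(raw, &src); err != nil {
		return err
	}
	// 2. Prefer the native protocol if the file is already on IPFS.
	if src.IPFS != "" &&
		exec.Command("ipfs", "get", "-o", src.Name, src.IPFS).Run() == nil {
		return nil
	}
	// 3. Otherwise fall back to a legacy location.
	if err := fetchHTTP(src.HTTP, src.Name); err != nil {
		return err
	}
	// 4. Hash, pin, and serve to the network ("ipfs add" pins by default).
	out, err := exec.Command("ipfs", "add", src.Name).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	if len(os.Args) != 2 {
		log.Fatal("usage: awesomeget <meta-file-hash>")
	}
	if err := awesomeGet(os.Args[1]); err != nil {
		log.Fatal(err)
	}
}
```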
Perhaps there are some better implementations?
Is there any wrapper function like this currently implemented in IPFS?