The reason I'm pushing to do the entire process as part of a script is that I'm writing a Deploy Plugin for the Ember CLI: http://ember-cli-deploy.com/
These plugins all follow a standard pattern: build -> deploy -> optionally "activate / set live"
To me, it seems like a great use case for IPFS: build -> ipfs.files.add & ipfs pin -> optionally update an IPNS name ("activate / set live")
To conform to the plugin architecture model, the script needs to be able to separate the "deploy" and "activation" steps, and run one or both of them from the build machine (or CI). A remote VPS running a cron job means that the "activation" is handled remotely, and not something triggerable from the CLI plugin.
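As a rough sketch of how those two steps could stay separate, here is what the deploy/activate split might look like as wrappers around the go-ipfs CLI. This assumes `ipfs` is on the PATH; the function names, the `dist` build directory, and the `dry_run` flag are illustrative, not the actual plugin code:

```python
import subprocess

def deploy(build_dir, dry_run=False):
    # "Deploy": recursively add (and implicitly pin) the build output.
    # -Q prints only the root CID, -r adds the directory recursively.
    cmd = ["ipfs", "add", "-Q", "-r", build_dir]
    if dry_run:
        return cmd
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout.strip()

def activate(cid, dry_run=False):
    # "Activate / set live": point this node's IPNS name at the new root CID.
    cmd = ["ipfs", "name", "publish", f"/ipfs/{cid}"]
    if dry_run:
        return cmd
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout.strip()
```

Because the two steps are independent functions, the plugin can run `deploy` on every build and defer `activate` until the user explicitly sets the release live.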
Thanks so much for everyone's input here! I absolutely understand the IPFS world a lot better now, and have everything I need to continue.
For those interested - I'm going to expose the "internal" IPFS HTTP API from the remote VPS node via NGINX, and protect it with a secret key that only the deploy machine knows.
This way the deployer machine can control activations & pins over HTTPS, and thus conform to Ember CLI's plugin pattern!
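A minimal sketch of that NGINX setup, assuming the IPFS API is on its default port 5001 and using a hypothetical `X-Deploy-Key` header for the shared secret (the server name and header name are placeholders; TLS certificate directives are omitted):

```nginx
server {
    listen 443 ssl;
    server_name ipfs-admin.example.com;

    location /api/v0/ {
        # Reject any request that doesn't carry the deploy machine's secret.
        if ($http_x_deploy_key != "REPLACE_WITH_SECRET") {
            return 403;
        }
        # Forward authorised requests to the node's local HTTP API.
        proxy_pass http://127.0.0.1:5001;
    }
}
```

Note the IPFS HTTP API is fully privileged, so anything beyond a sketch should also restrict which endpoints are reachable, not just who can reach them.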
I'll look into IPFS Cluster down the track to add failover / distribution to the pinning machine.
Thanks again - excited to make Ember CLI apps simple to deploy to IPFS!
No, the activation is when you update the IPNS name. The server is just so you're guaranteed at least one active node even if your computer is offline.
This is all interesting information, but I want to ask about a specific edge case: if the node fails and then comes back online, will the files still be pinned?
Hello,
I hope it is OK to write in an old topic, since its title perfectly matches my intention.
I'm building an application that uses IPFS and saves and pins some chunks of data.
The current design is that every new client pins every chunk of data it knows about.
1. Do I understand correctly that this leads to huge data redundancy?
2. Can I somehow detect how many nodes have pinned a given hash, and based on that decide whether I (as a client) should pin it or not? (That way the application can decrease the amount of disk space used by each user while still keeping decent redundancy.)
This would lead to data redundancy, but it seems like that's what you're wanting?
There's a couple of different routes you could take here.
If you don't care who is hosting what content, and only want a certain number of nodes to be pinning each piece of content, consider checking out IPFS Cluster. You may find that it helps solve some of your IPFS pinset orchestration needs.
For a more manual approach, you can use the `ipfs dht findprovs` API call. This lets you see who is providing a given piece of content, and you can make decisions based on that information. Fair warning: this method would likely require a decent amount of work and edge-case handling to pull off correctly. One likely edge case is gateway nodes caching your content as a side effect of retrieval. Since you can't tell whether those nodes have actually pinned your content rather than just cached it, you may get false positives about how many people are really pinning it.
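A sketch of what that manual approach could look like, assuming the go-ipfs CLI is on the PATH. The `TARGET_REDUNDANCY` threshold and function names are made up for illustration, and, per the caveat above, the provider count may overcount because of gateway caching:

```python
import subprocess

TARGET_REDUNDANCY = 3  # assumed threshold, tune for your application

def provider_count(cid):
    # 'ipfs dht findprovs' lists peer IDs currently advertising the CID.
    # -n caps how many providers to search for; note the result includes
    # nodes that merely cached the content, not only nodes that pinned it.
    out = subprocess.run(["ipfs", "dht", "findprovs", "-n", "20", cid],
                         capture_output=True, text=True, check=True).stdout
    return len([line for line in out.splitlines() if line.strip()])

def should_pin(count, target=TARGET_REDUNDANCY):
    # Pin only while the observed provider count is below the target,
    # so each new client stops pinning once redundancy looks sufficient.
    return count < target
```

A client would then call something like `should_pin(provider_count(cid))` before deciding to pin, accepting that the count is an optimistic upper bound.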