Trying to better understand the pinning concept!

Cool ok - understood.

The reason I'm pushing to do the entire process as part of a script is because I'm writing a Deploy Plugin for the Ember CLI: http://ember-cli-deploy.com/

These plugins all follow a standard pattern:
build -> deploy -> optionally "activate / set live"

To me, it seems like a great use case for IPFS:
build -> ipfs.files.add & ipfs pin -> optionally ipns name ("activate / set live")

To conform to the plugin architecture model, the script needs to be able to separate the "deploy" and "activation" steps, and run one or both of them from the build machine (or CI). A remote VPS running a cron job means that the "activation" is handled remotely, and not something "triggerable" from the CLI plugin.
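For reference, here is a minimal sketch of how those two steps might look from the build machine, assuming the ipfs-http-client package and a daemon API that the build machine can reach; the function names and the single-buffer add are purely illustrative (a real plugin would add the whole dist/ directory):

```js
// Sketch of the "deploy" and "activate / set live" steps, assuming
// ipfs-http-client and a reachable daemon API. Names are illustrative,
// not part of any ember-cli-deploy plugin API.
const { create } = require('ipfs-http-client')

const ipfs = create({ url: 'http://127.0.0.1:5001' })

// "deploy": add the build output and pin the resulting CID
async function deploy(buildOutput) {
  const { cid } = await ipfs.add(buildOutput)
  await ipfs.pin.add(cid)
  return cid
}

// "activate / set live": point the node's IPNS name at the new CID
async function activate(cid) {
  return ipfs.name.publish(cid)
}
```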

Thanks so much for everyone's input here! I absolutely understand the IPFS world a lot better now, and have everything I need to continue.

For those interested - I'm going to expose the "internal" IPFS HTTP API from the remote VPS node via NGINX, and protect it with a secret key that only the Deploy machine knows.

This way the deployer machine can control activations & pins over HTTPS, and thus conform to Ember CLI's plugin pattern!
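A rough sketch of what that could look like from the deploy machine, assuming ipfs-http-client and an NGINX location that checks a header; the URL and header name below are placeholders that depend entirely on how the proxy is configured:

```js
// Sketch: talk to the remote node's HTTP API through the NGINX proxy,
// passing a shared secret in a header. URL and header name are placeholders.
const { create } = require('ipfs-http-client')

const remote = create({
  url: 'https://ipfs-api.example.com',
  headers: { 'X-Deploy-Key': process.env.DEPLOY_SECRET }
})

// e.g. pin a freshly deployed CID, then update the IPNS name to activate it
async function activateRemotely(cid) {
  await remote.pin.add(cid)
  return remote.name.publish(cid)
}
```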

I'll look into IPFS Cluster down the track to add failover / distribution to the pinning machine.

Thanks again - excited to make Ember CLI Apps simple to deploy to IPFS!

No, the activation is when you update the IPNS name. The server is just so youā€™re guaranteed at least one active node even if your computer is offline.

This is all interesting information, but I want to ask about a specific edge case: what happens if, for example, the node fails and comes back online - will the files still be pinned?

Yes - for example, you can add a file, stop your daemon with kill -9, and restart it; the pin is stored in your local repo, so you will still have your hash.

Thanks for the reply. Do you maybe have some resource that shows this specific edge case?

No, sorry, but I think you can install go-ipfs and test that yourself.
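If you would rather check it programmatically than by hand, one way is a small script (assuming ipfs-http-client against your local daemon) that you run once before and once after restarting the daemon, confirming the hash is still in the pinset:

```js
// Sketch: check whether a CID is still in the local pinset.
// Run before and after restarting the daemon to see that pins persist.
const { create } = require('ipfs-http-client')
const ipfs = create({ url: 'http://127.0.0.1:5001' })

async function isPinned(cid) {
  try {
    for await (const pin of ipfs.pin.ls({ paths: [cid] })) {
      if (pin.cid.toString() === cid) return true
    }
  } catch (err) {
    // pin.ls throws if the requested path is not pinned
  }
  return false
}
```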

Hello,
I hope it is OK to write in an old topic, since its title perfectly matches my intention.

I'm building an application that uses IPFS and saves and pins some chunks of data.

The current design is that every new client pins every chunk of data it knows about.

1. Do I understand correctly that this leads to huge data redundancy?

2. Can I somehow detect how many nodes have pinned a selected hash, and based on that decide whether I (as a client) should pin it or not (so the application can decrease the amount of disk space used by every user and still have decent redundancy)?

Thank you for any help :slight_smile:

@adamskrodzki

  1. This would lead to data redundancy, but it seems like that's what you want?

  2. There are a couple of different routes you could take here.

  • If you don't care who is hosting what content, and only want a certain number of nodes to be pinning each piece of content, consider checking out IPFS Cluster. You may find that it helps solve some of your IPFS pinset orchestration needs.

  • For a more manual approach, you can use the "ipfs dht findprovs" API call. This lets you see who is providing a given piece of content, and you can make decisions based on that information. Fair warning: this method would likely require a decent amount of work and edge-case handling to pull off correctly. A likely edge case is what happens when gateway nodes cache your content during retrieval. Since you can't tell whether those nodes have actually pinned your content or are just caching it, you may get false positives about how many people are actually pinning it.
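As a rough illustration of the manual approach, something like the following could work with js-ipfs / ipfs-http-client. It assumes dht.findProvs yields provider records with an id; the exact shape has changed between versions, and as noted above a provider is not necessarily a pinner:

```js
// Sketch: count how many peers advertise a CID via the DHT.
// Note: providers include caches/gateways, not only pinners, and the
// exact objects yielded by dht.findProvs differ between js-ipfs versions.
async function countProviders(ipfs, cid, { max = 20 } = {}) {
  const providers = new Set()
  for await (const prov of ipfs.dht.findProvs(cid)) {
    if (prov.id) providers.add(prov.id.toString())
    if (providers.size >= max) break
  }
  return providers.size
}

// Example policy: only pin locally if fewer than N providers were found
async function maybePin(ipfs, cid, minRedundancy = 3) {
  if ((await countProviders(ipfs, cid)) < minRedundancy) {
    await ipfs.pin.add(cid)
  }
}
```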


Thank you for the answer.
I will dive into the links.

I'm using js-ipfs; the application is expected to work fully in a browser.

It might be that I will also be running some Node.js "server nodes" on the side, but the application should not rely on them.

Where is the data from the "browser IPFS node" stored?
What are the storage limits? How can I influence them?

A simple PoC is currently up at:

https://ipfs.infura.io/ipfs/QmWEZagZbjxRrLSbmXp1EbDDEThHjcVj8Aj1ny8Sn7b6af/
