Direct manipulation of pins on cluster nodes

I’m curious about what the behavior would be on an IPFS cluster if you were to directly manipulate the pin set on an individual node.

There doesn’t seem to be anything stopping me from connecting to the IPFS API of an individual node and operating on it as though it were a standalone node. What happens if I add or remove pins? Say I have a cluster of A, B, and C, and I set up replication so that MyCID is pinned on all three nodes. Now I connect to C and unpin MyCID. Will the cluster manager see that and repin it? What if I add and pin MyOtherCID on node C? Will the cluster manager see that and try to unpin it, or will it just shrug and leave it alone?
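For concreteness, this is the kind of direct manipulation I mean, a sketch using go-ipfs’s global `--api` flag (the multiaddress and CIDs are placeholders):

```shell
# Talk to node C's IPFS daemon directly, bypassing the cluster peer
# (assumes C's IPFS API is reachable at this placeholder address).
ipfs --api /ip4/192.0.2.3/tcp/5001 pin rm MyCID

# Or pin something on C that the cluster knows nothing about:
ipfs --api /ip4/192.0.2.3/tcp/5001 pin add MyOtherCID
```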

So what I’m thinking of doing is using IPFS cluster in almost the opposite way from how it’s intended to be used. Say I have some GDPR requirement to remove a file within 24 hours. Assume this is a private IPFS network running on an intranet. I would like to set up GC to run every 24 hours, and instead of using the cluster manager to make sure something is pinned, I’d like to use it to enforce that something is not pinned, so that it gets GCed.

I’m also wondering what would happen if the local node added a recursive pin.

If you remove something that is “cluster-pinned”, cluster will try to repin it after a while (pins are re-checked every `pin_recover_interval`).
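For reference, `pin_recover_interval` lives in the `cluster` section of the cluster peer’s `service.json`. Something like this (the value here is illustrative, not the default):

```json
{
  "cluster": {
    "pin_recover_interval": "1h"
  }
}
```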

If you pin something directly on the node that is not “cluster-pinned”, nothing happens, it is just ignored.

So if I wanted to use it as a pin-removal tool, I could pin it and then remove the pin at the cluster level, and that would ensure that all nodes under its control had removed that pin?

Any idea about a recursive pin at the node level? Would that prevent the cluster from removing a pin?

I think the behavior I’m looking for is close to, but not exactly, what the cluster manager does (no surprise there, but I’ll take the quick hack if I can get it), and it seems like a good way to enforce data deletion requirements.

Assuming they have pinned the thing, they would remove the pin, yes. But the item should have been pinned from cluster in the first place. If you have something pinned outside of cluster, then pin it in cluster, then immediately remove that pin, cluster may not trigger a pin rm on IPFS (i.e. if the pin operation was still queued in cluster, it will just be cancelled).
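So for the deletion workflow above, the safe ordering would be to make sure the cluster pin has actually reached PINNED status everywhere before removing it. A sketch with `ipfs-cluster-ctl` (the CID is a placeholder):

```shell
# 1. Pin in cluster and wait until peers report PINNED, so the
#    operation is not just sitting in the queue when we remove it.
ipfs-cluster-ctl pin add --wait MyCID

# 2. Remove the cluster pin: each peer tells its IPFS daemon to unpin.
ipfs-cluster-ctl pin rm MyCID

# 3. Trigger GC so unpinned blocks are actually deleted (this
#    subcommand exists in recent cluster versions; check --help).
ipfs-cluster-ctl ipfs gc
```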

So it sounds like a clever use of ipfs-cluster is out. If I understand it correctly, a CID can have multiple pins. Is there any way to forcibly remove pins, i.e. remove an indirect pin or multiple pins?
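On the pin-types question: as far as I know you can list all the ways a CID is pinned, but you can’t remove an indirect pin directly; it only goes away when the recursive pin above it is removed. For example (CIDs are placeholders):

```shell
# Show how a specific CID is pinned (direct, recursive, or indirect).
ipfs pin ls --type=all MyBlockCID

# You can only remove direct/recursive pins; to drop an indirect pin
# you must unpin the recursive ancestor that references it.
ipfs pin rm MyRootCID
```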

I’ve also been wondering how IPFS handles missing data. Say you go to pin a directory and, for some reason, one block isn’t available. The CID on the directory guarantees the directory contents, but what’s never said is “if it’s there”. It seems like an odd state where you have verifiable data that might not be present. Do things fail? Do operations just make a best effort? If it’s best effort, how does the user know the data might be incomplete?

When IPFS tries to read a block, it tries to read it from anywhere in the network; the local datastore is just the preferred place to look first. Things fail when the block cannot be read from the network at all.
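One way to observe this (assuming a CID with an unretrievable block; `--timeout` is a global go-ipfs flag):

```shell
# Pinning fetches every block; if one block cannot be found anywhere
# on the network, the command blocks and eventually fails via the timeout.
ipfs --timeout=30s pin add SomePartialDirCID
```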

Interesting. I guess I should be interpreting a pin as “it’s protected from GC, and it’s available and complete”; I was thinking of it only as the former. That has interesting implications for things like DMCA takedowns or GDPR: you’d have to republish CIDs for anything that was redacted. I guess what I’m thinking of is some sort of in-between mutable and immutable, where you can say, “do your best, but if you can’t find it all, that’s cool”. It seems unfortunate that pinning a very large directory would fail if a single block can’t be located.