Error in executing ipfs-cluster-service state export

How do I fix this error, which I got while trying to export the IPFS Cluster state using ipfs-cluster-service state export?

error obtaining execution lock: lock /home/user/.ipfs-cluster/cluster.lock: someone else has the lock. If no other ipfs-cluster-service process is running, remove /home/user/.ipfs-cluster/cluster.lock, or make sure that the config folder is writable for the user running ipfs-cluster-service.

Stop ipfs-cluster-service first.
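Something along these lines, as a sketch (the lock path is the one from your error message; remove the lock only after confirming no other process is running):

```shell
#!/bin/sh
# Remove a stale cluster.lock only when no ipfs-cluster-service is running,
# then it is safe to export the state.
LOCK="$HOME/.ipfs-cluster/cluster.lock"

if pgrep -x ipfs-cluster-service >/dev/null 2>&1; then
  echo "ipfs-cluster-service is still running; stop it before exporting"
else
  rm -f "$LOCK"
  echo "lock cleared; safe to run: ipfs-cluster-service state export"
fi
```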

My bad. I am able to export the pinset now.
But how do I recover the data pinned in the private network and bring the peers back up, in case the .ipfs and .ipfs-cluster folders get deleted and the peer's identity is damaged?

Suppose I have two peers, and I have exported the state from peer1. Then peer1 gets damaged and all of its data is deleted.
So I reinitialized ipfs and ipfs-cluster-service on peer1 and added the swarm key and cluster secret.
Then, before starting ipfs-cluster, I imported the state and started peer1. Now both peers are up.
I stopped ipfs-cluster on peer2, imported the same pinset (the state I exported from peer1), and restarted ipfs-cluster.
Still, when I check the status of the files I added, I get this error on peer1:
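For reference, the recovery sequence on peer1 looked roughly like this (a sketch only: the pinset filename and file locations are assumptions, the exact flags may vary by ipfs-cluster version, and the steps only run when the binaries are installed):

```shell
#!/bin/sh
# Sketch of the peer1 recovery steps; pinset.json and the paths are
# illustrative, not the exact commands from this thread.
run_recovery() {
  ipfs init                                      # fresh IPFS repo (new identity)
  cp swarm.key "$HOME/.ipfs/swarm.key"           # private-network swarm key
  ipfs-cluster-service init                      # fresh cluster config (new identity)
  # ...set the shared "secret" in ~/.ipfs-cluster/service.json...
  ipfs-cluster-service state import pinset.json  # import BEFORE the first start
  ipfs-cluster-service daemon                    # start the cluster peer
}

if command -v ipfs-cluster-service >/dev/null 2>&1; then
  run_recovery
else
  echo "ipfs-cluster-service not installed; steps shown for reference"
fi
```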

12D3KooWC73Uy5M5cDKUsWzgMeJNn7NnxcgvE15JcHKr7RtA2c7K : REMOTE | 2021-03-29T21:46:53.986811246Z
12D3KooWCvtd7gnE6uvtVsQbxMhJnq4hEtBgS79JWikHZEJmGM4L : REMOTE | 2021-03-29T21:46:53.986811246Z
12D3KooWHMHmoWBvfs3DKdGvVw4XyVNiGB3uT3NX66ETfS4KjkDT : CLUSTER_ERROR: dial backoff | 2021-03-29T21:46:53.986811246Z

The 1st is the identity of peer2, the 2nd is the new identity of peer1, and the 3rd is the old identity of peer1.

When I check the status of the same CID on peer2, I get this error:

12D3KooWC73Uy5M5cDKUsWzgMeJNn7NnxcgvE15JcHKr7RtA2c7K : REMOTE | 2021-03-29T21:47:23.130461457Z
12D3KooWCvtd7gnE6uvtVsQbxMhJnq4hEtBgS79JWikHZEJmGM4L : REMOTE | 2021-03-29T21:47:23.130461457Z
12D3KooWHMHmoWBvfs3DKdGvVw4XyVNiGB3uT3NX66ETfS4KjkDT : CLUSTER_ERROR: failed to dial 12D3KooWHMHmoWBvfs3DKdGvVw4XyVNiGB3uT3NX66ETfS4KjkDT: all dials failed

  • [/ip4/104.197.65.69/tcp/9096] failed to negotiate security protocol: peer IDs don’t match
  • [/ip4/10.128.0.9/tcp/9096] dial tcp4 0.0.0.0:9096->10.128.0.9:9096: i/o timeout | 2021-03-29T21:47:23.130461457Z

Here too, the 1st is the identity of peer2, the 2nd is the new identity of peer1, and the 3rd is the old identity of peer1.

The identity of peer1 changed because I reinitialized ipfs and ipfs-cluster-service on peer1. After recovering peer1, I can successfully see the status as PINNED for those CIDs that are pinned on all nodes (replication factor -1, "allocations": []).

The error above appears only for those CIDs that are pinned on a single peer (replication factor 1).
In the pinset, the allocations array for those CIDs still contains the old identity of peer1 (12D3KooWHMHmoWBvfs3DKdGvVw4XyVNiGB3uT3NX66ETfS4KjkDT).

How do I recover those CIDs that have a replication factor greater than -1, if my peer identity changes?

This might be due to a bug that kept allocations on pins with replication_factor = -1. There is a fix (Fixes #1319: Status wrongly shows pins as REMOTE by hsanjuan · Pull Request #1331 · ipfs/ipfs-cluster · GitHub).

Your easiest solution is to edit the state you exported (which is just JSON), replace all mentions of the old peer identities with the new ones (or remove entries from the allocations array when the replication factor is -1), and re-import it.
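A small script along these lines could do the rewrite (a sketch only: the two peer IDs are the ones from this thread, and the field names "allocations" / "replication_factor_min" plus the one-pin-per-line export layout are assumptions to check against your exported file):

```python
import json

OLD_ID = "12D3KooWHMHmoWBvfs3DKdGvVw4XyVNiGB3uT3NX66ETfS4KjkDT"  # old peer1 identity
NEW_ID = "12D3KooWCvtd7gnE6uvtVsQbxMhJnq4hEtBgS79JWikHZEJmGM4L"  # new peer1 identity

def rewrite_pin(pin):
    """Swap the old peer ID for the new one in a pin's allocations;
    clear allocations entirely for replication-factor -1 pins."""
    if pin.get("replication_factor_min") == -1:
        pin["allocations"] = []  # rf -1 pins go to every peer anyway
    else:
        pin["allocations"] = [NEW_ID if a == OLD_ID else a
                              for a in pin.get("allocations", [])]
    return pin

def rewrite_state(path_in, path_out):
    # Assumes the export holds one JSON pin object per line; adapt this
    # if your version exports a single JSON array instead.
    with open(path_in) as src, open(path_out, "w") as dst:
        for line in src:
            if line.strip():
                dst.write(json.dumps(rewrite_pin(json.loads(line))) + "\n")
```

After rewriting, re-import the file with ipfs-cluster-service state import on a stopped peer, as before.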

Note that while it says REMOTE, I think those peers are pinning the items anyway; that is just a presentation bug (also fixed by the PR above).