How to run Garbage Collection in IPFS-Cluster through API?

Hi there!

Does anyone know how to forcibly run the garbage collector through the API in the cluster?

I didn’t find a related API endpoint in the documentation.

While it’s missing from the list, the instructions clearly explain how to find it:

ipfs-cluster-ctl --enc=json --debug ipfs gc
2020-09-27T13:27:06.210+0200    DEBUG   cluster-ctl     ipfs-cluster-ctl/main.go:143    debug level enabled
2020-09-27T13:27:06.211+0200    DEBUG   apiclient       client/request.go:62    POST: http://127.0.0.1:9094/ipfs/gc?local=false

So the endpoint is POST /ipfs/gc.
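
If you prefer to hit the API directly instead of going through ipfs-cluster-ctl, the same request with curl would be something like this (a sketch assuming the default REST API listen address 127.0.0.1:9094 and no authentication configured):

# Trigger GC via the cluster REST API (same request ipfs-cluster-ctl makes above)
curl -X POST "http://127.0.0.1:9094/ipfs/gc?local=false"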

Thank you @hector !

Let me please clarify something regarding the /ipfs/gc endpoint: will this command launch GC on a specific peer, or on all peers in the cluster?

It will trigger GC on all peers in the cluster, sequentially. When adding ?local=true, it will only trigger it on the IPFS peer associated with the cluster peer receiving the request.
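
For example, to GC only the local peer, something like this (again assuming the default 127.0.0.1:9094 API address):

# GC only the IPFS daemon attached to the cluster peer that receives the request
curl -X POST "http://127.0.0.1:9094/ipfs/gc?local=true"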

Thanks, I appreciate your help.

Hey @hector ,

This is not an issue for me, but just to let you know: the “local=false” parameter doesn’t work.

Regardless of whether I trigger it with “local=false” [http://127.0.0.1:9094/ipfs/gc?local=false] or without it [http://127.0.0.1:9094/ipfs/gc], in both scenarios it cleans the file from all peers in the cluster, including the peer on which I trigger the command.

You need local=true for it to clean in a single node, not local=false?

Hmm, when I pass local=true it doesn’t delete the file from any peer at all. I have a cluster of two peers.

ipfs version - 0.4.18
ipfs-cluster-ctl version - 0.13.0-next
ipfs-cluster-service version - 0.13.0-next

Your IPFS version is very old. You should upgrade asap.

Other than that, can you share logs (preferably starting the cluster daemon with --loglevel=debug) from around the time when you run with local=true?
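
Something like this should do (the exact flag placement might differ slightly between versions):

# Start the cluster peer with debug logging
ipfs-cluster-service --loglevel=debug daemon

# Then, from another terminal, trigger the single-peer GC again
curl -X POST "http://127.0.0.1:9094/ipfs/gc?local=true"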

This is a log:

I will update my IPFS version to the latest and let you know how it goes.

The first line in that log indicates that it has successfully called ipfs repo gc. If there had been any error, it would have logged it.
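
If you want to double-check outside of the cluster API, you could run GC directly against the IPFS daemon on that node and then look for the block locally, roughly like:

# Run GC directly on the IPFS daemon
ipfs repo gc

# Check whether the block is still present locally, without touching the network
ipfs --offline block stat <cid>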

“it doesn’t delete the file”

What do you mean? It seems that it is running GC just fine (at least in the way GC worked in your super old version).

If I trigger it with local=true, I’m still able to run ipfs cat <cid> and see the file’s content from both peers.
If I trigger it with local=false, then ipfs cat <cid> hangs for a very long time and eventually I get

Error: context canceled

If you trigger with local=true, does ipfs --offline cat <cid> work on that same node?

Yes, ipfs --offline cat <cid> works the same way on both nodes. I’m able to get the file after triggering local=true.

Also, I’ve updated IPFS to the latest version (0.7), and it still doesn’t work with this version as you described.

No, wait…

If I run ipfs --offline cat <cid> on the node from which I added the file, then I get

Error: merkledag: not found

but ipfs cat <cid> still returns the file for me.

I think there is some issue with this local=true/false feature, and maybe it would be better to open a ticket/issue so you can thoroughly test and investigate it.

Let me know if I can help in any way.

It cleaned it from the local node, but when you do ipfs cat it retrieves it from somewhere else: some other IPFS node that still has a copy.
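
In other words, to verify that GC actually removed the data from a given node, compare a local-only read with a normal one:

# Fails (e.g. "merkledag: not found") when the blocks are gone from this node
ipfs --offline cat <cid>

# Can still succeed as long as any reachable IPFS node keeps a copy
ipfs cat <cid>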
