Hi. I’m trying to maintain a node. I have only around 20 GB of data pinned, but my disk keeps filling up. Garbage collecting does not seem to actually remove data. How can I fix that, or at least clear up some space (or all the data) without restarting the node from scratch?
which datastore do you use? Flatfs or BadgerDS?
(see command below to check it - the Version: line shows which datastore is in use)
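Another way to check (assuming a go-ipfs install with the default config layout) is to read the datastore spec straight from the config:

```shell
# Prints the datastore spec; look for "type": "flatfs" or "badgerds".
ipfs config Datastore.Spec
```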
Do you have the --enable-gc flag set on the daemon on startup?
How is your storage configured for the RepoSize?
Use this command to check it:
$ ipfs repo stat -H
Note that GC will only clean up unpinned objects. So if you pin more data than the storage space you have configured, IPFS will still keep those pinned blocks.
I have restarted my node, so I cannot tell you exactly what I was running; the version string is (and probably was):
I don’t think I have changed much from the standard configuration, and I’m running with --enable-gc, although I wasn’t for a while.
I believe at some point my node was taking more space than StorageMax, which I had yet to increase, but I had never run GC. After that, GC somehow could not work properly. Supposedly this is for the most part data that was pinned and then unpinned.
Anyways, sorry for the confusing report, I believe I might have hit some tricky bug, but I’m also changing things a lot. I have reset my node and we’ll see how long I can run it again without issues.
I have now been running this new node for about a week, the volume of pinned data is supposed to be around 50GB, yet 76GB are being used.
The node is running with --enable-gc, and when I run ipfs repo gc after a while I get a bunch of “removed xxx” messages, but disk usage does not seem to go down.
How can I make a test to figure out if unpinned data actually gets deleted or not? And what can I try to do, maybe a different datastore?
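One way to test this (a sketch, assuming a go-ipfs CLI and the default repo at ~/.ipfs): add a throwaway unpinned block, run GC, and check whether it survives, comparing the size of the blocks folder on disk along the way.

```shell
# Add ~1 MB of random data WITHOUT pinning it ("1M" assumes GNU head).
CID=$(head -c 1M /dev/urandom | ipfs add -q --pin=false)

du -sh ~/.ipfs/blocks          # size before GC
ipfs repo gc > /dev/null
du -sh ~/.ipfs/blocks          # size after GC

# --offline stops ipfs from re-fetching the block from the network;
# if the stat fails, the unpinned block really was collected.
ipfs --offline block stat "$CID" || echo "block was garbage collected"
```

If the block survives GC while unpinned (and not referenced by MFS), that would point at a real bug rather than a configuration issue.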
This is my repo stat currently:
NumObjects: 315776
RepoSize:   82 GB
StorageMax: 75 GB
RepoPath:   /home/nic/.ipfs
Version:    fs-repo@12
The GC process does not only depend on --enable-gc. It is also very much dependent on the Datastore settings in your config: StorageMax, StorageGCWatermark, and GCPeriod. You have to tune these values to suit the load your gateway is facing.
For me, I had to allow GC to kick in at only 50% of the maximum allowed space; any higher value caused disk-full errors.
Check the blocks folder before and after; you will see it empty out after GC.
What storage are you using? A partitioned disk drive, an externally attached drive, or the same drive where your OS is installed? Windows or Linux?
what do you mean by “max allowed space”, is your StorageGCWatermark half of your StorageMax?
StorageMax is the maximum amount of space IPFS is allowed to use.
StorageGCWatermark is the percentage of StorageMax at which GC will trigger.
GCPeriod is the time interval at which the two rules above are checked repeatedly; if either of them is true, GC will be auto-triggered.
Remember, neither of the first two alone is enough to trigger GC: one of them plus the GCPeriod check must hold for GC to auto-trigger.
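These three knobs live under Datastore in the config and can be read or set with ipfs config (the values below are illustrative, not recommendations; restart the daemon for changes to take effect):

```shell
# Read the current values:
ipfs config Datastore.StorageMax
ipfs config Datastore.StorageGCWatermark
ipfs config Datastore.GCPeriod

# Set new values (StorageGCWatermark is a number, hence --json):
ipfs config Datastore.StorageMax "90GB"
ipfs config --json Datastore.StorageGCWatermark 65
ipfs config Datastore.GCPeriod "30m"
```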
I am using a 120 GB external drive for the IPFS datastore. My StorageMax is 100 GB, my StorageGCWatermark is 65%, and my GCPeriod is 0.3h. Slightly higher values for any of them caused my gateway to get knocked offline with storage-full errors.
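With those numbers, the watermark rule boils down to simple arithmetic (a simplified sketch of the rule, not the actual go-ipfs implementation):

```shell
# Every GCPeriod, GC triggers once the repo grows past
# StorageMax * StorageGCWatermark / 100.
STORAGE_MAX_GB=100
WATERMARK_PCT=65
THRESHOLD_GB=$((STORAGE_MAX_GB * WATERMARK_PCT / 100))
echo "GC triggers above ${THRESHOLD_GB} GB"   # prints: GC triggers above 65 GB
```

So on this setup GC starts cleaning once the repo passes 65 GB, well before the 100 GB hard limit, which leaves headroom for incoming blocks between GC runs.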
If your gateway is heavily loaded and popular enough, you will eventually have to tune these three values down further to prevent storage-full errors.
Hope it’s clear for you now.