Workload or benchmark for garbage collection

Hello.

I want to reduce the garbage collection time of IPFS (e.g., `ipfs repo gc`).
I have managed to shorten the garbage collection time, but I'm not sure how meaningful the improvement is in a systematic sense.
For what workloads, benchmarks, or scenarios on IPFS would it make sense to reduce garbage collection time?
In other words, how should I design the experiment to evaluate this effectively?
It would be great to use a blockchain such as Filecoin as a workload. Is there an effective way to do that?

I'm curious to hear your thoughts.
Thank you.

I don't think there are many corner-case optimization opportunities for GC.

If it's faster in one case, then it's very likely faster, or at least no worse, in all cases.

Just send us a PR. For your benchmark, this is probably good enough:

```shell
# Add unpinned random data to the repo, then time the GC.
dd if=/dev/urandom count=$(( 1024 * 1024 * 8 / 32 )) bs=$(( 32 * 1024 )) iflag=fullblock | ipfs add --pin=false -
time ipfs repo gc
```
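For scale, the `dd` arithmetic above works out to 8 GiB of unpinned random data, which gives the GC a meaningful amount of work. A quick sanity check of the numbers (this helper is just illustration, not part of the benchmark itself):

```shell
# Reproduce the dd parameters and compute the total data size.
blocks=$(( 1024 * 1024 * 8 / 32 ))   # count= argument: number of blocks
block_size=$(( 32 * 1024 ))          # bs= argument: 32 KiB per block
total_gib=$(( blocks * block_size / 1024 / 1024 / 1024 ))
echo "$blocks blocks of $block_size bytes = $total_gib GiB"
# prints: 262144 blocks of 32768 bytes = 8 GiB
```

Since the data is added with `--pin=false`, nothing protects it from collection, so `ipfs repo gc` has to sweep all of it.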