Go-ipfs badgerds memory usage and migrating back to file-based store

Hello,

We have an IPFS server in production with badgerds (private swarm), and we have noticed very high memory usage. In the systemd unit file we set MemoryMax to 7G, and the daemon gets killed by the OOM killer at least once a day.
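For reference, the memory cap described above corresponds to a systemd drop-in along these lines (the unit and file names are assumptions, not our actual paths):

```ini
# /etc/systemd/system/ipfs.service.d/memory.conf (assumed location)
[Service]
# Hard cap; the kernel OOM-kills the service when it exceeds this.
MemoryMax=7G
```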

We believe the problem is badgerds, so we would like to migrate back to the file-based store. Is there a way to do this? Also, is this memory usage normal? If not, what information would help diagnose the problem?

Thank you,

Mario Camou

One option would be to create a separate IPFS node on the same machine, initialized with the default datastore, then retrieve all of the content from the local badger node (for example, with ipfs pin add). Once the migration is complete, shut down and remove the badger node.
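The steps above could be sketched as a script like the following. This is only a sketch: the repo paths are assumptions, both daemons would need to run simultaneously on distinct ports, and it is shown in dry-run mode (each command is printed rather than executed) so you can review the plan first.

```shell
#!/bin/sh
# Dry-run sketch of migrating pinned content from a badgerds repo to a
# fresh repo using the default file-based (flatfs) datastore.
set -eu

OLD_REPO=/data/ipfs/repo          # existing badgerds repo (assumed path)
NEW_REPO=/data/ipfs/new-repo      # new repo with the default datastore (assumed path)

# Dry run: print each step. Change the body to "$@" to actually execute.
run() { printf '+ %s\n' "$*"; }

# 1. Initialize a second repo with the default (file-based) datastore.
run env IPFS_PATH="$NEW_REPO" ipfs init --profile=default-datastore

# 2. With both daemons running (configured on different ports), copy
#    every recursive pin from the old node to the new one.
run sh -c "IPFS_PATH=$OLD_REPO ipfs pin ls --type=recursive -q | while read -r cid; do IPFS_PATH=$NEW_REPO ipfs pin add \"\$cid\"; done"

# 3. Once everything is pinned on the new node, stop both daemons and
#    retire the badger repo.
run echo "shut down both daemons, then point IPFS_PATH at $NEW_REPO"
```

Because both nodes are on the same machine, `ipfs pin add` on the new node should fetch the blocks directly from the old node over the local swarm.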

I think badger’s memory usage depends on how large the database is, so it may be normal, but I’m not sure.

Which version of IPFS are you running? How large is your IPFS repository (ipfs repo stat --human)?

Thanks for your quick reply @leerspace. We were running 0.4.18; I upgraded to 0.4.19 yesterday to see if it made any difference, but we’re seeing the same behavior.

$ ipfs repo stat --human
NumObjects:       132289
RepoSize (MiB):   12410
StorageMax (MiB): 976562
RepoPath:         /data/ipfs/repo
Version:          fs-repo@7

htop currently shows 3996M RSS, 19.9G VIRT, 161M SHR. The last restart was 8h ago.

That doesn’t seem very large. It might be worth opening an issue on GitHub specifically for the high memory usage (I thought there was already one, but I can’t find it).