IPFS going out of memory

Hi there,

I’m running the IPFS daemon, and its memory footprint grows over time, roughly logarithmically. At the same time, the number of peers also seems to grow, so my intuition is that the daemon opens connections to every peer it finds, which eventually eats up all the RAM.

Is my intuition correct? How could I prevent that? Run a bigger instance and hope for a bounded number of peers? Am I missing something?

Cheers :slight_smile:

I am experiencing similar issues, but the good news is that improvements are on the near-term roadmap.

1 Like

I have the same issue. I upgraded my VPS from 500MB to 1GB RAM, but after a while it started swapping again.

Is there a way to configure my node to accept a maximum number of connections, if an unbounded peer count is indeed the cause of the growing memory use?
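(Editor's note: newer go-ipfs releases ship a connection manager that caps the peer count via the `Swarm.ConnMgr` config section. A sketch, assuming a build that includes it; the water-mark values here are illustrative, not recommendations:)

```shell
# Configure the basic connection manager (only in go-ipfs releases that
# include it). The daemon trims connections above HighWater down toward
# LowWater, ignoring connections younger than GracePeriod.
ipfs config --json Swarm.ConnMgr '{
  "Type": "basic",
  "LowWater": 100,
  "HighWater": 300,
  "GracePeriod": "30s"
}'

# Restart the daemon for the new limits to take effect.
```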

I assume there are many long-running nodes out there; how much memory do their host machines have? I’m happy to upgrade again for a few filecoins :grinning:

1 Like

@whyrusleeping maybe you can help out here? Thx!

For reference: https://github.com/ipfs/go-ipfs/issues/3532

I think right now the workaround is to restart the daemon regularly on low-memory hosts. However, memory-reduction features are in the works and close to release. I’m afraid I have lost track. Do you know, @kubuxu?

1 Like

The biggest issues around memory usage right now are that we don’t close connections, and that each connection uses yamux multiplexing (and yamux is quite the memory hog). We’re working on fixing both: patches for both problems are already written and should land in either the next release or the one after that.

One thing that’s really useful to us here is detailed descriptions of your workloads, along with stack dumps taken while memory usage is high (via curl 'localhost:5001/debug/pprof/goroutine?debug=2'). This helps us identify the parts of the codebase that need work.
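(Editor's note: for anyone collecting these, a sketch of capturing both goroutine and heap profiles to files — this assumes the daemon's API is listening on the default port 5001:)

```shell
# Capture debug profiles from a running go-ipfs daemon while memory is high.
# Attach the resulting files to a bug report.
curl -s 'http://localhost:5001/debug/pprof/goroutine?debug=2' > ipfs.goroutines.txt
curl -s 'http://localhost:5001/debug/pprof/heap' > ipfs.heap
```

The goroutine dump is plain text; the heap profile is binary and meant for `go tool pprof`.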

2 Likes

Ah, I just created a cron entry to kill and start the daemon every hour :slight_smile:
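(Editor's note: for reference, such a cron entry could look like the following — the service name and use of systemd are assumptions; adjust to how your daemon is managed:)

```shell
# Illustrative crontab entry: restart the IPFS daemon at the top of every
# hour. Assumes the daemon runs as a systemd unit named "ipfs".
0 * * * * systemctl restart ipfs
```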

I’ll let it run again until it starts swapping and then make the dump. Where do I send that dump?

I had the same problem with a k8s cluster. I have temporarily fixed it by rolling back to the previous version.

What version is that? I’m on 0.4.10.

0.4.9 seems better for CPU, and OK-ish for memory (it reaches 60% after only 12 hours).

So far it is OK!

1 Like

I’ve also noticed that 0.4.10 is a bigger strain on the system. I’m not running out of memory (yet), but CPU usage seems to have been a little better on 0.4.9.

2 Likes

@pors please file a GitHub issue on go-ipfs and provide a link to the dumps.