Preventing IPFS from filling all available storage

#1

Is there any built-in limiter, or manually definable config parameter, in IPFS that prevents it from exceeding a set amount or percentage of available storage space? E.g.:

(1) Users don’t want IPFS to take up so much SSD or HDD capacity that it slows down their PC.

(2) Suppose a user installs an IPFS node on a local PC with a 500 GB HDD, of which 100 GB is already in use.

- IPFS then takes up 200 GB, leaving 200 GB free (500 - 100 - 200 = 200).
- The user now needs to store 250 GB of files, so they need to reduce the space actively being used by IPFS.
#2

You can set `ipfs config Datastore.StorageMax SOME_SIZE` and then enable garbage collection by running the daemon with `ipfs daemon --enable-gc`. This will automatically run garbage collection when your node's repo size reaches `Datastore.StorageGCWatermark` percent of `SOME_SIZE` (90% by default).

Note: GC will delete all unpinned data. That is, all data except:

  1. Data added with `ipfs add` (unless `--pin=false` was specified).
  2. Data added with `ipfs pin add`.
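Putting the above together as concrete commands (the 50 GB cap and 80% watermark are illustrative values, not defaults; pick what fits your disk):

```shell
# Cap the repo at 50 GB (illustrative value)
ipfs config Datastore.StorageMax 50GB

# Optionally trigger GC earlier, at 80% of StorageMax instead of the default 90%
# (--json is needed because the watermark is a number, not a string)
ipfs config --json Datastore.StorageGCWatermark 80

# Start the daemon with automatic garbage collection enabled
ipfs daemon --enable-gc
```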
#3

Thank you @stebalien. I see in my settings:

```json
"StorageGCWatermark": 90,
"StorageMax": "10GB"
```

Does everyone get `StorageMax` set to 10GB when they install an IPFS node, or do people with larger storage get a higher `StorageMax` amount set?

#4

Everyone gets it set to 10GB by default, but it’s configurable.
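To check and raise the limit yourself (the 100 GB value below is just an example):

```shell
# Show the current limit
ipfs config Datastore.StorageMax

# Raise it, e.g. to 100 GB, then restart the daemon for it to take effect
ipfs config Datastore.StorageMax 100GB
```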
