Disk full running a node

Hi. I’m trying to maintain a node. I only have around 20 GB of data pinned, but my disk keeps filling up. Garbage collection does not seem to actually remove data. How can I fix that? Or how can I at least clear up some space, or even all the data, without restarting the node from scratch?

Hey @nlwo,

which datastore do you use? Flatfs or BadgerDS?

(You can check with the command below; note that the Version: line shows the repo version, while the datastore type is in the Datastore.Spec section of your config.)

Do you have the --enable-gc flag set on the daemon at startup?

How is your StorageMax configured relative to your RepoSize?

Use this command to check it:

$ ipfs repo stat -H

Note that GC will only clean up unpinned objects. So if you pin more data than the storage space you have configured, IPFS will still keep those pinned elements.
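For example, you can see what is actually pinned (and therefore immune to GC) like this; a sketch assuming a running daemon, with `<cid>` as a placeholder:

```shell
# List the root CIDs of everything pinned recursively
ipfs pin ls --type=recursive

# Check the pin status of one specific CID (<cid> is a placeholder)
ipfs pin ls --type=all <cid>
```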

Here’s my ticket about that: IPFS should stop accepting pin requests/new files after hitting the StorageMax limit · Issue #7875 · ipfs/go-ipfs · GitHub

I have restarted my node, so I cannot tell you exactly what I was running. The version string is, and probably was, fs-repo@12.

I don’t think I have changed much from the standard configuration, and I’m running with --enable-gc, although I wasn’t for a while.

I believe at some point my node was taking more space than StorageMax, which I had yet to increase, but I had never run GC. After that, GC somehow could not work properly. Supposedly this is for the most part data that was pinned and then unpinned.

Anyway, sorry for the confusing report. I believe I might have hit some tricky bug, but I’m also changing things a lot. I have reset my node, and we’ll see how long I can run it without issues this time.

I have now been running this new node for about a week; the volume of pinned data should be around 50 GB, yet 76 GB are being used.

The node is running with --enable-gc, and when I run ipfs repo gc after a while I get a bunch of “removed xxx” messages, but disk usage does not seem to go down.

How can I run a test to figure out whether unpinned data actually gets deleted? And what can I try? Maybe a different datastore?
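One way to test this, sketched below; it assumes a running daemon started with --enable-gc, and the file size, temp path, and default repo path are illustrative:

```shell
# Add a throwaway file WITHOUT pinning it, so GC is allowed to remove it
head -c 10M /dev/urandom > /tmp/gc-test.bin
CID=$(ipfs add -q --pin=false /tmp/gc-test.bin)

# Snapshot the repo size, run GC, then compare
ipfs repo stat -H
ipfs repo gc
ipfs repo stat -H

# Check whether the block is still in the local store
# (note: this may try to fetch it from the network if it was removed)
ipfs block stat "$CID"

# Cross-check actual on-disk usage of the default repo path
du -sh ~/.ipfs/blocks
```

If GC works, the RepoSize reported after `ipfs repo gc` should drop and the block should no longer be present locally.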

This is my repo stat currently:

NumObjects: 315776
RepoSize:   82 GB
StorageMax: 75 GB
RepoPath:   /home/nic/.ipfs
Version:    fs-repo@12

@nlw0
The GC process does not depend only on --enable-gc. It also depends very much on StorageMax, StorageGCWatermark, and GCPeriod.
You have to tune these values to suit the load your gateway is facing.
For me, I had to allow GC to use only 50% of the maximum available space; any higher value caused disk-full errors.

Check the blocks folder before and after GC. You will see how much emptier it is after GC runs.

What storage are you using? A partitioned disk, an externally attached drive, or the same drive your OS is installed on? Windows or Linux?

What do you mean by “max allowed space”? Is your StorageGCWatermark half of your StorageMax?

StorageMax is the maximum amount of disk space IPFS is allowed to use.
StorageGCWatermark is the percentage of StorageMax at which GC will trigger.
GCPeriod is the interval at which the two conditions above are checked, repeatedly; if either of them holds at check time, GC is auto-triggered.

Remember, neither of the first two alone is enough to trigger GC. One of them must hold when the GCPeriod check runs for GC to auto-trigger.

I am using a 120 GB external drive for the IPFS datastore. My StorageMax is 100 GB, my StorageGCWatermark is 65%, and my GCPeriod is 0.3h. Even slightly higher values cause my gateway to get knocked offline by storage-full errors.
If your gateway is heavily loaded and popular enough, you will eventually have to tune these three values further down to prevent storage-full errors.
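For reference, values like these can be set with ipfs config; the numbers below are just my examples, not recommendations:

```shell
# Set the GC-related Datastore values (adjust to your own disk)
ipfs config Datastore.StorageMax 100GB
ipfs config --json Datastore.StorageGCWatermark 65
ipfs config Datastore.GCPeriod 0.3h

# Restart the daemon with GC enabled for the changes to take effect
ipfs daemon --enable-gc
```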

Hope it’s clear for you now.

I’m also observing lots of peers hitting my node recently, pushing it to 100% CPU and filling up the disk after some hours. This started around June 26th. Currently I’m connected to ~2500-3000 peers. Garbage collection is active but doesn’t kick in; I’m trying a lower GC watermark now.

So, maybe I’m wrong but to me this looks like a spam attack…


It could be.

Could you share information on which version of IPFS you are running and your configuration?

I’m running 0.12.2 (upgraded last night). I’m running the gateway behind NGINX for SSL. Parsing the webserver logs is revealing: I’ve been heavily bombarded with /ipfs/bafy… requests for some hours now, with no particular IP, UA, or geolocation pattern (according to GoAccess, mostly Asian though). 99% of them are reported as 499 (client closed request).
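For anyone who wants to do the same without GoAccess, traffic like this can be summarized straight from the access log. A sketch, assuming the default NGINX combined log format and a hypothetical log path:

```shell
# Path is an assumption -- adjust to your NGINX setup
LOG=/var/log/nginx/access.log

# Count /ipfs/ requests by HTTP status code
# (combined log format: field 7 = request path, field 9 = status)
awk '$7 ~ /^\/ipfs\// {print $9}' "$LOG" | sort | uniq -c | sort -rn

# Top requested CIDs by hit count
awk '$7 ~ /^\/ipfs\// {print $7}' "$LOG" | sort | uniq -c | sort -rn | head
```

A pile of 499s at the top of the first report matches the “client closed request” pattern described above.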


I added some public peers to my config some months ago. Here is the config:

{
  "API": {
    "HTTPHeaders": {}
  },
  "Addresses": {
    "API": "/ip4/127.0.0.1/tcp/5051",
    "Announce": [],
    "Gateway": "/ip4/127.0.0.1/tcp/8080",
    "NoAnnounce": [
      "/ip4/10.0.0.0/ipcidr/8",
      "/ip4/100.64.0.0/ipcidr/10",
      "/ip4/169.254.0.0/ipcidr/16",
      "/ip4/172.16.0.0/ipcidr/12",
      "/ip4/192.0.0.0/ipcidr/24",
      "/ip4/192.0.2.0/ipcidr/24",
      "/ip4/192.168.0.0/ipcidr/16",
      "/ip4/198.18.0.0/ipcidr/15",
      "/ip4/198.51.100.0/ipcidr/24",
      "/ip4/203.0.113.0/ipcidr/24",
      "/ip4/240.0.0.0/ipcidr/4",
      "/ip6/100::/ipcidr/64",
      "/ip6/2001:2::/ipcidr/48",
      "/ip6/2001:db8::/ipcidr/32",
      "/ip6/fc00::/ipcidr/7",
      "/ip6/fe80::/ipcidr/10"
    ],
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip6/::/tcp/4001",
      "/ip4/0.0.0.0/udp/4001/quic",
      "/ip6/::/udp/4001/quic",
      "/ip4/127.0.0.1/tcp/8081/ws",
      "/ip6/::1/tcp/8081/ws"
    ]
  },
  "AutoNAT": {},
  "Bootstrap": [
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
    "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/ip4/104.131.131.82/udp/4001/quic/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN"
  ],
  "DNS": {
    "Resolvers": {}
  },
  "Datastore": {
    "BloomFilterSize": 0,
    "GCPeriod": "1h",
    "HashOnRead": false,
    "Spec": {
      "mounts": [
        {
          "child": {
            "path": "blocks",
            "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
            "sync": true,
            "type": "flatfs"
          },
          "mountpoint": "/blocks",
          "prefix": "flatfs.datastore",
          "type": "measure"
        },
        {
          "child": {
            "compression": "none",
            "path": "datastore",
            "type": "levelds"
          },
          "mountpoint": "/",
          "prefix": "leveldb.datastore",
          "type": "measure"
        }
      ],
      "type": "mount"
    },
    "StorageGCWatermark": 50,
    "StorageMax": "10GB"
  },
  "Discovery": {
    "MDNS": {
      "Enabled": false,
      "Interval": 10
    }
  },
  "Experimental": {
    "AcceleratedDHTClient": false,
    "FilestoreEnabled": false,
    "GraphsyncEnabled": false,
    "Libp2pStreamMounting": false,
    "P2pHttpProxy": false,
    "ShardingEnabled": false,
    "StrategicProviding": false,
    "UrlstoreEnabled": false
  },
  "Gateway": {
    "APICommands": [],
    "HTTPHeaders": {
      "Access-Control-Allow-Headers": [
        "X-Requested-With",
        "Range",
        "User-Agent"
      ],
      "Access-Control-Allow-Methods": [
        "GET"
      ],
      "Access-Control-Allow-Origin": [
        "*"
      ]
    },
    "NoDNSLink": false,
    "NoFetch": false,
    "PathPrefixes": [],
    "PublicGateways": null,
    "RootRedirect": "",
    "Writable": false
  },
  "Identity": {
    "PeerID": "12D3KooWAYhKPvH8JrBT7oTiY69LY2QksfvPH2sY8ECvR1KD9ZTo"
  },
  "Internal": {},
  "Ipns": {
    "RecordLifetime": "",
    "RepublishPeriod": "",
    "ResolveCacheSize": 128
  },
  "Migration": {
    "DownloadSources": [],
    "Keep": ""
  },
  "Mounts": {
    "FuseAllowOther": false,
    "IPFS": "/ipfs",
    "IPNS": "/ipns"
  },
  "Peering": {
    "Peers": [
      {
        "Addrs": [
          "/ip6/2606:4700:60::6/tcp/4009",
          "/ip4/172.65.0.13/tcp/4009"
        ],
        "ID": "QmcfgsJsMtx6qJb74akCw1M24X1zFwgGo11h1cuhwQjtJP"
      },
      {
        "Addrs": [
          "/dnsaddr/fra1-1.hostnodes.pinata.cloud"
        ],
        "ID": "QmWaik1eJcGHq1ybTWe7sezRfqKNcDRNkeBaLnGwQJz1Cj"
      },
      {
        "Addrs": [
          "/dnsaddr/fra1-2.hostnodes.pinata.cloud"
        ],
        "ID": "QmNfpLrQQZr5Ns9FAJKpyzgnDL2GgC6xBug1yUZozKFgu4"
      },
      {
        "Addrs": [
          "/dnsaddr/fra1-3.hostnodes.pinata.cloud"
        ],
        "ID": "QmPo1ygpngghu5it8u4Mr3ym6SEU2Wp2wA66Z91Y1S1g29"
      },
      {
        "Addrs": [
          "/dnsaddr/nyc1-1.hostnodes.pinata.cloud"
        ],
        "ID": "QmRjLSisUCHVpFa5ELVvX3qVPfdxajxWJEHs9kN3EcxAW6"
      },
      {
        "Addrs": [
          "/dnsaddr/nyc1-2.hostnodes.pinata.cloud"
        ],
        "ID": "QmPySsdmbczdZYBpbi2oq2WMJ8ErbfxtkG8Mo192UHkfGP"
      },
      {
        "Addrs": [
          "/dnsaddr/nyc1-3.hostnodes.pinata.cloud"
        ],
        "ID": "QmSarArpxemsPESa6FNkmuu9iSE1QWqPX2R3Aw6f5jq4D5"
      },
      {
        "Addrs": [
          "/dns/cluster0.fsn.dwebops.pub"
        ],
        "ID": "QmUEMvxS2e7iDrereVYc5SWPauXPyNwxcy9BXZrC1QTcHE"
      },
      {
        "Addrs": [
          "/dns/cluster1.fsn.dwebops.pub"
        ],
        "ID": "QmNSYxZAiJHeLdkBg38roksAR9So7Y5eojks1yjEcUtZ7i"
      },
      {
        "Addrs": [
          "/dns/cluster2.fsn.dwebops.pub"
        ],
        "ID": "QmUd6zHcbkbcs7SMxwLs48qZVX3vpcM8errYS7xEczwRMA"
      },
      {
        "Addrs": [
          "/dns/cluster3.fsn.dwebops.pub"
        ],
        "ID": "QmbVWZQhCGrS7DhgLqWbgvdmKN7JueKCREVanfnVpgyq8x"
      },
      {
        "Addrs": [
          "/dns/cluster4.fsn.dwebops.pub"
        ],
        "ID": "QmdnXwLrC8p1ueiq2Qya8joNvk3TVVDAut7PrikmZwubtR"
      },
      {
        "Addrs": [
          "/dns4/nft-storage-am6.nft.dwebops.net/tcp/18402"
        ],
        "ID": "12D3KooWCRscMgHgEo3ojm8ovzheydpvTEqsDtq7Vby38cMHrYjt"
      },
      {
        "Addrs": [
          "/dns4/nft-storage-dc13.nft.dwebops.net/tcp/18402"
        ],
        "ID": "12D3KooWQtpvNvUYFzAo1cRYkydgk15JrMSHp6B6oujqgYSnvsVm"
      },
      {
        "Addrs": [
          "/dns4/nft-storage-sv15.nft.dwebops.net/tcp/18402"
        ],
        "ID": "12D3KooWQcgCwNCTYkyLXXQSZuL5ry1TzpM8PRe9dKddfsk1BxXZ"
      },
      {
        "Addrs": [
          "/ip4/104.210.43.77"
        ],
        "ID": "QmR69wtWUMm1TWnmuD4JqC1TWLZcc8iR2KrTenfZZbiztd"
      }
    ]
  },
  "Pinning": {
    "RemoteServices": {}
  },
  "Plugins": {
    "Plugins": null
  },
  "Provider": {
    "Strategy": ""
  },
  "Pubsub": {
    "DisableSigning": false,
    "Enabled": true,
    "Router": ""
  },
  "Reprovider": {
    "Interval": "12h",
    "Strategy": "all"
  },
  "Routing": {
    "Type": "dht"
  },
  "Swarm": {
    "AddrFilters": [
      "/ip4/10.0.0.0/ipcidr/8",
      "/ip4/100.64.0.0/ipcidr/10",
      "/ip4/169.254.0.0/ipcidr/16",
      "/ip4/172.16.0.0/ipcidr/12",
      "/ip4/192.0.0.0/ipcidr/24",
      "/ip4/192.0.2.0/ipcidr/24",
      "/ip4/192.168.0.0/ipcidr/16",
      "/ip4/198.18.0.0/ipcidr/15",
      "/ip4/198.51.100.0/ipcidr/24",
      "/ip4/203.0.113.0/ipcidr/24",
      "/ip4/240.0.0.0/ipcidr/4",
      "/ip6/100::/ipcidr/64",
      "/ip6/2001:2::/ipcidr/48",
      "/ip6/2001:db8::/ipcidr/32",
      "/ip6/fc00::/ipcidr/7",
      "/ip6/fe80::/ipcidr/10"
    ],
    "ConnMgr": {
      "GracePeriod": "20s",
      "HighWater": 900,
      "LowWater": 600,
      "Type": "basic"
    },
    "DisableBandwidthMetrics": false,
    "DisableNatPortMap": true,
    "EnableAutoRelay": false,
    "EnableRelayHop": false,
    "Transports": {
      "Multiplexers": {},
      "Network": {},
      "Security": {}
    }
  }
}

If you’re running a public full gateway (i.e., not restricted to CIDs/keys you control), extremely high usage is to be expected and completely normal. It’s not necessarily spam; it could be someone, or several people, pointing to your gateway to retrieve their IPFS data over HTTP.


Well, agreed on that, but those CIDs actually look highly suspicious for a public gateway (if it were normal traffic I’d expect to see some CIDv0s among them). Plus, as said, the clients all close their connections before the gateway responds. I’ve been running this machine since February, and this morning traffic suddenly skyrocketed. It could be that someone published this gateway’s hostname on some public list, but to me the traffic looks highly suspicious.

You could always check what data is inside the CIDs. I’m betting the same as you: your gateway got published somewhere, and an application is now using it regularly as a default (hence the consistency in CID type).
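A couple of ways to peek at a CID’s content without trusting any filename; a sketch assuming a running daemon, with `<cid>` as a placeholder:

```shell
# Pipe the first bytes through file(1) to guess the content type
ipfs cat <cid> | head -c 262144 | file -

# Inspect the DAG: total size and number of blocks
ipfs dag stat <cid>

# If it is a directory, list its entries
ipfs ls <cid>
```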

I checked ~10 requests; they all lead to binary content, and nearly all of them start with bafybei (base32 dag-pb). Wondering if that’s maybe the Ceramic network :thinking: I’m not using this node for Ceramic (I did so on the same instance a year ago, but this is a new instance).

So this is interesting: I checked one of the URLs attached to one of the requests. It seems this website might be using your gateway as one of its hardcoded gateways: https://www.dandanzan10.top. In short, your high volume of binary traffic is likely video content.

Yeah, https://www.dandanzan10.top/ has the following IPFS gateways hardcoded in its JS:

ipfslist = [
  'ipfs.wetez.io',
  'ipfs.getsplice.io',
  'ipfs.2read.net',
  '2eff.lukso.dev',
  'artion11.mypinata.cloud',
  'artion8.mypinata.cloud',
  'forart.mypinata.cloud',
  'odineum.mypinata.cloud',
  'ipfs.pixura.io',
  'craft-network.mypinata.cloud',
  'artion14.mypinata.cloud',
  'artion12.mypinata.cloud',
  'rarible.mypinata.cloud',
  'ipfs.plantyourtree.com',
  'ipfs.ternoa.dev',
  'dashboard.mypinata.cloud',
  'artion6.mypinata.cloud',
  'ravencoinipfs-gateway.com',
  'artion1.mypinata.cloud',
  'ipfs.thirdweb.com',
  'artion13.mypinata.cloud',
  'artion2.mypinata.cloud',
  'artion.mypinata.cloud',
  'nftscan.mypinata.cloud',
  'artion10.mypinata.cloud',
  'ipfs.flair.finance',
  'satellite.mypinata.cloud',
  'ipfs.forta.network',
  'ipfs.remixproject.org',
  'aspenlabs-nft.mypinata.cloud',
  'ipfs.filebase.io',
  'ipfs.fleek.co',
  'rmrk.mypinata.cloud',
  'artion9.mypinata.cloud',
  'anzh.mypinata.cloud',
  'primedao.mypinata.cloud',
  'artion4.mypinata.cloud',
  'akkoros.mypinata.cloud',
  'c4rex.co',
  'ipfs.chainsafe.io',
  'testlaunchmynft.mypinata.cloud',
  'artion5.mypinata.cloud',
  'ameegos.mypinata.cloud',
  'etczunks.mypinata.cloud',
  'artion3.mypinata.cloud',
  'gateway2.ipfscdn.io',
  'gateway.pinata.cloud',
  'gateway.ipfs.io',
  'artion7.mypinata.cloud',
  'ipfs.eth.aragon.network',
  'metaclubbers.mypinata.cloud',
  'gateway.ipfscdn.io',
  'crustipfs.xyz',
  'nftstorage.link',
  'ipfs.schollz.com',
  'gw.crustapps.net',
  'ipfs.infura.io',
  'via0.com',
  'ipfs.best-practice.se',
  'cloudflare-ipfs.com',
  'pollinations.ai',
  '4everland.io',
  'ipfs-gw.dkskcloud.com',
  'ipfs-gateway.cloud',
  'ipfs-gw.decloud.foundation',
  'infura-ipfs.io',
  'cf-ipfs.com',
  'dweb.link',
  'crustwebsites.net',
  'ipfs.io',
]

Do we have any public block lists for URLs that abuse gateways like this?


Uh wow, my machine is no. 2 on that list :joy:
