Ah, apparently yes. There is a lot of rewiring needed to take advantage of it, but at least it is there now.
Do you have any general recommendations regarding the sizing of the machines? Or any graphs showing how the cluster performs in CPU/memory usage with 1k/10k/100k/1000k pins?
I’m afraid I don’t have relevant graphs. Normally, the heavy part of the IPFS Cluster + IPFS pair is the IPFS daemon. The main problem would be memory spikes upon certain operations (anything that triggers a pin-ls in cluster or in ipfs is the main thing I can think of).
Now, if you have 300k pins, a 1.5GB RAM spike is a problem, and you only have 2 vCPUs, you are probably using something quite small. For our large storage cluster (86k pins) we have machines with 64GB of RAM, which is overkill (around 8GB used), and 12 vCPUs. You should be fine with 16GB and 4 CPUs for standard configurations of cluster + ipfs. But then your disk speed, the ipfs datastore you choose, the DHT mode, the re-providing settings and the connection manager settings will all affect how much memory the IPFS daemon takes.
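As a rough sketch of where those knobs live in the go-ipfs config (the field names are real config options, but the values below are only illustrative, not tuned recommendations; the datastore itself is chosen in `Datastore.Spec`):

```json
{
  "Routing": {
    "Type": "dhtclient"
  },
  "Reprovider": {
    "Interval": "12h",
    "Strategy": "pinned"
  },
  "Swarm": {
    "ConnMgr": {
      "Type": "basic",
      "LowWater": 100,
      "HighWater": 200,
      "GracePeriod": "30s"
    }
  }
}
```

Running as a DHT client only, re-providing just pinned content and keeping the connection manager watermarks low are the usual ways to trade some network visibility for a smaller memory footprint.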
Setting this in the badger options helps with low-memory environments (at the cost of speed, I guess):
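The exact snippet isn’t quoted here, but assuming it refers to the `badger_options` section of ipfs-cluster-service’s `service.json`, it would look roughly like this (0 selects Badger’s FileIO loading mode, i.e. tables and value logs are read with plain file I/O instead of being memory-mapped):

```json
{
  "datastore": {
    "badger": {
      "badger_options": {
        "table_loading_mode": 0,
        "value_log_loading_mode": 0
      }
    }
  }
}
```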