I read somewhere that Badger has high memory usage. Since we are running on low-power, low-memory ARM edge devices, this might be a problem. However, I will roll it out on a few cloud nodes and a few developer devices and see how it goes. Thankfully we have infrastructure to painlessly roll out new ipfs versions.
I think it is best to stick with the defaults. We are using a pretty niche (for now) technology, so we at least want to run the same settings other people are using. We have just adjusted our chunking algorithm so that no block ever exceeds 4 MB.
It was exceedingly simple. We generate data on multiple devices. Due to some strange circumstances and an application-level bug, one of the devices created an IPFS DAG node larger than 4 MB, and we were not able to fetch that hash on the other devices.
So the whole system got stuck, and it took me a while to figure out what was going on.