When I add a string or a JSON object larger than 4096 characters and then try to retrieve it, the retrieved string is cut off at 4096 characters.
Has anything changed regarding the maximum size of a string or JSON object that can be stored on IPFS?
This used to work for me in the past, but not anymore. I did recently upgrade to the new ipfshttpclient for Python 3.
You should be able to add arbitrarily large objects, since they get split into DAG structures, but I thought there were limits on the size of each individual DAG block. I think DAG nodes were limited to around 2MB each, but that may not be the latest. Ideally, it makes sense to use APIs that automatically take care of chunking and creating the appropriate DAGs.
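As a rough local sketch of the idea (this is illustrative Python, not go-ipfs's actual chunker or DAG builder; the 256 KiB figure matches go-ipfs's default `size-262144` chunker, but treat the exact number as an assumption):

```python
# Toy illustration of fixed-size chunking: large data is split into
# bounded blocks, and the blocks are linked back together (in IPFS,
# via a DAG; here, by simple concatenation).

DEFAULT_CHUNK = 256 * 1024  # assumed default block size (256 KiB)

def chunk(data: bytes, size: int = DEFAULT_CHUNK) -> list[bytes]:
    """Split data into blocks of at most `size` bytes."""
    return [data[i:i + size] for i in range(0, len(data), size)]

payload = b"x" * (600 * 1024)      # 600 KiB payload
blocks = chunk(payload)
print(len(blocks))                 # 3 blocks: 256 + 256 + 88 KiB
print(b"".join(blocks) == payload) # reassembly is lossless
```

The point is that no single block exceeds the size limit, while the full payload can be arbitrarily large; a chunking-aware API does this split and reassembly for you.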
Does this look related to this open issue with ipfshttpclient? It seems similar to me.
It looks like py-ipfs-http-client only supports go-ipfs versions up to 0.4.19 (with newer versions having "compatiblity problems" [sic]). So, if you're using the latest version (latest is 0.4.22), I wonder if this could be one of the compatibility problems.
Hm. There was an issue in go-ipfs < 0.4.19 where files could be truncated on upload due to a bug in the Go HTTP library. However, I'm not aware of any issues in go-ipfs >= 0.4.19 that could have caused this.
There was an issue adding multiple files, but that was fixed in 0.4.21 and shouldn't apply to single files.
It seems there is another open issue regarding this problem; it mentions setting the chunk size while connecting to the IPFS node.
I have tried setting the chunk size to a larger number, and this does seem to work.
However, I'm not sure whether setting it to a very large number could have any side effects.