Need help organizing approach for IPFS I/O

I have a pretty simple goal, but I’m having trouble figuring out what to do here.

I have a simple Python app that uses JSON files. I want the user to be able to download those files from a directory I’ve created using Pinata:

https://gateway.pinata.cloud/ipfs/QmPJasCJD41xjDhYgSgjpU2oGN722C9pPNWouDHQNsG7Fd

then at the end of the session, upload them back to IPFS.

Things I’ve tried so far:

I can open a JSON file in Python like this:

import json
import urllib.request

url = "https://ipfs.io/ipfs/QmPJasCJD41xjDhYgSgjpU2oGN722C9pPNWouDHQNsG7Fd/testcards(chinese%20elements).json"

with urllib.request.urlopen(url) as response:
    data = json.loads(response.read())

But I can’t figure out how to get a list of the files in the directory and then build individual URLs so the user can pick one to load. The Pinata-hosted URLs don’t seem to be readable by Python either.
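
The closest thing I’ve found is the ls endpoint on a local IPFS daemon (kubo), which should return the directory’s links. Something like this ought to work (an untested sketch; it assumes a daemon running on the default port 5001):

import json
import urllib.parse
import urllib.request

CID = "QmPJasCJD41xjDhYgSgjpU2oGN722C9pPNWouDHQNsG7Fd"

# kubo's HTTP API expects POST, even for read-only calls like ls
req = urllib.request.Request(f"http://127.0.0.1:5001/api/v0/ls?arg={CID}", method="POST")
with urllib.request.urlopen(req) as resp:
    listing = json.loads(resp.read())

# each link is one file in the directory; build a gateway URL per file
for link in listing["Objects"][0]["Links"]:
    name = link["Name"]
    print(name, f"https://gateway.pinata.cloud/ipfs/{CID}/{urllib.parse.quote(name)}")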

So that’s my input problem.

For output, I’ve got my Python script successfully sending POST requests to the Pinata API. The problem is that the upload doesn’t add to my original directory, and I can’t find anything in the Pinata documentation that explains how to do that.
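
From what I understand, an existing IPFS directory can’t really be appended to, because its CID is a hash of its contents; re-uploading the whole set of files as one folder would produce a new directory CID. Here’s a rough sketch of that with the requests library against Pinata’s pinFileToIPFS endpoint (the JWT, folder name, and filenames are placeholders):

import requests  # third-party: pip install requests

API_URL = "https://api.pinata.cloud/pinning/pinFileToIPFS"
HEADERS = {"Authorization": "Bearer YOUR_PINATA_JWT"}  # placeholder credential

local_files = ["cards_a.json", "cards_b.json"]  # placeholder filenames

# giving every part the same top-level folder in its path should make
# Pinata pin them as one directory and return that directory's CID
files = [("file", (f"testcards/{name}", open(name, "rb"))) for name in local_files]

resp = requests.post(API_URL, headers=HEADERS, files=files)
print(resp.json()["IpfsHash"])  # the new directory CID

If that’s right, though, I’d still have to get the new CID to my users somehow.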

I’m also a little confused about how my users will be able to find and browse the files.
Thanks for your help and patience,

I think you have to wrap your files and directory to keep the metadata, using the Files API:

ipfs.files.add(....)

Never used it myself, though.
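
In Python, going straight at the daemon’s HTTP API, it would look roughly like this (again, a sketch I haven’t run; /api/v0/add with wrap-with-directory=true is what preserves the filename):

import json
import requests  # third-party: pip install requests

# wrap-with-directory=true wraps the file in a directory node,
# which keeps the original filename as metadata
resp = requests.post(
    "http://127.0.0.1:5001/api/v0/add?wrap-with-directory=true",
    files={"file": ("cards.json", open("cards.json", "rb"))},
)

# the response is one JSON object per line; the last line is the
# wrapping directory itself (its Name field is empty)
for line in resp.text.strip().splitlines():
    print(json.loads(line))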

Hi, if you’re using JSON files, why not use IPLD? https://medium.com/towardsblockchain/understanding-ipfs-in-depth-2-6-what-is-interplanetary-linked-data-ipld-c8c01551517b
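
With a local daemon, storing a JSON document as an IPLD node is a single call to the dag/put endpoint. A sketch, assuming kubo’s defaults (dag-json input, dag-cbor storage) and a made-up payload:

import requests  # third-party: pip install requests

# dag/put parses the JSON as an IPLD node and returns its CID;
# the payload here is just an illustrative document
resp = requests.post(
    "http://127.0.0.1:5001/api/v0/dag/put",
    files={"file": ("data.json", b'{"card": "wood", "element": 1}')},
)
print(resp.json()["Cid"]["/"])  # CID of the stored node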