Sharing the IPFS data directory and daemon between multiple users

I’m looking at how to use IPFS for multiple users on a single Linux system.

The users might not be real people; some may simply be processes running under different user accounts.

A single IPFS daemon would run on the system and content required by any of the users would be pooled in a single data directory. This would avoid keeping duplicate copies of any object.

As far as I can tell, if a single IPFS daemon runs with FUSE, any user can read things from /ipfs, so that part of the problem, read access, is already solved.
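To illustrate the read side, here is a minimal Go sketch (the CID is a placeholder, not a real object): once the daemon exposes the FUSE mount, any process can read published content as an ordinary read-only file without touching the data directory.

```go
// Minimal sketch: any local user can read content through the daemon's
// FUSE mount by treating /ipfs/<cid> as an ordinary read-only path.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path under the FUSE mount; requires the daemon to be running with
	// mounting enabled (`ipfs daemon --mount`, or a separate `ipfs mount`).
	// The CID below is a placeholder.
	path := "/ipfs/QmSomeExampleCidGoesHere"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read failed:", err)
		os.Exit(1)
	}
	os.Stdout.Write(data)
}
```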

Every user who wants to publish things with ipfs add needs to have write access to the data directory. This runs the risk that a user could break something in that directory. Is there any more fine-grained way to control access to IPFS in such an environment, so that users can run ipfs commands without having direct write access to the data storage?

I can think of ways to make a wrapper that passes files to the daemon through a queue directory, if there isn't already a built-in solution.
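A minimal sketch of that queue idea, assuming the daemon's HTTP API listens on the default 127.0.0.1:5001 (the queue path and poll interval are made up for illustration): a small service owned by the daemon's user polls a drop directory and pushes each file to the real `/api/v0/add` endpoint, so other users only need write access to the queue, never to the repo itself.

```go
// Hypothetical sketch of a queue-based wrapper: a privileged service owned
// by the daemon's user polls a drop directory and adds each file via the
// daemon's HTTP API, so ordinary users never touch the data directory.
package main

import (
	"bytes"
	"io"
	"log"
	"mime/multipart"
	"net/http"
	"os"
	"path/filepath"
	"time"
)

const (
	queueDir = "/var/spool/ipfs-queue" // hypothetical drop directory
	apiAdd   = "http://127.0.0.1:5001/api/v0/add"
)

// addFile uploads one queued file to the daemon's /api/v0/add endpoint
// as a multipart form, the format the API expects.
func addFile(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	var body bytes.Buffer
	w := multipart.NewWriter(&body)
	part, err := w.CreateFormFile("file", filepath.Base(path))
	if err != nil {
		return err
	}
	if _, err := io.Copy(part, f); err != nil {
		return err
	}
	w.Close()

	resp, err := http.Post(apiAdd, w.FormDataContentType(), &body)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body) // JSON response containing the hash
	log.Printf("added %s: %s", path, out)
	return nil
}

func main() {
	for {
		entries, err := os.ReadDir(queueDir)
		if err != nil {
			log.Fatal(err)
		}
		for _, e := range entries {
			if e.IsDir() {
				continue
			}
			p := filepath.Join(queueDir, e.Name())
			if err := addFile(p); err != nil {
				log.Printf("add %s failed: %v", p, err)
				continue
			}
			os.Remove(p) // added successfully; drop from the queue
		}
		time.Sleep(5 * time.Second)
	}
}
```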

This sounds like the general issue of user tokens and authentication in the IPFS API, which I'm not sure has been implemented yet or is still in the works:

See this: https://github.com/ipfs/go-ipfs/issues/2389

The approach I’ve used is to run a Docker container where IPFS resides; its HTTP API is not made available to the outside world directly. Instead, my own web app on the back end does the writes to IPFS, acting as a proxy that handles user authentication itself. So I’m using IPFS the same way you’d use a database, where end users don’t even have the ability to talk directly to the database.
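A stripped-down Go sketch of that proxy pattern (the token, hostnames, and route are placeholders, not my actual app): only requests that pass the app's own auth check are forwarded to the daemon's add endpoint, and the API itself stays unreachable from outside.

```go
// Rough sketch of the auth-proxy pattern: a back-end service checks its own
// credential before forwarding write requests to the daemon's (otherwise
// unreachable) HTTP API. Token and addresses are placeholders.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The daemon's API, reachable only from the back end (e.g. inside the
	// Docker network), never exposed to end users directly.
	api, err := url.Parse("http://ipfs:5001")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(api)

	mux := http.NewServeMux()
	mux.HandleFunc("/api/v0/add", func(w http.ResponseWriter, r *http.Request) {
		// App-specific auth; a static bearer token here stands in for
		// whatever user authentication the web app really does.
		if r.Header.Get("Authorization") != "Bearer my-app-token" {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		proxy.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```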

I think most pinning services do something similar: they invent their own proprietary way of authenticating users and have their own proprietary back-end code for doing the actual writes to IPFS.