Just in case somebody is curious, here is what we ended up doing:
We have an AWS EC2 instance running an ipfs node. It is connected to our private swarm, but the same approach would also work if we were connected to the global ipfs swarm. On this node we have configured various IPNS keys for our assets and applications across the stages (dev, qa, release).
We have configured our travis-ci.com build with an SSH key that is allowed to ssh into the ipfs node.
On the node, we have a script ipfs-publish.sh that reads a piped tar.gz stream from stdin and publishes its contents under one of the keys in the keystore of the node running on the EC2 instance.
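The server-side script is roughly shaped like this (a minimal sketch, not our exact code; the function name, scratch-directory handling, and flags are assumptions, and it presumes an ipfs daemon is running locally with the key already created via `ipfs key gen`):

```shell
#!/bin/sh
# ipfs-publish.sh (sketch): read a tar.gz stream from stdin, add the
# unpacked tree to IPFS, and publish the resulting CID under an IPNS key.
# Usage: ... | ipfs-publish.sh <keyname>
ipfs_publish() {
    keyname="$1"
    workdir="$(mktemp -d)" || return 1
    # Unpack the piped archive into a scratch directory.
    tar -xzf - -C "$workdir" || return 1
    # Add the tree to IPFS; -r recurses, -Q prints only the root CID.
    cid="$(ipfs add -r -Q "$workdir")" || return 1
    # Point the IPNS name for <keyname> at the new root CID.
    ipfs name publish --key="$keyname" "/ipfs/$cid"
    status=$?
    rm -rf "$workdir"
    return $status
}
```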
To push to that node, assuming the public key part of the key configured in Travis is listed in the authorized_keys of the ipfs node, you just need to execute
tar -czvf - -C dirname . | ssh ipfs.company.net ipfs-publish.sh keyname --
from the .travis.yml of the build.
But we also have a script ipfs-publish-remote.sh that simplifies this and that can be called from the deploy script.
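That wrapper might look like this (a sketch under assumptions: the function name and argument order are illustrative, and the host and remote script name are taken from the command above):

```shell
#!/bin/sh
# ipfs-publish-remote.sh (sketch): tar up a local directory and pipe it
# to the ipfs node, which unpacks it and publishes it under an IPNS key.
# Usage: ipfs-publish-remote.sh <dirname> <keyname>
ipfs_publish_remote() {
    dirname="$1"
    keyname="$2"
    # -z produces the gzip stream the server-side script expects.
    tar -czf - -C "$dirname" . | ssh ipfs.company.net ipfs-publish.sh "$keyname" --
}
```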
The only thing left is to add the target host to the known hosts, which can be done in the .travis.yml.
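For example, with the Travis ssh_known_hosts addon (the host name here is the one from the command above):

```yaml
# .travis.yml (excerpt): trust the ipfs node's host key so the
# non-interactive ssh in the deploy step does not prompt.
addons:
  ssh_known_hosts: ipfs.company.net
```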