How to get the Qm... hash of a file/dir after ipfs add?

I added a dir with 2 files in it using ipfs add:
# ipfs add -r dir4
added Qm…CJV dir4/fourKFile
added Qm…Hib dir4/threeKFile
added Qm…eYE dir4

If I don’t immediately copy the Qm… hashes returned (say the console scrolled past, or something similar), is there any way to find them again?
I.e., knowing the names of the files/dir I have added, how do I find the corresponding Qm… hashes?

New to ipfs - maybe I’m missing something obvious. Your patience is appreciated.

Thx,
SJ

You can use the following command to list the current pins; your hash will be somewhere in the list, but there is no way to tell which pin corresponds to which file:
ipfs pin ls --type=recursive
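Alternatively, if the files are still on disk and unchanged, you can recompute the hashes without re-adding anything, using the `--only-hash` (`-n`) flag of `ipfs add`. A sketch, assuming `dir4` still has the same contents (and that you use the same chunking settings as the original add, so the hashes come out identical):

```shell
# Recompute the CIDs for dir4 without writing anything to the repo.
# --only-hash prints the same "added Qm... <name>" lines the original add did,
# so you can match names to hashes; grep narrows it to one file.
ipfs add -r --only-hash dir4 | grep fourKFile
```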
You could use the following command to add the files instead. Then it’s easy to look a hash up, provided you know what you named the folder:
ipfs files --help

How do I add the files using the ‘ipfs files’ interface?
AFAIK I have to do ‘ipfs add’ followed by an ‘ipfs files cp’, for which I need the aforementioned Qm… hash from the add operation. That is the root of the problem I’m having.
The only thing I see under ‘ipfs files’ is a ‘write’ operation, but that doesn’t seem to be meant for adding whole dirs/files.
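You can chain the two steps in one shell line: the `-Q` (quieter) flag makes `ipfs add` print only the final root hash, which you can capture and feed straight into `ipfs files cp`. A sketch (the MFS destination `/dir4` is just an example name):

```shell
# Add the directory and capture only the root CID (-Q prints just the final hash).
root=$(ipfs add -rQ dir4)

# Copy it into the Mutable File System under a name of your choosing,
# so the name-to-hash mapping is queryable later.
ipfs files cp "/ipfs/$root" /dir4
```

Afterwards, `ipfs files ls /` shows the name and `ipfs files stat /dir4` prints its hash back to you.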

thx,
SJ

If you want simple, use the “Import Folder” feature in webui.ipfs.io (it’s also part of IPFS Desktop). It places files/folders in the same location as the “ipfs files” command (which means you can use those commands to see files you added with import and vice versa).

Thx for the suggestion but I am trying to do this via CLI / API.
It seems I need an ‘ipfs files add’ command (something that combines the ipfs add and ipfs files cp functionality). :slight_smile:
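For what it’s worth, recent kubo (go-ipfs) releases do add something close to this: a `--to-files` option on `ipfs add` that registers the result in MFS in one step. Check `ipfs add --help` on your version before relying on it; a sketch with a fallback to the two-step approach:

```shell
# If this ipfs version supports --to-files, add and register in MFS at once;
# otherwise fall back to add followed by files cp.
if ipfs add --help | grep -q -- '--to-files'; then
  ipfs add -r --to-files /dir4 dir4
else
  root=$(ipfs add -rQ dir4)
  ipfs files cp "/ipfs/$root" /dir4
fi
```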

At the end of the add command, put

“> /home/user/file.txt” (without the quotes), for example. The shell will redirect the output there.
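Spelled out, that looks like the following (the output path is just an example). The interactive progress display goes to stderr, so the file should end up containing only the “added …” lines:

```shell
# Re-run the add and capture the "added Qm... <name>" lines for later reference.
ipfs add -r dir4 > /home/user/ipfs-hashes.txt

# Later, look up a file's hash by name:
grep fourKFile /home/user/ipfs-hashes.txt
```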

You can run the same add again, this time with the output redirected to a text file. It should complete quickly, since the files will be seen to already exist; that’s been my experience, at least. I sometimes update folders and just re-add the whole thing recursively to get a new hash for the current contents, and it goes fast since most files and subdirectories didn’t change.