I'll take a shot at this.
You can do this with IPNS by republishing the IPNS name to point at the newest hash each week. You'd likely need a script on the clients that resolves the IPNS name, pins the new hash it points to, and optionally unpins older hashes if you don't want those files hanging around.
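A rough sketch of what that weekly client-side script might look like (the IPNS name here is a placeholder, and this assumes a running ipfs daemon):

```shell
#!/bin/sh
# Hypothetical IPNS name of the publisher -- replace with the real one.
IPNS_NAME="/ipns/QmPublisherPeerIdGoesHere"

# Resolve the IPNS name to the current /ipfs/<hash> it points at.
NEW_HASH=$(ipfs name resolve "$IPNS_NAME")

# Pin the new content so this node fetches and keeps a full copy.
ipfs pin add "$NEW_HASH"

# Optionally: unpin everything else, then garbage-collect old blocks.
# for OLD in $(ipfs pin ls --type=recursive -q); do
#   [ "/ipfs/$OLD" != "$NEW_HASH" ] && ipfs pin rm "$OLD"
# done
# ipfs repo gc
```

Dropping this into a weekly cron entry on each client would cover the update cycle; the unpin/gc step is only needed if disk space matters.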
You might currently have to do some fine-tuning if you're trying to pin 1.5TB at a time, but I'm under the impression that some performance fixes are coming in v0.4.11. If only a small subset of the files changes each week, adding the data to IPFS should be much faster after the initial add, since unchanged blocks are already in the datastore. I wouldn't expect excessive bandwidth utilization from adding and pinning content you already have locally, since those operations don't transfer data over the network.
You'll probably also want to use the IPFS filestore when adding the files, since by default any added content gets copied into IPFS's managed datastore; without the filestore, you'd consume roughly an extra 1.5TB of local storage (maybe less, depending on whether there are duplicate blocks) for every 1.5TB of content added.
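Assuming a reasonably recent go-ipfs, enabling the filestore looks roughly like this (it's still marked experimental, so the flags may change):

```shell
# Enable the experimental filestore config option, then restart the daemon.
ipfs config --json Experimental.FilestoreEnabled true

# Add files by reference instead of copying them into the managed datastore.
ipfs add -r --nocopy /path/to/dataset
```

The caveat with `--nocopy` is that the original files must stay in place and unmodified, since the datastore now points at them rather than holding its own copy.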
There might be a way to get some information on transfer progress, but I'm not aware of anything comparable to the visualizations available in torrent clients or P2P file transfer tools like Resilio Sync.
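The closest built-in thing I know of is the aggregate bandwidth counters, which at least show whether data is moving (again assuming a running daemon):

```shell
# Show total bytes in/out and current transfer rates for the node.
ipfs stats bw

# Poll continuously -- useful for watching a large pin in progress.
ipfs stats bw --poll --interval 5s
```

This is node-wide rather than per-transfer, so it tells you that a sync is happening and how fast, but not how far along any particular pin is.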
In general, not that I'm aware of. IPFS doesn't treat content cached locally any differently from content that's only available remotely. For any given IPFS hash, you can interact with it as if it were in the local filesystem by setting up an IPFS mount and navigating to /ipfs/<hash>. While it might seem like it should work, running ls /ipfs/ on the IPFS mount will not list only pinned or locally cached hashes.
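If the goal is just to see what's actually held locally, the CLI can list it directly, which is probably the practical workaround:

```shell
# List the recursively pinned root hashes.
ipfs pin ls --type=recursive

# List every block the local node currently has (can be a very long list).
ipfs refs local
```

Neither gives you a filesystem view, but a script could walk the pinned roots with `ipfs ls` to reconstruct one.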