I wanted to share a use case I've been experimenting with. The core idea is that I want to separate peer review from publication.
Anyone should be able to publish their data anywhere, and that data should be able to be used anywhere.
But since we've de-coupled quality control from publication, how can we learn about the quality of that data, or whether it has been reviewed, when it no longer has a single point of access (i.e. a specific book or siloed website) or an identifiable origin?
Enter IPFS. Instead of thinking about the data's origin, we can think about its content.
To demonstrate, I've built a little review registry. http://dll-review-registry.scta.info/
It was originally built for the Digital Latin Library and intended as a registry for reviews of Latin editions, but there's no reason it can't support reviews of any file or any kind of data. If there is a URL for the data, a review can be created.
Anyone can log in with their GitHub credentials to leave a review of any piece of data. They simply have to provide a link to the data and the text of their review.
When they hit submit, the system retrieves the content at the provided URL. It then uses IPFS to pin that content to its node, making it available on the IPFS network. (Note: I'm having trouble getting port 4001 exposed at present, so pinned data is for the moment only available at the SCTA gateway, http://gateway.scta.info. But it is enough to give you the idea.)
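The submit step can be sketched roughly as follows. This is my reconstruction, not the registry's actual code: it assumes a local IPFS daemon exposing the standard HTTP API on port 5001, and the `/api/v0/add` endpoint it uses is the stock go-ipfs one.

```python
import hashlib
import json
import urllib.request


def fetch(url):
    """Retrieve the raw bytes at the submitted URL."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()


def sha256_hex(data):
    """SHA-256 digest the registry can use as a content identifier."""
    return hashlib.sha256(data).hexdigest()


def pin_to_ipfs(data, api="http://127.0.0.1:5001"):
    """Add (and pin) content via a local IPFS daemon's HTTP API.

    go-ipfs expects a multipart/form-data POST to /api/v0/add;
    the JSON response includes the content's IPFS hash.
    """
    boundary = "ipfsformboundary"
    body = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="file"; filename="data"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + data + f"\r\n--{boundary}--\r\n".encode()
    req = urllib.request.Request(
        f"{api}/api/v0/add",
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["Hash"]
```

On submit, the registry would do something like `pin_to_ipfs(fetch(url))` and store the returned hash alongside the review text.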
Once a review is made, the registry becomes a discovery endpoint that any application can use to find out whether the content has been reviewed. See my early and primitive API docs: http://dll-review-registry.scta.info/docs/index.html
Any application can send the endpoint a link to a file (or a pre-computed SHA-256 or IPFS hash) and the registry will return any reviews for that exact content/hash.
The beauty is that, in typical IPFS fashion, you don't have to know the "location" of the reviewed data. You can just send the service the data you have; the system will compute the hash and check whether there are any reviews for the identical content.
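Client-side, the lookup could look something like the sketch below. The `/reviews` path and `hash` parameter are my placeholders, not the registry's documented API (see the API docs linked above for the real endpoints); the point is that the query is built from the content alone, with no reference to where the data lives.

```python
import hashlib
import urllib.parse

REGISTRY = "http://dll-review-registry.scta.info"


def review_query_url(data: bytes) -> str:
    """Build a registry lookup URL from content alone.

    Hash the bytes locally, then ask the registry for reviews of that
    exact content. (Endpoint path and parameter name are illustrative.)
    """
    digest = hashlib.sha256(data).hexdigest()
    return f"{REGISTRY}/reviews?" + urllib.parse.urlencode({"hash": digest})
```

Two applications holding byte-identical copies of an edition will build the same URL and find the same reviews, regardless of where each copy came from.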
Here are two screenshots of independent applications using the review registry and reviewed data pinned to the IPFS network. In each case, you can see a little green review badge that has been retrieved separately from the service.
You can see the request in action here: http://scta-staging.lombardpress.org/text/lectio1, or see the screenshot below.
This second screenshot includes the IPFS hash as an indicator of the precise data that was reviewed (see the bottom-right corner).
I'd be interested to hear from others working on similar questions: How could we use IPFS to create a global review registry of distributed content? How does this fit with related work you're already doing? How can we collaborate further?