For me, the problem with searching the DHT, or the vast ocean of hashes people will be generating in the future, is the number of useless hashes that need to be resolved to discover content. We should focus more on searching IPNS.
As for searching IPFS objects directly, this will also be inefficient. I re-add my website every time I change something, creating a new hash each time. Does anyone really want to search through 50 copies of nearly the same content? The same problem appears each time you re-add a folder with new content.
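To illustrate the IPNS point, here is a rough sketch, assuming a local kubo daemon on the default API port and the ipfs-http-client package; one stable IPNS name can always point at the newest root hash, so a search layer only has to index the name, not every historical CID:

```typescript
// Minimal sketch, assuming a local kubo daemon and ipfs-http-client;
// not a finished implementation.
import { create } from 'ipfs-http-client'

async function publishSite(): Promise<void> {
  const ipfs = create({ url: 'http://127.0.0.1:5001' })

  // Re-adding changed content produces a brand-new CID every time...
  const { cid } = await ipfs.add('<html>my updated site</html>')

  // ...but publishing under the node's IPNS key keeps one stable name
  // that always resolves to the latest version.
  const { name } = await ipfs.name.publish(`/ipfs/${cid}`)
  console.log(`latest CID: ${cid} -> stable name: /ipns/${name}`)
}

publishSite().catch(console.error)
```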
My suggestion: add a distributed database to an IPFS client like Siderus Orion or IPFS Manager. Alternatively, files could be added/indexed/verified through a service like Stamp.io, but I would prefer native interactions.
When adding files, users could attach tags, e.g. title/description/username, and the client would automatically add tags for file size, type, and date added (the last one is very important).
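Something like the record below is what I have in mind. This is only a sketch; every field name is my own invention, not an existing schema:

```typescript
// Hypothetical shape of a single entry in the local index DB;
// all field names are illustrative, not an existing standard.
interface IndexEntry {
  cid: string           // the content hash being indexed

  // user-supplied tags
  title: string
  description: string
  username: string

  // client-supplied tags
  sizeBytes: number
  mimeType: string
  dateAdded: string     // ISO timestamp; key for surfacing the latest version
}
```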
The local DB is updated with the new hash, which could be broadcast to and synchronized with connected peers. This way, when users search the DB, they are directed to the latest version of the content, with the option to also view older versions (if the file still exists somewhere).
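The broadcast step could run over IPFS pubsub. The sketch below assumes kubo with pubsub enabled (`--enable-pubsub-experiment`) and ipfs-http-client; the topic name is made up for illustration:

```typescript
// Rough sketch of broadcasting index entries over pubsub; the topic name
// and the trimmed IndexEntry type are illustrative assumptions.
import { create } from 'ipfs-http-client'

type IndexEntry = { cid: string; title: string; dateAdded: string }

const TOPIC = 'search-index'   // hypothetical topic name

async function syncIndex(entry: IndexEntry): Promise<void> {
  const ipfs = create({ url: 'http://127.0.0.1:5001' })

  // Merge entries announced by connected peers into the local DB;
  // older rows are kept so past versions stay discoverable.
  await ipfs.pubsub.subscribe(TOPIC, (msg) => {
    const received: IndexEntry = JSON.parse(new TextDecoder().decode(msg.data))
    console.log(`peer announced ${received.cid} (${received.title})`)
  })

  // Announce our own new entry to every peer subscribed to the topic.
  await ipfs.pubsub.publish(TOPIC, new TextEncoder().encode(JSON.stringify(entry)))
}
```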
Websites or companies with large storage capacity and a lot of content, such as Google, Instagram, Wikipedia, or LinkedIn, would be huge contributors to the trusted hash list. By syncing with these large “trusted” entities, users would get the latest search results.
Other options, such as rating content as “safe, explicit, offensive, harmful”, could help keep harmful content out of search results.
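If ratings were added, they could simply extend the index entry; again, the names below are illustrative only:

```typescript
// One possible way to attach the proposed ratings to an index entry;
// both the rating values and the field names are illustrative assumptions.
type ContentRating = 'safe' | 'explicit' | 'offensive' | 'harmful'

interface RatedIndexEntry {
  cid: string
  rating: ContentRating   // clients could filter or down-rank results on this
}
```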