IPFS and Libp2p seem to exist somewhat separately, allowing and encouraging Libp2p to be used for other p2p projects. This seems inefficient, however: it leads to overlapping nodes on the same computer, where each Libp2p peer corresponds to a piece of software rather than to the computer itself.
One solution could be to restructure Libp2p implementations along the lines of FUSE, with applications such as IPFS running as separate processes on top of a shared node. In the same way that IPFS replaces HTTP as a content distribution system, Libp2p could replace existing networking layers: an IP address and port would be replaced with a peer ID and a protocol identifier, respectively. Instead of arbitrarily assigning ports to purposes (e.g. 22 for SSH), services would be identified by protocol. Something like the ipfs p2p commands could be used, much like a gateway server, to ease adoption.
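As a concrete sketch of that gateway idea, the experimental ipfs p2p feature can already tunnel an ordinary TCP service over a Libp2p stream addressed by peer ID and protocol name rather than IP and port (the protocol name and peer ID below are placeholders):

```shell
# On the machine hosting the service, enable the experimental feature:
ipfs config --json Experimental.Libp2pStreamMounting true

# Expose a local TCP service (here, one listening on 127.0.0.1:8080)
# under a Libp2p protocol name instead of a public port:
ipfs p2p listen /x/my-app/1.0 /ip4/127.0.0.1/tcp/8080

# On a client machine, forward a local port to that service,
# addressing it by peer ID + protocol rather than IP + port:
ipfs p2p forward /x/my-app/1.0 /ip4/127.0.0.1/tcp/9090 /p2p/<server-peer-id>

# Local clients now connect to 127.0.0.1:9090; Libp2p handles
# peer routing and transport underneath.
```

Note that only the forwarding endpoints touch TCP/IP here; the client-to-server hop is identified purely by peer ID and protocol, which is the addressing model proposed above.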
This would allow IPFS and other decentralized applications to exist without any reference to TCP or IP, even in multiaddresses built on them. Much of IPFS's networking, such as finding an address for a peer, could use any Libp2p node, but IPFS-specific exchanges would only happen between the subset of nodes that are also running IPFS. This seems more appropriate than running a separate node and network for every piece of decentralized software.
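To illustrate the distinction (peer ID is a placeholder), compare a transport-qualified multiaddr with a transport-agnostic one:

```
/ip4/203.0.113.5/tcp/4001/p2p/<peer-id>   # ties the peer to a TCP/IP endpoint
/p2p/<peer-id>                            # names only the peer; routing finds a transport
```

The second form is what applications would use under this proposal, leaving transport selection entirely to the shared Libp2p node.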
Does this make sense? It would require restructuring IPFS implementations, but in theory it could remain wire-compatible with existing nodes.