Multiple peers on the same machine for testing

I am trying to set up multiple IPFS peers on the same Windows machine in order to test file sharing and the pubsub service.

I have created a separate .ipfs folder for each peer, i.e. .ipfs1 and .ipfs2.
In each config file I have changed the ports 4001, 5001 and 8080 so they don't overlap.
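
For example, the Addresses section in Peer2's config now looks roughly like this (only the port numbers differ from the defaults):

"Addresses": {
  "Swarm": [
    "/ip4/0.0.0.0/tcp/4002",
    "/ip6/::/tcp/4002"
  ],
  "Announce": [],
  "NoAnnounce": [],
  "API": "/ip4/127.0.0.1/tcp/5002",
  "Gateway": "/ip4/127.0.0.1/tcp/8082"
}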

To run all the daemons at the same time, I open two console windows and enter in each one:

set IPFS_PATH=C:\Users\MyName\.ipfsX (X = the peer number)
ipfs daemon --enable-pubsub-experiment

When I want to execute commands against a specific peer, I open a new console window and type:

set IPFS_PATH=C:\Users\MyName\.ipfsX (X = the peer number)
cmd
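
For example, to verify I'm talking to the right node in that window I run:

ipfs id

and check that each window reports a different peer ID.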

So let's get to the problem. I want to run two peers, subscribe both to the same pubsub topic and exchange messages.
I have six console windows open, three for each peer:

  • 1 for the running daemon
  • 1 for running pubsub sub and listening for messages
  • 1 for inputting commands

The issue is that when I publish a pubsub message, only the sending peer receives it:
Peer1 only sees messages published by Peer1, and so on.
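
For reference, these are the commands I use (the topic name "testtopic" is just an example):

In the sub console of each peer:
ipfs pubsub sub testtopic

In the commands console of Peer1:
ipfs pubsub pub testtopic "hello from peer1"

I would expect "hello from peer1" to show up in both sub consoles, but it only appears in Peer1's.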

Is there something wrong with my multi-peer setup?
Any help would be appreciated.

Are you sure they are connected to each other?

Try doing ipfs swarm peers on one peer and check if the IDs of the others are there.

If they are not connected, do ipfs id, take the swarm multiaddr you want to connect via, and run ipfs swarm connect <multiaddr> to connect them. Then try the pubsub commands again.
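
For example, something like this, where the port and peer ID are placeholders for whatever ipfs id reports on the other node:

ipfs swarm connect /ip4/127.0.0.1/tcp/<peer2_swarm_port>/ipfs/<peer2_id>

After that, ipfs swarm peers on Peer1 should list Peer2's ID.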

@VictorBjelkholm thanks for your answer.

It looks like my multi-peer approach is flawed.
Only Peer1 shows any connected peers; all the other peers have none.
I guess just replacing the port numbers is not working as expected.
Below is the daemon output for Peer1 and Peer2.
If you could take a look I'd really appreciate it.

Peer1:

Initializing daemon...
Swarm listening on /ip4/127.0.0.1/tcp/4001
Swarm listening on /ip4/my_external_ip4_1/tcp/4001
Swarm listening on /ip4/my_external_ip4_2/tcp/4001
Swarm listening on /ip4/192.168.1.64/tcp/4001
Swarm listening on /ip4/192.168.56.1/tcp/4001
Swarm listening on /ip6/my_external_ip6_1/tcp/4001
Swarm listening on /ip6/my_external_ip6_2/tcp/4001
Swarm listening on /ip6/::1/tcp/4001
Swarm listening on /p2p-circuit/ipfs/my_peer1_id
Swarm announcing /ip4/127.0.0.1/tcp/4001
Swarm announcing /ip4/my_external_ip4_1/tcp/4001
Swarm announcing /ip4/my_external_ip4_2/tcp/4001
Swarm announcing /ip4/192.168.1.64/tcp/4001
Swarm announcing /ip4/192.168.56.1/tcp/4001
Swarm announcing /ip6/my_external_ip6_1/tcp/4001
Swarm announcing /ip6/my_external_ip6_2/tcp/4001
Swarm announcing /ip6/::1/tcp/4001
API server listening on /ip4/127.0.0.1/tcp/5001
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8081
Daemon is ready

Peer2:

Initializing daemon...
Swarm listening on /ip4/127.0.0.1/tcp/4002
Swarm listening on /ip4/my_external_ip4_1/tcp/4002
Swarm listening on /ip4/my_external_ip4_2/tcp/4002
Swarm listening on /ip4/192.168.1.64/tcp/4002
Swarm listening on /ip4/192.168.56.1/tcp/4002
Swarm listening on /ip6/my_external_ip6_1/tcp/4002
Swarm listening on /ip6/my_external_ip6_2/tcp/4002
Swarm listening on /ip6/::1/tcp/4002
Swarm listening on /p2p-circuit/ipfs/my_peer2_id
Swarm announcing /ip4/127.0.0.1/tcp/4002
Swarm announcing /ip4/my_external_ip4_1/tcp/4002
Swarm announcing /ip4/my_external_ip4_2/tcp/4002
Swarm announcing /ip4/192.168.1.64/tcp/4002
Swarm announcing /ip4/192.168.56.1/tcp/4002
Swarm announcing /ip6/my_external_ip6_1/tcp/4002
Swarm announcing /ip6/my_external_ip6_2/tcp/4002
Swarm announcing /ip6/::1/tcp/4002
API server listening on /ip4/127.0.0.1/tcp/5002
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8082
Daemon is ready

What version of go-ipfs are you using? There was a similar issue here https://github.com/ipfs/go-ipfs/issues/5146 but it doesn’t look like it should be related.

Could you also post your daemon configs?

With some more digging I see that the webui is not working either on ports other than the default 5001.
When both the Peer1 and Peer2 daemons are running, http://127.0.0.1:5001/webui works, but http://127.0.0.1:5002/webui keeps loading forever.
Maybe the port numbers are hardcoded somewhere and changing them in the config is not enough for multiple peers.
Am I missing something, or is virtualization the only solution?

Why not run one IPFS node locally and the second one inside a Docker container, with only local access (or public access, if you want)?
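
A rough sketch of what that could look like, assuming the official ipfs/go-ipfs image (which keeps its repo in /data/ipfs); the container name, host folder and port mappings are just placeholders:

docker run -d --name ipfs_peer2 -v C:\Users\MyName\ipfs_docker:/data/ipfs -p 4002:4001 -p 127.0.0.1:5002:5001 -p 127.0.0.1:8082:8080 ipfs/go-ipfs:latest

docker exec ipfs_peer2 ipfs id

You can then run any other ipfs command (pubsub, swarm connect, …) against that node via docker exec in the same way. To get pubsub there you should be able to append daemon --enable-pubsub-experiment to the docker run line so it replaces the default daemon command, but I haven't verified that.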

@stebalien

        "AgentVersion": "go-ipfs/0.4.17/",
        "ProtocolVersion": "ipfs/0.1.0"

EDIT: Nothing in the config files was touched except changing the port numbers and enabling the filestore option.
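
By the filestore option I mean the experimental flag, enabled with something like:

ipfs config --json Experimental.FilestoreEnabled true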

@jlubbers thanks for the note. I'm just taking my first steps with IPFS, so I tried the most straightforward (for me) solution.
If this doesn't work out I guess I will have to look into Docker.

Well, welcome to the IPFS community! I'm fairly new here too, had some trouble getting started, and also wanted to test multiple peer connections; Docker was a pretty easy way to go. If you need help getting a Docker IPFS setup running, feel free to send me a PM and I'm happy to work with you on a good testing environment. Maybe we can then write up some good instructions and an easy-to-use repo for others too!

@jlubbers thanks for the offer. Unfortunately Docker will not install on my Windows 10 Home; it requires the Pro or Enterprise edition.
So it's a one-way road to VirtualBox…

You can also try combinations of go-ipfs, IPFS-Manager and Siderus Orion to run multiple nodes at the same time.

I've also gone the VirtualBox route, with Ubuntu 18 running in the VM. It worked very well.

With IPFS-Manager you can run an installation from a flash drive.

IPFS definitely supports multiple instances on the same machine (we do this all the time when testing).

This is really odd. Could you try from, e.g., git bash instead of CMD?
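
I.e. something along these lines in each git bash window (the path is just an example, point it at your own repos):

export IPFS_PATH=C:/Users/MyName/.ipfs2
ipfs daemon --enable-pubsub-experiment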

@roscoevanderboom thanks for the suggestions.
So far I have been testing with multiple VirtualBox VMs.
I encountered a few problems with Windows VMs, where sometimes pubsub was not functioning at all,
but all the Linux VMs I tried worked flawlessly.

@stebalien my multiple local instances seemed to work at first and the daemons reported no errors,
but when I tried to pubsub between them they were completely deaf to each other's messages.
I will give it another try when I reach the large-scale phase, where VMs will not suffice.

It works fine for me. I adopted the same configuration @plexus mentioned, but with both nodes running in Docker containers.