Multiple IPFS Cluster Nodes in the same machine using Docker

Is there any updated guide on how to set up what the title describes?
Since go-ipfs is no longer included in the cluster Docker image, I cannot find any detailed resources on the steps needed to achieve this.

The process is no different from running a single peer; you just need to run several containers. What is the problem you are facing?

@hector first of all, I cannot figure out how to set up docker-compose.yml to create go-ipfs nodes.
What I’d like to have is something like this:

ipfsnode1 : 192.168.1.101, 4001, 5001, 8080
ipfsnode2 : 192.168.1.102, 4001, 5001, 8080
ipfsnode3 : 192.168.1.103, 4001, 5001, 8080

For now I create my nodes by running multiple containers on 127.0.0.1 with different ports, but using docker-compose would be ideal.
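For reference, my current ad-hoc setup looks roughly like this (container names and host-port choices are just my own, mirroring the layout above):

```
# three go-ipfs containers on one host, each exposing its own host ports
docker run -d --name ipfsnode1 -p 4101:4001 -p 5101:5001 -p 8180:8080 ipfs/go-ipfs
docker run -d --name ipfsnode2 -p 4201:4001 -p 5201:5001 -p 8280:8080 ipfs/go-ipfs
docker run -d --name ipfsnode3 -p 4301:4001 -p 5301:5001 -p 8380:8080 ipfs/go-ipfs
```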

Hey, have a look here at how that would look: Setting up 2 peer IPFS cluster on docker

Our bad that we haven’t documented a working docker-compose setup yet. I will make it a priority.

@hector I already tried that.
I stripped out all the cluster stuff and set up only the ipfs nodes.
I keep getting an error:
The network ipfsnodes_vpcbr cannot be used with services. Only networks scoped to the swarm can be used, such as those created with the overlay driver.
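In case it helps anyone else reading this: that particular error is typically produced by `docker stack deploy` (swarm mode), which only accepts swarm-scoped overlay networks; the same file with a bridge network is accepted by plain `docker-compose up`. Roughly:

```
# swarm mode rejects bridge networks with exactly this error:
docker stack deploy -c docker-compose.yml ipfsnodes

# plain compose handles the bridge driver fine:
docker-compose up -d
```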

The docker-compose file I used:

version: '3'

networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.1.0/24

services:
  ipfs_01:
    image: ipfs/go-ipfs
    ports:
      - "4101:4001"
      - "5101:5001"
      - "8180:8080"
    volumes:
      - ./nodes/ipfs_01/data:/data/ipfs/
      - ./nodes/ipfs_01/staging:/export
    networks:
      vpcbr:
        ipv4_address: 192.168.1.101

  ipfs_02:
    image: ipfs/go-ipfs
    ports:
      - "4201:4001"
      - "5201:5001"
      - "8280:8080"
    volumes:
      - ./nodes/ipfs_02/data:/data/ipfs/
      - ./nodes/ipfs_02/staging:/export
    networks:
      vpcbr:
        ipv4_address: 192.168.1.102

I’m not very fluent in docker-compose, but as pointed out in the other thread, you don’t need to provide fixed IP addresses if Docker sets up DNS names for you.

@plexus here you go:

Thanks a lot @hector!
I’m sure under Linux your docker-compose works like a charm.
But Docker for Windows is another poison :wink:

The only way to get it running is by disabling all volumes, because the Docker VM has a hard time linking to the host’s shares.
It produces errors no matter what I’ve tried:

ERROR cmd/ipfs: error from node construction: failure writing to dagstore: sync /data/ipfs/blocks: invalid argument daemon.go:332

(some directories and files are written there though)

If anyone here has encountered and solved this problem, any help would be appreciated, because it’s the only way to achieve persistence.
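(One workaround that is often suggested for Docker for Windows, though I haven’t verified it myself, is to use Docker-managed named volumes instead of bind mounts to the host filesystem, e.g.:)

```yaml
services:
  ipfs_01:
    image: ipfs/go-ipfs
    volumes:
      - ipfs_01_data:/data/ipfs    # named volume managed by Docker, no host path

volumes:
  ipfs_01_data:
```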

The second issue is a bit easier. How can I modify the compose file to make the ipfs services run the daemon with --enable-pubsub-experiment?

Thanks in advance for helping me with my first steps in this great project.

(PS: I also get an error for the container name definitions; it roughly states that naming a service container is not allowed.)

make the ipfs services run the daemon with --enable-pubsub-experiment?

You’ll need to override the command. Check the Dockerfile and entrypoints from go-ipfs.
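As a sketch of what that override could look like (untested; the go-ipfs image’s entrypoint script passes the service’s `command` on to the ipfs binary as daemon arguments):

```yaml
services:
  ipfs_01:
    image: ipfs/go-ipfs
    # replaces the image's default CMD; the entrypoint forwards these args to ipfs
    command: ["daemon", "--migrate=true", "--enable-pubsub-experiment"]
```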

OK, I switched to Ubuntu and all the glitches vanished. Thanks @hector!

Also, what is the fastest way to check that pinset coordination works?
I guess I have to pin a file on one node and then check whether it is pinned on the others.
But how can I get a list of the hashes that are pinned on a node (assuming I have access to that node’s CLI)?

EDIT: Solved. With pin ls you can get all pinned hashes. Cluster works independently from the ipfs daemon, so this command will give different results for each of them.
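For anyone else verifying this, the sequence I ended up with was roughly the following (the CID is a placeholder):

```
# on one node: pin through the cluster
ipfs-cluster-ctl pin add <cid>

# on any node: check the cluster-wide status of that pin
ipfs-cluster-ctl status <cid>

# on each node: list what the local ipfs daemon has actually pinned
ipfs pin ls --type recursive
```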