Sharing pins and data using IPFS Cluster across different networks

Hello. I am trying to share pins and data using a collaborative cluster across different networks. I want to connect an internal network behind wifi (192.168.1.X) to public cloud services (161.X, 172.X).
I set up UPnP on the wifi router to forward the cluster port. I can see the peers with ipfs-cluster-ctl from the public cloud services, but pins and data are not shared. The errors are:
ERROR crdt: expected 1 as the cid version number, got: 10 crdt.go:308
ERROR crdt: reading varint: buffer too small crdt.go:308
ERROR p2p-gorpc: failed to dial : all dials failed

  • dial tcp4 0.0.0.0:9096->161.X.X.X:9096: i/o timeout call.go:64

This does not seem to be an issue with the Raspberry Pi or Mac mini devices themselves, because when they are on the same internal network they can share pins and data. Do you have any ideas? How can I fix this?

Thank you

Hello,

How did you install IPFS Cluster? Are you running different versions, with one of them built from master?

Thank you for the reply.
Both devices run the same versions: ipfs 0.4.23 and ipfs-cluster-service 0.12.1.
The Mac mini hosts the cluster and was installed using go (https://cluster.ipfs.io/download).
The Raspberry Pi 3B+ and 4 just use the arm binaries downloaded from https://dist.ipfs.io/#ipfs-cluster-ctl.
Also, the two Raspberry Pis can share pins and adds on the internal network.
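
For reference, a quick way to confirm the versions on each node is the standard version flags (a sketch; the exact output format may vary between releases):

    # Run on each device; all nodes should report matching cluster versions.
    ipfs version                    # go-ipfs, e.g. ipfs version 0.4.23
    ipfs-cluster-service --version  # cluster daemon
    ipfs-cluster-ctl --version      # cluster CLI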

Can you send the output of ipfs-cluster-ctl --enc=json peers ls?

[
    {
        "id": "12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ",
        "addresses": [
            "/ip4/127.0.0.1/tcp/9096/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ",
            "/ip4/192.168.1.19/tcp/9096/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ",
            "/ip4/172.17.0.1/tcp/9096/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ",
            "/ip4/127.0.0.1/udp/9096/quic/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ",
            "/ip4/192.168.1.19/udp/9096/quic/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ",
            "/ip4/172.17.0.1/udp/9096/quic/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ"
        ],
        "cluster_peers": [
            "12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ",
            "12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm"
        ],
        "cluster_peers_addresses": [
            "/ip4/10.12.0.6/tcp/9096/p2p/12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm",
            "/ip4/127.0.0.1/tcp/9096/p2p/12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm",
            "/ip4/172.17.0.1/tcp/9096/p2p/12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm",
            "/ip4/172.18.0.1/tcp/9096/p2p/12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm",
            "/ip4/192.241.X.X/tcp/9096/p2p/12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm"
        ],
        "version": "0.12.1",
        "commit": "",
        "rpc_protocol_version": "/ipfscluster/0.12/rpc",
        "error": "",
        "ipfs": {
            "id": "QmcnNFHQLgtAY93x4VKNg1dFn4oBzvLJUxho83Snk6te82",
            "addresses": [
                "/ip4/127.0.0.1/tcp/4001/p2p/QmcnNFHQLgtAY93x4VKNg1dFn4oBzvLJUxho83Snk6te82",
                "/ip4/192.168.1.19/tcp/4001/p2p/QmcnNFHQLgtAY93x4VKNg1dFn4oBzvLJUxho83Snk6te82",
                "/ip4/172.17.0.1/tcp/4001/p2p/QmcnNFHQLgtAY93x4VKNg1dFn4oBzvLJUxho83Snk6te82",
                "/ip6/::1/tcp/4001/p2p/QmcnNFHQLgtAY93x4VKNg1dFn4oBzvLJUxho83Snk6te82",
                "/ip6/fd50:a32:62b4::d55/tcp/4001/p2p/QmcnNFHQLgtAY93x4VKNg1dFn4oBzvLJUxho83Snk6te82",
                "/ip6/fd50:a32:62b4:0:ca69:91ca:bb46:7b30/tcp/4001/p2p/QmcnNFHQLgtAY93x4VKNg1dFn4oBzvLJUxho83Snk6te82"
            ],
            "error": ""
        },
        "peername": "03pi4"
    },
    {
        "id": "12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm",
        "addresses": [
            "/ip4/127.0.0.1/tcp/9096/p2p/12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm",
            "/ip4/192.241.X.X/tcp/9096/p2p/12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm",
            "/ip4/10.12.0.6/tcp/9096/p2p/12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm",
            "/ip4/172.17.0.1/tcp/9096/p2p/12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm",
            "/ip4/172.18.0.1/tcp/9096/p2p/12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm"
        ],
        "cluster_peers": [
            "12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ",
            "12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm"
        ],
        "cluster_peers_addresses": [
            "/ip4/127.0.0.1/tcp/9096/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ",
            "/ip4/127.0.0.1/udp/9096/quic/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ",
            "/ip4/172.17.0.1/tcp/9096/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ",
            "/ip4/172.17.0.1/udp/9096/quic/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ",
            "/ip4/192.168.1.19/tcp/9096/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ",
            "/ip4/192.168.1.19/udp/9096/quic/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ"
        ],
        "version": "0.12.1",
        "commit": "",
        "rpc_protocol_version": "/ipfscluster/0.12/rpc",
        "error": "",
        "ipfs": {
            "id": "QmcJw57d1X38X4kknLWHh4gKNhB4yfj9cQitwQpCEFFdz5",
            "addresses": [
                "/ip4/127.0.0.1/tcp/4001/p2p/QmcJw57d1X38X4kknLWHh4gKNhB4yfj9cQitwQpCEFFdz5",
                "/ip4/192.241.X.X/tcp/4001/p2p/QmcJw57d1X38X4kknLWHh4gKNhB4yfj9cQitwQpCEFFdz5",
                "/ip4/10.12.0.6/tcp/4001/p2p/QmcJw57d1X38X4kknLWHh4gKNhB4yfj9cQitwQpCEFFdz5",
                "/ip4/172.17.0.1/tcp/4001/p2p/QmcJw57d1X38X4kknLWHh4gKNhB4yfj9cQitwQpCEFFdz5",
                "/ip4/172.18.0.1/tcp/4001/p2p/QmcJw57d1X38X4kknLWHh4gKNhB4yfj9cQitwQpCEFFdz5",
                "/ip6/::1/tcp/4001/p2p/QmcJw57d1X38X4kknLWHh4gKNhB4yfj9cQitwQpCEFFdz5"
            ],
            "error": ""
        },
        "peername": "schema"
    }
]

Hi, this only shows two peers (I am guessing the internal ones). You mentioned an external one?

ERROR crdt: expected 1 as the cid version number, got: 10 crdt.go:308
ERROR crdt: reading varint: buffer too small crdt.go:308

This error worries me. It means some peer is broadcasting something that your peer cannot understand, and I think this could be a peer running a newer go-ds-crdt version, or something really strange is happening.

ERROR p2p-gorpc: failed to dial : all dials failed

This error might mean some peer is not reachable, at least at some point, but it worries me less. You should check that connectivity can be established between the peers. In particular, it seems port 9096 on 161.X.X.X is not reachable.
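
As an illustrative connectivity check (a sketch; 161.X.X.X and 192.241.X.X are the placeholder addresses from the logs above, and netcat is assumed to be installed), you can probe the cluster port from each side:

    # From the cloud host: is the Pi's forwarded cluster port reachable?
    nc -vz 161.X.X.X 9096

    # From the Pi: is the cloud host's cluster port reachable?
    nc -vz 192.241.X.X 9096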

Thank you for your comment. It helps with my work.
The two nodes are already connected from the external network, not the internal network:

/ip4/192.168.1.19/tcp/9096/p2p/12D3KooWSJPyg7BR8 (Pi 4 on the internal network)
/ip4/192.241.X.X/tcp/9096/p2p/12D3KooWSk (DigitalOcean, on the external network)

I was also worried about that, so I tried reinstalling ipfs-cluster, but I get the same error again:
ERROR crdt: expected 1 as the cid

The log seems to show they are connected, but why is nothing pinned and shared?

Thank you for your help

That looks a lot like an internal LAN IP though?

Can you run with --debug and share the logs?

root@schema:~# ipfs-cluster-ctl --debug peers ls
2020-04-27T06:23:09.615Z	DEBUG	cluster-ctl	ipfs-cluster-ctl/main.go:142	debug level enabled
2020-04-27T06:23:09.616Z	DEBUG	apiclient	client/request.go:62	GET: http://127.0.0.1:9094/peers
2020-04-27T06:23:09.760Z	DEBUG	apiclient	client/request.go:96	

Response body: (same two-peer JSON as in the peers ls output above)

12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ | 03pi4 | Sees 1 other peers
  > Addresses:
    - /ip4/127.0.0.1/tcp/9096/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ
    - /ip4/127.0.0.1/udp/9096/quic/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ
    - /ip4/172.17.0.1/tcp/9096/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ
    - /ip4/172.17.0.1/udp/9096/quic/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ
    - /ip4/192.168.1.19/tcp/9096/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ
    - /ip4/192.168.1.19/udp/9096/quic/p2p/12D3KooWSJPyg7BR8Dw6ZTimqiSM7aZ9hTdzQDR7cVPs9nzxmCGZ
  > IPFS: QmcnNFHQLgtAY93x4VKNg1dFn4oBzvLJUxho83Snk6te82
    - /ip4/127.0.0.1/tcp/4001/p2p/QmcnNFHQLgtAY93x4VKNg1dFn4oBzvLJUxho83Snk6te82
    - /ip4/172.17.0.1/tcp/4001/p2p/QmcnNFHQLgtAY93x4VKNg1dFn4oBzvLJUxho83Snk6te82
    - /ip4/192.168.1.19/tcp/4001/p2p/QmcnNFHQLgtAY93x4VKNg1dFn4oBzvLJUxho83Snk6te82
    - /ip6/::1/tcp/4001/p2p/QmcnNFHQLgtAY93x4VKNg1dFn4oBzvLJUxho83Snk6te82
    - /ip6/fd50:a32:62b4:0:ca69:91ca:bb46:7b30/tcp/4001/p2p/QmcnNFHQLgtAY93x4VKNg1dFn4oBzvLJUxho83Snk6te82
    - /ip6/fd50:a32:62b4::d55/tcp/4001/p2p/QmcnNFHQLgtAY93x4VKNg1dFn4oBzvLJUxho83Snk6te82
12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm | schema | Sees 1 other peers
  > Addresses:
    - /ip4/10.12.0.6/tcp/9096/p2p/12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm
    - /ip4/127.0.0.1/tcp/9096/p2p/12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm
    - /ip4/172.17.0.1/tcp/9096/p2p/12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm
    - /ip4/172.18.0.1/tcp/9096/p2p/12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm
    - /ip4/192.241.X.X/tcp/9096/p2p/12D3KooWSkiDfJnzhn6hgeDXoF7yRzzhqJxow3is63FVNVoymwgm
  > IPFS: QmcJw57d1X38X4kknLWHh4gKNhB4yfj9cQitwQpCEFFdz5
    - /ip4/10.12.0.6/tcp/4001/p2p/QmcJw57d1X38X4kknLWHh4gKNhB4yfj9cQitwQpCEFFdz5
    - /ip4/127.0.0.1/tcp/4001/p2p/QmcJw57d1X38X4kknLWHh4gKNhB4yfj9cQitwQpCEFFdz5
    - /ip4/172.17.0.1/tcp/4001/p2p/QmcJw57d1X38X4kknLWHh4gKNhB4yfj9cQitwQpCEFFdz5
    - /ip4/172.18.0.1/tcp/4001/p2p/QmcJw57d1X38X4kknLWHh4gKNhB4yfj9cQitwQpCEFFdz5
    - /ip4/192.241.X.X/tcp/4001/p2p/QmcJw57d1X38X4kknLWHh4gKNhB4yfj9cQitwQpCEFFdz5
    - /ip6/::1/tcp/4001/p2p/QmcJw57d1X38X4kknLWHh4gKNhB4yf

Additionally, the two nodes 03pi4 and schema are working over a private network.
Thanks

Sorry, I meant running ipfs-cluster-service --debug daemon until the crdt error happens (does it happen regularly, every minute or so?). You may need to upload the logs somewhere, or just email them to hector@ipfs.io.
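
A minimal way to capture those logs to a file (a sketch assuming the daemon is run by hand in a shell; if it runs under systemd, journalctl would be the place to look instead):

    # Run the daemon with debug logging and keep a copy of everything it prints.
    ipfs-cluster-service --debug daemon 2>&1 | tee cluster-debug.log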

Hello,

the logs you sent show that you are NOT running the 0.12.1 release; you are running something newer. I missed that you said you had installed manually using go. You probably installed from master, and the issue is that that peer is running the new crdt format that older peers cannot understand.

This was confusing because the version string is still 0.12.1 (as no newer release has been made). If you want to get things working, check out v0.12.1 and build that one.
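
A sketch of that rebuild, assuming a standard Go toolchain and the Makefile in the ipfs-cluster repository (target names may differ between versions):

    # Hypothetical rebuild from the release tag instead of master.
    git clone https://github.com/ipfs/ipfs-cluster
    cd ipfs-cluster
    git checkout v0.12.1
    make install   # should place ipfs-cluster-service and ipfs-cluster-ctl in $GOPATH/bin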

WOW… I couldn't fix this before. Thanks!
I cleaned out the newer build:

go clean ipfs-cluster-ctl ipfs-cluster-follow ipfs-cluster-service

and then reinstalled from the homepage. It's working.
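
For the Raspberry Pis, the equivalent is re-downloading the release binaries; a sketch assuming the linux-arm build of v0.12.1 from dist.ipfs.io (the exact filename is an assumption, so check the download page):

    # Hypothetical re-install of the v0.12.1 release binary on ARM.
    wget https://dist.ipfs.io/ipfs-cluster-service/v0.12.1/ipfs-cluster-service_v0.12.1_linux-arm.tar.gz
    tar xzf ipfs-cluster-service_v0.12.1_linux-arm.tar.gz
    sudo mv ipfs-cluster-service/ipfs-cluster-service /usr/local/bin/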

Thank you so much!