docker swarm throwing an error "swarm already part of swarm"

Question:

docker swarm throws the error "This node is already part of a swarm" when I join a new node to the existing manager node

I am running Docker Swarm on my local machine. `docker swarm init` executes fine, but when I try to add a new worker node to the existing manager node, it throws an error saying the node is already part of a swarm and that I have to leave the swarm first.
$docker swarm init
Swarm initialized: current node (fn405d6jtk8mxbpvdrftr0np1) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-5tyw8ux789wpa7yyt75qbilb669tiw53pxriyxu48niznpmaka-7u63l4hom3h60myvtyw8p1mcj 192.168.2.219:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Then, when I use the token above to join as a worker, I get this error:

$    docker swarm join --token SWMTKN-1-5tyw8ux789wpa7yyt75qbilb669tiw53pxriyxu48niznpmaka-7u63l4hom3h60myvtyw8p1mcj 192.168.2.219:2377

Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
Asked By: Hushen


Answers:

The Docker swarm is a collection of one or more machines (physical or virtual, called nodes) that can run your containers as services. Nodes in the swarm can be managers or workers. Only on manager nodes can you see/modify the swarm status. Worker nodes only run containers. In order to run a container in the swarm you must create a service; that service will have zero or more containers depending on the scale that you set for the service.

To create a swarm, you run docker swarm init on the machine that will be a manager node. Then, on the other machines that you own, you run the docker swarm join command to add them to the swarm. You cannot add a machine that is already in the swarm. In your case, you are trying to add the manager that created the swarm to its own swarm.

When you initiate a swarm (with docker swarm init), the machine from which you initiated the swarm is already part of the swarm; you don’t need to do anything else to connect it.

After you initiate the swarm, you may (and should) add other machines as managers or workers.
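If you lose the join command printed by `docker swarm init`, it can be re-printed at any time on a manager. A minimal sketch (the wrapper function is only for illustration):

```shell
# Re-print the join command for new workers. Run this on a manager node.
print_worker_join() {
  docker swarm join-token worker
}
```

Running `docker swarm join-token manager` instead prints the command for adding new managers.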

At any point after you have created the swarm, you can create services and/or networks or deploy stacks.
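For instance, a first service could be created on the manager; the service name `web`, the `nginx` image, and the wrapper function below are illustrative, not from the question:

```shell
# Sketch: run a replicated service on the new swarm (from a manager node),
# then scale it. Wrapped in a function purely for readability.
create_web_service() {
  docker service create --name web --replicas 2 --publish 8080:80 nginx
  docker service scale web=3
}
```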

Answered By: Constantin Galbenu

The node you run docker swarm init on automatically becomes a Swarm Manager. Those join tokens are created for adding new nodes to the swarm to ensure you have a highly-available environment that is as resilient as possible.

Based on the comments above, you confirmed you were running the worker join token on the node where you ran docker swarm init. I’d advise you to read over the basics of Docker and Docker Swarm.

Answered By: user3170226

I think you are using the same manager node as a worker, which causes the error. You should join a separate node (it can be a virtual machine) as a worker.

Type docker info and check the Swarm section; it shows the swarm state of the node.
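A minimal sketch of that check, using `docker info`'s Go-template output to skip the join when the node is already `active` (the `join_if_needed` wrapper is made up, not a Docker command):

```shell
# Join the swarm only if this node is not already part of one.
join_if_needed() {
  local token="$1" manager="$2"
  local state
  state=$(docker info --format '{{.Swarm.LocalNodeState}}' 2>/dev/null)
  if [ "$state" = "active" ]; then
    echo "already in a swarm; run 'docker swarm leave' first" >&2
    return 1
  fi
  docker swarm join --token "$token" "$manager"
}
```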

From the worker (a different node), if you see "This node is already part of a swarm", leave the swarm with docker swarm leave --force and try connecting again.

Answered By: Sarath Kumar

follow these steps to add nodes to a swarm.
let's have 1 master1 node and 3 slave nodes.

  $ docker-machine create -d virtualbox master1
  $ docker-machine create -d virtualbox slave1
  $ docker-machine create -d virtualbox slave2
  $ docker-machine create -d virtualbox slave3

now ssh into master1

  $ docker-machine ssh master1

now initialize the swarm cluster with master1 as a Leader

  $ docker swarm init --advertise-addr <master1-ip>  # get the ip with: docker-machine ip master1
  $ exit

the above command prints a join token for the workers.
to use this token, come out of the master1 ssh and SSH into each slave node.

  $ docker-machine ssh slave1 
  $ docker swarm join --token <token> <ip>:<port>
  $ exit

now similarly for other slave nodes

  $ docker-machine ssh slave2
  $ docker swarm join --token <token> <ip>:<port>
  $ exit

  $ docker-machine ssh slave3
  $ docker swarm join --token <token> <ip>:<port>
  $ exit

to get a better view of the current master and slave nodes,
ssh into the master1 node

  $ docker-machine ssh master1
  $ docker node ls

the above command shows which nodes are already part of the swarm

to leave the swarm

  $ docker swarm leave --force

Answered By: Abhishek D K

I faced a similar issue. I had one slave node that was part of the docker swarm; then the machine got rebooted, and when I tried to add the same node back to the swarm it gave me the error below.
I had to run docker swarm leave on this node, then add it again, and it worked fine.

Error:-

[######@slave ~]$  docker swarm join --token SWMTKN-1-3nvdabrgv2sg03j9u3ww5or8ujamztqv0tihzo9ip26ewtc0vq-dw5ah2sj4nwbtuvd7lmpecjzp 192.168.xxx.xxx:2377

Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.

Resolution:-

[#####n@slave ~]$ docker swarm leave
Answered By: pravin bhande

I’m trying to do the same. I have 3 nodes. On the master node, I ran docker swarm init. Then on a slave node I ran sudo docker swarm join --token <Token-ID> <IP>:2377 but got an error: "Error response from daemon: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connec tcp :2377: connect: no route to host". Can someone assist please?
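"No route to host" on port 2377 usually means a firewall is blocking traffic between the nodes. Swarm needs TCP 2377 (cluster management), TCP/UDP 7946 (node discovery), and UDP 4789 (overlay network traffic) open. A hedged sketch, assuming the nodes use firewalld (the wrapper function is illustrative):

```shell
# Open the ports Docker Swarm uses; run on every node, then retry the join.
open_swarm_ports() {
  firewall-cmd --add-port=2377/tcp --permanent
  firewall-cmd --add-port=7946/tcp --permanent
  firewall-cmd --add-port=7946/udp --permanent
  firewall-cmd --add-port=4789/udp --permanent
  firewall-cmd --reload
}
```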

Answered By: Boodle