Using Docker Swarm to build a Docker cluster

Origin: http://www.cnblogs.com/520playboy/p/7873903.html

Preface

Before Docker 1.12, Swarm was an independent project. With the release of Docker 1.12 it was merged into Docker and became a subcommand (docker swarm). Docker Swarm is a tool for creating a server cluster with just a few commands, and it has the facilities a cluster needs built in, such as service discovery, networking, and load balancing.
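
If you are not sure whether your Docker build supports swarm mode, a quick check (a minimal sketch; the exact output varies by version) is:

docker version --format '{{ .Server.Version }}'
docker info | grep -i swarm      # shows "Swarm: active" or "Swarm: inactive"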

1. Environment

    CentOS 7.3

    Docker version 1.12.6

IP               Role
192.168.6.130    manager
192.168.6.131    worker
192.168.6.132    worker

2. Clusters

2.1, Initialize the cluster on 192.168.6.130

[root@jacky jacky]# docker swarm init --advertise-addr 192.168.6.130:2377
Swarm initialized: current node (4devzwb6rpimfpteqr17h2jx9) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-5r3ujri3th4038jp7q66zrfo56eqo0sabyage8ahc10121evog-ah9bptj9d7txdu6y91w7qxd81 \
    192.168.6.130:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

[root@jacky jacky]#

Description: --advertise-addr sets the IP address and port that this manager advertises to other nodes in the swarm.
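
If you lose the join command later, you can print it again on the manager at any time, and rotate the token if it has leaked (standard docker swarm join-token usage):

docker swarm join-token worker            # reprint the worker join command
docker swarm join-token manager           # reprint the manager join command
docker swarm join-token --rotate worker   # invalidate the old worker token and issue a new one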

2.2, After initialization, view the node list in the cluster

[root@jacky jacky]# docker node ls
ID                           HOSTNAME           STATUS  AVAILABILITY  MANAGER STATUS
4devzwb6rpimfpteqr17h2jx9 *  jacky.localdomain  Ready   Active        Leader

Description: there is currently only one node; its status is Ready and it is the Leader.
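
For more detail on a node than docker node ls shows, use docker node inspect; the built-in alias self refers to the node you are on:

docker node inspect self --pretty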

2.3, Join 192.168.6.131 to the cluster

First, switch to 192.168.6.131, then run:

[root@jacky jacky]#  docker swarm join \
> --token SWMTKN-1-5r3ujri3th4038jp7q66zrfo56eqo0sabyage8ahc10121evog-ah9bptj9d7txdu6y91w7qxd81 \
> 192.168.6.130:2377
This node joined a swarm as a worker.
[root@jacky jacky]#

2.4, Join 192.168.6.132 to the cluster

First, switch to 192.168.6.132, then run:

[root@jacky jacky]# docker swarm join \
> --token SWMTKN-1-5r3ujri3th4038jp7q66zrfo56eqo0sabyage8ahc10121evog-ah9bptj9d7txdu6y91w7qxd81 \
> 192.168.6.130:2377
This node joined a swarm as a worker.
[root@jacky jacky]#
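
If a join command hangs or is refused, the swarm ports are the usual cause. On CentOS 7 with firewalld running, they can be opened like this (a sketch; adapt to whatever firewall you actually use):

firewall-cmd --permanent --add-port=2377/tcp                      # cluster management traffic
firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp  # node discovery
firewall-cmd --permanent --add-port=4789/udp                      # overlay network (VXLAN) traffic
firewall-cmd --reload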

2.5, Back on 192.168.6.130, view the node information in the cluster

[root@jacky jacky]# docker node ls
ID                           HOSTNAME           STATUS  AVAILABILITY  MANAGER STATUS
4devzwb6rpimfpteqr17h2jx9 *  jacky.localdomain  Ready   Active        Leader
5mjtda2uzzu43v2xuxdco5ogr    jacky.localdomain  Ready   Active        
a8wdtux82dolsbgmv6ff0uu94    jacky.localdomain  Ready   Active

Explain:
AVAILABILITY column:
Shows whether the scheduler can assign tasks to the node:

  • Active means that the scheduler can assign tasks to the node.
  • Pause means that the scheduler will not assign new tasks to the node, but existing tasks keep running.
  • Drain means that the scheduler will not assign new tasks to the node. The scheduler shuts down all existing tasks and reschedules them on available nodes.

MANAGER STATUS column
Shows whether the node is a manager or a worker:

    • No value indicates a worker node that does not participate in swarm management.
    • Leader means that this node is the primary manager node, which makes all swarm management and orchestration decisions.
    • Reachable means that the node is a manager participating in the Raft consensus. If the leader becomes unavailable, this node is eligible to be elected as the new leader.
    • Unavailable means that the node is a manager that cannot communicate with other managers. If a manager node becomes unavailable, you should either join a new manager node to the cluster or promote a worker node to manager.
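
Both columns can also be read programmatically with a Go template (node2.jacky.com is just the example node from this cluster):

docker node inspect -f '{{ .Spec.Availability }}' node2.jacky.com   # active / pause / drain
docker node inspect -f '{{ .Spec.Role }}' node2.jacky.com           # manager / worker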

2.6. Creating an overlay network for the cluster

[root@jacky jacky]# docker network create --driver overlay skynet
843z9qb9c6douf7ir7l3iimqn
[root@jacky jacky]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
ad5125729239        bridge              bridge              local               
5a15f008fb38        host                host                local               
6echvokyh2m3        ingress             overlay             swarm               
28068704e605        none                null                local               
843z9qb9c6do        skynet              overlay             swarm

You can see that the newly created network is named skynet, uses the overlay driver, and has swarm scope.
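
To confirm the driver, scope, and the subnet that was allocated, inspect the network:

docker network inspect skynet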

3. Deployment Testing

3.1. Run the following on the manager node

docker service create -p 80:80 --name webserver --replicas 5 httpd
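
Note that this command does not attach the service to the skynet overlay created in 2.6; if you want the replicas to reach each other by name over that network, add --network (a variant of the command above, not what was run here):

docker service create -p 80:80 --network skynet --name webserver --replicas 5 httpd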

3.2. Viewing services in a cluster

[root@node1 jacky]# docker service ls
ID            NAME       REPLICAS  IMAGE                 COMMAND
0blhke4vywh8  viz        0/1       manomarks/visualizer  
7batkp4zv9f3  portainer  1/1       portainer/portainer   -H unix:///var/run/docker.sock
7kw3ovihgqgb  webserver  5/5       httpd
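
The REPLICAS column shows running/desired task counts. A service can be resized at any time:

docker service scale webserver=8     # grow to 8 replicas
docker service scale webserver=3     # shrink back to 3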

3.3. Viewing the webserver service's tasks in the cluster

[root@node1 jacky]# docker service ps webserver
ID                         NAME             IMAGE  NODE             DESIRED STATE  CURRENT STATE          ERROR
e0jqrg479muha7ow8bf34rv39  webserver.1      httpd  node1.jacky.com  Running        Running 2 hours ago    
23n9df2vww079h5rgkxlri4uy   \_ webserver.1  httpd  node1.jacky.com  Shutdown       Complete 2 hours ago   
8b8cs13u9yjsoru3ybyzzv9e6   \_ webserver.1  httpd  node2.jacky.com  Shutdown       Rejected 20 hours ago  "No such image: httpd:latest"
8lvx0ynaohlcfyp11jgji4m3q  webserver.2      httpd  node1.jacky.com  Running        Running 2 hours ago    
0q8lrzlybo1exl3bngwwfy386   \_ webserver.2  httpd  node1.jacky.com  Shutdown       Complete 2 hours ago   
eoq4a2sqx80a0hly6tizt5ucf   \_ webserver.2  httpd  node3.jacky.com  Shutdown       Shutdown 4 hours ago   
10juv2jp1ay9rjbu5hgw6yhs3   \_ webserver.2  httpd  node3.jacky.com  Shutdown       Failed 20 hours ago    "starting container failed: er..."
7xa8uoa8775i5nl0zzi373xbt   \_ webserver.2  httpd  node3.jacky.com  Shutdown       Failed 20 hours ago    "starting container failed: er..."
6puw8t22w0exgiwqzt5vi8fc1  webserver.3      httpd  node1.jacky.com  Running        Running 2 hours ago    
74osfdgl5ovp1c3e5s012b3f6   \_ webserver.3  httpd  node1.jacky.com  Shutdown       Complete 2 hours ago   
cwzuewjsolewap28ctvy3jxaa   \_ webserver.3  httpd  node3.jacky.com  Shutdown       Shutdown 4 hours ago   
9bb5q38zqk153uqdex1yfcu4e   \_ webserver.3  httpd  node3.jacky.com  Shutdown       Failed 20 hours ago    "starting container failed: er..."
1uvoczhsfz5ncp1emdueljwqa   \_ webserver.3  httpd  node3.jacky.com  Shutdown       Failed 20 hours ago    "starting container failed: er..."
0j9hsq18v3pzoecmrjg0qtynh  webserver.4      httpd  node1.jacky.com  Running        Running 2 hours ago    
dyhyy55xlkm3abw2cqm0k8y6h   \_ webserver.4  httpd  node1.jacky.com  Shutdown       Complete 2 hours ago   
8dymtjskyxhw5zj0ombpv0pm1   \_ webserver.4  httpd  node2.jacky.com  Shutdown       Shutdown 4 hours ago   
1b6u7rtknpgmwyfn3j3p94wm6   \_ webserver.4  httpd  node2.jacky.com  Shutdown       Rejected 20 hours ago  "No such image: httpd:latest"
1af72d5vpu1xg3u0qypnvlier   \_ webserver.4  httpd  node2.jacky.com  Shutdown       Rejected 20 hours ago  "No such image: httpd:latest"
897au9dm88i94l0scg69slu6z  webserver.5      httpd  node1.jacky.com  Running        Running 2 hours ago    
eqt7g4bk6e2kqy6qbnr13gfeh   \_ webserver.5  httpd  node1.jacky.com  Shutdown       Complete 2 hours ago   
7vq8u2eycraafwzlmcgg1e80d   \_ webserver.5  httpd  node3.jacky.com  Shutdown       Shutdown 4 hours ago   
ehu6f1xun8ha6lw7cyex7jjrw   \_ webserver.5  httpd  node2.jacky.com  Shutdown       Shutdown 4 hours ago

3.4, Visit http://192.168.6.130, http://192.168.6.131 or http://192.168.6.132

Note: 192.168.6.131 and 192.168.6.132 can also serve the page even though no httpd container is deployed on them, because swarm's routing mesh publishes the port on every node. The Docker cluster test is successful.
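
A quick way to verify the routing mesh from any machine that can reach the nodes (a minimal check; the page content depends on the httpd image):

curl -s http://192.168.6.131/ | head -n 3   # answers even though no httpd task runs on this node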

4. Install Portainer for graphical Docker management

4.1, Disable SELinux

setenforce 0
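
setenforce 0 only lasts until the next reboot. To make the change permanent on CentOS 7, set SELINUX=permissive (or disabled) in /etc/selinux/config, for example:

sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config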

4.2. Run the following on the manager node:

docker service create \
--name portainer \
--publish 9000:9000 \
--constraint 'node.role == manager' \
--mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
portainer/portainer \
-H unix:///var/run/docker.sock

4.3, Visit http://192.168.6.130:9000
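
Because the service is constrained to manager nodes, you can confirm where the task landed before opening the UI:

docker service ps portainer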

5. Other operations of swarm cluster

5.1. Update the availability state of a node

[root@node1 jacky]# docker node ls
ID                           HOSTNAME         STATUS  AVAILABILITY  MANAGER STATUS
4devzwb6rpimfpteqr17h2jx9 *  node1.jacky.com  Ready   Active        Leader
5mjtda2uzzu43v2xuxdco5ogr    node3.jacky.com  Ready   Active        
a8wdtux82dolsbgmv6ff0uu94    node2.jacky.com  Ready   Active        
[root@node1 jacky]# docker node update --availability Drain node2.jacky.com
node2.jacky.com
[root@node1 jacky]# docker node ls
ID                           HOSTNAME         STATUS  AVAILABILITY  MANAGER STATUS
4devzwb6rpimfpteqr17h2jx9 *  node1.jacky.com  Ready   Active        Leader
5mjtda2uzzu43v2xuxdco5ogr    node3.jacky.com  Ready   Active        
a8wdtux82dolsbgmv6ff0uu94    node2.jacky.com  Ready   Drain         
[root@node1 jacky]# docker node update --availability Active node2.jacky.com
node2.jacky.com
[root@node1 jacky]# docker node ls
ID                           HOSTNAME         STATUS  AVAILABILITY  MANAGER STATUS
4devzwb6rpimfpteqr17h2jx9 *  node1.jacky.com  Ready   Active        Leader
5mjtda2uzzu43v2xuxdco5ogr    node3.jacky.com  Ready   Active        
a8wdtux82dolsbgmv6ff0uu94    node2.jacky.com  Ready   Active        
[root@node1 jacky]#

5.2, Promote or demote nodes

You can promote a worker node to the manager role. This is useful when a manager node becomes unavailable or you want to take a manager offline for maintenance. Similarly, you can demote a manager node back to the worker role.
Whether you promote or demote nodes, you should always keep an odd number of manager nodes in the swarm: Raft requires a majority of managers to agree, so N managers tolerate the loss of (N-1)/2 of them, and an even count adds failure risk without adding fault tolerance.

  • Promote nodes
[root@node1 jacky]# docker node promote node3.jacky.com node2.jacky.com
[root@node1 jacky]# docker node ls
ID                           HOSTNAME         STATUS  AVAILABILITY  MANAGER STATUS
4devzwb6rpimfpteqr17h2jx9 *  node1.jacky.com  Ready   Active        Leader
5mjtda2uzzu43v2xuxdco5ogr    node3.jacky.com  Ready   Active        Unreachable
a8wdtux82dolsbgmv6ff0uu94    node2.jacky.com  Ready   Active        Reachable
  • Demote nodes

[root@node1 jacky]# docker node demote node3.jacky.com node2.jacky.com
Manager node3.jacky.com demoted in the swarm.
Manager node2.jacky.com demoted in the swarm.
[root@node1 jacky]# docker node ls
ID                           HOSTNAME         STATUS  AVAILABILITY  MANAGER STATUS
4devzwb6rpimfpteqr17h2jx9 *  node1.jacky.com  Ready   Active        Leader
5mjtda2uzzu43v2xuxdco5ogr    node3.jacky.com  Ready   Active        
a8wdtux82dolsbgmv6ff0uu94    node2.jacky.com  Ready   Active

5.3, Leave the docker swarm cluster

Run the following command on the worker node that should leave the cluster:

[root@node1 jacky]# docker swarm leave

To delete the record of a node that has left the cluster, run the following on the manager node:

[root@node1 jacky]# docker node rm node2.jacky.com
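
A node that still holds the manager role cannot leave directly: demote it first, or force it. Likewise, docker node rm refuses to remove a node that is still up unless forced:

docker node demote node2.jacky.com       # drop the manager role before leaving
docker swarm leave --force               # on the node itself: leave even as a manager
docker node rm --force node2.jacky.com   # on a manager: remove a node that has not left cleanly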

