1. Overview

(1) Basic environment

  • Four nodes: 192.168.1.171~174.
  • OS: CentOS 7.
  • Docker version: 19.03.8.

(2) Goals

  • Deploy stateless services with multiple, scalable replicas.
  • Deploy multi-replica services with mounted storage.
  • Expose service ports to clients outside the cluster.
  • Define a cluster network so services can be reached by service name.
  • Run multiple manager nodes.

2. Deployment

(1) Preparation

Change the hostnames

Swarm identifies nodes by hostname, so change each hostname to something meaningful; here the nodes become test171~test174.

[root@localhost ~]# hostnamectl set-hostname test171
[root@test171 ~]#

Edit /etc/hosts

Add:

192.168.1.171 test171
192.168.1.172 test172
192.168.1.173 test173
192.168.1.174 test174
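
The entries mirror the test171~test174 hostnames assigned above; a small loop can generate them to avoid typos:

```shell
#!/bin/sh
# Print the /etc/hosts entries for the four nodes; append the output to
# /etc/hosts on every node. Numbering follows the 171~174 scheme.
for i in 1 2 3 4; do
  printf '192.168.1.17%s test17%s\n' "$i" "$i"
done
```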

Open the required firewall ports

# cluster management port
firewall-cmd --zone=public --add-port=2377/tcp --permanent
# node-to-node communication ports
firewall-cmd --zone=public --add-port=7946/tcp --permanent
firewall-cmd --zone=public --add-port=7946/udp --permanent
# overlay network (VXLAN) ports
firewall-cmd --zone=public --add-port=4789/tcp --permanent
firewall-cmd --zone=public --add-port=4789/udp --permanent
# apply the changes
firewall-cmd --reload
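
The same rules can be driven from a single port list. The sketch below only prints each command (dry-run) so the list is easy to audit; drop the `echo`s to execute it on every node:

```shell
#!/bin/sh
# Ports Swarm needs: 2377/tcp (cluster management), 7946/tcp+udp
# (node-to-node gossip), 4789/tcp+udp (overlay/VXLAN traffic).
PORTS="2377/tcp 7946/tcp 7946/udp 4789/tcp 4789/udp"
for p in $PORTS; do
  echo firewall-cmd --zone=public --add-port="$p" --permanent
done
echo firewall-cmd --reload
```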

Install Docker

Not covered here.

(2) Deploy Swarm

Create the cluster

Create the cluster on test171, which will become a manager node.

[root@test171 ~]# docker swarm init --advertise-addr 192.168.1.171
Swarm initialized: current node (jp9mooyi4go7ytibxbcdevmmj) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-59w2aklummvra2w6cjjjagssoywqbgmnpc2qq01389bmhf2l8x-abh90k1sojtpgvsdolp52k88n 192.168.1.171:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Join nodes to the cluster

On each of the other nodes, run the command printed in the hint above to join the cluster:

docker swarm join --token SWMTKN-1-59w2aklummvra2w6cjjjagssoywqbgmnpc2qq01389bmhf2l8x-abh90k1sojtpgvsdolp52k88n 192.168.1.171:2377

On the manager node test171, run docker node ls to list the cluster's nodes:

[root@test171 data]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
jp9mooyi4go7ytibxbcdevmmj *   test171             Ready               Active              Leader              19.03.8
0zovd07bxee4cvk9papukatnm     test172             Ready               Active                                  19.03.8
i1ep20qu560v2omuxpi8x2xe8     test173             Ready               Active                                  19.03.8
zww4t1lebp2dtftareudqpx53     test174             Ready               Active                                  19.03.8

Run docker swarm join-token -q worker to print the cluster's worker join token:

[root@test171 data]# docker swarm join-token -q worker
SWMTKN-1-59w2aklummvra2w6cjjjagssoywqbgmnpc2qq01389bmhf2l8x-abh90k1sojtpgvsdolp52k88n
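
With the token in hand, joining the remaining nodes can be scripted over SSH. The sketch below only prints the commands (dry-run, with a placeholder token); in real use TOKEN would come from `docker swarm join-token -q worker` and the `echo` would be dropped:

```shell
#!/bin/sh
# Placeholder token so the loop can be shown standalone.
TOKEN="SWMTKN-placeholder"
for n in test172 test173 test174; do
  echo "ssh root@$n docker swarm join --token $TOKEN 192.168.1.171:2377"
done
```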

3. Usage

(1) Create an overlay network

Use an overlay network. Its two most practical benefits: first, containers can communicate across hosts throughout the swarm; second, services can be addressed by service name instead of container IP. Run the following on the manager node:

[root@test171 data]# docker network create --driver overlay --opt encrypted --subnet 192.168.84.0/24 test_net
reu4c2euepogf7cux08cij3bw
[root@test171 data]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
73b7f9e4b2f4        bridge              bridge              local
cdc88f3bec17        docker_gwbridge     bridge              local
3f2652b5241e        host                host                local
8166p62br08o        ingress             overlay             swarm
d1135a68eb99        none                null                local
reu4c2euepog        test_net            overlay             swarm
  • --opt encrypted: encrypt traffic on the network
  • --subnet xxx: specify the IP range
  • test_net: the network's name

(2) Create a local registry

Create a local registry to distribute image files among the nodes.

# label the first node (test171): key "func", value "registry"
docker node update --label-add func=registry test171
# constrain the registry service to run only on that node
docker service create --constraint 'node.labels.func == registry' --replicas 1 --network test_net --name registry --publish 5000:5000 registry:latest

Browse to http://192.168.1.171:5000/v2/_catalog; the registry answers on any of the four nodes' IP addresses, because the port is published on the cluster's ingress network.
On each node, edit /etc/docker/daemon.json, add "insecure-registries": ["192.168.1.171:5000"], and restart Docker.
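
A minimal sketch of the daemon.json edit, written to a scratch file here so the JSON can be checked; on a real node the target is /etc/docker/daemon.json (merge by hand if the file already has other keys), followed by `systemctl restart docker`:

```shell
#!/bin/sh
# Write a scratch copy of daemon.json with the insecure-registry entry.
cat > /tmp/daemon.json <<'EOF'
{
  "insecure-registries": ["192.168.1.171:5000"]
}
EOF
cat /tmp/daemon.json
```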

(3) Create a multi-replica stateless service

Run on the manager node. Create a service named testdemo with five replicas, pulling the image from a local registry. The image serves a RESTful API on port 8888.

Create the service

[root@test171 data]# docker service create --replicas 5 --network test_net --name testdemo 192.168.1.164:8001/dev/demo:build
image 192.168.1.164:8001/dev/demo:build could not be accessed on a registry to record
its digest. Each node will access 192.168.1.164:8001/dev/demo:build independently,
possibly leading to different nodes running different
versions of the image.

q2x00zgi5tzu5ieaftga879hh
overall progress: 5 out of 5 tasks 
1/5: running   [==================================================>] 
2/5: running   [==================================================>] 
3/5: running   [==================================================>] 
4/5: running   [==================================================>] 
5/5: running   [==================================================>] 
verify: Service converged

Check the service status

[root@test171 data]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                               PORTS
q2x00zgi5tzu        testdemo            replicated          5/5                 192.168.1.164:8001/dev/demo:build
[root@test171 data]# docker service ps testdemo
ID                  NAME                IMAGE                               NODE                DESIRED STATE       CURRENT STATE            ERROR                              PORTS
r2rbujc4ydhe        testdemo.1          192.168.1.164:8001/dev/demo:build   test174             Running             Running 42 seconds ago                                      
kzjr70sqquio        testdemo.2          192.168.1.164:8001/dev/demo:build   test171             Running             Running 43 seconds ago                                      
3zh2ets41dp9        testdemo.3          192.168.1.164:8001/dev/demo:build   test172             Running             Running 43 seconds ago                                      
ef9zn7v4j0nq        testdemo.4          192.168.1.164:8001/dev/demo:build   test172             Running             Running 43 seconds ago                                      
s2anzuf09g3b        testdemo.5          192.168.1.164:8001/dev/demo:build   test173             Running             Running 37 seconds ago                                      
ubqfejj0i235         \_ testdemo.5      192.168.1.164:8001/dev/demo:build   test173             Shutdown            Failed 43 seconds ago    "starting container failed: fa…"

Note that test172 is running two of the five replicas.

Inspect the network

[root@test171 data]# docker network inspect test_net
[
    {
        "Name": "test_net",
        "Id": "reu4c2euepogf7cux08cij3bw",
        "Created": "2020-03-24T16:29:04.51653612+08:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "192.168.84.0/24",
                    "Gateway": "192.168.84.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "604da0371c0a21404778f65ea1791219333d88ae070ee0edd5145e6404cff7f2": {
                "Name": "testdemo.2.kzjr70sqquiodpiyaadaxf52q",
                "EndpointID": "4ec2a9ba55585233373f01c660b06881dc9e6759065c75fc9ec2404ec8290c9a",
                "MacAddress": "02:42:c0:a8:54:87",
                "IPv4Address": "192.168.84.135/24",
                "IPv6Address": ""
            },
            "lb-test_net": {
                "Name": "test_net-endpoint",
                "EndpointID": "6d3b7a6d28b52d6261b7384732dfd553255bb7c610fc4ab25d88302eaf4047ba",
                "MacAddress": "02:42:c0:a8:54:7d",
                "IPv4Address": "192.168.84.125/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097",
            "encrypted": ""
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "43be15a3ab8b",
                "IP": "192.168.1.171"
            }
        ]
    }
]

Dynamically adjust the replica count

Scale the service from 5 replicas down to 2.

[root@test171 data]# docker service ps testdemo
ID                  NAME                IMAGE                               NODE                DESIRED STATE       CURRENT STATE            ERROR                              PORTS
r2rbujc4ydhe        testdemo.1          192.168.1.164:8001/dev/demo:build   test174             Running             Running 11 minutes ago                                      
kzjr70sqquio        testdemo.2          192.168.1.164:8001/dev/demo:build   test171             Running             Running 11 minutes ago                                      
3zh2ets41dp9        testdemo.3          192.168.1.164:8001/dev/demo:build   test172             Running             Running 11 minutes ago                                      
ef9zn7v4j0nq        testdemo.4          192.168.1.164:8001/dev/demo:build   test172             Running             Running 11 minutes ago                                      
s2anzuf09g3b        testdemo.5          192.168.1.164:8001/dev/demo:build   test173             Running             Running 11 minutes ago                                      
ubqfejj0i235         \_ testdemo.5      192.168.1.164:8001/dev/demo:build   test173             Shutdown            Failed 11 minutes ago    "starting container failed: fa…"   
[root@test171 data]#  docker service scale testdemo=2
testdemo scaled to 2
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: starting container failed: failed to get network during CreateEndpoint: ne… 
verify: Service converged 
[root@test171 data]# docker service ps testdemo
ID                  NAME                IMAGE                               NODE                DESIRED STATE       CURRENT STATE            ERROR                              PORTS
r2rbujc4ydhe        testdemo.1          192.168.1.164:8001/dev/demo:build   test174             Running             Running 12 minutes ago                                      
kzjr70sqquio        testdemo.2          192.168.1.164:8001/dev/demo:build   test171             Running             Running 12 minutes ago                                      
ubqfejj0i235        testdemo.5          192.168.1.164:8001/dev/demo:build   test173             Shutdown            Failed 12 minutes ago    "starting container failed: fa…"

(4) Publishing ports

Add --publish (or -p) when creating a service to expose its port. For example, after docker service create --replicas 1 --network test_net --name testnginx -p 80:80 nginx, nginx is reachable on port 80 from every machine in the cluster, even though only one replica is actually running: the routing mesh forwards requests on the published port to the node where the task lives.
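
A quick way to watch the routing mesh (assuming a testnginx service like the one above is running): probe port 80 on each node's IP; all four answer even though there is a single replica. Unreachable nodes show HTTP 000:

```shell
#!/bin/sh
# Probe each node's published port; -m 3 caps the wait per node, and
# "|| true" keeps the loop going past unreachable nodes.
for ip in 192.168.1.171 192.168.1.172 192.168.1.173 192.168.1.174; do
  curl -s -m 3 -o /dev/null -w "$ip -> HTTP %{http_code}\n" "http://$ip/" || true
done
```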

(5) Mounting volumes

When creating a service, pass --mount type=volume,src=testvolume,dst=/zjz to mount a named volume, or --mount type=bind,target=/container_data/,source=/host_data/ to bind a host directory.
Note that every node running a task of the service creates the volume or bind directory locally; this is per-node storage, not synchronized data, unless a shared-storage volume driver is used.
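
For example (hypothetical service name; the volume name and mount point are the ones shown above), a two-replica service with a named volume. This is a sketch of the command shape, not a tested deployment:

```shell
# Each task mounts the node-local volume "testvolume" at /zjz; a task on
# test172 and a task on test173 see two independent volumes. Use a
# cluster-aware volume driver (e.g. NFS-backed) for truly shared data.
docker service create --replicas 2 --network test_net --name testvol \
  --mount type=volume,src=testvolume,dst=/zjz nginx
```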

(6) Access from inside containers

Inside a container, services can be reached directly by service name; in this example the API is available at testdemo. Under the hood, the service name resolves to a virtual IP, and the swarm's built-in load balancer forwards requests to the containers backing the service.
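
This can be checked from inside any container attached to test_net. A sketch (the container ID placeholder is whichever testdemo task runs locally, and it assumes the image ships sh and curl):

```shell
# From inside a container on test_net, the service name "testdemo"
# resolves to a virtual IP and the API answers on it via port 8888.
docker exec -it <testdemo-container-id> sh -c 'curl -s http://testdemo:8888/'
```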

(7) Managing with Portainer

Portainer can manage a Docker Swarm cluster visually; when adding the endpoint, simply point it at one of the cluster's manager nodes.

4. Appendix

(1) Common commands

# list nodes
docker node ls
# bring a node back online
docker node update --availability active test172
# drain (take offline) a node
docker node update --availability drain test172
# remove a node
docker node rm --force test172
# create an overlay network
docker network create --driver overlay --opt encrypted --subnet 192.168.84.0/24 test_net
# list networks
docker network ls
# inspect a network
docker network inspect test_net
# create a service
docker service create --replicas 5 --network test_net --name testnginx --publish 80:80 nginx
# list services
docker service ls
# check a service's status
docker service ps testdemo
docker service inspect --pretty testdemo
# scale a service's replica count
docker service scale my_nginx=1
# update a service's image
docker service update --image nginx:new my_nginx
# remove dangling images
docker rmi $(docker images -f "dangling=true" -q)