Preface

Once you have several physical machines, it is time to consider running them as a cluster. So how does Docker manage a cluster? Here we use swarm mode, which ships with Docker and provides cluster management and orchestration. Orchestration means managing a group of machines: host configuration, container scheduling, and so on.

    Swarm mode is a mode built into the Docker engine; it is easy to use and requires no additional software.



Using swarm mode

Before using swarm mode, Docker must be installed on every host. The architecture is as follows:

The machine with hostname docker-ce serves as the manager (management) node, while docker1 and docker2 are worker nodes.

1. Creating the swarm cluster

[root@docker-ce swarm]# docker swarm init --advertise-addr 192.168.1.222 (initialize the cluster; the nodes talk to each other via 192.168.1.222, default port 2377)
Swarm initialized: current node (pk4p936t4e03cpse3izuws07s) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-60h71geyd7z297jfy2icektmq3ha3n5nego2znytgrzqix768e-f36psbhrnrdn9h0bop6np22xm 192.168.1.222:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@docker-ce swarm]# docker info|grep -i swarm(swarm mode is now active)
Swarm: active
[root@docker-ce swarm]# netstat -tnlp|grep docker(two ports are listening by default: TCP 2377 for cluster management and TCP 7946 for node-to-node communication)
tcp6       0      0 :::2377                 :::*                    LISTEN      66488/dockerd       
tcp6       0      0 :::7946                 :::*                    LISTEN      66488/dockerd       
[root@docker-ce swarm]# docker network ls(by default an overlay network named ingress is created, along with a bridge network named docker_gwbridge)
NETWORK ID          NAME                DRIVER              SCOPE
641eeb86f6a4        bridge              bridge              local
c23afa61afaa        docker_gwbridge     bridge              local
65f6eed9f144        host                host                local
n8i6cpizzlww        ingress             overlay             swarm
b4d6492a85d5        none                null                local
[root@docker-ce swarm]# docker node ls(list the nodes in the cluster; with multiple manager nodes, the Raft protocol elects the primary, i.e. leader, node)
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
pk4p936t4e03cpse3izuws07s *   docker-ce           Ready               Active              Leader
[root@docker-ce swarm]# ls -l(the swarm configuration lives in /var/lib/docker/swarm: the certificates and the manager configuration, maintained via the Raft protocol)
total 8
drwxr-xr-x. 2 root root  75 Jan 26 10:13 certificates(TLS material used for secure communication)
-rw-------. 1 root root 151 Jan 26 10:13 docker-state.json(records the advertise address and port, as well as the local address and port)
drwx------. 4 root root  55 Jan 26 10:13 raft(Raft protocol data)
-rw-------. 1 root root  69 Jan 26 10:13 state.json(the manager's IP and port)
drwxr-xr-x. 2 root root  22 Jan 26 10:13 worker(records the tasks dispatched to this worker node)
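
A detail worth noting about the raft directory above: with multiple managers, Raft only keeps the swarm manageable while a majority (a quorum) of managers is reachable. A minimal sketch of that arithmetic, in illustrative Python rather than Docker's actual code:

```python
def quorum(n_managers: int) -> int:
    """Raft needs a strict majority of managers to elect a leader."""
    return n_managers // 2 + 1

def tolerated_failures(n_managers: int) -> int:
    """How many manager failures the swarm survives while staying manageable."""
    return n_managers - quorum(n_managers)

for n in (1, 3, 5):
    print(n, "managers -> quorum", quorum(n), "tolerates", tolerated_failures(n))
```

This is why odd manager counts (3, 5) are the usual recommendation: adding a fourth manager raises the quorum without raising fault tolerance.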

[root@docker2 ~]# docker swarm join --token SWMTKN-1-60h71geyd7z297jfy2icektmq3ha3n5nego2znytgrzqix768e-f36psbhrnrdn9h0bop6np22xm 192.168.1.222:2377(the other machines join the swarm cluster)
This node joined a swarm as a worker.


If you have forgotten the token needed to join the cluster, you can print it again with docker swarm join-token worker (or docker swarm join-token manager), then run the printed command directly on the node to join it as a worker or manager node.

Running docker node ls again shows the nodes now in the cluster.

A node's role can be changed at any time (updated with docker node update).

2. Opening the firewall

    For the nodes to communicate, the relevant firewall ports must be opened: TCP 2377 (cluster management), TCP and UDP 7946 (node-to-node communication), and UDP 4789 (overlay network traffic).

[root@docker-ce ~]# firewall-cmd --add-port=2377/tcp --permanent

[root@docker-ce ~]# firewall-cmd --add-port=7946/tcp --permanent

[root@docker-ce ~]# firewall-cmd --add-port=7946/udp --permanent

[root@docker-ce ~]# firewall-cmd --add-port=4789/udp --permanent

[root@docker-ce ~]# systemctl restart firewalld

3. Running services

    A service is a collection of tasks, and each task is a container. Running one service may therefore produce several tasks: for example, running several nginx instances decomposes into several nginx containers spread across the nodes.

[root@docker-ce ~]# docker service create --name web nginx(create a service named web from the nginx image)
oy2y8sb31c2jpn9owk6gdt7nk
overall progress: 1 out of 1 tasks 
1/1: running   [==================================================>] 
verify: Service converged 
[root@docker-ce ~]# docker service create --name frontweb --mode global nginx(create a service named frontweb in global mode from the nginx image)
ld835zsd9x1x4rdaj6u1i1rfy
overall progress: 3 out of 3 tasks 
pk4p936t4e03: running   [==================================================>] 
xvkxa7z22v75: running   [==================================================>] 
6xum2o1iqmya: running   [==================================================>] 
verify: Service converged 
[root@docker-ce ~]# docker service ls(list the running services)
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
ld835zsd9x1x        frontweb            global  
oy2y8sb31c2j        web                 replicated          1/1                 nginx:latest        
[root@docker-ce ~]# docker service ps web(show the details; by default the manager node can also run containers)
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
li2bfdt1dfjs        web.1               nginx:latest        docker-ce           Running             Running 13 minutes ago                       
[root@docker-ce ~]# docker service ps frontweb(show the details)
ID                  NAME                                 IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
s96twac1s4av        frontweb.6xum2o1iqmyaun2khb4b5z57h   nginx:latest        docker2             Running             Running 34 seconds ago                       
qtr35ehwuu26        frontweb.xvkxa7z22v757jnptndvtcc4t   nginx:latest        docker1             Running             Running 37 seconds ago                       
jujtu01q49o2        frontweb.pk4p936t4e03cpse3izuws07s   nginx:latest        docker-ce           Running             Running 55 seconds ago

When a service is created it passes through several states: prepared (pulling the image from the registry), then starting (launching the container), then a verification of the container's state, and finally running.

    When listing services you will see a MODE column with two possible values. replicated means replicas: it is the default, creates a single replica unless told otherwise, and exists mainly for high availability. global means exactly one task (container) must run on every machine; notice that the global-mode service above created three containers.
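
The difference between the two modes can be sketched as a toy scheduler (illustrative Python; the function is made up for this example and is not Docker's actual scheduler):

```python
from itertools import cycle

def schedule(nodes, mode, replicas=1):
    """Toy placement: replicated spreads N tasks round-robin over the nodes,
    while global puts exactly one task on every node."""
    if mode == "global":
        return {node: 1 for node in nodes}
    placement = {node: 0 for node in nodes}
    node_iter = cycle(nodes)
    for _ in range(replicas):
        placement[next(node_iter)] += 1
    return placement

nodes = ["docker-ce", "docker1", "docker2"]
print(schedule(nodes, "replicated", replicas=1))  # a single task in total
print(schedule(nodes, "global"))                  # one task on every node
```

This mirrors what the transcripts show: the replicated web service got one task, while the global frontweb service got one task per node, three in all.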


4. Scaling services up and down

    Since this is a cluster, high availability inevitably involves scaling services up and down, which is easy to do in swarm.

[root@docker-ce ~]# docker service scale web=3(scale up to 3, i.e. run three containers)
web scaled to 3
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service converged 
[root@docker-ce ~]# docker service ls(the REPLICAS column now shows 3)
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS       
oy2y8sb31c2j        web                 replicated          3/3                 nginx:latest        
[root@docker-ce ~]# docker service ps web(each of the three nodes is now running one task, i.e. one container)
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
li2bfdt1dfjs        web.1               nginx:latest        docker-ce           Running             Running 25 minutes ago                       
8dsrshssyd6t        web.2               nginx:latest        docker2             Running             Running 46 seconds ago                       
4i7vgzspdpts        web.3               nginx:latest        docker1             Running             Running 46 seconds ago
By default the management machine can also run containers, so one container runs on the manager node as well.

[root@docker-ce ~]# docker service scale web=2(scale the web service down to 2)
web scaled to 2
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 
verify: Service converged 
[root@docker-ce ~]# docker service ps web(view the running containers)
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
4i7vgzspdpts        web.3               nginx:latest        docker1             Running             Running 9 minutes ago                            
56s441jtydq4        web.5               nginx:latest        docker-ce           Running             Running about a minute ago
    To keep the swarm manager from running containers, change the node's availability from Active to Drain. If containers run on the manager and it goes down, and there is no multi-manager setup, the service can no longer be scheduled.
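
The effect of Drain can be pictured as a filter the scheduler applies before placing tasks (a sketch under a simplified Ready/Active model, not the real scheduler):

```python
def schedulable(nodes):
    """Only nodes that are Ready and Active receive new tasks;
    Drain nodes are skipped and their existing tasks are evicted."""
    return [name for name, (status, availability) in nodes.items()
            if status == "Ready" and availability == "Active"]

nodes = {
    "docker-ce": ("Ready", "Drain"),
    "docker1":   ("Ready", "Active"),
    "docker2":   ("Ready", "Active"),
}
print(schedulable(nodes))  # ['docker1', 'docker2']
```
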

[root@docker-ce ~]# docker node update --availability drain docker-ce(set the manager node to drain so that it no longer executes tasks)
docker-ce
[root@docker-ce ~]# docker node ls(the node's availability has changed from active to drain)
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
xvkxa7z22v757jnptndvtcc4t     docker1             Ready               Active              
6xum2o1iqmyaun2khb4b5z57h     docker2             Ready               Active              
pk4p936t4e03cpse3izuws07s *   docker-ce           Ready               Drain
[root@docker-ce ~]# docker service ps web(the container that was running on docker-ce is shut down and automatically migrated to a worker node)
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                 ERROR               PORTS
4i7vgzspdpts        web.3               nginx:latest        docker1             Running             Running 12 minutes ago                            
x2w8qdxuv2y5        web.5               nginx:latest        docker2             Running             Running about a minute ago                        
56s441jtydq4         \_ web.5           nginx:latest        docker-ce           Shutdown            Shutdown about a minute ago 

5. Automatic failover

    What happens when a machine in the cluster goes down? Swarm migrates its tasks automatically. The question to consider in production is whether, after such a failover, the remaining machines have enough resources to recreate those containers; so account for spare CPU and memory before placing containers.
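
That capacity concern can be made concrete with a toy failover sketch (illustrative Python; the 256 MB per task and the free_mem figures are invented for the example):

```python
def reschedule(tasks, nodes, down):
    """Move tasks off the failed node onto nodes with enough free memory.
    Returns the new assignment, or raises if capacity is insufficient."""
    assignment = dict(tasks)
    for task, node in tasks.items():
        if node != down:
            continue
        candidates = [n for n in nodes if n != down and nodes[n]["free_mem"] >= 256]
        if not candidates:
            raise RuntimeError(f"no capacity to fail over {task}")
        target = max(candidates, key=lambda n: nodes[n]["free_mem"])
        nodes[target]["free_mem"] -= 256   # reserve memory on the target
        assignment[task] = target
    return assignment

nodes = {"docker1": {"free_mem": 1024}, "docker2": {"free_mem": 512}}
tasks = {"web.1": "docker1", "web.2": "docker2", "web.3": "docker2"}
print(reschedule(tasks, nodes, down="docker2"))  # everything lands on docker1
```

If the surviving nodes cannot absorb the load, the failover simply cannot complete; that is the situation to plan against.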

[root@docker-ce ~]# docker service ps web(view how the service is distributed)
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
nrkfvb2js6p1        web.1               nginx:latest        docker1             Running             Running 10 seconds ago                       
q17kbbwd3ewr        web.2               nginx:latest        docker2             Running             Running 10 seconds ago                       
yvpijmfr4qrm        web.3               nginx:latest        docker2             Running             Running 10 seconds ago                       
[root@docker2 ~]# systemctl stop docker(stop the docker service to simulate a machine failure)
[root@docker-ce ~]# docker node ls(docker2 is now marked Down, i.e. the host is treated as failed)
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
xvkxa7z22v757jnptndvtcc4t     docker1             Ready               Active              
6xum2o1iqmyaun2khb4b5z57h     docker2             Down                Active              
pk4p936t4e03cpse3izuws07s *   docker-ce           Ready               Drain               Leader
[root@docker-ce ~]# docker service ps web(after the automatic migration, the tasks from the failed host are all marked shutdown)
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
nrkfvb2js6p1        web.1               nginx:latest        docker1             Running             Running about a minute ago                       
i76d3p7bmdht        web.2               nginx:latest        docker1             Running             Running 28 seconds ago                           
q17kbbwd3ewr         \_ web.2           nginx:latest        docker2             Shutdown
ohg093dh9zvt        web.3               nginx:latest        docker1             Running             Running 28 seconds ago                           
yvpijmfr4qrm         \_ web.3           nginx:latest        docker2             Shutdown            Running 50 seconds ago
[root@docker2 ~]# systemctl start docker(bring the host back online)

[root@docker-ce ~]# docker node ls(the host's status returns to ready, so it can run tasks again)
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
xvkxa7z22v757jnptndvtcc4t     docker1             Ready               Active              
6xum2o1iqmyaun2khb4b5z57h     docker2             Ready               Active              
pk4p936t4e03cpse3izuws07s *   docker-ce           Ready               Drain               Leader

[root@docker-ce ~]# docker service ps web(checking the distribution again: the tasks are not migrated back; everything stays where it is)
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                 ERROR               PORTS
nrkfvb2js6p1        web.1               nginx:latest        docker1             Running             Running 5 minutes ago                             
i76d3p7bmdht        web.2               nginx:latest        docker1             Running             Running 4 minutes ago                             
q17kbbwd3ewr         \_ web.2           nginx:latest        docker2             Shutdown            Shutdown about a minute ago                       
ohg093dh9zvt        web.3               nginx:latest        docker1             Running             Running 4 minutes ago                             
yvpijmfr4qrm         \_ web.3           nginx:latest        docker2             Shutdown            Shutdown about a minute ago 

6. Accessing services

    There are two kinds of services: internal services, which expose no ports to the outside, and external services, which publish ports, i.e. map them onto the hosts.
[root@docker-ce ~]# docker service create --name web --replicas=2 httpd(create an httpd service with 2 replicas)
60c9i7de4mu4x9n3ia03nrh52
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 
verify: Service converged 
[root@docker-ce ~]# docker service ps web(see which hosts run the containers)
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
wb56gj6zbtlw        web.1               httpd:latest        docker2             Running             Running 18 seconds ago                       
n6hupzwlchss        web.2               httpd:latest        docker1             Running             Running 18 seconds ago
[root@docker1 ~]# docker ps (log on to the local host and view its running containers)
CONTAINER ID        IMAGE               COMMAND              CREATED             STATUS              PORTS               NAMES
19a6ed6ec945        httpd:latest        "httpd-foreground"   23 seconds ago      Up 22 seconds       80/tcp              web.2.n6hupzwlchss01fbcec6jsp27
[root@docker1 ~]# docker exec 19a6ed6ec945 ip addr show(check the running container's IP address)
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
201: eth0@if202: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@docker1 ~]# curl 172.17.0.2(access by IP only works from this node; this is an internal network, i.e. docker_gwbridge)
<html><body><h1>It works!</h1></body></html>

[root@docker-ce ~]# docker service update --publish-add 8000:80 web(publish a host port so that the service can be reached from outside)
web
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 
verify: Service converged 
[root@docker-ce ~]# docker service ps web(the original containers are shut down and new ones are started)
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE             ERROR               PORTS
pktheo7u3178        web.1               httpd:latest        docker2             Running             Running 11 seconds ago                        
wb56gj6zbtlw         \_ web.1           httpd:latest        docker2             Shutdown            Shutdown 13 seconds ago                       
j5mrcqlozlhz        web.2               httpd:latest        docker1             Running             Running 16 seconds ago                        
n6hupzwlchss         \_ web.2           httpd:latest        docker1             Shutdown            Shutdown 20 seconds ago                       
[root@docker-ce ~]# curl 192.168.1.222:8000(every node, manager or worker, listens on port 8000, so any of them can serve the request)
<html><body><h1>It works!</h1></body></html>
[root@docker-ce ~]# curl 192.168.1.32:8000
<html><body><h1>It works!</h1></body></html>
[root@docker-ce ~]# curl 192.168.1.33:8000
<html><body><h1>It works!</h1></body></html>
[root@docker-ce ~]# netstat -ntlp|grep 8000(every machine in the cluster listens on port 8000, whether or not it runs one of the containers)
tcp6       0      0 :::8000                 :::*                    LISTEN      66488/dockerd
    This is the routing mesh feature: swarm load-balances internally, over the overlay network that swarm creates automatically.
    The main downside of publishing a port is that the port number is exposed to the outside, and after a failover multiple services on the same host can collide over the same port.
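
Conceptually the routing mesh behaves like this sketch: every node accepts connections on the published port and spreads them over all of the service's task backends, wherever they run (the real implementation uses IPVS inside the ingress network; the IPs below are made up):

```python
from itertools import cycle

class RoutingMesh:
    """Any node accepts a connection on the published port and balances
    it across all task IPs, even if it runs no task itself."""
    def __init__(self, task_ips):
        self._backends = cycle(task_ips)

    def handle(self, entry_node, port):
        # which node the request arrived at does not matter
        return next(self._backends)

mesh = RoutingMesh(["10.255.0.7", "10.255.0.8"])
print([mesh.handle("docker-ce", 8000) for _ in range(4)])
```

This is why the curl against each of the three node addresses above succeeds even though only two nodes run an httpd container.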

7. Service discovery

    In a cluster with automatic failover, IP addresses change; any service pinned to a fixed IP would break its consumers. Hence the need for service discovery: service names are resolved automatically via DNS and requests are balanced to the right containers.
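
Conceptually, the embedded DNS server keeps two kinds of records per service; the table below is a toy mirror of the nslookup output shown later in this section, not live data:

```python
# Toy view of swarm's embedded DNS (127.0.0.11); the addresses mirror
# the nslookup output in this section.
records = {
    "web": ["10.0.0.5"],                    # the service's virtual IP (VIP)
    "tasks.web": ["10.0.0.6", "10.0.0.7"],  # the real IP of every task
}

def resolve(name):
    """A service name resolves to the VIP; tasks.<service> lists task IPs."""
    return records[name]

print(resolve("web"))        # ['10.0.0.5']
print(resolve("tasks.web"))  # ['10.0.0.6', '10.0.0.7']
```

Clients normally talk to the stable VIP; tasks.<service> is useful when a client wants to reach each backend individually.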

[root@docker-ce ~]# docker network create --driver overlay kel(create an overlay network; the default ingress network does not provide service discovery)
nomp3f50to1s4gn6ke4zpxn8n
[root@docker-ce ~]# docker service create --name web --replicas=2 --network=kel nginx(attach the service to the user-created overlay network so that name resolution works)
uo5kuxad0wojn5j24moudi4l5
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 
verify: Service converged 
[root@docker-ce ~]# docker service create --name ds --network=kel busybox sleep 100000(create another service that depends on the first one)
hwcn4esjpgqyae7r9bqcn8a9m
overall progress: 1 out of 1 tasks 
1/1: running   [==================================================>] 
verify: Service converged 
[root@docker-ce ~]# docker service ps ds(view the running service)
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
7gknpd69eti1        ds.1                busybox:latest      docker2             Running             Running 12 seconds ago                       

[root@docker2 ~]# docker exec 66ece5551fc5 ping -c 2 web(DNS resolution works automatically)
PING web (10.0.0.5): 56 data bytes
64 bytes from 10.0.0.5: seq=0 ttl=64 time=0.833 ms
64 bytes from 10.0.0.5: seq=1 ttl=64 time=0.323 ms
--- web ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.323/0.578/0.833 ms
[root@docker2 ~]# docker exec 66ece5551fc5 nslookup web(look up the VIP address)
Server:    127.0.0.11
Address 1: 127.0.0.11
Name:      web
Address 1: 10.0.0.5
[root@docker2 ~]# docker exec 66ece5551fc5 nslookup tasks.web(look up the IP address of each task)
Server:    127.0.0.11
Address 1: 127.0.0.11
Name:      tasks.web
Address 1: 10.0.0.7 b2f14e41a97a.kel
Address 2: 10.0.0.6 web.1.78a2eugjholl3c35xa361kdaj.kel

8. Rolling updates

    When an update is needed, swarm can follow a configured strategy: how many containers to update in parallel, how long to wait between batches, and so on.
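
The parallelism/delay strategy amounts to splitting the task list into batches (a minimal sketch; swarm updates one batch, waits --update-delay, then moves on to the next):

```python
def update_batches(tasks, parallelism):
    """Split tasks into batches of `parallelism`; swarm updates one batch,
    waits the configured delay, then proceeds to the next batch."""
    return [tasks[i:i + parallelism] for i in range(0, len(tasks), parallelism)]

tasks = ["web.1", "web.2", "web.3", "web.4", "web.5"]
print(update_batches(tasks, parallelism=2))
# → [['web.1', 'web.2'], ['web.3', 'web.4'], ['web.5']]
```

With parallelism 2 and a 1m delay, five replicas would be updated in three waves roughly two minutes apart, so the service never loses more than two tasks at once.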
[root@docker-ce ~]# docker service update --image nginx:1.10 web(update the web service's image)
web
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 
verify: Service converged 
[root@docker-ce ~]# docker service ps web(during the update each old container is shut down first, then replaced)
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
512sgoen7yu9        web.1               nginx:1.10          docker1             Running             Running 6 minutes ago                        
rz9thj8s1iv6         \_ web.1           nginx:1.9           docker1             Shutdown            Shutdown 6 minutes ago                       
5hsl7q2486iz        web.2               nginx:1.10          docker2             Running             Running 6 minutes ago                        
czgdnq3lmhgu         \_ web.2           nginx:1.9           docker2             Shutdown            Shutdown 7 minutes ago  
[root@docker-ce ~]# docker service update --image nginx:1.11 web
web
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 
verify: Service converged 
[root@docker-ce ~]# docker service update --update-parallelism 2 --update-delay 1m web(set the update parallelism and the delay between batches)
web
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 
verify: Service converged 
[root@docker-ce ~]# docker service inspect web --pretty(inspect the service; the configured update parameters are visible)
ID: cxsk18saisqsycc9bn1m768xx
Name: web
Service Mode: Replicated
 Replicas: 2
Placement:
UpdateConfig:
Parallelism: 2
 Delay: 1m0s
 On failure: pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism: 1
 On failure: pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image: nginx:1.11@sha256:e6693c20186f837fc393390135d8a598a96a833917917789d63766cab6c59582
Resources:
Endpoint Mode: vip

9. Using labels to control where a service runs

    Labels control which node each task runs on: set a label attribute on each node, then pin tasks by label (in production you may want to spread containers across machines by hand to achieve high availability).
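
The constraint matching can be sketched as a simple filter over node labels (illustrative Python, using the same node.labels.ncname==... syntax as the commands below):

```python
def eligible_nodes(nodes, constraint):
    """Filter nodes by a node.labels.<key>==<value> constraint,
    the same shape as docker's --constraint flag."""
    lhs, value = constraint.split("==")
    key = lhs.removeprefix("node.labels.")
    return [name for name, labels in nodes.items() if labels.get(key) == value]

nodes = {
    "docker1":   {"ncname": "docker1"},
    "docker2":   {"ncname": "docker2"},
    "docker-ce": {},
}
print(eligible_nodes(nodes, "node.labels.ncname==docker1"))  # ['docker1']
```

Only the nodes that survive the filter are candidates for the service's tasks, which is exactly what the transcripts below demonstrate.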
[root@docker-ce ~]# docker node update --label-add ncname=docker1 docker1(update the node's attributes by adding a label)
docker1
[root@docker-ce ~]# docker node update --label-add ncname=docker2 docker2(update the node's attributes by adding a label)
docker2
[root@docker-ce ~]# docker node inspect docker1 --pretty(view the label that was set)
ID: xvkxa7z22v757jnptndvtcc4t
Labels:
 - ncname=docker1
[root@docker-ce ~]# docker service create --name web --constraint node.labels.ncname==docker1 --replicas 2 nginx(pin the tasks to a specific machine based on the label's value)
m0126tvjof5owsn9crke00w63
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 
verify: Service converged 
[root@docker-ce ~]# docker service ps web(the tasks run on the specified machine)
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
igfs9wrv2j0w        web.1               nginx:latest        docker1             Running             Running 55 seconds ago                       
ibcdq1i6hjkx        web.2               nginx:latest        docker1             Running             Running 55 seconds ago                       
[root@docker-ce ~]# docker service update --constraint-rm node.labels.ncname==docker1 web(remove the label constraint)
web
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 
verify: Service converged 
[root@docker-ce ~]# docker service update --constraint-add node.labels.ncname==docker2 web(the service migrates to the other host)
web
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 
verify: Service converged 
[root@docker-ce ~]# docker service ps web(view the result; this change can also be rolled back, using docker service rollback)
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE             ERROR               PORTS
0uhx5oewt7tp        web.1               nginx:latest        docker2             Running             Running 9 seconds ago                         
ykykhvuydnmu         \_ web.1           nginx:latest        docker1             Shutdown            Shutdown 11 seconds ago                       
igfs9wrv2j0w         \_ web.1           nginx:latest        docker1             Shutdown            Shutdown 43 seconds ago                       
r2t52s9py825        web.2               nginx:latest        docker2             Running             Running 13 seconds ago                        
vssp6qqc5eez         \_ web.2           nginx:latest        docker2             Shutdown            Shutdown 15 seconds ago                       
ibcdq1i6hjkx         \_ web.2           nginx:latest        docker1             Shutdown            Shutdown 47 seconds ago                       

10. Health checks

    How do we perform health checks at the application level? The container's state alone is not enough for that. The relevant parameters can be set when running a service; they are listed below.
    Parameters available in a Dockerfile (HEALTHCHECK instruction):

• --interval=DURATION (default: 30s)
• --timeout=DURATION (default: 30s)
• --start-period=DURATION (default: 0s)
• --retries=N (default: 3)
Parameters for docker service create:

--health-cmd string                  Command to run to check health
--health-interval duration           Time between running the check (ms|s|m|h)
--health-retries int                 Consecutive failures needed to report unhealthy
--health-start-period duration       Start period for the container to initialize before counting retries towards unstable (ms|s|m|h)
--health-timeout duration            Maximum time to allow one check to run (ms|s|m|h)
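
The --health-retries semantics can be sketched as a small state machine: only a streak of consecutive failed checks flips the status to unhealthy (illustrative Python mirroring the FailingStreak field that docker inspect shows below):

```python
class HealthState:
    """Track consecutive check failures; a container is only marked
    unhealthy after `retries` consecutive failures (--health-retries)."""
    def __init__(self, retries=3):
        self.retries = retries
        self.failing_streak = 0
        self.status = "healthy"

    def record(self, exit_code):
        if exit_code == 0:
            self.failing_streak = 0      # one success resets the streak
            self.status = "healthy"
        else:
            self.failing_streak += 1
            if self.failing_streak >= self.retries:
                self.status = "unhealthy"
        return self.status

h = HealthState(retries=3)
print([h.record(c) for c in (0, 1, 1, 1)])
# → ['healthy', 'healthy', 'healthy', 'unhealthy']
```

This is why a single slow or flaky probe does not immediately take a task out of service.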
    An example using a Dockerfile:

[root@docker-ce kel]# cat dockerfile (the health check is configured in the dockerfile)
FROM redis 
RUN echo '/usr/local/bin/redis-cli info |grep "role:master"'>/usr/local/bin/kel
RUN chmod u+x /usr/local/bin/kel
HEALTHCHECK CMD  kel
[root@docker-ce kel]# docker service ps kel(view the service)
ID                  NAME                IMAGE                                NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
k2ewbrauv0dv        kel.1               kellyseeme/redishealthcheck:latest   docker2             Running             Running about a minute ago                       
yzadjp443091        kel.2               kellyseeme/redishealthcheck:latest   docker1             Running             Running about a minute ago  

[root@docker1 ~]# docker inspect f4759f2dcfbc(view the health-check configuration and its log)
            "Health": {
                "Status": "healthy",
                "FailingStreak": 0,
                "Log": [
                    {
                        "Start": "2018-01-27T16:51:44.945773759+08:00",
                        "End": "2018-01-27T16:51:45.633754663+08:00",
                        "ExitCode": 0,
                        "Output": "role:master\r\n"
            "Healthcheck": {
                "Test": [
                    "CMD-SHELL",
                    "kel"