Table of Contents

  • docker Compose
  • docker Compose overview and installation
  • docker compose implementation
  • Creating containers with docker compose
  • Common docker-compose operations
  • Scaling with scale
  • docker Swarm
  • install Swarm
  • Building the Swarm cluster
  • Basic Swarm operations
  • Service
  • Internal


docker Compose

docker compose manages containers on a single host

docker Compose overview and installation

Official site: https://docs.docker.com/compose/

Compose is a tool for defining and running multi-container Docker applications.
With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

docker compose implementation

The same preliminary setup as before:

1. Create a new directory, e.g. composetest

2. Enter the directory and write the app.py code

3. Create the requirements.txt file

4. Write the Dockerfile

5. Write the docker-compose.yaml file

docker-compose.yaml is the default file name; a different one can be specified (via -f)

version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    networks:
      - app-net

  redis:
    image: "redis:alpine"
    networks:
      - app-net

networks:
  app-net:
    driver: bridge
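Steps 2 to 4 above don't show the file contents. A sketch along the lines of the official Compose quickstart could look like the following; the exact code, port, and base image are assumptions, not taken from this post:

```python
# --- app.py: a minimal Flask + Redis hit counter (assumed) ---
import redis
from flask import Flask

app = Flask(__name__)
# "redis" resolves through the compose network to the redis service
cache = redis.Redis(host='redis', port=6379)

@app.route('/')
def hello():
    count = cache.incr('hits')
    return 'Hello World! I have been seen {} times.\n'.format(count)

# --- requirements.txt ---
#   flask
#   redis

# --- Dockerfile ---
#   FROM python:3.7-alpine
#   WORKDIR /code
#   ENV FLASK_APP=app.py FLASK_RUN_HOST=0.0.0.0
#   COPY requirements.txt requirements.txt
#   RUN pip install -r requirements.txt
#   COPY . .
#   EXPOSE 5000
#   CMD ["flask", "run"]
```

With files like these in composetest next to docker-compose.yaml, docker-compose up -d builds the web image and starts both services.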

Creating containers with docker compose

docker-compose up -d

Common docker-compose operations

(1) Check the version

docker-compose version

(2) Create services from the yml file

docker-compose up

Specify a yaml file: docker-compose -f xxx.yaml up (note that -f goes before the subcommand)

Run in the background: docker-compose up -d

(3) List the services that started successfully

docker-compose ps

docker ps works too

(4) List images

docker-compose images

(5) Stop/start services

docker-compose stop/start

(6) Remove the services [this also removes the networks; pass -v to remove the volumes as well]

docker-compose down

(7) Enter a service container

docker-compose exec redis sh

Scaling with scale

(1) Modify docker-compose.yaml; the key change is removing the ports mapping from web, otherwise scaling fails because multiple replicas cannot all bind the same host port

version: '3'
services:
  web:
    build: .
    networks:
      - app-net

  redis:
    image: "redis:alpine"
    networks:
      - app-net

networks:
  app-net:
    driver: bridge
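An alternative to dropping ports entirely is to publish a host-port range, so each replica can bind a different host port. A sketch; the range here is an assumption and should cover your maximum replica count:

```yaml
  web:
    build: .
    ports:
      - "5000-5010:5000"   # each replica takes one free host port from the range
    networks:
      - app-net
```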

(2) Create the services

docker-compose up -d

(3) To scale the python containers up or down

docker-compose up --scale web=5 -d
docker-compose ps
docker-compose logs web

docker Swarm

install Swarm

Environment preparation:

(1) Create 3 CentOS machines from a Vagrantfile

[Prepare the 3 CentOS machines however suits your setup; vagrant + virtualbox is not required]

(2) Enter each CentOS box and enable root login, so you can connect with XShell

vagrant ssh manager-node/worker01-node/worker02-node
sudo -i
vi /etc/ssh/sshd_config
change PasswordAuthentication to yes
passwd    # set the root password
systemctl restart sshd

(3) From the mac (or Windows) host, ping each machine to check connectivity

ping 192.168.0.11/12/13

(4) Install docker engine on every machine

Building the Swarm cluster

(1) Enter the manager

Tip: a manager node can also serve as a worker node

docker swarm init --advertise-addr=192.168.0.11

Watch the log output to obtain the join command for the worker nodes

docker swarm join --token SWMTKN-1-0a5ph4nehwdm9wzcmlbj2ckqqso38pkd238rprzwcoawabxtdq-arcpra6yzltedpafk3qyvv0y3 192.168.0.11:2377

(2) Enter the two workers

docker swarm join --token SWMTKN-1-0a5ph4nehwdm9wzcmlbj2ckqqso38pkd238rprzwcoawabxtdq-arcpra6yzltedpafk3qyvv0y3 192.168.0.11:2377

The log prints

This node joined a swarm as a worker.

(3) On the manager node, check the cluster status

docker node ls

(4) Converting node types

A worker can be promoted to manager, keeping the managers highly available (an odd number of managers is recommended for quorum)

docker node promote worker01-node
docker node promote worker02-node

# use demote to downgrade
docker node demote worker01-node

Basic Swarm operations

Service

(1) Create a tomcat service

docker service create --name my-tomcat tomcat

(2) List the services in the current swarm

docker service ls

(3) View the service startup logs

docker service logs my-tomcat

(4) Inspect the service details

docker service inspect my-tomcat

(5) See which node my-tomcat runs on

docker service ps my-tomcat

Output

ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
u6o4mz4tj396        my-tomcat.1         tomcat:latest       worker01-node       Running             Running 3 minutes ago

(6) Scale the service horizontally

docker service scale my-tomcat=3
docker service ls
docker service ps my-tomcat

Output: notice that each of the other nodes now also runs a my-tomcat task

[root@manager-node ~]# docker service ps my-tomcat
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
u6o4mz4tj396        my-tomcat.1         tomcat:latest       worker01-node       Running             Running 8 minutes ago                        
v505wdu3fxqo        my-tomcat.2         tomcat:latest       manager-node        Running             Running 46 seconds ago                       
wpbsilp62sc0        my-tomcat.3         tomcat:latest       worker02-node       Running             Running 49 seconds ago

Now run docker ps on worker01-node; note that the container name is not the same as the service name, which is worth keeping in mind

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
bc4b9bb097b8        tomcat:latest       "catalina.sh run"   10 minutes ago      Up 10 minutes       8080/tcp            my-tomcat.1.u6o4mz4tj3969a1p3mquagxok

(7) If my-tomcat dies on some node, swarm automatically starts a replacement task to restore the desired replica count

[worker01-node]
docker rm -f containerid

[manager-node]
docker service ls
docker service ps my-tomcat

(8) Remove the service

docker service rm my-tomcat

Internal

Swarm uses this internal networking to do its load balancing.

Earlier, in the wordpress+mysql exercise, wordpress could reach mysql directly by its service name.

This shows two things: first, there must be DNS resolution involved; second, the two services' IPs can ping each other.

Exercise: create another service on the same overlay network as the tomcat service above, and experiment:

docker service create --name whoami -p 8000:8000 --network my-overlay-net -d jwilder/whoami
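Note that this command assumes an overlay network named my-overlay-net, with the tomcat service already attached to it. If those don't exist yet, the missing steps would look roughly like this (run on the manager node; the names simply mirror the command above):

```shell
docker network create -d overlay my-overlay-net
docker service create --name tomcat -p 8080:8080 --network my-overlay-net -d tomcat
```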

(1) Check the status of whoami

docker service ps whoami

(2) Ping each container from the other, i.e. container-to-container communication

# ping whoami from the tomcat container
docker exec -it 9d7d4c2b1b80 ping whoami
64 bytes from bogon (10.0.0.8): icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from bogon (10.0.0.8): icmp_seq=2 ttl=64 time=0.080 ms


# ping tomcat from the whoami container
docker exec -it 5c4fe39e7f60 ping tomcat
64 bytes from bogon (10.0.0.18): icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from bogon (10.0.0.18): icmp_seq=2 ttl=64 time=0.080 ms

(3) Scale whoami up

docker service scale whoami=3
docker service ps whoami     #manager,worker01,worker02

(4) Now ping the whoami service again, and access the whoami service

#ping
docker exec -it 9d7d4c2b1b80 ping whoami
64 bytes from bogon (10.0.0.8): icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from bogon (10.0.0.8): icmp_seq=2 ttl=64 time=0.084 ms

# access
docker exec -it 9d7d4c2b1b80 curl whoami:8000  [run it several times]
I'm 09f4158c81ae
I'm aebc574dc990
I'm 7755bc7da921

Summary: what does this experiment show? The IP that the whoami service exposes to other services stays the same, yet accessing port 8000 through the whoami name reaches different containers each time, which means the access path works as in the figure below.

In other words, the whoami service presents a single, stable VIP (virtual IP) entry point to other services, and requests through it are load-balanced across the tasks.
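Conceptually, the VIP acts like a small dispatcher in front of the task containers: one stable address, with successive connections spread across the tasks. A toy sketch of that behaviour follows; the rotation policy and the container IDs (reused from the curl output above) are illustrative only:

```python
from itertools import cycle

# Toy model of the swarm VIP: a single stable entry point whose
# successive requests are rotated across the service's task containers.
tasks = cycle(["09f4158c81ae", "aebc574dc990", "7755bc7da921"])

def curl_whoami():
    # every call hits the same "VIP", but lands on the next task
    return "I'm " + next(tasks)

for _ in range(3):
    print(curl_whoami())
```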