Contents

1. Understanding docker0

2. --link

3. Custom networks

4. Connecting networks

5. Hands-on: deploying a Redis cluster


1. Understanding docker0

Clean up the lab environment:

# Remove all containers
docker rm -f $(docker ps -aq)
# Remove all images
docker rmi -f $(docker images -aq)


Question: how does Docker handle network access for containers?

# Test: run an alpine container
# Check the addresses inside it: on startup the container gets an eth0@if13 interface with an IP assigned by Docker!
[root@fedora ~]# docker run -it --name alpine01 alpine
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@fedora ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
   ......
   ......
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:b1:ad:28:e2 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
 ......
 # A new interface shows up on the host, paired with the one inside the Docker container
13: veth398a65c@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 62:10:aa:cb:5b:c8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::6010:aaff:fecb:5bc8/64 scope link 
       valid_lft forever preferred_lft forever
# Question: can the Linux host ping into the container? Yes!
#           Can the container ping the outside world?   Yes!

# Enter the container
[root@fedora ~]# docker exec -it alpine01 /bin/sh
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
......
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 192.168.10.128
PING 192.168.10.128 (192.168.10.128): 56 data bytes
64 bytes from 192.168.10.128: seq=0 ttl=64 time=0.490 ms
64 bytes from 192.168.10.128: seq=1 ttl=64 time=0.206 ms

# Back on the Linux host
[root@fedora ~]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.137 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.155 ms

How it works

1. Every time we start a Docker container, Docker assigns the container an IP address.

As soon as Docker is installed there is a docker0 bridge. This is bridge mode, and the underlying technique is the veth pair.

2. Start another container and test: another pair of interfaces appears.

docker run -it --name alpine02 alpine /bin/sh

ip addr

Notice that these container interfaces come in pairs.

A veth pair is a pair of virtual device interfaces that always appear together: one end attaches to the protocol stack, and the two ends are wired to each other.

Thanks to this property, a veth pair acts as a bridge connecting all kinds of virtual network devices.

OpenStack, connections between Docker containers, and OVS connections all use the veth-pair technique.
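
You can verify the pairing yourself: the @if13 suffix on the container's eth0 is the interface index of its peer on the host. A minimal sketch (assuming the alpine01 container from above):

# Read the peer ifindex from inside the container...
peer=$(docker exec alpine01 cat /sys/class/net/eth0/iflink)
# ...then find the matching veth on the host (prints e.g. "13: veth398a65c@if12: ...")
ip -o link | grep "^${peer}:"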

3. Test whether alpine01 and alpine02 can ping each other.

They can!

[root@fedora ~]# docker exec -it alpine02 /bin/sh
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=4.337 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.221 ms

[root@fedora ~]# docker exec -it alpine01 /bin/sh
/ # ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.247 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.221 ms

Conclusion: alpine01 and alpine02 share one router, docker0.

When no network is specified, every container is routed through docker0, and Docker assigns it the next available IP by default.

Summary: Docker uses Linux bridging; docker0 on the host acts as the bridge for Docker containers.


All network interfaces in Docker are virtual, and virtual interfaces forward efficiently (e.g., passing files over the internal network).

As soon as a container is deleted, its veth pair is gone!
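
A quick way to watch this happen (a sketch; it assumes the alpine02 container from above is still running):

# List the veth interfaces currently enslaved to docker0
ip link show master docker0
# Delete the container and list again: its veth entry is gone
docker rm -f alpine02
ip link show master docker0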

Consider a scenario: we write a microservice with database url=ip. The project restarts and the database IP changes. We would like a way to handle this: can we reach a container by name instead?

2.--link

[root@fedora ~]# docker exec -it alpine02 ping alpine01
ping: bad address 'alpine01'  # the name does not resolve

# Run alpine03 with --link alpine02
[root@fedora ~]# docker run -it --name alpine03 --link alpine02 alpine
# alpine03 can ping alpine02 by name
/ # ping alpine02
PING alpine02 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.540 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.188 ms

# alpine02 cannot ping alpine03 back; the link is one-way
[root@fedora ~]# docker exec -it alpine02 ping alpine03
ping: bad address 'alpine03'

Digging deeper: docker network inspect <network id> shows both containers on the same subnet, and docker inspect on alpine03 reveals the link:

[root@fedora ~]# docker inspect alpine03
......
"Links": [
                "/alpine02:/alpine03/alpine02"
            ],
......

Looking at /etc/hosts inside alpine03, there is an entry for alpine02:

[root@fedora ~]# docker exec -it alpine03 cat /etc/hosts
127.0.0.1	localhost
......
......
172.17.0.3	alpine02 032cb6efc4a0
172.17.0.4	a5e81b380663

In essence, --link just adds a host mapping to the container's /etc/hosts file.
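
--link also accepts a name:alias form, which lands in /etc/hosts the same way. A quick sketch (the alpine-link name and db alias are made up for illustration):

# The db alias resolves inside the new container via /etc/hosts
docker run -it --name alpine-link --link alpine02:db alpine
/ # ping db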

These days --link is no longer recommended!

Use a custom network instead of docker0!

The docker0 problem: it does not support access by container name!

3. Custom networks

docker network --help
connect    -- Connect a container to a network
create     -- Creates a new network with a name specified by the
disconnect -- Disconnects a container from a network
inspect    -- Displays detailed information on a network
ls         -- Lists all the networks created by the user
prune      -- Remove all unused networks
rm         -- Deletes one or more networks
# List all Docker networks
[root@fedora ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
837647f8ed23   bridge    bridge    local
652ac854120d   host      host      local
07e1ea9c8832   none      null      local

Network modes

bridge: bridged through docker0 (the default; networks you create yourself also use bridge mode)

none: no network configured; rarely used

host: share the host's network stack (see the sketch after this list)

container: join another container's network namespace (rarely used; very limited)
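
For instance, host mode needs no port mapping at all; a minimal sketch (nginx is just an example image, not part of this walkthrough):

# The container shares the host's network stack, so nginx listens
# directly on the host's port 80; no -p mapping is involved
docker run -d --name web-host --net host nginx
curl localhost:80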

Test

A plain docker run implicitly uses --net bridge, and that bridge is our docker0.

bridge here means docker0:

docker run -d -P --name tomcat01 tomcat is equivalent to => docker run -d -P --name tomcat01 --net bridge tomcat

docker0's drawback: it is the default, and container names cannot be resolved on it. --link can patch around that, but it is clumsy!

Instead, we can create a custom network:

[root@fedora ~]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynetwork
f5a9afb1c2d5f51632c0faefa971125d26dc0d187e2c85fba118ae739b1ad9ae
[root@fedora ~]# docker network ls
NETWORK ID     NAME        DRIVER    SCOPE
837647f8ed23   bridge      bridge    local
652ac854120d   host        host      local
f5a9afb1c2d5   mynetwork   bridge    local
07e1ea9c8832   none        null      local
[root@fedora ~]# docker network inspect mynetwork 
[
    {
        "Name": "mynetwork",
        "Id": "f5a9afb1c2d5f51632c0faefa971125d26dc0d187e2c85fba118ae739b1ad9ae",
        "Created": "2022-06-25T15:16:19.081226162+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
......
......

Start two alpine containers on the new network and check again:

[root@fedora ~]# docker run -it --name alpine01 --net mynetwork alpine
[root@fedora ~]# docker run -it --name alpine02 --net mynetwork alpine

[root@fedora ~]# docker exec alpine01  ping alpine02
PING alpine02 (192.168.0.3): 56 data bytes
64 bytes from 192.168.0.3: seq=0 ttl=64 time=0.203 ms
64 bytes from 192.168.0.3: seq=1 ttl=64 time=0.231 ms

[root@fedora ~]# docker exec alpine02 ping alpine01
PING alpine01 (192.168.0.2): 56 data bytes
64 bytes from 192.168.0.2: seq=0 ttl=64 time=0.415 ms
64 bytes from 192.168.0.2: seq=1 ttl=64 time=0.252 ms

# On the custom network, alpine01 and alpine02 can ping each other by name, no --link required

On a custom network, Docker maintains the name-to-IP mapping for us. This is the recommended way to use Docker networking!
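
Under the hood this name resolution goes through Docker's embedded DNS server rather than /etc/hosts; a quick check (a sketch, output abridged):

# Containers on user-defined networks point at Docker's embedded DNS resolver
docker exec alpine01 cat /etc/resolv.conf
# nameserver 127.0.0.11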

Benefits:

Redis: different clusters run on different networks, keeping each cluster isolated and healthy.

MySQL: different clusters run on different networks, keeping each cluster isolated and healthy.
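
A minimal sketch of that isolation (the redis-net/mysql-net names and containers are illustrative):

# Each cluster gets its own bridge network; containers on redis-net
# cannot reach containers on mysql-net unless explicitly connected
docker network create redis-net
docker network create mysql-net
docker run -d --name redis-a --net redis-net redis
docker run -d --name mysql-a --net mysql-net -e MYSQL_ROOT_PASSWORD=secret mysql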


4. Connecting networks

[root@fedora ~]# docker network connect --help
Usage:  docker network connect [OPTIONS] NETWORK CONTAINER
Connect a container to a network
Options:
      --alias strings           Add network-scoped alias for the container
      --driver-opt strings      driver options for the network
      --ip string               IPv4 address (e.g., 172.30.100.104)
      --ip6 string              IPv6 address (e.g., 2001:db8::33)
      --link list               Add link to another container
      --link-local-ip strings   Add a link-local address for the container

Test connectivity between two different networks.

Start two more alpine containers on the default network, i.e. docker0:

[root@fedora ~]# docker run -it --name alpine03 alpine
[root@fedora ~]# docker run -it --name alpine04 alpine
# At this point 03/04 cannot reach 01/02

To connect alpine03 to alpine01, we join alpine03 to the mynetwork network.

One container then has two IPs (alpine03):

[root@fedora ~]# docker network connect mynetwork alpine03
# alpine01 can now ping alpine03
[root@fedora ~]# docker exec alpine01 ping alpine03
PING alpine03 (192.168.0.4): 56 data bytes
64 bytes from 192.168.0.4: seq=0 ttl=64 time=1.056 ms
64 bytes from 192.168.0.4: seq=1 ttl=64 time=0.242 ms
# but alpine04 is still unreachable
[root@fedora ~]# docker exec alpine01 ping alpine04
ping: bad address 'alpine04'

docker network connect mynetwork alpine03

just puts alpine03 onto the mynetwork network.
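
You can confirm the second interface from inside the container (a sketch; the exact interface names and addresses may differ):

docker exec alpine03 ip addr
# eth0 ... 172.17.0.x/16  (the default bridge, docker0)
# eth1 ... 192.168.0.4/16 (mynetwork)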

[root@fedora ~]# docker network inspect mynetwork 
[
    {
        "Name": "mynetwork",
 ......
 ......
        "ConfigOnly": false,
        "Containers": {
            "2ce3639012bf666250d52c5824ebc4e3e534e82fbd12deca4ea762bc8cd8741a": {
                "Name": "alpine03",
                "EndpointID": "66be1a83216e63df6f09cdb286022438857212c76b1ebdf8a052b260a1cda8f7",
                "MacAddress": "02:42:c0:a8:00:04",
                "IPv4Address": "192.168.0.4/16", # alpine03 的IP地址
                "IPv6Address": ""
            },
          ......
          ......

Conclusion: to operate on containers across networks, use docker network connect to join them!
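
The reverse operation exists too; a sketch of detaching alpine03 again:

# Remove alpine03 from mynetwork; it keeps its docker0 interface
docker network disconnect mynetwork alpine03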


5. Hands-on: deploying a Redis cluster


Create a network for the cluster:

docker network create redis --subnet 172.36.0.0/16
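
Optionally verify the subnet before continuing (a sketch using a Go-template filter):

docker network inspect redis --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
# 172.36.0.0/16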

Create six Redis configs with a script:

for port in $(seq 1 6); do
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >> /mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.36.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
# Verify the configs were created
[root@fedora ~]# cat /mydata/redis/node-1/conf/redis.conf 
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.36.0.11
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes

Run six Redis containers with a script:

for port in $(seq 1 6); do
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
-v /mydata/redis/node-${port}/data:/data \
-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.36.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done
[root@fedora ~]# docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS              PORTS                                                                                      NAMES
aa021497ace3   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   7 seconds ago        Up 4 seconds        0.0.0.0:6376->6379/tcp, :::6376->6379/tcp, 0.0.0.0:16376->16379/tcp, :::16376->16379/tcp   redis-6
6073568e10f0   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   42 seconds ago       Up 39 seconds       0.0.0.0:6375->6379/tcp, :::6375->6379/tcp, 0.0.0.0:16375->16379/tcp, :::16375->16379/tcp   redis-5
fda3a96e6dd8   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:6374->6379/tcp, :::6374->6379/tcp, 0.0.0.0:16374->16379/tcp, :::16374->16379/tcp   redis-4
588624f8e021   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:6373->6379/tcp, :::6373->6379/tcp, 0.0.0.0:16373->16379/tcp, :::16373->16379/tcp   redis-3
0af545002308   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   2 minutes ago        Up 2 minutes        0.0.0.0:6372->6379/tcp, :::6372->6379/tcp, 0.0.0.0:16372->16379/tcp, :::16372->16379/tcp   redis-2
cc2c8f51fae8   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   3 minutes ago        Up 3 minutes        0.0.0.0:6371->6379/tcp, :::6371->6379/tcp, 0.0.0.0:16371->16379/tcp, :::16371->16379/tcp   redis-1

Create the cluster

# Enter one of the containers
[root@fedora ~]# docker exec -it redis-1 /bin/sh
/data # ls
appendonly.aof  nodes.conf
# Create the cluster (--cluster-replicas 1 pairs each master with one replica)
/data # redis-cli --cluster create 172.36.0.11:6379  172.36.0.12:6379 172.36.0.13:6379 172.36.0.14:6379 172.36.0.15:6379 172.36.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
......
......
# type 'yes' to accept
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
......
......
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Test

# Connect with the cluster-aware client (-c follows redirects)
/data # redis-cli -c
# Check cluster info
127.0.0.1:6379> cluster info
cluster_state:ok
......
......
# Check the nodes
127.0.0.1:6379> cluster nodes
f74ba1e844faf414ec4a239af904d6158a591ef3 172.36.0.11:6379@16379 myself,master - 0 1656149126000 1 connected 0-5460
e41c8d0ab52da58d35cf1312e63294bc12eb6470 172.36.0.14:6379@16379 slave 47f37b35e1c1c784ba31f10cc07d3227a88abb3e 0 1656149126878 4 connected
9faf007ef9e2952fc1e27d3699d7fc1eb9c71d2a 172.36.0.16:6379@16379 slave 39d82fcd6237217fb9aa9fb526a6493df36e78cb 0 1656149125865 6 connected
39d82fcd6237217fb9aa9fb526a6493df36e78cb 172.36.0.12:6379@16379 master - 0 1656149126000 2 connected 5461-10922
47f37b35e1c1c784ba31f10cc07d3227a88abb3e 172.36.0.13:6379@16379 master - 0 1656149126572 3 connected 10923-16383
eedd91218d1a7b43cf8521e7d84bfe855573ff8d 172.36.0.15:6379@16379 slave f74ba1e844faf414ec4a239af904d6158a591ef3 0 1656149124554 5 connected

# Test a write
127.0.0.1:6379> set a b
-> Redirected to slot [15495] located at 172.36.0.13:6379
OK

# In a new terminal, stop redis-3
[root@fedora ~]# docker stop redis-3 
redis-3

# Reconnect to the cluster
/data # redis-cli -c
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 172.36.0.14:6379
"b"

# Check the nodes again
172.36.0.14:6379> cluster nodes
9faf007ef9e2952fc1e27d3699d7fc1eb9c71d2a 172.36.0.16:6379@16379 slave 39d82fcd6237217fb9aa9fb526a6493df36e78cb 0 1656149582277 6 connected
f74ba1e844faf414ec4a239af904d6158a591ef3 172.36.0.11:6379@16379 master - 0 1656149584295 1 connected 0-5460
39d82fcd6237217fb9aa9fb526a6493df36e78cb 172.36.0.12:6379@16379 master - 0 1656149583791 2 connected 5461-10922
47f37b35e1c1c784ba31f10cc07d3227a88abb3e 172.36.0.13:6379@16379 master,fail - 1656149319066 1656149318000 3 disconnected
eedd91218d1a7b43cf8521e7d84bfe855573ff8d 172.36.0.15:6379@16379 slave f74ba1e844faf414ec4a239af904d6158a591ef3 0 1656149582000 5 connected
e41c8d0ab52da58d35cf1312e63294bc12eb6470 172.36.0.14:6379@16379 myself,master - 0 1656149583000 7 connected 10923-16383
# redis-3 has failed; its replica redis-4 has been promoted to master
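
If redis-3 is started again, it should rejoin the cluster as a replica of the new master, redis-4 (a sketch of what to expect; node IDs will differ):

# Bring the failed node back; it rejoins as a replica
docker start redis-3
docker exec -it redis-1 redis-cli -c cluster nodes
# 47f37b... 172.36.0.13:6379@16379 slave e41c8d... connected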