Table of Contents

  • Docker networking explained
  • 一、Understanding Docker networking
  • (1)、docker0
  • Question
  • (2)、Start a container and check the network address inside it
  • Question
  • (3)、How it works
  • Testing connectivity between the two containers
  • (4)、Docker network model diagram
  • Conclusions
  • Linking containers with --link
  • 二、--link
  • Summary
  • Custom networks
  • 三、Container interconnection
  • Advantage
  • 四、Connecting networks


Docker networking explained

一、Understanding Docker networking

(1)、docker0

Before we start, we need to clear out the existing Docker containers and images.
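A minimal way to do that (a sketch; note that this removes every container and image on the host, so only run it on a throwaway machine):

#force-remove all containers, running or stopped
docker rm -f $(docker ps -aq)

#remove all local images
docker rmi -f $(docker images -aq)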

[root@localhost ~]# ip addr

#the local host's loopback interface
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever

#the local host's Ethernet NIC
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:9f:7d:9e brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.146/24 brd 192.168.1.255 scope global noprefixroute dynamic ens33
       valid_lft 42834sec preferred_lft 42834sec
    inet6 fe80::a187:ef6:4ff9:31b1/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

#the docker0 bridge address
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:06:41:a6:f2 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:6ff:fe41:a6f2/64 scope link 
       valid_lft forever preferred_lft forever

Question

How does Docker handle network access between containers?


(2)、Start a container and check the network address inside it

eth0@if9 is the network interface (and address) Docker assigns to the container

[root@localhost ~]# docker run -it centos /bin/bash
[root@f2d1c8270869 /]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

Question:

Can one container ping another?
The Linux host can reach the container network. Below we ping the docker0 gateway (172.17.0.1); the container's own address is reachable in the same way, as sketched after the output.

[root@localhost ~]# ping 172.17.0.1
PING 172.17.0.1 (172.17.0.1) 56(84) bytes of data.
64 bytes from 172.17.0.1: icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from 172.17.0.1: icmp_seq=2 ttl=64 time=0.101 ms
64 bytes from 172.17.0.1: icmp_seq=3 ttl=64 time=0.085 ms
^C
--- 172.17.0.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.054/0.080/0.101/0.019 ms
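
A quick check against the container's own address (172.17.0.2 in the run above; the IP may differ on your machine, and the output is omitted here):

[root@localhost ~]# ping -c 3 172.17.0.2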

(3)、How it works:

  • 1、Every time we start a Docker container, Docker assigns that container an IP address.
    As soon as Docker is installed there is a docker0 bridge with its own subnet.
  • 2、Docker uses veth-pair technology!

Check with the ip addr command

[root@localhost ~]# ip addr

#here you can see the docker0 bridge interface and its IP address
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:06:41:a6:f2 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:6ff:fe41:a6f2/64 scope link 
       valid_lft forever preferred_lft forever

Start another container and run the same test

[root@localhost ~]# docker run -it --name tomcat02  5d0da3dc9764 /bin/bash
[root@2fa5b0978310 /]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

#check the NICs and IPs on the host
[root@localhost ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:9f:7d:9e brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.146/24 brd 192.168.1.255 scope global noprefixroute dynamic ens33
       valid_lft 41013sec preferred_lft 41013sec
    inet6 fe80::a187:ef6:4ff9:31b1/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:06:41:a6:f2 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:6ff:fe41:a6f2/64 scope link 
       valid_lft forever preferred_lft forever
9: vethc55515e@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 2e:39:0a:1c:4b:07 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::2c39:aff:fe1c:4b07/64 scope link 
       valid_lft forever preferred_lft forever
11: vethff4eb7b@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 56:34:1f:9f:11:9f brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::5434:1fff:fe9f:119f/64 scope link 
       valid_lft forever preferred_lft forever
  • 1、Every time we start a new container, another network interface appears on the host
  • 2、The interfaces created by starting containers always come in pairs; this is the veth-pair technique, and every container's interface ends up in the same subnet
  • 3、Because of this pairing, the veth pair acts as the bridge between the host and the container, connecting the various network devices (see the sketch after this list)
  • 4、OpenStack and Docker both rely on this veth-pair technique to connect containers and let different hosts communicate
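
A quick way to confirm the pairing (a sketch; the interface indexes will differ on your machine): each side of a veth pair records its peer's interface index, which is exactly the number after the @if in names like eth0@if9.

#inside the container: print the host-side peer's interface index
cat /sys/class/net/eth0/iflink

#on the host: list the veth interfaces and their indexes, then match them up
ip -o link show type veth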

Testing connectivity between the two containers

[root@2fa5b0978310 /]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.187 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.083 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.069 ms
^C
--- 172.17.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.069/0.113/0.187/0.052 ms

[root@f2d1c8270869 /]# ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.082 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.072 ms
64 bytes from 172.17.0.3: icmp_seq=3 ttl=64 time=0.124 ms
^C
--- 172.17.0.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.072/0.092/0.124/0.025 ms

(4)、Docker network model diagram


Conclusions:

①、tomcat01 and tomcat02 are attached to the same docker0 bridge, which acts as their gateway

②、Docker assigns each container an available IP address by default

③、The host gets a route entry that points the container subnet at docker0, and docker0 forwards the traffic between the containers attached to it (see the sketch below)

④、Docker uses the Linux bridging feature; on the host, docker0 is the bridge for the Docker containers
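
To see this on the host (a sketch; exact output depends on your setup):

#the routing table sends 172.17.0.0/16 to docker0
ip route

#the host-side veth interfaces are all attached to the docker0 bridge
ip link show master docker0

#or, if the bridge-utils package is installed:
brctl show docker0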


All of Docker's network interfaces are virtual, and virtual interfaces forward data very efficiently.
When a container's process is removed, its virtual interface pair disappears with it.



Linking containers with --link

二、--link

Question: can we reach a container by pinging its hostname?

#try pinging a container by name
[root@localhost ~]# ping tomcat01
ping: tomcat01: Name or service not known

#how to solve this
[root@localhost ~]# docker run -d -P --name tomcat01 tomcat
f440704a5c728a178b8160871cb16832f4cc3146d0b344df55cd5536d5b5acb0
[root@localhost ~]# docker run -d -P --name tomcat02 tomcat
e0add1ccae76a6c7f9b5a1c5cd8ab85306b64fcfe2f673f00be69ab565743841
[root@localhost ~]# docker run -d -P --name tomcat03 --link  tomcat02 tomcat
7f512a217afb82ead4f1d55186c8f8bb424eaef01f953eea62c7fcf75f3e7460
[root@localhost ~]# docker exec -it tomcat03 ping tomcat02
64 bytes from tomcat02(172.17.0.3): icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from tomcat02(172.17.0.3): icmp_seq=2 ttl=64 time=0.101 ms
64 bytes from tomcat02(172.17.0.3): icmp_seq=3 ttl=64 time=0.085 ms


[root@localhost ~]# docker network list
NETWORK ID     NAME      DRIVER    SCOPE
96ba06c75abe   bridge    bridge    local
68f5de4225aa   host      host      local
ec63ffc090a8   none      null      local
 
[root@localhost ~]# docker inspect 96ba06c75abe
[
    {
        "Name": "bridge",
        "Id": "96ba06c75abe006319ecb1d3ae9600d2c9e60149e4539b7b9f68fcebb617e958",
        "Created": "2022-04-28T15:59:31.583158744+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "7f512a217afb82ead4f1d55186c8f8bb424eaef01f953eea62c7fcf75f3e7460": {
                "Name": "tomcat03",
                "EndpointID": "2055e1622bf6b7b5b733607095bdb7f6a03b3cd7e2855391f341ea861825f70e",
                "MacAddress": "02:42:ac:11:00:04",
                "IPv4Address": "172.17.0.4/16",
                "IPv6Address": ""
            },
            "e0add1ccae76a6c7f9b5a1c5cd8ab85306b64fcfe2f673f00be69ab565743841": {
                "Name": "tomcat02",
                "EndpointID": "23a0d5da1c97bc31b518cefa46b69ec5ca43834a479035b60de11c1a1b21949d",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "f440704a5c728a178b8160871cb16832f4cc3146d0b344df55cd5536d5b5acb0": {
                "Name": "tomcat01",
                "EndpointID": "9c2ee14976f99d7ee9bf19ab38772080420193ac3d19607b4ea402b2708e3c48",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]


#check the container's hosts file
[root@localhost ~]# docker exec -it 7f512a217afb82 cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.17.0.3	tomcat02 e0add1ccae76
172.17.0.4	7f512a217afb

Summary:

①、--link essentially just adds a hostname-to-IP mapping entry to the container's hosts file
②、--link is rarely used any more! We normally use custom networks instead of relying on the default docker0 bridge!
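
The mapping is also one-directional; a quick check (a sketch, assuming the tomcat containers above are still running):

#works: tomcat03 was started with --link tomcat02, so its hosts file knows tomcat02
[root@localhost ~]# docker exec -it tomcat03 ping -c 1 tomcat02

#fails (the name cannot be resolved): tomcat02 has no entry for tomcat03
[root@localhost ~]# docker exec -it tomcat02 ping -c 1 tomcat03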

Custom networks

三、Container interconnection

(1)、Network modes (each mode is selected with --net, as sketched after this list)
①、bridge: bridged networking (the default)
②、none: no networking configured
③、host: share the host's network stack
④、container: join another container's network namespace; rarely used
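
A sketch of how each mode is chosen at run time (the container names here are made up):

docker run -d -P --name web1 tomcat                     #bridge (the default), same as --net bridge
docker run -d --name web2 --net host tomcat             #share the host's network stack
docker run -d --name web3 --net none tomcat             #no networking configured
docker run -d --name web4 --net container:web1 tomcat   #reuse web1's network namespace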

# list all networks
[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
96ba06c75abe   bridge    bridge    local
68f5de4225aa   host      host      local
ec63ffc090a8   none      null      local

#create a custom Docker network
[root@localhost ~]# docker network create --driver bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 bridge.net
82c03380c771cef2601a80d4791982391e559a312764a6cf6ba37254970f743e
[root@localhost ~]# docker network ls
NETWORK ID     NAME         DRIVER    SCOPE
69cd942f935c   bridge       bridge    local
82c03380c771   bridge.net   bridge    local
68f5de4225aa   host         host      local
ec63ffc090a8   none         null      local

#inspect the metadata of the network we just created
[root@localhost ~]# docker inspect 82c03380c771
[
    {
        "Name": "bridge.net",
        "Id": "82c03380c771cef2601a80d4791982391e559a312764a6cf6ba37254970f743e",
        "Created": "2022-04-28T17:03:24.589340118+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/24",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

#start two containers and check whether they can ping each other by name
[root@localhost ~]# docker run -it --name centos01 --net bridge.net centos:7
[root@14fcf36edaca /]# ping centos02
PING centos02 (192.168.0.3) 56(84) bytes of data.
64 bytes from centos02.bridge.net (192.168.0.3): icmp_seq=1 ttl=64 time=0.068 ms
64 bytes from centos02.bridge.net (192.168.0.3): icmp_seq=2 ttl=64 time=1.50 ms
64 bytes from centos02.bridge.net (192.168.0.3): icmp_seq=3 ttl=64 time=0.116 ms
--- centos02 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2005ms
rtt min/avg/max/mdev = 0.068/0.562/1.502/0.664 ms

[root@localhost ~]# docker run -it --name centos02 --net bridge.net centos:7
[root@8561e2a44890 /]# ping centos01
PING centos01 (192.168.0.2) 56(84) bytes of data.
64 bytes from centos01.bridge.net (192.168.0.2): icmp_seq=1 ttl=64 time=0.183 ms
64 bytes from centos01.bridge.net (192.168.0.2): icmp_seq=2 ttl=64 time=0.129 ms
64 bytes from centos01.bridge.net (192.168.0.2): icmp_seq=3 ttl=64 time=0.068 ms
--- centos01 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.068/0.126/0.183/0.048 ms

On a custom network Docker has already set up name resolution between the containers, so custom networks are the more convenient option.
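
On a user-defined network this name resolution is provided by Docker's embedded DNS server rather than by hosts-file entries; a quick way to see it (a sketch):

#the container's resolver points at Docker's embedded DNS (127.0.0.11)
[root@localhost ~]# docker exec -it centos01 cat /etc/resolv.conf

#unlike with --link, the hosts file contains no entry for centos02
[root@localhost ~]# docker exec -it centos01 cat /etc/hosts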

Advantage:

①、Different clusters can use different custom networks, which keeps the clusters isolated from each other and helps with security and high availability (a sketch of this isolation follows)
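
A sketch of that isolation (the network and container names here are made up): containers on different user-defined networks cannot reach each other unless they are explicitly connected.

docker network create redis-net
docker network create mysql-net
docker run -dit --name redis01 --net redis-net centos:7
docker run -dit --name mysql01 --net mysql-net centos:7

#this fails: redis01 and mysql01 sit on different bridges
docker exec -it redis01 ping -c 1 mysql01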

四、Connecting networks

(1)、Test connectivity between docker0 and bridge.net

#start another container, attached to the default network (bridge)
[root@localhost ~]# docker run -it --name centos03 --net bridge centos:7

#connect the container to our custom network (bridge.net), linking it with the default network (bridge)
[root@localhost ~]# docker network connect bridge.net centos03

#from the centos03 container, test whether we can ping the two containers on the other subnet
[root@7dc41047d827 /]# ping centos01
PING centos01 (192.168.0.2) 56(84) bytes of data.
64 bytes from centos01.bridge.net (192.168.0.2): icmp_seq=1 ttl=64 time=0.217 ms
64 bytes from centos01.bridge.net (192.168.0.2): icmp_seq=2 ttl=64 time=0.105 ms
64 bytes from centos01.bridge.net (192.168.0.2): icmp_seq=3 ttl=64 time=0.071 ms
^C
--- centos01 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 0.071/0.131/0.217/0.062 ms
[root@7dc41047d827 /]# 
[root@7dc41047d827 /]# ping centos02
PING centos02 (192.168.0.3) 56(84) bytes of data.
64 bytes from centos02.bridge.net (192.168.0.3): icmp_seq=1 ttl=64 time=0.367 ms
64 bytes from centos02.bridge.net (192.168.0.3): icmp_seq=2 ttl=64 time=0.173 ms
64 bytes from centos02.bridge.net (192.168.0.3): icmp_seq=3 ttl=64 time=0.209 ms
^C
--- centos02 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2013ms
rtt min/avg/max/mdev = 0.173/0.249/0.367/0.086 ms

#now check the metadata of our custom network (bridge.net) again
[root@localhost ~]# docker inspect bridge.net
... ...
"ConfigOnly": false,
        "Containers": {
            "14fcf36edacaaecf548a4a85704811a872886c1e573c8c424eae7453deef2098": {
                "Name": "centos01",
                "EndpointID": "1342c06fc2f8a964e1b8e5721e1466584204a3aebc9fad2f7633a704cb093292",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/24",
                "IPv6Address": ""
            },
            "7dc41047d8277d7da8790f8015195110f51c3ef2f1b2c22a3808e4e5d342d6b9": {
                "Name": "centos03",
                "EndpointID": "66bfbe731cd282ffe763815d02f902da03c49f10e2ee55c036700fd332834795",
                "MacAddress": "02:42:c0:a8:00:04",
                "IPv4Address": "192.168.0.4/24",
                "IPv6Address": ""
            },
            "8561e2a448900498a1745c0277bcd131c156c88135e798281e42cc56ad4075e9": {
                "Name": "centos02",
                "EndpointID": "fbc45784007548153cb4de72f00d856c38454e631c6a600f955ff6e46280b0df",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/24",
                "IPv6Address": ""
            }
        },
... ... 
==> You can see that "connecting" simply attaches centos03, which was on the default network (bridge), to our custom network (bridge.net) as well:
one container, two IPs
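
You can confirm this from inside centos03 (a sketch; interface names and indexes will vary): it now has one interface on each bridge, and it can be detached again later.

#centos03 now shows two interfaces: one on bridge (172.17.0.x) and one on bridge.net (192.168.0.x)
[root@localhost ~]# docker exec -it centos03 ip addr

#to undo the connection later:
[root@localhost ~]# docker network disconnect bridge.net centos03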