As mentioned in the previous article, Docker offers six network modes: bridge, host, overlay, macvlan, none, and network plugins. This article focuses on the default bridge mode.
1. Introduction
In networking terms, a bridge is a link-layer device that forwards traffic between network segments; it can be a hardware device or a software device running in the host kernel. Docker's bridge networks use a software bridge: containers connected to the same bridge can communicate with each other directly, on all ports, while containers not connected to that bridge are isolated from them. In this way, a bridge network manages both connectivity and isolation for all containers on the same host. Docker's bridge driver automatically installs rules on the host so that containers on the same bridge can reach one another and containers on different bridges cannot.
Bridge networks apply to containers created by the same Docker daemon on a single host. For communication between containers managed by Docker daemons on different hosts, you must either handle routing at the operating-system level or use the overlay network driver.
When we start Docker (systemctl start docker), a default bridge network named docker0 is created automatically on the host, and containers started afterwards attach to it by default. We can list the local bridges with brctl show; each time a container is created, a new veth interface is added to the bridge, connecting the container to the default bridge network.
[root@localhost hadoop]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242b47b550d no
virbr0 8000.52540092a4f4 yes virbr0-nic
[root@localhost hadoop]#
Note that the virbr0 bridge in the output is not created by Docker: it is libvirt's default NAT bridge, and virbr0-nic is its placeholder interface. Each time a container is created, a new interface appears on docker0, and docker0's address is set as the container's default gateway.
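The address assignment just described can be sketched with Python's ipaddress module. This is a simplified model of the default bridge's behavior, not Docker's actual IPAM code: the bridge takes the first host address of the subnet as the gateway, and containers receive the following addresses in order. The subnet 192.168.0.0/20 is taken from this particular host's docker0 configuration.

```python
import ipaddress

# Simplified model of default-bridge addressing (not Docker's real IPAM):
# the bridge (docker0) takes the first host address as the gateway,
# and containers receive the subsequent addresses in order.
subnet = ipaddress.ip_network("192.168.0.0/20")  # docker0's subnet on this host
hosts = subnet.hosts()
gateway = next(hosts)                         # assigned to docker0 itself
containers = [next(hosts) for _ in range(2)]  # first two containers started

print(gateway)                      # 192.168.0.1
print([str(c) for c in containers])  # ['192.168.0.2', '192.168.0.3']
```

These are exactly the addresses the two containers receive later in this article.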
[root@localhost hadoop]# docker run --rm -tdi nvidia/cuda:9.0-base
[root@localhost hadoop]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9f9c2b80062f nvidia/cuda:9.0-base "/bin/bash" 15 seconds ago Up 14 seconds quizzical_mcnulty
[root@localhost hadoop]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242b47b550d no vethabef17b
virbr0 8000.52540092a4f4 yes virbr0-nic
[root@localhost hadoop]#
View the local interface information with ifconfig -a:
[root@localhost hadoop]# ifconfig -a
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.1 netmask 255.255.240.0 broadcast 192.168.15.255
inet6 fe80::42:b4ff:fe7b:550d prefixlen 64 scopeid 0x20<link>
ether 02:42:b4:7b:55:0d txqueuelen 0 (Ethernet)
RX packets 37018 bytes 2626776 (2.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 46634 bytes 89269512 (85.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.252.130 netmask 255.255.255.0 broadcast 192.168.252.255
ether 00:25:90:e5:7f:20 txqueuelen 1000 (Ethernet)
RX packets 14326014 bytes 17040043512 (15.8 GiB)
RX errors 0 dropped 34 overruns 0 frame 0
TX packets 10096394 bytes 3038002364 (2.8 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xfb120000-fb13ffff
eth1: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 00:25:90:e5:7f:21 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xfb100000-fb11ffff
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 3304 bytes 6908445 (6.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3304 bytes 6908445 (6.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
oray_vnc: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1200
inet 172.1.225.211 netmask 255.0.0.0 broadcast 172.255.255.255
ether 00:25:d2:e1:01:00 txqueuelen 500 (Ethernet)
RX packets 1944668 bytes 227190815 (216.6 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2092320 bytes 2232228527 (2.0 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethabef17b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::e47d:4eff:fe87:39d3 prefixlen 64 scopeid 0x20<link>
ether e6:7d:4e:87:39:d3 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 648 (648.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:92:a4:f4 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0-nic: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 52:54:00:92:a4:f4 txqueuelen 500 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
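The docker0 entry in the output above reports address 192.168.0.1 with netmask 255.255.240.0 and broadcast 192.168.15.255. A quick check with Python's ipaddress module confirms these values are consistent with each other:

```python
import ipaddress

# Verify that the docker0 values reported by ifconfig are self-consistent:
# 192.168.0.1 with netmask 255.255.240.0 is a /20 network whose
# broadcast address is 192.168.15.255.
iface = ipaddress.ip_interface("192.168.0.1/255.255.240.0")
print(iface.network)                    # 192.168.0.0/20
print(iface.network.broadcast_address)  # 192.168.15.255
```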
Docker combines the bridge with iptables rules to control how containers on the same host reach one another. Under the bridge network driver, containers attach to the network as shown in the figure below:
If you specify a port mapping when starting a container, for example mapping container port 80 to host port 8080, you can also bind a specific host address using the -p &lt;host IP&gt;:8080:80 form (0.0.0.0 binds all interfaces), as follows:
docker run --rm -ti -p 8080:80 nvidia/cuda:9.0-base
Then inspect the NAT table:
iptables -t nat -vnL
You can see that forwarding (DNAT) rules have been added:
[root@localhost hadoop]# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 55 packets, 2470 bytes)
pkts bytes target prot opt in out source destination
161K 8056K PREROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0
161K 8056K PREROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0
161K 8056K PREROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
3442 258K OUTPUT_direct all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * !docker0 192.168.0.0/20 0.0.0.0/0
0 0 RETURN all -- * * 192.168.122.0/24 224.0.0.0/24
0 0 RETURN all -- * * 192.168.122.0/24 255.255.255.255
0 0 MASQUERADE tcp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
0 0 MASQUERADE udp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
0 0 MASQUERADE all -- * * 192.168.122.0/24 !192.168.122.0/24
3442 258K POSTROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0
3442 258K POSTROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0
3442 258K POSTROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 MASQUERADE tcp -- * * 192.168.0.3 192.168.0.3 tcp dpt:80
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 to:192.168.0.3:80
The default protocol for a port mapping is TCP.
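The DNAT rule above rewrites traffic arriving at host port 8080 to 192.168.0.3:80. Conceptually it is a lookup keyed by protocol and destination port; the sketch below models that lookup in Python. This is a toy illustration only, not iptables itself (the real translation happens in the kernel); the addresses come from the rule shown above.

```python
# Toy model of the DNAT rule installed by `docker run -p 8080:80`:
# traffic to (tcp, host port 8080) is rewritten to the container address.
# This only illustrates the lookup; real NAT happens in the kernel.
dnat_rules = {("tcp", 8080): ("192.168.0.3", 80)}

def translate(proto, dst_port):
    """Return the (container_ip, container_port) a packet is rewritten to,
    or None if no rule matches and the packet is delivered unchanged."""
    return dnat_rules.get((proto, dst_port))

print(translate("tcp", 8080))  # ('192.168.0.3', 80)
print(translate("tcp", 9090))  # None
```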
2. Configuring Inter-Container Access
First start two containers, then enter each one and check its IP information:
[root@localhost hadoop]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
462751a70444 nvidia/cuda:9.0-base "/bin/bash" 17 minutes ago Up 17 minutes 0.0.0.0:8080->80/tcp sad_heyrovsky
9f9c2b80062f nvidia/cuda:9.0-base "/bin/bash" 41 minutes ago Up 41 minutes quizzical_mcnulty
[root@localhost hadoop]#
I started two containers here; calling docker inspect on a container ID shows its IP address (note that the Go template must be quoted for the shell):
docker inspect -f '{{.NetworkSettings.IPAddress}}' &lt;container ID&gt;
Our two containers are 192.168.0.2 and 192.168.0.3.
Enter one of the containers and ping the other; you will find it is reachable only by IP address (ping 192.168.0.3):
docker exec -ti 9f9c2b80062f /bin/bash
If instead you append an alias to /etc/hosts and then ping by name, the ping fails; the default bridge network provides no automatic name resolution between containers.
192.168.0.3 node1
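For reference, /etc/hosts is just a plain-text name-to-IP mapping. The sketch below parses entries like the one above into a dictionary; the file contents are hypothetical and only mirror the alias shown.

```python
# Parse /etc/hosts-style lines into a name -> IP mapping.
# The node1 entry mirrors the alias added above; comments and blank
# lines are skipped, and one line may carry several names.
hosts_text = """\
127.0.0.1   localhost
192.168.0.3 node1
"""

def parse_hosts(text):
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop trailing comments
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            table[name] = ip
    return table

print(parse_hosts(hosts_text)["node1"])  # 192.168.0.3
```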
As for why, the next article will cover user-defined bridge networks, which solve this name-resolution problem.