Calico

 

Calico is a pure layer-3 virtual networking solution: Calico assigns each container an IP, and every host acts as a router, connecting containers across hosts. Unlike VXLAN, Calico adds no extra encapsulation to packets and needs no NAT or port mapping, so both scalability and performance are very good.

Compared with other container networking solutions, Calico has another major advantage: network policy. Users can dynamically define ACL rules to control the packets entering and leaving containers and implement business requirements.

 

Lab environment:

Calico relies on etcd to share and exchange information between hosts and to store the Calico network state.

Every host in a Calico network runs the Calico components, which provide container interface management, dynamic routing, dynamic ACLs, status reporting, and so on.

 

etcd node
10.1.1.73
Calico nodes
10.1.1.11 vm11
10.1.1.12 vm12

 

Start etcd

How to run etcd was covered in the flannel writeup; a minimal sketch follows below.
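
For reference, a minimal way to bring up a single-node etcd on 10.1.1.73 (a sketch: the --name and --data-dir values are assumptions, and Calico 2.6 talks the etcd v2 API, so etcd v3.4+ would additionally need --enable-v2=true):

# single-node etcd serving the client API on 10.1.1.73:2379
nohup etcd --name etcd73 \
    --data-dir /var/lib/etcd \
    --listen-client-urls http://10.1.1.73:2379,http://127.0.0.1:2379 \
    --advertise-client-urls http://10.1.1.73:2379 &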

Docker daemon configuration on vm11 and vm12:

[root@vm11 ~]# cat /usr/lib/systemd/system/docker.service |grep "^ExecStart"
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2376 --containerd=/run/containerd/containerd.sock --cluster-store=etcd://10.1.1.73:2379

  

Restart docker

systemctl daemon-reload
systemctl restart docker
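
After the restart, confirm the daemon picked up the cluster store (the label in docker info varies slightly by version):

# verify the daemon registered the etcd cluster store
docker info 2>/dev/null | grep -i "cluster store"
# expect something like: Cluster Store: etcd://10.1.1.73:2379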

  

Deploy Calico 2.6

wget -O /usr/local/bin/calicoctl https://github.com/projectcalico/calicoctl/releases/download/v1.6.5/calicoctl

chmod +x /usr/local/bin/calicoctl
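
Sanity-check the binary (the reported fields vary by release):

calicoctl version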

 

Two things to note before starting Calico on vm11 and vm12:

1) Calico uses the quay.io/calico/node image; make it available on each node in advance (pre-pull or load it).
2) By default, calicoctl looks for the configuration file /etc/calico/calicoctl.cfg. The file may be YAML or JSON, and it must be valid and readable.

A YAML example:

/etc/calico/calicoctl.cfg

apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "http://10.1.1.73:2379"
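
The same datastore settings can also be supplied as environment variables, which calicoctl honors as an alternative to the cfg file:

# equivalent to the cfg file above
export ETCD_ENDPOINTS=http://10.1.1.73:2379
calicoctl get nodes    # should list vm11/vm12 once they register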

  

Start Calico

vm11

[root@vm11 ~]# calicoctl node run --node-image=quay.io/calico/node:v2.6.12
Running command to load modules: modprobe -a xt_set ip6_tables
Enabling IPv4 forwarding	# host network setup, e.g. enabling ip forwarding
Enabling IPv6 forwarding
Increasing conntrack limit
Removing old calico-node container (if running).
Running the following command to start calico-node:		# pulls and starts the calico-node container; calico runs as a container (similar to weave)

docker run --net=host --privileged --name=calico-node -d --restart=always -e NODENAME=vm11 -e CALICO_NETWORKING_BACKEND=bird -e CALICO_LIBNETWORK_ENABLED=true -e ETCD_ENDPOINTS=http://10.1.1.73:2379 -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico -v /lib/modules:/lib/modules -v /run:/run -v /run/docker/plugins:/run/docker/plugins -v /var/run/docker.sock:/var/run/docker.sock quay.io/calico/node:v2.6.12

Image may take a short time to download if it is not available locally.
Container started, checking progress logs.

2019-09-03 13:55:06.375 [INFO][8] startup.go 173: Early log level set to info
2019-09-03 13:55:06.376 [INFO][8] client.go 202: Loading config from environment
2019-09-03 13:55:06.376 [INFO][8] startup.go 83: Skipping datastore connection test
2019-09-03 13:55:06.384 [INFO][8] startup.go 259: Building new node resource Name="vm11"
2019-09-03 13:55:06.384 [INFO][8] startup.go 273: Initialise BGP data
2019-09-03 13:55:06.386 [INFO][8] startup.go 467: Using autodetected IPv4 address on interface ens34: 10.1.1.11/24
2019-09-03 13:55:06.386 [INFO][8] startup.go 338: Node IPv4 changed, will check for conflicts
2019-09-03 13:55:06.408 [INFO][8] startup.go 530: No AS number configured on node resource, using global value
2019-09-03 13:55:06.411 [INFO][8] etcd.go 111: Ready flag is already set
2019-09-03 13:55:06.412 [INFO][8] client.go 139: Using previously configured cluster GUID
2019-09-03 13:55:06.435 [INFO][8] compat.go 796: Returning configured node to node mesh
2019-09-03 13:55:06.449 [INFO][8] startup.go 131: Using node name: vm11
2019-09-03 13:55:06.625 [INFO][12] client.go 202: Loading config from environment
Starting libnetwork service
Calico node started successfully

  

[root@vm11 ~]# docker ps
CONTAINER ID        IMAGE                         COMMAND             CREATED              STATUS              PORTS               NAMES
61b4273eceba        quay.io/calico/node:v2.6.12   "start_runit"       About a minute ago   Up About a minute                       calico-node

  

vm12

[root@vm12 ~]# calicoctl node run --node-image=quay.io/calico/node:v2.6.12
Running command to load modules: modprobe -a xt_set ip6_tables
Enabling IPv4 forwarding
Enabling IPv6 forwarding
Increasing conntrack limit
Removing old calico-node container (if running).
Running the following command to start calico-node:

docker run --net=host --privileged --name=calico-node -d --restart=always -e NODENAME=vm12 -e CALICO_NETWORKING_BACKEND=bird -e CALICO_LIBNETWORK_ENABLED=true -e ETCD_ENDPOINTS=http://10.1.1.73:2379 -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico -v /lib/modules:/lib/modules -v /run:/run -v /run/docker/plugins:/run/docker/plugins -v /var/run/docker.sock:/var/run/docker.sock quay.io/calico/node:v2.6.12

Image may take a short time to download if it is not available locally.
Container started, checking progress logs.

2019-09-03 13:55:57.548 [INFO][8] startup.go 173: Early log level set to info
2019-09-03 13:55:57.549 [INFO][8] client.go 202: Loading config from environment
2019-09-03 13:55:57.549 [INFO][8] startup.go 83: Skipping datastore connection test
2019-09-03 13:55:57.558 [INFO][8] startup.go 259: Building new node resource Name="vm12"
2019-09-03 13:55:57.558 [INFO][8] startup.go 273: Initialise BGP data
2019-09-03 13:55:57.560 [INFO][8] startup.go 467: Using autodetected IPv4 address on interface ens34: 10.1.1.12/24
2019-09-03 13:55:57.560 [INFO][8] startup.go 338: Node IPv4 changed, will check for conflicts
2019-09-03 13:55:57.587 [INFO][8] startup.go 530: No AS number configured on node resource, using global value
2019-09-03 13:55:57.590 [INFO][8] etcd.go 111: Ready flag is already set
2019-09-03 13:55:57.593 [INFO][8] client.go 139: Using previously configured cluster GUID
2019-09-03 13:55:57.622 [INFO][8] compat.go 796: Returning configured node to node mesh
2019-09-03 13:55:57.639 [INFO][8] startup.go 131: Using node name: vm12
2019-09-03 13:55:57.813 [INFO][12] client.go 202: Loading config from environment
Starting libnetwork service
Calico node started successfully

  

[root@vm12 ~]# docker ps 
CONTAINER ID        IMAGE                         COMMAND             CREATED             STATUS              PORTS               NAMES
a2f16e4802cc        quay.io/calico/node:v2.6.12   "start_runit"       40 seconds ago      Up 38 seconds                           calico-node

  

Watch the Calico status change as the BGP session establishes:

[root@vm11 ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+----------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |   INFO   |
+-----------------+-------------------+-------+----------+----------+
| 192.168.200.120 | node-to-node mesh | start | 09:59:10 | OpenSent |
+-----------------+-------------------+-------+----------+----------+

IPv6 BGP status
No IPv6 peers found.

[root@vm11 ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+-----------------+-------------------+-------+----------+-------------+
| 192.168.200.120 | node-to-node mesh | up    | 09:59:16 | Established |
+-----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

  

Create a Calico network

[root@vm11 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
9084cfd0cb32        bridge              bridge              local
d2847ceac9a3        host                host                local
11b44b36f998        none                null                local

  

Create the Calico network cal_net1 on vm11

[root@vm11 ~]# docker network create --driver calico  --ipam-driver calico-ipam  cal_net1
443f6e95909da5b46d8a1b0a4ed4bfefc244f99dc9b34dd1a6e27aa6be347a87

  

--driver calico selects Calico's libnetwork CNM driver.

--ipam-driver calico-ipam selects Calico's IPAM driver to manage IPs.

Calico networks have global scope; etcd synchronizes cal_net1 to all hosts:

[root@vm12 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
c55fa3d13b9a        bridge              bridge              local
443f6e95909d        cal_net1            calico              global
0051c32567e5        host                host                local
873fca8fbc1c        none                null                local

  

Run containers and examine the Calico network structure

Run container bbox1 on vm11 and connect it to cal_net1

[root@vm11 ~]# docker container run -itd --net cal_net1 --name bbox1  busybox
c3da74472e3a3b07ec1a7e2e0663432e9ac700c33e018c4e301a91c468d4705a

  

Current docker environment

[root@vm11 ~]# docker version
Client:
 Version:         1.13.1
 API version:     1.26
 Package version: docker-1.13.1-102.git7f2769b.el7.centos.x86_64
 Go version:      go1.10.3
 Git commit:      7f2769b/1.13.1
 Built:           Mon Aug  5 15:09:42 2019
 OS/Arch:         linux/amd64

Server:
 Version:         1.13.1
 API version:     1.26 (minimum version 1.12)
 Package version: docker-1.13.1-102.git7f2769b.el7.centos.x86_64
 Go version:      go1.10.3
 Git commit:      7f2769b/1.13.1
 Built:           Mon Aug  5 15:09:42 2019
 OS/Arch:         linux/amd64
 Experimental:    false

  

Note:

Unsupported docker versions fail with an error. For example, with docker-ce 18.09.0:

[root@vm11 ~]# docker container run -itd --net cal_net1 --name bbox41  busybox
ab13d83389ecb44805839a682b0d9ff27cbd3ee6366b107166aba2d108b9863f
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: time=\\\\\\\"2019-09-03T18:01:34+08:00\\\\\\\" level=fatal msg=\\\\\\\"failed to add interface temp8577f6eba8d to sandbox: error setting interface \\\\\\\\\\\\\\\"temp8577f6eba8d\\\\\\\\\\\\\\\" routes to [\\\\\\\\\\\\\\\"169.254.1.1/32\\\\\\\\\\\\\\\" \\\\\\\\\\\\\\\"fe80::c29:b5ff:fed2:73ed/128\\\\\\\\\\\\\\\"]: permission denied\\\\\\\"\\\\n\\\"\"": unknown.

  

Check bbox1's network configuration

[root@vm11 ~]# docker exec bbox1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
5: cali0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.212.64/32 scope global cali0
       valid_lft forever preferred_lft forever
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever

[root@vm11 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:a1:48:24 brd ff:ff:ff:ff:ff:ff
    inet 172.20.10.101/24 brd 172.20.10.255 scope global noprefixroute dynamic ens33
       valid_lft 84562sec preferred_lft 84562sec
    inet6 fe80::95aa:c880:565b:12e5/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:a1:48:2e brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.11/24 brd 10.1.1.255 scope global noprefixroute ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::88f6:79bf:e403:82e/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:54:da:d4:dd brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
6: cali467b02af53f@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether f2:ea:03:9b:dc:58 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::f0ea:3ff:fe9b:dc58/64 scope link 
       valid_lft forever preferred_lft forever

  

cali0 is the Calico interface, with assigned address 192.168.212.64. cali0 is paired with interface 6 on vm11, cali467b02af53f@if5; the two are the ends of a veth pair (see the check below).
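
The @ifN suffixes encode the pairing: cali0@if6 inside the container points at host ifindex 6, and cali467b02af53f@if5 points back at container ifindex 5. This can be verified from inside the container via sysfs:

# print the ifindex of cali0's host-side peer (expect 6 here)
docker exec bbox1 cat /sys/class/net/cali0/iflink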

[root@vm11 ~]# docker exec bbox1 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         169.254.1.1     0.0.0.0         UG    0      0        0 cali0
169.254.1.1     0.0.0.0         255.255.255.255 UH    0      0        0 cali0


[root@vm11 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.20.10.1     0.0.0.0         UG    100    0        0 ens33
10.1.1.0        0.0.0.0         255.255.255.0   U     101    0        0 ens34
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.20.10.0     0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.212.64  0.0.0.0         255.255.255.255 UH    0      0        0 cali467b02af53f
192.168.212.64  0.0.0.0         255.255.255.192 U     0      0        0 *

  

1. vm11 acts as the router that forwards packets destined for bbox1 (the /32 host route).
2. vm11 also forwards packets destined for the local container subnet 192.168.212.64/26; see the route lookup below.
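
You can ask the kernel directly which route it would pick (addresses taken from the output above):

# which route does vm11 use to reach bbox1?
ip route get 192.168.212.64
# expect: 192.168.212.64 dev cali467b02af53f ...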

 

Run container bbox2 on vm12, connected to cal_net1

[root@vm12 ~]# docker container run -itd --net cal_net1  --name bbox2 busybox
60400bf1b2fc5a45d5491e476d09c8602759a14b1c544a7d78c89e190d7e703c

[root@vm12 ~]# docker exec bbox2 ip a show cali0
5: cali0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.36.192/32 scope global cali0
       valid_lft forever preferred_lft forever
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever

  

Its IP address is 192.168.36.192.

vm12 added three routes:

[root@vm12 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.20.10.1     0.0.0.0         UG    100    0        0 ens33
10.1.1.0        0.0.0.0         255.255.255.0   U     101    0        0 ens34
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.20.10.0     0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.36.192  0.0.0.0         255.255.255.255 UH    0      0        0 cali95c362e1920
192.168.36.192  0.0.0.0         255.255.255.192 U     0      0        0 *
192.168.212.64  10.1.1.11       255.255.255.192 UG    0      0        0 ens34

  

 

(Netmask arithmetic: a /26 keeps the top two bits of the last octet, 128 + 64 = 192, so the mask is 255.255.255.192 and each subnet holds 64 addresses.)

1. A route to vm11's container subnet 192.168.212.64/26 (via 10.1.1.11).

2. A host route to the local bbox2 container, 192.168.36.192/32.

3. A route for the local container subnet 192.168.36.192/26.

Likewise, vm11 automatically added a route to subnet 192.168.36.192/26:

[root@vm11 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.20.10.1     0.0.0.0         UG    100    0        0 ens33
10.1.1.0        0.0.0.0         255.255.255.0   U     101    0        0 ens34
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.20.10.0     0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.36.192  10.1.1.12       255.255.255.192 UG    0      0        0 ens34
192.168.212.64  0.0.0.0         255.255.255.255 UH    0      0        0 cali467b02af53f
192.168.212.64  0.0.0.0         255.255.255.192 U     0      0        0 *

  

Connectivity within a Calico network

Test connectivity between bbox1 and bbox2

[root@vm11 ~]# docker exec bbox1 ping -c 2 bbox2
PING bbox2 (192.168.36.193): 56 data bytes
64 bytes from 192.168.36.193: seq=0 ttl=62 time=0.419 ms
64 bytes from 192.168.36.193: seq=1 ttl=62 time=0.891 ms

--- bbox2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.419/0.655/0.891 ms

  

Note: the container IPs changed only because the containers were deleted and recreated the next day; the environment is otherwise unchanged.

1. According to bbox1's routing table, the packet leaves through cali0 (169.254.1.1 is a link-local placeholder gateway; the host side of the veth pair answers ARP for it):

[root@vm11 ~]# docker exec bbox1 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         169.254.1.1     0.0.0.0         UG    0      0        0 cali0
169.254.1.1     0.0.0.0         255.255.255.255 UH    0      0        0 cali0

  

2. The packet crosses the veth pair to vm11; per vm11's routing table, it goes out ens34 to vm12 (10.1.1.12):

192.168.36.192  10.1.1.12       255.255.255.192 UG    0      0        0 ens34

  

3. vm12 receives the packet and, per its routing table, hands it to cali5e923a4b5ad@if5, which delivers it through the veth pair to cali0 inside bbox2:

192.168.36.193  0.0.0.0         255.255.255.255 UH    0      0        0 cali5e923a4b5ad
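
The ttl=62 in the ping replies is consistent with this path: the reply leaves bbox2 with TTL 64 and is decremented by the two routers vm12 and vm11. If your busybox build includes the traceroute applet, the hops can be seen directly:

# trace the path from bbox1 to bbox2 (IP taken from the ping output above)
docker exec bbox1 traceroute 192.168.36.193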

  

Connectivity between different Calico networks

Create cal_net2

[root@vm11 ~]# docker network create --driver calico --ipam-driver calico-ipam cal_net2
96d3da8d23d144383fcc6b06b9d6dfdd45a6d30f4dfd86eecbb408aef91d8389
[root@vm11 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
dfe7c92ca537        bridge              bridge              local
443f6e95909d        cal_net1            calico              global
96d3da8d23d1        cal_net2            calico              global
962f8ed087f6        host                host                local
0059fdd784d2        none                null                local

  

Run container bbox3 on vm11 with --net cal_net2

[root@vm11 ~]# docker container run -itd --net cal_net2 --name bbox3 busybox
7faa12a6c0a52a3a561b9fde5e004231d6141568b307254025939531e35c2a14

  

Calico assigned bbox3 the IP 192.168.212.66:

[root@vm11 ~]# docker exec bbox3 ip a show cali0
7: cali0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.212.66/32 scope global cali0
       valid_lft forever preferred_lft forever
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever

  

Verify connectivity between bbox1 and bbox3

[root@vm11 ~]# docker exec bbox1 ping -c 2 bbox3
ping: bad address 'bbox3'

  

Although bbox1 and bbox3 are both on vm11, and even in the same subnet 192.168.212.64/26, they belong to different Calico networks and cannot communicate by default. (The "bad address" error is the embedded DNS refusing to resolve a name from another network; pinging bbox3's IP 192.168.212.66 directly would likewise be dropped.)

Calico's default policy rule is: a container may only communicate with containers in the same Calico network.

Each Calico network has a profile of the same name, and the profile defines that network's policy:

[root@vm11 ~]# calicoctl get profile cal_net1 -o yaml
- apiVersion: v1
  kind: profile
  metadata:
    name: cal_net1	
    tags:
    - cal_net1
  spec:
    egress:
    - action: allow
      destination: {}
      source: {}
    ingress:
    - action: allow
      destination: {}
      source:
        tag: cal_net1

  

metadata:

1. name is cal_net1: this is the profile for the Calico network cal_net1.
2. The profile also carries a tag cal_net1. The tag string is arbitrary and independent of the name; it exists so that rules can reference it later.

spec:

1. egress controls packets the containers send out; currently unrestricted.

2. ingress restricts packets entering the containers; currently only traffic from containers tagged cal_net1 is accepted.

 

Calico's biggest differentiator is that its policy is customizable.

 

Customizing policy

Calico lets users define flexible policy rules for fine-grained control of traffic in and out of containers.

Exercise 1:

1. Create a new Calico network cal_web and start container web1.

2. Define a policy that allows containers in cal_net2 to access port 80 on web1.

 

Create cal_web

[root@vm11 ~]# docker network create --driver calico --ipam-driver calico-ipam cal_web
7580c0a1e0529c8c07b64b228033204d1330c2b47a71baa513365cf7d58e9794

  

Run web1 on vm11, connected to cal_web

[root@vm11 ~]# docker container run -itd --net cal_web --name web1 httpd

  

web1's IP is 192.168.212.68:

[root@vm11 ~]# docker inspect  web1 |grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "",
                    "IPAddress": "192.168.212.68",

  

For now bbox3 cannot reach port 80 on web1:

[root@vm11 ~]# docker exec bbox3 wget 192.168.212.68
Connecting to 192.168.212.68 (192.168.212.68:80)
wget: can't connect to remote host (192.168.212.68): Connection timed out

  

Dump the current profile as a starting point:

calicoctl get profile cal_web -o yaml

  

Editing that output is easier than writing a profile from scratch.

Create the policy file web.yml:

- apiVersion: v1
  kind: profile
  metadata:
    name: cal_web
    tags:
    - cal_web
  spec:
    egress:
    - action: allow
      destination: {}
      source: {}
    ingress:
    - action: allow
      protocol: tcp
      destination: 
        ports:
        - 80
      source:
        tag: cal_net2

  

The profile's name matches the cal_web network, so every container in cal_web (here, web1) has this profile's policy applied.

ingress allows access from containers in cal_net2 (i.e. bbox3).

Only port 80 is opened.

Apply the policy

[root@vm11 ~]# calicoctl apply -f web.yml 
Successfully applied 1 'profile' resource(s)

  

Now test bbox3's access to web1's HTTP service again:

[root@vm11 ~]# docker exec bbox3 wget 192.168.212.68
Connecting to 192.168.212.68 (192.168.212.68:80)
saving to 'index.html'
index.html           100% |********************************|    45  0:00:00 ETA
'index.html' saved

  

ping still fails; only port 80 was opened:

[root@vm11 ~]# docker exec bbox3 ping -c 2 192.168.212.68
PING 192.168.212.68 (192.168.212.68): 56 data bytes

--- 192.168.212.68 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
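
If ICMP should be allowed as well, a second ingress rule could be added to web.yml and re-applied with calicoctl apply (a sketch in the same v1 profile schema, not applied in this lab):

    ingress:
    - action: allow
      protocol: tcp
      destination:
        ports:
        - 80
      source:
        tag: cal_net2
    - action: allow
      protocol: icmp
      source:
        tag: cal_net2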

  

For more Calico policy options, see the official docs.

 

IPAM

By default, Calico allocates a subnet for each network automatically (from the default pool, 192.168.0.0/16 here).

This can be customized; see the official docs.

First define an IP pool (this one enables IPIP cross-subnet tunneling and outbound NAT), saved as the file ippool:

apiVersion: v1
kind: ipPool
metadata:
  cidr: 20.1.0.0/16
spec:
  ipip:
    enabled: true
    mode: cross-subnet
  nat-outgoing: true
  disabled: false

  

[root@vm11 ~]# calicoctl create  -f ippool 
Successfully created 1 'ipPool' resource(s)

  

List all ipPools:

[root@vm11 ~]# calicoctl get ipPool
CIDR                       
192.168.0.0/16             
20.1.0.0/16                
fd80:24e2:f998:72d6::/64
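
If your calicoctl build supports it, -o wide also shows each pool's options (IPIP mode, outbound NAT):

calicoctl get ipPool -o wide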

  

Create a Calico network using this IP pool:

[root@vm11 ~]# docker network create --driver calico  --ipam-driver calico-ipam  --subnet=20.1.0.0/16  my_net
92897f980013e202cfb5a2280ac3201a5a76936a3f083e9f77998c33052886f5

  

Run a container and connect it to the my_net network:

[root@vm11 ~]# docker container run -itd --net my_net --name bbox5  busybox 
d08bd47b36c198697892f448bcd3f47fb1979f5c72075118b533bb08dbb61047

[root@vm11 ~]# docker exec bbox5 ip a show cali0
14: cali0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 20.1.212.65/32 scope global cali0
       valid_lft forever preferred_lft forever
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever

  

You can also assign a container a specific IP with --ip, but it must be within the subnet:

[root@vm11 ~]# docker container run -itd --net my_net  --ip 20.1.0.100  --name bbox6 busybox
7c0b81e20798b5328908cffc8ed53d4a12c1475298c12c17aab45f470eeb7169

[root@vm11 ~]# docker exec bbox6 ip a show cali0
16: cali0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 20.1.0.100/32 scope global cali0
       valid_lft forever preferred_lft forever
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
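
Conversely, an address outside the pool's subnet should be rejected (a sketch; the exact error text depends on the docker and calico versions):

# 30.1.0.100 is outside 20.1.0.0/16, so this run is expected to fail
docker container run -itd --net my_net --ip 30.1.0.100 --name bbox7 busybox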