Table of contents:

1. Installing Containerd

2. Running a busybox image

3. Creating a CNI network

4. Giving containerd containers network access

5. Sharing a directory with the host

6. Sharing namespaces with other containers

7. Running docker and containerd side by side

1. Installing Containerd

Install Containerd locally:

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y containerd epel-release
yum install -y jq

Check the Containerd version:

[root@containerd ~]# ctr version
Client:
Version: 1.4.3
Revision: 269548fa27e0089a8b8278fc4fc781d7f65a939b
Go version: go1.13.15


Server:
Version: 1.4.3
Revision: 269548fa27e0089a8b8278fc4fc781d7f65a939b
UUID: b7e3b0e7-8a36-4105-a198-470da2be02f2

Initialize the Containerd configuration:

containerd config default > /etc/containerd/config.toml
systemctl enable containerd
systemctl start containerd

Replace containerd's default sandbox image by editing the /etc/containerd/config.toml file:

# registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 (the Aliyun mirror) also works here
sandbox_image = "172.16.0.4/captain/pause-amd64:3.0"
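This edit can also be scripted with sed. Below is a minimal sketch that runs against a throwaway copy of the file so it is safe to try anywhere; on a real host you would point it at /etc/containerd/config.toml itself, and the default image name written into the demo copy is an assumption (it varies across containerd versions):

```shell
# Make a throwaway copy standing in for /etc/containerd/config.toml
# (the pause:3.2 default shown here is an assumption).
cfg=$(mktemp)
echo '    sandbox_image = "k8s.gcr.io/pause:3.2"' > "$cfg"

# Rewrite whatever sandbox_image is currently set to.
sed -i 's|sandbox_image = ".*"|sandbox_image = "172.16.0.4/captain/pause-amd64:3.0"|' "$cfg"
grep sandbox_image "$cfg"
```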

Apply the configuration and restart the containerd service:

systemctl daemon-reload
systemctl restart containerd


2. Running a busybox image

Preparation:

[root@containerd ~]# # pull the image
[root@containerd ~]# ctr -n k8s.io i pull docker.io/library/busybox:latest
[root@containerd ~]# # create a container (not running yet)
[root@containerd ~]# ctr -n k8s.io container create docker.io/library/busybox:latest busybox
[root@containerd ~]# # create a task, which actually starts the container
[root@containerd ~]# ctr -n k8s.io task start -d busybox
[root@containerd ~]# # the three steps above can be shortened to:
[root@containerd ~]# # ctr -n k8s.io run -d docker.io/library/busybox:latest busybox

Check the container's PID on the host:

[root@containerd ~]# ctr -n k8s.io task ls
TASK PID STATUS
busybox 2356 RUNNING
[root@containerd ~]# ps ajxf|grep "containerd-shim-runc\|2356"|grep -v grep
1 2336 2336 1178 ? -1 Sl 0 0:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id busybox -address /run/containerd/containerd.sock
2336 2356 2356 2356 ? -1 Ss 0 0:00 \_ sh

Enter the container:

[root@containerd ~]# ctr -n k8s.io t exec --exec-id $RANDOM -t busybox sh
/ # uname -a
Linux containerd 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 GNU/Linux
/ # ls /etc
group localtime network passwd shadow
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
/ #

Send the SIGKILL signal to kill the container:

[root@containerd ~]# ctr -n k8s.io t kill -s SIGKILL busybox
[root@containerd ~]# ctr -n k8s.io t rm busybox
WARN[0000] task busybox exit with non-zero exit code 137
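The 137 in that warning is not arbitrary: POSIX shells and runtimes report death-by-signal as 128 plus the signal number, and SIGKILL is signal 9, so 128 + 9 = 137. A quick local demonstration:

```shell
# Kill a child shell with SIGKILL and inspect the exit status its
# parent observes.
code=0
sh -c 'kill -KILL $$' || code=$?
echo "exit code: $code"   # 137 = 128 + 9 (SIGKILL)
```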

3. Creating a CNI network

Visit the following two Git projects (containernetworking/plugins and containernetworking/cni, which provide the archives used below) and download the latest releases from their release pages:

Download them to the HOME directory and extract:

[root@containerd ~]# pwd
/root
[root@containerd ~]# # extract into the cni-plugins/ folder under HOME
[root@containerd ~]# mkdir -p cni-plugins
[root@containerd ~]# tar xvf cni-plugins-linux-amd64-v0.9.0.tgz -C cni-plugins
[root@containerd ~]# # extract into the cni/ folder under HOME
[root@containerd ~]# tar -zxvf cni-v0.8.0.tar.gz
[root@containerd ~]# mv cni-0.8.0 cni

In this tutorial we start by creating a network interface with the bridge plugin. First run the following:

mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
    "cniVersion": "0.2.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
EOF


cat >/etc/cni/net.d/99-loopback.conf <<EOF
{
    "cniVersion": "0.2.0",
    "name": "lo",
    "type": "loopback"
}
EOF
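Since jq was installed back in step one, it doubles as a quick syntax check for these JSON files. The sketch below validates a throwaway copy so it can be run anywhere; on the real host you would point jq at the files under /etc/cni/net.d/ instead:

```shell
# Write a copy of the loopback config and confirm jq can parse it.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
    "cniVersion": "0.2.0",
    "name": "lo",
    "type": "loopback"
}
EOF
cni_name=$(jq -r '.name' "$conf")
echo "$cni_name"   # lo
```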

Then activate the network (note: after this, running ip a on the host shows a cni0 interface):

[root@containerd ~]# cd cni/scripts/
[root@containerd scripts]# CNI_PATH=/root/cni-plugins ./priv-net-run.sh echo "Hello World"
Hello World
[root@containerd scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:ea:35:42 brd ff:ff:ff:ff:ff:ff
inet 192.168.105.110/24 brd 192.168.105.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::1c94:5385:5133:cd48/64 scope link noprefixroute
valid_lft forever preferred_lft forever
10: cni0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether de:12:0b:ea:a4:bc brd ff:ff:ff:ff:ff:ff
inet 10.22.0.1/24 brd 10.22.0.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::dc12:bff:feea:a4bc/64 scope link
valid_lft forever preferred_lft forever

4. Giving containerd containers network access

As careful readers will have noticed in step two, the busybox container only has a loopback interface at this point and cannot reach any network. So how do we give it container-to-container and external connectivity? Run the following:

[root@containerd ~]# ctr -n k8s.io t ls
TASK PID STATUS
busybox 5111 RUNNING
[root@containerd ~]# # pid=5111
[root@containerd ~]# pid=$(ctr -n k8s.io t ls|grep busybox|awk '{print $2}')
[root@containerd ~]# netnspath=/proc/$pid/ns/net
[root@containerd ~]# CNI_PATH=/root/cni-plugins /root/cni/scripts/exec-plugins.sh add $pid $netnspath

Entering the busybox container again, we find a new interface and working external connectivity:

[root@containerd ~]# ctr -n k8s.io task exec --exec-id $RANDOM -t busybox sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether d2:f2:8d:53:fc:95 brd ff:ff:ff:ff:ff:ff
inet 10.22.0.13/24 brd 10.22.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::d0f2:8dff:fe53:fc95/64 scope link
valid_lft forever preferred_lft forever
/ # ping 114.114.114.114
PING 114.114.114.114 (114.114.114.114): 56 data bytes
64 bytes from 114.114.114.114: seq=0 ttl=127 time=17.264 ms
64 bytes from 114.114.114.114: seq=0 ttl=127 time=13.838 ms
64 bytes from 114.114.114.114: seq=1 ttl=127 time=18.024 ms
64 bytes from 114.114.114.114: seq=2 ttl=127 time=15.316 ms

A quick exercise: following the method above, create two containers named busybox-1 and busybox-2, expose a TCP port in one of them with nc -l -p 8080, and have them talk to each other.


5. Sharing a directory with the host

The following commands share the host's /tmp with the container, mounted at /host inside it:

[root@docker scripts]# ctr -n k8s.io c create --mount type=bind,src=/tmp,dst=/host,options=rbind:rw v4ehxdz8.mirror.aliyuncs.com/library/busybox:latest busybox1
[root@docker scripts]# ctr -n k8s.io t start -d busybox1
[root@docker scripts]# ctr -n k8s.io t exec -t --exec-id $RANDOM busybox1 sh
/ # echo "Hello world" > /host/1
/ #
[root@docker scripts]# cat /tmp/1
Hello world

6. Sharing namespaces with other containers

This section demonstrates sharing only the PID namespace; sharing other namespaces works the same way.
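Whether two processes share a namespace can always be checked from /proc: each process exposes its namespaces as symlinks under /proc/&lt;pid&gt;/ns/, and two processes are in the same namespace exactly when those links resolve to the same inode. For example, a child shell normally shares its parent's PID namespace:

```shell
# Compare the pid-namespace inode of this shell with that of a child shell.
parent_ns=$(readlink /proc/$$/ns/pid)
child_ns=$(sh -c 'readlink /proc/$$/ns/pid')
echo "$parent_ns"
echo "$child_ns"   # same inode: the child shares our PID namespace
```

This inode identity is the mechanism that docker's --pid="container:…" and ctr's --with-ns "pid:/proc/&lt;pid&gt;/ns/pid" options (both used below) build on.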

First, let's experiment with docker's namespace sharing:

[root@docker scripts]# docker run --rm -it -d busybox sh
687c80243ee15e0a2171027260e249400feeeee2607f88d1f029cc270402cdd1
[root@docker scripts]# docker run --rm -it -d --pid="container:687c80243ee15e0a2171027260e249400feeeee2607f88d1f029cc270402cdd1" busybox cat
fa2c09bd9c042128ebb2256685ce20e265f4c06da6d9406bc357d149af7b83d2
[root@docker scripts]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fa2c09bd9c04 busybox "cat" 2 seconds ago Up 1 second pedantic_goodall
687c80243ee1 busybox "sh" 22 seconds ago Up 21 seconds hopeful_franklin
[root@docker scripts]# docker exec -it 687c80243ee1 sh
/ # ps aux
PID USER TIME COMMAND
1 root 0:00 sh
8 root 0:00 cat
15 root 0:00 sh
22 root 0:00 ps aux

Now, following that pattern, we reproduce PID-namespace sharing with containerd:

[root@docker scripts]# ctr -n k8s.io t ls
TASK PID STATUS
busybox 2255 RUNNING
busybox1 2652 RUNNING
[root@docker scripts]# # 2652 is the host PID of the already-running busybox1 task
[root@docker scripts]# ctr -n k8s.io c create --with-ns "pid:/proc/2652/ns/pid" v4ehxdz8.mirror.aliyuncs.com/library/python:3.6-slim python
[root@docker scripts]# ctr -n k8s.io t start -d python  # starts the python container
[root@docker scripts]# ctr -n k8s.io t exec -t --exec-id $RANDOM busybox1 sh
/ # ps aux
PID USER TIME COMMAND
1 root 0:00 sh
34 root 0:00 python3
41 root 0:00 sh
47 root 0:00 ps aux

7. Running docker and containerd side by side

Reference: https://docs.docker.com/engine/reference/commandline/dockerd/

With containerd installed, configured, and started, we can install the docker client and daemon on the same host. Run the following:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum update -y && yum install -y docker-ce-18.06.2.ce
systemctl enable docker

Edit the /etc/systemd/system/multi-user.target.wants/docker.service file and add the --containerd option to the dockerd startup command:
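The relevant change is the ExecStart line of the unit file. Matching the dockerd command line visible in the ps output below, the edited line looks roughly like this sketch (the --debug flag is optional; the rest of the unit file is left unchanged):

```ini
# /etc/systemd/system/multi-user.target.wants/docker.service (excerpt)
[Service]
ExecStart=/usr/bin/dockerd --containerd /run/containerd/containerd.sock --debug
```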


Save and exit, then run:

[root@docker ~]# systemctl daemon-reload
[root@docker ~]# systemctl start docker
[root@docker ~]# ps aux|grep docker
root 72570 5.0 2.9 872668 55216 ? Ssl 01:31 0:00 /usr/bin/dockerd --containerd /run/containerd/containerd.sock --debug

Verify: for example, start a container with docker and list it from the containerd side with ctr -n moby c ls (docker stores its containers in containerd's moby namespace).


END

