Table of Contents

1. Pod Management

2. Resource Manifests

3. Pod Lifecycle

4. Controllers

1. Pod Management

A Pod is the smallest deployable unit of computing that you can create and manage in Kubernetes. A Pod represents a single running process in the cluster, and each Pod is assigned a unique IP address.

A Pod is like a pea pod: it contains one or more containers (typically Docker containers) that share the IPC, Network, and UTS namespaces.

kubectl command reference: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands

1) Creating a Pod application:

[root@node22 kubernetes]# kubectl run demo --image=nginx   # create a pod named demo
pod/demo created
[root@node22 kubernetes]# kubectl get pod   # list pods
NAME   READY   STATUS    RESTARTS   AGE
demo   1/1     Running   0          12s
[root@node22 kubernetes]# kubectl get pod -o wide  
NAME   READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
demo   1/1     Running   0          35s   10.244.1.2   node33   <none>           <none>
[root@node22 kubernetes]# kubectl describe pod demo   # show detailed pod information
Name:         demo
Namespace:    default
Priority:     0
Node:         node33/192.168.0.33
Start Time:   Thu, 25 Aug 2022 00:06:55 +0800
Labels:       run=demo
Annotations:  <none>
Status:       Running
IP:           10.244.1.2
IPs:
  IP:  10.244.1.2
[root@node22 kubernetes]# kubectl logs demo   # view the pod's logs
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/08/24 16:07:03 [notice] 1#1: using the "epoll" event method
2022/08/24 16:07:03 [notice] 1#1: nginx/1.21.5
2022/08/24 16:07:03 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2022/08/24 16:07:03 [notice] 1#1: OS: Linux 3.10.0-957.el7.x86_64
2022/08/24 16:07:03 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 65536:65536
2022/08/24 16:07:03 [notice] 1#1: start worker processes
2022/08/24 16:07:03 [notice] 1#1: start worker process 31
2022/08/24 16:07:03 [notice] 1#1: start worker process 32
[root@node22 kubernetes]# kubectl delete pod demo   # delete the pod; a standalone pod like this is gone for good once deleted
pod "demo" deleted
A Pod run by a controller, by contrast, is recreated automatically by the controller after deletion.

Pull the lab images and push them to the private registry:

[root@node11 ~]# docker pull yakexi007/myapp:v1
v1: Pulling from yakexi007/myapp
550fe1bea624: Pull complete
af3988949040: Pull complete
d6642feac728: Pull complete
c20f0a205eaa: Pull complete
438668b6babd: Pull complete
bf778e8612d0: Pull complete
Digest: sha256:9eeca44ba2d410e54fccc54cbe9c021802aa8b9836a0bcf3d3229354e4c8870e
Status: Downloaded newer image for yakexi007/myapp:v1
docker.io/yakexi007/myapp:v1
[root@node11 ~]# docker pull yakexi007/myapp:v2
v2: Pulling from yakexi007/myapp
550fe1bea624: Already exists
af3988949040: Already exists
d6642feac728: Already exists
c20f0a205eaa: Already exists
438668b6babd: Already exists
Digest: sha256:5f4afc8302ade316fc47c99ee1d41f8ba94dbe7e3e7747dd87215a15429b9102
Status: Downloaded newer image for yakexi007/myapp:v2
docker.io/yakexi007/myapp:v2
[root@node11 ~]# docker tag yakexi007/myapp:v1 reg.westos.org/library/myapp:v1
[root@node11 ~]# docker push reg.westos.org/library/myapp:v1
The push refers to repository [reg.westos.org/library/myapp]
a0d2c4392b06: Pushed
05a9e65e2d53: Pushed
68695a6cfd7d: Pushed
c1dc81a64903: Pushed
8460a579ab63: Pushed
d39d92664027: Pushed
v1: digest: sha256:9eeca44ba2d410e54fccc54cbe9c021802aa8b9836a0bcf3d3229354e4c8870e size: 1569
[root@node11 ~]# docker tag yakexi007/myapp:v2 reg.westos.org/library/myapp:v2
[root@node11 ~]# docker push reg.westos.org/library/myapp:v2
The push refers to repository [reg.westos.org/library/myapp]
05a9e65e2d53: Layer already exists
68695a6cfd7d: Layer already exists
c1dc81a64903: Layer already exists
8460a579ab63: Layer already exists
d39d92664027: Layer already exists
v2: digest: sha256:5f4afc8302ade316fc47c99ee1d41f8ba94dbe7e3e7747dd87215a15429b9102 size: 1362
[root@node11 ~]# docker images
REPOSITORY                               TAG       IMAGE ID       CREATED         SIZE
reg.westos.org/library/myapp             v1        d4a5e0eaa84f   4 years ago     15.5MB
reg.westos.org/library/myapp             v2        54202d3f0f35   4 years ago     15.5MB
[root@node22 kubernetes]# kubectl create deployment myapp --image=myapp:v1   # create a deployment
deployment.apps/myapp created
[root@node22 kubernetes]# kubectl get pod   # list pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-678fcbc488-wks5h   1/1     Running   0          11s
[root@node22 kubernetes]# kubectl get pod -o wide   # scheduled to node33


[root@node22 kubernetes]# kubectl delete pod myapp-678fcbc488-wks5h   # a controller-managed pod is recreated automatically after deletion
pod "myapp-678fcbc488-wks5h" deleted
[root@node22 kubernetes]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myapp-678fcbc488-hwmvg   1/1     Running   0          5s
[root@node22 kubernetes]# kubectl delete svc myapp
service "myapp" deleted
[root@node22 kubernetes]# kubectl delete deployments.apps myapp   # delete the deployment permanently (its pods go with it)
deployment.apps "myapp" deleted

2) Scaling Pods up and down

[root@node22 kubernetes]# kubectl scale deployment myapp --replicas=3   # scale up to 3 replicas
deployment.apps/myapp scaled
[root@node22 kubernetes]# kubectl get pod
NAME                     READY   STATUS              RESTARTS   AGE
myapp-678fcbc488-5jdbt   0/1     ContainerCreating   0          3s
myapp-678fcbc488-9vzwd   1/1     Running             0          3s
myapp-678fcbc488-hwmvg   1/1     Running             0          2m7s
[root@node22 kubernetes]# kubectl scale deployment myapp --replicas=1   # scale down to 1 replica
deployment.apps/myapp scaled

3) Creating a Service

[root@node22 kubernetes]# curl 10.244.2.2
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@node22 kubernetes]# curl 10.244.1.5
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@node22 kubernetes]# curl 10.244.1.4
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@node22 kubernetes]# kubectl expose deployment myapp --port=80 --target-port=80   # expose the deployment's port through a Service
service/myapp exposed
[root@node22 kubernetes]# kubectl describe svc myapp
Name:              myapp
Namespace:         default
Labels:            app=myapp
Annotations:       <none>
Selector:          app=myapp
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.104.85.158
IPs:               10.104.85.158
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.4:80,10.244.1.5:80,10.244.2.2:80
Session Affinity:  None
Events:            <none>
[root@node22 kubernetes]# kubectl scale deployment myapp --replicas=6   # scale up to six
deployment.apps/myapp scaled
[root@node22 kubernetes]# kubectl describe svc myapp   # the Service endpoints automatically update to 6
Name:              myapp
Namespace:         default
Labels:            app=myapp
Annotations:       <none>
Selector:          app=myapp
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.104.85.158
IPs:               10.104.85.158
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.4:80,10.244.1.5:80,10.244.1.6:80 + 3 more...
Session Affinity:  None
Events:            <none>
Pod clients inside the cluster can now access the backend Pods through the Service name.
ClusterIP: the default type; it allocates a virtual IP reachable only from inside the cluster.
Requests to the Service are load-balanced across the backends:
[root@node22 kubernetes]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   3h42m
myapp        ClusterIP   10.104.85.158   <none>        80/TCP    2m20s
[root@node22 kubernetes]# curl 10.104.85.158
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@node22 kubernetes]# curl 10.104.85.158/hostname.html
myapp-678fcbc488-zmfw8
[root@node22 kubernetes]# curl 10.104.85.158/hostname.html
myapp-678fcbc488-tq5vt
[root@node22 kubernetes]# curl 10.104.85.158/hostname.html
myapp-678fcbc488-hwmvg
[root@node22 kubernetes]# curl 10.104.85.158/hostname.html
myapp-678fcbc488-zmfw8
[root@node22 kubernetes]# curl 10.104.85.158/hostname.html
myapp-678fcbc488-9vzwd
[root@node22 kubernetes]# curl 10.104.85.158/hostname.html
myapp-678fcbc488-9vzwd
[root@node22 kubernetes]# curl 10.104.85.158/hostname.html
myapp-678fcbc488-bm28r

4) Exposing the service with NodePort so external clients can access the Pods

[root@node22 kubernetes]# kubectl edit svc myapp   # change the Service type to NodePort
NodePort: builds on ClusterIP by binding a port on every node, so the service can be reached at NodeIP:NodePort
[root@node22 kubernetes]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        3h53m
myapp        NodePort    10.104.85.158   <none>        80:32682/TCP   13m
[root@localhost ~]# curl 192.168.0.6:32682
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@localhost ~]# curl 192.168.0.6:32682/hostname.html
demo-5b4fc8bb88-g4vz8
## any node in the cluster can be accessed, and requests are load-balanced.
[root@localhost ~]# curl 192.168.0.11:32682/hostname.html
demo-5b4fc8bb88-flfq8
[root@localhost ~]# curl 192.168.0.22:32682/hostname.html
demo-5b4fc8bb88-g4vz8
External hosts can then reach the service on the assigned NodePort.

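As an alternative to `kubectl edit`, the same change can be expressed declaratively. Below is a minimal sketch of the Service as a NodePort manifest, assuming the `app: myapp` labels used earlier; the `nodePort` field is optional, and the cluster picks one from the default 30000-32767 range when it is omitted:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort         # expose the Service on every node
  selector:
    app: myapp           # must match the Pod labels of the deployment
  ports:
  - port: 80             # ClusterIP port
    targetPort: 80       # container port
    nodePort: 32682      # optional; must fall in the node-port range
```

Applying this with `kubectl apply -f` replaces the interactive edit.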

5) Updating the Pod image

[root@node22 kubernetes]# kubectl set image deployment myapp myapp=myapp:v2   # update the image to v2
deployment.apps/myapp image updated
[root@node22 kubernetes]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
myapp-57c78c68df-npq65   1/1     Running   0          15s   10.244.1.7   node33   <none>           <none>
[root@node22 kubernetes]# curl 10.244.1.7
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

Rollback

[root@node22 kubernetes]# kubectl rollout history deployment myapp   # view rollout history
deployment.apps/myapp
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
[root@node22 kubernetes]# kubectl rollout undo deployment myapp --to-revision=1   # roll back to the specified revision
deployment.apps/myapp rolled back
[root@node22 kubernetes]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
myapp-678fcbc488-6zzvn   1/1     Running   0          16s   10.244.1.8   node33   <none>           <none>
[root@node22 kubernetes]# curl 10.244.1.8
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

2. Resource Manifests

[root@node22 ~]# mkdir yaml
[root@node22 ~]# cd yaml/
[root@node22 yaml]# vim pod.yaml
apiVersion: v1        # API group and version the resource belongs to; one group can have several versions
kind: Pod             # type of resource to create; k8s mainly supports Pod, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, CronJob
metadata:             # metadata
  name: demo          # object name
  namespace: default  # namespace the object belongs to
  labels:             # resource labels
    app: nginx
spec:                 # desired state of the object
  containers:
  - name: nginx
    image: nginx
[root@node22 yaml]# kubectl apply -f pod.yaml   # apply the manifest
pod/demo created
[root@node22 yaml]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
demo   1/1     Running   0          4m30s
[root@node22 yaml]# kubectl delete -f pod.yaml   # delete the resources defined in the manifest
pod "demo" deleted
[root@node22 yaml]# kubectl get pod
No resources found in default namespace.
[root@node22 yaml]# kubectl explain pod   # query the built-in documentation
Note: when one manifest puts two containers in the same Pod and both use the same port, only one of them can start,
because all containers in a Pod share the same network interface.
With the following manifest, one container fails because both try to bind port 80:
[root@node22 yaml]# vim pod.yaml   # nginx and myapp both listen on port 80
apiVersion: v1
kind: Pod
metadata:
  name: demo
  namespace: default
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  - name: myapp
    image: myapp:v1
[root@node22 yaml]# kubectl apply -f pod.yaml
pod/demo created
[root@node22 yaml]# kubectl get pod
NAME   READY   STATUS   RESTARTS     AGE
demo   1/2     Error    1 (6s ago)   10s
[root@node22 yaml]# kubectl describe pod demo
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  110s                default-scheduler  Successfully assigned default/demo to node33
  Normal   Pulling    109s                kubelet            Pulling image "nginx"
  Normal   Pulled     109s                kubelet            Successfully pulled image "nginx" in 64.663891ms
  Normal   Created    109s                kubelet            Created container nginx
  Normal   Started    109s                kubelet            Started container nginx
  Warning  BackOff    31s (x6 over 103s)  kubelet            Back-off restarting failed container
  Normal   Pulled     17s (x5 over 109s)  kubelet            Container image "myapp:v1" already present on machine
  Normal   Created    17s (x5 over 109s)  kubelet            Created container myapp
  Normal   Started    17s (x5 over 109s)  kubelet            Started container myapp
Modify the manifest again to verify that containers in the same Pod share one network interface:
[root@node11 ~]# docker pull yakexi007/busyboxplus
Using default tag: latest
latest: Pulling from yakexi007/busyboxplus
a3ed95caeb02: Pull complete
c468b9f92624: Pull complete
1dc11860ba87: Pull complete
Digest: sha256:a0374e26b029688b32e284005bd7ae1fd7894ea46200fd6257cfd298518b78bf
Status: Downloaded newer image for yakexi007/busyboxplus:latest
docker.io/yakexi007/busyboxplus:latest
[root@node11 ~]# docker tag yakexi007/busyboxplus reg.westos.org/library/busyboxplus
[root@node11 ~]# docker push reg.westos.org/library/busyboxplus
Using default tag: latest
The push refers to repository [reg.westos.org/library/busyboxplus]
5f70bf18a086: Pushed
774600fa57ae: Pushed
075a34aac01b: Pushed
latest: digest: sha256:9d1c242c1fd588a1b8ec4461d33a9ba08071f0cc5bb2d50d4ca49e430014ab06 size: 1353
[root@node22 yaml]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
  namespace: default
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  - name: busyboxplus
    image: busyboxplus
    tty: true
    stdin: true
[root@node22 yaml]# kubectl apply -f pod.yaml
pod/demo created
[root@node22 yaml]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
demo   2/2     Running   0          9s
[root@node22 yaml]# kubectl attach demo -c busyboxplus -it   # attach interactively
If you don't see a command prompt, try pressing enter.
/ # ps ax
PID   USER     COMMAND
    1 root     /bin/sh
    7 root     ps ax
[root@node22 yaml]# kubectl delete -f pod.yaml
pod "demo" deleted


[root@node22 yaml]# kubectl run demo --image=nginx --dry-run=client -o yaml   # render this command as YAML without creating anything
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: demo
  name: demo
spec:
  containers:
  - image: nginx
    name: demo
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

1) Image pull policy: IfNotPresent pulls the image only when it is not already present on the node. The default is IfNotPresent, except that it becomes Always when the image tag is :latest or omitted.

[root@node22 yaml]# vim test.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
        imagePullPolicy: IfNotPresent
[root@node22 yaml]# kubectl apply -f test.yml
deployment.apps/myapp configured
[root@node22 yaml]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myapp-678fcbc488-6w67z   1/1     Running   0          86s

2) Port mapping (hostPort)

[root@node22 yaml]# vim test.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
[root@node22 yaml]# kubectl apply -f test.yml
deployment.apps/myapp configured
[root@node22 yaml]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myapp-6c9fbc7485-zpsbn   1/1     Running   0          5s
Verification on the node:
[root@node33 ~]# iptables -t nat -nL | grep :80
CNI-HOSTPORT-SETMARK  tcp  --  10.244.1.0/24        0.0.0.0/0            tcp dpt:80
CNI-HOSTPORT-SETMARK  tcp  --  127.0.0.1            0.0.0.0/0            tcp dpt:80
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:10.244.1.13:80

3) Using the host network

[root@node22 yaml]# vim test.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
        imagePullPolicy: IfNotPresent
        #ports:
        #- name: http
        #  containerPort: 80
        #  hostPort: 80
      hostNetwork: true
[root@node22 yaml]# kubectl apply -f test.yml
deployment.apps/myapp configured
[root@node22 yaml]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myapp-549dbfcbcc-45cwj   1/1     Running   0          5s
[root@node22 yaml]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
myapp-549dbfcbcc-45cwj   1/1     Running   0          11s   192.168.0.33   node33   <none>           <none>
Verification:
[root@node33 ~]# curl 192.168.0.33
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

4) Limiting CPU and memory

[root@node22 yaml]# vim test.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 0.1
            memory: 100M
          limits:
            cpu: 0.2
            memory: 200M
        #ports:
        #- name: http
        #  containerPort: 80
        #  hostPort: 80
      hostNetwork: true
[root@node22 yaml]# kubectl apply -f test.yml
[root@node22 yaml]# kubectl describe pod myapp-696b46cc77-lcfw2   # show detailed information
 Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  200M
    Requests:
      cpu:        100m
      memory:     100M
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qtpr (ro)
Conditions:

5) Node selection: so far everything has been scheduled to node33. A nodeSelector on the hostname label forces the Pod onto node44; this constraint has top priority and is mandatory, so if node44 does not exist the Pod cannot be scheduled.

[root@node22 yaml]# vim test.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 0.1
            memory: 100M
          limits:
            cpu: 0.2
            memory: 200M
        #ports:
        #- name: http
        #  containerPort: 80
        #  hostPort: 80
      hostNetwork: true
      nodeSelector:
        kubernetes.io/hostname: node44
[root@node22 yaml]# kubectl apply -f test.yml   # apply
deployment.apps/myapp configured
[root@node22 yaml]# kubectl get pod -o wide
NAME                     READY   STATUS              RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
myapp-59fb6674b6-m4sj8   0/1     ContainerCreating   0          5s      192.168.0.44   node44   <none>           <none>
myapp-696b46cc77-lcfw2   1/1     Running             0          5m39s   192.168.0.33   node33   <none>           <none>
[root@node22 yaml]# kubectl get pod --show-labels     # show labels on pods
[root@node22 yaml]# kubectl get nodes --show-labels   # show labels on nodes
[root@node22 yaml]# kubectl get ns --show-labels      # show labels on namespaces

3. Pod Lifecycle

A Pod can contain multiple containers in which applications run, and it can also have one or more init containers that run before the application containers start.

Init containers are very much like regular containers, with two differences:

They always run to completion.

Init containers do not support readiness probes, because they must run to completion before the Pod can become ready; each init container must succeed before the next one starts.

If a Pod's init container fails, Kubernetes restarts the Pod repeatedly until the init container succeeds. However, if the Pod's restartPolicy is Never, it is not restarted.

What can init containers do?

An init container can contain utilities or custom setup code that is not present in the application image.

The init container can run these tools safely, so they do not reduce the security of the application image.

The builders and deployers of an application image can work independently, with no need to jointly build a single combined image.

Init containers can run with a different filesystem view than the application containers in the same Pod. For example, an init container can be granted access to Secrets that the application containers cannot access.

Because init containers must run to completion before any application container starts, they provide a mechanism to block or delay application startup until a set of preconditions is met. Once those preconditions are satisfied, all application containers in the Pod start in parallel.


Using an init container

[root@node22 ~]# cd yaml/
[root@node22 yaml]# kubectl delete -f test.yml
deployment.apps "myapp" deleted
[root@node22 yaml]# kubectl get pod
No resources found in default namespace.
[root@node22 yaml]# vim init.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox
    command: ['sh', '-c', "until nslookup myservice.default.svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
[root@node22 yaml]# kubectl apply -f init.yaml
pod/myapp-pod created
[root@node22 yaml]# kubectl get pod
NAME        READY   STATUS     RESTARTS   AGE
myapp-pod   0/1     Init:0/1   0          6s        # one init container and one app container; the init container starts first
[root@node22 yaml]# kubectl describe pod myapp-pod   # view details
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  103s  default-scheduler  Successfully assigned default/myapp-pod to node33
  Normal  Pulling    101s  kubelet            Pulling image "busybox"
  Normal  Pulled     101s  kubelet            Successfully pulled image "busybox" in 612.735774ms
  Normal  Created    101s  kubelet            Created container init-myservice
  Normal  Started    100s  kubelet            Started container init-myservice
[root@node22 yaml]# kubectl logs myapp-pod -c init-myservice   # the log shows the DNS lookup failing; the init container keeps waiting
waiting for myservice
Server:         10.96.0.10
Address:        10.96.0.10:53

** server can't find myservice.default.svc.cluster.local: NXDOMAIN

*** Can't find myservice.default.svc.cluster.local: No answer

waiting for myservice

Create the init Service:

[root@node22 yaml]# vim init-svc.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
[root@node22 yaml]# kubectl apply -f init-svc.yaml
service/myservice created
[root@node22 yaml]# kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          7m28s
[root@node22 yaml]# kubectl get svc   # purpose of creating the Service: it adds a DNS record for myservice
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   12h
myservice    ClusterIP   10.102.64.181   <none>        80/TCP    10s
[root@node22 yaml]# kubectl describe svc -n kube-system
Name:              kube-dns
Namespace:         kube-system
Labels:            k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=CoreDNS
Annotations:       prometheus.io/port: 9153
                   prometheus.io/scrape: true
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.0.10
IPs:               10.96.0.10
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         10.244.0.4:53,10.244.0.5:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         10.244.0.4:53,10.244.0.5:53
Port:              metrics  9153/TCP
TargetPort:        9153/TCP
Endpoints:         10.244.0.4:9153,10.244.0.5:9153
Session Affinity:  None
Events:            <none>
An init container exits once it completes. Deleting the Service afterwards does not stop the Pod: the init check only gates startup, and the Pod is already past initialization.
[root@node22 yaml]# kubectl delete -f init-svc.yaml
service "myservice" deleted
[root@node22 yaml]# kubectl get pod
NAME        READY   STATUS    RESTARTS        AGE
demo        1/1     Running   1 (4m59s ago)   5m52s
myapp-pod   1/1     Running   0               17m

• Probes are periodic diagnostics performed on a container by the kubelet:

• ExecAction: executes a specified command inside the container. The diagnostic is considered successful if the command exits with status code 0.

• TCPSocketAction: performs a TCP check against the container's IP address on a specified port. The diagnostic is considered successful if the port is open.

• HTTPGetAction: performs an HTTP GET request against the container's IP address on a specified port and path. The diagnostic is considered successful if the response status code is at least 200 and below 400.

• Each probe yields one of three results:

• Success: the container passed the diagnostic.

• Failure: the container failed the diagnostic.

• Unknown: the diagnostic itself failed, so no action is taken.

• The kubelet can optionally run, and react to, three kinds of probes on a container:

• livenessProbe: indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, which then follows its restart policy. If no liveness probe is provided, the default state is Success.

• readinessProbe: indicates whether the container is ready to serve requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of every Service that matches the Pod. The readiness state before the initial delay defaults to Failure. If no readiness probe is provided, the default state is Success.

• startupProbe: indicates whether the application inside the container has started. If a startup probe is provided, all other probes are disabled until it succeeds. If the startup probe fails, the kubelet kills the container, and the container follows its restart policy. If no startup probe is provided, the default state is Success.

• Restart policy

• The PodSpec has a restartPolicy field whose possible values are Always, OnFailure, and Never. The default is Always.

• Pod lifetime

• In general, Pods do not disappear until a person or a controller destroys them.

• It is recommended to create Pods through an appropriate controller rather than directly, because a bare Pod cannot recover from a machine failure while a controller-managed one can.

• Three kinds of controllers are available:

• Use a Job for Pods that are expected to terminate, such as batch computations. Jobs are only appropriate for Pods with restartPolicy OnFailure or Never.

• Use a ReplicationController, ReplicaSet, or Deployment for Pods that are not expected to terminate, such as web servers. ReplicationController is only appropriate for Pods with restartPolicy Always.

• Use a DaemonSet for machine-level system services that should run one Pod per machine.
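The sections below demonstrate the tcpSocket and httpGet probe actions; the exec action has no example in this document, so here is a hypothetical minimal sketch of an exec-based liveness probe (the image, command, and file path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args: ['sh', '-c', 'touch /tmp/healthy; sleep 3600']
    livenessProbe:
      exec:
        command: ['cat', '/tmp/healthy']  # exit code 0 means the probe succeeds
      initialDelaySeconds: 5
      periodSeconds: 3
```

Deleting /tmp/healthy inside the container makes the probe fail, and the kubelet restarts the container according to its restart policy.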

1) livenessProbe (liveness probe)

[root@node22 yaml]# vim liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
    - name: liveness
      image: nginx
      imagePullPolicy: IfNotPresent
      livenessProbe:
        tcpSocket:
          port: 8080
        initialDelaySeconds: 1
        periodSeconds: 3
        timeoutSeconds: 1
[root@node22 yaml]# kubectl apply -f liveness.yaml
pod/liveness-http created
[root@node22 yaml]# kubectl get pod
NAME            READY   STATUS    RESTARTS      AGE
demo            1/1     Running   1 (14m ago)   15m
liveness-http   1/1     Running   0             8s
[root@node22 yaml]# kubectl describe pod liveness-http   # the probe on port 8080 fails (nginx listens on 80)
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  41s                default-scheduler  Successfully assigned default/liveness-http to node33
  Normal   Pulled     23s (x3 over 40s)  kubelet            Container image "nginx" already present on machine
  Normal   Created    23s (x3 over 40s)  kubelet            Created container liveness
  Normal   Started    23s (x3 over 40s)  kubelet            Started container liveness
  Warning  Unhealthy  14s (x9 over 38s)  kubelet            Liveness probe failed: dial tcp 10.244.1.16:8080: connect: connection refused
  Normal   Killing    14s (x3 over 32s)  kubelet            Container liveness failed liveness probe, will be restarted
  Warning  BackOff    14s (x2 over 14s)  kubelet            Back-off restarting failed container
[root@node22 yaml]# kubectl get pod   # the container keeps getting restarted
NAME            READY   STATUS    RESTARTS      AGE
demo            1/1     Running   1 (16m ago)   17m
liveness-http   1/1     Running   5 (58s ago)   118s

2) readinessProbe (readiness probe)

[root@node22 yaml]# vim liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
    - name: liveness
      image: nginx
      imagePullPolicy: IfNotPresent
      livenessProbe:
        tcpSocket:
          port: 80
        initialDelaySeconds: 2
        periodSeconds: 3
        timeoutSeconds: 2
      readinessProbe:
        httpGet:
          path: /test.html
          port: 80
        initialDelaySeconds: 1
        periodSeconds: 3
        timeoutSeconds: 1
[root@node22 yaml]# kubectl apply -f liveness.yaml
pod/liveness-http created
[root@node22 yaml]# kubectl get pod
NAME            READY   STATUS    RESTARTS      AGE
demo            1/1     Running   1 (21m ago)   22m
liveness-http   0/1     Running   0             16s
[root@node22 yaml]# kubectl describe pod liveness-http
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  53s                default-scheduler  Successfully assigned default/liveness-http to node33
  Normal   Pulled     53s                kubelet            Container image "nginx" already present on machine
  Normal   Created    53s                kubelet            Created container liveness
  Normal   Started    53s                kubelet            Started container liveness
  Warning  Unhealthy  2s (x20 over 52s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 404
[root@node22 yaml]# kubectl expose pod liveness-http --port=80 --target-port=80   # expose the pod
service/liveness-http exposed
[root@node22 yaml]# kubectl describe svc liveness-http
Name:              liveness-http
Namespace:         default
Labels:            test=liveness
Annotations:       <none>
Selector:          test=liveness
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.98.152.204
IPs:               10.98.152.204
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:
Session Affinity:  None
Events:            <none>
[root@node22 yaml]# kubectl exec liveness-http -- touch /usr/share/nginx/html/test.html   # create the page; the readiness probe now passes
[root@node22 yaml]# kubectl get pod
NAME            READY   STATUS    RESTARTS      AGE
demo            1/1     Running   1 (25m ago)   26m
liveness-http   1/1     Running   0             4m8s
[root@node22 yaml]# kubectl exec liveness-http -- rm /usr/share/nginx/html/test.html   # remove the page; the Pod drops out of Ready
[root@node22 yaml]# kubectl get pod
NAME            READY   STATUS    RESTARTS      AGE
demo            1/1     Running   1 (26m ago)   27m
liveness-http   0/1     Running   0             4m40s
[root@node22 yaml]# kubectl delete svc liveness-http
service "liveness-http" deleted
[root@node22 yaml]# kubectl delete -f liveness.yaml
pod "liveness-http" deleted

4. Controllers

  • Pod classification:
    Standalone Pods: not recreated after they exit
    Controller-managed Pods: throughout the controller's lifetime, the desired number of Pod replicas is maintained
  • Controller types:
    Replication Controller and ReplicaSet
    Deployment
    DaemonSet
    StatefulSet
    Job
    CronJob
    HPA (Horizontal Pod Autoscaler)
  • Replication Controller and ReplicaSet
  • Replication Controller 和ReplicaSet

ReplicaSet is the next-generation Replication Controller; ReplicaSet is the officially recommended choice.

The only difference between ReplicaSet and Replication Controller is selector support: ReplicaSet supports the newer set-based selector requirements.

A ReplicaSet ensures that a specified number of Pod replicas are running at any given time.

Although ReplicaSets can be used independently, today they are mainly used by Deployments as the mechanism for orchestrating Pod creation, deletion, and updates.

  • Deployment

A Deployment provides a declarative way to define Pods and ReplicaSets.
Typical use cases:
creating Pods and ReplicaSets, rolling updates and rollbacks, scaling up and down, pausing and resuming rollouts.

  • DaemonSet

A DaemonSet ensures that all (or some) nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added for them; as nodes are removed, those Pods are garbage collected. Deleting a DaemonSet deletes all the Pods it created.

Typical uses of a DaemonSet:

running a cluster storage daemon on every node, such as glusterd or ceph;

running a log collection daemon on every node, such as fluentd or logstash;

running a monitoring daemon on every node, such as Prometheus Node Exporter or zabbix agent.

A simple setup starts one DaemonSet, covering all nodes, for each type of daemon.

A more complex setup uses multiple DaemonSets for a single daemon type, but with different flags, and with different memory and CPU requirements for different hardware types.
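A minimal DaemonSet sketch along these lines; the image is illustrative, and the structure mirrors a Deployment except that there is no replicas field, since exactly one Pod runs per node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-monitor
spec:
  selector:
    matchLabels:
      app: node-monitor
  template:
    metadata:
      labels:
        app: node-monitor
    spec:
      hostNetwork: true             # monitoring agents often need the node's own network
      containers:
      - name: exporter
        image: prom/node-exporter   # illustrative image
```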

  • StatefulSet

StatefulSet is the workload API object for managing stateful applications. Applications whose instances are not interchangeable, or whose instances depend on external data tied to a specific instance, are called "stateful applications".

A StatefulSet manages the deployment and scaling of a set of Pods and provides ordering and uniqueness guarantees for those Pods.

StatefulSets are valuable for applications that need one or more of the following:

stable, unique network identifiers;

stable, persistent storage;

ordered, graceful deployment and scaling;

ordered, automated rolling updates.
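A minimal StatefulSet sketch under these assumptions (nginx image; `nginx` is a hypothetical headless Service that would provide the stable network identities):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx   # headless Service providing stable DNS names
  replicas: 2          # Pods are created in order: web-0, then web-1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```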

  • Job

A Job runs a batch task to completion: it executes once and ensures that one or more Pods of the task terminate successfully.
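A sketch of a one-shot Job, assuming the standard perl image is available; note that the restart policy must be OnFailure or Never:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ['perl', '-Mbignum=bpi', '-wle', 'print bpi(100)']
      restartPolicy: Never   # Jobs only allow OnFailure or Never
  backoffLimit: 4            # retries before the Job is marked failed
```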

  • CronJob

A CronJob creates Jobs on a time-based schedule.
A CronJob object is like one line of a crontab (cron table) file: it is written in cron format and periodically runs a Job at the given schedule.

  • HPA requires metric inputs; by default only CPU and memory are available, and other metrics must be supplied by third-party components.

It automatically adjusts the number of Pods behind a service based on resource utilization, providing horizontal Pod autoscaling.
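A sketch of an HPA targeting the myapp Deployment on CPU utilization, using the autoscaling/v2 API and assuming the metrics-server add-on is installed to supply the CPU metric:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 6
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60   # scale out when average CPU exceeds 60%
```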

1) ReplicaSet
With replicas set to 3, once the controller takes effect there are 3 Pods:

[root@node22 yaml]# vim replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
[root@node22 yaml]# kubectl apply -f replicaset.yaml
replicaset.apps/replicaset-example created
[root@node22 yaml]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
replicaset-example-5w2nh   1/1     Running   0          96s
replicaset-example-cs5m5   1/1     Running   0          96s
replicaset-example-q4szq   1/1     Running   0          96s
[root@node22 yaml]# kubectl get rs
NAME                 DESIRED   CURRENT   READY   AGE
replicaset-example   3         3         2       51s
[root@node22 yaml]# kubectl get pod --show-labels   # view labels
NAME                       READY   STATUS    RESTARTS   AGE     LABELS
replicaset-example-5w2nh   1/1     Running   0          2m38s   app=nginx
replicaset-example-cs5m5   1/1     Running   0          2m38s   app=nginx
replicaset-example-q4szq   1/1     Running   0          2m38s   app=nginx
[root@node22 yaml]# kubectl label pod replicaset-example-5w2nh app=myapp --overwrite   # change the label
pod/replicaset-example-5w2nh labeled
[root@node22 yaml]# kubectl get pod --show-labels   # the rs automatically starts a replacement pod
NAME                       READY   STATUS    RESTARTS   AGE     LABELS
replicaset-example-5w2nh   1/1     Running   0          4m38s   app=myapp
replicaset-example-cs5m5   1/1     Running   0          4m38s   app=nginx
replicaset-example-lmfcw   1/1     Running   0          42s     app=nginx
replicaset-example-q4szq   1/1     Running   0          4m38s   app=nginx
[root@node22 yaml]# kubectl delete pod replicaset-example-5w2nh   # this pod is no longer managed by the rs
pod "replicaset-example-5w2nh" deleted
[root@node22 yaml]# kubectl get pod --show-labels
NAME                       READY   STATUS    RESTARTS   AGE     LABELS
replicaset-example-cs5m5   1/1     Running   0          5m47s   app=nginx
replicaset-example-lmfcw   1/1     Running   0          111s    app=nginx
replicaset-example-q4szq   1/1     Running   0          5m47s   app=nginx
[root@node22 yaml]# kubectl label pod replicaset-example-cs5m5  app=myapp --overwrite   # change the label
pod/replicaset-example-cs5m5 labeled
[root@node22 yaml]# kubectl get pod --show-labels
NAME                       READY   STATUS    RESTARTS   AGE     LABELS
replicaset-example-cs5m5   1/1     Running   0          7m10s   app=myapp
replicaset-example-lmfcw   1/1     Running   0          3m14s   app=nginx
replicaset-example-q4szq   1/1     Running   0          7m10s   app=nginx
replicaset-example-wznhm   1/1     Running   0          8s      app=nginx
[root@node22 yaml]# kubectl label pod replicaset-example-cs5m5  app=nginx --overwrite   # change the label back; the rs reclaims the most recently created pod
pod/replicaset-example-cs5m5 labeled
[root@node22 yaml]# kubectl get pod --show-labels
NAME                       READY   STATUS    RESTARTS   AGE     LABELS
replicaset-example-cs5m5   1/1     Running   0          7m26s   app=nginx
replicaset-example-lmfcw   1/1     Running   0          3m30s   app=nginx
replicaset-example-q4szq   1/1     Running   0          7m26s   app=nginx

Because an rs ensures that the specified number of Pod replicas is running at all times, relabeling one Pod out of the selector makes the rs detect fewer Pods than specified and pull up a new one; when it detects more than specified, it immediately reclaims the surplus Pod. The rs controls its replicas entirely through labels.
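The label mechanics can also be inspected directly. As a small sketch (using the labels from this example; requires the cluster above, so output is not shown), kubectl's -l flag filters Pods by the same label selector the rs uses:

```shell
kubectl get pod -l app=nginx               # pods the rs counts toward its replicas
kubectl get pod -l app=myapp               # a relabeled pod, now outside the rs
kubectl get rs replicaset-example -o wide  # wide output includes the SELECTOR column
```

Comparing the two filtered listings makes it visible which Pods the controller considers "its own" at any moment.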

2).Rolling updates with Deployment and ReplicaSet

When the Deployment controller takes effect, it also creates an rs of its own:

[root@node22 yaml]# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
[root@node22 yaml]# kubectl get all
NAME                                      READY   STATUS    RESTARTS   AGE
pod/deployment-example-85b98978db-gnbz5   1/1     Running   0          2m27s
pod/deployment-example-85b98978db-ndvzv   1/1     Running   0          2m27s
pod/deployment-example-85b98978db-pw76j   1/1     Running   0          2m27s
pod/replicaset-example-cs5m5              1/1     Running   0          13m
pod/replicaset-example-lmfcw              1/1     Running   0          9m30s
pod/replicaset-example-q4szq              1/1     Running   0          13m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   15h

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deployment-example   3/3     3            3           2m27s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/deployment-example-85b98978db   3         3         3       2m27s
replicaset.apps/replicaset-example              3         3         3       13m

[root@node22 yaml]# vim deployment.yaml
    spec:
     containers:
     - name: nginx
       image: myapp:v1
[root@node22 yaml]# kubectl apply -f deployment.yaml
deployment.apps/deployment-example configured
[root@node22 yaml]# kubectl get all
NAME                                      READY   STATUS    RESTARTS   AGE
pod/deployment-example-7d84d7dccb-46ghp   1/1     Running   0          31s
pod/deployment-example-7d84d7dccb-sbc56   1/1     Running   0          26s
pod/deployment-example-7d84d7dccb-wvdv6   1/1     Running   0          33s
pod/replicaset-example-cs5m5              1/1     Running   0          14m
pod/replicaset-example-lmfcw              1/1     Running   0          10m
pod/replicaset-example-q4szq              1/1     Running   0          14m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   15h

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deployment-example   3/3     3            3           3m53s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/deployment-example-7d84d7dccb   3         3         3       33s
replicaset.apps/deployment-example-85b98978db   0         0         0       3m53s
replicaset.apps/replicaset-example              3         3         3       14m

[root@node22 yaml]# vim deployment.yaml
    spec:
     containers:
     - name: nginx
       image: myapp:v2
[root@node22 yaml]# kubectl apply -f deployment.yaml
deployment.apps/deployment-example configured
[root@node22 yaml]# kubectl get all
NAME                                      READY   STATUS    RESTARTS   AGE
pod/deployment-example-7d84d7dccb-46ghp   1/1     Running   0          31s
pod/deployment-example-7d84d7dccb-sbc56   1/1     Running   0          26s
pod/deployment-example-7d84d7dccb-wvdv6   1/1     Running   0          33s
pod/replicaset-example-cs5m5              1/1     Running   0          14m
pod/replicaset-example-lmfcw              1/1     Running   0          10m
pod/replicaset-example-q4szq              1/1     Running   0          14m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   15h

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deployment-example   3/3     3            3           3m53s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/deployment-example-56c698c49c   3         3         3       8s
replicaset.apps/deployment-example-7d84d7dccb   0         0         0       108s
replicaset.apps/deployment-example-85b98978db   0         0         0       5m8s
replicaset.apps/replicaset-example              3         3         3       16m
Roll back to v1:
[root@node22 yaml]# vim deployment.yaml
    spec:
     containers:
     - name: nginx
       image: myapp:v1
[root@node22 yaml]# kubectl apply -f deployment.yaml
deployment.apps/deployment-example configured
[root@node22 yaml]# kubectl get all
NAME                                      READY   STATUS        RESTARTS   AGE
pod/deployment-example-56c698c49c-gjt5f   1/1     Terminating   0          46s
pod/deployment-example-7d84d7dccb-jg8dc   1/1     Running       0          3s
pod/deployment-example-7d84d7dccb-rp7jf   1/1     Running       0          2s
pod/deployment-example-7d84d7dccb-vpcdn   1/1     Running       0          5s
pod/replicaset-example-cs5m5              1/1     Running       0          16m
pod/replicaset-example-lmfcw              1/1     Running       0          12m
pod/replicaset-example-q4szq              1/1     Running       0          16m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   15h

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deployment-example   3/3     3            3           5m47s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/deployment-example-56c698c49c   0         0         0       47s
replicaset.apps/deployment-example-7d84d7dccb   3         3         3       2m27s
replicaset.apps/deployment-example-85b98978db   0         0         0       5m47s
replicaset.apps/replicaset-example              3         3         3       16m
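Editing the image back in the YAML is one way to roll back. Since the Deployment keeps its old ReplicaSets as revision history (visible in the listing above), the same rollback can also be done with kubectl's rollout commands — a sketch, assuming the deployment-example above and a live cluster:

```shell
kubectl rollout history deployment/deployment-example                 # list recorded revisions
kubectl rollout undo deployment/deployment-example                    # roll back to the previous revision
kubectl rollout undo deployment/deployment-example --to-revision=1    # or to a specific revision
kubectl rollout status deployment/deployment-example                  # watch the rollback complete
```

Either way, rolling back just scales the old ReplicaSet back up and the current one down, which is why the zero-replica ReplicaSets are kept around.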

3).DaemonSet controller:

One Pod is placed on each node;
[root@node22 yaml]# kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
node22   Ready    control-plane,master   15h   v1.23.10
node33   Ready    <none>                 14h   v1.23.10
node44   Ready    <none>                 14h   v1.23.10
[root@node22 yaml]# kubectl get pod -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
deployment-example-85b98978db-2nlwr   1/1     Running   0          5m59s   10.244.1.29   node33   <none>           <none>
deployment-example-85b98978db-hz2mv   1/1     Running   0          5m54s   10.244.1.30   node33   <none>           <none>
deployment-example-85b98978db-t6lp5   1/1     Running   0          5m56s   10.244.2.11   node44   <none>           <none>
replicaset-example-cs5m5              1/1     Running   0          25m     10.244.2.5    node44   <none>           <none>
replicaset-example-lmfcw              1/1     Running   0          21m     10.244.1.20   node33   <none>           <none>
replicaset-example-q4szq              1/1     Running   0          25m     10.244.1.19   node33   <none>           <none>
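The listing above still shows the earlier Deployment and ReplicaSet Pods; a minimal DaemonSet manifest (a sketch, with illustrative names) looks like the following. Note that there is no replicas field — the number of eligible nodes determines the Pod count:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-example    # illustrative name
spec:
  selector:
    matchLabels:
      app: daemonset-nginx
  template:
    metadata:
      labels:
        app: daemonset-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```

On a cluster like the one above, this would normally place one Pod each on node33 and node44; the control-plane node is skipped by default because of its taint, unless a matching toleration is added to the Pod template.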

4).Job controller:

Use the controller to run a one-off computation of pi to 1000 decimal places;
[root@node22 yaml]# kubectl delete -f deployment.yaml
deployment.apps "deployment-example" deleted
[root@node22 yaml]# vim job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(1000)"]
      restartPolicy: Never
  backoffLimit: 4
[root@node22 yaml]# kubectl apply -f job.yaml
job.batch/pi unchanged
[root@node22 yaml]# kubectl get pod
NAME       READY   STATUS      RESTARTS   AGE
pi-nt9vb   0/1     Completed   0          98s
[root@server2 k8s]# kubectl logs pi-nt9vb		## view the computed value of pi
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420199

5).CronJob controller:
Runs an action periodically;

[root@node22 yaml]# vim cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"    # cron format: minute hour day-of-month month day-of-week; here: every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from k8s cluster
          restartPolicy: OnFailure
[root@node22 yaml]# kubectl apply -f cronjob.yaml
cronjob.batch/hello created
[root@node22 yaml]# kubectl get pod
NAME                             READY   STATUS      RESTARTS   AGE
hello-27690053-fkn4b             0/1     Completed   0          28s
[root@node22 yaml]# kubectl logs hello-27690053-fkn4b
Thu Aug 25 04:53:01 UTC 2022
Hello from k8s cluster
[root@node22 yaml]# kubectl logs hello-27690054-4xb4t
Thu Aug 25 04:54:00 UTC 2022
Hello from k8s cluster
[root@node22 yaml]# kubectl logs hello-27690055-fkbwz
Thu Aug 25 04:55:00 UTC 2022
Hello from k8s cluster