Deploying Kubernetes (k8s) with kubeadm

This section can be performed on the k8s-master node only.

1. Switch the Kubernetes yum repository to a domestic mirror

[root@M001 ~]# touch /etc/yum.repos.d/kubernetes.repo
[root@M001 ~]# vi /etc/yum.repos.d/kubernetes.repo
[root@M001 ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg


[root@M001 ~]# 
[root@M001 ~]# yum clean all                        
33 files removed
[root@M001 ~]# yum makecache 
CentOS Stream 9 - BaseOS                                                                                                               4.2 MB/s | 6.2 MB     00:01    
CentOS Stream 9 - AppStream                                                                                                            6.1 MB/s |  17 MB     00:02    
CentOS Stream 9 - Extras packages                                                                                                       15 kB/s |  12 kB     00:00    
Docker CE Stable - x86_64                                                                                                               99 kB/s |  27 kB     00:00    
Kubernetes                                                                                                                             577 kB/s | 175 kB     00:00    
Metadata cache created.
[root@M001 ~]# yum repolist                         
repo id                                                                    repo name
appstream                                                                  CentOS Stream 9 - AppStream
baseos                                                                     CentOS Stream 9 - BaseOS
docker-ce-stable                                                           Docker CE Stable - x86_64
extras-common                                                              CentOS Stream 9 - Extras packages
kubernetes                                                                 Kubernetes
[root@M001 ~]#

Note: the URLs after gpgkey= must stay on a single line; do not wrap them. This is an easy mistake to make if you are not familiar with yum repo configuration.
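As a quick sanity check (a minimal sketch), print the gpgkey line and confirm that both URLs appear on that one line:

grep '^gpgkey=' /etc/yum.repos.d/kubernetes.repo
# expected: a single output line containing both yum-key.gpg and rpm-package-key.gpg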

2. Install specific versions of kubeadm, kubelet and kubectl

kubeadm, kubelet and kubectl are all tools for working with a Kubernetes cluster, but each serves a different purpose.

  • kubeadm is the tool for installing and configuring a Kubernetes cluster. It automates cluster setup by handling tasks such as bootstrapping the control plane, joining nodes to the cluster, and configuring networking.
  • kubelet is the primary node agent and runs on every node in the cluster. It manages the containers and Pods running on its node, communicates with the control plane to receive instructions, and reports the status of the node and its workloads.
  • kubectl is the command-line tool for interacting with a Kubernetes cluster. It lets you deploy and manage applications, inspect and modify cluster resources, and view logs and other diagnostic information.

In short, kubeadm sets up the cluster, kubelet manages the nodes, and kubectl interacts with the cluster from the command line; the commands below show each of them in action.
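After the installation in the next step, each tool can be exercised on its own; for example (a simple sketch):

kubeadm version             # the cluster bootstrapping tool
kubectl version --client    # the CLI client
systemctl status kubelet    # the node agent, managed as a systemd service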

For release information, see the official site:

https://kubernetes.io/releases/download/

Install the chosen versions of kubeadm, kubelet and kubectl on all nodes. At the time of this deployment the latest release was 1.27.x (only partially available in the mirror), so we use the previous release, 1.26.0.
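To see which versions the repository actually offers before deciding, you can list the candidates (a simple sketch; output omitted):

yum list --showduplicates kubeadm kubelet kubectl | tail -n 30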

[root@M001 ~]# yum info kubeadm-1.26.0 kubelet-1.26.0 kubectl-1.26.0
CentOS Stream 9 - BaseOS                                                                                                               4.6 MB/s | 6.2 MB     00:01    
CentOS Stream 9 - AppStream                                                                                                            6.7 MB/s |  17 MB     00:02    
CentOS Stream 9 - Extras packages                                                                                                       15 kB/s |  12 kB     00:00    
Docker CE Stable - x86_64                                                                                                               91 kB/s |  27 kB     00:00    
Kubernetes                                                                                                                             479 kB/s | 175 kB     00:00    
Available Packages
Name         : kubeadm
Version      : 1.26.0
Release      : 0
Architecture : x86_64
Size         : 10 M
Source       : kubelet-1.26.0-0.src.rpm
Repository   : kubernetes
Summary      : Command-line utility for administering a Kubernetes cluster.
URL          : https://kubernetes.io
License      : ASL 2.0
Description  : Command-line utility for administering a Kubernetes cluster.

Name         : kubectl
Version      : 1.26.0
Release      : 0
Architecture : x86_64
Size         : 11 M
Source       : kubelet-1.26.0-0.src.rpm
Repository   : kubernetes
Summary      : Command-line utility for interacting with a Kubernetes cluster.
URL          : https://kubernetes.io
License      : ASL 2.0
Description  : Command-line utility for interacting with a Kubernetes cluster.

Name         : kubelet
Version      : 1.26.0
Release      : 0
Architecture : x86_64
Size         : 22 M
Source       : kubelet-1.26.0-0.src.rpm
Repository   : kubernetes
Summary      : Container cluster management
URL          : https://kubernetes.io
License      : ASL 2.0
Description  : The node agent of Kubernetes, the container cluster manager.

[root@M001 ~]# 

[root@M001 ~]# yum install -y kubeadm-1.26.0 kubelet-1.26.0 kubectl-1.26.0
...
Installed:
  conntrack-tools-1.4.7-2.el9.x86_64       cri-tools-1.26.0-0.x86_64        kubeadm-1.26.0-0.x86_64                      kubectl-1.26.0-0.x86_64                      
  kubelet-1.26.0-0.x86_64                  kubernetes-cni-1.2.0-0.x86_64    libnetfilter_cthelper-1.0.0-22.el9.x86_64    libnetfilter_cttimeout-1.0.0-19.el9.x86_64   
  libnetfilter_queue-1.0.5-1.el9.x86_64    socat-1.7.4.1-5.el9.x86_64      

Complete!
[root@M001 ~]# 



[root@M001 ~]# yum info kubeadm
Last metadata expiration check: 0:03:03 ago on Fri 16 Jun 2023 11:46:36 AM CST.
Installed Packages
Name         : kubeadm
Version      : 1.26.0
Release      : 0
Architecture : x86_64
Size         : 45 M
Source       : kubelet-1.26.0-0.src.rpm
Repository   : @System
From repo    : kubernetes
Summary      : Command-line utility for administering a Kubernetes cluster.
URL          : https://kubernetes.io
License      : ASL 2.0
Description  : Command-line utility for administering a Kubernetes cluster.

Available Packages
Name         : kubeadm
Version      : 1.27.3
Release      : 0
Architecture : x86_64
Size         : 11 M
Source       : kubelet-1.27.3-0.src.rpm
Repository   : kubernetes
Summary      : Command-line utility for administering a Kubernetes cluster.
URL          : https://kubernetes.io
License      : ASL 2.0
Description  : Command-line utility for administering a Kubernetes cluster.

[root@M001 ~]#

Enable kubelet to start at boot. Until `kubeadm init` has generated the kubelet configuration, the service will keep failing and restarting, so the failure shown below is expected at this stage:

[root@M001 ~]# systemctl status kubelet
○ kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; preset: disabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: inactive (dead)
       Docs: https://kubernetes.io/docs/
[root@M001 ~]# 
[root@M001 ~]# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@M001 ~]# systemctl start kubelet
[root@M001 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: disabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Fri 2023-06-16 11:51:12 CST; 3s ago
       Docs: https://kubernetes.io/docs/
    Process: 46780 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
   Main PID: 46780 (code=exited, status=1/FAILURE)
        CPU: 116ms

Jun 16 11:51:12 M001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 16 11:51:12 M001 systemd[1]: kubelet.service: Failed with result 'exit-code'.
[root@M001 ~]#

Configure shell auto-completion for kubeadm and kubectl (optional):

yum install -y bash-completion 
kubeadm completion bash > /etc/bash_completion.d/kubeadm
kubectl completion bash > /etc/bash_completion.d/kubectl
source /etc/bash_completion.d/kubeadm /etc/bash_completion.d/kubectl
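Optionally, on top of the completion setup above, many people also add a short alias for kubectl and wire the completion to it; `__start_kubectl` is the completion function defined by the generated kubectl completion script (a convenience sketch):

echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc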

3. Generate the initial kubeadm configuration file

The official recommendation is to use --config to point at a configuration file and to put everything that used to be set through individual flags into that file; for details see "Set Kubelet parameters via a config file". Kubernetes originally did this to support Dynamic Kubelet Configuration, but that feature was deprecated in v1.22 and removed in v1.24. If you need to adjust the kubelet configuration across all nodes of a cluster, the recommended approach is still to distribute the configuration to each node with a tool such as Ansible.

For anything that is unclear about `kubeadm init`, refer to the official documentation:

https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/

3.1 Generate the configuration file:

[root@M001 ~]# pwd
/root
[root@M001 ~]# ll
total 4
-rw-------. 1 root root 820 Jun 15 12:32 anaconda-ks.cfg
[root@M001 ~]# kubeadm config print init-defaults > kubeadm-init.yml
[root@M001 ~]# ll
total 8
-rw-------. 1 root root 820 Jun 15 12:32 anaconda-ks.cfg
-rw-r--r--. 1 root root 807 Jun 16 11:55 kubeadm-init.yml
[root@M001 ~]# 
[root@M001 ~]# cat -n kubeadm-init.yml
     1  apiVersion: kubeadm.k8s.io/v1beta3
     2  bootstrapTokens:
     3  - groups:
     4    - system:bootstrappers:kubeadm:default-node-token
     5    token: abcdef.0123456789abcdef
     6    ttl: 24h0m0s
     7    usages:
     8    - signing
     9    - authentication
    10  kind: InitConfiguration
    11  localAPIEndpoint:
    12    advertiseAddress: 1.2.3.4
    13    bindPort: 6443
    14  nodeRegistration:
    15    criSocket: unix:///var/run/containerd/containerd.sock
    16    imagePullPolicy: IfNotPresent
    17    name: node
    18    taints: null
    19  ---
    20  apiServer:
    21    timeoutForControlPlane: 4m0s
    22  apiVersion: kubeadm.k8s.io/v1beta3
    23  certificatesDir: /etc/kubernetes/pki
    24  clusterName: kubernetes
    25  controllerManager: {}
    26  dns: {}
    27  etcd:
    28    local:
    29      dataDir: /var/lib/etcd
    30  imageRepository: registry.k8s.io
    31  kind: ClusterConfiguration
    32  kubernetesVersion: 1.26.0
    33  networking:
    34    dnsDomain: cluster.local
    35    serviceSubnet: 10.96.0.0/12
    36  scheduler: {}
[root@M001 ~]#

3.2 Customize the default configuration into the file used to initialize this cluster:

[root@M001 ~]# cat kubeadm-init.yml                            
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s                          # to make the token never expire, change this to 0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.11.120      # change to the k8s-master node IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
#  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: xml_k8s                   # cluster name; change to whatever you prefer
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # use a domestic mirror registry to speed up image pulls
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.112.0.0/12                                  # added: Pod network CIDR; must not overlap with serviceSubnet (e.g. 10.100.0.0/16 would overlap; validation does not flag it, but avoid the conflict anyway)
scheduler: {}

---   # one kubeadm config file can contain several configuration kinds, separated by three dashes (---)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                             # declare systemd as the cgroup driver
failSwapOn: false

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs      # use IPVS; depends on the ip_vs kernel modules loaded in an earlier step


[root@M001 ~]#

kube-proxy is the network proxy and load balancer in Kubernetes that implements network traffic from Services to Pods. It offers several modes for handling Service connections:

  1. userspace: the legacy mode. kube-proxy installs iptables rules in the kernel that redirect traffic to a proxy process in user space, which then forwards each request to a backend; handling every request in user space makes this the slowest mode.
  2. iptables: the default mode. kube-proxy programs iptables rules directly in the kernel to implement forwarding, load balancing and service discovery, with no extra per-request processing in user space.
  3. IPVS (IP Virtual Server): a high-performance load-balancing technology that can replace the iptables mode. IPVS maintains a scheduling table in the kernel, giving better performance and scalability; in this mode kube-proxy programs IPVS rules to distribute traffic across the backend Pods.

Each mode has its own characteristics and use cases; choose the one that fits your needs. If no mode is specified, kube-proxy defaults to iptables.

Note that the userspace mode has been deprecated since Kubernetes 1.19 and will be removed in a later release; use iptables or IPVS instead.
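Once the cluster is running, you can verify which mode kube-proxy actually selected. A rough sketch (the exact log wording varies between versions):

# confirm the ip_vs kernel modules loaded in the earlier preparation step are present
lsmod | grep -E '^ip_vs'
# check the kube-proxy logs for the proxier it chose
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -iE 'proxier|proxy mode'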

The most important configuration points (a diff trick for reviewing them follows this list):

  • controlPlaneEndpoint can be used to set a shared endpoint for all control-plane nodes
  • imageRepository specifies the image registry to pull from
  • networking.podSubnet specifies the Pod network address range
  • cgroupDriver: declares systemd as the cgroup driver
  • mode: declares IPVS; if unset, the iptables mode is used by default
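To review exactly which of these points were changed relative to the defaults, one convenient trick (a simple sketch) is to diff the regenerated defaults against the customized file:

diff <(kubeadm config print init-defaults) kubeadm-init.yml
# lines prefixed with ">" are the customizations described above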

Notes on the apiVersion fields in the configuration file:

kubeadm supports the following configuration kinds, each paired with the apiVersion shown (one-to-one):

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration

See the official reference:

https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/
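kubeadm can also print the defaults for the component-config kinds listed above, which is handy for checking which fields KubeletConfiguration and KubeProxyConfiguration accept (a simple sketch; output omitted):

kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration
kubeadm config print join-defaults    # defaults for JoinConfiguration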

3.3 Check the configuration for errors:

Validate the configuration in dry-run mode:

kubeadm init --config kubeadm-init.yml --dry-run

If the nodeRegistration.name parameter is not set correctly, you will see output like:

[root@M001 ~]# kubeadm init --config kubeadm-init.yml --dry-run 
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "node" could not be reached
        [WARNING Hostname]: hostname "node": lookup node on 192.168.11.1:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
        [ERROR Mem]: the system RAM (739 MB) is less than the minimum 1700 MB
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@M001 ~]#

The check also reveals the hardware requirements that must be met (quick ways to verify them follow this list):

  • at least 2 CPU cores
  • at least 2 GB of RAM (the preflight check requires a minimum of 1700 MB)
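Quick ways to check both requirements on the node, plus the escape hatch kubeadm itself mentions (a lab-only sketch; not recommended for production):

nproc     # CPU cores
free -h   # memory
# for a resource-constrained lab you could skip these specific checks:
# kubeadm init --config kubeadm-init.yml --dry-run --ignore-preflight-errors=NumCPU,Mem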

Once all errors and warnings have been resolved, the check output looks like this:

[root@M001 ~]# kubeadm init --config kubeadm-init.yml --dry-run
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[preflight] Would pull the required images (like 'kubeadm config images pull')
[certs] Using certificateDir folder "/etc/kubernetes/tmp/kubeadm-init-dryrun2269528674"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m001] and IPs [10.96.0.1 192.168.11.120]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost m001] and IPs [192.168.11.120 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost m001] and IPs [192.168.11.120 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes/tmp/kubeadm-init-dryrun2269528674"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/etc/kubernetes/tmp/kubeadm-init-dryrun2269528674/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/etc/kubernetes/tmp/kubeadm-init-dryrun2269528674/config.yaml"
[control-plane] Using manifest folder "/etc/kubernetes/tmp/kubeadm-init-dryrun2269528674"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Would ensure that "/var/lib/etcd" directory is present
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/tmp/kubeadm-init-dryrun2269528674"
[dryrun] Wrote certificates, kubeconfig files and control plane manifests to the "/etc/kubernetes/tmp/kubeadm-init-dryrun2269528674" directory
[dryrun] The certificates or kubeconfig files would not be printed due to their sensitive nature
[dryrun] Please examine the "/etc/kubernetes/tmp/kubeadm-init-dryrun2269528674" directory for details about what would be written
[dryrun] Would write file "/etc/kubernetes/manifests/kube-apiserver.yaml" with content:
        apiVersion: v1
        kind: Pod
        metadata:
          annotations:
            kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.11.120:6443
          creationTimestamp: null
          labels:
            component: kube-apiserver
            tier: control-plane
          name: kube-apiserver
          namespace: kube-system
        spec:
          containers:
          - command:
            - kube-apiserver
            - --advertise-address=192.168.11.120
            - --allow-privileged=true
            - --authorization-mode=Node,RBAC
            - --client-ca-file=/etc/kubernetes/pki/ca.crt
            - --enable-admission-plugins=NodeRestriction
            - --enable-bootstrap-token-auth=true
            - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
            - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
            - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
            - --etcd-servers=https://127.0.0.1:2379
            - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
            - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
            - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
            - --requestheader-allowed-names=front-proxy-client
            - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
            - --requestheader-extra-headers-prefix=X-Remote-Extra-
            - --requestheader-group-headers=X-Remote-Group
            - --requestheader-username-headers=X-Remote-User
            - --secure-port=6443
            - --service-account-issuer=https://kubernetes.default.svc.cluster.local
            - --service-account-key-file=/etc/kubernetes/pki/sa.pub
            - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
            - --service-cluster-ip-range=10.96.0.0/12
            - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
            - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
            image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.0
            imagePullPolicy: IfNotPresent
            livenessProbe:
              failureThreshold: 8
              httpGet:
                host: 192.168.11.120
                path: /livez
                port: 6443
                scheme: HTTPS
              initialDelaySeconds: 10
              periodSeconds: 10
              timeoutSeconds: 15
            name: kube-apiserver
            readinessProbe:
              failureThreshold: 3
              httpGet:
                host: 192.168.11.120
                path: /readyz
                port: 6443
                scheme: HTTPS
              periodSeconds: 1
              timeoutSeconds: 15
            resources:
              requests:
                cpu: 250m
            startupProbe:
              failureThreshold: 24
              httpGet:
                host: 192.168.11.120
                path: /livez
                port: 6443
                scheme: HTTPS
              initialDelaySeconds: 10
              periodSeconds: 10
              timeoutSeconds: 15
            volumeMounts:
            - mountPath: /etc/ssl/certs
              name: ca-certs
              readOnly: true
            - mountPath: /etc/pki
              name: etc-pki
              readOnly: true
            - mountPath: /etc/kubernetes/pki
              name: k8s-certs
              readOnly: true
          hostNetwork: true
          priorityClassName: system-node-critical
          securityContext:
            seccompProfile:
              type: RuntimeDefault
          volumes:
          - hostPath:
              path: /etc/ssl/certs
              type: DirectoryOrCreate
            name: ca-certs
          - hostPath:
              path: /etc/pki
              type: DirectoryOrCreate
            name: etc-pki
          - hostPath:
              path: /etc/kubernetes/pki
              type: DirectoryOrCreate
            name: k8s-certs
        status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-controller-manager.yaml" with content:
        apiVersion: v1
        kind: Pod
        metadata:
          creationTimestamp: null
          labels:
            component: kube-controller-manager
            tier: control-plane
          name: kube-controller-manager
          namespace: kube-system
        spec:
          containers:
          - command:
            - kube-controller-manager
            - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
            - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
            - --bind-address=127.0.0.1
            - --client-ca-file=/etc/kubernetes/pki/ca.crt
            - --cluster-name=xml_k8s
            - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
            - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
            - --controllers=*,bootstrapsigner,tokencleaner
            - --kubeconfig=/etc/kubernetes/controller-manager.conf
            - --leader-elect=true
            - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
            - --root-ca-file=/etc/kubernetes/pki/ca.crt
            - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
            - --use-service-account-credentials=true
            image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
            imagePullPolicy: IfNotPresent
            livenessProbe:
              failureThreshold: 8
              httpGet:
                host: 127.0.0.1
                path: /healthz
                port: 10257
                scheme: HTTPS
              initialDelaySeconds: 10
              periodSeconds: 10
              timeoutSeconds: 15
            name: kube-controller-manager
            resources:
              requests:
                cpu: 200m
            startupProbe:
              failureThreshold: 24
              httpGet:
                host: 127.0.0.1
                path: /healthz
                port: 10257
                scheme: HTTPS
              initialDelaySeconds: 10
              periodSeconds: 10
              timeoutSeconds: 15
            volumeMounts:
            - mountPath: /etc/ssl/certs
              name: ca-certs
              readOnly: true
            - mountPath: /etc/pki
              name: etc-pki
              readOnly: true
            - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
              name: flexvolume-dir
            - mountPath: /etc/kubernetes/pki
              name: k8s-certs
              readOnly: true
            - mountPath: /etc/kubernetes/controller-manager.conf
              name: kubeconfig
              readOnly: true
          hostNetwork: true
          priorityClassName: system-node-critical
          securityContext:
            seccompProfile:
              type: RuntimeDefault
          volumes:
          - hostPath:
              path: /etc/ssl/certs
              type: DirectoryOrCreate
            name: ca-certs
          - hostPath:
              path: /etc/pki
              type: DirectoryOrCreate
            name: etc-pki
          - hostPath:
              path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
              type: DirectoryOrCreate
            name: flexvolume-dir
          - hostPath:
              path: /etc/kubernetes/pki
              type: DirectoryOrCreate
            name: k8s-certs
          - hostPath:
              path: /etc/kubernetes/controller-manager.conf
              type: FileOrCreate
            name: kubeconfig
        status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-scheduler.yaml" with content:
        apiVersion: v1
        kind: Pod
        metadata:
          creationTimestamp: null
          labels:
            component: kube-scheduler
            tier: control-plane
          name: kube-scheduler
          namespace: kube-system
        spec:
          containers:
          - command:
            - kube-scheduler
            - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
            - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
            - --bind-address=127.0.0.1
            - --kubeconfig=/etc/kubernetes/scheduler.conf
            - --leader-elect=true
            image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.0
            imagePullPolicy: IfNotPresent
            livenessProbe:
              failureThreshold: 8
              httpGet:
                host: 127.0.0.1
                path: /healthz
                port: 10259
                scheme: HTTPS
              initialDelaySeconds: 10
              periodSeconds: 10
              timeoutSeconds: 15
            name: kube-scheduler
            resources:
              requests:
                cpu: 100m
            startupProbe:
              failureThreshold: 24
              httpGet:
                host: 127.0.0.1
                path: /healthz
                port: 10259
                scheme: HTTPS
              initialDelaySeconds: 10
              periodSeconds: 10
              timeoutSeconds: 15
            volumeMounts:
            - mountPath: /etc/kubernetes/scheduler.conf
              name: kubeconfig
              readOnly: true
          hostNetwork: true
          priorityClassName: system-node-critical
          securityContext:
            seccompProfile:
              type: RuntimeDefault
          volumes:
          - hostPath:
              path: /etc/kubernetes/scheduler.conf
              type: FileOrCreate
            name: kubeconfig
        status: {}
[dryrun] Would write file "/var/lib/kubelet/config.yaml" with content:
        apiVersion: kubelet.config.k8s.io/v1beta1
        authentication:
          anonymous:
            enabled: false
          webhook:
            cacheTTL: 0s
            enabled: true
          x509:
            clientCAFile: /etc/kubernetes/pki/ca.crt
        authorization:
          mode: Webhook
          webhook:
            cacheAuthorizedTTL: 0s
            cacheUnauthorizedTTL: 0s
        cgroupDriver: systemd
        clusterDNS:
        - 10.96.0.10
        clusterDomain: cluster.local
        cpuManagerReconcilePeriod: 0s
        evictionPressureTransitionPeriod: 0s
        failSwapOn: false
        fileCheckFrequency: 0s
        healthzBindAddress: 127.0.0.1
        healthzPort: 10248
        httpCheckFrequency: 0s
        imageMinimumGCAge: 0s
        kind: KubeletConfiguration
        logging:
          flushFrequency: 0
          options:
            json:
              infoBufferSize: "0"
          verbosity: 0
        memorySwap: {}
        nodeStatusReportFrequency: 0s
        nodeStatusUpdateFrequency: 0s
        rotateCertificates: true
        runtimeRequestTimeout: 0s
        shutdownGracePeriod: 0s
        shutdownGracePeriodCriticalPods: 0s
        staticPodPath: /etc/kubernetes/manifests
        streamingConnectionIdleTimeout: 0s
        syncFrequency: 0s
        volumeStatsAggPeriod: 0s
[dryrun] Would write file "/var/lib/kubelet/kubeadm-flags.env" with content:
        KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/tmp/kubeadm-init-dryrun2269528674". This can take up to 4m0s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
        apiVersion: v1
        data:
          ClusterConfiguration: |
            apiServer:
              extraArgs:
                authorization-mode: Node,RBAC
              timeoutForControlPlane: 4m0s
            apiVersion: kubeadm.k8s.io/v1beta3
            certificatesDir: /etc/kubernetes/pki
            clusterName: xml_k8s
            controllerManager: {}
            dns: {}
            etcd:
              local:
                dataDir: /var/lib/etcd
            imageRepository: registry.aliyuncs.com/google_containers
            kind: ClusterConfiguration
            kubernetesVersion: v1.26.0
            networking:
              dnsDomain: cluster.local
              serviceSubnet: 10.96.0.0/12
            scheduler: {}
        kind: ConfigMap
        metadata:
          creationTimestamp: null
          name: kubeadm-config
          namespace: kube-system
[dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: Role
        metadata:
          creationTimestamp: null
          name: kubeadm:nodes-kubeadm-config
          namespace: kube-system
        rules:
        - apiGroups:
          - ""
          resourceNames:
          - kubeadm-config
          resources:
          - configmaps
          verbs:
          - get
[dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: RoleBinding
        metadata:
          creationTimestamp: null
          name: kubeadm:nodes-kubeadm-config
          namespace: kube-system
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: Role
          name: kubeadm:nodes-kubeadm-config
        subjects:
        - kind: Group
          name: system:bootstrappers:kubeadm:default-node-token
        - kind: Group
          name: system:nodes
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
        apiVersion: v1
        data:
          kubelet: |
            apiVersion: kubelet.config.k8s.io/v1beta1
            authentication:
              anonymous:
                enabled: false
              webhook:
                cacheTTL: 0s
                enabled: true
              x509:
                clientCAFile: /etc/kubernetes/pki/ca.crt
            authorization:
              mode: Webhook
              webhook:
                cacheAuthorizedTTL: 0s
                cacheUnauthorizedTTL: 0s
            cgroupDriver: systemd
            clusterDNS:
            - 10.96.0.10
            clusterDomain: cluster.local
            cpuManagerReconcilePeriod: 0s
            evictionPressureTransitionPeriod: 0s
            failSwapOn: false
            fileCheckFrequency: 0s
            healthzBindAddress: 127.0.0.1
            healthzPort: 10248
            httpCheckFrequency: 0s
            imageMinimumGCAge: 0s
            kind: KubeletConfiguration
            logging:
              flushFrequency: 0
              options:
                json:
                  infoBufferSize: "0"
              verbosity: 0
            memorySwap: {}
            nodeStatusReportFrequency: 0s
            nodeStatusUpdateFrequency: 0s
            rotateCertificates: true
            runtimeRequestTimeout: 0s
            shutdownGracePeriod: 0s
            shutdownGracePeriodCriticalPods: 0s
            staticPodPath: /etc/kubernetes/manifests
            streamingConnectionIdleTimeout: 0s
            syncFrequency: 0s
            volumeStatsAggPeriod: 0s
        kind: ConfigMap
        metadata:
          creationTimestamp: null
          name: kubelet-config
          namespace: kube-system
[dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: Role
        metadata:
          creationTimestamp: null
          name: kubeadm:kubelet-config
          namespace: kube-system
        rules:
        - apiGroups:
          - ""
          resourceNames:
          - kubelet-config
          resources:
          - configmaps
          verbs:
          - get
[dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: RoleBinding
        metadata:
          creationTimestamp: null
          name: kubeadm:kubelet-config
          namespace: kube-system
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: Role
          name: kubeadm:kubelet-config
        subjects:
        - kind: Group
          name: system:nodes
        - kind: Group
          name: system:bootstrappers:kubeadm:default-node-token
[dryrun] Would perform action GET on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "m001"
[dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "m001"
[dryrun] Attached patch:
        {"metadata":{"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/containerd/containerd.sock"}}}
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node m001 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node m001 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[dryrun] Would perform action GET on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "m001"
[dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "m001"
[dryrun] Attached patch:
        {"metadata":{"labels":{"node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""}},"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"}]}}
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[dryrun] Would perform action GET on resource "secrets" in API group "core/v1"
[dryrun] Resource name: "bootstrap-token-abcdef"
[dryrun] Would perform action CREATE on resource "secrets" in API group "core/v1"
[dryrun] Attached object:
        apiVersion: v1
        data:
          auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=
          expiration: MjAyMy0wNi0xN1QwNjozOTozM1o=
          token-id: YWJjZGVm
          token-secret: MDEyMzQ1Njc4OWFiY2RlZg==
          usage-bootstrap-authentication: dHJ1ZQ==
          usage-bootstrap-signing: dHJ1ZQ==
        kind: Secret
        metadata:
          creationTimestamp: null
          name: bootstrap-token-abcdef
          namespace: kube-system
        type: bootstrap.kubernetes.io/token
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[dryrun] Would perform action CREATE on resource "clusterroles" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRole
        metadata:
          creationTimestamp: null
          name: kubeadm:get-nodes
          namespace: kube-system
        rules:
        - apiGroups:
          - ""
          resources:
          - nodes
          verbs:
          - get
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          creationTimestamp: null
          name: kubeadm:get-nodes
          namespace: kube-system
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: kubeadm:get-nodes
        subjects:
        - kind: Group
          name: system:bootstrappers:kubeadm:default-node-token
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          creationTimestamp: null
          name: kubeadm:kubelet-bootstrap
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: system:node-bootstrapper
        subjects:
        - kind: Group
          name: system:bootstrappers:kubeadm:default-node-token
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          creationTimestamp: null
          name: kubeadm:node-autoapprove-bootstrap
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
        subjects:
        - kind: Group
          name: system:bootstrappers:kubeadm:default-node-token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          creationTimestamp: null
          name: kubeadm:node-autoapprove-certificate-rotation
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
        subjects:
        - kind: Group
          name: system:nodes
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
        apiVersion: v1
        data:
          kubeconfig: |
            apiVersion: v1
            clusters:
            - cluster:
                certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJek1EWXhOakEyTXpreU9Wb1hEVE16TURZeE16QTJNemt5T1Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSnpQCkhOckFPbmpYeXNUR3N0Z2FZc2tNMjhMV3BEVVAvOGtiQWppRzVLRHVrTmZYWnl3R08xRkd6bDBZTDlnb3NQUWwKUHcwOVJabm43cUVQOCtNdncxN2VxOVkyYzFFcFVJNDNSdndyV2hIOHRHWWpKTkhjaTRnc0VkMEpmaTR5Mm56Vgo4RExmTjNqcGtsOUlzOTdtMi9IV0VNVC9ldk4wOUJHVE9tMXJwaGRqNVV1d3BFMGIrYjRxVVJGaDhpM1FxaDF0CnRnczk3NU1raEp2UVM1MUQ4ZTczdThXdThYVG80RjJIYVV2SllpNGx3cEdUQmlLdUZLZWJOZVplUUROUmQrSDAKUmlRM09Udjk1b0NValVtQWhtemtZK3pkZ2pJV0xnME8rSFhxalQ5bTk4T0l5ejVaK05XUUhzZlVWU0thZDMxTApwYWZ0NTgvWjJMcSs1S0ZWZ3kwQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZQOUNoUlZLY2krajk5YjRLWDZLV3BFVHFyUnVNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBSStEVUZhUjQ2TDNteHZOdEM2NApyczl2eHdiUkdST2c1YURFeFZYSzN1Qy8zU0lKY0VWSUZNQTVua1NBSEliRWxxTG40SHlkc0tuR2luUGFiMUtaCjRsMVJQZ3EvVGJ4VDNteTVRd3NLV1lod3V2M2xEM3A3bjRvU2NsL2ZGSjluN3hOWGhmbXU4SVMwQjAzcUVxTnUKT3ZGaFZvSlNoYW9Wb2RPcXhjTTVRMmpmbWFkekEzZHpRaU1DdFZmZnBXVmNHd0Y3d29FRTFHdVFnMXZ1R1ZlcgpSYS9oWElZbytCSDVEM2dibXFPSFl1S3VuOS9NM2xEeDRlb0RINTBPOXo1VGFsUlN0K3B0QlBMWGFjUnQreVByClBXTVg5eGdDU3V2N3VxcWxVRnc4Wk5Nc0FnUkhUS3AwcjBHc2x3L3NjLy9NVmoySlQ3aG1wc1U4bE12bUwzeFAKMncwPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
                server: https://192.168.11.120:6443
              name: ""
            contexts: null
            current-context: ""
            kind: Config
            preferences: {}
            users: null
        kind: ConfigMap
        metadata:
          creationTimestamp: null
          name: cluster-info
          namespace: kube-public
[dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: Role
        metadata:
          creationTimestamp: null
          name: kubeadm:bootstrap-signer-clusterinfo
          namespace: kube-public
        rules:
        - apiGroups:
          - ""
          resourceNames:
          - cluster-info
          resources:
          - configmaps
          verbs:
          - get
[dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: RoleBinding
        metadata:
          creationTimestamp: null
          name: kubeadm:bootstrap-signer-clusterinfo
          namespace: kube-public
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: Role
          name: kubeadm:bootstrap-signer-clusterinfo
        subjects:
        - kind: User
          name: system:anonymous
[dryrun] Would perform action LIST on resource "deployments" in API group "apps/v1"
[dryrun] Would perform action GET on resource "configmaps" in API group "core/v1"
[dryrun] Resource name: "coredns"
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
        apiVersion: v1
        data:
          Corefile: |
            .:53 {
                errors
                health {
                   lameduck 5s
                }
                ready
                kubernetes cluster.local in-addr.arpa ip6.arpa {
                   pods insecure
                   fallthrough in-addr.arpa ip6.arpa
                   ttl 30
                }
                prometheus :9153
                forward . /etc/resolv.conf {
                   max_concurrent 1000
                }
                cache 30
                loop
                reload
                loadbalance
            }
        kind: ConfigMap
        metadata:
          creationTimestamp: null
          name: coredns
          namespace: kube-system
[dryrun] Would perform action CREATE on resource "clusterroles" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRole
        metadata:
          creationTimestamp: null
          name: system:coredns
        rules:
        - apiGroups:
          - ""
          resources:
          - endpoints
          - services
          - pods
          - namespaces
          verbs:
          - list
          - watch
        - apiGroups:
          - ""
          resources:
          - nodes
          verbs:
          - get
        - apiGroups:
          - discovery.k8s.io
          resources:
          - endpointslices
          verbs:
          - list
          - watch
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          creationTimestamp: null
          name: system:coredns
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: system:coredns
        subjects:
        - kind: ServiceAccount
          name: coredns
          namespace: kube-system
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          creationTimestamp: null
          name: coredns
          namespace: kube-system
[dryrun] Would perform action CREATE on resource "deployments" in API group "apps/v1"
[dryrun] Attached object:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          creationTimestamp: null
          labels:
            k8s-app: kube-dns
          name: coredns
          namespace: kube-system
        spec:
          replicas: 2
          selector:
            matchLabels:
              k8s-app: kube-dns
          strategy:
            rollingUpdate:
              maxUnavailable: 1
            type: RollingUpdate
          template:
            metadata:
              creationTimestamp: null
              labels:
                k8s-app: kube-dns
            spec:
              affinity:
                podAntiAffinity:
                  preferredDuringSchedulingIgnoredDuringExecution:
                  - podAffinityTerm:
                      labelSelector:
                        matchExpressions:
                        - key: k8s-app
                          operator: In
                          values:
                          - kube-dns
                      topologyKey: kubernetes.io/hostname
                    weight: 100
              containers:
              - args:
                - -conf
                - /etc/coredns/Corefile
                image: registry.aliyuncs.com/google_containers/coredns:v1.9.3
                imagePullPolicy: IfNotPresent
                livenessProbe:
                  failureThreshold: 5
                  httpGet:
                    path: /health
                    port: 8080
                    scheme: HTTP
                  initialDelaySeconds: 60
                  successThreshold: 1
                  timeoutSeconds: 5
                name: coredns
                ports:
                - containerPort: 53
                  name: dns
                  protocol: UDP
                - containerPort: 53
                  name: dns-tcp
                  protocol: TCP
                - containerPort: 9153
                  name: metrics
                  protocol: TCP
                readinessProbe:
                  httpGet:
                    path: /ready
                    port: 8181
                    scheme: HTTP
                resources:
                  limits:
                    memory: 170Mi
                  requests:
                    cpu: 100m
                    memory: 70Mi
                securityContext:
                  allowPrivilegeEscalation: false
                  capabilities:
                    add:
                    - NET_BIND_SERVICE
                    drop:
                    - all
                  readOnlyRootFilesystem: true
                volumeMounts:
                - mountPath: /etc/coredns
                  name: config-volume
                  readOnly: true
              dnsPolicy: Default
              nodeSelector:
                kubernetes.io/os: linux
              priorityClassName: system-cluster-critical
              serviceAccountName: coredns
              tolerations:
              - key: CriticalAddonsOnly
                operator: Exists
              - effect: NoSchedule
                key: node-role.kubernetes.io/control-plane
              volumes:
              - configMap:
                  items:
                  - key: Corefile
                    path: Corefile
                  name: coredns
                name: config-volume
        status: {}
[dryrun] Would perform action CREATE on resource "services" in API group "core/v1"
[dryrun] Attached object:
        apiVersion: v1
        kind: Service
        metadata:
          annotations:
            prometheus.io/port: "9153"
            prometheus.io/scrape: "true"
          creationTimestamp: null
          labels:
            k8s-app: kube-dns
            kubernetes.io/cluster-service: "true"
            kubernetes.io/name: CoreDNS
          name: kube-dns
          namespace: kube-system
          resourceVersion: "0"
        spec:
          clusterIP: 10.96.0.10
          ports:
          - name: dns
            port: 53
            protocol: UDP
            targetPort: 53
          - name: dns-tcp
            port: 53
            protocol: TCP
            targetPort: 53
          - name: metrics
            port: 9153
            protocol: TCP
            targetPort: 9153
          selector:
            k8s-app: kube-dns
        status:
          loadBalancer: {}
[addons] Applied essential addon: CoreDNS
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
        apiVersion: v1
        data:
          config.conf: |-
            apiVersion: kubeproxy.config.k8s.io/v1alpha1
            bindAddress: 0.0.0.0
            bindAddressHardFail: false
            clientConnection:
              acceptContentTypes: ""
              burst: 0
              contentType: ""
              kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
              qps: 0
            clusterCIDR: ""
            configSyncPeriod: 0s
            conntrack:
              maxPerCore: null
              min: null
              tcpCloseWaitTimeout: null
              tcpEstablishedTimeout: null
            detectLocal:
              bridgeInterface: ""
              interfaceNamePrefix: ""
            detectLocalMode: ""
            enableProfiling: false
            healthzBindAddress: ""
            hostnameOverride: ""
            iptables:
              localhostNodePorts: null
              masqueradeAll: false
              masqueradeBit: null
              minSyncPeriod: 0s
              syncPeriod: 0s
            ipvs:
              excludeCIDRs: null
              minSyncPeriod: 0s
              scheduler: ""
              strictARP: false
              syncPeriod: 0s
              tcpFinTimeout: 0s
              tcpTimeout: 0s
              udpTimeout: 0s
            kind: KubeProxyConfiguration
            metricsBindAddress: ""
            mode: ipvs
            nodePortAddresses: null
            oomScoreAdj: null
            portRange: ""
            showHiddenMetricsForVersion: ""
            winkernel:
              enableDSR: false
              forwardHealthCheckVip: false
              networkName: ""
              rootHnsEndpointName: ""
              sourceVip: ""
          kubeconfig.conf: |-
            apiVersion: v1
            kind: Config
            clusters:
            - cluster:
                certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
                server: https://192.168.11.120:6443
              name: default
            contexts:
            - context:
                cluster: default
                namespace: default
                user: default
              name: default
            current-context: default
            users:
            - name: default
              user:
                tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
        kind: ConfigMap
        metadata:
          creationTimestamp: null
          labels:
            app: kube-proxy
          name: kube-proxy
          namespace: kube-system
[dryrun] Would perform action CREATE on resource "daemonsets" in API group "apps/v1"
[dryrun] Attached object:
        apiVersion: apps/v1
        kind: DaemonSet
        metadata:
          creationTimestamp: null
          labels:
            k8s-app: kube-proxy
          name: kube-proxy
          namespace: kube-system
        spec:
          selector:
            matchLabels:
              k8s-app: kube-proxy
          template:
            metadata:
              creationTimestamp: null
              labels:
                k8s-app: kube-proxy
            spec:
              containers:
              - command:
                - /usr/local/bin/kube-proxy
                - --config=/var/lib/kube-proxy/config.conf
                - --hostname-override=$(NODE_NAME)
                env:
                - name: NODE_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
                image: registry.aliyuncs.com/google_containers/kube-proxy:v1.26.0
                imagePullPolicy: IfNotPresent
                name: kube-proxy
                resources: {}
                securityContext:
                  privileged: true
                volumeMounts:
                - mountPath: /var/lib/kube-proxy
                  name: kube-proxy
                - mountPath: /run/xtables.lock
                  name: xtables-lock
                - mountPath: /lib/modules
                  name: lib-modules
                  readOnly: true
              hostNetwork: true
              nodeSelector:
                kubernetes.io/os: linux
              priorityClassName: system-node-critical
              serviceAccountName: kube-proxy
              tolerations:
              - operator: Exists
              volumes:
              - configMap:
                  name: kube-proxy
                name: kube-proxy
              - hostPath:
                  path: /run/xtables.lock
                  type: FileOrCreate
                name: xtables-lock
              - hostPath:
                  path: /lib/modules
                name: lib-modules
          updateStrategy:
            type: RollingUpdate
        status:
          currentNumberScheduled: 0
          desiredNumberScheduled: 0
          numberMisscheduled: 0
          numberReady: 0
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          creationTimestamp: null
          name: kube-proxy
          namespace: kube-system
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          creationTimestamp: null
          name: kubeadm:node-proxier
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: system:node-proxier
        subjects:
        - kind: ServiceAccount
          name: kube-proxy
          namespace: kube-system
[dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: Role
        metadata:
          creationTimestamp: null
          name: kube-proxy
          namespace: kube-system
        rules:
        - apiGroups:
          - ""
          resourceNames:
          - kube-proxy
          resources:
          - configmaps
          verbs:
          - get
[dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: RoleBinding
        metadata:
          creationTimestamp: null
          name: kube-proxy
          namespace: kube-system
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: Role
          name: kube-proxy
        subjects:
        - kind: Group
          name: system:bootstrappers:kubeadm:default-node-token
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/tmp/kubeadm-init-dryrun2269528674/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.11.120:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:c8e42b113cb34eea58600dff1a08707b40add15a60ef5e89bd9336a66b67d562 
[root@M001 ~]# 
[root@M001 ~]#

You normally do not need to understand every line of this output; if you are curious, it is worth skimming to see what kubeadm would do. For our purposes, seeing "Your Kubernetes control-plane has initialized successfully!" means the dry-run check passed.
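
If you ever want to repeat this check, the dry run can be re-executed at any time without changing anything on the node. A minimal sketch, assuming the same kubeadm-init.yml used earlier (the log file name here is just an example):

kubeadm init --config=kubeadm-init.yml --dry-run | tee kubeadm-dryrun.log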

3.4 Start kubelet once the configuration passes the syntax check

[root@M001 ~]# systemctl status kubelet                        
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: disabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Fri 2023-06-16 14:39:53 CST; 5s ago
       Docs: https://kubernetes.io/docs/
    Process: 4608 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
   Main PID: 4608 (code=exited, status=1/FAILURE)
        CPU: 50ms
[root@M001 ~]# systemctl daemon-reload && systemctl restart kubelet
[root@M001 ~]#
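
The "activating (auto-restart)" state above is expected at this point: until kubeadm init has generated /var/lib/kubelet/config.yaml, kubelet exits and systemd keeps restarting it. A small sketch for making sure it starts on boot and for inspecting its logs (standard systemd commands, adjust as needed):

systemctl enable --now kubelet                       # enable at boot and start immediately
journalctl -xeu kubelet --no-pager | tail -n 30      # recent kubelet log lines if it keeps failing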

4. Download the images

List the required images

[root@M001 ~]# kubeadm config images list --config=kubeadm-init.yml
registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.26.0
registry.aliyuncs.com/google_containers/pause:3.9
registry.aliyuncs.com/google_containers/etcd:3.5.6-0
registry.aliyuncs.com/google_containers/coredns:v1.9.3
[root@M001 ~]#

Note: double-check that registry.aliyuncs.com/google_containers/pause:3.9 is exactly the pause image and tag configured earlier in /etc/containerd/config.toml.

>> cat /etc/containerd/config.toml
...
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
...
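
If you prefer to script that change rather than edit the file by hand, a hedged one-liner (it assumes the stock config.toml layout produced by `containerd config default`, where the key is named sandbox_image):

sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml       # verify the change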

Remember to restart containerd after changing the file; if you changed it, do so on every node:

[root@M001 ~]# systemctl restart containerd
[root@M001 ~]#

Pull the images to the local node

[root@M001 ~]# kubeadm config images pull --config=kubeadm-init.yml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.9.3
[root@M001 ~]# 
[root@M001 ~]# 
[root@M001 ~]# kubeadm config images list
I0616 15:04:28.928292    5762 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.26
registry.k8s.io/kube-apiserver:v1.26.6
registry.k8s.io/kube-controller-manager:v1.26.6
registry.k8s.io/kube-scheduler:v1.26.6
registry.k8s.io/kube-proxy:v1.26.6
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
[root@M001 ~]#

You can also specify the image repository directly when listing or pulling:

kubeadm config images list  --image-repository registry.aliyuncs.com/google_containers
kubeadm config images pull  --image-repository registry.aliyuncs.com/google_containers
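
Note that without a version pin, `kubeadm config images list` queries the release channel over the network (that is the "falling back to: stable-1.26" line above). If you want the repository-override commands to resolve exactly the version being deployed, you can add --kubernetes-version; a sketch:

kubeadm config images list --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.26.0
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.26.0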

5. Initialize the cluster with kubeadm

Initialization

Run the initialization and record the output in the kubeadm-init.log log file:

[root@M001 ~]# kubeadm init --config=kubeadm-init.yml | tee kubeadm-init.log
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m001] and IPs [10.96.0.1 192.168.11.120]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost m001] and IPs [192.168.11.120 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost m001] and IPs [192.168.11.120 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.502203 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node m001 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node m001 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.11.120:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:17e3e26bbdae1d507f2b78057c9400acb8195e669f77ce247f3b92df6f1fb369 
[root@M001 ~]#

Here is what the log output above means, going by the bracketed tags at the start of each line (a grep one-liner after the list shows how to pull these phase markers back out of kubeadm-init.log):

  • [init]: declares the action and the Kubernetes version used to initialize the cluster;
  • [preflight]: checks performed before initialization starts;
  • [certs]: generates the various certificates;
  • [kubeconfig]: generates the related kubeconfig files;
  • [kubelet-start]: writes the kubelet configuration file "/var/lib/kubelet/config.yaml" and starts kubelet;
  • [control-plane]: creates the static Pods for apiserver, controller-manager and scheduler from the YAML files in /etc/kubernetes/manifests; together they form the control plane;
  • [bootstrap-token]: generates the bootstrap token; note it down, it is used later by kubeadm join when adding nodes to the cluster;
  • [addons]: installs the essential add-ons CoreDNS and kube-proxy.
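
As mentioned above, a quick way to pull just these phase markers back out of the saved log (plain grep, nothing kubeadm-specific):

grep -E '^\[(init|preflight|certs|kubeconfig|kubelet-start|control-plane|bootstrap-token|addons)\]' kubeadm-init.log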

Using the cluster

The following commands configure a regular user to access the cluster with kubectl (root can run them the same way):

[root@M001 ~]# mkdir -p $HOME/.kube
[root@M001 ~]# ll -a
total 52
dr-xr-x---.  4 root root 4096 Jun 16 15:09 .
dr-xr-xr-x. 18 root root  255 Jun 16 14:38 ..
-rw-------.  1 root root  820 Jun 15 12:32 anaconda-ks.cfg
-rw-------.  1 root root 4156 Jun 16 14:36 .bash_history
-rw-r--r--.  1 root root   18 Aug 10  2021 .bash_logout
-rw-r--r--.  1 root root  141 Aug 10  2021 .bash_profile
-rw-r--r--.  1 root root  739 Jun 16 10:12 .bashrc
-rw-r--r--.  1 root root  100 Aug 10  2021 .cshrc
drwxr-xr-x   2 root root    6 Jun 16 15:09 .kube
-rw-r--r--   1 root root 4846 Jun 16 15:07 kubeadm-init.log
-rw-r--r--   1 root root 1132 Jun 16 14:50 kubeadm-init.yml
-rw-------   1 root root   20 Jun 16 14:51 .lesshst
drwx------.  2 root root    6 Jun 15 17:11 .ssh
-rw-r--r--.  1 root root  129 Aug 10  2021 .tcshrc
[root@M001 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@M001 ~]# ll $HOME/.kube/
total 8
-rw------- 1 root root 5626 Jun 16 15:10 config
[root@M001 ~]# cat $HOME/.kube/config  
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJek1EWXhOakEzTURjeU5Wb1hEVE16TURZeE16QTNNRGN5TlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSnQwCjBtYVF1K2U2bkRta2Q4dFlvcUkwVy9LeDJPZ2NGajEyUjdWQk9vS09Ba2lLTklNbHNxSEJ5SzhaTlNYNHk4WnkKS1NrNzlJdktXbzRUekkzSENML0phb3hhR295Mnlvelh1bDBKYW4wT2phdGcrOFJ6V3pBMUFGd2hTT0ZobzJJaAprUEljclhvQkFzR2lWVFNCYnI1azRGY1A0UUpEejZDdCtmM0oybjExMFJWTFBuQnFzNmljbFVnTFF5N1NBUXBkCkVvMnNObjB5cGFxdDNZd2hPVlpJUVF6S3IxZWUzM0VPc3pXTXhnNC8zellHSml4U0h4OENLN0R0cUV3VnJGR0UKYndmMEs5amZ0ejMwRDlKVGoxcnVNL3hnODhiTnJKK1JXdHplRUhrSFg1UW1tTGhJTzRmRy9jeis2emFsaU9PSQoxaXpObmlUMDVoY1pTUzlOS3Q4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZIcGhtQ1Z6R0FmSlB2Smh4NlRmMFJ4UVRHekdNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQTErd29MRGFsYnVPSGZHZ0czUgpqRTgwZHNxajFVM2cvMUlKaFB4YzN6N0V2ODhNSkxyVlcxait3QUdFckdxRnlTTGx2eG1zRjRIaWJoSXAzV1RrCnVpa1k3a1g0eFk1TzBqa3R2S1FKdHh0N29KLzBETSs5Q3NkOCtVczByZk1OYW8xQURPeERmZk4va2lvb01pTWYKOWhnOG4yZHR0U093Y1NybTFhOVI1Snd3NWFsTjRTTXRJdW5kODZvenR4anNES1hjeFZTMHJDdDJ6UVVJOWpPawphQ0kyNnFUYTlYNG4vYUZ6NENtUmlhMjRkYWxwMXp4UGVnRk4yQUVwQkQ4SWtvRXlLR2JXekk0QnAwMWZNNSthCnVFYnJpRTgxejBFYThiSUJFa3Z6aUs4ZVV0eVRyL2RzdUdyOTZLWW1EQXNndWpkbEVtUHljNDE0QVFMQzlkT3IKTUFRPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.11.120:6443
  name: xml_k8s
contexts:
- context:
    cluster: xml_k8s
    user: kubernetes-admin
  name: kubernetes-admin@xml_k8s
current-context: kubernetes-admin@xml_k8s
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJVkI5TnJONCtSVWd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBMk1UWXdOekEzTWpWYUZ3MHlOREEyTVRVd056QTNNalphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTBJUXFqc2dqVlQ5VnMySHQKQkxyb24zR3FlZVg1UHpjSUdGNWxYbmxPNWErSmZLL2d1SEhvWWZzRjYyNTQyZis5ZGo0NDZtb2g5L1FteGg3UgpGQkt0K3RiTGlBdi8vMXE0Z1QzeWxMbC9odUI4UEFEZmREMUEwblhzRmFjeE9XVGF0ejMrTldWTVdJQXVyaHJ3CjdaSms2MER1ck5BWit2OGJiWkYxRFRxV3p4RTN5azdZZmo5OGVVMG0wSFdLLzBsNkRzMkduVlcrc1FTZlM0aWQKRCtIYjg1eE45a3htNysxRm5ocWFwaUVTTlhETm1PdWhRRXI5NlM2bDJTZU5oN0Z0RUVPbFlQdDNxcVJqUTdFcQpuRnhXZVZXUHcvTWxncTVSYWxIcGRGWFZXN1ExTkhCYzNhVys3MGVveFFGWS9oTEM3RVF1bEU3YldRVGNmcVF6Ckp1QUgxUUlEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JSNllaZ2xjeGdIeVQ3eVljZWszOUVjVUV4cwp4akFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBbU52R3FobW0zRkRxWnBkeVRydHNRRGFZVUN5b0JDZm5ZckJmCmFUbE00bDQ3SWZUc1JOMUxmajBLY29zcXhTbjNuTURuZXUyaEEyaU52TDBkNCs4ZXdIdEY1VE1hY3Y1eHJackcKYlFCcnFoa01oU3IrRzdnUSs1MW8zSEhtZjcxS2xhTUptcnBBWXcvQ25DdThoK2Y3YXNmWnljNk9MLzVxN2FNRgp4TDA5TWhLaXlKVk5HOHg0MjVmbWZqd0ZaUlRxL2JRcnF6NXU2SXQ5Y0pCZXRTNlp4SXdZVzdpUjY2WWwycm9WCmZ0SXNidCtrcnFEVkJtekR3RzJVUXgrUkxSaUNEZUZWbWlUYnMrTWVRTUpzMHkwZC9RbHphSXczTTA5SFlPUzYKOTZCZmV3SzhUOGRyTlFMcnNhVG9GMjRieHV1T0x6OWFiM05LL2hOOStrTXNwRnpsd1E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBMElRcWpzZ2pWVDlWczJIdEJMcm9uM0dxZWVYNVB6Y0lHRjVsWG5sTzVhK0pmSy9nCnVISG9ZZnNGNjI1NDJmKzlkajQ0Nm1vaDkvUW14aDdSRkJLdCt0YkxpQXYvLzFxNGdUM3lsTGwvaHVCOFBBRGYKZEQxQTBuWHNGYWN4T1dUYXR6MytOV1ZNV0lBdXJocnc3WkprNjBEdXJOQVordjhiYlpGMURUcVd6eEUzeWs3WQpmajk4ZVUwbTBIV0svMGw2RHMyR25WVytzUVNmUzRpZEQrSGI4NXhOOWt4bTcrMUZuaHFhcGlFU05YRE5tT3VoClFFcjk2UzZsMlNlTmg3RnRFRU9sWVB0M3FxUmpRN0VxbkZ4V2VWV1B3L01sZ3E1UmFsSHBkRlhWVzdRMU5IQmMKM2FXKzcwZW94UUZZL2hMQzdFUXVsRTdiV1FUY2ZxUXpKdUFIMVFJREFRQUJBb0lCQUFiRzVPTXpHZ0xoUmhYSQpidjJpWFlFaEhwdExvQ2d2ejdHTEQ5eGNNUFpDR0VQWEs0U1RIeXhnRGpjeXBmYmYydmFHMnk2ek9GdG9zZ0hxCmFuMHVoajBLMGg2ZjFUZ2xhSzI2cDdHeHZiVlNnbmNveUJwdEN6aEw0TnByVHF4QTNPTHJ2dUZaWTN2VTNxK0YKN0tLc0NWK2tBcDNYUGFEc3ZhVjMvc2ZwU1pGOVBNQWFkT3lVVEhERU9PNFVtRWo3WGlQV3M2czcwSll6V2ErbgpONVdML1pENU4yQ05NelJwQjRtYXFGOEVZVFFic1ZtekEzT0J2Y1Y3V042ZXRvc1o0SEFUZitFS0QzTmp4S3RGCm01RUJhTWc2RWp4OXZTNUtrdFZ1N29DR0VHT3FGRnZTSThWZG9iWEdLSnFWWTBoMmZqdTR6akcrRC9DYTFvOUkKTVBtVW1vRUNnWUVBNlVVZGpLZ1prMFEzQ1d1a29IcTljZkVRa2NCazVsRmdVN0kraXRrbHRISXBTN2NucDEyeQp2YUgyWmR3MHZEVVdDNWJ2clZMbDJJNDN3M09ZaDdVOElTRDZpTUJnOVB3VElzWjZydWdZMllwN1oyc3pvaWZhCnlQeVV1bmJjdStnbDFCbEJYc1ZyblBLd09sUkJZKzkrZjRRRFVaNWI1YU5wZHZiQWRWZzM4TVVDZ1lFQTVOV1MKdytaWXYzNlB5QVlxU3JnSGN5MjkraWk0bm5EMDFiR3k5M0MzZEJIU0o4WHN4U1dNL0gydUwweVk1aXNJblUybwpwUVdIbXp1VXhOV2Z0WThhUCt2dGlMdEs0eU43MmZiUFF1YmRtQ09oMnFORWV2QWhDZjJEQlRydzcrWW9zcHNBCjQ3WjNUSnJHUWx6TXRwUEdMWUhaUlhWUk9hZlA1aDRJdW5nc0M5RUNnWUFLUDFZSCtzNTgwSzlXUTV0TXpYZUwKRE5yOGZDWXlrL3FXVXFzNnNFVmV3dkViZVdWTmpla3ZPWEU0a2s3aXdiWkJOaFU3V1B2dDRubUNwWTVhejlSZgpaREo5Vlc0czlQSG1RaS9iaFNpcVRkSVQvZnFic2dLRGQ4MFV6K25zZTB0R0lRSGxKdWtPVVA3NjRQNnFaRGY3ClpCSTlRS2FxMU4zcU12YkxjTitzUFFLQmdGcWI3cmMvR0ZrSzVpZlB1U05JNXpwR0hIbFhjbkxhU3pmcVd2RDcKWXNqRTNhLytBUmkzRzdKR21aZ0UvbTMxRUQ0cEorUGY0cWdtMld0dkl3UWpHOE8veVpoZ2dQQ2Jka2tDSDJOZwpRdEloR2MrVzRtRERnSEdTUGpUdDk4VW1IMnRKVVByWm56ZG4rYVVCVmRYZGdaVTNXeTdUbTB2M0ZLMExxMjBhCmRHWmhBb0dBSUF**k04MHpPak5DM1BTQm9QSy9YVnhjTUpRaW94V3FUaC84S3Q0WXBUTXNQdjJXTmVVYkl1MGMKUVFrTTJaMk9tQStpTHE4TEtRa2M4Nm1WVk54N0dwV0dnZnFybVJJcHVjT2VVWlpRZFFQZG9oSzgwMG9YZDE4agpnWkY1VDZNb05XT0NQTlBXUEVualFrYUdscUZEQW4yUG1palRBOHZza1Fzc3ByVS9mVWs9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
[root@M001 ~]# 
[root@M001 ~]# chown $(id -u):$(id -g) $HOME/.kube/config     #not needed for root
[root@M001 ~]# ll $HOME/.kube/        
total 8
-rw------- 1 root root 5626 Jun 16 15:10 config
[root@M001 ~]#

The init output (log) also gives the join command; run the following on each Node to join it to the cluster:

>> kubeadm join 192.168.11.120:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:17e3e26bbdae1d507f2b78057c9400acb8195e669f77ce247f3b92df6f1fb369
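
If you lose this output or the token expires (bootstrap tokens are valid for 24 hours by default), you can generate a fresh join command on the master at any time:

kubeadm token create --print-join-command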

Check the cluster

Check whether the cluster components are healthy:

[root@M001 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
[root@M001 ~]#

The cluster is healthy. If something goes wrong, you can reset with kubeadm reset, reboot the host, and run the initialization again.
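
A hedged cleanup sketch for that reset path (the extra removals cover the kubeconfig copy and CNI config that kubeadm reset warns it does not touch; adjust to your environment):

kubeadm reset -f                        # tear down what kubeadm init created
rm -rf $HOME/.kube /etc/cni/net.d       # remove the admin kubeconfig copy and CNI configs
systemctl restart containerd kubelet    # restart the runtime and node agent
# optionally reboot, then run kubeadm init again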

Download flannel

Deploy the container network, i.e. a CNI plugin (run this on the master; well-known options include flannel, calico, canal and kube-router; a simple, easy-to-use choice is the flannel project originally from CoreOS). We use flannel here.

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Since the flannel manifest can be hard to download directly from where it is hosted, the full file is reproduced below (it is fairly long). Note that the "Network" value in net-conf.json must be the same as the podSubnet set earlier in kubeadm-init.yml.

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.112.0.0/12",    #这里和前面kubeadm-init.yml中的podSubnet设置成一样的
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
       #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.21.5
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.21.5
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

Apply flannel

[root@M001 ~]# kubectl get cs 
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-0               Healthy   {"health":"true","reason":""}   
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
[root@M001 ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   13m
kube-node-lease   Active   13m
kube-public       Active   13m
kube-system       Active   13m
[root@M001 ~]# 

[root@M001 ~]# kubectl apply -f  kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@M001 ~]#
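
You can watch the DaemonSet roll out before moving on (standard kubectl commands):

kubectl -n kube-flannel rollout status ds/kube-flannel-ds
kubectl -n kube-flannel get pods -w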

Check flannel

flannel creates its own namespace:

[root@M001 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
scheduler            Healthy   ok                              
[root@M001 ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   14m
kube-flannel      Active   24s
kube-node-lease   Active   14m
kube-public       Active   14m
kube-system       Active   14m
[root@M001 ~]#

View the Pods created by flannel

[root@M001 ~]# kubectl get pods -n kube-flannel
NAME                    READY   STATUS                  RESTARTS   AGE
kube-flannel-ds-g94hc   0/1     Init:ImagePullBackOff   0          3m12s
[root@M001 ~]#
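
Init:ImagePullBackOff here just means the node has not yet managed to pull the flannel images from docker.io. A troubleshooting sketch (kubectl describe plus a manual pull with crictl, assuming crictl on that node is pointed at the containerd socket):

kubectl -n kube-flannel describe pod kube-flannel-ds-g94hc | tail -n 20    # see the image pull events
crictl pull docker.io/flannel/flannel-cni-plugin:v1.1.2                    # try the pull by hand on the node
crictl pull docker.io/flannel/flannel:v0.21.5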

View the status of the cluster nodes

[root@M001 ~]# kubectl get nodes
NAME   STATUS     ROLES           AGE   VERSION
m001   NotReady   control-plane   18m   v1.26.0
[root@M001 ~]#

Join worker nodes to the cluster

This part only needs to be executed on the worker nodes (N001 and N002 in my setup).

Install the packages

Worker nodes do not strictly need the kubectl client tool (although, as the output below shows, yum may pull it in as a dependency anyway).

#configure the yum repo first
[root@N001 ~]# touch /etc/yum.repos.d/kubernetes.repo
[root@N001 ~]# vi /etc/yum.repos.d/kubernetes.repo
[root@N001 ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg


[root@N001 ~]# yum install -y kubeadm-1.26.0 kubelet-1.26.0
...
Installed:
  conntrack-tools-1.4.7-2.el9.x86_64       cri-tools-1.26.0-0.x86_64        kubeadm-1.26.0-0.x86_64                      kubectl-1.27.3-0.x86_64                      
  kubelet-1.26.0-0.x86_64                  kubernetes-cni-1.2.0-0.x86_64    libnetfilter_cthelper-1.0.0-22.el9.x86_64    libnetfilter_cttimeout-1.0.0-19.el9.x86_64   
  libnetfilter_queue-1.0.5-1.el9.x86_64    socat-1.7.4.1-5.el9.x86_64      

Complete!
[root@N001 ~]#

#do the same on N002

kubeadm is the cluster management tool; on a worker node it is what joins the node to the cluster.

kubelet is the node agent that actually runs the containers and Pods on the node.

kubectl can of course also be installed on worker nodes if you want to interact with the Kubernetes cluster from them.

Join the cluster

Tip: before joining, make sure /etc/containerd/config.toml on the node has been modified correctly; the simplest way is to copy the config file from the k8s-master node.

[root@N001 ~]# scp root@M001:/etc/containerd/config.toml /etc/containerd/config.toml 
The authenticity of host 'm001 (192.168.11.120)' can't be established.
ED25519 key fingerprint is SHA256:xM90cqHrfIvcX7H4tDRvod1d118c5JRF1x3ob+kViXo.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'm001' (ED25519) to the list of known hosts.
root@m001's password: 
config.toml                                                                                                                          100% 7070     4.8MB/s   00:00    
[root@N001 ~]#
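
Alternatively, you can push the file from the master to every worker in one go; a sketch, assuming the master can resolve N001/N002 and has root SSH access to them:

for node in N001 N002; do
  scp /etc/containerd/config.toml root@$node:/etc/containerd/config.toml
  ssh root@$node 'systemctl restart containerd'
done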

Once the config is in place, restart the service:

systemctl restart containerd

Enable kubelet to start on boot:

systemctl status kubelet
systemctl enable kubelet
systemctl restart kubelet

Tip: if you skip this, joining the cluster produces the following warning:

[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'

Join the cluster; the exact command is printed in the log once initialization on the k8s-master node succeeds.

[root@N001 ~]# kubeadm join 192.168.11.120:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:17e3e26bbdae1d507f2b78057c9400acb8195e669f77ce247f3b92df6f1fb369
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@N001 ~]# 

#check the node list on M001
[root@M001 ~]# kubectl get nodes
NAME   STATUS     ROLES           AGE   VERSION
m001   Ready      control-plane   48m   v1.26.0
n001   NotReady   <none>          47s   v1.26.0
[root@M001 ~]# 


#run the same join-cluster operation on N002
<omitted>


#check the node list on M001 again
[root@M001 ~]# kubectl get nodes
NAME   STATUS     ROLES           AGE    VERSION
m001   Ready      control-plane   51m    v1.26.0
n001   NotReady   <none>          4m1s   v1.26.0
n002   NotReady   <none>          51s    v1.26.0
[root@M001 ~]#

Output like the above means the node joined the cluster successfully. Tip: if the join fails, you can reset the node with kubeadm reset and try again.
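
Optionally, you can give the workers a role label so `kubectl get nodes` shows something other than <none> in the ROLES column (purely cosmetic; the key uses the standard node-role prefix):

kubectl label node n001 node-role.kubernetes.io/worker=
kubectl label node n002 node-role.kubernetes.io/worker=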

Inspect the cluster

View node information on M001:

[root@M001 ~]# kubectl get nodes -o wide
NAME   STATUS     ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE          KERNEL-VERSION          CONTAINER-RUNTIME
m001   Ready      control-plane   51m     v1.26.0   192.168.11.120   <none>        CentOS Stream 9   5.14.0-325.el9.x86_64   containerd://1.6.21
n001   NotReady   <none>          4m31s   v1.26.0   192.168.11.121   <none>        CentOS Stream 9   5.14.0-171.el9.x86_64   containerd://1.6.21
n002   NotReady   <none>          81s     v1.26.0   192.168.11.122   <none>        CentOS Stream 9   5.14.0-171.el9.x86_64   containerd://1.6.21
[root@M001 ~]#

The worker nodes have now been joined.

Note: why NotReady? A node stays NotReady until the flannel CNI pod on it has pulled its images and started; on slow or resource-starved VMs like mine this takes a few minutes, after which the status flips to Ready on its own.
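
You can watch the transition instead of polling by hand; the second command (node n001 as an example, jsonpath filter as supported by kubectl) prints the message of the Ready condition, which explains why the node is still NotReady:

kubectl get nodes -w
kubectl get node n001 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}{"\n"}'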

Cluster status

These commands can be run on any node that has kubectl installed, usually the master.
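
For example, to run them from a worker that has kubectl installed, you can copy the admin kubeconfig over; a sketch only, since admin.conf grants full admin rights and in a real environment you would hand out a more restricted kubeconfig:

# run on the worker
mkdir -p ~/.kube
scp root@192.168.11.120:/etc/kubernetes/admin.conf ~/.kube/config
kubectl get nodes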

Cluster node status

[root@M001 ~]# kubectl get nodes -o wide
NAME   STATUS     ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE          KERNEL-VERSION          CONTAINER-RUNTIME
m001   Ready      control-plane   51m     v1.26.0   192.168.11.120   <none>        CentOS Stream 9   5.14.0-325.el9.x86_64   containerd://1.6.21
n001   Ready      <none>          4m31s   v1.26.0   192.168.11.121   <none>        CentOS Stream 9   5.14.0-171.el9.x86_64   containerd://1.6.21
n002   Ready      <none>          81s     v1.26.0   192.168.11.122   <none>        CentOS Stream 9   5.14.0-171.el9.x86_64   containerd://1.6.21
[root@M001 ~]#

Status of all Pods in the cluster

[root@M001 ~]# kubectl get po -o wide -A
NAMESPACE      NAME                           READY   STATUS                  RESTARTS        AGE   IP               NODE   NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-5nzr9          0/1     Init:ImagePullBackOff   1               13m   192.168.11.121   n001   <none>           <none>
kube-flannel   kube-flannel-ds-g94hc          0/1     CrashLoopBackOff        8 (4m49s ago)   46m   192.168.11.120   m001   <none>           <none>
kube-flannel   kube-flannel-ds-jmc5d          0/1     CrashLoopBackOff        6 (3m27s ago)   10m   192.168.11.122   n002   <none>           <none>
kube-system    coredns-5bbd96d687-c948c       0/1     ContainerCreating       0               61m   <none>           m001   <none>           <none>
kube-system    coredns-5bbd96d687-tpqs6       0/1     ContainerCreating       0               61m   <none>           m001   <none>           <none>
kube-system    etcd-m001                      1/1     Running                 0               61m   192.168.11.120   m001   <none>           <none>
kube-system    kube-apiserver-m001            1/1     Running                 0               61m   192.168.11.120   m001   <none>           <none>
kube-system    kube-controller-manager-m001   1/1     Running                 0               61m   192.168.11.120   m001   <none>           <none>
kube-system    kube-proxy-5q92t               1/1     Running                 1 (92s ago)     13m   192.168.11.121   n001   <none>           <none>
kube-system    kube-proxy-b245t               1/1     Running                 0               10m   192.168.11.122   n002   <none>           <none>
kube-system    kube-proxy-khc5n               1/1     Running                 0               61m   192.168.11.120   m001   <none>           <none>
kube-system    kube-scheduler-m001            1/1     Running                 0               61m   192.168.11.120   m001   <none>           <none>
[root@M001 ~]#

Status of all Services in the cluster

[root@M001 ~]# kubectl get service -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  62m
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   62m
[root@M001 ~]#

Status of all ConfigMaps in the cluster

[root@M001 ~]# kubectl get configmaps -A
NAMESPACE         NAME                                 DATA   AGE
default           kube-root-ca.crt                     1      62m
kube-flannel      kube-flannel-cfg                     2      48m
kube-flannel      kube-root-ca.crt                     1      48m
kube-node-lease   kube-root-ca.crt                     1      62m
kube-public       cluster-info                         2      63m
kube-public       kube-root-ca.crt                     1      62m
kube-system       coredns                              1      63m
kube-system       extension-apiserver-authentication   6      63m
kube-system       kube-proxy                           2      63m
kube-system       kube-root-ca.crt                     1      62m
kube-system       kubeadm-config                       1      63m
kube-system       kubelet-config                       1      63m
[root@M001 ~]#
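
As a final smoke test (optional, not part of the steps above), you can run a throwaway nginx Deployment and a ClusterIP Service to confirm that scheduling and service creation work end to end, then clean it up:

kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80
kubectl get pods,svc -o wide
kubectl delete service/nginx-test deployment/nginx-test   # clean up afterwards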



Everything above was recorded from an actual test environment. You may well hit problems I did not; search actively online and they can all be worked out.

This article was prepared in a hurry and may contain typos, unclear wording, or even mistakes. If you spot anything wrong, you are sincerely welcome to leave a comment and I will do my best to fix it.

If you enjoyed this article, please like, bookmark, and share. Thank you!