1 What you need to know

  • The preparation steps are the same as in previous versions; if you only need the calico network plugin, jump straight to section 4.4
  • Operating system: CentOS Stream release 8, minimal installation, with a static IP configured
  • Installation sources can be downloaded from the Aliyun open source mirror site or any other mirror
  • The environment uses 3 virtual machines, each with at least 2 vCPUs and a single NIC; the NIC is configured for NAT networking and must be able to reach the external network
  • The way the Kubernetes yum repository is configured has changed slightly; the script has already been updated accordingly, so just download and use it
  • This setup matches the latest calico network plugin, v3.28.0; for detailed requirements and the configuration procedure see the official Install Calico documentation
  • Get the script and the calico offline images: calico 3.28.0 download | calico 3.27.3 download

Kubernetes requirements

Supported versions

We test Calico v3.28 against the following Kubernetes versions. Other versions may work, but we are not actively testing them.

  • v1.27
  • v1.28
  • v1.29
  • v1.30

2 Environment planning

Hostname   IP                Gateway/DNS      CPU/Memory   Disk   Role                 Notes
kmaster    192.168.100.143   192.168.100.2    2C / 8G      100G   control-plane node
knode1     192.168.100.144   192.168.100.2    2C / 8G      100G   worker node 1
knode2     192.168.100.145   192.168.100.2    2C / 8G      100G   worker node 2

3 System environment configuration

On all three nodes, configure the hostname and IP address first; the rest of the system configuration is done by running the script on each node. Check the yum repository settings and the NIC name inside the script and adjust them to your environment (a sketch of the typical steps such a script performs is included at the end of this section).

[root@kmaster ~]# sh Stream8-k8s-v1.30.0.sh
[root@knode1 ~]# sh Stream8-k8s-v1.30.0.sh
[root@knode2 ~]# sh Stream8-k8s-v1.30.0.sh

*** Excerpt of the kmaster output ***
  Installing       : kubeadm-1.30.0-150500.1.1.x86_64                                                                9/10
  Installing       : kubectl-1.30.0-150500.1.1.x86_64                                                               10/10
  Running scriptlet: kubectl-1.30.0-150500.1.1.x86_64                                                               10/10
  Verifying        : conntrack-tools-1.4.4-11.el8.x86_64                                                             1/10
  Verifying        : libnetfilter_cthelper-1.0.0-15.el8.x86_64                                                       2/10
  Verifying        : libnetfilter_cttimeout-1.0.0-11.el8.x86_64                                                      3/10
  Verifying        : libnetfilter_queue-1.0.4-3.el8.x86_64                                                           4/10
  Verifying        : socat-1.7.4.1-1.el8.x86_64                                                                      5/10
  Verifying        : cri-tools-1.30.0-150500.1.1.x86_64                                                              6/10
  Verifying        : kubeadm-1.30.0-150500.1.1.x86_64                                                                7/10
  Verifying        : kubectl-1.30.0-150500.1.1.x86_64                                                                8/10
  Verifying        : kubelet-1.30.0-150500.1.1.x86_64                                                                9/10
  Verifying        : kubernetes-cni-1.4.0-150500.1.1.x86_64                                                         10/10

Installed:
  conntrack-tools-1.4.4-11.el8.x86_64                         cri-tools-1.30.0-150500.1.1.x86_64
  kubeadm-1.30.0-150500.1.1.x86_64                            kubectl-1.30.0-150500.1.1.x86_64
  kubelet-1.30.0-150500.1.1.x86_64                            kubernetes-cni-1.4.0-150500.1.1.x86_64
  libnetfilter_cthelper-1.0.0-15.el8.x86_64                   libnetfilter_cttimeout-1.0.0-11.el8.x86_64
  libnetfilter_queue-1.0.4-3.el8.x86_64                       socat-1.7.4.1-1.el8.x86_64

Complete!
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
10 configuration successful ^_^
Congratulations ! The basic configuration has been completed

*** Output on knode1 and knode2 is the same as on kmaster ***
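
As mentioned above, here is a minimal sketch of the kind of steps a preparation script like Stream8-k8s-v1.30.0.sh typically performs on CentOS Stream 8. It is illustrative only; the actual script is not reproduced in this article, and the repository URLs and package versions below are assumptions to adapt to your environment.

#!/bin/bash
# illustrative sketch only -- adapt repo URLs, versions and NIC-related settings

# 1. disable swap, SELinux and firewalld (kubeadm preflight requirements)
swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
setenforce 0; sed -ri 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
systemctl disable --now firewalld

# 2. kernel modules and sysctl parameters required for pod networking
cat > /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay && modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system

# 3. containerd runtime with the systemd cgroup driver
dnf install -y containerd.io          # assumes a docker-ce (or equivalent) repo is already configured
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl enable --now containerd

# 4. Kubernetes repository (new pkgs.k8s.io layout) and the v1.30.0 packages
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
EOF
dnf install -y kubelet-1.30.0 kubeadm-1.30.0 kubectl-1.30.0
systemctl enable --now kubelet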

4 Cluster setup

4.1 Initialize the cluster (master node only)

Copy the last command from the script (command number 11) and run it separately, on the kmaster node only:

[root@kmaster ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.30.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.30.0
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0706 22:43:36.436955    4278 checks.go:844] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.100.143]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.100.143 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.100.143 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.002128348s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 6.502687903s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kmaster as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kmaster as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: wkrz29.kbl8n72o7aj6qmf3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.143:6443 --token wkrz29.kbl8n72o7aj6qmf3 \
        --discovery-token-ca-cert-hash sha256:58fbfb3e1554b5fee794a2892eb33298cd2dfe86748d3555e8f93a9da2b4413a 
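
Two of the preflight warnings above are harmless, but they can be cleared on every node if desired. The commands below are a sketch that assumes containerd's default config file path and the aliyuncs pause image shown in the warning:

# the "tc not found" warning: install the traffic-control tool
dnf install -y iproute-tc
# the sandbox image warning: point containerd at the pause image kubeadm recommends
sed -i 's#google_containers/pause:3.6#google_containers/pause:3.9#' /etc/containerd/config.toml
systemctl restart containerd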

4.2 Configure environment variables (master node only)

[root@kmaster ~]# mkdir -p $HOME/.kube
[root@kmaster ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@kmaster ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@kmaster ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
[root@kmaster ~]# source /etc/profile

[root@kmaster ~]# kubectl get no
NAME      STATUS     ROLES           AGE     VERSION
kmaster   NotReady   control-plane   2m43s   v1.30.0
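
The node shows NotReady at this point because no pod network add-on has been installed yet; it only turns Ready after calico is deployed in section 4.6. A quick way to confirm this (command only, output omitted) is to look at the kube-system pods, where CoreDNS stays Pending until a CNI plugin is available:

# CoreDNS pods remain Pending until a CNI plugin such as calico is installed
kubectl get pod -n kube-system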

4.3 Join the worker nodes to the cluster (knode1 and knode2)

Copy the kubeadm join command generated at the end of section 4.1 and run it on each of the two worker nodes (see the note below if the token has expired).
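
If more than 24 hours have passed since kubeadm init, the bootstrap token will have expired. In that case a fresh join command can be printed on the master node:

# generate a new token and print the matching kubeadm join command
kubeadm token create --print-join-command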

[root@knode1 ~]# kubeadm join 192.168.100.143:6443 --token wkrz29.kbl8n72o7aj6qmf3 \
>         --discovery-token-ca-cert-hash sha256:58fbfb3e1554b5fee794a2892eb33298cd2dfe86748d3555e8f93a9da2b4413a
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.049215ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@knode2 ~]# kubeadm join 192.168.100.143:6443 --token wkrz29.kbl8n72o7aj6qmf3 \
>         --discovery-token-ca-cert-hash sha256:58fbfb3e1554b5fee794a2892eb33298cd2dfe86748d3555e8f93a9da2b4413a
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001747386s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@kmaster ~]# kubectl get no
NAME      STATUS     ROLES           AGE    VERSION
kmaster   NotReady   control-plane   4m8s   v1.30.0
knode1    NotReady   <none>          47s    v1.30.0
knode2    NotReady   <none>          36s    v1.30.0

4.4 Upload the calico images (all nodes)

Upload the offline calico image tarballs to each of the three nodes, for example with scp as sketched below.
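
One way to distribute them, assuming the tarballs sit in a local calico/ directory and using the node IPs from section 2, is plain scp:

# copy the offline image directory to every node (paths are examples)
scp -r calico/ root@192.168.100.143:/root/
scp -r calico/ root@192.168.100.144:/root/
scp -r calico/ root@192.168.100.145:/root/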

[root@kmaster calico]# ls
apiserver-v3.28.0.tar  kube-controllers-v3.28.0.tar       operator-v1.34.0.tar
cni-v3.28.0.tar        node-driver-registrar-v3.28.0.tar  pod2daemon-flexvol-v3.28.0.tar
csi-v3.28.0.tar        node-v3.28.0.tar                   typha-v3.28.0.tar

[root@knode1 calico]# ls
apiserver-v3.28.0.tar  kube-controllers-v3.28.0.tar       operator-v1.34.0.tar
cni-v3.28.0.tar        node-driver-registrar-v3.28.0.tar  pod2daemon-flexvol-v3.28.0.tar
csi-v3.28.0.tar        node-v3.28.0.tar                   typha-v3.28.0.tar

[root@knode2 calico]# ls
apiserver-v3.28.0.tar  kube-controllers-v3.28.0.tar       operator-v1.34.0.tar
cni-v3.28.0.tar        node-driver-registrar-v3.28.0.tar  pod2daemon-flexvol-v3.28.0.tar
csi-v3.28.0.tar        node-v3.28.0.tar                   typha-v3.28.0.tar

4.5 Import the calico images (all nodes)

All images listed by crictl come from containerd's underlying k8s.io namespace, so use ctr with -n k8s.io to import the tarballs into that namespace.
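
The tarballs are imported one by one below; on each node the same result can be had with a single loop (a sketch, assuming the tarballs are in the current directory):

# import every calico tarball into containerd's k8s.io namespace
for tar in ./*.tar; do
    ctr -n k8s.io image import "$tar"
done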

  • Import on kmaster
[root@kmaster calico]# crictl images
IMAGE                                                             TAG                 IMAGE ID            SIZE
registry.aliyuncs.com/google_containers/coredns                   v1.11.1             cbb01a7bd410d       18.2MB
registry.aliyuncs.com/google_containers/etcd                      3.5.12-0            3861cfcd7c04c       57.2MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.30.0             c42f13656d0b2       32.7MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.30.0             c7aad43836fa5       31MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.30.0             a0bf559e280cf       29MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.30.0             259c8277fcbbc       19.2MB
registry.aliyuncs.com/google_containers/pause                     3.6                 6270bb605e12e       302kB
registry.aliyuncs.com/google_containers/pause                     3.9                 e6f1816883972       322kB

# run the imports
ctr -n k8s.io image import apiserver-v3.28.0.tar
ctr -n k8s.io image import cni-v3.28.0.tar
ctr -n k8s.io image import csi-v3.28.0.tar
ctr -n k8s.io image import kube-controllers-v3.28.0.tar
ctr -n k8s.io image import node-driver-registrar-v3.28.0.tar
ctr -n k8s.io image import node-v3.28.0.tar
ctr -n k8s.io image import operator-v1.34.0.tar
ctr -n k8s.io image import pod2daemon-flexvol-v3.28.0.tar
ctr -n k8s.io image import typha-v3.28.0.tar

[root@kmaster calico]# crictl images
IMAGE                                                             TAG                 IMAGE ID            SIZE
docker.io/calico/apiserver                                        v3.28.0             6c07591fd1cfa       97.9MB
docker.io/calico/cni                                              v3.28.0             107014d9f4c89       209MB
docker.io/calico/csi                                              v3.28.0             1a094aeaf1521       18.3MB
docker.io/calico/kube-controllers                                 v3.28.0             428d92b022539       79.2MB
docker.io/calico/node-driver-registrar                            v3.28.0             0f80feca743f4       23.5MB
docker.io/calico/node                                             v3.28.0             4e42b6f329bc1       355MB
docker.io/calico/pod2daemon-flexvol                               v3.28.0             587b28ecfc62e       13.4MB
docker.io/calico/typha                                            v3.28.0             a9372c0f51b54       71.2MB
quay.io/tigera/operator                                           v1.34.0             01249e32d0f6f       73.7MB
registry.aliyuncs.com/google_containers/coredns                   v1.11.1             cbb01a7bd410d       18.2MB
registry.aliyuncs.com/google_containers/etcd                      3.5.12-0            3861cfcd7c04c       57.2MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.30.0             c42f13656d0b2       32.7MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.30.0             c7aad43836fa5       31MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.30.0             a0bf559e280cf       29MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.30.0             259c8277fcbbc       19.2MB
registry.aliyuncs.com/google_containers/pause                     3.6                 6270bb605e12e       302kB
registry.aliyuncs.com/google_containers/pause                     3.9                 e6f1816883972       322kB
  • Import on knode1
[root@knode1 calico]# crictl images
IMAGE                                                TAG                 IMAGE ID            SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.30.0             a0bf559e280cf       29MB
registry.aliyuncs.com/google_containers/pause        3.6                 6270bb605e12e       302kB

# run the imports
ctr -n k8s.io image import apiserver-v3.28.0.tar
ctr -n k8s.io image import cni-v3.28.0.tar
ctr -n k8s.io image import csi-v3.28.0.tar
ctr -n k8s.io image import kube-controllers-v3.28.0.tar
ctr -n k8s.io image import node-driver-registrar-v3.28.0.tar
ctr -n k8s.io image import node-v3.28.0.tar
ctr -n k8s.io image import operator-v1.34.0.tar
ctr -n k8s.io image import pod2daemon-flexvol-v3.28.0.tar
ctr -n k8s.io image import typha-v3.28.0.tar

[root@knode1 calico]# crictl images
IMAGE                                                TAG                 IMAGE ID            SIZE
docker.io/calico/apiserver                           v3.28.0             6c07591fd1cfa       97.9MB
docker.io/calico/cni                                 v3.28.0             107014d9f4c89       209MB
docker.io/calico/csi                                 v3.28.0             1a094aeaf1521       18.3MB
docker.io/calico/kube-controllers                    v3.28.0             428d92b022539       79.2MB
docker.io/calico/node-driver-registrar               v3.28.0             0f80feca743f4       23.5MB
docker.io/calico/node                                v3.28.0             4e42b6f329bc1       355MB
docker.io/calico/pod2daemon-flexvol                  v3.28.0             587b28ecfc62e       13.4MB
docker.io/calico/typha                               v3.28.0             a9372c0f51b54       71.2MB
quay.io/tigera/operator                              v1.34.0             01249e32d0f6f       73.7MB
registry.aliyuncs.com/google_containers/kube-proxy   v1.30.0             a0bf559e280cf       29MB
registry.aliyuncs.com/google_containers/pause        3.6                 6270bb605e12e       302kB
  • Import on knode2
[root@knode2 calico]# crictl images
IMAGE                                                TAG                 IMAGE ID            SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.30.0             a0bf559e280cf       29MB
registry.aliyuncs.com/google_containers/pause        3.6                 6270bb605e12e       302kB

# run the imports
ctr -n k8s.io image import apiserver-v3.28.0.tar
ctr -n k8s.io image import cni-v3.28.0.tar
ctr -n k8s.io image import csi-v3.28.0.tar
ctr -n k8s.io image import kube-controllers-v3.28.0.tar
ctr -n k8s.io image import node-driver-registrar-v3.28.0.tar
ctr -n k8s.io image import node-v3.28.0.tar
ctr -n k8s.io image import operator-v1.34.0.tar
ctr -n k8s.io image import pod2daemon-flexvol-v3.28.0.tar
ctr -n k8s.io image import typha-v3.28.0.tar

[root@knode2 calico]# crictl images
IMAGE                                                TAG                 IMAGE ID            SIZE
docker.io/calico/apiserver                           v3.28.0             6c07591fd1cfa       97.9MB
docker.io/calico/cni                                 v3.28.0             107014d9f4c89       209MB
docker.io/calico/csi                                 v3.28.0             1a094aeaf1521       18.3MB
docker.io/calico/kube-controllers                    v3.28.0             428d92b022539       79.2MB
docker.io/calico/node-driver-registrar               v3.28.0             0f80feca743f4       23.5MB
docker.io/calico/node                                v3.28.0             4e42b6f329bc1       355MB
docker.io/calico/pod2daemon-flexvol                  v3.28.0             587b28ecfc62e       13.4MB
docker.io/calico/typha                               v3.28.0             a9372c0f51b54       71.2MB
quay.io/tigera/operator                              v1.34.0             01249e32d0f6f       73.7MB
registry.aliyuncs.com/google_containers/kube-proxy   v1.30.0             a0bf559e280cf       29MB
registry.aliyuncs.com/google_containers/pause        3.6                 6270bb605e12e       302kB

4.6 Deploy calico (master node only)

Upload the two yaml files to the master node and apply them in order (tigera-operator first, then custom-resources); see the CIDR check after the file listing below.

[root@kmaster calico]# ls *.yaml
custom-resources-v3.28.0.yaml  tigera-operator-v3.28.0.yaml
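
Before applying custom-resources-v3.28.0.yaml, it is worth making sure the pod CIDR in its Installation resource matches the --pod-network-cidr used at kubeadm init. The upstream default in Calico's sample file is 192.168.0.0/16; the file bundled with this article may already have been adjusted, so check first:

# verify (and if needed fix) the ipPools cidr so it matches 10.244.0.0/16
grep -n 'cidr' custom-resources-v3.28.0.yaml
sed -i 's#192.168.0.0/16#10.244.0.0/16#' custom-resources-v3.28.0.yaml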

[root@kmaster calico]# kubectl create -f tigera-operator-v3.28.0.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created

[root@kmaster calico]# kubectl create -f custom-resources-v3.28.0.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created

[root@kmaster calico]# kubectl get pod -n calico-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-56fd574ff-xnjfm   1/1     Running   0          70s
calico-node-52xjm                         1/1     Running   0          70s
calico-node-mvddg                         1/1     Running   0          70s
calico-node-thn86                         1/1     Running   0          70s
calico-typha-8799df89c-bfhj6              1/1     Running   0          65s
calico-typha-8799df89c-nxvt2              1/1     Running   0          70s
csi-node-driver-9b275                     2/2     Running   0          70s
csi-node-driver-ghknb                     2/2     Running   0          70s
csi-node-driver-lgrt8                     2/2     Running   0          70s

[root@kmaster calico]# kubectl get no
NAME      STATUS   ROLES           AGE   VERSION
kmaster   Ready    control-plane   29m   v1.30.0
knode1    Ready    <none>          26m   v1.30.0
knode2    Ready    <none>          26m   v1.30.0
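
As an optional final check, the operator's own status view and the calico API server pods can be inspected as well (commands only, output omitted):

# operator-reported health of the calico components
kubectl get tigerastatus
# the calico API server runs in its own namespace
kubectl get pod -n calico-apiserver
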
  • END