Configure the virtual machine nodes

Using CentOS Stream 8 as the example, create a virtual machine and clone it into three VMs: kmaster (control-plane node), knode1 (worker node 1), and knode2 (worker node 2).

Host plan

Hostname    IP address         Gateway/DNS      CPU/Memory     Disk
kmaster     192.168.145.141    192.168.145.2    2 vCPU/4 GB    100 GB
knode1      192.168.145.142    192.168.145.2    2 vCPU/6 GB    100 GB
knode2      192.168.145.143    192.168.145.2    2 vCPU/6 GB    100 GB

Check the gateway of your own NAT virtual network: open the Virtual Network Editor to see your gateway and subnet. Both can be changed, but here we keep the default subnet and gateway and use the gateway address as the DNS as well.
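If you later want to double-check the gateway and DNS from inside a VM, these two commands show the active default route and resolver (just a sanity check, not a required step):

[root@kmaster ~]# ip route show default
[root@kmaster ~]# cat /etc/resolv.conf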



Log in to each virtual machine and change its hostname.

Taking kmaster as the example:

[root@localhost ~]# hostnamectl set-hostname kmaster
[root@localhost ~]# bash
[root@kmaster ~]#

Then repeat the same step to set the hostnames of knode1 and knode2, as shown below.
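For completeness, the same two commands on the second VM, with only the hostname changed (knode2 is handled the same way):

[root@localhost ~]# hostnamectl set-hostname knode1
[root@localhost ~]# bash
[root@knode1 ~]#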


Configure a static IP (optional)

Configure a static IP address on each of the three VMs according to the host plan; kmaster is shown as the example:

[root@kmaster ~]# cd /etc/sysconfig/network-scripts/
[root@kmaster network-scripts]# ls
ifcfg-ens160
[root@kmaster network-scripts]# vim ifcfg-ens160

[root@kmaster network-scripts]# cat ifcfg-ens160
TYPE=Ethernet
BOOTPROTO=none
NAME=ens160
DEVICE=ens160
ONBOOT=yes

IPADDR=192.168.145.141
NETMASK=255.255.255.0
GATEWAY=192.168.145.2
DNS1=192.168.145.2

Then configure the static IPs of knode1 and knode2 in the same way; only the address changes, as shown below.
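Per the host plan, only the IPADDR line differs on the other two nodes. For example, the address block in knode1's ifcfg-ens160 becomes:

IPADDR=192.168.145.142
NETMASK=255.255.255.0
GATEWAY=192.168.145.2
DNS1=192.168.145.2

knode2 uses IPADDR=192.168.145.143 with the same netmask, gateway, and DNS.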

Because the new address only takes effect after the network is restarted, this is a good point to shut the VM down and take a snapshot (description: "static IP configured"). After taking the snapshot, power the VM back on and continue with the next steps.
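As a side note, if you would rather apply the new address without shutting the VM down, reloading the NetworkManager connection usually works on CentOS Stream 8 (this assumes the connection is named ens160, as in the ifcfg file above):

[root@kmaster ~]# nmcli connection reload
[root@kmaster ~]# nmcli connection up ens160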

Preparation before running the K8s script

Upload the script file

Use WinSCP (or the file transfer feature of MobaXterm or Xshell) to upload the K8s script file to the home directory of kmaster, knode1, and knode2.
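If you prefer the command line to a GUI transfer tool, scp from your workstation does the same job (this assumes the script sits in your current directory and the node IPs follow the plan above):

for ip in 141 142 143; do scp 1-Stream8-k8s-v1.30.0.sh root@192.168.145.$ip:/root/; done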


Then log in to each of the three VMs and check that the file is there:

[root@kmaster ~]# ls
1-Stream8-k8s-v1.30.0.sh  Desktop    Downloads             Music     Public     Videos
anaconda-ks.cfg           Documents  initial-setup-ks.cfg  Pictures  Templates

With the script confirmed to be in place, note the following before running it:

Check and modify the host ip line

Open the script in vim and make sure the NIC name on the host ip line matches the NIC name on your machine. For example, if the script says ens33 but your interface is ens160, change ens33 in the script to ens160 (and vice versa). In short, the interface name on the script's host ip line must match the interface name on your own system.

In my case the interface is ens160 while the script uses ens33, so I changed every ens33 in the script to ens160 on all three nodes.
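Instead of editing the script by hand, a sed one-liner makes the substitution and grep confirms the result (swap the two interface names to match your own environment):

[root@kmaster ~]# sed -i 's/ens33/ens160/g' 1-Stream8-k8s-v1.30.0.sh
[root@kmaster ~]# grep -n 'ens160' 1-Stream8-k8s-v1.30.0.sh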



Remove podman and runc

After fixing the host ip in the script on all three VMs, remove podman and runc on each of them. Step 05 of the script installs docker-ce, and these two packages can conflict with it, which in turn makes the cluster initialization fail. Removing them up front avoids that problem.

[root@kmaster ~]# yum -y remove podman runc
Dependencies resolved.
=======================================================================================================================
 Package                    Architecture   Version                                            Repository          Size
=======================================================================================================================
Removing:
 podman                     x86_64         2:4.2.0-1.module_el8.7.0+1216+b022c01d             @AppStream          41 M
 runc                       x86_64         1:1.1.4-1.module_el8.7.0+1216+b022c01d             @AppStream         9.5 M
Removing dependent packages:
 buildah                    x86_64         1:1.27.0-2.module_el8.7.0+1216+b022c01d            @AppStream          26 M

……………………………………(output omitted)……………………………………

	shadow-utils-subid-2:4.6-17.el8.x86_64
  slirp4netns-1.2.0-2.module_el8.7.0+1216+b022c01d.x86_64

Complete!

If these two packages are not removed, the script still appears to run, but step 05 fails with the following error:

[root@knode1 ~]# sh 1-Stream8-k8s-v1.30.0.sh
###00 Checking RPM###
mount: /mnt: WARNING: device write-protected, mounted read-only.
0 files removed
repo id                                                   repo name                                             status
app                                                       app                                                   enabled

……………………………………(output omitted)……………………………………

04 configuration successful ^_^
###05 Checking docker###
Adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
list docker-ce versions
Waiting for process with pid 5073 to finish.
Last metadata expiration check: 0:00:01 ago on Wed 14 Aug 2024 10:31:01 AM CST.
docker-ce.x86_64                3:26.1.3-1.el8                  docker-ce-stable
docker-ce.x86_64                3:26.1.2-1.el8                  docker-ce-stable
docker-ce.x86_64                3:26.1.1-1.el8                  docker-ce-stable
docker-ce.x86_64                3:26.1.0-1.el8                  docker-ce-stable
docker-ce.x86_64                3:26.0.2-1.el8                  docker-ce-stable
docker-ce.x86_64                3:26.0.1-1.el8                  docker-ce-stable
docker-ce.x86_64                3:26.0.0-1.el8                  docker-ce-stable
docker-ce.x86_64                3:25.0.5-1.el8                  docker-ce-stable
docker-ce.x86_64                3:25.0.4-1.el8                  docker-ce-stable
docker-ce.x86_64                3:25.0.3-1.el8                  docker-ce-stable
docker-ce.x86_64                3:25.0.2-1.el8                  docker-ce-stable
docker-ce.x86_64                3:25.0.1-1.el8                  docker-ce-stable
docker-ce.x86_64                3:25.0.0-1.el8                  docker-ce-stable
docker-ce.x86_64                3:24.0.9-1.el8                  docker-ce-stable
docker-ce.x86_64                3:24.0.8-1.el8                  docker-ce-stable
docker-ce.x86_64                3:24.0.7-1.el8                  docker-ce-stable
docker-ce.x86_64                3:24.0.6-1.el8                  docker-ce-stable
docker-ce.x86_64                3:24.0.5-1.el8                  docker-ce-stable
docker-ce.x86_64                3:24.0.4-1.el8                  docker-ce-stable
docker-ce.x86_64                3:24.0.3-1.el8                  docker-ce-stable
docker-ce.x86_64                3:24.0.2-1.el8                  docker-ce-stable
docker-ce.x86_64                3:24.0.1-1.el8                  docker-ce-stable
docker-ce.x86_64                3:24.0.0-1.el8                  docker-ce-stable
docker-ce.x86_64                3:23.0.6-1.el8                  docker-ce-stable
docker-ce.x86_64                3:23.0.5-1.el8                  docker-ce-stable
docker-ce.x86_64                3:23.0.4-1.el8                  docker-ce-stable
docker-ce.x86_64                3:23.0.3-1.el8                  docker-ce-stable
docker-ce.x86_64                3:23.0.2-1.el8                  docker-ce-stable
docker-ce.x86_64                3:23.0.1-1.el8                  docker-ce-stable
docker-ce.x86_64                3:23.0.0-1.el8                  docker-ce-stable
docker-ce.x86_64                3:20.10.9-3.el8                 docker-ce-stable
docker-ce.x86_64                3:20.10.8-3.el8                 docker-ce-stable
docker-ce.x86_64                3:20.10.7-3.el8                 docker-ce-stable
docker-ce.x86_64                3:20.10.6-3.el8                 docker-ce-stable
docker-ce.x86_64                3:20.10.5-3.el8                 docker-ce-stable
docker-ce.x86_64                3:20.10.4-3.el8                 docker-ce-stable
docker-ce.x86_64                3:20.10.3-3.el8                 docker-ce-stable
docker-ce.x86_64                3:20.10.24-3.el8                docker-ce-stable
docker-ce.x86_64                3:20.10.2-3.el8                 docker-ce-stable
docker-ce.x86_64                3:20.10.23-3.el8                docker-ce-stable
docker-ce.x86_64                3:20.10.22-3.el8                docker-ce-stable
docker-ce.x86_64                3:20.10.21-3.el8                docker-ce-stable
docker-ce.x86_64                3:20.10.20-3.el8                docker-ce-stable
docker-ce.x86_64                3:20.10.19-3.el8                docker-ce-stable
docker-ce.x86_64                3:20.10.18-3.el8                docker-ce-stable
docker-ce.x86_64                3:20.10.17-3.el8                docker-ce-stable
docker-ce.x86_64                3:20.10.16-3.el8                docker-ce-stable
docker-ce.x86_64                3:20.10.15-3.el8                docker-ce-stable
docker-ce.x86_64                3:20.10.14-3.el8                docker-ce-stable
docker-ce.x86_64                3:20.10.1-3.el8                 docker-ce-stable
docker-ce.x86_64                3:20.10.13-3.el8                docker-ce-stable
docker-ce.x86_64                3:20.10.12-3.el8                docker-ce-stable
docker-ce.x86_64                3:20.10.11-3.el8                docker-ce-stable
docker-ce.x86_64                3:20.10.10-3.el8                docker-ce-stable
docker-ce.x86_64                3:20.10.0-3.el8                 docker-ce-stable
docker-ce.x86_64                3:19.03.15-3.el8                docker-ce-stable
docker-ce.x86_64                3:19.03.14-3.el8                docker-ce-stable
docker-ce.x86_64                3:19.03.13-3.el8                docker-ce-stable
Available Packages
Last metadata expiration check: 0:00:02 ago on Wed 14 Aug 2024 10:31:01 AM CST.
Error:
 Problem: package docker-ce-3:26.1.3-1.el8.x86_64 requires containerd.io >= 1.6.24, but none of the providers can be installed
  - package containerd.io-1.6.24-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.1.4-1.module_el8.7.0+1216+b022c01d.x86_64
  - package containerd.io-1.6.24-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.1.4-1.module_el8.7.0+1216+b022c01d.x86_64
  - package containerd.io-1.6.25-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.1.4-1.module_el8.7.0+1216+b022c01d.x86_64
  - package containerd.io-1.6.25-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.1.4-1.module_el8.7.0+1216+b022c01d.x86_64
  - package containerd.io-1.6.26-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.1.4-1.module_el8.7.0+1216+b022c01d.x86_64
  - package containerd.io-1.6.26-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.1.4-1.module_el8.7.0+1216+b022c01d.x86_64
  - package containerd.io-1.6.27-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.1.4-1.module_el8.7.0+1216+b022c01d.x86_64
  - package containerd.io-1.6.27-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.1.4-1.module_el8.7.0+1216+b022c01d.x86_64
  - package containerd.io-1.6.28-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.1.4-1.module_el8.7.0+1216+b022c01d.x86_64
  - package containerd.io-1.6.28-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.1.4-1.module_el8.7.0+1216+b022c01d.x86_64
  - package containerd.io-1.6.28-3.2.el8.x86_64 conflicts with runc provided by runc-1:1.1.4-1.module_el8.7.0+1216+b022c01d.x86_64
  - package containerd.io-1.6.28-3.2.el8.x86_64 obsoletes runc provided by runc-1:1.1.4-1.module_el8.7.0+1216+b022c01d.x86_64
  - package containerd.io-1.6.31-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.1.4-1.module_el8.7.0+1216+b022c01d.x86_64
  - package containerd.io-1.6.31-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.1.4-1.module_el8.7.0+1216+b022c01d.x86_64
  - package containerd.io-1.6.32-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.1.4-1.module_el8.7.0+1216+b022c01d.x86_64
  - package containerd.io-1.6.32-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.1.4-1.module_el8.7.0+1216+b022c01d.x86_64
  - problem with installed package buildah-1:1.27.0-2.module_el8.7.0+1216+b022c01d.x86_64
  - package buildah-1:1.27.0-2.module_el8.7.0+1216+b022c01d.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package runc-1.0.0-56.rc5.dev.git2abd837.module_el8.4.0+521+9df8e6d3.x86_64 is filtered out by modular filtering
  - package runc-1.0.0-64.rc10.module_el8.4.0+522+66908d0c.x86_64 is filtered out by modular filtering
  - package runc-1.0.0-73.rc95.module_el8.6.0+1107+d59a301b.x86_64 is filtered out by modular filtering
  - package runc-1:1.1.3-2.module_el8.7.0+1197+29cf2b8e.x86_64 is filtered out by modular filtering
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
Failed to start docker.service: Unit docker.service not found.
Failed to enable unit: Unit file docker.service does not exist.
1-Stream8-k8s-v1.30.0.sh: line 77: /etc/docker/daemon.json: No such file or directory
Failed to restart docker.service: Unit docker.service not found.
05 configuration successful ^_^
###06 Checking iptables###
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
net.ipv4.ip_forward = 1
06 configuration successful ^_^

………………………………(output omitted)………………………………

  slirp4netns-1.2.0-2.module_el8.7.0+1216+b022c01d.x86_64

Complete!

So, to keep this error from breaking the later configuration, remove the two packages before running the script.
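A quick check before running the script confirms that both packages are really gone (rpm reports a missing package as "not installed"):

[root@kmaster ~]# rpm -q podman runc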


Install the containerd container runtime

Installing containerd is a foundational step for a Kubernetes cluster: it is the container runtime that actually creates and manages the containerized applications. Install containerd on all three VMs now, so that a missing runtime does not cause the script to fail later:

[root@knode1 ~]# yum -y install containerd.io.x86_64
Last metadata expiration check: 0:18:07 ago on Wed 14 Aug 2024 10:31:05 AM CST.
Dependencies resolved.
=======================================================================================================================
 Package                  Architecture  Version                                          Repository               Size
=======================================================================================================================
Installing:
 containerd.io            x86_64        1.6.32-3.1.el8                                   docker-ce-stable         35 M
Installing dependencies:
 container-selinux        noarch        2:2.189.0-1.module_el8.7.0+1216+b022c01d         app                      60 k

Transaction Summary

………………………………………………………………(output omitted)………………………………………………………………

Installed:
  container-selinux-2:2.189.0-1.module_el8.7.0+1216+b022c01d.noarch         containerd.io-1.6.32-3.1.el8.x86_64

Complete!



Run the K8s script

A successful run

Run the script you uploaded to the home directory on each of the three VMs:

[root@knode1 ~]# sh 1-Stream8-k8s-v1.30.0.sh

At this point every section of the script should report successful on all three VMs.
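Before moving on, it does not hurt to confirm that the key components the script installs are in place; a quick sanity check could look like this (note that kubelet itself will not be fully running until the cluster is initialized):

[root@kmaster ~]# systemctl is-enabled containerd kubelet
[root@kmaster ~]# kubeadm version
[root@kmaster ~]# crictl --version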

Run step 11 of the script on the master

First take a look at the script file, then copy the step 11 command out of it and run it on the master alone to initialize the cluster:

[root@kmaster ~]# cat 1-Stream8-k8s-v1.30.0.sh
#!/bin/bash
# CentOS stream 8 install kubenetes 1.30.0
# the number of available CPUs 1 is less than the required 2
# k8s requires at least 2 virtual CPUs
# Usage: run this script on all nodes; after every node is configured, copy the step 11 command and run it on the master node only to initialize the cluster.

……………………………………(output omitted)……………………………………

#11 Initialize the cluster
# Initialize the cluster on the master host only
# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.30.0 --pod-network-cidr=10.244.0.0/16

Copy the command kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.30.0 --pod-network-cidr=10.244.0.0/16 and run it only on the master; do not run it on knode1 or knode2.

[root@kmaster ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.30.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.30.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0814 11:52:08.234925   36278 checks.go:844] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.145.141]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.145.141 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.145.141 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.921332ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 8.001371s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kmaster as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kmaster as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: fi8x2i.svw0deni455pgpy2
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.145.141:6443 --token fi8x2i.svw0deni455pgpy2 \
        --discovery-token-ca-cert-hash sha256:8a15b6f7f331df9df3d7145fad9ee43bb388cd7e245e10cf326c097b0f5247c3

The last two lines of the output form the join command, with the token the two worker nodes will use to join the master:

kubeadm join 192.168.145.141:6443 --token fi8x2i.svw0deni455pgpy2 \
        --discovery-token-ca-cert-hash sha256:8a15b6f7f331df9df3d7145fad9ee43bb388cd7e245e10cf326c097b0f5247c3

The token is valid for 24 hours; after that you need to generate a new join token before additional nodes can join the master's cluster.
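You can check which tokens currently exist and when they expire with kubeadm token list:

[root@kmaster ~]# kubeadm token list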


Print a new join token

Generate and print a new join token directly on the master:

[root@kmaster ~]# kubeadm token create --print-join-command
kubeadm join 192.168.145.141:6443 --token tmzwy9.bwuu6zccpg0ehn3v --discovery-token-ca-cert-hash sha256:8a15b6f7f331df9df3d7145fad9ee43bb388cd7e245e10cf326c097b0f5247c3


Join the cluster

Copy the newly generated join command and run it on knode1 and knode2:

[root@knode1 ~]# kubeadm join 192.168.145.141:6443 --token tmzwy9.bwuu6zccpg0ehn3v --discovery-token-ca-cert-hash sha256:8a15b6f7f331df9df3d7145fad9ee43bb388cd7e245e10cf326c097b0f5247c3
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "knode1" could not be reached
        [WARNING Hostname]: hostname "knode1": lookup knode1 on 192.168.145.2:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001275892s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@knode2 ~]# kubeadm join 192.168.145.141:6443 --token tmzwy9.bwuu6zccpg0ehn3v --discovery-token-ca-cert-hash sha256:8a15b6f7f331df9df3d7145fad9ee43bb388cd7e245e10cf326c097b0f5247c3
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "knode2" could not be reached
        [WARNING Hostname]: hostname "knode2": lookup knode2 on 192.168.145.2:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.048398ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

At this point, knode1 and knode2 have joined the cluster successfully.


Enable the kubectl command

kubectl reads the cluster credentials from the config file under the hidden .kube directory, or from the KUBECONFIG environment variable. Copy admin.conf into ~/.kube/config, export KUBECONFIG in /etc/profile, and reload the profile so the kubectl commands work normally:

[root@kmaster ~]# mkdir -p $HOME/.kube
[root@kmaster ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@kmaster ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@kmaster ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
[root@kmaster ~]# source /etc/profile

After these five commands, kubectl works normally, and we can use it to see the two nodes that just joined the cluster:

[root@kmaster ~]# kubectl get node
NAME      STATUS     ROLES           AGE     VERSION
kmaster   NotReady   control-plane   4m18s   v1.30.0
knode1    NotReady   <none>          9s      v1.30.0
knode2    NotReady   <none>          5s      v1.30.0

All of the cluster's nodes are now visible, but each one shows NotReady, meaning it is not ready to serve yet. The reason is that the pod network between the nodes is not up; installing the calico network plugin provides the connectivity between kmaster and the knodes.
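If you are curious about the exact reason a node is NotReady, kubectl describe prints it in the Conditions section (typically a message saying the CNI network plugin is not yet initialized):

[root@kmaster ~]# kubectl describe node knode1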


Upload the yaml files and the calico image tarballs

The steps below use kmaster as the example; repeat the same operations on knode1 and knode2.

First create a calico directory in the home directory of each of the three VMs to hold the nine calico image tarballs, then use WinSCP again to upload the two yaml files and the calico image tarballs to all three nodes.

[root@kmaster ~]# mkdir calico

Upload the yaml files


Upload the calico image tarballs


Afterwards, the uploaded files can be seen in the home directory and the calico directory on each of the three VMs:

[root@kmaster ~]# ls
1-Stream8-k8s-v1.30.0.sh  custom-resources-v3.28.0.yaml  Downloads             Pictures   tigera-operator-v3.28.0.yaml
anaconda-ks.cfg           Desktop                        initial-setup-ks.cfg  Public     Videos
calico                    Documents                      Music                 Templates
[root@kmaster ~]# cd calico/
[root@kmaster calico]# ls
apiserver-v3.28.0.tar  kube-controllers-v3.28.0.tar       operator-v1.34.0.tar
cni-v3.28.0.tar        node-driver-registrar-v3.28.0.tar  pod2daemon-flexvol-v3.28.0.tar
csi-v3.28.0.tar        node-v3.28.0.tar                   typha-v3.28.0.tar


Import the calico images

All of the images listed by crictl images live in containerd's underlying k8s.io namespace:

[root@kmaster ~]# crictl images
IMAGE                                                             TAG                 IMAGE ID            SIZE
registry.aliyuncs.com/google_containers/coredns                   v1.11.1             cbb01a7bd410d       18.2MB
registry.aliyuncs.com/google_containers/etcd                      3.5.12-0            3861cfcd7c04c       57.2MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.30.0             c42f13656d0b2       32.7MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.30.0             c7aad43836fa5       31MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.30.0             a0bf559e280cf       29MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.30.0             259c8277fcbbc       19.2MB
registry.aliyuncs.com/google_containers/pause                     3.6                 6270bb605e12e       302kB
registry.aliyuncs.com/google_containers/pause                     3.9                 e6f1816883972       322kB
[root@kmaster ~]# ctr -n k8s.io image import apiserver-v3.28.0.tar
ctr: open apiserver-v3.28.0.tar: no such file or directory

Next, on all three VMs, use ctr to import the nine calico images into the k8s.io namespace. Note that the import must be run from inside the calico directory where the tarballs live; otherwise ctr fails with "no such file or directory", as in the last command above:

[root@kmaster calico]# ctr -n k8s.io image import apiserver-v3.28.0.tar
unpacking docker.io/calico/apiserver:v3.28.0 (sha256:038978b5c494f9cb6c3fdac6a6c4cc5db9a98f0750e1a25afffa91deca341a28)...done
[root@kmaster calico]# ctr -n k8s.io image import cni-v3.28.0.tar csi-v3.28.0.tar
unpacking docker.io/calico/cni:v3.28.0 (sha256:2089130bd98d0fbf20b9d24fea3c29bdb1a3a7eacd21a5d1d9ae8165511a6f90)...done
[root@kmaster calico]# ctr -n k8s.io image import csi-v3.28.0.tar
unpacking docker.io/calico/csi:v3.28.0 (sha256:69740808222bfa4ad0690d3463021eceb982abfc3f8dff1263d58aa9dd2210bb)...done
[root@kmaster calico]# ctr -n k8s.io image import kube-controllers-v3.28.0.tar
unpacking docker.io/calico/kube-controllers:v3.28.0 (sha256:7e758bd6257c9a00c940c9f020894aeb736f0a7c65c5f8f61b215adc7a6afbe2)...done
[root@kmaster calico]# ctr -n k8s.io image import node-driver-registrar-v3.28.0.tar
unpacking docker.io/calico/node-driver-registrar:v3.28.0 (sha256:1eaefbf315f9bb4a25081c38948e450ea0d295bca366de15c50769fe5a9d9ea8)...done
[root@kmaster calico]# ctr -n k8s.io image import node-v3.28.0.tar
unpacking docker.io/calico/node:v3.28.0 (sha256:2b4e1fe89693271bf4a95f77195c9a33a46dc86f481bdbd628b62d188fde582e)...done
[root@kmaster calico]# ctr -n k8s.io image import operator-v1.34.0.tar
unpacking quay.io/tigera/operator:v1.34.0 (sha256:1cbb698731e0a38687ab1207b0112e4681cad1495a047eb4a27634b78d6b5f27)...done
[root@kmaster calico]# ctr -n k8s.io image import pod2daemon-flexvol-v3.28.0.tar
unpacking docker.io/calico/pod2daemon-flexvol:v3.28.0 (sha256:3be792e852c7d64361f096cd0b73aff9094f17f7d999c0c608494e7be2775dd4)...done
[root@kmaster calico]# ctr -n k8s.io image import typha-v3.28.0.tar
unpacking docker.io/calico/typha:v3.28.0 (sha256:6b6b7f5be1ce44ce6d26da169fa005e78a9acb494b3cb281e67b5fa19521cdd4)...done
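If you would rather not type nine import commands on every node, a small shell loop run from inside the calico directory does the same thing:

[root@kmaster calico]# for t in *.tar; do ctr -n k8s.io image import "$t"; done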

Running crictl images again now shows the nine images we just imported:

[root@kmaster calico]# crictl images
IMAGE                                                             TAG                 IMAGE ID            SIZE
docker.io/calico/apiserver                                        v3.28.0             6c07591fd1cfa       97.9MB
docker.io/calico/cni                                              v3.28.0             107014d9f4c89       209MB
docker.io/calico/csi                                              v3.28.0             1a094aeaf1521       18.3MB
docker.io/calico/kube-controllers                                 v3.28.0             428d92b022539       79.2MB
docker.io/calico/node-driver-registrar                            v3.28.0             0f80feca743f4       23.5MB
docker.io/calico/node                                             v3.28.0             4e42b6f329bc1       355MB
docker.io/calico/pod2daemon-flexvol                               v3.28.0             587b28ecfc62e       13.4MB
docker.io/calico/typha                                            v3.28.0             a9372c0f51b54       71.2MB
quay.io/tigera/operator                                           v1.34.0             01249e32d0f6f       73.7MB
registry.aliyuncs.com/google_containers/coredns                   v1.11.1             cbb01a7bd410d       18.2MB
registry.aliyuncs.com/google_containers/etcd                      3.5.12-0            3861cfcd7c04c       57.2MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.30.0             c42f13656d0b2       32.7MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.30.0             c7aad43836fa5       31MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.30.0             a0bf559e280cf       29MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.30.0             259c8277fcbbc       19.2MB
registry.aliyuncs.com/google_containers/pause                     3.6                 6270bb605e12e       302kB
registry.aliyuncs.com/google_containers/pause                     3.9                 e6f1816883972       322kB


Apply the yaml files

On the kmaster node, apply the two yaml files in order: first tigera-operator-v3.28.0.yaml, then custom-resources-v3.28.0.yaml.

[root@kmaster ~]# kubectl create -f tigera-operator-v3.28.0.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
[root@kmaster ~]# kubectl create -f custom-resources-v3.28.0.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created

At this point the cluster is fully configured.


Check the cluster status again

Keep an eye on the calico pods; once they are all Running, the cluster becomes healthy.

[root@kmaster calico]# kubectl get pod -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-67bf8fc9fc-4f78f   1/1     Running   0          63s
calico-node-7kwvx                          1/1     Running   0          63s
calico-node-pwwcv                          1/1     Running   0          63s
calico-node-xfwpk                          1/1     Running   0          63s
calico-typha-547b9c8859-b6xxp              1/1     Running   0          63s
calico-typha-547b9c8859-jjvtt              1/1     Running   0          63s
csi-node-driver-8njn2                      2/2     Running   0          63s
csi-node-driver-g25br                      2/2     Running   0          63s
csi-node-driver-thj9q                      2/2     Running   0          63s

Once every pod is Running, the calico network plugin has been installed successfully.
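To watch the pods flip to Running live instead of re-running the command, you can append -w to kubectl get:

[root@kmaster calico]# kubectl get pod -n calico-system -w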

Finally, check the cluster status:

[root@kmaster calico]# kubectl get node
NAME      STATUS   ROLES           AGE   VERSION
kmaster   Ready    control-plane   68m   v1.30.0
knode1    Ready    <none>          64m   v1.30.0
knode2    Ready    <none>          64m   v1.30.0

All nodes are now Ready, and the K8s cluster setup is complete.


PS: It is a good idea to take a snapshot of each node of the finished K8s cluster, so that if problems come up later you can fall back to this complete, working cluster.



Author:Unefleur