I. Environment preparation

1. System overview

  • k8s-master:192.168.142.131
  • k8s-node1:192.168.142.133
  • k8s-node2:192.168.142.134

2. Software versions

  • Docker: 20 (docker-ce)
  • K8s: 1.23

3. Host environment configuration (run on every host)

systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
swapoff -a # disable swap temporarily; to disable it permanently, comment out the swap line in /etc/fstab
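The `swapoff -a` above is lost on reboot. To make the change permanent, the swap entry in /etc/fstab can be commented out with `sed`; a minimal sketch, run here against a scratch copy so the real fstab is untouched (the device names are made up for the example):

```shell
# Demonstrate the permanent fix on a scratch copy of /etc/fstab.
# On a real host you would run the sed command on /etc/fstab itself (back it up first).
cat > /tmp/fstab.demo << 'EOF'
/dev/mapper/centos-root /        xfs  defaults 0 0
/dev/mapper/centos-swap swap     swap defaults 0 0
EOF

# Prefix every line whose filesystem type is swap with '#'.
# (Running it twice would double-comment the line, so apply it only once.)
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab.demo

grep swap /tmp/fstab.demo   # the swap entry is now commented out
```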

cat >> /etc/hosts << EOF
192.168.142.131 k8s-master
192.168.142.133 k8s-node1
192.168.142.134 k8s-node2
EOF

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # if the net.bridge keys are reported as missing, run modprobe br_netfilter first

hostnamectl set-hostname <hostname> # set each host's name according to the plan above
yum install ntpdate -y
ntpdate -u cn.pool.ntp.org
echo "*/20 * * * * /usr/sbin/ntpdate -u cn.pool.ntp.org >/dev/null &" >> /var/spool/cron/root # re-sync the clock every 20 minutes via cron

II. Cluster installation and configuration

1. Install docker-ce, kubeadm, kubelet, and kubectl on all hosts

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
systemctl enable docker && systemctl start docker
#Configure an image registry mirror below; otherwise pulling images during master initialization times out
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0
systemctl enable kubelet
systemctl start kubelet

2. Deploy the k8s-master node

kubeadm init \
--apiserver-advertise-address=192.168.142.131 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.0 \
--service-cidr=10.10.0.0/16 \
--pod-network-cidr=10.20.0.0/16 \
--ignore-preflight-errors=all

Parameter notes:
--apiserver-advertise-address=192.168.142.131 #address the cluster advertises (the master's IP)
--image-repository registry.aliyuncs.com/google_containers #use the Aliyun image registry
--kubernetes-version v1.23.0 #K8s version, matching the packages installed above
--service-cidr=10.10.0.0/16 #internal virtual network for Services, the unified entry point to Pods
--pod-network-cidr=10.20.0.0/16 #Pod network; must match the CNI manifest applied below
--ignore-preflight-errors=all #skip preflight check failures

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config #copy the k8s admin credentials
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes #list the cluster nodes


3. Join the k8s-node hosts to the cluster

kubeadm join 192.168.142.131:6443 --token bymjqi.pug8jqsrv1x9cmpt \
--discovery-token-ca-cert-hash sha256:9656ee3002137db251923e93176e4ebf08de57b2c82979d30a7c49f1a59e7024
#Run the join command printed by kubeadm init above on each node (yours will differ from the one shown here).
#The token is valid for 24 hours; once it expires, generate a new join command with:
kubeadm token create --print-join-command
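If the join command was saved to a file, the token and CA-cert hash can be pulled back out with standard text tools. A small sketch (the file /tmp/join.txt and its contents are made up for the example, reusing the token shown above):

```shell
# Hypothetical example: parse a saved `kubeadm token create --print-join-command` output.
cat > /tmp/join.txt << 'EOF'
kubeadm join 192.168.142.131:6443 --token bymjqi.pug8jqsrv1x9cmpt --discovery-token-ca-cert-hash sha256:9656ee3002137db251923e93176e4ebf08de57b2c82979d30a7c49f1a59e7024
EOF

# Extract the word following each flag (GNU grep with -P for lookbehind).
TOKEN=$(grep -oP '(?<=--token )\S+' /tmp/join.txt)
HASH=$(grep -oP '(?<=--discovery-token-ca-cert-hash )\S+' /tmp/join.txt)
echo "token=$TOKEN"
echo "hash=$HASH"
```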


4. Deploy the Calico network plugin on k8s-master

wget https://docs.projectcalico.org/manifests/calico.yaml --no-check-certificate
cat calico.yaml | grep image #check which images the manifest pulls
          image: docker.io/calico/cni:v3.24.5
          imagePullPolicy: IfNotPresent
          image: docker.io/calico/cni:v3.24.5
          imagePullPolicy: IfNotPresent
          image: docker.io/calico/node:v3.24.5
          imagePullPolicy: IfNotPresent
          image: docker.io/calico/node:v3.24.5
          imagePullPolicy: IfNotPresent
          image: docker.io/calico/kube-controllers:v3.24.5
          imagePullPolicy: IfNotPresent
sed -i 's/192.168.0.0/10.20.0.0/g' calico.yaml #make CALICO_IPV4POOL_CIDR match the --pod-network-cidr passed to kubeadm init; in recent manifests this entry is commented out by default, so uncomment it as well
kubectl apply -f calico.yaml
kubectl get pods -n kube-system #startup is slow; wait a while for the pods to reach Running
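The effect of the sed edit can be checked against a small excerpt of the manifest. A sketch using a stand-in file (the real calico.yaml is much larger; only the CALICO_IPV4POOL_CIDR lines are reproduced here):

```shell
# Stand-in excerpt of calico.yaml containing the pool CIDR setting,
# commented out as it ships in recent manifests.
cat > /tmp/calico-excerpt.yaml << 'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF

# Same substitution as in the text: point the pool at the --pod-network-cidr value.
sed -i 's/192.168.0.0/10.20.0.0/g' /tmp/calico-excerpt.yaml
# Uncomment the env entry by stripping the leading '# ' on each line.
sed -i 's/# //' /tmp/calico-excerpt.yaml

cat /tmp/calico-excerpt.yaml
```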

Note: with K8s 1.23, a mismatched Calico version fails with the error: no matches for kind "PodDisruptionBudget" in version "policy/v1".

Check which Calico version matches your K8s version: https://projectcalico.docs.tigera.io/archive/v3.24/getting-started/kubernetes/requirements


The requirements page shows that Calico v3.24 supports K8s 1.23, so the version requirement is met.


III. Test the Kubernetes cluster

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc


From a node, access the NodePort the Service was mapped to (30063 in this run):


IV. Deploy the Dashboard on a node

1. Deploy the Dashboard

# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
# vim recommended.yaml #change the Service below so the Dashboard is reachable from outside the cluster
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
---

# kubectl apply -f recommended.yaml
# kubectl get pods -n kubernetes-dashboard


If kubectl on the node fails with "The connection to the server localhost:8080 was refused - did you specify the right host or port?", fix it by copying the admin kubeconfig from the master:

scp -r 192.168.142.131:/etc/kubernetes/admin.conf /etc/kubernetes/
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile


2. Grant admin rights and get the token needed for web login

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
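The `describe secrets` output above contains more than just the token; the token value itself can be isolated with awk. A sketch against saved output (the secret name and token contents below are made up for the example):

```shell
# Hypothetical example: pull just the token value out of saved
# `kubectl describe secrets` output (all values here are fake).
cat > /tmp/secret-describe.txt << 'EOF'
Name:         dashboard-admin-token-x7k2p
Namespace:    kube-system
Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1099 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImZha2UifQ.fake.fake
EOF

# The token line starts with "token:"; its second field is the bearer token.
TOKEN=$(awk '/^token:/ {print $2}' /tmp/secret-describe.txt)
echo "$TOKEN"
```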


3. Access the Dashboard in a browser

Open https://<node-ip>:30001 (the nodePort set in the Service above) and sign in with the token obtained in the previous step.