1. Environment Information

K8s-master  192.168.1.10  CentOS 7.5  2C/4G/40G
K8s-node    192.168.1.11  CentOS 7.5  2C/4G/40G

2. Pre-installation Preparation (run on all nodes)

2.1. Disable the firewall

systemctl disable firewalld
systemctl stop firewalld

2.2. Disable SELinux

setenforce 0
sed -i '/^SELINUX=/c SELINUX=disabled' /etc/selinux/config

2.3. Disable swap

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
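A quick way to confirm swap is really off (a verification step, not part of the original instructions):

free -h   # the Swap line should show 0B total and 0B used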

2.4. Pass bridged IPv4 traffic to the iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
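If sysctl --system reports that the net.bridge.bridge-nf-call-* keys are unknown, the br_netfilter kernel module is probably not loaded yet. A minimal sketch of the extra step (not in the original instructions):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # also load it on boot
sysctl --system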

2.5. Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
systemctl enable docker && systemctl start docker

Configure a Docker registry mirror (image accelerator) to speed up image pulls.
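The post does not show the accelerator configuration itself; a minimal sketch is below. The mirror URL is a placeholder, replace it with the accelerator address issued for your own account:

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload && systemctl restart docker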

2.6. Add the Aliyun Kubernetes YUM repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2.7. Install kubeadm, kubelet and kubectl

Note: there is no need to start kubelet at this point; it is started automatically when the Master is deployed.

yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0
systemctl enable kubelet
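Optionally confirm that the installed versions match (a verification step, not in the original instructions):

kubeadm version -o short
kubelet --version
kubectl version --client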

3. Deploy the Kubernetes Master (run on the Master node)

kubeadm init \
--apiserver-advertise-address=192.168.1.10 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.19.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all

Flag explanations:

  • --apiserver-advertise-address # the cluster advertise address; use the Master's physical NIC IP
  • --image-repository # specify the Aliyun image repository address
  • --kubernetes-version # the K8s version, matching the packages installed above
  • --service-cidr # the cluster's internal virtual network; the Cluster IP (Service) range
  • --pod-network-cidr # the Pod IP range
  • --ignore-preflight-errors=all # ignore errors reported during the preflight checks

Note: to start over, reset first with kubeadm reset.
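A sketch of a fuller reset, assuming the commonly paired cleanup steps (only kubeadm reset itself is mentioned in the post):

kubeadm reset -f
rm -rf $HOME/.kube /etc/cni/net.d        # stale kubeconfig and CNI config
iptables -F && iptables -t nat -F        # flush rules left behind by kube-proxy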

Output:
Save the kubeadm join command printed at the end of the output; it is needed when adding nodes.
Copy the kubeconfig file that kubectl uses to authenticate to the cluster to its default path:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
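kubectl should now be able to reach the cluster; a quick check (the Master will report NotReady until the network plugin is installed in section 5):

kubectl get nodes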

4. Join the Kubernetes Node to the Cluster (run on the Node)

Run the kubeadm join command printed by kubeadm init:

kubeadm join 192.168.1.10:6443 --token 78gee8.o8w0kvp9g1qsr97p  \
--discovery-token-ca-cert-hash sha256:27dbba301cc22bdcd457d8bbdfb7acb97eb124ca25a6c93e22b084c9a35ad2dc
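If the join command was lost or the token has expired (kubeadm tokens are valid for 24 hours by default), a new one can be printed on the Master; this step is not in the original post:

kubeadm token create --print-join-command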

Output:
Check the node status on the Master (kubectl get nodes).
The nodes show NotReady; checking the logs with journalctl -u kubelet -f reveals that the network plugin is not ready.

5. Install the Calico Network Plugin

wget https://docs.projectcalico.org/manifests/calico.yaml
vi calico.yaml

Edit the line that defines the Pod network (CALICO_IPV4POOL_CIDR); its value must match the --pod-network-cidr passed to kubeadm init.
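In the stock manifest this variable is commented out; after editing, the entry should look roughly like the following (the value matches the 10.244.0.0/16 used by kubeadm init above, and the indentation must stay aligned with the surrounding env entries):

- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"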

kubectl apply -f calico.yaml
kubectl get pod -o wide -n kube-system

Two Pods scheduled onto 192.168.1.11 fail to start because that machine has no internet access, so the required images need to be copied from 192.168.1.10 to 192.168.1.11.

5.1. Save the images (on the Master)

mkdir images
cd images
docker save registry.aliyuncs.com/google_containers/pause:3.2 -o pause.tar
docker save registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0 -o kube-proxy.tar
docker save calico/node:v3.16.4 -o calico-node.tar
docker save calico/pod2daemon-flexvol:v3.16.4 -o calico-pod2daemon-flexvol.tar
docker save calico/cni:v3.16.4 -o calico-cni.tar
scp -r images 192.168.1.11:/opt/

5.2. Load the images (on the Node)

cd /opt/images
ls *.tar | xargs -I {} docker load -i {}
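To confirm the images landed on the node (a verification step, not in the original post):

docker images | grep -E 'calico|google_containers'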

After a short while, all the Pods come up normally and the node status changes to Ready.
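The screenshots above correspond to the usual status checks on the Master, roughly:

kubectl get pod -o wide -n kube-system
kubectl get nodes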