Quickly Deploy Your Kubernetes Cluster with kubeadm

What Is Kubernetes


• Kubernetes is a container cluster management system that Google open-sourced in 2014; Kubernetes is abbreviated as K8s.


• Kubernetes is used to deploy, scale, and manage containerized applications, with the goal of making containerized application deployment simple and efficient.


Official website: http://www.kubernetes.io

Official documentation: https://kubernetes.io/zh/docs/home/

Kubernetes Cluster Architecture and Components



(Figure: Kubernetes cluster architecture and components)


 



Master Components

  • kube-apiserver

The Kubernetes API server is the unified entry point to the cluster and the coordinator among all components. It exposes its services as a RESTful API; every create, read, update, delete, and watch operation on object resources goes through the API server, which then persists the result to etcd.


  • kube-controller-manager

Handles routine background tasks in the cluster. Each resource type has a corresponding controller, and the controller-manager is responsible for managing these controllers.


  • kube-scheduler

Selects a Node for newly created Pods according to its scheduling algorithm. It can be deployed flexibly: on the same node as other control-plane components or on a separate node.


  • etcd

A distributed key-value store. It holds the cluster's state data, such as Pod and Service object information.





Node Components



  • kubelet

The kubelet is the Master's agent on each Node. It manages the lifecycle of the containers running on its host: creating containers, mounting volumes for Pods, downloading secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers.



  • kube-proxy

Implements the Pod network proxy on each Node, maintaining network rules and layer-4 load balancing.



  • docker or rkt (rocket)

The container engine, which runs the containers.



Two Ways to Deploy K8s in Production


kubeadm

kubeadm is a tool providing kubeadm init and kubeadm join, used to deploy a Kubernetes cluster quickly.

Deployment docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/ (kubeadm can deploy k8s quickly; it exists to simplify the binary deployment of K8s clusters)

The drawback: a k8s cluster built with kubeadm is somewhat harder to maintain later, because the setup is fully automated. Every configuration it makes is invisible to you; it silently completes a whole series of steps in the background, so during later maintenance the environment is unfamiliar. Modifying a setting, or even finding a configuration file, feels foreign. With a binary deployment, by contrast, you know exactly where each manually configured component lives and what its parameters are.

Binary

Recommended: download the release binaries from the official site and deploy each component by hand to assemble the Kubernetes cluster.

Download: https://github.com/kubernetes/kubernetes/releases

Both approaches are fine for production; binary deployment is the recommended choice.

Recommended Server Hardware Configuration


For production, you can start from a minimum per-container resource footprint, say 1 core and 2 GB, and roughly estimate how many containers each node can run.

(Figure: recommended server hardware configurations)

 

Quickly Deploying a K8s Cluster with kubeadm


Environment initialization (run on the Master and both Nodes) [all nodes]

Upgrade the operating system first: if your version is 7.1-7.4, run yum update to upgrade.

[root@localhost ~]# yum update -y
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

Disable the firewall: CentOS 7 uses firewalld, while version 6 used iptables (iptables can also still be used on CentOS 7). Both firewalld and iptables are user-space tools built on the kernel's netfilter. A fresh CentOS 7 install creates netfilter rules that only open certain ports to outside access, so to clear those rules we simply turn the firewall off. In practice, a firewall usually sits at the traffic entry point in front of the servers; firewall rules are rarely applied on the servers themselves.

systemctl stop firewalld
systemctl disable firewalld

Disable the swap partition: in k8s this is mandatory; if swap is not turned off, k8s may fail to start.

# disable swap
swapoff -a    # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent
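
To confirm swap is really off (a quick sanity check, not part of the original steps; the Swap row should show all zeros):

free -m | grep -i swap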

Disable SELinux:

sed -i 's/enforcing/disabled/' /etc/selinux/config    # permanent
setenforce 0    # temporary
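
To confirm the SELinux state (a quick sanity check: setenforce 0 shows Permissive for now; Disabled only appears after a reboot picks up the config change):

getenforce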

Set hostnames: add the planned IPs and hostnames.

hostnamectl set-hostname k8s-master   # on the master
hostnamectl set-hostname k8s-node1    # on node1
hostnamectl set-hostname k8s-node2    # on node2


cat >> /etc/hosts << EOF
192.168.179.102 k8s-master
192.168.179.103 k8s-node1
192.168.179.104 k8s-node2
EOF
# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system    # apply
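
If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded; a hedged fix before re-running sysctl --system:

modprobe br_netfilter
lsmod | grep br_netfilter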

Time synchronization: make sure the clocks stay consistent. k8s uses HTTPS certificates, which are sensitive to time; if clocks disagree, a certificate may be detected as expired and communication may fail.

Watch the time zone here; if it is wrong, change it to the Shanghai time zone.

Run the command: tzselect

After working through the selections, add the following to /etc/profile, then reboot:

TZ='Asia/Shanghai'
export TZ

With the right time zone selected, finally synchronize the time:

yum install ntpdate -y
ntpdate time.windows.com
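
On CentOS 7, timedatectl can also set the time zone in one step, an alternative to the tzselect plus /etc/profile route above:

timedatectl set-timezone Asia/Shanghai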

Install Docker/kubeadm/kubelet [all nodes]


Kubernetes' default CRI (container runtime) is Docker, so install Docker first.

Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

yum -y install docker-ce

systemctl enable docker && systemctl start docker

Configure the image registry mirror: with a mirror, pulling images is a bit faster.

cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

systemctl restart docker
docker info
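
The kubeadm preflight check later warns when Docker uses the cgroupfs driver instead of the recommended systemd one. To see which driver is active (switching is optional here, done by adding "exec-opts": ["native.cgroupdriver=systemd"] to daemon.json and restarting Docker; the warning is non-fatal in this walkthrough):

docker info | grep -i 'cgroup driver'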

Add the Alibaba Cloud YUM repository (it provides the k8s components; run on all nodes):

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm (the tool that helps us build the k8s cluster), kubelet, and kubectl.

kubeadm not only simplifies building the k8s cluster, it also deploys the k8s components themselves in containers. It is enough to know that deployment is containerized, since it is all packaged for you. The one component not deployed in a container is the kubelet, which is managed the traditional way, by systemd on the host. Everything else is containerized, that is, kubeadm brings the components up by starting containers.

yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0    # install on all nodes
systemctl enable kubelet
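
To confirm all three tools landed at the intended version (a quick check, not in the original steps):

kubeadm version -o short
kubelet --version
kubectl version --client --short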


[root@ks8-node1 ~]# kube
kubeadm kubectl kubelet
  • kubelet: managed as a systemd daemon
  • kubeadm: the deployment tool
  • kubectl: the k8s command-line management tool

Do not start the kubelet right after installing it: it has no configuration file yet. kubeadm has not generated one; the file only exists after kubeadm has run, and kubeadm will also bring the kubelet up automatically. So just enable it at boot.

After installation, all nodes have the three tools: kubelet, kubeadm, and kubectl.

kubectl (the k8s cluster management tool) really only needs to be installed on the master; on the nodes it is installed but goes unused.

Deploy the Kubernetes Master (create the master first, then join the Nodes to the k8s cluster)


Run all of the following on your master node.

That is, on the Master at 192.168.179.102, execute:

kubeadm init \
--apiserver-advertise-address=192.168.179.102 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.19.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all

The kubeadm init command creates a master node; the flags that follow are:

  • --apiserver-advertise-address=192.168.179.102: the advertised address that nodes on the internal network use to reach the apiserver
  • --image-repository registry.aliyuncs.com/google_containers: use the Alibaba Cloud registry. By default images are pulled from a registry hosted abroad, which is unreachable from China; switching to a domestic registry solves the connectivity problem.
  • --kubernetes-version v1.19.0: the version, which must match the component versions installed with yum above

These two CIDRs just need to avoid conflicting with the existing physical network:

  • --service-cidr=10.96.0.0/12: the Service network
  • --pod-network-cidr=10.244.0.0/16: the Pod network
  • --ignore-preflight-errors=all: ignore preflight check errors (an equivalent config-file form of these flags is sketched right after this list)
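
Equivalently, the same settings can be written to a file and passed with kubeadm init --config. A sketch, assuming the kubeadm.k8s.io/v1beta2 config API used by kubeadm 1.19 and a hypothetical file name kubeadm-config.yaml (you can verify the field names with kubeadm config print init-defaults):

# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.179.102
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16

# then:
kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=all

The transcript below uses the flag form: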
[root@k8s-master ~]# kubeadm init \
> --apiserver-advertise-address=192.168.179.102 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.19.0 \
> --service-cidr=10.96.0.0/12 \
> --pod-network-cidr=10.244.0.0/16 \
> --ignore-preflight-errors=all
W1115 14:53:46.739904 1251 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.0

What kubeadm init does when initializing the master node


(1) [preflight] Environment checks and image pulls, kubeadm config images pull (checks whether the current machine meets the requirements for installing k8s, such as the minimum CPU and memory configuration)

[preflight] Running pre-flight checks
[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster


[root@k8s-master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.19.0 bc9c328f379c 2 months ago 118MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.19.0 09d665d529d0 2 months ago 111MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.19.0 1b74e93ece2f 2 months ago 119MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.19.0 cbdc8369d8b1 2 months ago 45.7MB
registry.aliyuncs.com/google_containers/etcd 3.4.9-1 d4ca8726196c 4 months ago 253MB
registry.aliyuncs.com/google_containers/coredns 1.7.0 bfe3a36ebd25 5 months ago 45.2MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 9 months ago 683kB

(2) [certs] Generates the k8s and etcd certificates under /etc/kubernetes/pki. Certificate generation exists to enable HTTPS: every component connects to the apiserver over HTTPS, so this step prepares the certificates for that (the apiserver also talks to etcd over HTTPS, so etcd certificates are generated as well). The generated certificates live in /etc/kubernetes/pki/.

[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.179.102]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.179.102 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.179.102 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key


[root@k8s-master ~]# cd /etc/kubernetes/pki/
[root@k8s-master pki]# ls
apiserver.crt apiserver-kubelet-client.crt etcd front-proxy-client.key
apiserver-etcd-client.crt apiserver-kubelet-client.key front-proxy-ca.crt sa.key
apiserver-etcd-client.key ca.crt front-proxy-ca.key sa.pub
apiserver.key ca.key front-proxy-client.crt
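
To inspect any of these certificates (issuer, SANs, validity window), openssl works; a quick optional check, not part of the kubeadm output:

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A2 Validity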

(3) [kubeconfig] Generates the kubeconfig files. These are authentication files: connecting to the apiserver requires specifying its address along with the identity to connect as.

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
Your Kubernetes control-plane has initialized successfully!

The k8s control plane has now been initialized; to use the cluster, run the steps below:

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

This step copies the cluster connection config to the default path so that the command-line tool can manage the cluster, i.e. so kubectl works (without copying this file, kubectl cannot manage the cluster).
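
As an alternative to copying the file (handy when working as root), kubectl can be pointed at the admin kubeconfig via an environment variable; this only lasts for the current shell session:

export KUBECONFIG=/etc/kubernetes/admin.conf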

(4) [kubelet-start] Generates the kubelet configuration file

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet

The kubelet configuration file generated here was created by kubeadm, which also started the kubelet for you; you can check it with systemctl status kubelet:

[root@k8s-master .kube]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Fri 2022-04-15 11:05:58 CST; 2min 27s ago
Docs: https://kubernetes.io/docs/
Main PID: 2797 (kubelet)
Tasks: 12
Memory: 56.8M
CGroup: /system.slice/kubelet.service
└─2797 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf ...

Apr 15 11:08:04 k8s-master kubelet[2797]: W0415 11:08:04.182477 2797 cni.go:239] Unable to update cni config: no networks found i...i/net.d
Apr 15 11:08:05 k8s-master kubelet[2797]: E0415 11:08:05.595183 2797 kubelet.go:2103] Container runtime network not ready: Networ...ialized
Apr 15 11:08:09 k8s-master kubelet[2797]: W0415 11:08:09.183292 2797 cni.go:239] Unable to update cni config: no networks found i...i/net.d
Apr 15 11:08:10 k8s-master kubelet[2797]: E0415 11:08:10.616475 2797 kubelet.go:2103] Container runtime network not ready: Networ...ialized
Apr 15 11:08:14 k8s-master kubelet[2797]: W0415 11:08:14.184185 2797 cni.go:239] Unable to update cni config: no networks found i...i/net.d
Apr 15 11:08:15 k8s-master kubelet[2797]: E0415 11:08:15.635718 2797 kubelet.go:2103] Container runtime network not ready: Networ...ialized
Apr 15 11:08:19 k8s-master kubelet[2797]: W0415 11:08:19.184774 2797 cni.go:239] Unable to update cni config: no networks found i...i/net.d
Apr 15 11:08:20 k8s-master kubelet[2797]: E0415 11:08:20.659029 2797 kubelet.go:2103] Container runtime network not ready: Networ...ialized
Apr 15 11:08:24 k8s-master kubelet[2797]: W0415 11:08:24.184898 2797 cni.go:239] Unable to update cni config: no networks found i...i/net.d
Apr 15 11:08:25 k8s-master kubelet[2797]: E0415 11:08:25.668610 2797 kubelet.go:2103] Container runtime network not ready: Networ...ialized
Hint: Some lines were ellipsized, use -l to show in full.
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2

(5) [control-plane] Deploys the control-plane components by starting containers from images

[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"

This starts kube-apiserver, kube-controller-manager, kube-scheduler, and etcd for you:

[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d56c8448f-ddt97 0/1 Pending 0 43m
coredns-6d56c8448f-lwn8m 0/1 Pending 0 43m
etcd-k8s-master 1/1 Running 0 43m
kube-apiserver-k8s-master 1/1 Running 0 43m
kube-controller-manager-k8s-master 1/1 Running 0 43m
kube-proxy-xth6p 1/1 Running 0 43m
kube-scheduler-k8s-master 1/1 Running 0 43m

(6) [etcd] Deploys the etcd database by starting a container from an image

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

(7) [upload-config] [kubelet] [upload-certs] Uploads the configuration into k8s (the configuration is stored inside the cluster itself; nodes joining later pull it in order to start up)

[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster

(8) [mark-control-plane] Adds the label node-role.kubernetes.io/master='' to the management node, plus the taint [node-role.kubernetes.io/master:NoSchedule]

[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

[root@k8s-master ~]# kubectl describe node k8s-master
Taints: node.kubernetes.io/not-ready:NoExecute
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoSchedule
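
Because of the NoSchedule taint, regular Pods are not scheduled onto the master. On a test or single-machine cluster you can remove it; an optional step, not part of the original walkthrough:

kubectl taint nodes k8s-master node-role.kubernetes.io/master:NoSchedule-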

(9) [bootstrap-token] Automatically issues certificates for kubelets. The token exists so that nodes joining the cluster can be issued certificates (that is, a certificate for every node).

[bootstrap-token] Using token: u7iclt.miuss90cwnjokuje
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

(10) [addons] Deploys the add-ons: CoreDNS and kube-proxy

[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Finally, copy the cluster authentication file to the default path so that kubectl can be used to inspect the cluster (do not forget this step!)

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 7m v1.19.0
[root@k8s-master manifests]# ls
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml

The initialization work above, summarized:

What kubeadm init does:

  1. [preflight] Environment checks and image pulls: kubeadm config images pull
  2. [certs] Generate the k8s and etcd certificates under /etc/kubernetes/pki
  3. [kubeconfig] Generate the kubeconfig files
  4. [kubelet-start] Generate the kubelet configuration file
  5. [control-plane] Deploy the control-plane components as containers: kubectl get pods -n kube-system
  6. [etcd] Deploy the etcd database as a container
  7. [upload-config] [kubelet] [upload-certs] Upload the configuration into k8s
  8. [mark-control-plane] Label the management node with node-role.kubernetes.io/master='' and add the taint [node-role.kubernetes.io/master:NoSchedule]
  9. [bootstrap-token] Automatically issue certificates for kubelets
  10. [addons] Deploy the add-ons: CoreDNS and kube-proxy

Joining Kubernetes Nodes


With the k8s master initialized above, two steps remain:

  • One is deploying the Pod network:
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

That is, a network add-on still has to be prepared.

  • The other is joining the nodes:
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.179.102:6443 --token u7iclt.miuss90cwnjokuje \
--discovery-token-ca-cert-hash sha256:a3f0566e54fee79bff76bcd87c49c656a339dbdf59f874ac90992418f6a94157

Run the following on 192.168.179.103/104 (the Nodes).

To add new nodes to the cluster, execute the kubeadm join command printed in the kubeadm init output earlier:

[root@k8s-node1 ~]# kubeadm join 192.168.111.6:6443 --token 61d4wg.ktsy9ru26oseb2aa     --discovery-token-ca-cert-hash sha256:536ec429a1e2e1bd62eda768623805a7ae2a84aba650c5b1d09011bbf95b640e 
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.14. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-node1 ~]# kubeadm join 192.168.179.102:6443 --token u7iclt.miuss90cwnjokuje     --discovery-token-ca-cert-hash sha256:a3f0566e54fee79bff76bcd87c49c656a339dbdf59f874ac90992418f6a94157 

[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 60m v1.19.0
k8s-node1 NotReady <none> 31s v1.19.0
k8s-node2 NotReady <none> 91s v1.19.0
[root@master ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-gp8cd 15m kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:xxwuv9 Approved,Issued
csr-hdmjf 22m kubernetes.io/kube-apiserver-client-kubelet system:node:master Approved,Issued
csr-tlfnn 16m kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:xxwuv9 Approved,Issued

You can see the nodes are not Ready yet; this is the point where the container network plugin must be deployed.

[root@k8s-master ~]# journalctl -u kubelet > a.txt    # dump the kubelet logs to a file

The logs show the container network is not ready and a CNI plugin needs to be installed. There are many kinds of network plugins, k8s network components developed by various companies, such as calico; calico is the mainstream choice and the one recommended here.

Nov 15 14:56:50 k8s-master kubelet[1616]: E1115 14:56:50.434600    1616 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 14:56:50 k8s-master kubelet[1616]: E1115 14:56:50.485711 1616 kubelet.go:2183] node "k8s-master" not found

A token is valid for 24 hours by default; once it expires it can no longer be used, and a new token must be created:

[root@master ~]# kubeadm token create --print-join-command

W0418 17:38:02.439937 18435 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 192.168.111.6:6443 --token qqf3xv.3ovyztz2jzkjsklq --discovery-token-ca-cert-hash sha256:861037155eac93e6890bbfccad7471e5cb56e710b240c8f2641212c2c0ecb460
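
To see which bootstrap tokens currently exist and when they expire, a quick check:

kubeadm token list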

 

Deploy the calico Network


[root@k8s-master ~]# wget https://docs.projectcalico.org/manifests/calico.yaml

After downloading, you still need to edit the Pod network defined inside it (CALICO_IPV4POOL_CIDR) so that it matches what was passed to kubeadm init earlier:

Uncomment:
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"
and change it to:
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

This subnet is the --pod-network-cidr=10.244.0.0/16 passed to kubeadm init, i.e. the Pod network; the edit can also be scripted, as sketched below.
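If you prefer to script the change rather than editing by hand, a sed sketch (it only rewrites the value; you still need to make sure the two CALICO_IPV4POOL_CIDR lines are uncommented, and you should verify the result before applying):

sed -i 's#192.168.0.0/16#10.244.0.0/16#' calico.yaml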

[root@k8s-master ~]# kubectl apply -f calico.yaml 

You can see the network components being deployed as containers; use the READY column to check whether the network is ready yet:

[root@k8s-master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-5c6f6b67db-q5qb6 0/1 Pending 0 39s
calico-node-6hgrq 0/1 Init:0/3 0 39s
calico-node-jxh4t 0/1 Init:2/3 0 39s
calico-node-xjklb 0/1 Init:1/3 0 39s

These images take a while to download; once they are pulled and everything has started, check the node status again. Below is how to see which images calico uses:

[root@k8s-master ~]# cat calico.yaml  | grep image
image: calico/cni:v3.16.5
image: calico/cni:v3.16.5
image: calico/pod2daemon-flexvol:v3.16.5
image: calico/node:v3.16.5
image: calico/kube-controllers:v3.16.5
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-5c6f6b67db-q5qb6 1/1 Running 0 3m52s
calico-node-6hgrq 1/1 Running 0 3m52s
calico-node-jxh4t 1/1 Running 0 3m52s
calico-node-xjklb 1/1 Running 0 3m52s
coredns-6d56c8448f-ddt97 1/1 Running 0 82m
coredns-6d56c8448f-lwn8m 1/1 Running 0 82m
etcd-k8s-master 1/1 Running 0 82m
kube-apiserver-k8s-master 1/1 Running 0 82m
kube-controller-manager-k8s-master 1/1 Running 0 82m
kube-proxy-7wgls 1/1 Running 0 22m
kube-proxy-vkt7g 1/1 Running 0 23m
kube-proxy-xth6p 1/1 Running 0 82m
kube-scheduler-k8s-master 1/1 Running 0 82m


[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 3d9h v1.19.0
k8s-node1 Ready <none> 3d8h v1.19.0
k8s-node2 Ready <none> 3d8h v1.19.0

 

 

Testing the Kubernetes Cluster


  • Verify that Pods work
  • Verify Pod network connectivity
  • Verify DNS resolution

Create a pod in the Kubernetes cluster and verify that it runs normally:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc

Access URL: http://NodeIP:Port (a sketch of these verification steps follows below)
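
A minimal sketch of the three checks listed above; <NodePort> stands for whatever kubectl get svc reports, and busybox:1.28 is pinned because newer busybox images have known nslookup quirks:

# Pod running + NodePort reachable from outside
curl http://192.168.179.103:<NodePort>
# DNS resolution from inside the cluster (should resolve the kubernetes Service)
kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup kubernetes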

 

 

 

Common Problems During Deployment


If a kubeadm run fails, some error messages appear; even after you fix the cause, running kubeadm init again will not succeed, because the first run left the environment in a broken state. You need to clean the current environment back to a pristine state before re-initializing.

1. Wipe the current initialization state (a few extra cleanup steps are sketched after the command):

kubeadm reset
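
kubeadm reset does not clean up everything by itself; depending on your situation, a few hedged follow-up steps (adapt them to your environment, and be careful flushing iptables on shared hosts):

rm -rf $HOME/.kube                   # stale kubectl config from the failed run
rm -rf /etc/cni/net.d                # leftover CNI configuration
iptables -F && iptables -t nat -F    # flush rules that kube-proxy/CNI created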

2. If the calico pods are not becoming Ready, pull the images manually on each node to check whether the pull works (a loop that pre-pulls all of them is sketched below):

grep image calico.yaml    # list the images, then pull each on every node to confirm it completes

docker pull calico/xxx
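
A sketch that pre-pulls every image the manifest references in one loop (assumes GNU grep; run it on each node):

for img in $(grep -oP 'image:\s*\K\S+' calico.yaml | sort -u); do docker pull "$img"; done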