Fixing kubeadm init failures: installing a Kubernetes cluster with kubeadm

 

 

There are two ways to install a cluster:
  1. Install every component on every node by hand, which is extremely complex and difficult.
  2. Use a tool: kubeadm

 

kubeadm is the official management tool purpose-built for deploying clusters.

  1. Under kubeadm, every node must have docker installed, master node included.
  2. Every node, master included, must also install kubelet.
  3. The API Server, Scheduler, Controller-Manager, etcd, and so on run as containers on top of kubelet. In other words, even K8S's own components run directly in Pods (see the manifest listing right after this list). The other nodes likewise run kube-proxy as a container in a Pod.
  4. The flannel network plugin needs to run in a Pod as well.
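
You can see this "components in Pods" design on a working kubeadm master: the control-plane pieces are static Pods whose manifests kubeadm writes under /etc/kubernetes/manifests, and kubelet runs whatever appears in that directory. A sketch of the listing (file names per kubeadm's defaults):

[root@master ~]# ls /etc/kubernetes/manifests
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml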

 

Download locations and YUM repo configuration for kubernetes and docker:

Aliyun mirror root: https://mirrors.aliyun.com

YUM repo path for K8S: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

Docker repo file: wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

[root@master yum.repos.d]# cat kubernetes.repo 
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1

Each worker node needs these YUM repo files as well, as shown below.
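
One way to distribute them, assuming node1 and node2 resolve from the master and both repo files already sit in /etc/yum.repos.d there:

[root@master yum.repos.d]# scp kubernetes.repo docker-ce.repo node1:/etc/yum.repos.d/
[root@master yum.repos.d]# scp kubernetes.repo docker-ce.repo node2:/etc/yum.repos.d/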

Once the YUM repos are configured, you can install the components.

 

Install ipvsadm on every node:
yum -y install ipvsadm

Step 1: install the components.

[root@master yum.repos.d]# yum install docker-ce kubelet kubeadm kubectl

 

Step 2: initialize the master node.
[root@master yum.repos.d]# vim /usr/lib/systemd/system/docker.service
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
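
(The Environment= line belongs in the unit's [Service] section; the proxy URL above is just this lab's example.) After editing a systemd unit file, reload systemd and restart docker so the change takes effect:

[root@master yum.repos.d]# systemctl daemon-reload
[root@master yum.repos.d]# systemctl restart docker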

You can inspect docker's effective configuration with docker info.

Both of the following must print 1, not 0:

cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
cat /proc/sys/net/bridge/bridge-nf-call-iptables
1

If either is 0, fix it as follows:

vim /etc/sysctl.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
sysctl -p
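
Note that these bridge sysctls only exist while the br_netfilter kernel module is loaded; if sysctl -p complains about unknown keys, load the module first:

modprobe br_netfilter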

 

Check that the packages installed correctly:

[root@master yum.repos.d]# rpm -ql kubelet
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/usr/bin/kubelet
/usr/lib/systemd/system/kubelet.service

Swap must not be enabled on any node. In the early days K8S forbade swap outright: with swap on, it would neither install nor start. You can tell kubelet to ignore swap with the parameter below.
[root@master yum.repos.d]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=

Change it to the following:
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
KUBE_PROXY_MODE=ipvs
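
One caveat: on this kubeadm version, kube-proxy reads its mode from its own ConfigMap rather than from kubelet's sysconfig file, so the KUBE_PROXY_MODE line above may not take effect by itself. If kube-proxy comes up in iptables mode after init, you can switch it in the ConfigMap kubeadm creates (named kube-proxy in kube-system) and let the DaemonSet recreate the pods:

[root@master ~]# kubectl edit configmap kube-proxy -n kube-system     # set mode: "ipvs" under config.conf
[root@master ~]# kubectl delete pod -n kube-system -l k8s-app=kube-proxy     # pods come back in ipvs mode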

Prerequisites for running kube-proxy in ipvs mode:
Since IPVS has been merged into the kernel mainline, enabling ipvs mode for kube-proxy only requires loading the following kernel modules:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following script on all Kubernetes nodes (node1 and node2):

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script writes /etc/sysconfig/modules/ipvs.modules so the required modules are loaded again automatically after a node reboot. Run lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the required kernel modules loaded correctly.
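
Once the cluster is up, you can also confirm ipvs is actually in use by listing the virtual servers it programs (output varies with your Services):

[root@master ~]# ipvsadm -Ln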

Initialize the cluster:

[root@master yum.repos.d]# kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU

I'm practicing on a virtual machine with only one CPU core here, hence ignoring the NumCPU preflight check; on a production server this is not a concern.

kubeadm init can fail at this point because the k8s.gcr.io images cannot be pulled from inside China. Pull all the images with docker first and then initialize again, or point kubeadm at a domestic mirror registry.
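
As a sketch of the mirror route: reasonably recent kubeadm versions accept --image-repository, and the Aliyun mirror registry.aliyuncs.com/google_containers generally carries these images (check that your version's tags exist there):

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.14.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12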

Image pull script:

[root@master ~]# cat dockerpull.sh 
#!/bin/bash
K8S_VERSION=v1.14.1
ETCD_VERSION=3.3.10
#DASHBOARD_VERSION=v1.8.3
#FLANNEL_VERSION=v0.10.0-amd64
DNS_VERSION=1.3.1
PAUSE_VERSION=3.1
# Core components
docker pull mirrorgooglecontainers/kube-apiserver-amd64:$K8S_VERSION
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:$K8S_VERSION
docker pull mirrorgooglecontainers/kube-scheduler-amd64:$K8S_VERSION
docker pull mirrorgooglecontainers/kube-proxy-amd64:$K8S_VERSION
docker pull mirrorgooglecontainers/etcd-amd64:$ETCD_VERSION
docker pull mirrorgooglecontainers/pause:$PAUSE_VERSION
docker pull coredns/coredns:$DNS_VERSION
# Network plugin
#docker pull quay.io/coreos/flannel:$FLANNEL_VERSION
# Re-tag to the names kubeadm expects
docker tag mirrorgooglecontainers/kube-apiserver-amd64:$K8S_VERSION k8s.gcr.io/kube-apiserver:$K8S_VERSION
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:$K8S_VERSION k8s.gcr.io/kube-controller-manager:$K8S_VERSION
docker tag mirrorgooglecontainers/kube-scheduler-amd64:$K8S_VERSION k8s.gcr.io/kube-scheduler:$K8S_VERSION
docker tag mirrorgooglecontainers/kube-proxy-amd64:$K8S_VERSION k8s.gcr.io/kube-proxy:$K8S_VERSION
docker tag mirrorgooglecontainers/etcd-amd64:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION
docker tag mirrorgooglecontainers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag coredns/coredns:$DNS_VERSION k8s.gcr.io/coredns:$DNS_VERSION
# Remove the redundant images
docker rmi mirrorgooglecontainers/kube-apiserver-amd64:$K8S_VERSION
docker rmi mirrorgooglecontainers/kube-controller-manager-amd64:$K8S_VERSION
docker rmi mirrorgooglecontainers/kube-scheduler-amd64:$K8S_VERSION
docker rmi mirrorgooglecontainers/kube-proxy-amd64:$K8S_VERSION
docker rmi mirrorgooglecontainers/etcd-amd64:$ETCD_VERSION
docker rmi mirrorgooglecontainers/pause:$PAUSE_VERSION
docker rmi coredns/coredns:$DNS_VERSION
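
To double-check exactly which images and tags your kubeadm release expects before pulling, you can ask kubeadm itself:

[root@master ~]# kubeadm config images list --kubernetes-version=v1.14.1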

Now initialize again:

[root@master yum.repos.d]# kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.163.100:6443 --token isl76u.fwmreocpbovdv5xh \
    --discovery-token-ca-cert-hash sha256:252ef008db8fee83cc0f652424bb9bbf0d66934595de0cd170ebb68a1f2d8d84

kubeadm then asks you to run those commands as a regular user, which sets up the kubeconfig and its ownership.
On the other nodes, run the kubeadm join command shown above as root; the token and CA certificate hash act as credentials that keep unauthorized machines from joining the cluster.

The DNS add-on is now in its third generation: the first was SkyDNS, the second was kube-dns, and starting with Kubernetes 1.11 it is CoreDNS.

The third generation supports dynamic configuration reloading and other features that the earlier versions lacked.

 

Since this is just practice, I'll work as root directly.

[root@master ~]# ss -tnl
State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
LISTEN     0      128    127.0.0.1:10248                     *:*                  
LISTEN     0      128    127.0.0.1:10249                     *:*                  
LISTEN     0      128    192.168.163.100:2379                      *:*                  
LISTEN     0      128    127.0.0.1:2379                      *:*                  
LISTEN     0      128    192.168.163.100:2380                      *:*                  
LISTEN     0      128    127.0.0.1:10257                     *:*                  
LISTEN     0      128    127.0.0.1:10259                     *:*                  
LISTEN     0      128           *:22                        *:*                  
LISTEN     0      128    127.0.0.1:35544                     *:*                  
LISTEN     0      100    127.0.0.1:25                        *:*                  
LISTEN     0      128          :::10250                    :::*                  
LISTEN     0      128          :::10251                    :::*                  
LISTEN     0      128          :::6443                     :::*                  
LISTEN     0      128          :::10252                    :::*                  
LISTEN     0      128          :::10256                    :::*                  
LISTEN     0      128          :::22                       :::*                  
LISTEN     0      100         ::1:25                       :::*

Port 6443 is now listening, so the other nodes can join.

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
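
As root you can alternatively skip the copy and point KUBECONFIG at the admin config for the current shell:

[root@master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf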

From here on, the kubectl command works.

[root@master ~]# kubectl get cs    # cs is short for componentstatus; checks component health
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

[root@master ~]# kubectl get nodes    # list the cluster's nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   43h   v1.14.1

NotReady means the node is not ready yet: the network add-on is still missing.
flannel lives at: https://github.com/coreos/flannel
For Kubernetes v1.7+ you can apply the manifest directly:
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

But applying the manifest alone is not enough to call it done.

[root@master ~]# kubectl get pods
No resources found.

It only counts as successful once you can see that all the images were pulled and the pods deployed successfully.

docker image ls
shows that the flannel image has indeed been pulled.
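
You can also watch the rollout live until the flannel pod reaches Running (Ctrl-C to stop watching):

[root@master ~]# kubectl get pods -n kube-system -w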

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   43h   v1.14.1

[root@master ~]# kubectl get pods
No resources found.

[root@master ~]# kubectl get pods -n kube-system    # query a specific namespace
NAME                             READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-8lczd          1/1     Running   0          43h
coredns-fb8b8dccf-rljmp          1/1     Running   0          43h
etcd-master                      1/1     Running   2          43h
kube-apiserver-master            1/1     Running   3          43h
kube-controller-manager-master   1/1     Running   3          43h
kube-flannel-ds-amd64-mj4s6      1/1     Running   0          3m17s
kube-proxy-lwntd                 1/1     Running   2          43h
kube-scheduler-master            1/1     Running   3          43h

[root@master ~]# kubectl get ns    # list the namespaces
NAME              STATUS   AGE
default           Active   43h
kube-node-lease   Active   43h
kube-public       Active   43h
kube-system       Active   43h

System-level Pods all live in the kube-system namespace, which is why the bare kubectl get pods (which looks at the default namespace) found nothing.

 

Now it is time to join the worker nodes to the cluster.

[root@master ~]# scp /usr/lib/systemd/system/docker.service node1:/usr/lib/systemd/system/docker.service

[root@master ~]# scp /etc/sysconfig/kubelet node1:/etc/sysconfig/

node2 needs the same files copied as well.

Swap must be turned off on node1 and node2, and kubelet (and docker) set to start automatically at boot:

[root@node1 ~]# systemctl enable docker kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node1 ~]# swapoff -a
[root@node1 ~]# sed -i 's/.*swap.*/#&/' /etc/fstab
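
You can confirm swap is really off with free; the Swap line should read all zeros:

[root@node1 ~]# free -m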

Run the join on both node1 and node2.

[root@node1 ~]# kubeadm join 192.168.163.100:6443 --token braag1.mmx3cektd73oo1yd --discovery-token-ca-cert-hash sha256:252ef008db8fee83cc0f652424bb9bbf0d66934595de0cd170ebb68a1f2d8d84 --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU
[root@node2 ~]# kubeadm join 192.168.163.100:6443 --token braag1.mmx3cektd73oo1yd --discovery-token-ca-cert-hash sha256:252ef008db8fee83cc0f652424bb9bbf0d66934595de0cd170ebb68a1f2d8d84 --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU

If you have forgotten the token and hash:

[root@master ~]# kubeadm token create
braag1.mmx3cektd73oo1yd
[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
252ef008db8fee83cc0f652424bb9bbf0d66934595de0cd170ebb68a1f2d8d84

Remember to put sha256: in front of the hash when passing it to kubeadm join.
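
Recent kubeadm releases also have a shortcut that prints a complete, ready-to-paste join command (token plus properly prefixed hash):

[root@master ~]# kubeadm token create --print-join-command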

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE    VERSION
master   Ready      master   45h    v1.14.1
node1    Ready      <none>   2m8s   v1.14.1
node2    NotReady   <none>   64s    v1.14.1

With that, the worker nodes have formally joined the cluster. (node2 still reports NotReady because it is busy pulling its images; it turns Ready once kube-proxy and flannel are running there.)

Image pull script for the worker nodes:

[root@node2 ~]# cat dockerpull.sh 
#!/bin/bash
K8S_VERSION=v1.14.1
FLANNEL_VERSION=v0.11.0-amd64
PAUSE_VERSION=3.1
# Core components
docker pull mirrorgooglecontainers/kube-proxy-amd64:$K8S_VERSION
docker tag mirrorgooglecontainers/kube-proxy-amd64:$K8S_VERSION k8s.gcr.io/kube-proxy:$K8S_VERSION
docker pull mirrorgooglecontainers/pause:$PAUSE_VERSION
docker tag mirrorgooglecontainers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
# Network plugin
docker pull quay.io/coreos/flannel:$FLANNEL_VERSION
# Remove the redundant images
#docker rmi mirrorgooglecontainers/kube-proxy-amd64:$K8S_VERSION
#docker rmi mirrorgooglecontainers/pause:$PAUSE_VERSION

[root@master ~]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-8lczd          1/1     Running   0          47h
coredns-fb8b8dccf-rljmp          1/1     Running   0          47h
etcd-master                      1/1     Running   2          47h
kube-apiserver-master            1/1     Running   3          46h
kube-controller-manager-master   1/1     Running   6          47h
kube-flannel-ds-amd64-26kk7      1/1     Running   0          85m
kube-flannel-ds-amd64-428x9      1/1     Running   0          86m
kube-flannel-ds-amd64-mj4s6      1/1     Running   0          3h26m
kube-proxy-5s2gz                 1/1     Running   0          86m
kube-proxy-lwntd                 1/1     Running   2          47h
kube-proxy-tjcpd                 1/1     Running   0          85m
kube-scheduler-master            1/1     Running   5          47h

[root@master ~]# kubectl get pods -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP                NODE     NOMINATED NODE   READINESS GATES
coredns-fb8b8dccf-8lczd          1/1     Running   0          47h     10.244.0.2        master   <none>           <none>
coredns-fb8b8dccf-rljmp          1/1     Running   0          47h     10.244.0.3        master   <none>           <none>
etcd-master                      1/1     Running   2          47h     192.168.163.100   master   <none>           <none>
kube-apiserver-master            1/1     Running   3          46h     192.168.163.100   master   <none>           <none>
kube-controller-manager-master   1/1     Running   6          47h     192.168.163.100   master   <none>           <none>
kube-flannel-ds-amd64-26kk7      1/1     Running   0          85m     192.168.163.102   node2    <none>           <none>
kube-flannel-ds-amd64-428x9      1/1     Running   0          86m     192.168.163.101   node1    <none>           <none>
kube-flannel-ds-amd64-mj4s6      1/1     Running   0          3h26m   192.168.163.100   master   <none>           <none>
kube-proxy-5s2gz                 1/1     Running   0          86m     192.168.163.101   node1    <none>           <none>
kube-proxy-lwntd                 1/1     Running   2          47h     192.168.163.100   master   <none>           <none>
kube-proxy-tjcpd                 1/1     Running   0          85m     192.168.163.102   node2    <none>           <none>
kube-scheduler-master            1/1     Running   5          47h     192.168.163.100   master   <none>           <none>