Table of Contents
- I. Kubernetes High-Availability Installation (v1.18.4)
- (1) Download links and installation methods
- (2) Lab environment
- (3) Lab steps
- 1. Basic configuration on all four servers
- 2. Configure passwordless SSH from master01 to the other nodes
- 3. Set up yum repositories on all nodes
- 4. Install ipvsadm on all nodes
- 5. Enable the kernel parameters Kubernetes requires on all nodes
- 6. Install the base components
- 7. Install the high-availability components
- 8. Load the images
- 9. Enable kubelet and create the cluster
- 10. Install the Flannel component
- 11. Join the remaining nodes to the cluster
- 12. Follow-up steps
- II. Deploying Metrics and the Dashboard
- (1) Install Metrics first
- (2) Install the Dashboard
- (3) Create the ServiceAccount and ClusterRoleBinding YAML
- III. Testing
- (1) Access the Dashboard UI from a browser
- (2) Follow-up steps
- IV. Resetting the Kubernetes cluster
I. Kubernetes High-Availability Installation (v1.18.4)
(1) Download links and installation methods
- Official docs: https://kubernetes.io/zh/docs/setup/
- Kubernetes GitHub: https://github.com/kubernetes/kubernetes
- Installation methods:
  Kubeadm: officially recommended, good for learning and experimentation; every component runs as a container
  Binary: used in production; every component runs directly on the host
  Ansible
- The high-availability installation steps for the latest version follow below.
(2) Lab environment
- Production-style layout (Calico networking)

| OS | Hostname | IP | Specs | Services | Role |
| --- | --- | --- | --- | --- | --- |
| CentOS 7.4 | Master01 | 192.168.100.202 | 4 GB, 2 cores | docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico, HAProxy, KeepAlived | master |
| CentOS 7.4 | Master02 | 192.168.100.203 | 4 GB, 2 cores | docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico, HAProxy, KeepAlived | master |
| CentOS 7.4 | Master03 | 192.168.100.204 | 4 GB, 2 cores | docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico, HAProxy, KeepAlived | master |
| CentOS 7.4 | k8s-master-lb | 192.168.100.205 | - | - | keepalived virtual IP |
| CentOS 7.4 | k8s-node1 | 192.168.100.206 | 2 GB, 1 core | docker, kubelet, proxy, calico | worker |
| CentOS 7.4 | k8s-node2 | 192.168.100.207 | 2 GB, 1 core | docker, kubelet, proxy, calico | worker |

The virtual IP here is 192.168.100.205. No dedicated Keepalived machine is needed: simply set 192.168.100.205 as the virtual IP in the Keepalived configuration on the three masters.
- Actual lab layout used here (limited resources, so Flannel networking)

| OS | Hostname | IP | Specs | Services | Role |
| --- | --- | --- | --- | --- | --- |
| CentOS 7.4 | k8s-master01 | 192.168.100.202 | 4 GB, 2 cores | docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, flannel, HAProxy, KeepAlived | master |
| CentOS 7.4 | k8s-master02 | 192.168.100.203 | 4 GB, 2 cores | docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, flannel, HAProxy, KeepAlived | master |
| CentOS 7.4 | k8s-master-lb | 192.168.100.204 | - | - | keepalived virtual IP |
| CentOS 7.4 | k8s-node01 | 192.168.100.205 | 2 GB, 1 core | docker, kubelet, proxy, flannel | worker |
| CentOS 7.4 | k8s-node02 | 192.168.100.206 | 2 GB, 1 core | docker, kubelet, proxy, flannel | worker |

The virtual IP here is 192.168.100.204. No dedicated Keepalived machine is needed: simply set 192.168.100.204 as the virtual IP in the Keepalived configuration on the two masters.
All of the machines above need Internet access!
- Kubernetes high availability mainly means high availability of the control plane: multiple sets of master components plus etcd members. Worker nodes reach the masters through a load balancer, here an HAProxy reverse proxy that balances requests across the apiservers, while Keepalived gives the masters a shared virtual IP, so the entry point survives the loss of a master.
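Once everything is built, you can sanity-check this HA entry point from any node. A minimal sketch, assuming the values from the Flannel lab layout above (VIP 192.168.100.204, HAProxy frontend port 16443, NIC ens32):
curl -k https://192.168.100.204:16443/healthz #should print ok once the apiservers are up behind HAProxy
ip addr show ens32 | grep 192.168.100.204 #run on each master to see which one currently holds the VIP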
- Characteristics of the stacked topology used here, where etcd is co-located with the master components:
- Needs fewer machines
- Simple to deploy
- Easy to scale horizontally
- Higher risk: because etcd lives on the same host as the master components (etcd stores the state reported by the nodes), losing one master host removes both a control-plane replica and an etcd member at once, so the cluster's redundancy takes a comparatively large hit
(3) Lab steps
- This lab uses Keepalived + HAProxy for the high availability and installs Kubernetes with kubeadm
1. Basic configuration on all four servers
The four servers are configured almost identically.
#Configuration on 192.168.100.202
[root@Centos7 ~]# hostnamectl set-hostname k8s-master01
[root@Centos7 ~]# su
[root@k8s-master01 ~]# systemctl disable --now firewalld
[root@k8s-master01 ~]# setenforce 0
setenforce: SELinux is disabled
[root@k8s-master01 ~]# mount /dev/cdrom /mnt/
[root@k8s-master01 ~]# cat <<aa>> /etc/hosts
192.168.100.202 k8s-master01
192.168.100.203 k8s-master02
192.168.100.205 k8s-node01
192.168.100.206 k8s-node02
aa
[root@k8s-master01 ~]# vim /etc/sysconfig/selinux #edit the SELinux config file
SELINUX=disabled
#save and exit
[root@k8s-master01 ~]# swapoff -a && sysctl -w vm.swappiness=0 #disable swap; leaving swap enabled hurts Kubernetes performance
vm.swappiness = 0
[root@k8s-master01 ~]# vim /etc/fstab #comment out the swap entry so it is not mounted at boot
#
# /etc/fstab
# Created by anaconda on Tue Jan 12 18:24:41 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=f9ce4501-7cf6-4d2b-903a-4fc1044410ef /boot xfs defaults 0 0
/dev/mapper/centos-home /home xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0 #commented out
/dev/cdrom /mnt iso9660 defaults 0 0
#save and exit
[root@k8s-master01 ~]# yum install ntpdate -y
[root@k8s-master01 ~]# ntpdate time2.aliyun.com #sync the clock
[root@k8s-master01 ~]# cat <<aaa>> /etc/rc.local
ntpdate time2.aliyun.com
aaa
[root@k8s-master01 ~]# crontab -e #optionally add a cron job to keep the clock synced
*/5 * * * * ntpdate time2.aliyun.com
[root@k8s-master01 ~]# ulimit -SHn 65535 #raise the maximum open-file limit
#Configuration on 192.168.100.203
[root@Centos7 ~]# hostnamectl set-hostname k8s-master02
[root@Centos7 ~]# su
[root@k8s-master02 ~]# systemctl disable --now firewalld
[root@k8s-master02 ~]# setenforce 0
setenforce: SELinux is disabled
[root@k8s-master02 ~]# mount /dev/cdrom /mnt/
[root@k8s-master02 ~]# cat <<aa>> /etc/hosts
192.168.100.202 k8s-master01
192.168.100.203 k8s-master02
192.168.100.205 k8s-node01
192.168.100.206 k8s-node02
aa
[root@k8s-master02 ~]# vim /etc/sysconfig/selinux #edit the SELinux config file
SELINUX=disabled
#save and exit
[root@k8s-master02 ~]# swapoff -a && sysctl -w vm.swappiness=0
vm.swappiness = 0
[root@k8s-master02 ~]# vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Jan 12 18:24:41 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=f9ce4501-7cf6-4d2b-903a-4fc1044410ef /boot xfs defaults 0 0
/dev/mapper/centos-home /home xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0
/dev/cdrom /mnt iso9660 defaults 0 0
#save and exit
[root@k8s-master02 ~]# yum install ntpdate -y
[root@k8s-master02 ~]# ntpdate time2.aliyun.com
[root@k8s-master02 ~]# cat <<aaa>> /etc/rc.local
> ntpdate time2.aliyun.com
> aaa
[root@k8s-master02 ~]# ulimit -SHn 65535
#Configuration on 192.168.100.205
[root@Centos7 ~]# hostnamectl set-hostname k8s-node01
[root@Centos7 ~]# su
[root@k8s-node01 ~]# systemctl disable --now firewalld
[root@k8s-node01 ~]# setenforce 0
setenforce: SELinux is disabled
[root@k8s-node01 ~]# mount /dev/cdrom /mnt/
[root@k8s-node01 ~]# cat <<aa>> /etc/hosts
192.168.100.202 k8s-master01
192.168.100.203 k8s-master02
192.168.100.205 k8s-node01
192.168.100.206 k8s-node02
aa
[root@k8s-node01 ~]# vim /etc/sysconfig/selinux #edit the SELinux config file
SELINUX=disabled
#save and exit
[root@k8s-node01 ~]# swapoff -a && sysctl -w vm.swappiness=0
vm.swappiness = 0
[root@k8s-node01 ~]# vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Jan 12 18:24:41 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=f9ce4501-7cf6-4d2b-903a-4fc1044410ef /boot xfs defaults 0 0
/dev/mapper/centos-home /home xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0
/dev/cdrom /mnt iso9660 defaults 0 0
#save and exit
[root@k8s-node01 ~]# yum install ntpdate -y
[root@k8s-node01 ~]# ntpdate time2.aliyun.com
[root@k8s-node01 ~]# cat <<aaa>> /etc/rc.local
> ntpdate time2.aliyun.com
> aaa
[root@k8s-node01 ~]# ulimit -SHn 65535
#Configuration on 192.168.100.206
[root@Centos7 ~]# hostnamectl set-hostname k8s-node02
[root@Centos7 ~]# su
[root@k8s-node02 ~]# systemctl disable --now firewalld
[root@k8s-node02 ~]# setenforce 0
setenforce: SELinux is disabled
[root@k8s-node02 ~]# mount /dev/cdrom /mnt/
[root@k8s-node02 ~]# cat <<aa>> /etc/hosts
192.168.100.202 k8s-master01
192.168.100.203 k8s-master02
192.168.100.205 k8s-node01
192.168.100.206 k8s-node02
aa
[root@k8s-node02 ~]# vim /etc/sysconfig/selinux #edit the SELinux config file
SELINUX=disabled
#save and exit
[root@k8s-node02 ~]# swapoff -a && sysctl -w vm.swappiness=0
vm.swappiness = 0
[root@k8s-node02 ~]# vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Jan 12 18:24:41 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=f9ce4501-7cf6-4d2b-903a-4fc1044410ef /boot xfs defaults 0 0
/dev/mapper/centos-home /home xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0
/dev/cdrom /mnt iso9660 defaults 0 0
#save and exit
[root@k8s-node02 ~]# yum install ntpdate -y
[root@k8s-node02 ~]# ntpdate time2.aliyun.com
[root@k8s-node02 ~]# cat <<aaa>> /etc/rc.local
> ntpdate time2.aliyun.com
> aaa
[root@k8s-node02 ~]# ulimit -SHn 65535
2. Configure passwordless SSH from master01 to the other nodes
#All of the following runs on the master01 node
[root@k8s-master01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:I+Qfhb1MAGzWOknpb9k0x9mnAJOTaWlZmapuDaC7m5k root@k8s-master01
The key's randomart image is:
+---[RSA 2048]----+
| ..+. B.o |
| * .+% o |
| =.o.o+* o |
| o* ++.= . .|
| .o+S=oo . o |
| . o*o. . |
| . o.o |
| .+ o . |
| Eo . |
+----[SHA256]-----+
[root@k8s-master01 ~]# for i in k8s-master01 k8s-master02 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
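To confirm the keys landed, a minimal check is to loop over the same hosts and run a command non-interactively:
for i in k8s-master01 k8s-master02 k8s-node01 k8s-node02;do ssh $i hostname;done #should print each hostname without asking for a password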
3. Set up yum repositories on all nodes
#Identical on all nodes
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
yum makecache #build the yum cache; answer y when prompted
4. Install ipvsadm on all nodes
#ipvs is used because it performs better than iptables; the following is identical on all nodes
yum install ipvsadm ipset sysstat conntrack libseccomp -y
#Configure the ipvs modules on all nodes. In kernel 4.19+, nf_conntrack_ipv4 was renamed to nf_conntrack; the kernel installed in this lab is 4.18, so nf_conntrack_ipv4 is the one to use. The following is identical on all nodes
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
vi /etc/modules-load.d/ipvs.conf #make the modules load automatically at boot
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
#save and exit
systemctl enable --now systemd-modules-load.service #run this after writing the file
lsmod | grep -e ip_vs -e nf_conntrack_ipv4 #check that the modules loaded; it should print something like the following
nf_conntrack_ipv4 15053 0
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 141092 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 133387 2 ip_vs,nf_conntrack_ipv4
libcrc32c 12644 3 xfs,ip_vs,nf_conntrack
5. Enable the kernel parameters Kubernetes requires on all nodes
#The following is identical on all nodes
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system #after entering the settings above, run these two commands to apply them
sysctl -p
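To spot-check that the settings took effect, you can query a couple of the keys directly (a minimal check):
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables #both should report 1; if the bridge key is missing, run modprobe br_netfilter first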
6. Install the base components
- The following packages are needed on every node:
**kubeadm:** the command that bootstraps the cluster;
**kubelet:** runs on every node in the cluster and starts pods and containers;
**kubectl:** the command-line tool for talking to the cluster.
- kubeadm does not install or manage kubelet or kubectl for you, so you must make sure their versions match the Kubernetes control plane that kubeadm will install. Mismatched versions can lead to unexpected errors and problems.
- Kubernetes manages containers through Docker, so it is worth knowing how to check which Docker versions a given Kubernetes release supports.
- All Kubernetes release information is on GitHub: https://github.com/kubernetes/kubernetes/releases
- The "base components" here are the pieces the cluster itself uses, such as docker-ce and the Kubernetes packages.
#List the available docker-ce versions with:
yum list docker-ce.x86_64 --showduplicates | sort -r
#List the available kubeadm versions with:
yum list kubeadm.x86_64 --showduplicates | sort -r
#The following is identical on all nodes
yum -y install docker-ce-17.09.1.ce-1.el7.centos #install a pinned Docker version; for the latest, use yum install docker-ce -y instead
yum install -y kubeadm-1.18.4-0.x86_64 kubectl-1.18.4-0.x86_64 kubelet-1.18.4-0.x86_64 #install pinned kubeadm/kubectl/kubelet; likewise, yum install kubeadm -y installs the latest
systemctl enable --now docker #enable Docker at boot on all nodes
#The default pause image comes from gcr.io, which may be unreachable from mainland China, so point the kubelet at the Aliyun mirror of the pause image:
DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f3)
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF
#Reload systemd so the kubelet drop-in configuration takes effect (the kubelet itself is enabled and started in step 9)
systemctl daemon-reload
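A quick check worth doing here: the kubelet and Docker must agree on the cgroup driver, or the kubelet will fail to start. A minimal sketch:
docker info 2>/dev/null | grep -i 'cgroup driver' #the driver Docker reports
cat /etc/sysconfig/kubelet #--cgroup-driver here should show the same value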
7. Install the high-availability components
#The following is done only on the master nodes
yum install keepalived haproxy -y #install keepalived and haproxy via yum
#Configure HAProxy on all master nodes; the configuration is identical
vim /etc/haproxy/haproxy.cfg #edit the haproxy config: delete everything and write the following from scratch
global
maxconn 2000
ulimit-n 16384
log 127.0.0.1 local0 err
stats timeout 30s
defaults
log global
mode http
option httplog
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-request 15s
timeout http-keep-alive 15s
frontend monitor-in
bind *:33305
mode http
option httplog
monitor-uri /monitor
listen stats
bind *:8006
mode http
stats enable
stats hide-version
stats uri /stats
stats refresh 30s
stats realm Haproxy\ Statistics
stats auth admin:admin
frontend k8s-master
bind 0.0.0.0:16443
bind 127.0.0.1:16443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-master
backend k8s-master
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server k8s-master01 192.168.100.202:6443 check
server k8s-master02 192.168.100.203:6443 check
#save and exit; note that the server lines at the end must match your master nodes' IP addresses
#Now edit the keepalived configuration; it differs between the two masters
————————————————————on master01
[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf #again, delete the existing contents first
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh" #the health-check script keepalived runs
interval 2
weight -5
fall 3
rise 2
}
vrrp_instance VI_1 {
state MASTER
interface ens32 #must match your NIC name
mcast_src_ip 192.168.100.202 #this host's IP
virtual_router_id 51
priority 100 #master01 gets the higher priority
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.100.204 #the virtual IP
}
# track_script { #left commented out so no health checking happens yet; uncomment once the cluster is up
# chk_apiserver
# }
}
#save and exit
————————————————————on master02
[root@k8s-master02 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 2
weight -5
fall 3
rise 2
}
vrrp_instance VI_1 {
state BACKUP
interface ens32
mcast_src_ip 192.168.100.203
virtual_router_id 51
priority 99 #lower than master01's priority, so once health checking is enabled the virtual IP sits on master01
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.100.204
}
# track_script { #likewise, uncomment once the cluster is up
# chk_apiserver
# }
}
#save and exit
#Write the Keepalived health-check script on both master nodes
vim /etc/keepalived/check_apiserver.sh #write this file on both masters
#!/bin/bash
err=0
for k in $(seq 1 5)
do
check_code=$(pgrep kube-apiserver)
if [[ $check_code == "" ]]; then
err=$(expr $err + 1)
sleep 5
continue
else
err=0
break
fi
done
if [[ $err != "0" ]]; then
echo "systemctl stop keepalived"
/usr/bin/systemctl stop keepalived
exit 1
else
exit 0
fi
#save and exit
chmod +x /etc/keepalived/check_apiserver.sh #make the script executable
#Enable and start keepalived and haproxy
systemctl enable --now haproxy
systemctl enable --now keepalived
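Before moving on, it is worth verifying that both services came up. A minimal sketch using the lab values (VIP 192.168.100.204, NIC ens32, ports from the haproxy.cfg above):
ip addr show ens32 | grep 192.168.100.204 #the VIP should be bound on master01, the higher-priority node
ss -lnt | grep 16443 #HAProxy listening on the apiserver frontend port
curl http://127.0.0.1:33305/monitor #HAProxy monitor URI; should return 200 OK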
8. Load the images
#The following is done on both masters
kubeadm config images list #list the images the cluster needs
#The registries for these images are outside China, so download the tarballs ahead of time and simply load them with docker load
[root@k8s-master01 ~]# ll
total 922352
-rw-------. 1 root root 1264 Jan 12 2021 anaconda-ks.cfg
-rw-r--r-- 1 root root 43932160 Aug 4 13:21 coredns.tar.gz
-rw-r--r-- 1 root root 290010624 Aug 4 13:22 etcd.tar.gz
-rw-r--r-- 1 root root 55390720 Aug 4 13:22 flannel.tar.gz
-rw-r--r-- 1 root root 174554624 Aug 4 13:22 kube-apiserver.tar.gz
-rw-r--r-- 1 root root 163945984 Aug 4 13:22 kube-controller-manager.tar.gz
-rw-r--r-- 1 root root 119103488 Aug 4 13:23 kube-proxy.tar.gz
-rw-r--r-- 1 root root 96841216 Aug 4 13:23 kube-scheduler.tar.gz
-rw-r--r-- 1 root root 692736 Aug 4 13:23 pause.tar.gz
#Copy and paste to load all the images in one go
docker load -i coredns.tar.gz
docker load -i kube-apiserver.tar.gz
docker load -i kube-proxy.tar.gz
docker load -i pause.tar.gz
docker load -i etcd.tar.gz
docker load -i kube-controller-manager.tar.gz
docker load -i kube-scheduler.tar.gz
docker load -i flannel.tar.gz
[root@k8s-master01 ~]# docker images #confirm the images are loaded
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.18.4 718fa77019f2 13 months ago 117MB
k8s.gcr.io/kube-apiserver v1.18.4 408913fc18eb 13 months ago 173MB
k8s.gcr.io/kube-controller-manager v1.18.4 e8f1690127c4 13 months ago 162MB
k8s.gcr.io/kube-scheduler v1.18.4 c663567f869e 13 months ago 95.3MB
k8s.gcr.io/pause 3.2 80d28bedfe5d 17 months ago 683kB
k8s.gcr.io/coredns 1.6.7 67da37a9a360 18 months ago 43.8MB
k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 21 months ago 288MB
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 2 years ago 52.5MB
#On the two worker nodes
[root@k8s-node01 ~]# ll #the tarballs uploaded to the workers
total 171092
-rw-------. 1 root root 1264 Jan 12 2021 anaconda-ks.cfg
-rw-r--r-- 1 root root 55390720 Aug 4 13:28 flannel.tar.gz
-rw-r--r-- 1 root root 119103488 Aug 4 13:28 kube-proxy.tar.gz
-rw-r--r-- 1 root root 692736 Aug 4 13:28 pause.tar.gz
#Load the images in one go
docker load -i flannel.tar.gz
docker load -i kube-proxy.tar.gz
docker load -i pause.tar.gz
[root@k8s-node01 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.18.4 718fa77019f2 13 months ago 117MB
k8s.gcr.io/pause 3.2 80d28bedfe5d 17 months ago 683kB
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 2 years ago 52.5MB
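If you do not have the offline tarballs at hand, an alternative sketch is to pull the images from a domestic mirror and retag them to the names kubeadm expects, assuming the Aliyun google_containers mirror carries these tags:
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.18.4
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.18.4 k8s.gcr.io/kube-proxy:v1.18.4
docker pull registry.aliyuncs.com/google_containers/pause:3.2
docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
#repeat for the other images printed by kubeadm config images list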
9. Enable kubelet and create the cluster
#Enable kubelet at boot on all nodes
systemctl enable --now kubelet
#Initialize the master01 node. Initialization generates the certificates and config files under /etc/kubernetes; afterwards the other masters simply join master01. The following runs on master01 only
[root@k8s-master01 ~]# kubeadm init --kubernetes-version=v1.18.4 --control-plane-endpoint "192.168.100.204:16443" --upload-certs --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16
......
Your Kubernetes control-plane has initialized successfully! #this message means the init succeeded
#On success, the output includes the tokens and join commands other nodes need, so record them: masters join with the control-plane join command, workers with the plain join command
#Run the following on master01
[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
Aside: the initialization roughly runs through these phases:
• [kubelet-start] generates the kubelet config file /var/lib/kubelet/config.yaml
• [certificates] generates the various certificates
• [kubeconfig] generates the kubeconfig files
• [bootstraptoken] generates the bootstrap token; record it, since kubeadm join uses it later when adding nodes to the cluster
#Configure the environment variable on all master nodes so they can talk to the cluster
cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc
#Check the cluster state on master01
[root@k8s-master01 ~]# kubectl get nodes #only the current master01 node is in the cluster so far
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady master 10m v1.18.4
#With kubeadm, every system component runs as a container in the kube-system namespace; check the pod status:
[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide #note that the two coredns pods are Pending: no CNI plugin is installed yet
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-66bff467f8-265b9 0/1 Pending 0 13m <none> <none> <none> <none>
coredns-66bff467f8-4ktf9 0/1 Pending 0 13m <none> <none> <none> <none>
etcd-k8s-master01 1/1 Running 0 13m 192.168.100.202 k8s-master01 <none> <none>
kube-apiserver-k8s-master01 1/1 Running 3 13m 192.168.100.202 k8s-master01 <none> <none>
kube-controller-manager-k8s-master01 1/1 Running 0 13m 192.168.100.202 k8s-master01 <none> <none>
kube-proxy-425r9 1/1 Running 0 13m 192.168.100.202 k8s-master01 <none> <none>
kube-scheduler-k8s-master01 1/1 Running 0 13m 192.168.100.202 k8s-master01 <none> <none>
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X #after checking, run this to flush the iptables rules
10. Install the Flannel component
- CNI plugin overview
- Calico is a secure L3 network and network-policy provider.
- Canal combines Flannel and Calico to provide networking plus network policy.
- Cilium is an L3 network and network-policy plugin that can transparently enforce HTTP/API/L7 policies, supporting both routing and overlay/encapsulation modes.
- Contiv provides configurable networking for a range of use cases (native L3 with BGP, vxlan overlay, classic L2, and Cisco-SDN/ACI) plus a rich policy framework. Contiv is fully open source, and its installer supports both kubeadm and non-kubeadm setups.
- Flannel is an overlay network provider usable with Kubernetes.
- Romana is a layer-3 solution for pod networks that also supports the NetworkPolicy API; kubeadm add-on installation details are documented separately.
- Weave Net provides networking and network policy that keep working on both sides of a network partition, without requiring an external database.
- CNI-Genie lets Kubernetes seamlessly use any one of several CNI plugins, such as Flannel, Calico, Canal, Romana, or Weave.
- Note: this walkthrough uses the Flannel plugin.
#On master01, upload the flannel.yml file
[root@k8s-master01 ~]# ll
total 922368
-rw-------. 1 root root 1264 Jan 12 2021 anaconda-ks.cfg
-rw-r--r-- 1 root root 43932160 Aug 4 13:21 coredns.tar.gz
-rw-r--r-- 1 root root 290010624 Aug 4 13:22 etcd.tar.gz
-rw-r--r-- 1 root root 55390720 Aug 4 13:22 flannel.tar.gz
-rw-r--r-- 1 root root 14366 Aug 3 16:07 flannel.yml
-rw-r--r-- 1 root root 174554624 Aug 4 13:22 kube-apiserver.tar.gz
-rw-r--r-- 1 root root 163945984 Aug 4 13:22 kube-controller-manager.tar.gz
-rw-r--r-- 1 root root 119103488 Aug 4 13:23 kube-proxy.tar.gz
-rw-r--r-- 1 root root 96841216 Aug 4 13:23 kube-scheduler.tar.gz
-rw-r--r-- 1 root root 692736 Aug 4 13:23 pause.tar.gz
[root@k8s-master01 ~]# kubectl apply -f flannel.yml #apply the manifest
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@k8s-master01 ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-cg9mb 1/1 Running 0 3m32s
coredns-66bff467f8-f5ng5 1/1 Running 0 3m22s
etcd-k8s-master01 1/1 Running 0 4m59s
kube-apiserver-k8s-master01 1/1 Running 0 4m59s
kube-controller-manager-k8s-master01 1/1 Running 0 4m59s
kube-flannel-ds-amd64-h79fj 1/1 Running 0 50s #this pod appears once the apply succeeds; if it does not, re-run the command above
kube-proxy-4w8zc 1/1 Running 0 4m50s
kube-scheduler-k8s-master01 1/1 Running 0 4m59s
[root@k8s-master01 ~]# kubectl get -A pods -o wide #check the pod status again; after a short wait all pods should be Running
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-66bff467f8-cg9mb 1/1 Running 0 4m24s 10.244.0.2 k8s-master01 <none> <none>
kube-system coredns-66bff467f8-f5ng5 1/1 Running 0 4m14s 10.244.0.3 k8s-master01 <none> <none>
kube-system etcd-k8s-master01 1/1 Running 0 5m51s 192.168.100.202 k8s-master01 <none> <none>
kube-system kube-apiserver-k8s-master01 1/1 Running 0 5m51s 192.168.100.202 k8s-master01 <none> <none>
kube-system kube-controller-manager-k8s-master01 1/1 Running 0 5m51s 192.168.100.202 k8s-master01 <none> <none>
kube-system kube-flannel-ds-amd64-h79fj 1/1 Running 0 102s 192.168.100.202 k8s-master01 <none> <none>
kube-system kube-proxy-4w8zc 1/1 Running 0 5m42s 192.168.100.202 k8s-master01 <none> <none>
kube-system kube-scheduler-k8s-master01 1/1 Running 0 5m51s 192.168.100.202 k8s-master01 <none>
#Taint tweak: allow pods to be scheduled on the masters. This is disallowed by default, and running workloads on masters is not recommended, but given the limited resources here the masters are allowed to run pods too
[root@k8s-master01 ~]# kubectl describe nodes k8s-master01 |grep -E '(Roles|Taints)' #check the taint first; it is present by default
Roles: master
Taints: node-role.kubernetes.io/master:NoSchedule
[root@k8s-master01 ~]# kubectl taint nodes --all node-role.kubernetes.io/master- #remove the master taint from every host in the cluster
————————————————————————————————————————————————————————————————————————————————————————————————————————————————
Note: once the internal workloads are deployed, you can run kubectl taint node k8s-master01 node-role.kubernetes.io/master="":NoSchedule to put the master back into master-only mode.
Node taints and tolerations
In a kubeadm-initialized cluster, pods are not scheduled onto master nodes for safety reasons; the masters take no workload. This is because the masters carry the node-role.kubernetes.io/master:NoSchedule taint:
#The syntax of the taint command; the meaning of each effect follows
kubectl taint node [node] key=value[:effect]
where [effect] is one of: [ NoSchedule | PreferNoSchedule | NoExecute ]
NoSchedule: pods will never be scheduled here
PreferNoSchedule: avoid scheduling here if possible; only schedule here as a last resort
NoExecute: not only is nothing new scheduled here, existing pods on the node are evicted immediately
————————————————————————————————————————————————————————————————————————————————————————————————————————————————
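For completeness, the other half of the mechanism: a pod can opt in to running on a tainted node by declaring a matching toleration in its spec. An illustrative snippet (not used in this walkthrough):
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"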
#Check master01 again; Taints showing none means the taint was removed successfully
[root@k8s-master01 ~]# kubectl describe nodes k8s-master01 |grep -E '(Roles|Taints)'
Roles: master
Taints: <none>
#To taint the master again, i.e. restore its unschedulable state, run the following to re-apply NoSchedule to master01
kubectl taint nodes k8s-master01 node-role.kubernetes.io/master=:NoSchedule
11. Join the remaining nodes to the cluster
#On master02. The token values below come from the kubeadm init output earlier; if you can no longer find them, you can regenerate them (see the sketch after the join commands) or reset per section IV and re-create the cluster
kubeadm reset #reset first, then join
kubeadm join 192.168.100.204:16443 --token pkdd2v.skh3vh49u6en6v20 \
--discovery-token-ca-cert-hash sha256:4265fce05ae1956366e82f04e87aa40b55e9806a55a556515867f760043fe629 \
--control-plane --certificate-key 6782fe5b7133c1fdf9ab9467124d08945ee37e9514bf441759ddd8f583f9296d
#On node1 and node2
kubeadm reset #reset first, then join
kubeadm join 192.168.100.204:16443 --token pkdd2v.skh3vh49u6en6v20 \
--discovery-token-ca-cert-hash sha256:4265fce05ae1956366e82f04e87aa40b55e9806a55a556515867f760043fe629
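If the init output with the tokens has scrolled away, you do not have to rebuild the cluster; kubeadm can mint fresh credentials. Run on master01:
kubeadm token create --print-join-command #prints a new worker join command
kubeadm init phase upload-certs --upload-certs #re-uploads the certs and prints a new --certificate-key for joining masters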
#On master01, confirm the other nodes joined the cluster
[root@k8s-master01 ~]# kubectl get nodes #everyone has joined; all statuses are Ready
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 12m v1.18.4
k8s-master02 Ready master 11m v1.18.4
k8s-node01 Ready <none> 10m v1.18.4
k8s-node02 Ready <none> 10m v1.18.4
[root@k8s-master01 ~]# kubectl get pod -n kube-system #all should be Running; if some are not yet, wait a moment, as the worker nodes may still be creating containers
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-l75z6 1/1 Running 0 8m15s
coredns-66bff467f8-x4crp 1/1 Running 0 8m15s
etcd-k8s-master01 1/1 Running 0 8m23s
etcd-k8s-master02 1/1 Running 0 7m26s
kube-apiserver-k8s-master01 1/1 Running 0 8m23s
kube-apiserver-k8s-master02 1/1 Running 0 7m27s
kube-controller-manager-k8s-master01 1/1 Running 1 8m23s
kube-controller-manager-k8s-master02 1/1 Running 0 7m26s
kube-flannel-ds-amd64-6rj6k 1/1 Running 0 5m16s
kube-flannel-ds-amd64-dnv8s 1/1 Running 0 5m16s
kube-flannel-ds-amd64-q8cgj 1/1 Running 0 5m16s
kube-flannel-ds-amd64-wl4d6 1/1 Running 0 5m16s
kube-proxy-66lsh 1/1 Running 0 6m37s
kube-proxy-lfcfw 1/1 Running 0 6m34s
kube-proxy-q7q45 1/1 Running 0 8m15s
kube-proxy-zwwkc 1/1 Running 0 7m27s
kube-scheduler-k8s-master01 1/1 Running 1 8m23s
kube-scheduler-k8s-master02 1/1 Running 0 7m26s
At this point, the highly available Kubernetes cluster is fully deployed!
12. Follow-up steps
[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf #on both masters, uncomment the last few lines of the keepalived file
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 2
weight -5
fall 3
rise 2
}
vrrp_instance VI_1 {
state MASTER
interface ens32
mcast_src_ip 192.168.100.202
virtual_router_id 51
priority 100
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.100.204
}
track_script { #now uncommented
chk_apiserver
}
}
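Keepalived only re-reads its configuration on restart, so presumably you then restart it on both masters:
systemctl restart keepalived #apply the uncommented track_script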
II. Deploying Metrics and the Dashboard
About the Dashboard:
- The Dashboard displays the various resources in the cluster; it can also tail pod logs in real time and execute commands inside containers, all from a web UI.
- Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to the cluster, troubleshoot them, and manage cluster resources. It gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on). For example, you can scale a Deployment, start a rolling update, restart a pod, or use a wizard to create a new application.
About Metrics:
- Newer versions of Kubernetes collect system resource usage through Metrics Server, which gathers memory, disk, CPU, and network utilization for nodes and pods.
- Early Kubernetes versions relied on Heapster for performance-data collection and monitoring. Starting with 1.8, performance data is exposed through the standardized Metrics API, and from 1.10 Heapster is replaced by Metrics Server. In the new monitoring architecture, Metrics Server provides the core metrics: CPU and memory usage for nodes and pods.
Other, custom metrics are handled by components such as Prometheus.
Remember to upload metrics-server and metrics-scraper_v1.0.1 to all four nodes.
(1) Install Metrics first
#Upload metrics-server and metrics-scraper_v1.0.1 to all four nodes
[root@k8s-master01 ~]# ll #uploaded to master01
total 1001808
......
-rw-r--r-- 1 root root 40124928 Jun 29 2020 metrics-scraper_v1.0.1.tar
-rw-r--r-- 1 root root 41199616 Jun 26 2020 metrics-server.tar.gz
......
[root@k8s-master01 ~]# scp metrics-* root@192.168.100.203:/root/ #copy them to the other three servers
[root@k8s-master01 ~]# scp metrics-* root@192.168.100.205:/root/
[root@k8s-master01 ~]# scp metrics-* root@192.168.100.206:/root/
#Load the images on all four nodes
docker load -i metrics-scraper_v1.0.1.tar
docker load -i metrics-server.tar.gz
#Upload the components.yaml file to master01
[root@k8s-master01 ~]# ll | grep com
-rw-r--r-- 1 root root 3509 Jun 26 2020 components.yaml
[root@k8s-master01 ~]# kubectl apply -f components.yaml
[root@k8s-master01 ~]# kubectl -n kube-system get pods -l k8s-app=metrics-server #Running is what you want here
NAME READY STATUS RESTARTS AGE
metrics-server-7b97647899-bt7wx 1/1 Running 0 77s
metrics-server-7b97647899-j9m4b 1/1 Running 0 77s
#View resource usage from master01
[root@k8s-master01 ~]# kubectl top nodes #shows the resource load of every node in the cluster
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master01 155m 7% 1053Mi 27%
k8s-master02 116m 5% 857Mi 22%
k8s-node01 15m 1% 362Mi 19%
k8s-node02 33m 3% 337Mi 17%
[root@k8s-master01 ~]# kubectl top pods --all-namespaces #view the load of all pods
NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system coredns-66bff467f8-l75z6 4m 13Mi
kube-system coredns-66bff467f8-x4crp 4m 13Mi
kube-system etcd-k8s-master01 39m 84Mi
kube-system etcd-k8s-master02 35m 83Mi
kube-system kube-apiserver-k8s-master01 51m 363Mi
kube-system kube-apiserver-k8s-master02 39m 355Mi
kube-system kube-controller-manager-k8s-master01 21m 43Mi
kube-system kube-controller-manager-k8s-master02 2m 20Mi
kube-system kube-flannel-ds-amd64-6rj6k 1m 11Mi
kube-system kube-flannel-ds-amd64-dnv8s 2m 12Mi
kube-system kube-flannel-ds-amd64-q8cgj 1m 11Mi
kube-system kube-flannel-ds-amd64-wl4d6 6m 15Mi
kube-system kube-proxy-66lsh 1m 16Mi
kube-system kube-proxy-lfcfw 1m 16Mi
kube-system kube-proxy-q7q45 1m 17Mi
kube-system kube-proxy-zwwkc 1m 17Mi
kube-system kube-scheduler-k8s-master01 2m 16Mi
kube-system kube-scheduler-k8s-master02 5m 20Mi
kube-system metrics-server-7b97647899-bt7wx 1m 11Mi
kube-system metrics-server-7b97647899-j9m4b 1m 11Mi
kubernetes-dashboard kubernetes-dashboard-7b544877d5-h5m9x 1m 16Mi
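kubectl top is just a client for the Metrics API that metrics-server registers, so you can also query the API directly, which helps when debugging (a minimal check):
kubectl get apiservices v1beta1.metrics.k8s.io #AVAILABLE should be True
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes #raw JSON with per-node usage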
(2) Install the Dashboard
#Everything below is done on master01 only
#Download over the network (this link may go stale; the dashboard.yaml file used below is identical to this recommended.yaml, only the name differs):
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
#Modify the file after downloading
[root@k8s-master01 ~]# ll
total 922376
-rw-------. 1 root root 1264 Jan 12 2021 anaconda-ks.cfg
-rw-r--r-- 1 root root 43932160 Aug 4 13:21 coredns.tar.gz
-rw-r--r-- 1 root root 290010624 Aug 4 13:22 etcd.tar.gz
-rw-r--r-- 1 root root 55390720 Aug 4 13:22 flannel.tar.gz
-rw-r--r-- 1 root root 14366 Aug 3 16:07 flannel.yml
-rw-r--r-- 1 root root 174554624 Aug 4 13:22 kube-apiserver.tar.gz
-rw-r--r-- 1 root root 163945984 Aug 4 13:22 kube-controller-manager.tar.gz
-rw-r--r-- 1 root root 119103488 Aug 4 13:23 kube-proxy.tar.gz
-rw-r--r-- 1 root root 96841216 Aug 4 13:23 kube-scheduler.tar.gz
-rw-r--r-- 1 root root 692736 Aug 4 13:23 pause.tar.gz
-rw-r--r-- 1 root root 7591 Aug 4 15:23 dashboard.yaml #upload this file
[root@k8s-master01 ~]# vim dashboard.yaml
......
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort #change the Service type to NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001 #pin the port to 30001
  selector:
    k8s-app: kubernetes-dashboard

---
#save and exit
[root@k8s-master01 ~]# kubectl create -f dashboard.yaml
[root@k8s-master01 ~]# kubectl get pod -n kubernetes-dashboard #both should be Running
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-6b4884c9d5-mbfws 1/1 Running 0 46m
kubernetes-dashboard-7b544877d5-h5m9x 1/1 Running 0 46m
[root@k8s-master01 ~]# kubectl get pod,svc -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-6b4884c9d5-mbfws 1/1 Running 0 46m
pod/kubernetes-dashboard-7b544877d5-h5m9x 1/1 Running 0 46m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.1.45.12 <none> 8000/TCP 46m
service/kubernetes-dashboard NodePort 10.1.58.130 <none> 443:30001/TCP 46m
(3) Create the ServiceAccount and ClusterRoleBinding YAML
#Everything below is done on master01 only. The Dashboard ships with minimal RBAC permissions by default; bind it to cluster-admin so cluster resources can be managed from the Dashboard
[root@k8s-master01 ~]# vim adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
#save and exit
[root@k8s-master01 ~]# kubectl create -f adminuser.yaml #this step completes the setup
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
III. Testing
(1) Access the Dashboard UI from a browser
- Browse to https://192.168.100.204:30001; this is the virtual IP, and port 30001 is the NodePort set in dashboard.yaml
#Fetch the login token on master01
[root@k8s-master01 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-jgs6t
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 8e0b1c00-814e-4984-900b-be3938e33642
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjJzTEpWNDRoVkVrVHA0RExzUzFrdzk4ZmdSeUVqX0ZLRWNPWm10aUFKWWMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWpnczZ0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4ZTBiMWMwMC04MTRlLTQ5ODQtOTAwYi1iZTM5MzhlMzM2NDIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.oelr-6lROucibVIcC3FNwI5ubm4FcrBT3BFRT2wDCSEmjqvOFvx_5KUrJjcsW6mHy7BGPHsHmeXZDgavOQKc9hB6cQOlI0BUFCP1FciCQw3rBXrOY2CYfapW8nztMaIzsCyZl0C0xO35jI0REyp9Gx7laoPb6-4-bFpWcQIR5WrQAoJ9sPuFbcYLWMLsdYVUdct8PKY4MzrYN-pEqteb-QNm96XfrUV98idyQ1bx2rvR8KyEfSvF8Glg2i627bD-GKkMsZuGRvlWs2cIw5CA0l1mkadiZgASpFK4CQaiPmxXK2W3fYBTmavaBWrmXhFV40cFgsPJccoWiH9V9Y__-Q
Login succeeded!
(2) Follow-up steps
#Switch kube-proxy to ipvs mode; the ipvs setting was left out during cluster initialization, so change it by hand
[root@k8s-master01 ~]# kubectl edit cm kube-proxy -n kube-system
......
kind: KubeProxyConfiguration
metricsBindAddress: ""
mode: "ipvs" #change this to ipvs
nodePortAddresses: null
oomScoreAdj:
......
#save and exit
#Roll the kube-proxy pods so they pick up the new mode
[root@k8s-master01 ~]# kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system
#Verify the kube-proxy mode
[root@k8s-master01 ~]# curl 127.0.0.1:10249/proxyMode
ipvs #ipvs here means the switch succeeded
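Since ipvsadm was installed back in step 4, you can also inspect the virtual-server table that kube-proxy now maintains:
ipvsadm -ln #lists the IPVS virtual servers and their backend real servers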
IV. Resetting the Kubernetes cluster
#First remove all hosts from the cluster
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 19m v1.18.4
[root@k8s-master01 ~]# kubectl delete node k8s-master01
node "k8s-master01" deleted
[root@k8s-master01 ~]# kubectl get nodes
No resources found in default namespace.
#On every worker node (i.e. any node that had joined the cluster and has now been removed), delete the working directory and reset kubeadm
[root@k8s-master01 ~]# rm -rf /etc/kubernetes/*
[root@k8s-master01 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
#On master01, delete the working directories and reset kubeadm, then re-create the cluster
[root@k8s-master01 ~]# rm -rf /etc/kubernetes/*
[root@k8s-master01 ~]# rm -rf ~/.kube/*
[root@k8s-master01 ~]# rm -rf /var/lib/etcd/*
[root@k8s-master01 ~]# kubeadm reset -f