Pros and cons of building the cluster from binaries:

Pros:
    Easy to maintain, flexible, and straightforward to upgrade

Cons:
    Complex to install, no ready-made documentation

 

master1 also serves as the deploy node; when no node is specified, commands are run on master1 by default.

It is recommended to configure passwordless SSH login from the deploy node to the other nodes; for the procedure, see: 批量实现SSH免密登录 (setting up passwordless SSH in batch).
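If that is not set up yet, a minimal sketch run on master1 looks like this (it assumes you can enter each node's root password once for ssh-copy-id):

# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# for h in master1 master2 master3 node1 node2 node3; do ssh-copy-id root@$h; done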


Environment preparation

Carry out the preparation steps below on every node.

  • Host overview:
OS          IP               Role            CPU   Memory   Hostname
CentOS 7.7  192.168.100.128  master, deploy  >=2   >=2G     master1
CentOS 7.7  192.168.100.129  master          >=2   >=2G     master2
CentOS 7.7  192.168.100.130  master          >=2   >=2G     master3
CentOS 7.7  192.168.100.131  node            >=2   >=2G     node1
CentOS 7.7  192.168.100.132  node            >=2   >=2G     node2
CentOS 7.7  192.168.100.133  node            >=2   >=2G     node3
  • Set the hostnames:

Each node's hostname must be unique, and the nodes must be able to reach one another by hostname.

Taking master1 as an example,

# mkdir /software
# hostnamectl set-hostname master1
# vim /etc/hosts
192.168.100.128 master1
192.168.100.129 master2
192.168.100.130 master3
192.168.100.131 node1
192.168.100.132 node2
192.168.100.133 node3

 

  • Install dependency packages:
# yum update -y
# yum install -y curl git iptables conntrack ipvsadm ipset jq sysstat libseccomp

 

  • Disable the firewall, SELinux and swap, and reset iptables:
# systemctl stop firewalld && systemctl disable firewalld
# sed -i 's/=enforcing/=disabled/g' /etc/selinux/config && setenforce 0
# iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# swapoff -a
# sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
# systemctl stop dnsmasq && systemctl disable dnsmasq               #otherwise docker containers may fail to resolve domain names

 

  • Set kernel parameters:
# cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
EOF
# sysctl -p /etc/sysctl.d/kubernetes.conf

 

If the sysctl -p /etc/sysctl.d/kubernetes.conf step reports:

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory

 

run the following instead:

# modprobe br_netfilter
# sysctl -p /etc/sysctl.d/kubernetes.conf
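Note that modprobe does not persist across reboots; to load br_netfilter automatically at boot (a sketch using systemd's modules-load.d mechanism, which CentOS 7 provides):

# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf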

 

  • Install Docker:
# curl http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker.repo
# yum makecache fast
# yum install -y docker-ce
# systemctl start docker && systemctl enable docker
# cat > /etc/docker/daemon.json <<EOF
{
 "exec-opts":["native.cgroupdriver=cgroupfs"]
}
EOF
# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
# systemctl restart docker

 

  • Download the binaries (deploy node):

Official download page: https://github.com/kubernetes/kubernetes/releases

Baidu Cloud link (recommended): https://pan.baidu.com/s/1j27LqsSXeNXzu-HtWC6CPg   extraction code: ujpo

Upload the downloaded tarball to the /software directory on the deploy node,

# cd /software
# tar xf kubernetes-bin-1.14.0.tar.gz
# cd kubernetes-bin-1.14.0/
# tree .
.
├── master
│   ├── etcd
│   ├── etcdctl
│   ├── kubeadm
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubectl
│   └── kube-scheduler
└── worker
    ├── kubelet
    └── kube-proxy

2 directories, 9 files

 

Distribute the files to the other nodes,

# ssh root@master1 "mkdir -p /opt/kubernetes/bin"# ssh root@master2 "mkdir -p /opt/kubernetes/bin"# ssh root@master3 "mkdir -p /opt/kubernetes/bin"# ssh root@node1 "mkdir -p /opt/kubernetes/bin"# ssh root@node2 "mkdir -p /opt/kubernetes/bin"# ssh root@node3 "mkdir -p /opt/kubernetes/bin"# scp master/* master1:/opt/kubernetes/bin/# scp master/* master2:/opt/kubernetes/bin/# scp master/* master3:/opt/kubernetes/bin/# scp worker/* node1:/opt/kubernetes/bin/# scp worker/* node2:/opt/kubernetes/bin/# scp worker/* node3:/opt/kubernetes/bin/

 

Set the PATH,

# ssh root@master1 "echo 'PATH=/opt/kubernetes/bin:$PATH' >> ~/.bashrc"# ssh root@master2 "echo 'PATH=/opt/kubernetes/bin:$PATH' >> ~/.bashrc"# ssh root@master3 "echo 'PATH=/opt/kubernetes/bin:$PATH' >> ~/.bashrc"# ssh root@node1 "echo 'PATH=/opt/kubernetes/bin:$PATH' >> ~/.bashrc"# ssh root@node2 "echo 'PATH=/opt/kubernetes/bin:$PATH' >> ~/.bashrc"# ssh root@node3 "echo 'PATH=/opt/kubernetes/bin:$PATH' >> ~/.bashrc"# ssh root@master1 "source ~/.bashrc"# ssh root@master2 "source ~/.bashrc"# ssh root@master3 "source ~/.bashrc"# ssh root@node1 "source ~/.bashrc"# ssh root@node2 "source ~/.bashrc"# ssh root@node3 "source ~/.bashrc"

 

  • Configuration files (deploy node):

Git archive download: https://pan.baidu.com/s/1_jntuO_LunAV_NvnbV4NIw   extraction code: j2a7

# cd /software && git clone https://git.imooc.com/LZXLINUX/kubernetes-ha-binary.git
# cd kubernetes-ha-binary/ && ls
addons  configs  global-config.properties  init.sh  LICENSE  pki  README.md  services

 

File descriptions:

addons      Kubernetes add-ons, such as calico, coredns and dashboard

configs     Configuration files and scripts used during cluster deployment

pki         Certificate configuration for each component's authentication and authorization

services    systemd service files for all the Kubernetes services

global-config.properties   Global configuration, containing the settings that vary between environments

init.sh     Initialization script; once global-config.properties is filled in, it generates all configuration files automatically

 

# vim global-config.properties

 

#IPs of the 3 master nodes
MASTER_0_IP=192.168.100.128
MASTER_1_IP=192.168.100.129
MASTER_2_IP=192.168.100.130

#hostnames of the 3 master nodes
MASTER_0_HOSTNAME=master1
MASTER_1_HOSTNAME=master2
MASTER_2_HOSTNAME=master3

#highly available virtual IP for the api-server (preferably in the same subnet)
MASTER_VIP=192.168.100.188

#network interface used by keepalived for the VIP, usually eth0
VIP_IF=ens33

#list of worker node IPs
WORKER_IPS=192.168.100.131,192.168.100.132,192.168.100.133

#kubernetes service IP range
SERVICE_CIDR=10.254.0.0/16

#default kubernetes service IP, usually the first IP of the CIDR
KUBERNETES_SVC_IP=10.254.0.1

#DNS service IP, usually the second IP of the CIDR
CLUSTER_DNS=10.254.0.2

#pod network CIDR
POD_CIDR=172.10.0.0/16

#NodePort range
NODE_PORT_RANGE=8400-8900

 

# ./init.sh
====variable substitution list====
MASTER_0_IP=192.168.100.128
MASTER_1_IP=192.168.100.129
MASTER_2_IP=192.168.100.130
MASTER_0_HOSTNAME=master1
MASTER_1_HOSTNAME=master2
MASTER_2_HOSTNAME=master3
MASTER_VIP=192.168.100.188
VIP_IF=ens33
WORKER_IPS=192.168.100.131,192.168.100.132,192.168.100.133
SERVICE_CIDR=10.254.0.0/16
KUBERNETES_SVC_IP=10.254.0.1
CLUSTER_DNS=10.254.0.2
POD_CIDR=172.10.0.0/16
NODE_PORT_RANGE=8400-8900
====certificate config files substituted====
pki/admin/admin-csr.json
pki/apiserver/kubernetes-csr.json
pki/ca-config.json
pki/ca-csr.json
pki/controller-manager/controller-manager-csr.json
pki/etcd/etcd-csr.json
pki/proxy/kube-proxy-csr.json
pki/scheduler/scheduler-csr.json
====config files substituted====
configs/check-apiserver.sh
configs/download-images.sh
configs/keepalived-backup.conf
configs/keepalived-master.conf
configs/kubelet.config.json
configs/kube-proxy.config.yaml
addons/calico-rbac-kdd.yaml
addons/calico.yaml
addons/coredns.yaml
addons/dashboard-all.yaml
worker-192.168.100.131/kubelet.config.json
worker-192.168.100.131/kubelet.service
worker-192.168.100.131/kube-proxy.config.yaml
worker-192.168.100.132/kubelet.config.json
worker-192.168.100.132/kubelet.service
worker-192.168.100.132/kube-proxy.config.yaml
worker-192.168.100.133/kubelet.config.json
worker-192.168.100.133/kubelet.service
worker-192.168.100.133/kube-proxy.config.yaml
====service files substituted====
192.168.100.128/services/etcd.service
192.168.100.128/services/kube-apiserver.service
192.168.100.128/services/kube-controller-manager.service
192.168.100.128/services/kubelet.service
192.168.100.128/services/kube-proxy.service
192.168.100.128/services/kube-scheduler.service
192.168.100.129/services/etcd.service
192.168.100.129/services/kube-apiserver.service
192.168.100.129/services/kube-controller-manager.service
192.168.100.129/services/kubelet.service
192.168.100.129/services/kube-proxy.service
192.168.100.129/services/kube-scheduler.service
192.168.100.130/services/etcd.service
192.168.100.130/services/kube-apiserver.service
192.168.100.130/services/kube-controller-manager.service
192.168.100.130/services/kubelet.service
192.168.100.130/services/kube-proxy.service
192.168.100.130/services/kube-scheduler.service
services/etcd.service
services/kube-apiserver.service
services/kube-controller-manager.service
services/kubelet.service
services/kube-proxy.service
services/kube-scheduler.service
Configuration generated successfully, location: /home/kubernetes-ha-binary/target

# find target/ -type f
target/pki/admin/admin-csr.json
target/pki/apiserver/kubernetes-csr.json
target/pki/ca-config.json
target/pki/ca-csr.json
target/pki/controller-manager/controller-manager-csr.json
target/pki/etcd/etcd-csr.json
target/pki/proxy/kube-proxy-csr.json
target/pki/scheduler/scheduler-csr.json
target/configs/check-apiserver.sh
target/configs/download-images.sh
target/configs/keepalived-backup.conf
target/configs/keepalived-master.conf
target/configs/kube-proxy.config.yaml
target/configs/kubelet.config.json
target/addons/calico-rbac-kdd.yaml
target/addons/calico.yaml
target/addons/coredns.yaml
target/addons/dashboard-all.yaml
target/services/etcd.service
target/services/kube-apiserver.service
target/services/kube-controller-manager.service
target/services/kube-proxy.service
target/services/kube-scheduler.service
target/services/kubelet.service
target/worker-192.168.100.131/kubelet.config.json
target/worker-192.168.100.131/kubelet.service
target/worker-192.168.100.131/kube-proxy.config.yaml
target/worker-192.168.100.132/kubelet.config.json
target/worker-192.168.100.132/kubelet.service
target/worker-192.168.100.132/kube-proxy.config.yaml
target/worker-192.168.100.133/kubelet.config.json
target/worker-192.168.100.133/kubelet.service
target/worker-192.168.100.133/kube-proxy.config.yaml
target/192.168.100.128/services/etcd.service
target/192.168.100.128/services/kube-apiserver.service
target/192.168.100.128/services/kube-controller-manager.service
target/192.168.100.128/services/kube-proxy.service
target/192.168.100.128/services/kube-scheduler.service
target/192.168.100.128/services/kubelet.service
target/192.168.100.129/services/etcd.service
target/192.168.100.129/services/kube-apiserver.service
target/192.168.100.129/services/kube-controller-manager.service
target/192.168.100.129/services/kube-proxy.service
target/192.168.100.129/services/kube-scheduler.service
target/192.168.100.129/services/kubelet.service
target/192.168.100.130/services/etcd.service
target/192.168.100.130/services/kube-apiserver.service
target/192.168.100.130/services/kube-controller-manager.service
target/192.168.100.130/services/kube-proxy.service
target/192.168.100.130/services/kube-scheduler.service
target/192.168.100.130/services/kubelet.service

 


Building the highly available cluster

  • Generate the CA certificate:

cfssl is a very handy CA tool; we use it to generate the certificate and key files.

# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
# cfssl version

 

The root certificate is shared by all nodes in the cluster; only one CA certificate needs to be created, and every certificate created afterwards is signed by it.

# cd /software/kubernetes-ha-binary/target/pki/
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
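Optionally inspect the generated CA certificate before distributing it (a quick sanity check; openssl is available by default on CentOS 7):

# openssl x509 -noout -subject -dates -in ca.pem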

 

Distribute them to every master node,

# ssh root@master1 "mkdir -p /etc/kubernetes/pki"# ssh root@master2 "mkdir -p /etc/kubernetes/pki"# ssh root@master3 "mkdir -p /etc/kubernetes/pki"# scp ca*.pem master1:/etc/kubernetes/pki/# scp ca*.pem master2:/etc/kubernetes/pki/# scp ca*.pem master3:/etc/kubernetes/pki/

 

  • Deploy the etcd cluster:

If you downloaded the binaries from the Baidu Cloud link above, you can skip downloading etcd; otherwise download it first.

# wget https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz

 

Generate the certificate and private key, and distribute them to every master node,

# cd /software/kubernetes-ha-binary/target/pki/etcd/
# cfssl gencert -ca=../ca.pem \
  -ca-key=../ca-key.pem \
  -config=../ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd    
# scp etcd*.pem master1:/etc/kubernetes/pki/
# scp etcd*.pem master2:/etc/kubernetes/pki/
# scp etcd*.pem master3:/etc/kubernetes/pki/

 

Distribute the etcd service file to every master node and create the data directory,

# cd /software/kubernetes-ha-binary/
# scp target/192.168.100.128/services/etcd.service master1:/etc/systemd/system/
# scp target/192.168.100.129/services/etcd.service master2:/etc/systemd/system/
# scp target/192.168.100.130/services/etcd.service master3:/etc/systemd/system/
# ssh root@master1 "mkdir /var/lib/etcd"
# ssh root@master2 "mkdir /var/lib/etcd"
# ssh root@master3 "mkdir /var/lib/etcd"

 

Start the etcd service on the master nodes,

# ssh root@master1 "systemctl daemon-reload && systemctl enable etcd && systemctl start etcd"# ssh root@master2 "systemctl daemon-reload && systemctl enable etcd && systemctl start etcd"# ssh root@master3 "systemctl daemon-reload && systemctl enable etcd && systemctl start etcd"

 

# systemctl status etcd
# journalctl -f -u etcd             #view the logs
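You can also verify cluster health with etcdctl from any master (a sketch; it assumes etcd serves clients over TLS on the default port 2379 with the certificates distributed above — check the generated etcd.service for the exact listen addresses):

# etcdctl --ca-file=/etc/kubernetes/pki/ca.pem \
  --cert-file=/etc/kubernetes/pki/etcd.pem \
  --key-file=/etc/kubernetes/pki/etcd-key.pem \
  --endpoints=https://192.168.100.128:2379,https://192.168.100.129:2379,https://192.168.100.130:2379 \
  cluster-health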

 

  • Deploy the api-server:

Generate the certificate and private key, and distribute them to every master node,

# cd /software/kubernetes-ha-binary/target/pki/apiserver/
# cfssl gencert -ca=../ca.pem \
  -ca-key=../ca-key.pem \
  -config=../ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes  
# scp kubernetes*.pem master1:/etc/kubernetes/pki/
# scp kubernetes*.pem master2:/etc/kubernetes/pki/
# scp kubernetes*.pem master3:/etc/kubernetes/pki/

 

Distribute the api-server service file to every master node and create the log directory,

# cd /software/kubernetes-ha-binary/
# scp target/192.168.100.128/services/kube-apiserver.service master1:/etc/systemd/system/
# scp target/192.168.100.129/services/kube-apiserver.service master2:/etc/systemd/system/
# scp target/192.168.100.130/services/kube-apiserver.service master3:/etc/systemd/system/
# ssh root@master1 "mkdir /var/log/kubernetes"
# ssh root@master2 "mkdir /var/log/kubernetes"
# ssh root@master3 "mkdir /var/log/kubernetes"

 

Start the api-server on the master nodes,

# ssh root@master1 "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver"# ssh root@master2 "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver"# ssh root@master3 "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver"

 

# systemctl status kube-apiserver
# journalctl -f -u kube-apiserver
# netstat -lntp |grep 6443
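Optionally, probe the apiserver's health endpoint directly on a master (a sketch; -k skips certificate verification, and depending on the apiserver's anonymous-auth/RBAC settings you may get an authorization error instead of ok, in which case pass the admin client certificate and key):

# curl -k https://192.168.100.128:6443/healthz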

 

  • Deploy keepalived:

Install keepalived on the 3 master nodes to keep the api-server processes on the masters highly available.

Note: cloud servers generally do not support custom virtual IPs, so skip installing keepalived there. Instead, use the cloud provider's load balancer (for example Alibaba Cloud SLB) for high availability: set your 3 master nodes as the backends and use the load balancer's internal IP as the virtual IP.

Install keepalived,

# ssh root@master1 "yum install -y keepalived"# ssh root@master2 "yum install -y keepalived"# ssh root@master3 "yum install -y keepalived"

 

Distribute the keepalived configuration files to every master node,

# cd /software/kubernetes-ha-binary/
# scp target/configs/keepalived-master.conf master1:/etc/keepalived/keepalived.conf
# scp target/configs/keepalived-backup.conf master2:/etc/keepalived/keepalived.conf
# scp target/configs/keepalived-backup.conf master3:/etc/keepalived/keepalived.conf
# scp target/configs/check-apiserver.sh master1:/etc/keepalived/
# scp target/configs/check-apiserver.sh master2:/etc/keepalived/
# scp target/configs/check-apiserver.sh master3:/etc/keepalived/

 

Then adjust the keepalived configuration on each of the 3 master nodes,

master1

# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
 router_id keepalive-master
}

vrrp_script check_apiserver {
 script "/etc/keepalived/check-apiserver.sh"
 interval 3
 weight -3
}

vrrp_instance VI-kube-master {
   state MASTER
   interface ens33
   virtual_router_id 68
   priority 100
   dont_track_primary
   advert_int 3
   virtual_ipaddress {
     192.168.100.188
   }
   track_script {
       check_apiserver
   }
}

 

master2

# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
 router_id keepalive-backup1
}

vrrp_script check_apiserver {
 script "/etc/keepalived/check-apiserver.sh"
 interval 3
 weight -3
}

vrrp_instance VI-kube-master {
   state BACKUP
   interface ens33
   virtual_router_id 68
   priority 99
   dont_track_primary
   advert_int 3
   virtual_ipaddress {
     192.168.100.188
   }
   track_script {
       check_apiserver
   }
}

 

master3

# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
 router_id keepalive-backup2
}

vrrp_script check_apiserver {
 script "/etc/keepalived/check-apiserver.sh"
 interval 3
 weight -3
}

vrrp_instance VI-kube-master {
   state BACKUP
   interface ens33
   virtual_router_id 68
   priority 98
   dont_track_primary
   advert_int 3
   virtual_ipaddress {
     192.168.100.188
   }
   track_script {
       check_apiserver
   }
}

 

Start keepalived on the master nodes,

# ssh root@master1 "systemctl enable keepalived && systemctl start keepalived"# ssh root@master2 "systemctl enable keepalived && systemctl start keepalived"# ssh root@master3 "systemctl enable keepalived && systemctl start keepalived"

 

# systemctl status keepalived
# journalctl -f -u keepalived
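To confirm which master currently holds the virtual IP (the interface name matches the VIP_IF setting, ens33 in this guide):

# ip addr show ens33 | grep 192.168.100.188

The VIP should appear on master1 while its keepalived and api-server are healthy, and move to another master if they go down.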

 

  • Deploy kubectl:

kubectl is the command-line tool for managing the Kubernetes cluster; by default it reads the kube-apiserver address, certificate, user name and other information from the ~/.kube/config file.

kubectl communicates with the apiserver over the HTTPS secure port, and the apiserver authenticates and authorizes the certificate presented. As the cluster's administration tool, kubectl needs to be granted the highest privileges.

Generate the admin certificate and private key,

# cd /software/kubernetes-ha-binary/target/pki/admin/
# cfssl gencert -ca=../ca.pem \
  -ca-key=../ca-key.pem \
  -config=../ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin

 

A kubeconfig is kubectl's configuration file; it contains all the information needed to access the apiserver, such as the apiserver address, the CA certificate, and the client certificate kubectl itself uses.

Create the kubeconfig file,

# kubectl config set-cluster kubernetes \
  --certificate-authority=../ca.pem \
  --embed-certs=true \
  --server=https://192.168.100.188:6443 \
  --kubeconfig=kube.config
# kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kube.config
# kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kube.config  
# kubectl config use-context kubernetes --kubeconfig=kube.config
# ssh root@master1 "mkdir ~/.kube"
# ssh root@master2 "mkdir ~/.kube"
# ssh root@master3 "mkdir ~/.kube"
# scp kube.config master1:~/.kube/config
# scp kube.config master2:~/.kube/config
# scp kube.config master3:~/.kube/config

 

Some kubectl commands (such as exec and logs) are forwarded by the apiserver to the kubelet. Define an RBAC rule here to authorize the apiserver to call the kubelet API.

# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

 

# kubectl cluster-info
# kubectl get all --all-namespaces
# kubectl get componentstatuses

 

  • Deploy the controller-manager (master nodes):

After startup, the controller-manager instances elect a leader through leader election while the other instances block. If the leader becomes unavailable, the remaining instances hold a new election and produce a new leader, keeping the service available (a command to check the current leader is shown after the startup step below).

Generate the certificate and private key, and distribute them to every master node,

# cd /software/kubernetes-ha-binary/target/pki/controller-manager/
# cfssl gencert -ca=../ca.pem \
  -ca-key=../ca-key.pem \
  -config=../ca-config.json \
  -profile=kubernetes controller-manager-csr.json | cfssljson -bare controller-manager  
# scp controller-manager*.pem master1:/etc/kubernetes/pki/
# scp controller-manager*.pem master2:/etc/kubernetes/pki/
# scp controller-manager*.pem master3:/etc/kubernetes/pki/

 

Create the controller-manager kubeconfig and distribute it to every master node,

# kubectl config set-cluster kubernetes \
  --certificate-authority=../ca.pem \
  --embed-certs=true \
  --server=https://192.168.100.188:6443 \
  --kubeconfig=controller-manager.kubeconfig  
# kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=controller-manager.pem \
  --client-key=controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=controller-manager.kubeconfig  
# kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=controller-manager.kubeconfig  
# kubectl config use-context system:kube-controller-manager --kubeconfig=controller-manager.kubeconfig
# scp controller-manager.kubeconfig master1:/etc/kubernetes/
# scp controller-manager.kubeconfig master2:/etc/kubernetes/
# scp controller-manager.kubeconfig master3:/etc/kubernetes/

 

Distribute the controller-manager service file to every master node,

# cd /software/kubernetes-ha-binary/
# scp target/services/kube-controller-manager.service master1:/etc/systemd/system/
# scp target/services/kube-controller-manager.service master2:/etc/systemd/system/
# scp target/services/kube-controller-manager.service master3:/etc/systemd/system/

 

Start the controller-manager on the master nodes,

# ssh root@master1 "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager"# ssh root@master2 "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager"# ssh root@master3 "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager"

 

# systemctl status kube-controller-manager
# journalctl -f -u kube-controller-manager
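To see which master currently holds the controller-manager leader lease (a sketch; with the Kubernetes 1.14 defaults, leader election records the holder in an annotation on an Endpoints object in kube-system):

# kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'

The same check works for the scheduler by replacing kube-controller-manager with kube-scheduler.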

 

  • Deploy the scheduler:

Like the controller-manager, the scheduler instances elect a leader after startup while the others block; if the leader becomes unavailable, a new election produces a new leader, keeping the service available.

Generate the certificate and private key,

# cd /software/kubernetes-ha-binary/target/pki/scheduler/
# cfssl gencert -ca=../ca.pem \
  -ca-key=../ca-key.pem \
  -config=../ca-config.json \
  -profile=kubernetes scheduler-csr.json | cfssljson -bare kube-scheduler

 

Create the scheduler kubeconfig and distribute it to every master node,

# kubectl config set-cluster kubernetes \
  --certificate-authority=../ca.pem \
  --embed-certs=true \
  --server=https://192.168.100.188:6443 \
  --kubeconfig=kube-scheduler.kubeconfig  
# kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig  
# kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig  
# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
# scp kube-scheduler.kubeconfig master1:/etc/kubernetes/
# scp kube-scheduler.kubeconfig master2:/etc/kubernetes/
# scp kube-scheduler.kubeconfig master3:/etc/kubernetes/

 

Distribute the scheduler service file to every master node,

# cd /software/kubernetes-ha-binary/
# scp target/services/kube-scheduler.service master1:/etc/systemd/system/
# scp target/services/kube-scheduler.service master2:/etc/systemd/system/
# scp target/services/kube-scheduler.service master3:/etc/systemd/system/

 

Start the scheduler on the master nodes,

# ssh root@master1 "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler"# ssh root@master2 "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler"# ssh root@master3 "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler"

 

# systemctl status kube-scheduler
# journalctl -f -u kube-scheduler

 

  • Deploy the kubelet:

Pre-pull the images on every node,

# scp target/configs/download-images.sh node1:/software
# scp target/configs/download-images.sh node2:/software
# scp target/configs/download-images.sh node3:/software
# ssh root@node1 "sh /software/download-images.sh"
# ssh root@node2 "sh /software/download-images.sh"
# ssh root@node3 "sh /software/download-images.sh"

 

Create the bootstrap configuration and distribute it to every node,

# cd /software/kubernetes-ha-binary/target/pki/admin/
# export BOOTSTRAP_TOKEN=$(kubeadm token create \
  --description kubelet-bootstrap-token \
  --groups system:bootstrappers:worker \
  --kubeconfig kube.config)

# kubectl config set-cluster kubernetes \
  --certificate-authority=../ca.pem \
  --embed-certs=true \
  --server=https://192.168.100.188:6443 \
  --kubeconfig=kubelet-bootstrap.kubeconfig  
# kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet-bootstrap.kubeconfig  
# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=kubelet-bootstrap.kubeconfig  
# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
# ssh root@node1 "mkdir -p /etc/kubernetes/pki"
# ssh root@node2 "mkdir -p /etc/kubernetes/pki"
# ssh root@node3 "mkdir -p /etc/kubernetes/pki"
# scp kubelet-bootstrap.kubeconfig node1:/etc/kubernetes/
# scp kubelet-bootstrap.kubeconfig node2:/etc/kubernetes/
# scp kubelet-bootstrap.kubeconfig node3:/etc/kubernetes/
# cd /software/kubernetes-ha-binary/
# scp target/pki/ca.pem node1:/etc/kubernetes/pki/
# scp target/pki/ca.pem node2:/etc/kubernetes/pki/
# scp target/pki/ca.pem node3:/etc/kubernetes/pki/

 

Distribute the kubelet configuration file and service file to every node,

# scp target/worker-192.168.100.131/kubelet.config.json node1:/etc/kubernetes/
# scp target/worker-192.168.100.132/kubelet.config.json node2:/etc/kubernetes/
# scp target/worker-192.168.100.133/kubelet.config.json node3:/etc/kubernetes/
# scp target/worker-192.168.100.131/kubelet.service node1:/etc/systemd/system/
# scp target/worker-192.168.100.132/kubelet.service node2:/etc/systemd/system/
# scp target/worker-192.168.100.133/kubelet.service node3:/etc/systemd/system/

 

Grant the bootstrap group permission to request certificates,

# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

 

Start the kubelet on the nodes,

# ssh root@node1 "mkdir /var/lib/kubelet"# ssh root@node2 "mkdir /var/lib/kubelet"# ssh root@node3 "mkdir /var/lib/kubelet"# ssh root@node1 "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"# ssh root@node2 "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"# ssh root@node3 "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"

 

Approve the bootstrap CSRs,

# kubectl get csr               #get the csr names
NAME                                                   AGE   REQUESTOR                 CONDITION
node-csr-IxZr_CqrO0DyR8EgmEY8QzEKcVIb1fq3CakxzyNMz-E   18s   system:bootstrap:2z33di   Pending
node-csr-TwfuVIfLWJ21im_p64jTCjnakyIAB3LYtbJ_O0bfYfg   9s    system:bootstrap:2z33di   Pending
node-csr-oxYYnJ8-WGbxEJ8gL-Q41FHHUM2FOb8dv3g8_GNs7Ok   3s    system:bootstrap:2z33di   Pending

# kubectl certificate approve node-csr-IxZr_CqrO0DyR8EgmEY8QzEKcVIb1fq3CakxzyNMz-E
# kubectl certificate approve node-csr-TwfuVIfLWJ21im_p64jTCjnakyIAB3LYtbJ_O0bfYfg
# kubectl certificate approve node-csr-oxYYnJ8-WGbxEJ8gL-Q41FHHUM2FOb8dv3g8_GNs7Ok
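If several CSRs are pending you can approve them all at once and then confirm the nodes have registered (a sketch; the CSR names above are specific to this run and will differ in your environment):

# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
# kubectl get nodes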

 

Check the kubelet status and logs on the nodes,

# ssh root@node1 "systemctl status kubelet"# ssh root@node1 "journalctl -f -u kubelet"

 

  • Deploy kube-proxy:

Generate the certificate and private key,

# cd /software/kubernetes-ha-binary/target/pki/proxy/
# cfssl gencert -ca=../ca.pem \
  -ca-key=../ca-key.pem \
  -config=../ca-config.json \
  -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

 

Create the kubeconfig file and distribute it to every node,

# kubectl config set-cluster kubernetes \
  --certificate-authority=../ca.pem \
  --embed-certs=true \
  --server=https://192.168.100.188:6443 \
  --kubeconfig=kube-proxy.kubeconfig
# kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig  
# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig  
# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# scp kube-proxy.kubeconfig node1:/etc/kubernetes/
# scp kube-proxy.kubeconfig node2:/etc/kubernetes/
# scp kube-proxy.kubeconfig node3:/etc/kubernetes/

 

Distribute the kube-proxy configuration file and service file to every node,

# cd /software/kubernetes-ha-binary/
# scp target/worker-192.168.100.131/kube-proxy.config.yaml node1:/etc/kubernetes/
# scp target/worker-192.168.100.132/kube-proxy.config.yaml node2:/etc/kubernetes/
# scp target/worker-192.168.100.133/kube-proxy.config.yaml node3:/etc/kubernetes/
# scp target/services/kube-proxy.service node1:/etc/systemd/system/
# scp target/services/kube-proxy.service node2:/etc/systemd/system/
# scp target/services/kube-proxy.service node3:/etc/systemd/system/

 

Start kube-proxy on the nodes,

# ssh root@node1 "mkdir /var/lib/kube-proxy && mkdir /var/log/kubernetes"# ssh root@node2 "mkdir /var/lib/kube-proxy && mkdir /var/log/kubernetes"# ssh root@node3 "mkdir /var/lib/kube-proxy && mkdir /var/log/kubernetes"# ssh root@node1 "systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy"# ssh root@node2 "systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy"# ssh root@node3 "systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy"

 

# ssh root@node1 "systemctl status kube-proxy"# ssh root@node1 "journalctl -f -u kube-proxy"

 

  • Deploy calico:
# mkdir /etc/kubernetes/addons
# cd /software/kubernetes-ha-binary/
# scp target/addons/calico* master1:/etc/kubernetes/addons/
# kubectl create -f /etc/kubernetes/addons/calico-rbac-kdd.yaml
# kubectl create -f /etc/kubernetes/addons/calico.yaml
# kubectl get pods -n kube-system               #check pod status

 

  • Deploy the DNS add-on coredns:
# cd /software/kubernetes-ha-binary/
# scp target/addons/coredns.yaml master1:/etc/kubernetes/addons/
# kubectl create -f /etc/kubernetes/addons/coredns.yaml

 


Availability testing

The cluster is now basically up; next, test that it actually works.

A DaemonSet ensures that one replica of its Pod runs on every (or every selected) Node in the cluster. If a new Node joins the cluster, a DaemonSet Pod is scheduled onto it as well, and deleting a DaemonSet cascades and deletes all the Pods it created.

A DaemonSet is therefore a good fit for testing availability.

  • Create an nginx DaemonSet:
# cd /software && vim nginx-ds.yaml

 

apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.14.0
        ports:
        - containerPort: 80

 

# kubectl create -f nginx-ds.yaml
service/nginx-ds created
daemonset.extensions/nginx-ds created

 

  • Check IP connectivity:
# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
nginx-ds-8tjhm   1/1     Running   0          42s   172.10.0.2   node1   <none>           <none>
nginx-ds-jdhmw   1/1     Running   0          42s   172.10.1.2   node2   <none>           <none>
nginx-ds-s2kwt   1/1     Running   0          42s   172.10.2.3   node3   <none>           <none>

# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)       AGE
kubernetes   ClusterIP   10.254.0.1      <none>        443/TCP       126m
nginx-ds     NodePort    10.254.119.71   <none>        80:8610/TCP   66s

 

On every node, ping the pod IPs, access the service IP and its port, and check that the NodePort is reachable, as shown below.
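For example (the IPs and the NodePort 8610 are taken from the output above and will differ in your environment):

# ping -c 2 172.10.0.2                     #pod IP on node1
# curl 10.254.119.71                       #service cluster IP
# curl 192.168.100.131:8610                #NodePort on node1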

  • Check DNS availability:
# vim pod-nginx.yaml

 

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.0
    ports:
    - containerPort: 80

 

# kubectl create -f pod-nginx.yaml
pod/nginx created
# kubectl exec -it nginx bash
root@nginx:/# apt-get update
root@nginx:/# apt install -y iputils-ping
root@nginx:/# ping nginx-ds
PING nginx-ds.default.svc.cluster.local (10.254.119.71) 56(84) bytes of data.

# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)       AGE
kubernetes   ClusterIP   10.254.0.1      <none>        443/TCP       144m
nginx-ds     NodePort    10.254.119.71   <none>        80:8610/TCP   19m

 

As shown above, DNS resolution works when pinging nginx-ds from inside the nginx pod and returns nginx-ds's cluster IP. This confirms that the cluster built above is working properly.


Deploying the dashboard

  • Deploy the dashboard:
# cd /software/kubernetes-ha-binary/
# scp target/addons/dashboard-all.yaml master1:/etc/kubernetes/addons/
# kubectl apply -f /etc/kubernetes/addons/dashboard-all.yaml
# kubectl get deploy kubernetes-dashboard -n kube-system
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1/1     1            1           8s
# kubectl get svc kubernetes-dashboard -n kube-system
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.254.39.199   <none>        443:8401/TCP   24s

 

  • Access the dashboard:

The dashboard only allows access over HTTPS, but since the service is exposed via NodePort it can be reached at https://NodeIP:NodePort.

About custom certificates: by default the dashboard uses an auto-generated, untrusted certificate. If you have a domain name and a matching trusted certificate, you can replace it and access the dashboard securely via that domain.

To do so, add the following startup arguments for the dashboard in dashboard-all.yaml to point at the certificate files; the files themselves are injected through a secret.

- --tls-cert-file
- dashboard.cer
- --tls-key-file
- dashboard.key

 

  • Log in to the dashboard:

The dashboard only supports token authentication by default, so even a KubeConfig file has to carry a token; here we log in with a token directly.

# kubectl create sa dashboard-admin -n kube-system
# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')
# kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}'              #print the token

 

Log in to the dashboard with the printed token,

(screenshot: dashboard token login page)

The page cannot be opened in Google Chrome (it rejects the dashboard's self-signed certificate), but it can be opened in Firefox; that step is omitted here.

If you want to access it from Chrome, do the following:

# mkdir /software/key && cd /software/key
# openssl genrsa -out dashboard.key 2048
# openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.100.131'
# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
# kubectl delete secret kubernetes-dashboard-certs -n kube-system
# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kube-system
# kubectl get pod -n kube-system |grep dashboard
kubernetes-dashboard-6f4595c9c9-bj8s6   1/1     Running   0          14m
# kubectl delete pod kubernetes-dashboard-6f4595c9c9-bj8s6 -n kube-system

 

(screenshots: dashboard opened in Chrome after replacing the certificate)

The Kubernetes dashboard is now deployed, and with that the highly available Kubernetes cluster is complete.