1. k8s cluster nodes

3.127.10.209    master
3.127.10.95     master
3.127.10.66     master
3.127.10.233    node
3.127.33.173    node


2. Environment preparation

# Stop the firewall
systemctl stop firewalld

# Disable SELinux permanently
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
# Or change it temporarily (reverts after a reboot)
setenforce 0

# Time synchronization
chrony is used here for time synchronization; ntp works just as well. A minimal chrony setup is sketched below.
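A quick chrony setup, assuming the default pool servers in /etc/chrony.conf are reachable (point them at an internal NTP server on an air-gapped network):

yum install -y chrony
systemctl enable --now chronyd
# confirm the time sources are being tracked
chronyc sources -v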

# 启用ipvs内核
at > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules 
&& bash /etc/sysconfig/modules/ipvs.modules 
&& lsmod | grep -e ip_vs -e nf_conntrack_ipv4



3. yum repository configuration

If the servers have Internet access you can use this method directly. Without Internet access, download the required packages on a connected machine with yum install --downloadonly and copy them to the intranet servers for installation; this approach was covered in the Redis sentinel cluster deployment post and is sketched below.
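A minimal offline-download sketch on a machine with Internet access (the download directory and exact package list are illustrative):

yum install --downloadonly --downloaddir=/tmp/k8s-rpms docker-ce kubelet kubeadm kubectl
# copy /tmp/k8s-rpms to the intranet host, then install locally:
yum localinstall -y /tmp/k8s-rpms/*.rpm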

Configure the docker-ce repo:

cd /etc/yum.repos.d

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Configure the kubernetes repo:

vim /etc/yum.repos.d/kubernetes.repo


gpgkey points to the package signature verification keys.
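The screenshot of the repo file is not reproduced here; a typical kubernetes.repo using the Aliyun mirror (an assumption; any reachable mirror works) looks like this:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg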

Run yum repolist to confirm the repos are configured correctly.

4. Install docker, kubelet, kubeadm and kubectl

kubectl does not need to be installed on the node (worker) hosts. A minimal install is sketched below.
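Installing from the repos configured above; pinning the version (1.21.0 here, to match the kubernetesVersion used later) is an assumption, adjust as needed:

yum install -y docker-ce kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
systemctl enable kubelet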

Adjust the forwarding policy (docker changes the default iptables FORWARD policy to DROP):

iptables -vnL


vim /usr/lib/systemd/system/docker.service

Add the following under the [Service] section:

[Service]
# Traffic to these networks bypasses the proxy; 127.0.0.0/8 is the local host network.
Environment="NO_PROXY=127.0.0.0/8,10.0.0.0/24"
# -P, --policy chain target: set the default policy of the chain (ACCEPT|DROP|REJECT)
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT

sysctl -a | grep bridge



If the following two entries are missing:

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

edit /etc/sysctl.conf or /etc/sysctl.d/k8s.conf and add:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

then apply them with sysctl -p /etc/sysctl.d/k8s.conf
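If the keys still do not show up, the br_netfilter module is probably not loaded; load it and persist it across reboots (the modules-load.d file name is illustrative):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf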

Start docker, adjust the docker daemon configuration, and restart docker:

# vim /etc/docker/daemon.json

{
"graph": "/home/docker/docker",
"storage-driver": "overlay2",
"bip": "172.17.0.1/24",
"exec-opts": ["native.cgroupdriver=systemd"],
"live-restore": true
}
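Reload systemd and restart docker so the unit-file and daemon.json changes take effect:

systemctl daemon-reload
systemctl restart docker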

Configure kubelet to ignore the swap check at startup:

# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

Enable the services to start on boot (a minimal sketch follows):

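The original note does not list the commands; on a systemd host this is typically:

systemctl enable docker kubelet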



5. Configure keepalived

keepalived deployment was also covered in the sentinel cluster post, so the installation steps are not repeated here; a minimal install is sketched below.
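On CentOS the install itself is typically just:

yum install -y keepalived
systemctl enable keepalived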
Reference configuration for this cluster:

! Configuration File for keepalived

global_defs {
   router_id k8s-master02

# add the following two lines so the check script can run as root
   script_user root
   enable_script_security
}

vrrp_script check_nginx {
    script "/usr/local/etc/shutdown.sh"         # path of the health-check script
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER            # MASTER on the primary node, BACKUP on the others
    interface eth0          # local NIC name
    virtual_router_id 95
    priority 99             # priority; the highest value wins the MASTER election
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        3.127.10.72/24      # virtual IP (VIP)
    }
    track_script {
        check_nginx         # reference the vrrp_script defined above
    }
}

Health-check script:

#!/bin/bash
# If nginx is not running, try to start it; if it still is not running,
# kill keepalived so the VIP fails over to another node.
A=$(ps -C nginx --no-header | wc -l)
if [ "$A" -eq 0 ]; then
        /home/nginx/nginx-1.20.1/sbin/nginx   # try to start nginx
        if [ "$(ps -C nginx --no-header | wc -l)" -eq 0 ]; then
                killall keepalived
        fi
fi

6. nginx load-balancing configuration


Load-balance the secure port of the cluster's api-servers. Note that nginx uses the stream module here to do layer-4 (TCP) proxying; a minimal sketch follows.
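The nginx configuration screenshot is not reproduced here. A minimal stream block, assuming nginx was built with --with-stream, the three master IPs from section 1, and port 7443 to match the controlPlaneEndpoint used below:

stream {
    upstream k8s-apiserver {
        server 3.127.10.209:6443;
        server 3.127.10.95:6443;
        server 3.127.10.66:6443;
    }
    server {
        listen 7443;
        proxy_pass k8s-apiserver;
    }
}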

7. Master node initialization

kubeadm config print init-defaults > kube-init.yaml

# cat kube-init.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.13.217   # change this to the local master's own IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: "3.127.10.72:7443"   # add this for an HA cluster; use the VIP address and the nginx listen port
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16  
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

Run the initialization:
kubeadm init --config kube-init.yaml

The other master nodes need the following certificates copied from the first master (a copy sketch follows the list):

/etc/kubernetes/pki/ca.*                 # cluster root CA certificate and key
/etc/kubernetes/pki/sa.*                 # ServiceAccount token signing key pair
/etc/kubernetes/pki/front-proxy-ca.*     # front-proxy CA certificate and key (signs front-proxy client certs)
/etc/kubernetes/pki/etcd/ca.*            # etcd root CA certificate and key
/etc/kubernetes/admin.conf               # kubeconfig used to authenticate to the cluster
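A minimal copy sketch run on the first master (assumes root SSH access; the target IPs are the other masters from section 1):

for host in 3.127.10.95 3.127.10.66; do
  ssh root@${host} "mkdir -p /etc/kubernetes/pki/etcd"
  scp /etc/kubernetes/pki/ca.* /etc/kubernetes/pki/sa.* /etc/kubernetes/pki/front-proxy-ca.* root@${host}:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/ca.* root@${host}:/etc/kubernetes/pki/etcd/
  scp /etc/kubernetes/admin.conf root@${host}:/etc/kubernetes/
done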

Join command for the remaining master nodes:

# kubeadm prints this in the output of a successful init

kubeadm join 172.18.178.240:7443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:274a3c078548240887903b51e75e2cc9548343e06dcd2a3ca0c3087c3fdd3175 \
--control-plane

Node (worker) nodes join the cluster with:
kubeadm join 172.18.178.240:7443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:274a3c078548240887903b51e75e2cc9548343e06dcd2a3ca0c3087c3fdd3175



8. Flannel network plugin installation

# Install the network plugin:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

Run kubectl get nodes to check the node status; when every node shows Ready the cluster is up. A quick verification sketch follows.
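A minimal verification, run on a master with admin.conf in place (the flannel pods may live in kube-system or a dedicated kube-flannel namespace depending on the manifest version):

kubectl get nodes -o wide
kubectl get pods -A | grep -E 'flannel|coredns'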
