I. Basic Environment Deployment
1. Node planning
Role   | hostname   | IP address
-------|------------|--------------
master | k8s-master | 192.168.20.17
node   | k8s-node1  | 192.168.20.18
node   | k8s-node2  | 192.168.20.19
node   | k8s-node3  | 192.168.20.20
2. Hosts file entries (run on all nodes)
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.20.17 k8s-master
192.168.20.18 k8s-node1
192.168.20.19 k8s-node2
192.168.20.20 k8s-node3

3. Create an SSH trust relationship from the master to each node
[root@k8s-master ~]# ssh-copy-id k8s-master
[root@k8s-master ~]# ssh-copy-id k8s-node1
[root@k8s-master ~]# ssh-copy-id k8s-node2
[root@k8s-master ~]# ssh-copy-id k8s-node3
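ssh-copy-id requires that the master already has a key pair; if one does not exist yet, generate it first and push it to every node in one loop (a minimal sketch; the non-interactive RSA key with an empty passphrase is an assumption, not from the original):
# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# for h in k8s-master k8s-node1 k8s-node2 k8s-node3; do ssh-copy-id "$h"; done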
4. Disable SELinux and the firewall (run on all nodes)
# vim /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
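The config-file change only takes effect after a reboot; to stop SELinux from enforcing in the running session as well:
# setenforce 0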
# service firewalld stop
Redirecting to /bin/systemctl stop firewalld.service
# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

5. Disable the swap partition (run on all nodes)
Linux falls back to the swap partition when memory runs low, but swap performance is poor, so Kubernetes refuses to use it by default;
Disable temporarily:
# swapoff -a
Disable permanently:
# vim /etc/fstab     (comment out the swap entry, the last line here)
# mount -a
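As an alternative to editing /etc/fstab by hand, the swap entry can be commented out non-interactively (a sketch; note it comments every line that mentions swap):
# sed -ri 's/.*swap.*/#&/' /etc/fstab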
6. Adjust kernel parameters and modules (run on all nodes)
overlay supports the overlay filesystem, providing the filesystem isolation containers need;
br_netfilter enables filtering of bridged network traffic, which Kubernetes networking relies on;
# cat /etc/modules-load.d/k8s.conf
br_netfilter
overlay
# modprobe br_netfilter
# modprobe overlay
---------------------------------------
# cat /etc/sysctl.conf |grep -Ev '^#|^$'
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# sysctl -p
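To confirm the modules are loaded and the parameters are active (a quick sanity check, not in the original steps):
# lsmod | grep -E 'br_netfilter|overlay'
# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward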
7. Update and configure package sources (run on all nodes)
Add the Aliyun yum repo:
# wget -O /etc/yum.repos.d/Centos-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
Rebuild the yum metadata cache:
# yum clean all && yum makecache
Configure the Aliyun Docker yum repository:
# yum install -y yum-utils
# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

8. Configure ipvs support (for load balancing; run on all nodes)
# yum install -y ipset ipvsadm
# cat /etc/sysconfig/modules/ipvs.modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
# chmod a+x /etc/sysconfig/modules/ipvs.modules
# bash /etc/sysconfig/modules/ipvs.modules
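Confirm the ipvs modules actually loaded (a quick check):
# lsmod | grep -e ip_vs -e nf_conntrack_ipv4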
9. Configure time synchronization (run on all nodes)
9.1 Set the time zone (Asia/Shanghai)
# timedatectl

9.2 Select the time zone interactively
# tzselect
Please identify a location so that time zone rules can be set correctly.
Please select a continent or ocean.
1) Africa
2) Americas
3) Antarctica
4) Arctic Ocean
5) Asia
6) Atlantic Ocean
7) Australia
8) Europe
9) Indian Ocean
10) Pacific Ocean
11) none - I want to specify the time zone using the Posix TZ format.
#? 5
Please select a country.
1) Afghanistan 18) Israel 35) Palestine
2) Armenia 19) Japan 36) Philippines
3) Azerbaijan 20) Jordan 37) Qatar
4) Bahrain 21) Kazakhstan 38) Russia
5) Bangladesh 22) Korea (North) 39) Saudi Arabia
6) Bhutan 23) Korea (South) 40) Singapore
7) Brunei 24) Kuwait 41) Sri Lanka
8) Cambodia 25) Kyrgyzstan 42) Syria
9) China 26) Laos 43) Taiwan
10) Cyprus 27) Lebanon 44) Tajikistan
11) East Timor 28) Macau 45) Thailand
12) Georgia 29) Malaysia 46) Turkmenistan
13) Hong Kong 30) Mongolia 47) United Arab Emirates
14) India 31) Myanmar (Burma) 48) Uzbekistan
15) Indonesia 32) Nepal 49) Vietnam
16) Iran 33) Oman 50) Yemen
17) Iraq 34) Pakistan
#? 9
Please select one of the following time zone regions.
1) Beijing Time
2) Xinjiang Time
#? 1
The following information has been given:
China
Beijing Time
Therefore TZ='Asia/Shanghai' will be used.
Local time is now: Thu May 11 17:50:40 CST 2023.
Universal Time is now: Thu May 11 09:50:40 UTC 2023.
Is the above information OK?
1) Yes
2) No
#? 1
You can make this change permanent for yourself by appending the line
TZ='Asia/Shanghai'; export TZ
to the file '.profile' in your home directory; then log out and log in again.
Here is that TZ value again, this time on standard output so that you
can use the /usr/bin/tzselect command in shell scripts:
Asia/Shanghai

9.3 Set the system time zone
# sudo timedatectl set-timezone 'Asia/Shanghai'
or
# echo "Asia/Shanghai" > /etc/timezone9.4 设置时间
# rm -rf /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

9.5 Synchronize time with NTP
1. Install and enable the service
# yum -y install ntp
# systemctl start ntpd
# systemctl enable ntpd
2. Edit ntp.conf
# vim /etc/ntp.conf (only the modified section is shown)
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server iburst
3. Restart the service
# systemctl restart ntpd
4. Check synchronization status
# ntpq -p
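In the ntpq -p output, the peer prefixed with '*' is the server currently selected for synchronization; ntpstat, shipped in the same ntp package, prints a one-line status summary (a quick check, not in the original steps):
# ntpstat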
5. Write the system time to the hardware clock
# hwclock -w

II. Functional Component Deployment
1. Docker deployment (run on all nodes)
1.1 Install and start Docker
# yum install -y docker-ce-20.10.24-3.el7 docker-ce-cli-20.10.24-3.el7 containerd.io
Start docker:
# systemctl start docker
Enable at boot:
# systemctl enable docker
Verify:
# systemctl status docker

1.2 Configure docker, then restart it
# cat /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://",
    "http://",
    "https://",
    "https://"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

1.3 Reboot and verify the installation
# reboot
# docker ps
# docker ps here confirms that docker started successfully
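daemon.json switched Docker's cgroup driver to systemd, which must match the kubelet's cgroup driver configured later; it is worth confirming the change took effect (a quick check):
# docker info | grep -i 'cgroup driver'
Cgroup Driver: systemd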
2. Configure the K8S cluster (run on all nodes)
2.1 Configure the K8S package repository
# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

2.2 Build the local yum cache
# yum makecache

2.3 Install K8S
# yum install -y kubeadm-1.23.17-0 kubelet-1.23.17-0 kubectl-1.23.17-0
# cat <<EOF > /etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF
# systemctl enable --now kubelet

3. Cluster initialization
3.1 Initialize the master (run only on the master):
# kubeadm init \
--kubernetes-version=v1.23.17 \
--pod-network-cidr=10.224.0.0/16 \
--service-cidr=10.96.0.0/12 --apiserver-advertise-address=192.168.20.17 \
--image-repository=registry.aliyuncs.com/google_containers

The following output indicates success:
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.20.17:6443 --token zqw81t.n1walus16xxm4fg6 \
--discovery-token-ca-cert-hash sha256:886c942d75eb7f46b5396e1888fab92a5b565e396deb99273a5ad2ab2272e192

3.2 Run the commands from the init output above, one at a time
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

3.3 Join the nodes to the cluster (run on every node)
# kubeadm join 192.168.20.17:6443 --token zqw81t.n1walus16xxm4fg6 \
--discovery-token-ca-cert-hash sha256:886c942d75eb7f46b5396e1888fab92a5b565e396deb99273a5ad2ab2272e192
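Bootstrap tokens expire after 24 hours by default; if the join fails because the token is no longer valid, print a fresh join command on the master:
# kubeadm token create --print-join-command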
3.4 Label the nodes (on the master):
# kubectl label node k8s-node1 node-role.kubernetes.io/worker=worker
# kubectl label node k8s-node2 node-role.kubernetes.io/worker=worker
# kubectl label node k8s-node3 node-role.kubernetes.io/worker=worker
3.5 Install the Calico network plugin (only on the master)
# wget https://docs.projectcalico.org/archive/v3.25/manifests/calico.yaml
# kubectl apply -f calico.yaml
Wait until the STATUS of every node becomes Ready (this can take a while; if it takes too long or fails, check why: failures are usually image pull errors, which can be fixed by pulling the image manually, as sketched below)
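If a calico pod stays stuck, its events usually name the image that failed to pull (the k8s-app=calico-node label is set by the Calico manifest):
# kubectl describe pod -n kube-system -l k8s-app=calico-node
Look for ErrImagePull/ImagePullBackOff in the Events section, then docker pull the listed image on the affected node.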
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 155m v1.23.17
k8s-node1 Ready worker 153m v1.23.17
k8s-node2 Ready worker 153m v1.23.17
k8s-node3 Ready worker 153m v1.23.17

3.6 Configure ipvs for kube-proxy
# kubectl edit configmap kube-proxy -n kube-system
Only one change is needed (set mode to ipvs):
kind: KubeProxyConfiguration
metricsBindAddress: ""
mode: "ipvs"
Delete all kube-proxy pods so they restart with the new configuration:
# kubectl delete pods -n kube-system -l k8s-app=kube-proxy
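Once the new pods are up, kube-proxy should log that the ipvs proxier is in use, and ipvsadm (installed in step 8 of Part I) should list virtual servers (a quick check):
# kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=100 | grep -i ipvs
# ipvsadm -Ln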
3.7 Verify the cluster
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-64cc74d646-xfj58 1/1 Running 0 154m
calico-node-94nhm 1/1 Running 0 154m
calico-node-jn9cf 1/1 Running 0 154m
calico-node-lq6qh 1/1 Running 0 154m
calico-node-xbzf4 1/1 Running 0 154m
coredns-6d8c4cb4d-5xxhf 1/1 Running 0 158m
coredns-6d8c4cb4d-78872 1/1 Running 0 158m
etcd-k8s-master 1/1 Running 0 158m
kube-apiserver-k8s-master 1/1 Running 0 158m
kube-controller-manager-k8s-master 1/1 Running 0 158m
kube-proxy-5npm9 1/1 Running 0 148m
kube-proxy-bzwvp 1/1 Running 0 148m
kube-proxy-htfrm 1/1 Running 0 148m
kube-proxy-zw2qp 1/1 Running 0 148m
kube-scheduler-k8s-master 1/1 Running 0 158m
# kubectl get ns
NAME STATUS AGE
default Active 159m
kube-node-lease Active 159m
kube-public Active 159m
kube-system Active 159m
kubernetes-dashboard Active 140m

IV. Install the dashboard software (optional)
1. Install the Dashboard (run only on the master)
# wget https://mirror.ghproxy.com/https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# vim recommended.yaml (add the following to the kubernetes-dashboard Service definition; the nodePort can be changed to suit)
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30088
  selector:
    k8s-app: kubernetes-dashboard
Apply it:
# kubectl apply -f recommended.yaml
Check:
# kubectl get pods,svc -n kubernetes-dashboard
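Before opening a browser, the NodePort can be smoke-tested from the shell (-k skips certificate verification, since the dashboard serves a self-signed certificate):
# curl -k https://192.168.20.17:30088/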
Once the steps above have succeeded, continue:
# vim dashboard-access-token.yaml
The finished file looks like this:
# cat dashboard-access-token.yaml
# Creating a Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
# Creating a ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
---
# Getting a long-lived Bearer Token for the ServiceAccount
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
---------------------------------------
# kubectl apply -f dashboard-access-token.yaml
Access the dashboard:
[root@k8s-master ~]# kubectl get svc -n kubernetes-dashboard

2. Get the token and log in
# kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath={".data.token"}|base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6IkdCamM4dHBOal82ZG5kS0o4ODZEMTBRZEFFbnVfZVRkZlBqM0ZtbTVhNE0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1ZDQwNDM1Ny05ZDFmLTQ0OGQtOTQwNC02ZmFiOWYzMDgwM2YiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.jiOKOp1SlFXADOHo00-do76q7u3_C83ZOutTubx2JEyNBB-E4JtQ-ZHnqcvo3f_b4ny9YadCoAGOouLTN3adzwaoI-UPIhA_KWd4A1mV6qKNZPbwOTjxAjHYfbLF1to3T91yplApsusjmCg_6bOGsG9XJMyio0kGh6pR2IGRwmGqd_eN8N71jRenyRoSXDTINDkYsKmSZ_uF5M22wMcoXFLKuRrypbdd8lUO8pKg8eqSqeeldCGg9YMvqdLOxTgUPI36or_hvXFgGngPoDehXUvAohFvE6V4IEA692lxYxL15vinc7NShQQ2LcPxwVE6ZJhEZFjYGkODen9l4avzpA
Log in at https://192.168.20.17:30088/#/login with this token.