2 Architecture Diagrams
2.1 Overall Architecture
Kubernetes runs on nodes, of which there are two kinds: master nodes and worker (slave) nodes.
K8s therefore follows a master-slave model (Master-Slave architecture): master nodes handle the core scheduling, management, and operations work, while worker nodes run the user workloads.
2.2 High-Level Architecture
3 Installing a Kubernetes Cluster
3.1 Installing with kind
kind website: https://kind.sigs.k8s.io/
3.1.1 Disable the Firewall
systemctl disable firewalld --now
3.1.2 Set SELinux to Permissive
sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
setenforce 0
3.1.3 Set the Hostname
hostnamectl set-hostname kind-Kubernetes
3.1.4 Set a Proxy
Fill in your own proxy address!
export http_proxy=192.168.0.10:7890
export https_proxy=192.168.0.10:7890
3.1.5 Download the kind Binary
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.19.0/kind-linux-amd64
3.1.6 Make the Binary Executable and Install It
chmod +x kind
mv kind /usr/local/bin/
restorecon -RvF /usr/local/bin/
3.1.7 Install Docker
yum -y install yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install docker-ce docker-ce-cli containerd.io --allowerasing
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://0wz2hvl3.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
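A syntax error in daemon.json will keep Docker from starting, so it can be worth validating the file before the restart. A small sketch, assuming python3 is available (its built-in json.tool module exits non-zero on invalid JSON):

```shell
# Validate /etc/docker/daemon.json before restarting Docker.
if python3 -m json.tool /etc/docker/daemon.json > /dev/null; then
    echo "daemon.json is valid JSON"
else
    echo "daemon.json is missing or NOT valid JSON -- fix it before restarting Docker" >&2
fi
```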
3.1.8 Create the Cluster
[root@kind-Kubernetes ~]# kind create cluster
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.27.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
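A bare `kind create cluster` gives a single-node cluster. To get something closer to the master/worker topology described in section 2, kind also accepts a config file; a sketch following the node-role layout from the kind quick-start docs (the cluster name `multi` is just an example):

```shell
# Write a kind config describing one control-plane node and two workers.
cat > kind-multi-node.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
# Then create the cluster from it (requires kind and Docker):
# kind create cluster --name multi --config kind-multi-node.yaml
```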
3.1.9 Install kubectl
# Download the package
curl -LO https://dl.k8s.io/release/$(curl -sL https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl
# Make it executable and move it to /usr/local/bin
chmod +x kubectl
mv kubectl /usr/local/bin
# Append to /etc/bashrc to enable kubectl tab completion
echo "source <(kubectl completion bash)" >> /etc/bashrc
# Test
[root@kind-Kubernetes ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:34597
  name: kind-kind
contexts:
- context:
    cluster: kind-kind
    user: kind-kind
  name: kind-kind
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: kind-kind
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
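Building on the completion setup above, the official kubectl docs also describe a short `k` alias that keeps tab completion working. A per-user sketch appending to ~/.bashrc (the system-wide /etc/bashrc used above works the same way; `__start_kubectl` is the completion function defined by the `kubectl completion bash` script):

```shell
# Add a short "k" alias for kubectl and keep tab completion working for it.
echo 'alias k=kubectl' >> "$HOME/.bashrc"
echo 'complete -o default -F __start_kubectl k' >> "$HOME/.bashrc"
```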
3.1.10 Creating Resources with kubectl
# Create resources with kubectl
kubectl create -f ./my-manifest.yaml              # Create a resource from a file
kubectl create -f ./my1.yaml -f ./my2.yaml        # Create from multiple files
kubectl create -f ./dir                           # Create from all manifest files in a directory
kubectl create -f https://git.io/vPieo            # Create from a URL
kubectl run nginx --image=nginx                   # Start a single nginx instance
kubectl explain pods,svc                          # Get documentation for pod and svc
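The cheat sheet above assumes manifest files like ./my-manifest.yaml already exist. As a hypothetical minimal example (the `nginx-demo` name and image tag are placeholders), a single-Pod manifest could look like this:

```shell
# A minimal Pod manifest to use with `kubectl create -f`.
cat > my-manifest.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
EOF
# kubectl create -f ./my-manifest.yaml   # (requires a running cluster)
```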
3.1.11 Listing and Finding Resources
# List and find resources
kubectl get services                              # List all services in the current namespace
kubectl get pods --all-namespaces                 # List all pods in all namespaces
kubectl get pods -o wide                          # List all pods in the current namespace, with more detail
kubectl get deployment my-dep                     # List a particular deployment
kubectl get pods --include-uninitialized          # List all pods in the namespace, including uninitialized ones (flag removed in newer kubectl versions)
3.1.12 Delete the kind Cluster
kind delete cluster
3.2 minikube
Like kind, minikube is a tool that lets you run Kubernetes locally. It runs a single-node Kubernetes cluster on your personal computer (Windows, macOS, or Linux) so you can try out Kubernetes or use it for day-to-day development work.
3.2.1 Pre-install Preparation
# Disable the firewall
systemctl disable firewalld --now
# Set SELinux to permissive
sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
setenforce 0
# Set the hostname
hostnamectl set-hostname mini-Kubernetes
# Set a proxy (use your own address)
export http_proxy=192.168.0.10:7890
export https_proxy=192.168.0.10:7890
3.2.2 Configure the Package Repositories
cat > /etc/yum.repos.d/Rocky.repo <<END
[BaseOS]
name=BaseOS
baseurl=https://mirrors.aliyun.com/rockylinux/9.2/BaseOS/x86_64/os/
gpgcheck=0
enabled=1
[AppStream]
name=AppStream
baseurl=https://mirrors.aliyun.com/rockylinux/9.2/AppStream/x86_64/os/
gpgcheck=0
enabled=1
END
3.2.3 Download and Install the RPM Package
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
sudo rpm -Uvh minikube-latest.x86_64.rpm
3.2.4 Install Required Software
yum -y install conntrack
3.2.5 Start minikube
# Start with the Docker (container) driver; --force lets minikube run as root, which it refuses by default
minikube start --force
# Or start with the QEMU-KVM driver
yum -y install libvirt qemu-kvm
systemctl enable libvirtd --now
minikube start --driver=kvm2 --force
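Before choosing the kvm2 driver above, it's worth confirming that the CPU actually exposes hardware virtualization, as the minikube docs suggest. A sketch (Linux-only, since it reads /proc/cpuinfo):

```shell
# vmx = Intel VT-x, svm = AMD-V; a count of 0 means no hardware
# virtualization is visible, so the kvm2 driver will not work.
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo || true)
if [ "$count" -gt 0 ]; then
    echo "hardware virtualization available on $count logical CPUs"
else
    echo "no hardware virtualization -- use the docker driver instead"
fi
```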
3.2.6 Configure kubectl
# Option 1 (recommended; just copy the commands)
# Download the package
curl -LO https://dl.k8s.io/release/$(curl -sL https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl
# Make it executable and move it to /usr/local/bin
chmod +x kubectl
mv kubectl /usr/local/bin
# Append to /etc/bashrc to enable kubectl tab completion
echo "source <(kubectl completion bash)" >> /etc/bashrc
# Option 2 (the kubectl invocation becomes somewhat clumsier)
minikube kubectl
3.2.7 Viewing Cluster Information
minikube kubectl -- <command>
# List all pods in the cluster (the two forms below are equivalent)
kubectl get pods -A
minikube kubectl -- get pods -A
[root@kind-Kubernetes ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default hello-minikube-7474c89846-tftgh 1/1 Running 0 16m
kube-system coredns-787d4945fb-pt95f 1/1 Running 0 66m
kube-system etcd-minikube 1/1 Running 0 67m
kube-system kube-apiserver-minikube 1/1 Running 0 66m
kube-system kube-controller-manager-minikube 1/1 Running 0 67m
kube-system kube-proxy-q647d 1/1 Running 0 66m
kube-system kube-scheduler-minikube 1/1 Running 0 66m
kube-system storage-provisioner 1/1 Running 1 (66m ago) 66m
[root@kind-Kubernetes ~]# minikube kubectl -- get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default hello-minikube-7474c89846-tftgh 1/1 Running 0 16m
kube-system coredns-787d4945fb-pt95f 1/1 Running 0 66m
kube-system etcd-minikube 1/1 Running 0 67m
kube-system kube-apiserver-minikube 1/1 Running 0 67m
kube-system kube-controller-manager-minikube 1/1 Running 0 67m
kube-system kube-proxy-q647d 1/1 Running 0 66m
kube-system kube-scheduler-minikube 1/1 Running 0 67m
kube-system storage-provisioner 1/1 Running 1 (66m ago) 67m
3.2.8 Deploy an Application
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment hello-minikube --type=NodePort --port=8080
kubectl get services hello-minikube
minikube service hello-minikube
kubectl port-forward service/hello-minikube 7080:8080
curl http://localhost:7080/
# Test
[root@kind-Kubernetes ~]# curl localhost:7080
CLIENT VALUES:
client_address=127.0.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://localhost:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=localhost:7080
user-agent=curl/7.76.1
BODY:
-no body in request-
3.2.9 Deploying a Load Balancer
(Note: `minikube tunnel` is designed for `--type LoadBalancer` services; here it is used with a NodePort service, and the route it adds still makes the service reachable at its cluster IP.)
kubectl create deployment balanced --image k8s.gcr.io/echoserver:1.4
kubectl expose deployment balanced --type NodePort --port 8080
[root@kind-Kubernetes ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
balanced NodePort 10.97.25.19 <none> 8080:30265/TCP 106s
hello-minikube NodePort 10.105.85.113 <none> 8080:30306/TCP 22m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 73m
# In another terminal
[root@kind-Kubernetes ~]# minikube tunnel
Status:
machine: minikube
pid: 296510
route: 10.96.0.0/12 -> 192.168.49.2
minikube: Running
services: []
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors
# Test (back in the original terminal)
[root@kind-Kubernetes ~]# curl 10.97.25.19:8080
CLIENT VALUES:
client_address=10.244.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://10.97.25.19:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=10.97.25.19:8080
user-agent=curl/7.76.1
BODY:
-no body in request-
3.2.10 Delete the minikube Cluster
minikube delete
3.3 kubeadm
The kubeadm tool creates and manages Kubernetes clusters. It performs the necessary actions to bring up a working, secure cluster in a user-friendly way.
3.3.1 Environment Topology
3.3.2 Node Overview
Node | OS | IP address |
master | Rocky9.2 | 192.168.0.100 |
node1 | Rocky9.2 | 192.168.0.101 |
node2 | Rocky9.2 | 192.168.0.102 |
node3 | Rocky9.2 | 192.168.0.103 |
3.3.3 Configure Passwordless SSH
Run on the master node (an SSH key pair must already exist)
# If there is no SSH key pair yet, generate one with ssh-keygen
#!/bin/bash
yum -y install sshpass
for i in {master,node1,node2,node3}
do
sshpass -p 123 ssh -o StrictHostKeyChecking=no root@$i "sed -i 's/^#.*StrictHostKeyChecking.*/StrictHostKeyChecking no/' /etc/ssh/ssh_config"
sshpass -p 123 ssh-copy-id $i;
done
3.3.4 The hosts File
Run on the master node
cat > /etc/hosts <<END
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.100 master
192.168.0.101 node1
192.168.0.102 node2
192.168.0.103 node3
END
3.3.5 Base Configuration
Initialization script run from the master node (disables the firewall and SELinux, sets hostnames)
#!/bin/bash
for i in {master,node1,node2,node3}
do
scp /etc/hosts $i:/etc/hosts
ssh root@$i "hostnamectl set-hostname $i"
ssh root@$i "systemctl disable firewalld --now"
ssh root@$i "sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config"
ssh root@$i "setenforce 0"
done
3.3.6 Let iptables See Bridged Traffic
Run on all nodes
modprobe overlay
modprobe br_netfilter
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
overlay
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
vm.swappiness=0
EOF
sudo sysctl --system
Verify:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
3.3.7 Install containerd
Install the latest version (1.7.2) on all nodes; the tarball is already laid out in the directory structure recommended by the official binary installation docs.
#!/bin/bash
wget https://github.com/containerd/containerd/releases/download/v1.7.2/cri-containerd-cni-1.7.2-linux-amd64.tar.gz
# Extract into the root directory
yum -y install tar
tar xf cri-containerd-cni-1.7.2-linux-amd64.tar.gz -C /
# Generate the default config under /etc
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# Point the sandbox (pause) image at the Aliyun mirror
sed -i 's|sandbox_image =.*|sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"|' /etc/containerd/config.toml
# Enable and start containerd
systemctl enable --now containerd.service
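One gotcha worth flagging: the kubelet is configured later in this guide with cgroupDriver: systemd, while containerd's generated default config has SystemdCgroup = false in the runc runtime options, so the two cgroup drivers won't match. A sketch of the usual fix, assuming the default config generated above:

```shell
# Switch containerd's runc runtime to the systemd cgroup driver so it
# matches the kubelet's cgroupDriver: systemd, then restart containerd.
CONF=/etc/containerd/config.toml
if [ -f "$CONF" ]; then
    sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$CONF"
    systemctl restart containerd.service
fi
```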
3.3.8 Common crictl Commands
The commands are largely the same as those of docker and podman.
# List images
crictl images
# Show version
crictl version
# List running containers
crictl ps
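If crictl complains that it cannot connect to a runtime, it usually needs to be told where containerd's CRI socket lives. The common fix is an /etc/crictl.yaml like the following (socket path matching the containerd install above; the 10-second timeout is an arbitrary example value):

```shell
# Point crictl at containerd's CRI socket so the commands above work
# without passing --runtime-endpoint every time.
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF
```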
3.3.9 Install kubeadm
Run on all nodes
#!/bin/bash
# Configure the yum repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.tuna.tsinghua.edu.cn/kubernetes/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=0
exclude=kubelet kubeadm kubectl
EOF
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
3.3.10 Initialize via Command Line
Flag | Description |
apiserver-advertise-address | The network address the Kubernetes API server should advertise on |
image-repository | Image registry to pull the control-plane images from |
kubernetes-version | Kubernetes version |
service-cidr | Service virtual IP address range |
pod-network-cidr | Pod network address range |
#!/bin/bash
IP_ADDR=$(ip addr show | grep inet | grep -v inet6 | grep -v 127.0.0.1 | awk '{print $2}' | cut -d/ -f1 | head -n1)
kubeadm init \
--apiserver-advertise-address=$IP_ADDR \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.27.2 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
Do not press Ctrl-C while the install is running, or it will be left in a broken state (I haven't found a way to recover from that...)
# After a successful install:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.100:6443 --token l0y7pb.d9bqusf9dib3ff75 \
--discovery-token-ca-cert-hash sha256:fde40a585f868f71e568cb303e3de874913283fdf8ad86f5a5fdf94b140fd586
View the running containers:
[root@master ~]# crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
54e62dc8eb088 ead0a4a53df89 20 seconds ago Running coredns 1 e05ebde779d3a coredns-7bdc4cb885-cx4cz
a979e9cbb7fc7 ead0a4a53df89 20 seconds ago Running coredns 1 577ecfd8213c4 coredns-7bdc4cb885-4vpdb
8963fb69890ea b8aa50768fd67 30 seconds ago Running kube-proxy 1 f27da8f8202d3 kube-proxy-vwdlc
bbc4d7620e53e ac2b7465ebba9 About a minute ago Running kube-controller-manager 1 ee371a760d552 kube-controller-manager-master
2772082789973 89e70da428d29 About a minute ago Running kube-scheduler 1 af26b2ee9e597 kube-scheduler-master
04ad73c6d8fa2 c5b13e4f7806d About a minute ago Running kube-apiserver 1 426ce97f88d27 kube-apiserver-master
5ddf244e34f60 86b6af7dd652c About a minute ago Running etcd 1 6369ea3821e79 etcd-master
3.3.11 Initialize via Config File
# Print the default configuration used for cluster initialization
kubeadm config print init-defaults --component-configs KubeletConfiguration
Create kubeadm.yml:
cat > kubeadm.yml <<END
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.100
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta3
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.27.0
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
failSwapOn: false
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
END
3.3.12 Pull the Images
kubeadm config images pull --config kubeadm.yml
3.3.13 Initialize the Cluster
kubeadm init --config kubeadm.yml
Save the last part of the output:
kubeadm join 192.168.0.100:6443 --token qh23jn.u39gj0cbiwkfmf57 \
--discovery-token-ca-cert-hash sha256:edcfcf8509b3064dcfac69a3baa61cd1f47c2f7bf14b21fedeabd076e8a0c779
If it has scrolled off the screen, you can regenerate it:
[root@master ~]# kubeadm token generate
p2dv5o.krpxq686taalfezf
# Just run the generated command on each node
[root@master ~]# kubeadm token create p2dv5o.krpxq686taalfezf --print-join-command
kubeadm join 192.168.0.100:6443 --token p2dv5o.krpxq686taalfezf --discovery-token-ca-cert-hash sha256:8fa1a54de09bbfe9cd3ba790c9910e382f2d72bdf0c9e29aae0ef61b84960c35
3.3.14 Join Nodes to the Cluster
Run on node1 through node3:
kubeadm join 192.168.0.100:6443 --token p2dv5o.krpxq686taalfezf --discovery-token-ca-cert-hash sha256:8fa1a54de09bbfe9cd3ba790c9910e382f2d72bdf0c9e29aae0ef61b84960c35
3.3.15 Check Cluster Status
Before starting, set up the appropriate kubeconfig permissions.
# Manage as a regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Manage as root
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/bashrc
Check the cluster status:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node Ready control-plane 13m v1.27.2
node1 Ready <none> 6m36s v1.27.2
node2 Ready <none> 3m10s v1.27.2
node3 Ready <none> 3m9s v1.27.2