Tags (space-separated): kubernetes series
1: Kubernetes networking options
1.1 Overlay networking
Overlay networks:
An overlay network (also called a stacked or covering network) is a virtual network layered on top of the physical network,
so that containers on the network can communicate with each other.
Its advantage is good compatibility with the physical network, enabling pod communication
across host subnets.
Network plugins such as Calico and Flannel support overlay networking; the drawback is the extra encapsulation/decapsulation overhead.
It is currently common in private clouds.
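The encapsulation overhead can be observed directly on a node. A hedged sketch, assuming Flannel with the VXLAN backend is already deployed (interface names may differ in your environment):

```shell
# Show the VXLAN device Flannel creates; '-d' prints VXLAN details
# such as the VNI and the UDP port used for encapsulation.
ip -d link show flannel.1

# Note the MTU: it is typically 50 bytes below the physical NIC's MTU,
# which is exactly the VXLAN encapsulation overhead.
ip link show eth0 | grep -o 'mtu [0-9]*'
```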
Flannel's network model
Calico's network model
1.2 Underlay networking
An underlay network is the traditional IT infrastructure network, composed of switches, routers and similar devices
and driven by Ethernet, routing and VLAN protocols. It is also the network underneath an overlay, providing the overlay with data transport.
In container networking, an underlay network is a technique that uses a driver to expose the host's underlying
network interface directly to containers; common solutions include MACVLAN, IPVLAN and direct routing.
MACVLAN mode:
MACVLAN virtualizes multiple network interfaces (sub-interfaces) on a single Ethernet interface; each virtual interface has a unique MAC address
and can be assigned its own sub-interface IP.
IPVLAN mode:
IPVLAN is similar to MACVLAN: it also creates virtual network interfaces and assigns each a unique IP address.
The difference is that every virtual interface shares the MAC address of the physical interface.
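The difference between the two is easy to see with plain iproute2 commands. A minimal sketch, assuming a parent NIC named eth0, root privileges, and free addresses in the host subnet (all names and IPs here are illustrative):

```shell
# MACVLAN: each sub-interface gets its own, newly generated MAC address
ip link add link eth0 name macvlan0 type macvlan mode bridge
ip addr add 172.16.10.100/24 dev macvlan0
ip link set macvlan0 up

# IPVLAN: sub-interfaces share the parent interface's MAC address
ip link add link eth0 name ipvlan0 type ipvlan mode l2
ip addr add 172.16.10.101/24 dev ipvlan0
ip link set ipvlan0 up

# Compare: macvlan0 shows a new MAC, ipvlan0 reuses eth0's MAC
ip link show macvlan0
ip link show ipvlan0
```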
Networking - MACVLAN working modes:
bridge mode:
In bridge mode, macvlan containers on the same host network can communicate with each other directly; this mode is recommended.
Networking - summary:
Overlay: a stacked network built on encapsulation technologies such as VXLAN or NVGRE.
Underlay (Macvlan): virtualizes multiple network interfaces (sub-interfaces) from the host's physical NIC; each virtual interface has
a unique MAC address and can be assigned its own sub-interface IP.
2: Installing Kubernetes 1.28.x
2.1 System overview
Operating system:
CentOS 7.9 x86_64
Host names:
cat /etc/hosts
---
172.16.10.11 flyfish11
172.16.10.12 flyfish12
172.16.10.13 flyfish13
172.16.10.14 flyfish14
172.16.10.15 flyfish15
---
Note: this installation uses flyfish11 as the master and flyfish12/flyfish13/flyfish14 as workers.
On every node, disable SELinux, disable firewalld, and flush any iptables rules.
2.2 System initialization
# Set the time zone and synchronize time
yum install chrony -y
vim /etc/chrony.conf
-----
server ntp1.aliyun.com iburst
-----
systemctl enable chronyd --now
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
# Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
## Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Kernel tuning
cat > /etc/sysctl.d/k8s_better.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
modprobe br_netfilter
lsmod |grep conntrack
modprobe ip_conntrack
sysctl -p /etc/sysctl.d/k8s_better.conf
# Make sure every machine has a distinct UUID; on cloned VMs, delete the uuid line from the NIC config file
cat /sys/class/dmi/id/product_uuid
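Before moving on, it is worth verifying that the tuning above actually took effect (expected values are those set in the sysctl file above):

```shell
# Confirm the bridge netfilter module is loaded
lsmod | grep br_netfilter

# Spot-check a few of the sysctls applied above
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward vm.swappiness

# Confirm swap is really off (totals should be 0)
free -m | grep -i swap
```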
2.3 Install IPVS forwarding support [all nodes]
### System dependency packages
yum install -y wget jq psmisc vim net-tools nfs-utils socat telnet device-mapper-persistent-data lvm2 git network-scripts tar curl conntrack ipvsadm ipset iptables sysstat libseccomp
### Enable IPVS forwarding
modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack
2.4 Install containerd [all nodes]
Create the /etc/modules-load.d/containerd.conf config file:
cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
Add the Aliyun YUM repository:
vim /etc/yum.repos.d/docker-ce.repo
------------------
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
----------------------
yum makecache fast
Download and install:
yum install -y containerd.io
Generate the containerd config file:
mkdir /etc/containerd -p
containerd config default > /etc/containerd/config.toml
Edit the config file:
vim /etc/containerd/config.toml
-----
Change SystemdCgroup = false to SystemdCgroup = true
Change sandbox_image = "k8s.gcr.io/pause:3.6"
to:
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
------
# systemctl enable containerd
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
# systemctl start containerd
# ctr images ls
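To confirm containerd works end-to-end, one can try pulling the pause image configured above through ctr (a sketch; assumes network access to the Aliyun registry):

```shell
# Verify the client can reach the containerd daemon
ctr version

# Pull the sandbox image configured in config.toml and confirm it is listed
ctr images pull registry.aliyuncs.com/google_containers/pause:3.9
ctr images ls | grep pause
```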
3: Installing Kubernetes 1.28.x
3.1 Configure the Kubernetes 1.28.x YUM repository
1. Add the Aliyun YUM repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast
## List all available versions
yum list kubelet --showduplicates | sort -r |grep 1.28
3.2 Install kubeadm, kubelet and kubectl
The latest version at the time of writing is 1.28.2, so we install it directly:
yum install -y kubectl kubelet kubeadm
To keep the cgroup driver used by the container runtime (containerd) consistent with the one used by kubelet, edit the following file:
# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
Enable kubelet to start at boot; since no config file has been generated yet, it will only start automatically after cluster initialization:
# systemctl enable kubelet
Prepare the images required by Kubernetes 1.28.2:
kubeadm config images list --kubernetes-version=v1.28.2
## Pull the images from the Aliyun registry:
# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
Cluster initialization
Initialize the cluster with the kubeadm init command.
Run on flyfish11; if errors occur, see the Kubernetes troubleshooting notes.
kubeadm init --kubernetes-version=v1.28.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --service-dns-domain=cluster.local --apiserver-advertise-address=172.16.10.11 --image-repository registry.aliyuncs.com/google_containers
--apiserver-advertise-address  the address advertised to the cluster
--image-repository  the default registry k8s.gcr.io is unreachable from mainland China, so point at the Aliyun mirror
--kubernetes-version  the Kubernetes version, matching what was installed above
--service-cidr  the cluster's internal virtual (service) network, the unified access entry for pods
--pod-network-cidr  the pod network; must match the CNI component YAML deployed below
Another initialization variant: put the service network on the physical network as well.
For an underlay setup, initialize the service CIDR from the underlying (physical) address range.
Cluster initialization
Initialize the cluster with the kubeadm init command.
Run on flyfish11; if errors occur, see the Kubernetes troubleshooting notes.
kubeadm init --kubernetes-version=v1.28.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=172.16.10.0/24 --service-dns-domain=cluster.local --apiserver-advertise-address=172.16.10.11 --image-repository registry.aliyuncs.com/google_containers
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.10.11:6443 --token xe79gx.nyq56omv5980frdc \
--discovery-token-ca-cert-hash sha256:31ba8cd9ec92c98a9cc43dcefc2731a5ef1ebddbb1021fdafe2eca497cd2e683
Run on flyfish12/flyfish13/flyfish14:
kubeadm join 172.16.10.11:6443 --token xe79gx.nyq56omv5980frdc \
--discovery-token-ca-cert-hash sha256:31ba8cd9ec92c98a9cc43dcefc2731a5ef1ebddbb1021fdafe2eca497cd2e683
# Check the cluster nodes:
kubectl get node
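One caveat: loading the IPVS kernel modules earlier does not by itself switch kube-proxy to IPVS; kubeadm leaves kube-proxy in iptables mode by default. A hedged sketch of the switch, done after initialization:

```shell
# Set mode: "ipvs" in the kube-proxy configuration
kubectl -n kube-system edit configmap kube-proxy

# Restart kube-proxy so the change takes effect
kubectl -n kube-system rollout restart daemonset kube-proxy

# Verify: ipvsadm should now list virtual services for each Service IP
ipvsadm -Ln
```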
4: Deploying the underlay network
4.1 Install the Helm package manager
# Download helm
wget https://get.helm.sh/helm-v3.13.0-linux-amd64.tar.gz
# Extract the archive
tar zxvf helm-v3.13.0-linux-amd64.tar.gz
# Move the binary into the PATH
mv linux-amd64/helm /usr/bin/helm
# Check the version
helm version
4.2 Deploy the hybridnet network component
Add the helm repo:
helm repo add hybridnet https://alibaba.github.io/hybridnet/
Update the repo:
helm repo update
## Configure the overlay pod network (the pod CIDR specified at kubeadm init time); if --set init.cidr=10.244.0.0/16
## is omitted, the default 100.64.0.0/16 is used
helm install hybridnet hybridnet/hybridnet -n kube-system --set init.cidr=10.244.0.0/16
kubectl get pod -n kube-system
kubectl describe pod hybridnet-manager-bf5988977-6shck -n kube-system
kubectl get node --show-labels
Label the nodes:
kubectl label node flyfish11 node-role.kubernetes.io/master=
kubectl label node flyfish12 node-role.kubernetes.io/master=
kubectl label node flyfish13 node-role.kubernetes.io/master=
kubectl label node flyfish14 node-role.kubernetes.io/master=
kubectl get node
kubectl get pod -n kube-system
The pod network can now use the underlying physical network directly.
4.3 Create the underlay network and associate it with the nodes
mkdir hybridnet
cd hybridnet/
kubectl label node flyfish11 network=underlay-nethost
kubectl label node flyfish12 network=underlay-nethost
kubectl label node flyfish13 network=underlay-nethost
kubectl label node flyfish14 network=underlay-nethost
## Create the underlay network
vim 1.create-underlay-network.yaml
---
apiVersion: networking.alibaba.com/v1
kind: Network
metadata:
  name: underlay-network1
spec:
  netID: 0
  type: Underlay
  nodeSelector:
    network: "underlay-nethost"
---
apiVersion: networking.alibaba.com/v1
kind: Subnet
metadata:
  name: underlay-network1
spec:
  network: underlay-network1
  netID: 0
  range:
    version: "4"
    cidr: "172.16.10.0/24"
    gateway: "172.16.10.2"   # external gateway address
    start: "172.16.10.20"
    end: "172.16.10.254"
---
kubectl apply -f 1.create-underlay-network.yaml
kubectl get network
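hybridnet tracks address usage in the status of its CRDs, so the underlay network can be sanity-checked right after creation (resource names per the manifest above):

```shell
# List the hybridnet Network and Subnet created above
kubectl get network underlay-network1
kubectl get subnet underlay-network1

# Inspect total/used/available IP counts in the subnet status
kubectl get subnet underlay-network1 -o yaml
```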
Create an overlay pod:
vim 2.tomcat-app1-overlay.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-tomcat-app1-deployment-overlay-label
  name: myserver-tomcat-app1-deployment-overlay
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-tomcat-app1-overlay-selector
  template:
    metadata:
      labels:
        app: myserver-tomcat-app1-overlay-selector
    spec:
      nodeName: flyfish12
      containers:
      - name: myserver-tomcat-app1-container
        #image: tomcat:7.0.93-alpine
        image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1
        imagePullPolicy: IfNotPresent
        ##imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        #resources:
        #  limits:
        #    cpu: 0.5
        #    memory: "512Mi"
        #  requests:
        #    cpu: 0.5
        #    memory: "512Mi"
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-tomcat-app1-service-overlay-label
  name: myserver-tomcat-app1-service-overlay
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30003
  selector:
    app: myserver-tomcat-app1-overlay-selector
---
kubectl create ns myserver
kubectl apply -f 2.tomcat-app1-overlay.yaml
Pre-pull the image:
ctr -n k8s.io image pull registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1
kubectl get pod -n myserver
curl http://10.244.0.5:8080/myapp/
kubectl get svc -n myserver
Access in a browser:
http://172.16.10.11:30003/myapp/
Using the underlay network:
vim 3.tomcat-app1-underlay.yaml
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-tomcat-app1-deployment-underlay-label
  name: myserver-tomcat-app1-deployment-underlay
  namespace: myserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myserver-tomcat-app1-underlay-selector
  template:
    metadata:
      labels:
        app: myserver-tomcat-app1-underlay-selector
      annotations:   # choose Underlay or Overlay networking
        networking.alibaba.com/network-type: Underlay
    spec:
      #nodeName: k8s-node2.example.com
      containers:
      - name: myserver-tomcat-app1-container
        #image: tomcat:7.0.93-alpine
        image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v2
        imagePullPolicy: IfNotPresent
        ##imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        #resources:
        #  limits:
        #    cpu: 0.5
        #    memory: "512Mi"
        #  requests:
        #    cpu: 0.5
        #    memory: "512Mi"
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-tomcat-app1-service-underlay-label
  name: myserver-tomcat-app1-service-underlay
  namespace: myserver
spec:
  #type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    #nodePort: 40003
  selector:
    app: myserver-tomcat-app1-underlay-selector
kubectl apply -f 3.tomcat-app1-underlay.yaml
kubectl get pod -n myserver -o wide
Test connectivity:
Test with a standalone pod:
vim 4.pod-underlay.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: annotations-demo
  annotations:
    networking.alibaba.com/network-type: Underlay
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
---
kubectl apply -f 4.pod-underlay.yaml
curl http://172.16.10.23
Accessing the service:
route add -net 10.96.183.0 netmask 255.255.255.0 gateway 172.16.10.11
route -n
curl http://10.96.183.52/myapp/index.html
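Two caveats on the route above: it covers only one /24 slice of the service CIDR, and `route add` does not survive a reboot. A hedged sketch that routes the whole default service CIDR (10.96.0.0/12, as used in the first init variant) and persists it on CentOS 7 (the file name assumes the NIC is eth0):

```shell
# Route the whole service CIDR via a cluster node (modern iproute2 form)
ip route add 10.96.0.0/12 via 172.16.10.11

# Persist across reboots using the CentOS 7 network-scripts convention
echo "10.96.0.0/12 via 172.16.10.11" >> /etc/sysconfig/network-scripts/route-eth0
```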