Microservices: splitting one module into multiple modules

Distributed: multiple machines combined to act as one machine

Day01 K8s Installation and Deployment

I. Host configuration

1 Disable SELinux and the firewall

1.1 Why the firewall is disabled: the nftables backend has compatibility issues and produces duplicate firewall rules
1.2 Why SELinux is disabled: so that containers can access the host filesystem
# disable permanently
sed -i 's#enforcing#disabled#g' /etc/sysconfig/selinux
# disable temporarily (takes effect immediately)
setenforce 0

`vi /etc/selinux/config`
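The firewall half of step 1 is not shown above. On CentOS 7 with systemd, stopping and disabling firewalld is the usual approach (a sketch; adjust if the host runs a different firewall service):

```shell
# stop firewalld now and prevent it from starting at boot
systemctl stop firewalld
systemctl disable firewalld
```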


1.3 Disable swap. When memory runs low, Linux automatically moves some memory pages out to the swap area on disk, which hurts performance, so for performance it is recommended to turn swap off.
swapoff -a
sed -i.bak 's/^.*centos-swap/#&/g' /etc/fstab
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet
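To confirm swap really is off after the commands above, two read-only checks:

```shell
swapon --show   # prints nothing when no swap device is active
free -h         # the Swap row should show 0B total
```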

2 Configure the base yum repository

2.1 Back up the existing repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
2.2 Download the Aliyun repo file
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo  # drop the Aliyun-internal mirror entries, which are unreachable from outside Aliyun

3 Refresh the yum cache

yum makecache  # caches the repo metadata locally to speed up package installs

4 Update system packages

yum -y install wget  # install wget
yum update --exclude=kernel* -y  # update everything except the kernel (the kernel is upgraded separately below)

5 升级内核版本

由于 Docker 运行需要较新的系统内核功能,例如 ipvs 等等,所以一般情况下,我们需要使用 4.0+以上版本 的系统内核。

内核要求是 4.18+,如果是CentOS 8则不需要升级内核

wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-devel-4.4.245-1.el7.elrepo.x86_64.rpm

wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-4.4.245-1.el7.elrepo.x86_64.rpm

yum localinstall -y kernel-lt*  # install from the downloaded RPMs

grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg  # make the first menu entry the default and regenerate the GRUB config
grubby --default-kernel  # confirm which kernel will boot by default
reboot  # reboot into the new kernel
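After the reboot, it is worth confirming that the machine actually booted into the new kernel:

```shell
uname -r   # should print the kernel-lt version installed above
```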

II. Install dependencies

2.1 Install IPVS

# install the IPVS-related packages

yum install -y conntrack-tools ipvsadm ipset conntrack libseccomp


# load the IPVS kernel modules

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr
ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
/sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
if [ \$? -eq 0 ]; then
/sbin/modprobe \${kernel_module}
fi
done
EOF
# make the script executable, run it, and verify the modules loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs


2.2 Kernel parameter tuning

cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.netfilter.nf_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

# apply immediately
sysctl --system
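To spot-check that the values took effect, query individual keys (read-only):

```shell
sysctl -n net.ipv4.ip_forward   # should print 1 once the config above is applied
sysctl -n net.core.somaxconn    # should print 16384
```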

III. Install base utilities

yum install wget expect vim net-tools ntp bash-completion ipvsadm ipset jq iptables -y >> install.log 2>&1

IV. Install Docker

The K8s components depend on Docker.

yum install -y yum-utils device-mapper-persistent-data lvm2 >> install.log 2>&1
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo >> install.log 2>&1
yum install docker-ce -y >> install.log 2>&1  # install the latest version, appending both output and errors to install.log
sudo mkdir -p /etc/docker  # create the directory Docker reads its config from

# configure a registry mirror to speed up image pulls
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://8mh75mhz.mirror.aliyuncs.com"]
}
EOF

systemctl daemon-reload ; systemctl restart docker ; systemctl enable --now docker.service >> install.log 2>&1
# reload unit files, restart Docker, and enable it at boot, logging the output (note that ';' runs the next command unconditionally, unlike '&&')
echo 'Docker is set to start on boot'  # progress message
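Once Docker is running again, you can confirm that the mirror configured above was picked up:

```shell
docker info | grep -A 1 'Registry Mirrors'   # should list the aliyuncs mirror from daemon.json
```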

V. Synchronize cluster time

yum install ntp -y  # install the NTP tools
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime  # symlink to the Shanghai timezone
echo 'Asia/Shanghai' > /etc/timezone
ntpdate time2.aliyun.com  # sync the clock once against an Aliyun time server
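ntpdate sets the clock only once; to keep the cluster clocks from drifting apart, one option is to re-sync periodically from cron (a sketch; the 30-minute interval is just an example):

```shell
# re-sync every 30 minutes against the same time server
echo '*/30 * * * * /usr/sbin/ntpdate time2.aliyun.com >/dev/null 2>&1' >> /var/spool/cron/root
```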

VI. Configure the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0  # set SELinux to permissive again, in case it is still enforcing

6.2 Install kubelet, kubeadm, and kubectl

yum install -y kubelet kubeadm kubectl


6.3 Start kubelet and enable it at boot

systemctl enable kubelet && systemctl start kubelet
-----------------------------------------------------------------------------------------------------------
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
# it is fine if this message does not appear

At this point the worker-node setup is basically done. If a node shows NotReady, that is because the network plugin is missing; you can skip ahead to step 10 (install the network plugin), then join the node to the master with the token.

VII. Initialize the cluster (run on the master only)

kubeadm init \
--image-repository=registry.cn-hangzhou.aliyuncs.com/k8s2me \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16


VIII. Configure the kubectl user credentials

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
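For the root user there is an equivalent shortcut from the standard kubeadm setup: point kubectl at the admin kubeconfig directly instead of copying it:

```shell
export KUBECONFIG=/etc/kubernetes/admin.conf
```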

IX. Edit the hosts file

vim /etc/hosts

172.16.0.50    kubernetes-master-01  # IP address and node name
172.16.0.53    kubernetes-node-01
172.16.0.54    kubernetes-node-02

`kubectl get nodes` is only run on the master; the worker nodes do not need it.
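On the master the check looks like this (the output shown is illustrative; node names follow the /etc/hosts entries above, and nodes stay NotReady until the network plugin is installed):

```shell
kubectl get nodes
# NAME                   STATUS     ROLES    AGE   VERSION    (sample output)
# kubernetes-master-01   NotReady   master   5m    v1.19.x
```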


NotReady means the network plugin has not been installed yet.

Note: if the cluster no longer works after a reboot, Docker usually failed to start; re-join the nodes if necessary, and make sure both kubelet and Docker are enabled at boot.

systemctl enable --now docker.service  # start Docker and enable it at boot

X. Install the cluster network plugin

docker pull registry.cn-hangzhou.aliyuncs.com/k8sos/flannel:v0.12.0-amd64 ;\
docker tag registry.cn-hangzhou.aliyuncs.com/k8sos/flannel:v0.12.0-amd64 \
quay.io/coreos/flannel:v0.12.0-amd64

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl get pods -n kube-system  # list the Pods in the kube-system namespace for more detail
------------------------------------------------------------------------
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7dcc599b9f-gn7cs             1/1     Running   1          15h
coredns-7dcc599b9f-s94x7             1/1     Running   1          15h
etcd-k8s-master                      1/1     Running   1          15h
kube-apiserver-k8s-master            1/1     Running   1          15h
kube-controller-manager-k8s-master   1/1     Running   1          15h
kube-flannel-ds-drgbk                1/1     Running   1          15h  # a Pod in Init status is still initializing; just wait
kube-flannel-ds-gcwbr                1/1     Running   0          13h
kube-flannel-ds-tchxk                1/1     Running   0          13h
kube-proxy-brx4c                     1/1     Running   1          15h
kube-proxy-mwk4w                     1/1     Running   0          13h
kube-proxy-qspfb                     1/1     Running   0          13h
kube-scheduler-k8s-master            1/1     Running   1          15h


The -w (watch) flag keeps the command running and streams updates; press Ctrl+C to exit.
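The watch form of the same query blocks and prints a new line each time a Pod changes state:

```shell
kubectl get pods -n kube-system -w   # press Ctrl+C to stop watching
```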

XI. Join the worker nodes to the cluster

# on the master
# create a join token
kubeadm token create --print-join-command
------------------------------------------------------------------------------------------------------------
W1207 20:57:38.878150   70266 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 10.0.0.80:6443 --token 7fsv9l.zydosm4486o8aatk     --discovery-token-ca-cert-hash sha256:cea884131c7cb53b477f5bc42090072ba1e4a2ec81c34d29c27d743dd07497b0 
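Because tokens expire after 24 hours, you can list the ones that still exist on the master and check their TTL before handing the join command to a node:

```shell
kubeadm token list   # shows each token with its expiry time
```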

# on each worker node (the token is valid for 24 hours)
kubeadm join 10.0.0.80:6443 --token 7fsv9l.zydosm4486o8aatk     --discovery-token-ca-cert-hash sha256:cea884131c7cb53b477f5bc42090072ba1e4a2ec81c34d29c27d743dd07497b0   # join the cluster

No further action is needed on any node at this point.

XII. Tearing down and cleaning up the cluster

If ports are reported as already in use, you can clean everything up and initialize again:

kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
yum clean all
yum remove kube*

Note: when reinstalling, if kubelet, kubeadm, and kubectl are reported as already installed, remove the three packages manually first:

yum remove kubelet kubeadm kubectl

Then reinstall and everything works.

XIII. Test that cluster DNS works

kubectl run test -it --rm --image=busybox:1.28.3  # start a disposable busybox Pod with an interactive shell

nslookup kubernetes  # run inside the Pod

If `nslookup kubernetes` resolves to the service address, cluster DNS is working.

Troubleshooting

1. Check the Kubernetes-related environment variables

env | grep -i kub

2. Check the Docker service

systemctl status docker.service

3. Check the kubelet service

systemctl status kubelet.service
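If that status shows the service crash-looping, the kubelet journal usually names the cause (for example the swap error handled in the next section):

```shell
journalctl -u kubelet -e --no-pager   # jump to the most recent kubelet log entries
journalctl -u kubelet -f              # or follow the log live
```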

1. Port 6443 connection refused after a reboot (the swapoff problem)

[root@k8s-master ~]# kubectl get node
The connection to the server 10.0.0.80:6443 was refused - did you specify the right host or port?

# check the state with systemctl status kubelet.service
[root@k8s-master ~]# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Tue 2020-12-08 22:19:05 CST; 6s ago
     Docs: https://kubernetes.io/docs/
  Process: 1949 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
 Main PID: 1949 (code=exited, status=255)  # the failure shows here

 # swap came back on after the reboot; turn it off and restart kubelet
 swapoff -a
 systemctl restart kubelet.service