Installing Kubernetes on a Single Node (v1.23.5)
1. Check system information and adjust system settings
1.1 Check CPU info. Kubernetes needs at least 2 CPU cores and 2 GB of RAM; installation fails otherwise.
~]# lscpu
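Beyond eyeballing lscpu, a small script can check both requirements at once. This is a minimal sketch (not part of the original guide) that reads the standard /proc interfaces:

```shell
#!/bin/sh
# Pre-flight check: kubeadm wants at least 2 CPU cores and ~2 GB of RAM.
cpus=$(nproc)
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)

echo "CPU cores: $cpus"
echo "Memory:    $((mem_kb / 1024)) MiB"

[ "$cpus" -ge 2 ]         || echo "WARN: fewer than 2 CPU cores"
[ "$mem_kb" -ge 1800000 ] || echo "WARN: less than ~2 GB of RAM"
```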
1.2 Temporarily disable swap before installing; some kubeadm commands will fail with swap enabled.
~]# swapoff -a
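Note that swapoff -a only lasts until the next reboot; to disable swap permanently you also need to comment out the swap entry in /etc/fstab. The sketch below demonstrates the sed expression on a throwaway copy (with an example fstab line) rather than touching the real file:

```shell
#!/bin/sh
# Demonstrated on a temp copy; on a real node, back up /etc/fstab and run
# the same sed against it:  sed -ri 's/.*swap.*/#&/' /etc/fstab
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Comment out every line that mentions swap
sed -ri 's/.*swap.*/#&/' "$fstab"
grep swap "$fstab"    # the swap line is now prefixed with '#'
rm -f "$fstab"
```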
1.3 Temporarily put SELinux into permissive mode to avoid extra configuration.
~]# setenforce 0
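setenforce 0 is likewise lost on reboot; the persistent setting lives in /etc/selinux/config. A sketch of the edit, again shown on a temporary copy so it can run anywhere:

```shell
#!/bin/sh
# On a real node, target /etc/selinux/config instead of the temp file.
cfg=$(mktemp)
echo 'SELINUX=enforcing' > "$cfg"

# Switch enforcing -> permissive
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$cfg"
cat "$cfg"    # SELINUX=permissive
rm -f "$cfg"
```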
1.4 Disable the firewall
~]# systemctl stop firewalld
~]# systemctl disable firewalld
1.5 Set kernel bridge/forwarding parameters
~]# cat << EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    These settings require the br_netfilter module and only take effect once sysctl reloads its configuration:
~]# modprobe br_netfilter
~]# sysctl --system
1.6 Edit the hosts file for convenient hostname resolution
~]#  cat /etc/hosts
::1	localhost	localhost.localdomain	localhost6	localhost6.localdomain6
127.0.0.1	localhost	localhost.localdomain	localhost4	localhost4.localdomain4
172.19.120.142	iZ2ze9699wxpu0ij18w0mxZ	iZ2ze9699wxpu0ij18w0mxZ
172.19.120.142	master  # I map this host to "master" here
1.7 Set the hostname
~]# hostnamectl set-hostname master
2. Install Docker
2.1 Install from the Aliyun mirror
~]# yum install -y yum-utils device-mapper-persistent-data lvm2
Set the repo source to Aliyun:
~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
~]# yum -y install docker-ce
2.2 Edit Docker's /etc/docker/daemon.json
Create the directory and file if they do not exist:
~]# sudo mkdir -p /etc/docker
    Set the registry mirror to the accelerator address from your own Aliyun Container Registry account.
    Set the cgroup driver to systemd. This step is critical: skipping it cost the author a lot of troubleshooting.
~]# sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://xcjha0pw.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
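A stray comma or quote in daemon.json will prevent Docker from starting at all, so it is worth validating the JSON before restarting. A minimal sketch using python3's built-in json.tool (assuming python3 is installed; jq works just as well), shown on a temp file so it runs without root:

```shell
#!/bin/sh
# On a real node, point python3 -m json.tool at /etc/docker/daemon.json.
f=$(mktemp)
cat > "$f" <<'EOF'
{
  "registry-mirrors": ["https://xcjha0pw.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

if python3 -m json.tool "$f" > /dev/null; then
  echo "daemon.json OK"
else
  echo "daemon.json is malformed" >&2
fi
rm -f "$f"
```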
2.3 After the changes, restart Docker so Docker and the kubelet use the same cgroup driver
~]# sudo systemctl daemon-reload
~]# sudo systemctl restart docker
2.4 Enable Docker at boot
~]# systemctl enable docker
3. Install kubeadm, kubelet, and kubectl
3.1 Configure the Kubernetes yum repository (Aliyun mirror)
~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
3.2 Install kubelet, kubeadm, and kubectl
~]# yum install -y --nogpgcheck kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5
    kubelet: runs on every node in the cluster; starts pods and manages containers
    kubeadm: bootstrap tool that quickly initializes the cluster
    kubectl: command-line tool for deploying and managing applications and cluster components
3.2.1 Verify the installation
~]# kubelet --version
~]# kubectl version
~]# kubeadm version
3.3 Start the kubelet
~]# systemctl daemon-reload
~]# systemctl start kubelet
   Enable the kubelet at boot (until kubeadm init has run, the kubelet keeps restarting in a crash loop; this is expected)
~]# systemctl enable kubelet
3.4 Generate the default init config and modify it
    The init config determines which images are pulled: apiserver, etcd, scheduler, controller-manager, coredns, and so on.
~]# kubeadm config print init-defaults > init-config.yaml
3.4.1 Edit the generated init-config.yaml. Three changes in total:
    apiVersion: kubeadm.k8s.io/v1beta3
    bootstrapTokens:
    - groups:
      - system:bootstrappers:kubeadm:default-node-token
      token: abcdef.0123456789abcdef
      ttl: 24h0m0s
      usages:
      - signing
      - authentication
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 1.2.3.4   # change 1: the master node's IP address (see cat /etc/hosts)
      bindPort: 6443
    nodeRegistration:
      criSocket: /var/run/dockershim.sock
      imagePullPolicy: IfNotPresent
      name: master   # change 2: the node name of the master
      taints: null
    ---
    apiServer:
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns: {}
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.aliyuncs.com/google_containers   # change 3: switch to the Aliyun registry
    kind: ClusterConfiguration
    kubernetesVersion: 1.23.0
    networking:
      dnsDomain: cluster.local
      serviceSubnet: 10.96.0.0/12
    scheduler: {}

3.5 Pull the Kubernetes component images
~]# kubeadm config images pull --config=init-config.yaml
If the pull fails, you can list the required images and pull them one by one:
~]# kubeadm config images list --config init-config.yaml
Expected output (versions may differ):
    registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
    registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
    registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
    registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
    registry.aliyuncs.com/google_containers/pause:3.6
    registry.aliyuncs.com/google_containers/etcd:3.5.1-0
    registry.aliyuncs.com/google_containers/coredns:v1.8.6
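The one-by-one pulls can be scripted by feeding the output of kubeadm config images list into a loop. A sketch (the image list is inlined here so the snippet is self-contained; on a real node substitute the kubeadm command, as shown in the comment):

```shell
#!/bin/sh
# On a real node:
#   images=$(kubeadm config images list --config init-config.yaml)
images='registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
registry.aliyuncs.com/google_containers/pause:3.6'

for img in $images; do
  echo "pulling $img"    # on a real node: docker pull "$img"
done
```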
3.6 Run kubeadm init to install the master node
    Replace 172.19.120.142 with your own NIC's IP. The --pod-network-cidr=10.244.0.0/16 value matches the Network setting in the flannel config applied later. Be careful with blog "copy" buttons for this line: some paste a trailing newline, so the command executes the moment you paste it.
~]# kubeadm init --apiserver-advertise-address=172.19.120.142 --apiserver-bind-port=6443 --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12 --kubernetes-version=1.23.5 --image-repository registry.aliyuncs.com/google_containers
Barring surprises, you should now see the installation-success message.
3.7 If the installation fails, reset and reinstall
    kubeadm reset    # reset the cluster
    # then rerun the kubeadm init command above
3.8 Run the following three commands on the master node. Afterwards you can retrieve the token with kubeadm token list.
~]# mkdir -p $HOME/.kube
~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
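For the root user, the kubeadm init output also offers a simpler alternative to copying the file: export the admin kubeconfig path directly. This only lasts for the current shell session:

```shell
#!/bin/sh
# Alternative for the root user: point kubectl at the admin kubeconfig
# directly instead of copying it to $HOME/.kube/config.
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "$KUBECONFIG"
```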
4. Deploy the kube-flannel network plugin
~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
~]# kubectl apply -f kube-flannel.yml 
If the download fails due to network issues, you can also create the yml file from the content below and apply it:
    ---
    kind: Namespace
    apiVersion: v1
    metadata:
      name: kube-flannel
      labels:
        pod-security.kubernetes.io/enforce: privileged
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: flannel
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - nodes
      verbs:
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - nodes/status
      verbs:
      - patch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-flannel
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: flannel
      namespace: kube-flannel
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kube-flannel-cfg
      namespace: kube-flannel
      labels:
        tier: node
        app: flannel
    data:
      cni-conf.json: |
        {
          "name": "cbr0",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds
      namespace: kube-flannel
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                    - linux
          hostNetwork: true
          priorityClassName: system-node-critical
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni-plugin
           #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
            image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
            command:
            - cp
            args:
            - -f
            - /flannel
            - /opt/cni/bin/flannel
            volumeMounts:
            - name: cni-plugin
              mountPath: /opt/cni/bin
          - name: install-cni
           #image: flannelcni/flannel:v0.20.0 for ppc64le and mips64le (dockerhub limitations may apply)
            image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.0
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
           #image: flannelcni/flannel:v0.20.0 for ppc64le and mips64le (dockerhub limitations may apply)
            image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.0
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN", "NET_RAW"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: EVENT_QUEUE_DEPTH
              value: "5000"
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
            - name: xtables-lock
              mountPath: /run/xtables.lock
          volumes:
          - name: run
            hostPath:
              path: /run/flannel
          - name: cni-plugin
            hostPath:
              path: /opt/cni/bin
          - name: cni
            hostPath:
              path: /etc/cni/net.d
          - name: flannel-cfg
            configMap:
              name: kube-flannel-cfg
          - name: xtables-lock
            hostPath:
              path: /run/xtables.lock
              type: FileOrCreate
5. Remove the taint from the master node
~]# kubectl describe node master |grep Taints
Taints:					node-role.kubernetes.io/master:NoSchedule
Delete the taint found above by appending a - to its name:
~]# kubectl taint node master  node-role.kubernetes.io/master:NoSchedule-    (note the trailing - sign)
6. Tab completion for kubectl
~]# echo 'source <(kubectl completion bash)' >> /etc/profile
~]# source /etc/profile
~]# kubectl get pod -A
NAMESPACE      NAME                               READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-s9b8l              1/1     Running   6          12d
kube-system    coredns-6d8c4cb4d-f9ms4            1/1     Running   6          12d
kube-system    coredns-6d8c4cb4d-kq6vg            1/1     Running   6          12d
kube-system    etcd-rocky9-3                      1/1     Running   6          12d
kube-system    kube-apiserver-rocky9-3            1/1     Running   6          12d
kube-system    kube-controller-manager-rocky9-3   1/1     Running   6          12d
kube-system    kube-proxy-dkmss                   1/1     Running   6          12d
kube-system    kube-scheduler-rocky9-3            1/1     Running   6          12d