Also applies to CentOS.

Table of Contents

I. Environment Preparation

1. Host preparation

2. Disable SELinux, firewall, etc.

3. Disable the swap partition

II. Install Docker

1. Configure the yum repos

1.1 Configure the Docker repo

2. Install dependencies

3. Install Docker

III. Install Kubernetes

1. Configure a domestic mirror repo

2. Kernel configuration

2.1 Configure IPVS (optional)

3. Install the chosen version

4. Initialize the Master

4.1 Deploy the Calico network on the master

5. Reset the master

6. Join the worker nodes

IV. Install KubeSphere

1. Preliminary work

Set up NFS as the default storage backend (StorageClass)

Create the StorageClass and provisioner

Install metrics-server

2. Deploy KubeSphere

Fetch the files

Install

Check installation progress

Log in


I. Environment Preparation

1. Host preparation

Prepare three virtual machines and configure their hostnames and host mappings (the hosts in this walkthrough are openEuler 22.03 SP1 VMs).
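A minimal sketch of that preparation, run on each node with its own name; only the master IP 192.168.11.11 appears later in this guide, so the worker IPs below are assumptions to adapt:

hostnamectl set-hostname master        # use node1 / node2 on the workers
cat >> /etc/hosts <<'EOF'
192.168.11.11 master
192.168.11.12 node1
192.168.11.13 node2
EOF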

2. Disable SELinux, firewall, etc.

# Run on all three nodes
# Disable temporarily; check with getenforce
setenforce 0
# Disable permanently via the config file
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable the firewall
sudo systemctl stop firewalld
sudo systemctl disable firewalld

Then, as root, flush the existing iptables rules:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

3. Disable the swap partition

# Run on all three nodes
swapoff -a
sed -ri 's/(.*swap.*)/#\1/' /etc/fstab

II. Install Docker

--Install on every node--

1. Configure the yum repos

# Edit the yum repos; the example below uses the China Mobile ecloud mirror. Most major vendors host one, e.g. Huawei.
vi /etc/yum.repos.d/openEuler.repo
[OS]
name=OS
baseurl=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/OS/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler

[everything]
name=everything
baseurl=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/everything/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/everything/$basearch/RPM-GPG-KEY-openEuler

[EPOL]
name=EPOL
baseurl=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler

[debuginfo]
name=debuginfo
baseurl=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/debuginfo/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/debuginfo/$basearch/RPM-GPG-KEY-openEuler

[source]
name=source
baseurl=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/source/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/source/RPM-GPG-KEY-openEuler

[update]
name=update
baseurl=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/update/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler

[update-source]
name=update-source
baseurl=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/update/source/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cmecloud.cn/openeuler/openEuler-22.03-LTS-SP1/source/RPM-GPG-KEY-openEuler

1.1 Configure the Docker repo

sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
In docker-ce.repo, change $releasever to 7 in the [docker-ce-stable] section, i.e. cat /etc/yum.repos.d/docker-ce.repo should show:
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

.......
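A hedged one-liner for that substitution; note that it replaces every occurrence of $releasever in the file, which also covers the other (disabled) sections:

sudo sed -i 's/\$releasever/7/g' /etc/yum.repos.d/docker-ce.repo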

2. Install dependencies

# Prefix with sudo if not running as root
yum clean all && yum makecache && yum -y update
yum install -y  device-mapper-persistent-data lvm2

3. Install Docker

1. List the Docker versions available for installation
yum list docker-ce.x86_64 --showduplicates | sort -r

2. Install the version you need
sudo yum install -y docker-ce-24.0.2 docker-ce-cli-24.0.2

3. Add your user to the docker group
sudo gpasswd -a $USER docker
newgrp docker   ## or reconnect via SSH

4. Start Docker and enable it at boot
sudo systemctl enable docker --now

docker info   # shows detailed information

5. Configure registry mirrors and the cgroup driver
mkdir -p /etc/docker
sudo vim /etc/docker/daemon.json
with the following content:
{
  "registry-mirrors": ["https://*****.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
# The registry-mirrors address can be obtained from your personal Aliyun account
Other public mirrors include:
http://hub-mirror.c.163.com/
https://docker.mirrors.ustc.edu.cn/
Several of them can be listed in registry-mirrors, separated by commas, as sketched below.
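For example, a sketch that rewrites daemon.json with several mirrors at once (the Aliyun entry stays masked because it is account-specific, and the public mirror endpoints may change over time):

sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": [
    "https://*****.mirror.aliyuncs.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF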

Then run:
sudo systemctl daemon-reload
sudo systemctl restart docker

III. Install Kubernetes

--Install on all three nodes--

1. Configure a domestic mirror repo

sudo vim /etc/yum.repos.d/kubernetes.repo
with:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

sudo yum clean all && sudo yum makecache

2. Kernel configuration

1. Configure kernel bridge traffic
sudo vim /etc/modules-load.d/k8s.conf
with:
br_netfilter
Or load it manually:
modprobe br_netfilter
lsmod | grep br_netfilter   # check that the bridge netfilter module loaded successfully

sudo vim /etc/sysctl.d/k8s.conf
with:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

sudo sed -ri 's/net.ipv4.ip_forward=0/net.ipv4.ip_forward=1/' /etc/sysctl.conf
i.e. it changes the line to:
net.ipv4.ip_forward = 1
# 0 disables forwarding, 1 enables it; with forwarding disabled, master initialization will fail

sudo sysctl --system

2.1 Configure IPVS (optional)

---IPVS handles load-balancing scenarios and delivers higher forwarding performance than iptables---
yum install ipset ipvsadm ebtables socat ipset conntrack

vim /etc/sysconfig/modules/ipvs.modules
# with:
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

# Apply the configuration
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
# Verify
lsmod | grep -e ip_vs -e nf_conntrack
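Note that loading these modules by itself does not switch kube-proxy to IPVS; kube-proxy defaults to iptables mode. A hedged sketch of switching the mode once the cluster is up (skip it if iptables is fine for you):

kubectl -n kube-system edit configmap kube-proxy            # set mode: "ipvs" in the config
kubectl -n kube-system delete pod -l k8s-app=kube-proxy     # recreate the kube-proxy pods so the change takes effect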

3. Install the chosen version

sudo yum install -y kubelet-1.23.9 kubeadm-1.23.9 kubectl-1.23.9
# Versions 1.24 and later need cri-dockerd to keep using Docker as the runtime
sudo systemctl  enable kubelet --now
# No need to check the kubelet service status yet; it will keep erroring until the master is initialized
kubelet --version   ## check the version; it was pinned at install time, but confirming does no harm

4. Initialize the Master

--Run on the master node--

sudo kubeadm init --kubernetes-version=1.23.9 --apiserver-advertise-address=192.168.11.11 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16


--kubernetes-version=1.23.9        the version to install; check with kubelet --version
--apiserver-advertise-address      the master node's IP
--image-repository                 image registry address; registry.cn-hangzhou.aliyuncs.com and registry.aliyuncs.com are equivalent
--pod-network-cidr                 the Pod IP range; flannel defaults to 10.244.0.0/16, while Calico defaults to 192.168.0.0/16 and can be changed in its YAML
--service-cidr                     the Service IP range; it can be left as-is for now

When the output shows:

Your Kubernetes control-plane has initialized successfully!

the master has been initialized successfully.

Next, on the master, run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

4.1 Deploy the Calico network on the master

Download the Calico YAML file
curl -O https://docs.projectcalico.org/v3.20/manifests/calico.yaml
##Note: check the per-version requirements at https://docs.tigera.io/archive/v3.20/getting-started/kubernetes/requirements to make sure the Calico version matches your OS and Kubernetes version and avoid errors (swap v3.20 in the URL for another version to view its page)

sed -i "s#docker.io/##g" calico.yaml 取消掉yaml中image的docker.io地址,采用国内源

Modify the pod-network-cidr

In calico.yaml, uncomment the CALICO_IPV4POOL_CIDR environment variable and change its value to the pod-network-cidr used when initializing the Master, as shown below.
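A sketch of what the uncommented block looks like, with the value matching the --pod-network-cidr used above:

            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"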

Then run:

kubectl apply -f calico.yaml
If no error appears, it worked.

After a while, check that all pods are running:
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-77d676778d-lkqrv   1/1     Running   0          8m19s
kube-system   calico-node-j2bc9                          1/1     Running   0          8m19s
kube-system   coredns-7f89b7bc75-4qzcm                   1/1     Running   0          10m
kube-system   coredns-7f89b7bc75-dbfbt                   1/1     Running   0          10m
kube-system   etcd-master                                1/1     Running   0          10m
kube-system   kube-apiserver-master                      1/1     Running   0          10m
kube-system   kube-controller-manager-master             1/1     Running   0          10m
kube-system   kube-proxy-lbmwf                           1/1     Running   0          10m
kube-system   kube-scheduler-master                      1/1     Running   0          10m
If everything is Running, it succeeded.

5. Reset the master

If an unexplained error or a mis-step occurs during master initialization or network configuration, you can reset:

sudo kubeadm reset && sudo systemctl daemon-reload && sudo systemctl restart kubelet
After the reset, you must delete the $HOME/.kube directory, otherwise status queries will fail after re-initializing:
rm -rf $HOME/.kube
After re-initializing, run again:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

6. Join the worker nodes

On the master, run:
sudo kubeadm token create --print-join-command
Run the generated command on each worker node; its general shape is sketched below.
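The generated command looks roughly like the following; <token> and <hash> are placeholders, so copy the exact output from your master and run it (with sudo) on each worker:

kubeadm join 192.168.11.11:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>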
After a while, check on the master:
kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   2m31s   v1.23.9
node1    Ready    <none>                 29s     v1.23.9
node2    Ready    <none>                 14s     v1.23.9

You can also label the worker nodes as you like, e.g.:
kubectl label node node1 node-role.kubernetes.io/worker1=true
kubectl label node node2 node-role.kubernetes.io/worker2=true

kubectl get nodes
NAME     STATUS   ROLES                  AGE    VERSION
master   Ready    control-plane,master   4m6s   v1.23.9
node1    Ready    worker1                2m4s   v1.23.9
node2    Ready    worker2                109s   v1.23.9

IV. Install KubeSphere

1. Preliminary work

Set up NFS as the default storage backend (StorageClass)

Run on the master node

sudo yum install -y nfs-utils
sudo vim /etc/exports
with:
/data/nfs/ *(insecure,rw,sync,no_root_squash,no_subtree_check)
# Pick the directory to match your own server; it must be the same directory you create and reference later
mkdir -p /data/nfs

sudo systemctl enable rpcbind --now && sudo systemctl enable nfs-server --now
sudo exportfs -r
sudo exportfs

Run on the worker nodes

# List the master's exports
showmount -e 192.168.11.11   # your master IP
mkdir -p /data/nfsmount

sudo mount -t nfs 192.168.11.11:/data/nfs /data/nfsmount/
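Optionally, if you want the worker-side mount to survive reboots, an fstab entry is one way to do it (a sketch, not part of the original steps):

echo '192.168.11.11:/data/nfs /data/nfsmount nfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab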

Create the StorageClass and provisioner

Edit a YAML file, vim sc-pro.yml, and write:

## Create the StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true" 

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.11.11 ## your NFS server address
            - name: NFS_PATH  
              value: /data/nfs       ## the directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.11.11
            path: /data/nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Apply the file:

#kubectl apply -f sc-pro.yml

#kubectl get pod -n default   # output:
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7d8d494cc4-m94ln   1/1     Running   0          75s
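You can also confirm that the new class is registered and marked as the cluster default (a quick check, not in the original steps):

#kubectl get sc   # nfs-storage should be listed with "(default)" next to its name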

Verification
Create a file pvc.yml, for example with the following content:
kind: PersistentVolumeClaim         # create a PVC resource
apiVersion: v1
metadata:
  name: nginx-pvc                   # the PVC's name
spec:
  accessModes:                      # access modes for the PV; ReadWriteMany allows read-write mounting by multiple nodes
    - ReadWriteMany
  resources:                        # PVC resource parameters
    requests:                       # resource requests
      storage: 200Mi                # request 200Mi of storage
  storageClassName: nfs-storage

#kubectl apply -f pvc.yml
#kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pvc-410987bb-9e80-4511-b832-04eaff610e11   200Mi      RWX            nfs-storage    34s
If the STATUS is Bound, it succeeded.

Install metrics-server

The metrics-server bundled with KubeSphere has installation problems, so we install it manually.

Edit an ms.yml file and apply it.

The file content is:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

Enable Aggregator Routing

sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.11.11
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --enable-aggregator-routing=true  ## add this line to the file to enable aggregator routing
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt

Deploy and verify

# kubectl apply -f ms.yml

Use the following command to check whether it is working:
#kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master   218m         2%     3014Mi          19%       
node1    121m         1%     1421Mi          9%        
node2    217m         2%     1365Mi          8%

2. Deploy KubeSphere

Fetch the files

# 3.3.2 is available at the time of writing, but I ran into image-pull problems with it, so I went back to 3.2.1
Download the installer:
curl -O  https://ghproxy.com/https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
## https://ghproxy.com/ is a China-based proxy; if your network can reach GitHub directly, you can drop the prefix

Download the installer configuration file:
curl -O https://ghproxy.com/https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml

Note:
1. For KubeSphere version selection and general installation guidance, see the official docs:
https://kubesphere.io/zh/docs/v3.3/installing-on-kubernetes/introduction/overview/

Modify cluster-configuration.yaml

Change endpointIps under etcd to the master node's IP address. Beyond that, the pluggable components described on the official site were modified as follows (enabling some of the application services):

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.2.1
spec:
  persistence:
    storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
  local_registry: ""        # Add your private registry address if it is needed.
  # dev_tag: ""               # Add your kubesphere image tag you want to install, by default it's same as ks-install release version.
  etcd:
    monitoring: true       # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
    endpointIps: 192.168.11.11  # etcd cluster EndpointIps. It can be a bunch of IPs here.
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
        port: 30880
        type: NodePort
    # apiserver:            # Enlarge the apiserver and controller manager's resource requests and limits for the large cluster
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: true
      volumeSize: 2Gi # Redis PVC size.
    openldap:
      enabled: true
      volumeSize: 2Gi   # openldap PVC size.
    minio:
      volumeSize: 20Gi # Minio PVC size.
    monitoring:
      # type: external   # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
      GPUMonitoring:     # Enable or disable the GPU-related metrics. If you enable this switch but have no GPU resources, Kubesphere will set it to zero. 
        enabled: false
    gpu:                 # Install GPUKinds. The default GPU kind is nvidia.com/gpu. Other GPU kinds can be added here according to your needs. 
      kinds:         
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:   # Storage backend for logging, events and auditing.
      # master:
      #   volumeSize: 4Gi  # The volume size of Elasticsearch master nodes.
      #   replicas: 1      # The total number of master nodes. Even numbers are not allowed.
      #   resources: {}
      # data:
      #   volumeSize: 20Gi  # The volume size of Elasticsearch data nodes.
      #   replicas: 1       # The total number of data nodes.
      #   resources: {}
      logMaxAge: 7             # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash      # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  alerting:                # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true         # Enable or disable the KubeSphere Alerting System.
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:                # Provide a security-relevant chronological set of records,recording the sequence of activities happening on the platform, initiated by different tenants.
    enabled: true         # Enable or disable the KubeSphere Auditing Log System.
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true             # Enable or disable the KubeSphere DevOps System.
    # resources: {}
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:                  # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true         # Enable or disable the KubeSphere Events System.
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:                 # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true         # Enable or disable the KubeSphere Logging System.
    containerruntime: docker
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:                    # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: false                   # Enable or disable metrics-server.
  monitoring:
    storageClass: ""                 # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1  # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
    #   volumeSize: 20Gi  # Prometheus PVC size.
    #   resources: {}
    #   operator:
    #     resources: {}
    #   adapter:
    #     resources: {}
    # node_exporter:
    #   resources: {}
    # alertmanager:
    #   replicas: 1          # AlertManager Replicas.
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:                           # GPU monitoring-related plug-in installation. 
      nvidia_dcgm_exporter:        # Ensure that gpu resources on your hosts can be used normally, otherwise this plug-in will not work properly.
        enabled: false             # Check whether the labels on the GPU hosts contain "nvidia.com/gpu.present=true" to ensure that the DCGM pod is scheduled to these nodes.
        # resources: {}
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
      # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
      enabled: true # Enable or disable network policies.
    ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: none # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
    topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: true # Enable or disable the KubeSphere App Store.
  servicemesh:         # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: true     # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
  kubeedge:          # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: false   # Enable or disable KubeEdge.
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
          - ""            # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

Install

kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml

Check installation progress

# Watch the installation progress
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
# Output like the following means the installation succeeded
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.11.11:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

Prometheus fix

After KubeSphere is installed, checking the pods shows two Prometheus pods stuck in ContainerCreating; inspect them with describe:

kubesphere-monitoring-system   prometheus-k8s-0                                   0/2     ContainerCreating   0             61m
kubesphere-monitoring-system   prometheus-k8s-1                                   0/2     ContainerCreating   0

kubectl describe pods -n kubesphere-monitoring-system   prometheus-k8s-0
....
Events:
  Type     Reason       Age                  From     Message
  ----     ------       ----                 ----     -------
  Warning  FailedMount  56m (x3 over 65m)    kubelet  Unable to attach or mount volumes: unmounted volumes=[secret-kube-etcd-client-certs], unattached volumes=[config config-out tls-assets prometheus-k8s-db prometheus-k8s-rulefiles-0 secret-kube-etcd-client-certs kube-api-access-l25wf]: timed out waiting for the condition
  Warning  FailedMount  46m (x4 over 62m)    kubelet  Unable to attach or mount volumes: unmounted volumes=[secret-kube-etcd-client-certs], unattached volumes=[prometheus-k8s-rulefiles-0 secret-kube-etcd-client-certs kube-api-access-l25wf config config-out tls-assets prometheus-k8s-db]: timed out waiting for the condition
  Warning  FailedMount  31m (x2 over 44m)    kubelet  Unable to attach or mount volumes: unmounted volumes=[secret-kube-etcd-client-certs], unattached volumes=[kube-api-access-l25wf config config-out tls-assets prometheus-k8s-db prometheus-k8s-rulefiles-0 secret-kube-etcd-client-certs]: timed out waiting for the condition
  Warning  FailedMount  26m (x2 over 33m)    kubelet  Unable to attach or mount volumes: unmounted volumes=[secret-kube-etcd-client-certs], unattached volumes=[config-out tls-assets prometheus-k8s-db prometheus-k8s-rulefiles-0 secret-kube-etcd-client-certs kube-api-access-l25wf config]: timed out waiting for the condition
  Warning  FailedMount  6m6s (x5 over 51m)   kubelet  Unable to attach or mount volumes: unmounted volumes=[secret-kube-etcd-client-certs], unattached volumes=[prometheus-k8s-db prometheus-k8s-rulefiles-0 secret-kube-etcd-client-certs kube-api-access-l25wf config config-out tls-assets]: timed out waiting for the condition
  Warning  FailedMount  112s (x40 over 67m)  kubelet  MountVolume.SetUp failed for volume "secret-kube-etcd-client-certs" : secret "kube-etcd-client-certs" not found
We enabled etcd monitoring in cluster-configuration.yaml, but Prometheus cannot obtain the etcd client certificates.
So we create the Secret from them:
kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
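Once the Secret exists, the kubelet retries the mount and the two pods should reach Running within a few minutes; a quick way to watch:

kubectl -n kubesphere-monitoring-system get pods -w    # prometheus-k8s-0/1 should become 2/2 Running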

As the notes say, once all pods are Running you can log in with the account and password printed in the installer log.

Log in

(Screenshot: the KubeSphere console login page)

done!