KubeSphere - 2 Cluster Setup - 2 Installing on Kubernetes

  • I. KubeSphere - Cluster Setup
  • 1. Install KubeSphere Prerequisites
  • 1.1 NFS File System
  • 1.2 metrics-server Cluster Metrics Component
  • 2. Install KubeSphere
  • 2.1 Download the Installation Files
  • 2.2 Modify cluster-configuration.yaml
  • 2.3 Deploy
  • 2.4 Check Installation Progress
  • 2.5 Access the System
  • 3. Appendix
  • 3.1 kubesphere-installer.yaml
  • 3.2 cluster-configuration.yaml
  • II. FAQ
  • 1. etcd Monitoring Certificate Not Found



I. KubeSphere - Cluster Setup


1. Install KubeSphere Prerequisites

1.1 NFS File System

  1. Install nfs-server
    Run on all nodes:
yum install -y nfs-utils
  2. Run on the master node:
echo "/data/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
# 创建文件存储/共享目录
mkdir -p /data/nfs/data
# 启动 nfs 服务
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
# 使配置生效
exportfs -r
#检查配置是否生效
exportfs
  3. Output:
/data/nfs/data  <world>
  4. Configure nfs-client (optional)
    It is recommended to configure nfs-client on at least one additional node. The client mounts the server's shared directory and stays in sync with it, which can serve as a file backup and improve cluster availability. An optional persistence sketch follows the mount commands below.
# Mount the nfs-server share
showmount -e 192.168.80.200
mkdir -p /data/nfs/data
mount -t nfs 192.168.80.200:/data/nfs/data /data/nfs/data
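A plain `mount` does not survive a reboot. A minimal sketch for verifying the mount and making it persistent via /etc/fstab, assuming the same server address and path as above:
# Verify the mount is active
df -h | grep /data/nfs/data
# Optional: persist the mount across reboots
echo "192.168.80.200:/data/nfs/data /data/nfs/data nfs defaults 0 0" >> /etc/fstab
mount -a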
  5. Configure the StorageClass (the default class for dynamic provisioning)
    Note: some values in the file below (the NFS server address and shared path) must be adjusted for your environment.
# sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  # Whether to archive the PV's contents when the PV is deleted.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.80.200 # Your own nfs-server address
            - name: NFS_PATH
              value: /data/nfs/data # Directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.80.200 # Your own nfs-server address
            path: /data/nfs/data # Directory shared by the NFS server
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
  6. Deploy:
kubectl apply -f sc.yaml
  7. Verify the StorageClass was created:
kubectl get sc
  8. Output:
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  2s
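Optionally, you can confirm that dynamic provisioning really works by creating a small throwaway PVC and checking that it binds automatically. This is a minimal sketch; the claim name and size are arbitrary:
# Create a test PVC against the default StorageClass
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF
# The PVC should reach the Bound state within a few seconds
kubectl get pvc nfs-test-pvc
# Clean up
kubectl delete pvc nfs-test-pvc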

1.2 metrics-server Cluster Metrics Component

# metrics-server.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

Deploy:

kubectl apply -f metrics-server.yaml
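Once the Deployment is ready (the first metrics scrape may take a minute), you can verify that the metrics API is serving data:

# Wait for the metrics-server pod to become Ready
kubectl get pods -n kube-system -l k8s-app=metrics-server
# These should return CPU/memory usage once metrics are available
kubectl top nodes
kubectl top pods -A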

2. Install KubeSphere

2.1 Download the Installation Files

Note: if you cannot download these files, the full contents are included in the appendix at the end of this article.

wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml

2.2 Modify cluster-configuration.yaml

Specify the features you want to enable in cluster-configuration.yaml; refer to the official documentation for the available options. A brief example of enabling a component is sketched below.
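For instance, each pluggable component has an `enabled` flag under `spec`; switching it from false to true before applying the file turns that component on. A minimal sketch (enabling the DevOps system; other components work the same way):

spec:
  devops:
    enabled: true   # was false; turns on the Jenkins-based DevOps system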

2.3 Deploy

kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml

2.4 Check Installation Progress

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
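After the installer log reports success, it helps to confirm that all workloads are running and to check the NodePort assigned to the console (30880 by default):

# All KubeSphere-related pods should eventually be Running
kubectl get pods --all-namespaces
# The console Service exposes the web UI as a NodePort
kubectl get svc ks-console -n kubesphere-system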

2.5 Access the System

http://<IP of any node>:30880
Username: admin
Password: P@88w0rd

3. Appendix

3.1 kubesphere-installer.yaml

# kubesphere-installer.yaml
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterconfigurations.installer.kubesphere.io
spec:
  group: installer.kubesphere.io
  versions:
  - name: v1alpha1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: clusterconfigurations
    singular: clusterconfiguration
    kind: ClusterConfiguration
    shortNames:
    - cc

---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - policy
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - autoscaling
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - config.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - iam.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - notification.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - auditing.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - events.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - core.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - installer.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - security.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kubeedge.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - types.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-install
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-install
  template:
    metadata:
      labels:
        app: ks-install
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: kubesphere/ks-installer:v3.1.1
        imagePullPolicy: "Always"
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 20m
            memory: 100Mi
        volumeMounts:
        - mountPath: /etc/localtime
          name: host-time
      volumes:
      - hostPath:
          path: /etc/localtime
          type: ""
        name: host-time

3.2 cluster-configuration.yaml

Some values in this file need to be adjusted for your environment (in particular endpointIps); see the sketch below.
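For example, on a single-master cluster where etcd runs on the master node, the etcd block would typically be adjusted as follows (a sketch assuming the master IP 192.168.80.200 used earlier; enabling monitoring also requires the etcd certificate Secret described in the FAQ below):

  etcd:
    monitoring: true              # optional: enable the etcd monitoring dashboard
    endpointIps: 192.168.80.200   # IP(s) of the node(s) running etcd
    port: 2379
    tlsEnable: true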

# cluster-configuration.yaml
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.0
spec:
  persistence:
    storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""           # Keep the jwtSecret consistent with the host cluster. Retrieve it on the host cluster with: kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
  local_registry: ""        # Add your private registry address if needed.
  etcd:
    monitoring: false       # Enable or disable the etcd monitoring dashboard. You must create a Secret for etcd before enabling it.
    endpointIps: localhost  # etcd cluster endpoints; can be a single node IP or multiple IPs.
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    redis:
      enabled: false
    openldap:
      enabled: false
    minioVolumeSize: 20Gi # MinIO PVC size.
    openldapVolumeSize: 2Gi   # openldap PVC size.
    redisVolumSize: 2Gi # Redis PVC size.
    monitoring:
      # type: external   # Whether to use an external Prometheus stack; if so, modify the endpoint on the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint for fetching metrics data.
    es: # Storage backend for logging, events and auditing.
      # elasticsearchMasterReplicas: 1   # Total number of master nodes. Even numbers are not allowed.
      # elasticsearchDataReplicas: 1     # Total number of data nodes.
      elasticsearchMasterVolumeSize: 4Gi   # Volume size of Elasticsearch master nodes.
      elasticsearchDataVolumeSize: 20Gi    # Volume size of Elasticsearch data nodes.
      logMaxAge: 7                     # Log retention time in the built-in Elasticsearch. 7 days by default.
      elkPrefix: logstash              # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
    port: 30880
  alerting: # (CPU: 0.1 Core, Memory: 100 MiB) Lets users customize alerting policies so that messages are sent to receivers in time, with different time intervals and alert levels to choose from.
    enabled: false         # Enable or disable the KubeSphere Alerting System.
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing: # Provides a security-relevant chronological set of records documenting the sequence of activities initiated by different tenants on the platform.
    enabled: false         # Enable or disable the KubeSphere Auditing Log System.
  devops: # (CPU: 0.47 Core, Memory: 8.6 G) Provides an out-of-the-box CI/CD system based on Jenkins, plus automated workflow tools including Source-to-Image and Binary-to-Image.
    enabled: false             # Enable or disable the KubeSphere DevOps System.
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events: # Provides a graphical web console for exporting, filtering and alerting on Kubernetes events in multi-tenant Kubernetes clusters.
    enabled: false         # Enable or disable the KubeSphere Events System.
    ruler:
      enabled: true
      replicas: 2
  logging: # (CPU: 57 m, Memory: 2.76 G) Provides flexible logging functions for log query, collection and management in a unified console. Additional log collectors such as Elasticsearch, Kafka and Fluentd can be added.
    enabled: false         # Enable or disable the KubeSphere Logging System.
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server: # (CPU: 56 m, Memory: 44.35 MiB) Enables HPA (Horizontal Pod Autoscaler).
    enabled: false                   # Enable or disable metrics-server.
  monitoring:
    storageClass: ""                 # If you need an independent StorageClass for Prometheus, specify it here. The default StorageClass is used by default.
    # prometheusReplicas: 1          # Prometheus replicas are responsible for monitoring different segments of data sources and providing high availability.
    prometheusMemoryRequest: 400Mi   # Prometheus request memory.
    prometheusVolumeSize: 20Gi       # Prometheus PVC size.
    # alertmanagerReplicas: 1          # AlertManager Replicas.
  multicluster:
    clusterRole: none  # host | member | none  # You can install a standalone cluster, or designate it as a host or member cluster.
  network:
    networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
      # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. Many CNI network plugins support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
      enabled: false # Enable or disable network policies.
    ippool: # Use a Pod IP pool to manage the Pod network address space. Pods to be created can be assigned IP addresses from the Pod IP pool.
      type: none # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means the Pod IP pool is disabled.
    topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means Service Topology is disabled.
  openpitrix: # An app store accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: false # Enable or disable the KubeSphere App Store.
  servicemesh: # (0.3 Core, 300 MiB) Provides fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: false     # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
  kubeedge: # Add edge nodes to your cluster and deploy workloads on them.
    enabled: false   # Enable or disable KubeEdge.
    cloudCore:
      nodeSelector: { "node-role.kubernetes.io/worker": "" }
      tolerations: [ ]
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: # At least one public IP address, or an IP address that edge nodes can reach, must be provided.
          - ""            # Note that once KubeEdge is enabled, CloudCore will malfunction if no address is provided.
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: { "node-role.kubernetes.io/worker": "" }
      tolerations: [ ]
      edgeWatcherAgent:
        nodeSelector: { "node-role.kubernetes.io/worker": "" }
        tolerations: [ ]

II. FAQ

1. etcd Monitoring Certificate Not Found

Symptom:

Task 'servicemesh' failed:
...
Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource: etcdserver: request timed out

Solution:

kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
kubectl delete -f cluster-configuration.yaml
kubectl apply -f cluster-configuration.yaml
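After recreating the ClusterConfiguration, you can confirm the Secret exists and watch the installer retry. A minimal check, assuming the default certificate paths used above:

# The secret should list three data entries (CA, client cert, client key)
kubectl -n kubesphere-monitoring-system describe secret kube-etcd-client-certs
# Follow the installer log again to confirm the failed task now completes
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f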