Let's first deploy the Nacos service in standalone mode:

1. Prepare the data storage

apiVersion: v1
kind: Namespace
metadata:
  name: {namespace}

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-{namespace}
  labels:
    pv: nfs-pv-{namespace}
  annotations:
    volume.beta.kubernetes.io/mount-options: "noatime,nodiratime,noresvport,nolock,proto=udp,rsize=1048576,wsize=1048576,hard"
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/nfs/{namespace}
    server: 192.168.0.2

---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc
  namespace: {namespace}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  selector:
    matchLabels:
      pv: nfs-pv-{namespace}

2. Deploy the Nacos Service and StatefulSet

apiVersion: v1
kind: Service
metadata:
  name: nacos
  namespace: {namespace}
  labels:
    app: nacos
spec:
  ports:
  - protocol: TCP
    port: 8848
    targetPort: 8848
    name: nacos-http
  selector:
    app: nacos

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
  namespace: {namespace}
spec:
  serviceName: nacos
  selector:
    matchLabels:
      app: nacos
  replicas: 1
  template:
    metadata:
      labels:
        app: nacos
    spec:
      containers:
      - name: nacos
        image: nacos/nacos-server:v2.0.1
        resources:
          requests:
            memory: "1024Mi"
            cpu: "256m"
          limits:
            memory: "2048Mi"
            cpu: "512m"
        ports:
        - containerPort: 8848
          name: nacos-http
        env:
          - name: MODE
            value: "standalone"
        volumeMounts:
          - name: nacos-data
            mountPath: /home/nacos/plugins/peer-finder
            subPath: peer-finder
          - name: nacos-data
            mountPath: /home/nacos/data
            subPath: data
          - name: nacos-data
            mountPath: /home/nacos/logs
            subPath: logs
      volumes:
      - name: nacos-data
        persistentVolumeClaim:
          claimName: nfs-pvc


Now let's deploy Nacos in cluster mode:

1. Prepare the NFS server and set up dynamic storage provisioning
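Before anything else, the NFS server itself (192.168.0.2 throughout these manifests) needs the export directories created and exported. A minimal /etc/exports sketch, assuming a 192.168.0.0/24 cluster subnet (adjust to yours):

```
/data/nfs/kubernetes   192.168.0.0/24(rw,sync,no_root_squash)
/data/nfs/{namespace}  192.168.0.0/24(rw,sync,no_root_squash)
```

After editing, run `exportfs -ra` on the NFS server and confirm the exports with `showmount -e 192.168.0.2` from a cluster node.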

A. Create nfs-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

B. Create nfs-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.mirrors.ustc.edu.cn/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.0.2
            - name: NFS_PATH
              value: /data/nfs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.2
            path: /data/nfs/kubernetes

C. Create nfs-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "false"
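To confirm dynamic provisioning works before deploying Nacos, you can create a throwaway PVC against the new StorageClass (the name test-claim here is just an example) and check that it reaches the Bound state:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

If it stays Pending, check the nfs-client-provisioner pod's logs before moving on. Delete the test PVC once it binds.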

2. Prepare the database. Our cluster already has a MySQL database, so I won't configure one here; I'll use it directly and only import the schema.

Database host: mysql

Database name: nacos

Database user: root

Database password: 123456

Then import https://github.com/alibaba/nacos/blob/develop/distribution/conf/nacos-mysql.sql

3. Create the configuration file

apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
  namespace: {namespace}
data:
  mysql.db.name: "nacos"
  mysql.db.host: "mysql"
  mysql.port: "3306"
  mysql.user: "root"
  mysql.password: "123456"

4. Create the deployment file

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
  namespace: {namespace}
spec:
  serviceName: nacos-headless
  selector:
    matchLabels:
      app: nacos
  replicas: 3
  template:
    metadata:
      labels:
        app: nacos
    spec:
      initContainers:
        - name: peer-finder-plugin-install
          image: nacos/nacos-peer-finder-plugin:1.1
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /home/nacos/plugins/peer-finder
              name: nacos-storage
              subPath: peer-finder
      containers:
      - name: nacos
        image: nacos/nacos-server:v2.0.1
        resources:
          requests:
            memory: "1024Mi"
            cpu: "256m"
          limits:
            memory: "2048Mi"
            cpu: "512m"
        ports:
        - containerPort: 8848
          name: nacos-http
        - containerPort: 9848
          name: nacos-rpc
        - containerPort: 9849
          name: raft-rpc
        - containerPort: 7848
          name: old-raft-rpc
        env:
          - name: NACOS_REPLICAS
            value: "3"
          - name: SERVICE_NAME
            value: "nacos-headless"
          - name: DOMAIN_NAME
            value: "cluster.local"
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          - name: MYSQL_SERVICE_HOST
            valueFrom:
              configMapKeyRef:
                name: nacos-cm
                key: mysql.db.host
          - name: MYSQL_SERVICE_DB_NAME
            valueFrom:
              configMapKeyRef:
                name: nacos-cm
                key: mysql.db.name
          - name: MYSQL_SERVICE_PORT
            valueFrom:
              configMapKeyRef:
                name: nacos-cm
                key: mysql.port
          - name: MYSQL_SERVICE_USER
            valueFrom:
              configMapKeyRef:
                name: nacos-cm
                key: mysql.user
          - name: MYSQL_SERVICE_PASSWORD
            valueFrom:
              configMapKeyRef:
                name: nacos-cm
                key: mysql.password
          - name: NACOS_SERVER_PORT
            value: "8848"
          - name: NACOS_APPLICATION_PORT
            value: "8848"
          - name: PREFER_HOST_MODE
            value: "hostname"
          - name: NACOS_SERVERS
            value: "nacos-0.nacos-headless.{namespace}.svc.cluster.local:8848 nacos-1.nacos-headless.{namespace}.svc.cluster.local:8848 nacos-2.nacos-headless.{namespace}.svc.cluster.local:8848"
        volumeMounts:
          - name: nacos-storage
            mountPath: /home/nacos/plugins/peer-finder
            subPath: peer-finder
          - name: nacos-storage
            mountPath: /home/nacos/data
            subPath: data
          - name: nacos-storage
            mountPath: /home/nacos/logs
            subPath: logs
  volumeClaimTemplates:
  - metadata:
      name: nacos-storage
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 20Gi
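The NACOS_SERVERS value above follows a fixed pattern, pod-ordinal.headless-service.namespace.svc.cluster.local:port, so when you change the replica count or namespace it is easier to regenerate it than to hand-edit three entries. A small sketch (the argument defaults are assumptions):

```shell
#!/bin/sh
# Generate the NACOS_SERVERS string for a given namespace and replica count.
NS=${1:-default}     # target namespace (assumption: passed as first argument)
REPLICAS=${2:-3}     # must match spec.replicas in the StatefulSet
servers=""
i=0
while [ "$i" -lt "$REPLICAS" ]; do
  # Each pod of the StatefulSet gets a stable DNS name via the headless Service.
  servers="$servers nacos-$i.nacos-headless.$NS.svc.cluster.local:8848"
  i=$((i + 1))
done
# Trim the leading space and print the space-separated list.
echo "${servers# }"
```

Paste the output into the NACOS_SERVERS env value whenever the topology changes.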

5. Create the services

apiVersion: v1
kind: Service
metadata:
  name: nacos-headless
  namespace: {namespace}
  labels:
    app: nacos
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
  - protocol: TCP
    port: 8848
    targetPort: 8848
    name: nacos-http
  - protocol: TCP
    port: 9848
    targetPort: 9848
    name: nacos-rpc
  - protocol: TCP
    port: 9849
    targetPort: 9849
    name: raft-rpc
  - protocol: TCP
    port: 7848
    targetPort: 7848
    name: old-raft-rpc
  clusterIP: None
  selector:
    app: nacos

---
apiVersion: v1
kind: Service
metadata:
  name: nacos
  namespace: {namespace}
  labels:
    app: nacos
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
  - protocol: TCP
    port: 8848
    targetPort: 8848
    name: nacos-http
  - protocol: TCP
    port: 9848
    targetPort: 9848
    name: nacos-rpc
  - protocol: TCP
    port: 9849
    targetPort: 9849
    name: raft-rpc
  - protocol: TCP
    port: 7848
    targetPort: 7848
    name: old-raft-rpc
  selector:
    app: nacos


The manifests above are based on https://github.com/nacos-group/nacos-k8s

One pitfall, though: the nacos-k8s manifests are missing the following environment variable:

          - name: MYSQL_SERVICE_HOST
            valueFrom:
              configMapKeyRef:
                name: nacos-cm
                key: mysql.db.host

Also, for convenient access from within the cluster, I create an additional nacos Service (the second, non-headless one above) that fronts the whole cluster directly.


In all of the above, replace {namespace} with your own namespace.
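Since the placeholder appears in many places across the manifests, substituting it with sed before applying is less error-prone than hand-editing. A sketch (the file name nacos-cluster.yaml and the namespace are examples):

```shell
#!/bin/sh
# Fill in the {namespace} placeholder in a manifest read from stdin.
NS=my-namespace   # assumption: your target namespace
render() {
  # sed treats { and } literally here, so this replaces every {namespace}.
  sed "s/{namespace}/$NS/g"
}
# Example: render a single manifest line.
echo "  namespace: {namespace}" | render   # prints "  namespace: my-namespace"
# Typical use (file name is an example):
#   render < nacos-cluster.yaml | kubectl apply -f -
```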