In the plain PV/PVC model, PVs must be created ahead of time and each PVC binds one-to-one to a PV. If thousands of PVCs are requested, thousands of PVs must be created by hand, which is a heavy maintenance burden for operators. Kubernetes therefore provides a mechanism for creating PVs automatically, called StorageClass, which acts as a template for PVs: a cluster administrator creates a StorageClass, and PVs are then generated dynamically for PVCs to consume. Every StorageClass contains the fields provisioner, parameters, and reclaimPolicy.

Concretely, a StorageClass defines two things:

  • The attributes of the PVs, such as their size and type;
  • The storage plugin used to create such PVs, such as Ceph or NFS.

With these two pieces of information, Kubernetes can match a user-submitted PVC to the corresponding StorageClass, invoke the storage plugin that the StorageClass declares, and create the required PV.

1.1 Viewing the fields a StorageClass needs

Help command: kubectl explain storageclass

root@k8s-master:~/K8sStudy/Chapter2-11# kubectl explain storageclass |grep '<*>'
  allowVolumeExpansion  <boolean>
  allowedTopologies     <[]TopologySelectorTerm>
  apiVersion    <string>
  kind  <string>
  metadata      <ObjectMeta>
  mountOptions  <[]string>
  parameters    <map[string]string>
  provisioner   <string> -required-
  reclaimPolicy <string>
  volumeBindingMode     <string>

provisioner: the supplier. A StorageClass needs a provisioner, which determines what storage backend is used to create PVs. Common provisioners are listed below:

Official documentation: https://kubernetes.io/zh/docs/concepts/storage/storage-classes/

[screenshot: table of common provisioners]

A provisioner can be internal (in-tree) or supplied externally. For an external provisioner, see the approach provided under https://github.com/kubernetes-incubator/external-storage/ and the library at https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner

Taking NFS as an example: to use NFS we need an nfs-client auto-provisioning program, referred to as the provisioner. This program uses an NFS server that we have already configured to create persistent volumes automatically, i.e. it creates PVs for us.

reclaimPolicy: the reclaim policy for dynamically created PVs; it can be Delete (the default) or Retain.

allowVolumeExpansion: allows volume expansion. PersistentVolumes can be configured to be expandable. When this field is set to true, users can resize a volume by editing the corresponding PVC object. The following volume types support expansion when the underlying StorageClass has allowVolumeExpansion set to true.

[screenshot: table of volume types that support expansion]

Note: this feature can only be used to grow a volume, not to shrink it.
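Putting the fields above together, a complete StorageClass manifest might look like the following sketch (the provisioner name and the parameters are illustrative values for an external NFS plugin, not taken from this cluster):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs-sc                  # example name
provisioner: example.com/external-nfs   # required: the plugin that creates PVs
parameters:                             # plugin-specific PV attributes
  server: nfs-server.example.com        # assumed NFS server address
  path: /share                          # assumed exported path
reclaimPolicy: Retain                   # Delete is the default
allowVolumeExpansion: true              # PVCs of this class may be resized upward
volumeBindingMode: Immediate            # provision as soon as the PVC is created
```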

1.2 Installing the nfs provisioner used to generate PVs dynamically with a StorageClass

1.2.1 Creating the sa account needed to run nfs-provisioner

Help command for creating a serviceaccount: kubectl explain sa

root@k8s-master:~/K8sStudy/Chapter2-11# kubectl explain sa |grep '<*>'
  apiVersion    <string>
  automountServiceAccountToken  <boolean>
  imagePullSecrets      <[]LocalObjectReference>
  kind  <string>
  metadata      <ObjectMeta>
  secrets       <[]ObjectReference>

Create the sa account that nfs-provisioner will run as:

View the yaml manifest file Eg-ServiceAccount.yaml:

root@k8s-master:~/K8sStudy/Chapter2-11# cat Eg-ServiceAccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner

Apply/update the manifest file Eg-ServiceAccount.yaml:

root@k8s-master:~/K8sStudy/Chapter2-11# kubectl apply -f Eg-ServiceAccount.yaml 
serviceaccount/nfs-provisioner created
root@k8s-master:~/K8sStudy/Chapter2-11# kubectl get sa
NAME              SECRETS   AGE
default           0         61d
nfs-provisioner   0         5s

Aside: what is a sa?

sa is short for serviceaccount.

A serviceaccount exists so that processes inside a Pod can conveniently call the Kubernetes API or other external services. Once a pod is created with a serviceaccount specified, the pod runs with the permissions of that account.
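As a minimal sketch (the pod name is illustrative), a pod opts into an account through spec.serviceAccountName:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                    # example name
spec:
  serviceAccountName: nfs-provisioner  # processes in this pod act with this account's permissions
  containers:
  - name: app
    image: nginx:latest
```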

1.2.2 Granting permissions to the sa

Grant the sa account nfs-provisioner administrator privileges via a clusterrolebinding: kubectl create clusterrolebinding nfs-provisioner-clusterrolebinding --clusterrole=cluster-admin --serviceaccount=default:nfs-provisioner

root@k8s-master:~/K8sStudy/Chapter2-11# kubectl create clusterrolebinding nfs-provisioner-clusterrolebinding --clusterrole=cluster-admin --serviceaccount=default:nfs-provisioner
clusterrolebinding.rbac.authorization.k8s.io/nfs-provisioner-clusterrolebinding created
root@k8s-master:~/K8sStudy/Chapter2-11# 
root@k8s-master:~/K8sStudy/Chapter2-11# kubectl get clusterrolebinding |grep nfs-provisioner
nfs-provisioner-clusterrolebinding                     ClusterRole/cluster-admin                                                          47s
root@k8s-master:~/K8sStudy/Chapter2-11#
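For reference, the imperative command above is equivalent to applying the following declarative manifest (note that cluster-admin grants far more than the provisioner strictly needs; it is used here for simplicity):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-clusterrolebinding
roleRef:                             # the role being granted
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:                            # who receives it
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: default
```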

1.2.3 Installing the nfs-provisioner program

1. Create the NFS shared directory /data/volumes/nfs-provisioner

root@k8s-master:~/K8sStudy/Chapter2-11# mkdir /data/volumes/nfs-provisioner
root@k8s-master:~/K8sStudy/Chapter2-11# vim /etc/exports 
root@k8s-master:~/K8sStudy/Chapter2-11# cat /etc/exports 
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/data/volumes-nfs *(rw,no_root_squash)
/data/volumes/v1 *(rw,no_root_squash)
/data/volumes/v2 *(rw,no_root_squash)
/data/volumes/v3 *(rw,no_root_squash)
/data/volumes/nfs-provisioner *(rw,no_root_squash)
root@k8s-master:~/K8sStudy/Chapter2-11# exportfs -arv
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes-nfs".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes/v1".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes/v2".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes/v3".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exportfs: /etc/exports [5]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes/nfs-provisioner".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exporting *:/data/volumes/nfs-provisioner
exporting *:/data/volumes/v3
exporting *:/data/volumes/v2
exporting *:/data/volumes/v1
exporting *:/data/volumes-nfs
root@k8s-master:~/K8sStudy/Chapter2-11# showmount -e 192.168.60.140
Export list for 192.168.60.140:
/data/volumes/nfs-provisioner *
/data/volumes/v3              *
/data/volumes/v2              *
/data/volumes/v1              *
/data/volumes-nfs             *
root@k8s-master:~/K8sStudy/Chapter2-11#
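Incidentally, the exportfs warnings above can be silenced by adding no_subtree_check explicitly to each export entry, e.g. for the provisioner directory (a sketch of a single /etc/exports line):

```
/data/volumes/nfs-provisioner *(rw,no_root_squash,no_subtree_check)
```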

Create the nfs-provisioner from Eg-Deployment-nfs-provisioner.yaml:

root@k8s-master:~/K8sStudy/Chapter2-11# cat Eg-Deployment-nfs-provisioner.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
      - name: nfs-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: nfs-client
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: nfs-provisioner
        - name: NFS_SERVER
          value: 192.168.60.140
        - name: NFS_PATH
          value: /data/volumes/nfs-provisioner
      volumes:
      - name: nfs-client
        nfs:
          server: 192.168.60.140
          path: /data/volumes/nfs-provisioner

Apply/update the manifest file Eg-Deployment-nfs-provisioner.yaml:

root@k8s-master:~/K8sStudy/Chapter2-11# kubectl apply -f Eg-Deployment-nfs-provisioner.yaml
deployment.apps/nfs-provisioner created
root@k8s-master:~/K8sStudy/Chapter2-11# kubectl get deployment
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
nfs-provisioner   1/1     1            1           96s

Check that nfs-provisioner is running properly:

[screenshot: nfs-provisioner pod in Running state]


1.3 Creating a storageclass for dynamic pv provisioning

View the yaml manifest file Eg-nfs-storageclass.yaml:

root@k8s-master:~/K8sStudy/Chapter2-11# cat Eg-nfs-storageclass.yaml 
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-storageclass
provisioner: nfs-provisioner

Apply/update the yaml manifest file Eg-nfs-storageclass.yaml:

root@k8s-master:~/K8sStudy/Chapter2-11# kubectl apply -f Eg-nfs-storageclass.yaml
storageclass.storage.k8s.io/nfs-storageclass created
root@k8s-master:~/K8sStudy/Chapter2-11#

Check whether the storageclass was created successfully:

[screenshot: nfs-storageclass listed by kubectl get storageclass]

Output like the above means the storageclass was created successfully.

Note: the value written for provisioner, nfs-provisioner, must match the value of the PROVISIONER_NAME env variable set when installing the nfs provisioner, as shown below:

env:
- name: PROVISIONER_NAME
  value: nfs-provisioner

1.4 Creating a pvc that generates a pv dynamically through the storageclass

View the yaml manifest file Eg-pvc.yaml:

root@k8s-master:~/K8sStudy/Chapter2-11# cat Eg-pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs-provisioner
spec:
  accessModes:  ["ReadWriteMany"]
  resources:
    requests:
      storage: 20M
  storageClassName:  nfs-storageclass

Apply/update the manifest file Eg-pvc.yaml:

root@k8s-master:~/K8sStudy/Chapter2-11# kubectl apply -f Eg-pvc.yaml
persistentvolumeclaim/pvc-nfs-provisioner created
root@k8s-master:~/K8sStudy/Chapter2-11#

Check whether a pv was generated dynamically, and whether the pvc was created successfully and bound to it.

[screenshot: pvc-nfs-provisioner bound to a dynamically created pv]

The output above shows that the pvc pvc-nfs-provisioner was created successfully and is bound to the pv pvc-2232b6b3-1589-4573-a537-d732170e596b, which was generated automatically by the storageclass through the nfs provisioner.

Verify that the pv created a subdirectory in the NFS shared directory:

[screenshot: pv subdirectory under /data/volumes/nfs-provisioner]

Summary of the steps:

1. Supplier: create an nfs provisioner.

2. Create a storageclass that points at the provisioner just created.

3. Create a pvc that specifies the storageclass.

1.5 Creating a pod that mounts the pvc dynamically generated by the storageclass: pvc-nfs-provisioner

View the yaml manifest file Eg-Pod-client.yaml:

root@k8s-master:~/K8sStudy/Chapter2-11# cat Eg-Pod-client.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-client-storageclass
spec:
  containers:
  - name: nginx
    image: nginx:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - name: nfs-pvc
        mountPath: /usr/share/nginx/html
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: pvc-nfs-provisioner

Apply/update the manifest file Eg-Pod-client.yaml:

root@k8s-master:~/K8sStudy/Chapter2-11# kubectl apply -f Eg-Pod-client.yaml
pod/pod-client-storageclass created

Check whether the pod was created successfully:

[screenshot: pod-client-storageclass in Running state]

Request the web site served by the pod:

[screenshot: request to the pod returns 403 Forbidden]

The site returns a 403 error. The reason is that the pv behind the mounted pvc is an empty directory; mounting it at /usr/share/nginx/html in the pod hides the default index.html, and with no page to serve, nginx returns 403. On the node, create an index.html file in the pv's shared directory with the content pod-client-storageclass, then request the site in the pod again to check whether the response is pod-client-storageclass.

[screenshot: request to the pod now returns pod-client-storageclass]

This proves that the pv dynamically created by the storageclass is mounted into the pod correctly.