一、Introduction to nfs-client-provisioner
nfs-client-provisioner dynamically provisions PVs for Kubernetes. It is a simple external NFS provisioner: it does not provide NFS itself, but relies on an existing NFS server for storage. Persistent volume directories are named following the pattern ${namespace}-${pvcName}-${pvName}.
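For example (hypothetical names): a PVC www-web-0 in the default namespace, bound to a PV pvc-a3b1..., would be backed by a subdirectory such as /data/nfs_provisioner/default-www-web-0-pvc-a3b1... on the NFS export.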
External NFS provisioners for K8S fall into two categories according to how they work (as an NFS server or as an NFS client):
nfs-client:
It mounts a remote NFS server to a local directory via the NFS driver built into K8S, then registers itself as the storage provider backing a StorageClass. When a user creates a PVC to request a PV, the provisioner compares the PVC's requirements against its own properties; once they match, it creates a subdirectory for the PV inside the locally mounted NFS directory, providing Pods with dynamic storage. (A minimal PVC example is shown at the end of this section.)
nfs-server:
Unlike nfs-client, this driver does not use the K8S NFS driver to mount a remote NFS export locally and then carve it up. Instead, it maps a local directory into the container and runs ganesha.nfsd inside the container to serve NFS itself; each time a PV is created, it creates the corresponding folder directly under its local NFS root and exports that subdirectory.
This article describes how to use nfs-client-provisioner with an NFS server as the persistent-storage backend for Kubernetes, provisioning PVs dynamically. Prerequisites: an NFS server is already installed and is reachable over the network from the Kubernetes worker nodes. The nfs-client driver is deployed into the K8S cluster as a Deployment, which then provides the storage service.
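For reference, once the provisioner and StorageClass from section 三 are in place, a bare PVC like the following is enough to trigger dynamic provisioning (a minimal sketch; the name test-claim is illustrative):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-storage  # the StorageClass created in 3.3
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi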
二、Preparing the NFS server
2.0 Current environment
[root@k8s-master]-[~/nfs-provisioner]-#kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready control-plane 4d19h v1.24.0 10.0.0.16 <none> CentOS Linux 7 (Core) 3.10.0-1160.45.1.el7.x86_64 containerd://1.6.2
k8s-node-1 Ready <none> 4d19h v1.24.0 10.0.0.17 <none> CentOS Linux 7 (Core) 3.10.0-1160.45.1.el7.x86_64 containerd://1.6.2
k8s-node-2 Ready <none> 4d19h v1.24.0 10.0.0.18 <none> CentOS Linux 7 (Core) 3.10.0-1160.45.1.el7.x86_64 containerd://1.6.2
2.1 Install the NFS server packages via yum
#Check which nfs/rpc packages are already installed
rpm -qa | egrep "nfs|rpc"
yum -y install nfs-utils rpcbind
2.2 Start the services and enable them at boot
#Start rpcbind and nfs-server, and enable both at boot
systemctl start rpcbind.service
systemctl enable rpcbind.service
#--now also starts the service immediately ("nfs" is an alias for nfs-server on CentOS 7)
systemctl enable nfs-server --now
#Check whether the NFS server started successfully
systemctl status nfs-server
2.3 Edit the config file to define the shared directory
[root@k8s-node-2]-[/data/nfs_provisioner]-#vim /etc/exports
/data/nfs_provisioner 10.0.0.0/24(rw,no_root_squash)
#Re-export the shares; the config takes effect without restarting the NFS service
exportfs -arv
Export options used in the exports file above: rw grants read-write access; no_root_squash lets the client's root keep root privileges on the share (by default root is squashed to an unprivileged user). A few other common options are shown below.
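For reference, an illustrative /etc/exports line with other frequently used options (the path /data/other_share is hypothetical and not part of this setup):

/data/other_share 10.0.0.0/24(ro,sync,all_squash)
#ro          read-only access
#sync        commit writes to disk before replying (safer than async)
#all_squash  map all client users, including root, to an unprivileged user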
2.4 Test mounting from a client
2.4.1 Install nfs-utils on the client; without it, NFS mounts will fail
yum -y install nfs-utils
#Only the nfs-utils package is needed for mounting; the nfs-server service does not have to run on the client
2.4.2 Create the shared directory on the nfs-server
mkdir -p /data/nfs_provisioner
2.4.3 Test from the client
#List the directories shared by the nfs-server
[root@k8s-master]-[~]-#showmount -e 10.0.0.18
Export list for 10.0.0.18:
/data/nfs_provisioner 10.0.0.0/24
#Create a local directory as the mount point
mkdir /nfs_data
#Mount the nfs-server's shared directory onto it
mount -t nfs 10.0.0.18:/data/nfs_provisioner /nfs_data
#Verify the mount with df -Th
[root@k8s-node-1]-[~]-#df -Th /nfs_data/
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 39G 7.9G 31G 21% /
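If the mount succeeded, the Filesystem column should show 10.0.0.18:/data/nfs_provisioner with type nfs4 rather than a local device. A quick write test confirms the share is usable (testfile is a throwaway name):

touch /nfs_data/testfile
ls -l /nfs_data/
#Clean up and unmount after the test
rm -f /nfs_data/testfile
umount /nfs_data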
三、Deploy nfs-provisioner
3.1 Create the ServiceAccount and RBAC rules
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
Save the manifest as sa.yaml and apply it:
kubectl apply -f sa.yaml
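Optionally confirm that the RBAC objects exist:

kubectl get serviceaccount nfs-client-provisioner
kubectl get clusterrole nfs-client-provisioner-runner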
3.2 Create the Deployment
Note: use a reasonably recent image, otherwise provisioning fails with: unexpected error getting claim reference: selfLink was empty, can't make reference
Note: do NOT work around the selfLink problem by setting --feature-gates=RemoveSelfLink=false on kube-apiserver. In k8s 1.24.0 this gate defaults to true and can no longer be set to false; the apiserver will fail to start!
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner # must match the provisioner field of the StorageClass in 3.3
            - name: NFS_SERVER
              value: 10.0.0.18
            - name: NFS_PATH
              value: /data/nfs_provisioner
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.18
            path: /data/nfs_provisioner
Save the manifest as deployment.yaml and apply it:
kubectl apply -f deployment.yaml
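Then check that the provisioner Pod is running and its log shows no errors (illustrative commands):

kubectl get pods -l app=nfs-client-provisioner
kubectl logs deploy/nfs-client-provisioner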
3.3 Create the StorageClass
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: nfs-storage
provisioner: nfs-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete
Save the manifest as sc.yaml and apply it:
kubectl apply -f sc.yaml
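Verify that the StorageClass exists and is marked as the default (via the is-default-class annotation above); nfs-storage should be listed with (default) after its name:

kubectl get storageclass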
四、Create a test application to verify dynamic PV provisioning
4.1 Create an nginx application
Apply the manifest below, saved as nginx_sts_pvc.yaml:
kubectl apply -f nginx_sts_pvc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "nfs-storage" # use the newly created StorageClass
        resources:
          requests:
            storage: 10Mi
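Once applied, the StatefulSet should bring up both replicas, and the volumeClaimTemplates should create one PVC per Pod (www-web-0 and www-web-1). Illustrative checks:

kubectl get pods -l app=nginx
kubectl get pvc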
4.2 Check the results
Check on the nfs-server whether the PV directories were created:
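A minimal verification sketch, assuming the paths and names used above (the exact pvc-<uid> parts of the PV names will differ per cluster):

kubectl get pvc
kubectl get pv
#On the nfs-server, each PV is backed by a ${namespace}-${pvcName}-${pvName} subdirectory
ls /data/nfs_provisioner/
#Expect entries like default-www-web-0-pvc-... and default-www-web-1-pvc-...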