I. PV and PVC
pv: analogous to a disk partition
pvc: analogous to a request for disk space
Using NFS for storage requires users to know how to set up an NFS server and how to configure nfs in YAML. Since Kubernetes supports many storage systems, expecting every user to master all of them is unrealistic. To hide the details of the underlying storage implementation and make storage easier to consume, Kubernetes introduces two resource objects: PV and PVC.
1. A PersistentVolume (PV) is a piece of network storage in the cluster that has been provisioned by an administrator.
PVs are cluster resources, just as nodes are. A PV is a volume plugin, similar to Volumes, but it has a lifecycle independent of any individual Pod that uses it. This API object captures the details of the storage implementation, be it NFS, iSCSI, or a cloud-provider-specific storage system.
2. A PersistentVolumeClaim (PVC) is a user's request for storage.
How PVCs are used: define a volume of type PVC in the Pod, specifying the required size; the PVC must be bound to a matching PV, which the claim requests according to its definition, and the PV in turn is backed by actual storage. PV and PVC are storage abstractions provided by Kubernetes.
- PV: "persistent volume", an abstraction over the underlying shared storage
- PVC (PersistentVolumeClaim): a declaration of a persistent-storage requirement; in effect, a resource request that the user submits to the Kubernetes system
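As a minimal sketch of that flow (the claim name `mypvc` and the image are illustrative placeholders, not taken from this text), a Pod consumes a PVC like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo
spec:
  containers:
    - name: app
      image: nginx                # placeholder image
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mypvc          # must name an existing PVC in the same namespace
```

The Pod only names the claim; which PV actually backs it is resolved by the PVC/PV binding described above.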
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  nfs: # storage type; must match the actual backing storage
  capacity: # storage capacity; currently only the storage size can be set
    storage: 2Gi
  accessModes: # access modes
  storageClassName: # storage class
  persistentVolumeReclaimPolicy: # reclaim policy
With PV and PVC, the work can be divided further:
Storage: maintained by storage engineers
PV: maintained by Kubernetes administrators
PVC: maintained by Kubernetes users
II. Access Modes
1. PV access modes (accessModes)
- ReadWriteOnce (RWO): the volume can be mounted read-write by a single node
- ReadOnlyMany (ROX): the volume can be mounted read-only by many nodes
- ReadWriteMany (RWX): the volume can be mounted read-write by many nodes
2. PV reclaim policies (persistentVolumeReclaimPolicy)
- Retain: keep the data; an administrator must clean it up manually before reuse
- Recycle: scrub the volume's contents (deprecated; only supported by a few plugins such as NFS and HostPath)
- Delete: delete the backing storage asset along with the PV (typically cloud-provider volumes)
3. PV states
- Available: free and not yet bound to any claim
- Bound: bound to a PVC
- Released: the PVC has been deleted, but the resource has not yet been reclaimed by the cluster
- Failed: automatic reclamation failed
Experiment environment
Install NFS
# 1. Create the directories
[root@k8s-m-01 ~]# mkdir /root/data/{pv1,pv2,pv3} -pv
# 2. Export the shares
[root@k8s-m-01 ~]# vim /etc/exports
/root/data/pv1 192.168.15.0/24(rw,no_root_squash)
/root/data/pv2 192.168.15.0/24(rw,no_root_squash)
/root/data/pv3 192.168.15.0/24(rw,no_root_squash)
# 3. Restart the service
[root@k8s-m-01 ~]# systemctl restart nfs
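The steps above can be sketched as a single loop (same paths and subnet as above; writing the lines into /etc/exports and restarting NFS are still the manual steps shown):

```shell
# Recreate step 1 and print the export lines from step 2.
for d in pv1 pv2 pv3; do
  mkdir -p "/root/data/$d" 2>/dev/null || true   # step 1; skipped silently if not permitted
  echo "/root/data/$d 192.168.15.0/24(rw,no_root_squash)"
done
```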
Create pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # reclaim policy
  storageClassName: nfs                   # class name
  nfs:                                    # NFS storage
    path: /root/data/pv1                  # NFS export path
    server: 192.168.15.30                 # address of the NFS server
[root@k8s-master-01 pv]# kubectl apply -f pv.yaml
persistentvolume/pv1 created
[root@k8s-master-01 pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv1 10Gi RWX Retain Available nfs 4s
3. PVC
A PVC is a request for storage resources, declaring requirements on storage size, access modes, and storage class. A resource manifest template:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
  namespace: dev
spec:
  accessModes: # access modes
  selector: # select PVs by label
  storageClassName: # storage class
  resources: # requested capacity
    requests:
      storage: 5Gi
Key PVC configuration parameters:
Access modes (accessModes)
Describe the application's access rights to the storage resource.
Selector (selector)
A label selector that filters which of the PVs already in the system the PVC may bind to.
Storage class (storageClassName)
A PVC can specify the class of backend storage it requires; only PVs with that class can be selected for it.
Resource request (resources)
Describes the amount of storage requested.
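Putting the selector and class together (the label `storage-tier: fast` is illustrative, not taken from the manifests in this text), a claim that binds only to labeled, `nfs`-class PVs might look like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-selected
  namespace: dev
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs          # only PVs with storageClassName: nfs qualify
  selector:
    matchLabels:
      storage-tier: fast         # only PVs carrying this label qualify
  resources:
    requests:
      storage: 1Gi
```

A PV must satisfy all of these constraints at once (class, labels, access mode, and capacity) for the bind to succeed.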
Experiment
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
  namespace: dev
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
  namespace: dev
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
  namespace: dev
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi # if a PVC requests more than any available PV offers, it will not bind
# 1. Create the PVCs
[root@k8s-m-01 ~]# kubectl create -f pvc.yaml
persistentvolumeclaim/pvc1 created
persistentvolumeclaim/pvc2 created
persistentvolumeclaim/pvc3 created
# 2. Check the PVCs
[root@k8s-m-01 ~]# kubectl get pvc -n dev -o wide
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
pvc1 Bound pv1 1Gi RWX 15s Filesystem
pvc2 Bound pv2 2Gi RWX 15s Filesystem
pvc3 Bound pv3 3Gi RWX 15s Filesystem
# 3. Check the PVs (PVs are cluster-scoped, so the -n flag is ignored here)
[root@k8s-m-01 k8s]# kubectl get pv -n dev
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv1 1Gi RWX Retain Bound dev/pvc1 4m25s
pv2 2Gi RWX Retain Bound dev/pvc2 4m25s
pv3 3Gi RWX Retain Bound dev/pvc3
2) Create pods.yaml, using the PVs
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /root/data/pv2
    server: 192.168.15.31
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /root/data/pv3
    server: 192.168.15.31
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None # headless service ("None" must be capitalized)
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet # Pods are created one at a time, in order
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: wangyanglinux/myapp:v1
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: "nfs"
        resources:
          requests:
            storage: 1Gi
# volumeClaimTemplates creates one PVC per replica, named www-web-<ordinal>
[root@k8s-master-01 pv]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
www-web-0 Bound pv3 1Gi RWX nfs 10s
www-web-1 Bound pv2 2Gi RWX nfs 4s
[root@k8s-master-01 pv]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 17s
web-1 1/1 Running 0 11s
web-2 0/1 Pending 0 5s
[root@k8s-master-01 pv]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 28s
web-1 1/1 Running 0 22s
web-2 0/1 Pending 0 16s
[root@k8s-master-01 pv]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 29s
web-1 1/1 Running 0 23s
Testing
# Inspect pv2's storage
[root@k8s-master-01 pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv2 2Gi RWX Retain Bound default/www-web-1 nfs 93s
pv3 1Gi RWX Retain Bound default/www-web-0 nfs 93s
[root@k8s-master-01 pv]# kubectl describe pv pv2
Name: pv2
Labels: <none>
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: nfs
Status: Bound
Claim: default/www-web-1
Reclaim Policy: Retain
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 2Gi
Node Affinity: <none>
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 192.168.15.31
Path: /root/data/pv2
ReadOnly: false
Events: <none>
# Go into the mounted directory and create an index.html file
[root@k8s-master-01 pv]# cd /root/data/pv2
[root@k8s-master-01 pv2]# ls
[root@k8s-master-01 pv2]# vim index.html
[root@k8s-master-01 pv2]# cat index.html
aaaaaa
[root@k8s-master-01 pv2]# chmod 777 index.html
[root@k8s-master-01 pv2]# kubectl get pv -o wide
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
pv2 2Gi RWX Retain Bound default/www-web-1 nfs 4m20s Filesystem
pv3 1Gi RWX Retain Bound default/www-web-0 nfs 4m20s Filesystem
[root@k8s-master-01 pv2]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-0 1/1 Running 0 4m14s 10.244.2.120 k8s-node-02 <none> <none>
web-1 1/1 Running 0 4m8s 10.244.1.96 k8s-node-01 <none> <none>
web-2 0/1 Pending 0 4m2s <none> <none> <none> <none>
[root@k8s-master-01 pv2]# curl 10.244.1.96
aaaaaa
With a StatefulSet, the Pod name is stable: when a Pod such as web-1 is deleted and recreated, its name stays the same but its IP address changes.
[root@k8s-master-01 pv2]# kubectl delete pods web-1
pod "web-1" deleted
[root@k8s-master-01 pv2]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 9m26s
web-1 1/1 Running 0 58s
web-2 0/1 Pending 0 9m14s
[root@k8s-master-01 pv2]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-0 1/1 Running 0 9m35s 10.244.2.120 k8s-node-02 <none> <none>
web-1 1/1 Running 0 67s 10.244.1.97 k8s-node-01 <none> <none>
web-2 0/1 Pending 0 9m23s <none> <none> <none> <none>
# The new IP address serves the same content
[root@k8s-master-01 pv2]# curl 10.244.1.97
aaaaaa
[root@k8s-master-01 pv2]#
The other PVs behave similarly.
About StatefulSet
[root@k8s-master-01 pv2]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test-pd 1/1 Running 0 14s
web-0 1/1 Running 0 16m
web-1 1/1 Running 0 8m16s
web-2 0/1 Pending 0 16m
[root@k8s-master-01 pv2]# kubectl exec -it test-pd -- sh
/ # ping web-0.nginx
ping: bad address 'web-0.nginx'
/ # ping web-0.nginx
PING web-0.nginx (10.244.2.120): 56 data bytes
64 bytes from 10.244.2.120: seq=0 ttl=64 time=4.891 ms
64 bytes from 10.244.2.120: seq=1 ttl=64 time=0.209 ms
64 bytes from 10.244.2.120: seq=2 ttl=64 time=0.196 ms
64 bytes from 10.244.2.120: seq=3 ttl=64 time=0.131 ms
64 bytes from 10.244.2.120: seq=4 ttl=64 time=0.128 ms
[root@k8s-master-01 pv2]# kubectl get pods -o wide -n kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-f68b4c98f-nkqlm 1/1 Running 2 22d 10.244.0.7 k8s-master-01 <none> <none>
coredns-f68b4c98f-wzrrq 1/1 Running 2 22d 10.244.0.6 k8s-master-01 <none> <none>
etcd-k8s-master-01 1/1 Running 3 22d 192.168.15.31 k8s-master-01 <none> <none>
kube-apiserver-k8s-master-01 1/1 Running 3 22d 192.168.15.31 k8s-master-01 <none> <none>
kube-controller-manager-k8s-master-01 1/1 Running 4 22d 192.168.15.31 k8s-master-01 <none> <none>
kube-flannel-ds-8zj9t 1/1 Running 1 11d 192.168.15.32 k8s-node-01 <none> <none>
kube-flannel-ds-jmq5p 1/1 Running 0 11d 192.168.15.33 k8s-node-02 <none> <none>
kube-flannel-ds-vjt8b 1/1 Running 4 11d 192.168.15.31 k8s-master-01 <none> <none>
kube-proxy-kl2qj 1/1 Running 2 22d 192.168.15.31 k8s-master-01 <none> <none>
kube-proxy-rrlg4 1/1 Running 1 22d 192.168.15.32 k8s-node-01 <none> <none>
kube-proxy-tc2nd 1/1 Running 0 22d 192.168.15.33 k8s-node-02 <none> <none>
kube-scheduler-k8s-master-01 1/1 Running 4 22d 192.168.15.31 k8s-master-01 <none> <none>
[root@k8s-master-01 pv2]# dig -t A nginx.default.svc.cluster.local. @10.244.0.7
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.8 <<>> -t A nginx.default.svc.cluster.local. @10.244.0.7
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26852
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;nginx.default.svc.cluster.local. IN A
;; ANSWER SECTION:
nginx.default.svc.cluster.local. 30 IN A 10.111.55.241
;; Query time: 7 msec
;; SERVER: 10.244.0.7#53(10.244.0.7)
;; WHEN: 一 12月 27 00:00:38 CST 2021
;; MSG SIZE rcvd: 107
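The names resolved above follow StatefulSet's stable-network-identity pattern, `<pod>.<service>.<namespace>.svc.cluster.local`, which can be sketched as pure string assembly (no cluster access needed):

```shell
# Compose the stable DNS name of a StatefulSet Pod.
pod=web-0; svc=nginx; ns=default
echo "${pod}.${svc}.${ns}.svc.cluster.local"
# → web-0.nginx.default.svc.cluster.local
```

This is why web-0.nginx keeps resolving to the current Pod IP even after the Pod is recreated.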
Delete the corresponding Pods, Service, StatefulSet, PVs, and PVCs
[root@k8s-master-01 pv]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test-pd 1/1 Running 0 18m
web-0 1/1 Running 0 34m
web-1 1/1 Running 0 26m
web-2 0/1 Pending 0 34m
[root@k8s-master-01 pv]# kubectl delete -f pod.yaml
service "nginx" deleted
statefulset.apps "web" deleted
[root@k8s-master-01 pv]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test-pd 1/1 Running 0 18m
web-0 0/1 Terminating 0 35m
web-1 0/1 Terminating 0 26m
[root@k8s-master-01 pv]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22d
[root@k8s-master-01 pv]# kubectl delete statefulsets.apps --all
No resources found
[root@k8s-master-01 pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv2 2Gi RWX Retain Bound default/www-web-1 nfs 35m
pv3 1Gi RWX Retain Bound default/www-web-0 nfs 35m
[root@k8s-master-01 pv]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
www-web-0 Bound pv3 1Gi RWX nfs 35m
www-web-1 Bound pv2 2Gi RWX nfs 35m
www-web-2 Pending nfs 35m
[root@k8s-master-01 pv]# kubectl delete pvc --all
persistentvolumeclaim "www-web-0" deleted
persistentvolumeclaim "www-web-1" deleted
persistentvolumeclaim "www-web-2" deleted
# The PVs now show Released status
[root@k8s-master-01 pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv2 2Gi RWX Retain Released default/www-web-1 nfs 36m
pv3 1Gi RWX Retain Released default/www-web-0 nfs 36m
# Edit pv2's YAML: the PV stays Released because its claimRef still points at the deleted PVC. Remove the claimRef block with kubectl edit (or, equivalently, kubectl patch pv pv2 --type json -p '[{"op":"remove","path":"/spec/claimRef"}]') to make it Available again.
[root@k8s-master-01 pv]# kubectl edit pv pv2 -o yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"pv2"},"spec":{"accessModes":["ReadWriteMany"],"capacity":{"storage":"2Gi"},"nfs":{"path":"/root/data/pv2","server":"192.168.15.31"},"persistentVolumeReclaimPolicy":"Retain","storageClassName":"nfs"}}
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2021-12-26T15:34:19Z"
finalizers:
- kubernetes.io/pv-protection
name: pv2
resourceVersion: "501755"
uid: 7b9f8b31-f111-4064-9ec7-d06e55f6bebd
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 2Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: www-web-1
namespace: default
resourceVersion: "498363"
uid: 7d47eaf8-8bed-40fc-b790-18e93a8a0398
nfs:
path: /root/data/pv2
# The status is now Available
[root@k8s-master-01 pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv2 2Gi RWX Retain Available nfs 44m
pv3 1Gi RWX Retain Released default/www-web-0 nfs 44m