PV Dynamic Provisioning Demo (NFS)

Dynamic provisioning plugin:

https://github.com/kubernetes-retired/external-storage/tree/master/nfs-client/deploy

Deploying the Storage Plugin

Create the StorageClass

[root@pool1 k8s_yaml]# cat class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "true" # "false": data is removed when the PVC is deleted; "true": data is kept (archived)
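One detail worth calling out: the provisioner value is an arbitrary string, but it must exactly match the PROVISIONER_NAME environment variable in the provisioner Deployment below, or PVCs referencing this class will stay Pending forever. Optionally, the class can also be marked as the cluster default via the standard annotation (not part of the original setup, shown as a sketch):

kubectl patch storageclass managed-nfs-storage \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'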

Deploy the NFS provisioner plugin

[root@pool1 k8s_yaml]# cat deployment.yaml 
……
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes # mount path inside the container
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 172.16.1.20 # NFS server address
            - name: NFS_PATH
              value: /data/k8s_data # NFS shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.1.20
            path: /data/k8s_data
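Before applying this, it is worth confirming that the node can actually reach the NFS export; a quick manual check (assuming the nfs-utils client tools are installed on the node):

# list the exports offered by the NFS server
showmount -e 172.16.1.20
# trial-mount the export, then unmount it
mount -t nfs 172.16.1.20:/data/k8s_data /mnt && umount /mnt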

Granting the provisioner access to the apiserver (RBAC)

# rbac.yaml is quite long, so it is easier to wget it straight from the repo above (or upload it) than to retype it;
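For reference, the upstream rbac.yaml boils down to a ServiceAccount (which the provisioner Deployment must run as) plus a ClusterRole/ClusterRoleBinding covering PVs, PVCs, StorageClasses, and Events. An abbreviated sketch follows; the full upstream file also adds a leader-election Role/RoleBinding:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io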
[root@pool1 kubernetes]# kubectl apply -f class.yaml
[root@pool1 kubernetes]# kubectl apply -f deployment.yaml
[root@pool1 kubernetes]# kubectl apply -f rbac.yaml

[root@pool1 kubernetes]# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-665946598-5lrh2   1/1     Running   0          39s
[root@pool1 k8s_yaml]# kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  6m21s

Deploying a Pod + PVC

[root@pool1 k8s_yaml]# vi deployment-pvc2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auto-pvc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auto-nginx2
  template:
    metadata:
      labels:
        app: auto-nginx2
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nfs-volume
              mountPath: /usr/share/nginx/html
      volumes:
        - name: nfs-volume
          persistentVolumeClaim:
            claimName: my-pvc-auto
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-auto
spec:
  storageClassName: "managed-nfs-storage"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 40Gi

[root@pool1 k8s_yaml]# kubectl apply -f deployment-pvc2.yaml
[root@pool1 k8s_yaml]# kubectl get pods | grep auto-pv
auto-pv-6c89564dd-26kv2   0/1   Pending   0   4m1s
auto-pv-6c89564dd-jm9xp   0/1   Pending   0   4m1s
auto-pv-6c89564dd-w6qvr   0/1   Pending   0   4m1s
[root@pool1 k8s_yaml]# kubectl get pvc
NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS          AGE
my-pvc-auto   Pending                                      managed-nfs-storage   12s
[root@pool1 nfs-storage]# kubectl get pvc
NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS          AGE
my-pvc        Bound     my-pv    5Gi        RWX                                  4h14m
test-pvcpod   Pending                                      managed-nfs-storage   2m45s
# The Pods and the PVC are stuck in Pending!!!
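Before digging into individual objects, a quick look at the most recent cluster events often points straight at the culprit (standard kubectl, shown as a sketch):

# most recent events last; look for provisioning errors
kubectl get events --sort-by=.lastTimestamp | tail -n 20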

Check the PVC events

[root@pool1 nfs-storage]# kubectl describe pvc test-pvcpod
Name:          test-pvcpod
Namespace:     default
StorageClass:  managed-nfs-storage
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: nfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       testpvc-nginx-57cbdf6586-pcck5
Events:
  Type    Reason                Age                  From                         Message
  ----    ------                ----                 ----                         -------
  Normal  ExternalProvisioning  5s (x13 over 2m54s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "fuseim.pri/ifs" or manually created by system administrator
# Nothing obviously wrong here

Check the nfs-client-provisioner logs

[root@pool1 nfs-storage]# kubectl logs nfs-client-provisioner-6cc768c76-6wsch
I0826 16:12:39.236017 1 leaderelection.go:185] attempting to acquire leader lease default/nfs...
E0826 16:12:56.641524 1 event.go:259] Could not construct reference to: '&v1.Endpoints{... Name:"nfs" ...}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'nfs-client-provisioner-6cc768c76-6wsch_6f2fb044-0688-11ec-ab8f-42729f48f3a3 became leader'
I0826 16:12:56.641577 1 leaderelection.go:194] successfully acquired lease default/nfs
I0826 16:12:56.641607 1 controller.go:631] Starting provisioner controller nfs_nfs-client-provisioner-6cc768c76-6wsch_6f2fb044-0688-11ec-ab8f-42729f48f3a3!
I0826 16:12:56.741954 1 controller.go:680] Started provisioner controller nfs_nfs-client-provisioner-6cc768c76-6wsch_6f2fb044-0688-11ec-ab8f-42729f48f3a3!
I0826 16:12:56.742031 1 controller.go:987] provision "default/test-pvcpod" class "managed-nfs-storage": started
E0826 16:12:56.744204 1 controller.go:1004] provision "default/test-pvcpod" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference

After some digging online, the conclusion is that Kubernetes 1.20+ no longer populates the selfLink field (the RemoveSelfLink feature gate is enabled by default), which this old provisioner still depends on. To work around it we need to modify the apiserver configuration.

Solution

[root@pool1 nfs-storage]# vi /etc/kubernetes/manifests/kube-apiserver.yaml
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=172.16.1.20
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --feature-gates=RemoveSelfLink=false	# add this line
    ……
# After saving, wait about 30s (the kubelet restarts the static apiserver Pod on its own) and the PVC will become Bound automatically
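Note that this workaround is a stopgap: RemoveSelfLink can only be set back to false up through Kubernetes 1.23 and was removed entirely in 1.24. The longer-term fix is to switch to the maintained successor of this retired project, nfs-subdir-external-provisioner, whose images no longer rely on selfLink. A sketch of the image swap (the exact tag is an assumption; check the project's releases):

containers:
  - name: nfs-client-provisioner
    # successor image from kubernetes-sigs/nfs-subdir-external-provisioner;
    # tag v4.0.2 is an assumption - verify against the project's releases
    image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2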

Check the Pod, PV, and PVC status

[root@pool1 nfs-storage]# kubectl get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
my-pvc-auto   Bound    pvc-0a413bd3-63f0-4d59-9125-29ad7c3dad7d   40Gi       RWX            managed-nfs-storage   8m59s
[root@pool1 nfs-storage]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS          REASON   AGE
……                                                                                                                                                  5h16m
pvc-58db6622-4e63-4882-aafb-8bbe44c17668   1Gi        RWO            Delete           Bound    default/test-pvcpod   managed-nfs-storage            53m
# Everything showing Bound means it worked!
[root@pool1 k8s_data]# pwd
/data/k8s_data
[root@pool1 k8s_data]# ls
default-my-pvc-auto-pvc-0a413bd3-63f0-4d59-9125-29ad7c3dad7d
# This directory backs the container's /usr/share/nginx/html/ directory
[root@pool1 k8s_data]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
auto-pvc-6c89564dd-75lhv   1/1     Running   0          11m
# The container is Running
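To double-check that the claim is mounted where we expect, you can also inspect the filesystem from inside the Pod (a quick sanity check; kubectl exec with a deployment/ reference assumes a reasonably recent kubectl):

# the html directory should show up as an NFS mount from 172.16.1.20
kubectl exec deployment/auto-pvc -- df -h /usr/share/nginx/html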

Test that the mount works

[root@pool1 k8s_data]# kubectl expose deployment auto-pvc --port=80 --target-port=80
service/auto-pvc exposed
[root@pool1 k8s_data]# kubectl get svc
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
auto-pvc   ClusterIP   10.111.0.88   <none>        80/TCP    8s
[root@pool1 k8s_data]# curl 10.111.0.88
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.14.2</center>
</body>
</html>
[root@pool1 k8s_data]# echo "StorageClass_Pvc" >> default-my-pvc-auto-pvc-0a413bd3-63f0-4d59-9125-29ad7c3dad7d/index.html
[root@pool1 k8s_data]# curl 10.111.0.88
StorageClass_Pvc

Test deleting the Pod and PVC

[root@pool1 nfs-storage]# kubectl delete -f pvc-nginx2-deploy.yaml
deployment.apps "auto-pvc" deleted
persistentvolumeclaim "my-pvc-auto" deleted
[root@pool1 k8s_data]# ls | grep my-pvc-auto
archived-default-my-pvc-auto-pvc-e17f0c73-8934-4cf7-bc13-d9b41cdf2fe1
# The backing directory still exists on the NFS server; it was only renamed and kept
# The reason:
[root@pool1 nfs-storage]# cat class.yaml | grep false
  archiveOnDelete: "true"   # "false": data is removed when the PVC is deleted; "true": data is kept
# Since it is set to "true" here, the backend data is preserved when the PVC is deleted
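If you do not want these archived copies, set archiveOnDelete to "false" in class.yaml and recreate the StorageClass (its parameters are immutable, so a plain kubectl apply over the existing object will be rejected). Directories that were already archived can simply be removed on the NFS server; this is destructive, so double-check the path:

# on the NFS server - permanently deletes the archived PVC data
rm -rf /data/k8s_data/archived-default-my-pvc-auto-*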