The Most Complete Guide Ever: Five Ways to Use NFS as a Storage Volume in K8s

Posted by 琦彦


We can mount NFS (Network File System) into a Pod. Unlike emptyDir, whose contents are erased when the Pod is deleted, the contents of an nfs volume are preserved when the Pod is removed; the volume is simply unmounted. This means an nfs volume can be pre-populated with data, that data can be shared between Pods, and an NFS volume can be mounted by multiple Pods at the same time.

Note: before using NFS volumes, you must run your own NFS server and export the target share.

Although NFS is not officially recommended as PV storage, in practice we sometimes need NFS-backed volumes for various reasons.

The following sections cover several ways to use NFS as a storage volume in Kubernetes:

  1. Use NFS directly in a Deployment/StatefulSet
  2. Create a PersistentVolume of type nfs to back a PersistentVolumeClaim
  3. Provide a StorageClass with csi-driver-nfs
  4. Provide a StorageClass with NFS Subdir External Provisioner
  5. Provide a StorageClass with nfs-ganesha-server-and-external-provisioner

We have already configured an NFS server on 172.26.204.144 and exported the following directories:

[root@node-02 ~]# showmount -e 172.26.204.144
Export list for 172.26.204.144:
/opt/nfs-deployment 172.26.0.0/16
/opt/kubernetes-nfs 172.26.0.0/16
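
For reference, here is a minimal sketch of how such an export might be configured on a CentOS/RHEL host (package names, export options, and paths are assumptions; adapt them to your environment):

# install and enable the NFS server (assumed CentOS/RHEL package names)
yum install -y nfs-utils rpcbind
mkdir -p /opt/nfs-deployment /opt/kubernetes-nfs
# export both directories to the 172.26.0.0/16 network
cat >> /etc/exports <<'EOF'
/opt/nfs-deployment 172.26.0.0/16(rw,sync,no_root_squash)
/opt/kubernetes-nfs 172.26.0.0/16(rw,sync,no_root_squash)
EOF
systemctl enable --now rpcbind nfs-server
exportfs -rav          # re-read /etc/exports and list the active exports

Note that every Kubernetes node also needs the NFS client utilities (nfs-utils) installed so that kubelet can mount the share.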

Using NFS directly in a Deployment/StatefulSet

In the example below, nginx uses an NFS volume to persist its /usr/share/nginx/html directory:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        nfs:
          path: /opt/nfs-deployment
          server: 172.26.204.144
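
Assuming the manifest above is saved locally (the file name below is illustrative), apply it and wait for the replicas to come up:

kubectl apply -f nginx-nfs-deployment.yaml
kubectl get pods -l app=nginx -w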

Entering the pod, we can see that 172.26.204.144:/opt/nfs-deployment is in fact mounted at /usr/share/nginx/html:

[root@master-01 test]# kubectl exec -it nginx-deployment-6dfb66cbd9-lv5c7  bash
root@nginx-deployment-6dfb66cbd9-lv5c7:/usr/share/nginx/html# mount |grep 172
172.26.204.144:/opt/nfs-deployment on /usr/share/nginx/html type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.26.204.144,local_lock=none,addr=172.26.204.144)

If we now create a date.html file under /opt/nfs-deployment on 172.26.204.144, the pod can see it immediately:

# on 172.26.204.144
[root@node-02 ~]# date > /opt/nfs-deployment/date.html 
[root@node-02 ~]# cat /opt/nfs-deployment/date.html
Sun Aug  8 01:36:15 CST 2021
# check inside the pod
root@nginx-deployment-6dfb66cbd9-lv5c7:/usr/share/nginx/html# cd /usr/share/nginx/html/ 
root@nginx-deployment-6dfb66cbd9-lv5c7:/usr/share/nginx/html# ls
date.html
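
Since this directory is nginx's web root, the same file can also be fetched over HTTP; a quick check from a cluster node (the pod IP below is a placeholder, look it up first):

kubectl get pod -l app=nginx -o wide     # note one pod's IP
curl http://<pod-ip>/date.html           # should return the date written above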

Creating a PersistentVolume of type nfs

[root@master-01 test]# cat pv-nfs.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany 
  nfs:
    path: /opt/nfs-deployment
    server: 172.26.204.144
[root@master-01 test]# kubectl apply -f pv-nfs.yaml 
persistentvolume/pv-nfs created
[root@master-01 test]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                 STORAGECLASS          REASON   AGE
pv-nfs                                     10Gi       RWX            Retain           Available                                                                                        4s

After the PV is created, its status is Available, because no PVC has claimed it yet.

Once we create a PVC that uses this PV, the PV's status changes to Bound.

# the claim will automatically bind to the matching PV above based on size and access mode
[root@master-01 test]# cat pvc-nfs.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
[root@master-01 test]# kubectl apply -f pvc-nfs.yaml
persistentvolumeclaim/pvc-nfs created
[root@master-01 test]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs   Bound    pv-nfs   10Gi       RWX                           2s
[root@master-01 test]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                 STORAGECLASS          REASON   AGE
pv-nfs                                     10Gi       RWX            Retain           Bound    default/pvc-nfs                                                                      70s

We can now create a workload that uses this PVC:

[root@master-01 test]# cat dp-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-nfs
[root@master-01 test]# kubectl apply -f dp-pvc.yaml 
deployment.apps/busybox created
# this uses the same NFS directory as before, so the earlier file is still visible
[root@master-01 test]# kubectl exec -it busybox-7cdd999d7d-dwcbq  -- sh 
/ # cat /data/date.html 
Sun Aug  8 01:36:15 CST 2021

NFS CSI Driver

The NFS CSI Driver is a CSI reference driver provided by the Kubernetes community and implements only the minimal CSI functionality. The driver itself only provides the communication layer between resources in the cluster and the NFS server. Using it requires Kubernetes 1.14 or later and a pre-existing NFS server.

Installation

# RBAC rules
[root@master-01 deploy]# cat  rbac-csi-nfs-controller.yaml 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-nfs-controller-sa
  namespace: kube-system

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-external-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-csi-provisioner-binding
subjects:
  - kind: ServiceAccount
    name: csi-nfs-controller-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-external-provisioner-role
  apiGroup: rbac.authorization.k8s.io
# CSIDriver object
---
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: nfs.csi.k8s.io
spec:
  attachRequired: false
  volumeLifecycleModes:
    - Persistent

# the controller consists of the CSI plugin + csi-provisioner + livenessprobe
[root@master-01 deploy]# cat csi-nfs-controller.yaml 
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: csi-nfs-controller
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: csi-nfs-controller
  template:
    metadata:
      labels:
        app: csi-nfs-controller
    spec:
      hostNetwork: true  # controller also needs to mount nfs to create dir
      dnsPolicy: ClusterFirstWithHostNet
      serviceAccountName: csi-nfs-controller-sa
      nodeSelector:
        kubernetes.io/os: linux  # add "kubernetes.io/role: master" to run controller on master node
      priorityClassName: system-cluster-critical
      tolerations:
        - key: "node-role.kubernetes.io/master"
          operator: "Exists"
          effect: "NoSchedule"
        - key: "node-role.kubernetes.io/controlplane"
          operator: "Exists"
          effect: "NoSchedule"
      containers:
        - name: csi-provisioner
          image: k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
          args:
            - "-v=2"
            - "--csi-address=$(ADDRESS)"
            - "--leader-election"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
          resources:
            limits:
              cpu: 100m
              memory: 400Mi
            requests:
              cpu: 10m
              memory: 20Mi
        - name: liveness-probe
          image: k8s.gcr.io/sig-storage/livenessprobe:v2.3.0
          args:
            - --csi-address=/csi/csi.sock
            - --probe-timeout=3s
            - --health-port=29652
            - --v=2
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 20Mi
        - name: nfs
          image: mcr.microsoft.com/k8s/csi/nfs-csi:latest
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          imagePullPolicy: IfNotPresent
          args:
            - "-v=5"
            - "--nodeid=$(NODE_ID)"
            - "--endpoint=$(CSI_ENDPOINT)"
          env:
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
          ports:
            - containerPort: 29652
              name: healthz
              protocol: TCP
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: healthz
            initialDelaySeconds: 30
            timeoutSeconds: 10
            periodSeconds: 30
          volumeMounts:
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
            - mountPath: /csi
              name: socket-dir
          resources:
            limits:
              cpu: 200m
              memory: 200Mi
            requests:
              cpu: 10m
              memory: 20Mi
      volumes:
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: Directory
        - name: socket-dir
          emptyDir: {}

### the node server consists of the CSI plugin + liveness-probe + node-driver-registrar
[root@master-01 deploy]# cat csi-nfs-node.yaml 
---
# This YAML file contains driver-registrar & csi driver nodeplugin API objects
# that are necessary to run CSI nodeplugin for nfs
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-nfs-node
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-nfs-node
  template:
    metadata:
      labels:
        app: csi-nfs-node
    spec:
      hostNetwork: true  # original nfs connection would be broken without hostNetwork setting
      dnsPolicy: ClusterFirstWithHostNet
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
        - operator: "Exists"
      containers:
        - name: liveness-probe
          image: k8s.gcr.io/sig-storage/livenessprobe:v2.3.0
          args:
            - --csi-address=/csi/csi.sock
            - --probe-timeout=3s
            - --health-port=29653
            - --v=2
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 20Mi
        - name: node-driver-registrar
          image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "rm -rf /registration/csi-nfsplugin /registration/csi-nfsplugin-reg.sock"]
          args:
            - --v=2
            - --csi-address=/csi/csi.sock
            - --kubelet-registration-path=/var/lib/kubelet/plugins/csi-nfsplugin/csi.sock
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: nfs
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: mcr.microsoft.com/k8s/csi/nfs-csi:latest
          args:
            - "-v=5"
            - "--nodeid=$(NODE_ID)"
            - "--endpoint=$(CSI_ENDPOINT)"
          env:
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
          ports:
            - containerPort: 29653
              name: healthz
              protocol: TCP
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: healthz
            initialDelaySeconds: 30
            timeoutSeconds: 10
            periodSeconds: 30
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/csi-nfsplugin
            type: DirectoryOrCreate
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: Directory
        - hostPath:
            path: /var/lib/kubelet/plugins_registry
            type: Directory
          name: registration-dir

Some of these images may fail to pull (k8s.gcr.io can be unreachable); they can be replaced with the following mirrors:

k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0 ---> misterli/sig-storage-csi-provisioner:v2.1.0
k8s.gcr.io/sig-storage/livenessprobe:v2.3.0   ---> misterli/sig-storage-livenessprobe:v2.3.0
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0 ---> misterli/sig-storage-csi-node-driver-registrar:v2.2.0
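
With the three manifests above saved locally (file names as shown in the shell prompts), they can be applied in one step, for example:

kubectl apply -f rbac-csi-nfs-controller.yaml -f csi-nfs-controller.yaml -f csi-nfs-node.yaml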

After a successful deployment the pods look like this:

[root@master-01 deploy]# kubectl -n kube-system  get pod|grep csi
csi-nfs-controller-5d74c65b76-wb7qt        3/3     Running   0          8m13s
csi-nfs-controller-5d74c65b76-xhqfx        3/3     Running   0          8m13s
csi-nfs-node-bgtf7                         3/3     Running   0          6m19s
csi-nfs-node-q8xvs                         3/3     Running   0          6m19s

fsGroupPolicy is a Beta feature in Kubernetes 1.20 and is disabled by default. To enable it, run the following commands:

kubectl delete CSIDriver nfs.csi.k8s.io
cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: nfs.csi.k8s.io
spec:
  attachRequired: false
  volumeLifecycleModes:
    - Persistent
  fsGroupPolicy: File
EOF
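
Afterwards you can confirm that the CSIDriver object carries the new setting, for example:

kubectl get csidriver nfs.csi.k8s.io -o yaml | grep fsGroupPolicy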

Usage

PV/PVC usage (static provisioning)

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-csi
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  csi:
    driver: nfs.csi.k8s.io
    readOnly: false
    volumeHandle: unique-volumeid  # make sure this is a unique ID within the cluster
    volumeAttributes:
      server: 172.26.204.144
      share: /opt/nfs-deployment
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs-csi-static
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-nfs-csi
  storageClassName: ""

Parameters:
- volumeAttributes.server: NFS server address, e.g. a domain name (nfs-server.default.svc.cluster.local) or an IP address (127.0.0.1). Required.
- volumeAttributes.share: NFS export path, e.g. /. Required.

For the meaning of more parameters, see https://kubernetes.io/zh/docs/concepts/storage/volumes/#out-of-tree-volume-plugins

After applying these, we can see that pvc-nfs-csi-static is bound to the pv-nfs-csi PV we created:

[root@master-01 example]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                 STORAGECLASS          REASON   AGE

pv-nfs-csi                                 10Gi       RWX            Retain           Bound    default/pvc-nfs-csi-static                                                           48

Let's create a workload to verify that this PVC works:

[root@master-01 test]# cat dp-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-nfs-csi-static
[root@master-01 test]# kubectl apply -f dp-pvc.yaml 
deployment.apps/busybox created
[root@master-01 test]# kubectl exec -it busybox-cd6d67ddc-zdrfp sh 
/ # ls /data
date.html

StorageClass usage (dynamic provisioning)

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 172.26.204.144
  share: /opt/nfs-deployment
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - hard
  - nfsvers=4.1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-csi-dynamic
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-csi

Parameters:
- parameters.server: NFS server address, e.g. a domain name (nfs-server.default.svc.cluster.local) or an IP address (127.0.0.1). Required.
- parameters.share: NFS export path, e.g. /. Required.

Here we create a StatefulSet to verify that the dynamically provisioned volume works:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-nfs
  labels:
    app: nginx
spec:
  serviceName: statefulset-nfs
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: statefulset-nfs
          image: mcr.microsoft.com/oss/nginx/nginx:1.19.5
          command:
            - "/bin/bash"
            - "-c"
            - set -euo pipefail; while true; do echo $(date) >> /mnt/nfs/outfile; sleep 1; done
          volumeMounts:
            - name: persistent-storage
              mountPath: /mnt/nfs
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  volumeClaimTemplates:
    - metadata:
        name: persistent-storage
        annotations:
          volume.beta.kubernetes.io/storage-class: nfs-csi
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
[root@master-01 example]# kubectl get sc
NAME                  PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-csi               nfs.csi.k8s.io                                Delete          Immediate           false                  4s
[root@master-01 example]# kubectl apply -f statefulset.yaml 
statefulset.apps/statefulset-nfs created
[root@master-01 example]# kubectl get pvc
NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistent-storage-statefulset-nfs-0   Bound    pvc-5269b7f4-33ec-48d1-85fb-9d869d611e94   10Gi       RWO            nfs-csi        4s
[root@master-01 example]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                 STORAGECLASS          REASON   AGE
pvc-5269b7f4-33ec-48d1-85fb-9d869d611e94   10Gi       RWO            Delete           Bound    default/persistent-storage-statefulset-nfs-0          nfs-csi                        14s

Entering the pod, we can see that a file named outfile has been created under /mnt/nfs as expected:

## check inside the pod
[root@master-01 example]# kubectl exec -it statefulset-nfs-0  -- bash 
root@statefulset-nfs-0:/# cd /mnt/nfs
root@statefulset-nfs-0:/mnt/nfs# ls
outfile

Check on the NFS server:

## on the NFS server
[root@node-02 ~]# ls /opt/nfs-deployment/
date.html  pvc-5269b7f4-33ec-48d1-85fb-9d869d611e94
[root@node-02 ~]# ls /opt/nfs-deployment/pvc-5269b7f4-33ec-48d1-85fb-9d869d611e94/
outfile
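
Because the nfs-csi StorageClass uses reclaimPolicy: Delete, deleting the PVC should also delete the dynamically provisioned PV. Note that a StatefulSet does not remove its PVCs automatically, so clean-up has to be done by hand; a sketch:

kubectl delete statefulset statefulset-nfs
kubectl delete pvc persistent-storage-statefulset-nfs-0
kubectl get pv        # the pvc-5269b7f4-... volume should disappear shortly afterwards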

NFS Subdir External Provisioner

NFS Subdir External Provisioner uses an existing NFS server to dynamically provision Kubernetes Persistent Volumes via Persistent Volume Claims. By default the provisioned directories are named ${namespace}-${pvcName}-${pvName}. An existing NFS server is required.
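
Besides the raw manifests used in the Installation section below, the project also publishes a Helm chart; a minimal sketch of a chart-based install (verify the repository URL and values against the project's current README):

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=172.26.204.144 \
  --set nfs.path=/opt/kubernetes-nfs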

Installation

[root@master-01 deploy]# ls
class-delete.yaml  class-nfs.yaml  class.yaml  deployment.yaml  rbac.yaml  test-claim.yaml  test-pod.yaml  test.yaml
[root@master-01 deploy]# cat rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: devops
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: devops
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: devops
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: devops
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: devops
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
      
[root@master-01 deploy]# cat  deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: devops
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.26.204.144
            - name: NFS_PATH
              value: /opt/kubernetes-nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.26.204.144
            path: /opt/kubernetes-nfs

Note:

You need to change NFS_SERVER and NFS_PATH in env, as well as server and path under volumes, to match your NFS server.

If the image cannot be pulled, replace it with misterli/k8s.gcr.io_sig-storage_nfs-subdir-external-provisioner:v4.0.2.
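
After adjusting those values, create the namespace (if it does not exist yet) and apply the two manifests, for example:

kubectl create namespace devops
kubectl apply -f rbac.yaml -f deployment.yaml
kubectl -n devops get pod -l app=nfs-client-provisioner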

Create the StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match the deployment's env PROVISIONER_NAME
allowVolumeExpansion: true
parameters:
  pathPattern: "${.PVC.namespace}-${.PVC.name}" 
  onDelete: delete

parameters:

- onDelete: if set to "delete", the backing directory is deleted; if set to "retain", it is kept. Default: the directory is archived on the share as archived-<volume.Name>.
- archiveOnDelete: if present and set to "false", the directory is deleted. Ignored when onDelete is set. Default: the directory is archived on the share as archived-<volume.Name>.
- pathPattern: a template for building the directory path from PVC metadata (labels, annotations, name or namespace), referenced as ${.PVC.<metadata>}. For example, to name folders <pvc-namespace>-<pvc-name>, use ${.PVC.namespace}-${.PVC.name} as the pathPattern. Default: ${namespace}-${pvcName}-${pvName}.
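
For example, if data should be archived rather than deleted when a PVC is removed, a variant of the class above could use archiveOnDelete instead (the class name here is illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage-archive      # illustrative name
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"                # deleted volumes are kept as archived-<volume name> on the share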

Verification

[root@master-01 deploy]# cat test-claim.yaml  
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteOnce
    #- ReadWriteMany
  resources:
    requests:
      storage: 1024Mi
[root@master-01 deploy]# cat test-pod.yaml 
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
  namespace: devops
spec:
  containers:
  - name: test-pod
    image: busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && sleep 300 && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
[root@master-01 deploy]# kubectl apply -f test-claim.yaml  -f test-pod.yaml 
persistentvolumeclaim/test-claim created
pod/test-pod created

[root@master-01 deploy]# kubectl get pvc
NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim                             Bound    pvc-6bb052e0-d57d-4de6-855c-22070ff56931   1Gi        RWO            managed-nfs-storage   5s

[root@master-01 deploy]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                 STORAGECLASS          REASON   AGE
pvc-6bb052e0-d57d-4de6-855c-22070ff56931   1Gi        RWO            Delete           Bound    default/test-claim                                    managed-nfs-storage            12s

On the NFS server, we can see that a directory was created on the export following the naming rule we defined:

[root@node-02 ~]# ls /opt/kubernetes-nfs/
default-test-claim 

Cluster (HA) mode

Enabling cluster mode is straightforward: set replicas to 3 and set the environment variable ENABLE_LEADER_ELECTION to true.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: devops
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: misterli/k8s.gcr.io_sig-storage_nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.26.204.144
            - name: NFS_PATH
              value: /opt/kubernetes-nfs
            - name: ENABLE_LEADER_ELECTION
              value: "true"
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.26.204.144
            path: /opt/kubernetes-nfs

After deployment, the logs show that one of the pods has been elected leader:

## logs of the first pod
[root@master-01 deploy]# kubectl -n devops logs -f nfs-client-provisioner-7bb7fb9945-zcc6w 
I0808 09:53:10.674682       1 leaderelection.go:242] attempting to acquire leader lease  devops/k8s-sigs.io-nfs-subdir-external-provisioner...

## logs of the second pod
[root@master-01 deploy]# kubectl -n devops logs -f nfs-client-provisioner-7bb7fb9945-h7xb6 
I0808 09:53:10.671051       1 leaderelection.go:242] attempting to acquire leader lease  devops/k8s-sigs.io-nfs-subdir-external-provisioner...

### logs of the third pod; "successfully acquired lease" confirms it was elected leader
[root@master-01 deploy]# kubectl -n devops logs -f nfs-client-provisioner-7bb7fb9945-rs97c 
I0808 09:53:10.531170       1 leaderelection.go:242] attempting to acquire leader lease  devops/k8s-sigs.io-nfs-subdir-external-provisioner...
I0808 09:53:28.143466       1 leaderelection.go:252] successfully acquired lease devops/k8s-sigs.io-nfs-subdir-external-provisioner
I0808 09:53:28.143742       1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"devops", Name:"k8s-sigs.io-nfs-subdir-external-provisioner", UID:"a5a7a644-c682-4ce6-8e05-7ca4e5257776", APIVersion:"v1", ResourceVersion:"109115588", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' nfs-client-provisioner-7bb7fb9945-rs97c_24635026-51c7-4e48-8521-938c7ed83593 became leader
I0808 09:53:28.144326       1 controller.go:820] Starting provisioner controller k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-7bb7fb9945-rs97c_24635026-51c7-4e48-8521-938c7ed83593!
I0808 09:53:28.244537       1 controller.go:869] Started provisioner controller k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-7bb7fb9945-rs97c_24635026-51c7-4e48-8521-938c7ed83593!

If we delete the pod that was elected leader and watch the logs, we can see that another pod is elected leader.

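A sketch of that check (pod names are taken from the listing above; the exact log output will differ):

kubectl -n devops delete pod nfs-client-provisioner-7bb7fb9945-rs97c
# tail one of the remaining pods and watch for "successfully acquired lease"
kubectl -n devops logs -f nfs-client-provisioner-7bb7fb9945-h7xb6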
