Kubernetes PV and PVC: Using Persistent Volumes

Posted by 水木,年華

Persistent Volume Overview

• PersistentVolume (PV): a persistent volume, an abstraction over a storage resource that lets storage be managed as a cluster resource.
• PersistentVolumeClaim (PVC): a persistent volume claim, in which the user declares the storage capacity needed without having to care about the backing storage implementation.
A Pod references a PVC as a volume; Kubernetes looks up the PV bound to that PVC and mounts it into the Pod for the application to use.

PV and PVC Usage Workflow

# Application Pod
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html
  volumes:
  - name: www
    persistentVolumeClaim:
      claimName: my-pvc
---
# Volume claim template
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:      # access mode
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi  # requested storage capacity
---
# Volume definition
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:              # NFS server address and export path
    path: /ifs/kubernetes
    server: 192.168.95.206
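
A quick way to check that the claim actually bound and the export is mounted (the file name pv-pvc-pod.yaml is just an example for the three manifests above):

kubectl apply -f pv-pvc-pod.yaml
kubectl get pv,pvc                                 # the PVC should show STATUS Bound, bound to my-pv
kubectl exec my-pod -- ls /usr/share/nginx/html    # lists the contents of the NFS export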

PV Lifecycle

ACCESS MODES:
accessModes configures how a PV can be accessed, i.e. the permissions an application has when using the storage resource. The following modes are supported:

• ReadWriteOnce (RWO): read-write, but the volume can be mounted by only a single node.
• ReadOnlyMany (ROX): read-only, the volume can be mounted by many nodes.
• ReadWriteMany (RWX): read-write, the volume can be mounted by many nodes.

RECLAIM POLICY:
A PV currently supports three reclaim policies:
• Retain: keep the data; an administrator must clean it up manually. When the PVC is deleted, the PV moves to the Released state.
• Recycle: scrub the data on the PV, roughly equivalent to running rm -rf /nfs/kubernetes/*. When the PVC is deleted, the PV returns to the Available state. (This policy is deprecated in current Kubernetes releases.)
• Delete: delete the backing storage asset together with the PV.
The policy is set with the persistentVolumeReclaimPolicy field, e.g. persistentVolumeReclaimPolicy: Retain.
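
For an existing PV the reclaim policy can also be changed in place with kubectl patch; a minimal sketch using my-pv from the example above:

kubectl patch pv my-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv my-pv    # the RECLAIM POLICY column should now show Retain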

STATUS:
During its lifecycle a PV can be in one of four phases:
• Available: the PV is free and not yet bound to any PVC.
• Bound: the PV has been bound to a PVC.
• Released: the PVC has been deleted, but the resource has not yet been reclaimed by the cluster.
• Failed: automatic reclamation of the PV has failed.
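
The phase, access modes and reclaim policy of every PV can be inspected with kubectl; a quick sketch:

kubectl get pv              # CAPACITY, ACCESS MODES, RECLAIM POLICY and STATUS columns
kubectl describe pv my-pv   # full details, including which PVC the volume is bound to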

The usage pattern shown so far is called static provisioning: a Kubernetes operations engineer has to create a pool of PVs in advance for developers to consume.

Dynamic PV Provisioning (StorageClass)

The obvious drawback of static provisioning is its maintenance cost. Kubernetes therefore also supports dynamic PV provisioning, implemented with the StorageClass object.

Storage plugins that support dynamic provisioning:
https://kubernetes.io/docs/concepts/storage/storage-classes/

Kubernetes does not support NFS dynamic provisioning out of the box; a community-maintained provisioner has to be deployed separately.
Project: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

Deployment:

kubectl apply -f rbac.yaml        # RBAC permissions for accessing the apiserver
kubectl apply -f deployment.yaml  # deploy the provisioner; edit the NFS server address and export path inside
kubectl apply -f class.yaml       # create the storage class
kubectl get sc                    # list storage classes
# class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"

# deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: lizhenliang/nfs-subdir-external-provisioner:v4.0.1
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.95.206
            - name: NFS_PATH
              value: /nfs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.95.206
            path: /nfs/kubernetes
# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
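
Before testing, it is worth confirming that the storage class exists and the provisioner Pod is running (assuming the default namespace used in the manifests above):

kubectl get sc managed-nfs-storage
kubectl get pods -l app=nfs-client-provisioner    # should show 1/1 Running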


Test: reference the storage class by name when creating the PVC.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: "managed-nfs-storage"
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: nginx
    volumeMounts:
    - name: nfs-pvc
      mountPath: "/usr/share/nginx/html"
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: my-pvc
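
If dynamic provisioning works, a PV is created automatically and bound to the claim; the PV name is generated by the provisioner, so it will differ from run to run:

kubectl get pvc my-pvc    # STATUS should turn Bound without any PV being created by hand
kubectl get pv            # shows the automatically provisioned PV backed by the NFS export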
