StatefulSet + StorageClass + Ceph

Posted by zjz20


1. Create the external rbd-provisioner client that Kubernetes needs in order to talk to Ceph

Note: if your Kubernetes cluster was initialized with kubeadm, the kube-controller-manager image does not ship with the rbd command, so the external provisioner

quay.io/external_storage/rbd-provisioner:latest is required. Deploy it as follows:

cat rbd-provisioner.yaml

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: quay.io/external_storage/rbd-provisioner:latest
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccountName: rbd-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner

kubectl apply -f rbd-provisioner.yaml
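Before moving on, it is worth checking that the provisioner pod is up. A minimal check, assuming the Deployment was created in the default namespace as above:

kubectl get pods -l app=rbd-provisioner
kubectl logs -l app=rbd-provisioner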

2. Create the ceph-secret Secret object, which the Kubernetes volume plugin uses to authenticate to the Ceph cluster

(1) On the Ceph admin node, get the client.admin keyring value and base64-encode it (run this on the Ceph admin node):

ceph auth get-key client.admin | base64

The output looks like this:

QVFBczlGOWRCVTkrSXhBQThLa1k4VERQQjhVT29wd0FnZkNDQmc9PQ==

(2) Create the secret

cat ceph-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "ceph.com/rbd"
data:
  key: QVFBczlGOWRCVTkrSXhBQThLa1k4VERQQjhVT29wd0FnZkNDQmc9PQ==

kubectl apply -f ceph-secret.yaml
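Optionally, confirm that the key stored in the secret decodes back to the output of ceph auth get-key (secret name as above):

kubectl get secret ceph-secret -o jsonpath='{.data.key}' | base64 -d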

3. Create the StorageClass

(1) Create a pool on the Ceph admin node

ceph osd pool create k8stest 256

rbd create rbda -s 1024 -p k8stest

rbd feature disable k8stest/rbda object-map fast-diff deep-flatten
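Still on the Ceph admin node, the pool and the remaining image features can be verified with:

ceph osd pool ls
rbd info k8stest/rbda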

(2) Create the StorageClass on the Kubernetes master node

cat storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: k8s-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.199.201:6789
  adminId: admin
  adminSecretName: ceph-secret
  pool: k8stest
  userId: admin
  userSecretName: ceph-secret
  fsType: xfs
  imageFormat: "2"
  imageFeatures: "layering"

 Note: StorageClass parameter explanation

 monitors: 192.168.199.201:6789
  # the Ceph monitor address; if there are multiple monitors, list all of their addresses separated by commas (see the example after this list)
  pool: k8stest
  # the pool, which must be created in advance on the Ceph admin node, following the pool-creation step above
  userId: admin
  # Kubernetes accesses the Ceph cluster as the admin user
  userSecretName: ceph-secret
  # the key Kubernetes needs to access Ceph, i.e. the secret created above
  fsType: xfs
  # filesystem type for the rbd image; either xfs or ext4
  imageFormat: "2"
  # keep the default
  imageFeatures: "layering"
  # keep the default
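For reference, a monitors value pointing at more than one monitor might look like the following (the second and third addresses are hypothetical; substitute the real monitor IPs of your cluster):

  monitors: 192.168.199.201:6789,192.168.199.202:6789,192.168.199.203:6789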

kubectl apply -f storageclass.yaml
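A quick sanity check, using the names defined above, that the StorageClass was registered and points at the ceph.com/rbd provisioner:

kubectl get storageclass k8s-rbd
kubectl describe storageclass k8s-rbd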

4. StatefulSet + Ceph best-practice test

cat stat.yaml

apiVersion: v1
kind: Service
metadata:
  name: storage
  labels:
    app: storage
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: storage
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: storage
spec:
  serviceName: "storage"
  replicas: 2
  selector:
    matchLabels:
      app: storage
  template:
    metadata:
      labels:
        app: storage
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      volumeMode: Filesystem
      storageClassName: k8s-rbd
      resources:
        requests:
          storage: 1Gi

kubectl apply -f stat.yaml

After the command above runs, a PVC and a PV are created automatically and bound to each other.

The PV is provisioned dynamically through the k8s-rbd StorageClass.
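To verify the result (pod, PVC and StorageClass names as defined above), list the objects and write a test file into one replica to confirm the data lands on the rbd-backed volume:

kubectl get pods -l app=storage
kubectl get pvc
kubectl get pv
kubectl exec storage-0 -- sh -c 'echo hello > /usr/share/nginx/html/index.html'
kubectl exec storage-0 -- cat /usr/share/nginx/html/index.html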

Original article: https://mp.weixin.qq.com/s/dfUJZDUEyZupU4SBURZvLg
