Connecting k8s to a Ceph Cluster

Posted by lixinliang


Create the Ceph admin secret
1. Since this is an external Ceph cluster, obtain ceph.client.admin.keyring and ceph.conf, then place ceph.conf under /etc/ceph on every node (master + node).
2. Place ceph.client.admin.keyring under /etc/ceph on the k8s control-plane node (master).
3. Extract the key from ceph.client.admin.keyring and base64-encode it. For example, given
key = AQByfGNceA3VGhAAK0Dq0M0zNuPZOSGPJBACNA==
save that line to a file, then run:
cat tmp1.txt | awk '{printf "%s",$NF}' | base64
and record the output.
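
Equivalently, you can skip the temp file and ask Ceph for the key directly; a quick alternative, assuming the admin keyring is readable on the host you run this from:

$ ceph auth get-key client.admin | base64

(ceph auth get-key prints the key without a trailing newline, so the result matches the awk approach above.)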


$ cat ceph-admin-secret.yaml 
apiVersion: v1
data:
  key: QVFCeWZHTmNlQTNWR2hBQUswRHEwTTB6TnVQWk9TR1BKQkFDTkE9PQ==                   # the base64-encoded key from the step above
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: kubernetes.io/rbd


kubectl create -f ceph-admin-secret.yaml 
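
A quick sanity check that the secret round-trips back to the original key (assuming your kubectl context points at this cluster):

$ kubectl -n kube-system get secret ceph-admin-secret -o jsonpath='{.data.key}' | base64 -d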


Create a Ceph pool and a user secret
ceph osd pool create kube 8 8
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'
ceph auth get-key client.kube > /tmp/key
kubectl create secret generic ceph-secret --from-file=/tmp/key --namespace=kube-system --type=kubernetes.io/rbd
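
Before moving on, it is worth confirming that the capabilities on the new user took effect:

$ ceph auth get client.kube

The output should show the mon 'allow r' and osd 'allow rwx pool=kube' caps granted above.
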
Create the RBD provisioner
$ cat provisioner.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: rbd-provisioner
  name: rbd-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: quay.io/external_storage/rbd-provisioner:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd        # must match the provisioner field of the StorageClass below

$ kubectl create -f provisioner.yaml 
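
On clusters with RBAC enabled, the provisioner pod also needs permission to manage PVs, PVCs, StorageClasses, secrets, and events; without it the Deployment starts but every provisioning attempt fails with authorization errors. A minimal sketch, adapted from the upstream external-storage examples (add serviceAccountName: rbd-provisioner to the Deployment's pod spec so the binding applies):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rbd-provisioner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: kube-system

Once applied, confirm the pod is running:

$ kubectl -n kube-system get pods -l app=rbd-provisioner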


Create a StorageClass that connects to the Ceph cluster
$ cat ceph-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 10.4.29.134:6789,10.4.29.31:6789,10.4.29.160:6789,10.4.25.135:6789,10.4.29.36:6789
  pool: kube
  adminId: admin
  adminSecretNamespace: kube-system
  adminSecretName: ceph-admin-secret
  userId: kube
  userSecretNamespace: kube-system
  userSecretName: ceph-secret
  imageFormat: "2"
  imageFeatures: layering


$ kubectl create -f  ceph-class.yaml
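
Before wiring the class into a real workload, a throwaway PVC is a cheap end-to-end test of dynamic provisioning (the name test-rbd-pvc is arbitrary):

$ cat test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rbd
  resources:
    requests:
      storage: 1Gi

$ kubectl create -f test-pvc.yaml
$ kubectl get pvc test-rbd-pvc        # should reach Bound within seconds

Clean up with kubectl delete -f test-pvc.yaml; under the default Delete reclaim policy the backing RBD image is removed as well.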


Create a MongoDB StatefulSet (replica set) to test the StorageClass

$ cat testmongo.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata: 
  name: mongo
  namespace: mongo
spec: 
  selector: 
    matchLabels: 
      app: mongo
  replicas: 2
  podManagementPolicy: Parallel
  serviceName: shared-mongo-mongodb-replicaset
  template: 
    metadata: 
      labels: 
        app: mongo
    spec: 
      terminationGracePeriodSeconds: 10
      affinity: 
         podAntiAffinity: 
           requiredDuringSchedulingIgnoredDuringExecution: 
           - labelSelector: 
               matchExpressions: 
               - key: "app"
                 operator: In
                 values: 
                 - mongo
             topologyKey: "kubernetes.io/hostname"
      containers: 
      - name: mongo
        image: mongo:3.6
        command:  
        - mongod 
        - "--bind_ip_all"
        - "--replSet"
        - rs0
        ports: 
        - containerPort: 27017
        volumeMounts: 
        - name: mongo-data
          mountPath: /data/db
  volumeClaimTemplates:                    # each replica gets its own PVC (and PV) from this template
  - metadata:
      name: mongo-data
    spec:
      accessModes:
        - ReadWriteOnce      
      storageClassName: rbd
      resources:
        requests:
          storage: 2Gi


$ kubectl create -f testmongo.yaml 
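
One caveat: the StatefulSet references serviceName: shared-mongo-mongodb-replicaset, but that headless Service is not shown here. If it does not already exist in the mongo namespace, the pods still run, yet replica-set members cannot resolve each other's stable DNS names. A minimal sketch:

apiVersion: v1
kind: Service
metadata:
  name: shared-mongo-mongodb-replicaset
  namespace: mongo
spec:
  clusterIP: None          # headless: gives each pod a stable DNS record
  selector:
    app: mongo
  ports:
  - port: 27017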


Verify that the connection to Ceph succeeded

$ kubectl get pv 
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
pvc-01474bb1-bffb-11e9-a095-5254002c2b14   2Gi        RWO            Delete           Bound    mongo/mongo-data-mongo-0    rbd                     33m
pvc-01e96076-bffb-11e9-a095-5254002c2b14   2Gi        RWO            Delete           Bound    mongo/mongo-data-mongo-1    rbd                     33m
$ kubectl get pvc -n mongo 
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongo-data-mongo-0   Bound    pvc-01474bb1-bffb-11e9-a095-5254002c2b14   2Gi        RWO            rbd            33m
mongo-data-mongo-1   Bound    pvc-01e96076-bffb-11e9-a095-5254002c2b14   2Gi        RWO            rbd            33m
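
You can confirm the same from the Ceph side; each PV should correspond to one RBD image in the kube pool (run on a host with the admin keyring; the provisioner typically names images kubernetes-dynamic-pvc-<uuid>):

$ rbd ls -p kube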


$ kubectl get pod -n mongo   
NAME      READY   STATUS    RESTARTS   AGE
mongo-0   1/1     Running   0          34m
mongo-1   1/1     Running   0          34m
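
Finally, the RBD device should be visible as a mounted filesystem inside the pod, at the /data/db path declared in the volumeMount above:

$ kubectl -n mongo exec mongo-0 -- df -h /data/db

Expect a /dev/rbdX device of roughly 2Gi capacity.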

