Sharing One Ceph Storage Backend Between Two Kubernetes Clusters to Migrate Persistent Data
The latest stable Kubernetes release at the time of writing is 1.14, and so far there is no ready-made solution for migrating persistent storage between Kubernetes clusters. However, from the way Kubernetes binds PVs and PVCs it follows that as long as the "storage" --> "PV" --> "PVC" binding chain is reproduced exactly, a different Kubernetes cluster can mount the same storage and find the same data in it.
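The binding chain can be inspected from either end. A quick sketch using the PVC and PV names that appear later in this walkthrough (substitute your own object names):

# PVC -> PV: which PV is the claim bound to?
kubectl get pvc rbd-pv-claim -o jsonpath='{.spec.volumeName}'
# PV -> storage: which Ceph RBD pool/image backs the PV?
kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee -o jsonpath='{.spec.rbd.pool}/{.spec.rbd.image}'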
1. Environment
My original Kubernetes cluster was self-built on Alibaba Cloud ECS instances, and I now want to switch to a managed Kubernetes cluster purchased from Alibaba Cloud. Because many applications in the cluster use small volumes of 1Gi or 2Gi, I would like to keep using the existing Ceph storage.
Kubernetes: v1.13.4
Ceph: 12.2.10 luminous (stable)
Both Kubernetes clusters manage storage through a StorageClass and connect to the same Ceph cluster. For the setup, see the earlier post: Kubernetes使用Ceph动态卷部署应用 (deploying applications on Kubernetes with Ceph dynamic volumes).
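For reference, a minimal sketch of what that StorageClass could look like, reconstructed from the PV shown later in this post. The monitors, pool, user and secret values come from that PV; imageFormat and imageFeatures are assumed defaults of the external rbd-provisioner, not values confirmed by the original setup:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd             # external rbd-provisioner
parameters:
  monitors: 172.18.43.220:6789,172.18.138.121:6789,172.18.228.201:6789
  pool: kube
  adminId: kube
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-secret
  userSecretNamespace: kube-system
  fsType: ext4
  imageFormat: "2"                    # assumed default
  imageFeatures: layering             # assumed default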
2. Migration walkthrough
The data stays in the Ceph storage throughout and is never actually moved; the "migration" is only from the perspective of the two Kubernetes clusters.
2.1 Extract the persistent storage objects from the old Kubernetes cluster
To make the result easy to verify, create a new nginx Deployment that uses a Ceph RBD volume as persistent storage, then write some data into it.
vim rbd-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rbd-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi
vim rbd-nginx-dy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-rbd-dy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: ceph-cephfs-volume
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: ceph-cephfs-volume
        persistentVolumeClaim:
          claimName: rbd-pv-claim
# Create the PVC and the Deployment
kubectl create -f rbd-claim.yaml
kubectl create -f rbd-nginx-dy.yaml
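A side note for newer clusters: extensions/v1beta1 Deployments were removed in Kubernetes 1.16, so on a current cluster the same workload needs an apps/v1 manifest with the explicit selector that apps/v1 requires (same pod spec otherwise):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-rbd-dy
spec:
  replicas: 1
  selector:                  # apps/v1 requires an explicit selector
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: ceph-cephfs-volume
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: ceph-cephfs-volume
        persistentVolumeClaim:
          claimName: rbd-pv-claim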
Check the result, then write some data into nginx's persistent directory:
[root@old-k8s tmp]# kubectl get pvc,pod
NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/rbd-pv-claim   Bound    pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee   1Gi        RWO            ceph-rbd       4m37s

NAME                                READY   STATUS    RESTARTS   AGE
pod/nginx-rbd-dy-7455884d49-rthzt   1/1     Running   0          4m36s
[root@old-k8s tmp]# kubectl exec -it nginx-rbd-dy-7455884d49-rthzt /bin/bash
root@nginx-rbd-dy-7455884d49-rthzt:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          40G   23G   15G  62% /
tmpfs            64M     0   64M   0% /dev
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda1        40G   23G   15G  62% /etc/hosts
shm              64M     0   64M   0% /dev/shm
/dev/rbd5       976M  2.6M  958M   1% /usr/share/nginx/html
tmpfs            16G   12K   16G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs            16G     0   16G   0% /proc/acpi
tmpfs            16G     0   16G   0% /proc/scsi
tmpfs            16G     0   16G   0% /sys/firmware
root@nginx-rbd-dy-7455884d49-rthzt:/# echo ygqygq2 > /usr/share/nginx/html/ygqygq2.html
root@nginx-rbd-dy-7455884d49-rthzt:/# exit
exit
[root@old-k8s tmp]#
Extract the PV and PVC definitions:
[root@old-k8s tmp]# kubectl get pvc rbd-pv-claim -oyaml --export > rbd-pv-claim-export.yaml
[root@old-k8s tmp]# kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee -oyaml --export > pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml
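Note: --export was deprecated in kubectl 1.14 and removed in 1.18. On a newer kubectl the equivalent is to dump the objects with -o yaml and strip the cluster-specific fields by hand:

# Without --export: dump the objects, then delete status, metadata.uid,
# metadata.resourceVersion, metadata.creationTimestamp and metadata.selfLink
# from the files before re-importing them.
kubectl get pvc rbd-pv-claim -o yaml > rbd-pv-claim-export.yaml
kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee -o yaml > pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml

The exported files look like this: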
[root@old-k8s tmp]# more rbd-pv-claim-export.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: ceph.com/rbd
  creationTimestamp: null
  finalizers:
  - kubernetes.io/pvc-protection
  name: rbd-pv-claim
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/rbd-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  dataSource: null
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd
  volumeMode: Filesystem
  volumeName: pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
status: {}
[root@old-k8s tmp]# more pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: ceph.com/rbd
    rbdProvisionerIdentity: ceph.com/rbd
  creationTimestamp: null
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
  selfLink: /api/v1/persistentvolumes/pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: rbd-pv-claim
    namespace: default
    resourceVersion: "51998402"
    uid: d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
  persistentVolumeReclaimPolicy: Retain
  rbd:
    fsType: ext4
    image: kubernetes-dynamic-pvc-dac8284a-6a1c-11e9-b533-1604a9a8a944
    keyring: /etc/ceph/keyring
    monitors:
    - 172.18.43.220:6789
    - 172.18.138.121:6789
    - 172.18.228.201:6789
    pool: kube
    secretRef:
      name: ceph-secret
      namespace: kube-system
    user: kube
  storageClassName: ceph-rbd
  volumeMode: Filesystem
status: {}
[root@old-k8s tmp]#
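The spec.rbd section is the "storage" end of the binding chain: pool kube plus the image name identify the RBD image in Ceph, while spec.claimRef is the "PVC" end. To double-check the image from the Ceph side, a sketch (assuming a host with a ceph.conf and a keyring for the kube user):

rbd info kube/kubernetes-dynamic-pvc-dac8284a-6a1c-11e9-b533-1604a9a8a944 --id kube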
2.2 Import the extracted PV and PVC into the new Kubernetes cluster
Copy the PV and PVC files extracted above to the new Kubernetes cluster:
[root@old-k8s tmp]# rsync -avz pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml rbd-pv-claim-export.yaml rbd-nginx-dy.yaml 172.18.97.95:/tmp/
sending incremental file list
pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml
rbd-nginx-dy.yaml
rbd-pv-claim-export.yaml

sent 1,371 bytes  received 73 bytes  2,888.00 bytes/sec
total size is 2,191  speedup is 1.52
[root@old-k8s tmp]#
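Before importing, it is worth confirming that the new cluster can actually use this PV: the same StorageClass must exist and the Ceph secret referenced by spec.rbd.secretRef must be present. A quick sanity check (object names as in this setup):

kubectl get storageclass ceph-rbd
kubectl get secret ceph-secret -n kube-system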
Import the PV and PVC on the new Kubernetes cluster:
[root@new-k8s tmp]# kubectl apply -f pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml -f rbd-pv-claim-export.yaml
persistentvolume/pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee created
persistentvolumeclaim/rbd-pv-claim created
[root@new-k8s tmp]# kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                  STORAGECLASS   REASON   AGE
pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee   1Gi        RWO            Retain           Released   default/rbd-pv-claim   ceph-rbd                20s
[root@new-k8s tmp]# kubectl get pvc rbd-pv-claim
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rbd-pv-claim   Lost     pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee   0                         ceph-rbd       28s
[root@new-k8s tmp]#
As you can see, the PVC status shows Lost. This happens because the PV and PVC imported into the new cluster are assigned a freshly generated resourceVersion and uid, while the imported PV's spec.claimRef still carries the old PVC's resourceVersion and uid. To repair the binding, delete the stale spec.claimRef from the PV and let Kubernetes re-bind the PV and PVC automatically.
Here is a small script that does this:
vim unbound.sh
#!/bin/bash
# Usage: sh unbound.sh <pv-name> [<pv-name> ...]
pv=$*

function unbound() {
    # Blank out every claimRef field; empty strings are omitted when the
    # object is serialized, so claimRef collapses to a single line.
    kubectl patch pv -p '{"spec":{"claimRef":{"apiVersion":"","kind":"","name":"","namespace":"","resourceVersion":"","uid":""}}}' $pv
    # Dump the PV, strip the now-empty claimRef line, and write it back,
    # leaving the PV with no claimRef so the controller can re-bind it.
    kubectl get pv $pv -oyaml > /tmp/.pv.yaml
    sed -i '/claimRef/d' /tmp/.pv.yaml
    kubectl replace -f /tmp/.pv.yaml
}

unbound
sh unbound.sh pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
Ten seconds or so after the script runs, check the result; the PV and PVC should both show as Bound again.
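For example:

kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
kubectl get pvc rbd-pv-claim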
Now verify with the rbd-nginx-dy.yaml copied over earlier. Before that, because the volume is a Ceph RBD image that only one node may map read-write at a time, the pod on the old Kubernetes cluster must release it first.
On the old Kubernetes cluster:
[root@old-k8s tmp]# kubectl delete -f rbd-nginx-dy.yaml
deployment.extensions "nginx-rbd-dy" deleted
On the new Kubernetes cluster:
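A sketch of the verification (the pod name placeholder below is hypothetical; substitute the real name shown by kubectl get pod):

kubectl create -f /tmp/rbd-nginx-dy.yaml
kubectl get pod
# Once the pod is Running, the file written from the old cluster should still be there:
kubectl exec -it <nginx-pod-name> -- cat /usr/share/nginx/html/ygqygq2.html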
3. Summary
The experiment above used a ReadWriteOnce (RWO) PVC. Now imagine a ReadWriteMany (RWX) volume consumed by several Kubernetes clusters at once; in such a scenario this technique could be even more valuable.
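As a sketch only: RBD images themselves cannot be mounted read-write by multiple nodes, so an RWX volume on Ceph normally means CephFS. Assuming a CephFS-backed StorageClass (the name cephfs below is an assumption), such a claim would look like:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-claim
spec:
  accessModes:
  - ReadWriteMany             # RWX: mountable by many nodes/clusters at once
  storageClassName: cephfs    # assumed CephFS-backed StorageClass
  resources:
    requests:
      storage: 1Gi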
Throughout the life of a Kubernetes cluster, the PV and PVC objects and their binding relationship to the underlying storage are critical, so it is worth backing them up as a matter of routine. With such backups, persistent data can be restored or migrated even if the cluster's etcd data is damaged.
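A minimal backup can be as small as a daily cron job (a sketch; adjust the destination path and retention to taste):

# Dump every PV, and every PVC in all namespaces, to dated files.
kubectl get pv -o yaml > /backup/pv-$(date +%F).yaml
kubectl get pvc --all-namespaces -o yaml > /backup/pvc-$(date +%F).yaml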