Notes on handling two Ceph RBD images that could not be deleted
Posted by alfiesuse
In a running environment, two Ceph RBD images could not be deleted; both turned out to have stale watcher entries left behind. This post records how they were handled.
Procedure
[root@node-2 ~]# rbd rm compute/2d05517a-8670-4cce-b39d-709e055381d6_disk
2018-06-11 13:19:14.787750 7fd05853bd80 -1 librbd: cannot obtain exclusive lock - not removing
Removing image: 0% complete...failed.
rbd: error: image still has watchers
This means the image is still open or the client using it crashed.
Try again after closing/unmapping it or waiting 30s for the crashed client to timeout.
Check the stale watcher information on the image:
[root@node-2 ~]# rbd status compute/2d05517a-8670-4cce-b39d-709e055381d6_disk
Watchers:
        watcher=192.168.55.2:0/2900899764 client.14844 cookie=139644428642944
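As a cross-check, the watcher can also be listed directly at the RADOS level on the image's header object. This is a minimal sketch, assuming a format-2 image; the <id> used in the object name is a placeholder taken from the block_name_prefix reported by rbd info, not a real value from this environment.

[root@node-2 ~]# rbd info compute/2d05517a-8670-4cce-b39d-709e055381d6_disk | grep block_name_prefix
[root@node-2 ~]# # if the prefix is rbd_data.<id>, the header object is rbd_header.<id>
[root@node-2 ~]# rados -p compute listwatchers rbd_header.<id>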
Add the stale watcher's address to the OSD blacklist, then check whether the watcher is still present:
[root@node-2 ~]# ceph osd blacklist add 192.168.55.2:0/2900899764
blacklisting 192.168.55.2:0/2900899764 until 2018-06-11 14:25:31.027420 (3600 sec)
[root@node-2 ~]# rbd status compute/2d05517a-8670-4cce-b39d-709e055381d6_disk
Watchers: none
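The blacklist entry expires on its own after the default 3600 seconds shown above; it can also be inspected and, once the image has been removed, dropped explicitly with the standard blacklist subcommands:

[root@node-2 ~]# ceph osd blacklist ls
[root@node-2 ~]# ceph osd blacklist rm 192.168.55.2:0/2900899764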
Delete the RBD image:
[root@node-2 ~]# rbd rm compute/2d05517a-8670-4cce-b39d-709e055381d6_disk
Removing image: 100% complete...done.
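Since two images were affected, the same sequence can be scripted for each of them. The following is a minimal, untested sketch: it assumes jq is available to parse the JSON output of rbd status, and the second image name is a placeholder to be replaced with the real one.

#!/bin/bash
# Sketch: blacklist the stale watchers of each stuck image, then remove the image.
POOL=compute
IMAGES="2d05517a-8670-4cce-b39d-709e055381d6_disk another-stuck-image_disk"   # second name is a placeholder

for img in $IMAGES; do
    # Blacklist every watcher address still reported for the image.
    for addr in $(rbd status "$POOL/$img" --format json | jq -r '.watchers[].address'); do
        ceph osd blacklist add "$addr"
    done
    rbd rm "$POOL/$img"
done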