Ceph Pool Quota Settings

Feature Description

Ceph pools can be limited by quota. The experiments below exercise this feature:
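
For reference, pool quotas are managed with the following generic commands (<pool-name> and the values are placeholders):

ceph osd pool set-quota <pool-name> max_objects <count>   # limit the number of objects in the pool
ceph osd pool set-quota <pool-name> max_bytes <bytes>     # limit the total data size in bytes
ceph osd pool get-quota <pool-name>                       # show the quotas currently in effect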

Experiment

  • View the current cluster status
[root@ceph3 ceph]# ceph -s
    cluster cbc99ef9-fbc3-41ad-a726-47359f8d84b3
     health HEALTH_OK
     monmap e2: 3 mons at {ceph1=10.10.8.7:6789/0,ceph2=10.10.8.11:6789/0,ceph3=10.10.8.22:6789/0}
            election epoch 10, quorum 0,1,2 ceph1,ceph2,ceph3
     osdmap e34: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v166: 64 pgs, 1 pools, 11520 bytes data, 10 objects
            345 MB used, 269 GB / 269 GB avail
                  64 active+clean
  • View the current pool quota
[root@ceph2 ceph]# ceph osd pool get-quota rbd
quotas for pool 'rbd':
  max objects: N/A
  max bytes  : N/A
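
Before setting a quota it can be useful to check how much of the pool is already in use; ceph df reports per-pool usage (the column layout varies by release, and newer releases also show configured quotas in the detail view):

ceph df          # cluster-wide and per-pool usage summary
ceph df detail   # adds per-pool object counts and more detail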
  • Set a quota
[root@ceph2 ceph]# ceph osd pool set-quota rbd max_objects 10
set-quota max_objects = 10 for pool rbd

[root@ceph2 ceph]# ceph osd pool get-quota rbd
quotas for pool 'rbd':
  max objects: 10 objects
  max bytes  : N/A
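
A size-based quota can be set the same way. The sketch below is not part of this experiment; 10485760 bytes (10 MiB) is only an illustrative value:

ceph osd pool set-quota rbd max_bytes 10485760   # cap the pool at ~10 MiB of data
ceph osd pool get-quota rbd                      # confirm the new limit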
  • Verify the quota
Upload 10 objects to the rbd pool one at a time while watching the cluster status (a scripted version of the uploads is sketched after the commands below):

[root@ceph3 ceph]# rados put obj-1 chrony.conf -p rbd
...
[root@ceph3 ceph]# rados put obj-9 chrony.conf -p rbd
[root@ceph3 ceph]# rados put obj-10 chrony.conf -p rbd
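
The uploads above can also be scripted in a single loop; chrony.conf is just an arbitrary local file used as the object payload:

for i in $(seq 1 10); do rados put obj-$i chrony.conf -p rbd; done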

No near-full warning appeared while the objects were being created; only after the 10th object was written did the pool report full:
[root@ceph3 ceph]# ceph -s
    cluster cbc99ef9-fbc3-41ad-a726-47359f8d84b3
     health HEALTH_WARN
            pool 'rbd' is full
     monmap e2: 3 mons at {ceph1=10.10.8.7:6789/0,ceph2=10.10.8.11:6789/0,ceph3=10.10.8.22:6789/0}
            election epoch 10, quorum 0,1,2 ceph1,ceph2,ceph3
     osdmap e34: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v166: 64 pgs, 1 pools, 11520 bytes data, 10 objects
            345 MB used, 269 GB / 269 GB avail
                  64 active+clean
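
When the pool reports full, ceph health detail and ceph df give more context on which pool hit its limit (the exact wording varies across releases):

ceph health detail   # expands the HEALTH_WARN message per pool
ceph df detail       # per-pool usage and object counts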
  • Remove the quota limit
[root@ceph3 ceph-1]# ceph osd pool set-quota rbd max_objects 0
set-quota max_objects = 0 for pool rbd

[root@ceph3 ceph-1]# ceph osd pool get-quota rbd
quotas for pool 'rbd':
  max objects: N/A
  max bytes  : N/A

[root@ceph3 ceph-1]# ceph -s
    cluster cbc99ef9-fbc3-41ad-a726-47359f8d84b3
     health HEALTH_OK
     monmap e2: 3 mons at {ceph1=10.10.8.7:6789/0,ceph2=10.10.8.11:6789/0,ceph3=10.10.8.22:6789/0}
            election epoch 12, quorum 0,1,2 ceph1,ceph2,ceph3
     osdmap e57: 3 osds: 3 up, 3 in
            flags nearfull,sortbitwise,require_jewel_osds
      pgmap v533: 64 pgs, 1 pools, 11520 bytes data, 10 objects
            82030 MB used, 189 GB / 269 GB avail
                  64 active+clean
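
If a byte quota had also been set, it would be removed the same way, by setting it back to 0 (a sketch, not run in this session):

ceph osd pool set-quota rbd max_bytes 0   # 0 disables the byte quota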
