Rebuilding a CephFS Filesystem


Resetting CephFS

Wipe all files from the existing CephFS and rebuild the space:

Clean up and delete the existing CephFS

Stop all MDS services

systemctl stop ceph-mds@$HOSTNAME
systemctl status ceph-mds@$HOSTNAME
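The unit above only stops the MDS on the local node. A minimal sketch for stopping the daemon on every MDS host, assuming the three monitor hosts from the monmap further below also run ceph-mds and are reachable over SSH (adjust the host list to your deployment):

# Example host list taken from this cluster's monmap -- replace with your actual MDS nodes.
MDS_HOSTS="jp33e501-4-11 jp33e501-4-12 jp33e502-4-13"
for host in $MDS_HOSTS; do
    # Stop the MDS unit on each node and show its resulting state.
    ssh "$host" 'systemctl stop ceph-mds@$HOSTNAME && systemctl status ceph-mds@$HOSTNAME --no-pager'
done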

Check the CephFS information

## ceph fs ls 
name: leadorfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

## ceph mds stat
e392: 0/1/1 up, 1 failed

## ceph mon dump
dumped monmap epoch 1

Mark the MDS as failed

ceph mds fail 0    

Remove the CephFS filesystem

ceph fs rm leadorfs --yes-i-really-mean-it      

Delete the metadata and data pools

ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it   
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it   
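On Luminous and later releases the monitors refuse pool deletion by default, so the two delete commands above may fail with an EPERM mentioning mon_allow_pool_delete; a sketch for temporarily enabling it (not needed if the deletes already succeed on this Jewel cluster):

# Allow pool deletion, run the two deletes above, then lock it down again.
ceph tell 'mon.*' injectargs '--mon-allow-pool-delete=true'
ceph tell 'mon.*' injectargs '--mon-allow-pool-delete=false'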

Check the cluster status again

## ceph mds stat
e394:

## ceph mds dump
dumped fsmap epoch 397
fs_name cephfs
epoch   397
flags   0
created 0.000000
modified        0.000000
tableserver     0
root    0
session_timeout 0
session_autoclose       0
max_file_size   0
last_failure    0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={}
max_mds 0
in
up      {}
failed
damaged
stopped
data_pools
metadata_pool   0
inline_data     disabled

Rebuilding CephFS

Start all MDS services

systemctl start ceph-mds@$HOSTNAME
systemctl status ceph-mds@$HOSTNAME

# Verify:
ceph mds stat

e397:, 3 up:standby

Recreate the CephFS filesystem

ceph osd pool create cephfs_data 512

ceph osd pool create cephfs_metadata 512

ceph fs new ptcephfs cephfs_metadata cephfs_data
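The pg_num of 512 per pool is this cluster's choice, not a universal value: assuming a replica size of 3, 2 pools x 512 PGs x 3 replicas spread over the 14 OSDs shown later is roughly 220 PG copies per OSD, at the high end of the usual 100-200 per-OSD target, so size the pools for your own cluster. A quick check of the new filesystem and pools:

# Confirm the filesystem exists and is backed by the new pools.
ceph fs ls
# Inspect the placement-group and replication settings that were just applied.
ceph osd pool get cephfs_data pg_num
ceph osd pool get cephfs_metadata pg_num
ceph osd pool get cephfs_data size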

Verify the cluster status

## ceph mds stat

e400: 1/1/1 up {0=jp33e502-4-13.ptengine.com=up:active}, 2 up:standby

## ceph mds dump

dumped fsmap epoch 400
fs_name ptcephfs
epoch   400
flags   0
created 2018-09-11 12:48:26.300848
modified        2018-09-11 12:48:26.300848
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
last_failure    0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in      0
up      {0=25579}
failed
damaged
stopped
data_pools      3
metadata_pool   4
inline_data     disabled
25579:  172.19.4.13:6800/2414848276 'jp33e502-4-13.ptengine.com' mds.0.399 up:active seq 834

Cluster health status

ceph -w
    cluster fe946afe-43d0-404c-baed-fb04cd22d20d
     health HEALTH_OK
     monmap e1: 3 mons at {jp33e501-4-11=172.19.4.11:6789/0,jp33e501-4-12=172.19.4.12:6789/0,jp33e502-4-13=172.19.4.13:6789/0}
            election epoch 12, quorum 0,1,2 jp33e501-4-11,jp33e501-4-12,jp33e502-4-13
      fsmap e400: 1/1/1 up {0=jp33e502-4-13.ptengine.com=up:active}, 2 up:standby
     osdmap e2445: 14 osds: 14 up, 14 in
            flags sortbitwise,require_jewel_osds
      pgmap v876685: 1024 pgs, 2 pools, 2068 bytes data, 20 objects
            73366 MB used, 12919 GB / 12990 GB avail
                1024 active+clean
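As a final check, the rebuilt filesystem can be mounted and written to. A minimal sketch using the kernel client, where the monitor address comes from the monmap above and the client name and secret-file path are assumptions to adapt to your environment:

# Mount the new filesystem through one of the monitors listed in the monmap.
mkdir -p /mnt/ptcephfs
mount -t ceph 172.19.4.11:6789:/ /mnt/ptcephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# Write a small test file and confirm the reported capacity looks sane.
echo test > /mnt/ptcephfs/hello.txt
df -h /mnt/ptcephfs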
