Using Ceph RBD block storage: creating and mapping RBD images as a regular user


Create a storage pool for the RBD images:

ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create rbd1-data 32 32

pool 'rbd1-data' created

ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool ls

device_health_metrics

mypool

.rgw.root

default.rgw.log

default.rgw.control

default.rgw.meta

myrbd1

cephfs-metadata

cephfs-data

rbd1-data

Enable the rbd application on the pool:

ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool application enable rbd1-data rbd

enabled application 'rbd' on pool 'rbd1-data'

Initialize the pool for use by RBD:

ceph@ceph-deploy:~/ceph-cluster$ rbd pool init -p rbd1-data

Create images in the pool:

All image management is done with the rbd command, which can create, list, and delete images, as well as create snapshots, clone images, delete snapshots, list snapshots, roll back to snapshots, and so on (a few of these are sketched below).
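For later reference, a minimal sketch of the snapshot and clone commands; the snapshot and clone names (snap1, data-img1-clone) are hypothetical:

# create and list a snapshot of an image
rbd snap create rbd1-data/data-img1@snap1
rbd snap ls rbd1-data/data-img1

# protect the snapshot, then clone it into a new image
rbd snap protect rbd1-data/data-img1@snap1
rbd clone rbd1-data/data-img1@snap1 rbd1-data/data-img1-clone

# roll the image back to the snapshot (only while the image is not in use)
rbd snap rollback rbd1-data/data-img1@snap1

# clean up: remove the clone, unprotect, then delete the snapshot
rbd rm rbd1-data/data-img1-clone
rbd snap unprotect rbd1-data/data-img1@snap1
rbd snap rm rbd1-data/data-img1@snap1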

ceph@ceph-deploy:~/ceph-cluster$ rbd create data-img1 --size 3G --pool rbd1-data --image-format 2 --image-feature layering

ceph@ceph-deploy:~/ceph-cluster$ rbd create data-img2 --size 5G --pool rbd1-data --image-format 2 --image-feature layering

List the images in the pool:

ceph@ceph-deploy:~/ceph-cluster$ rbd list --pool rbd1-data

data-img1

data-img2

List images with more detail:

ceph@ceph-deploy:~/ceph-cluster$ rbd list --pool rbd1-data -l

NAME      SIZE  PARENT  FMT  PROT  LOCK

data-img1  3 GiB            2           

data-img2  5 GiB            2

ceph@ceph-deploy:~/ceph-cluster$ rbd --image data-img1 --pool rbd1-data info

rbd image 'data-img1':

size 3 GiB in 768 objects

order 22 (4 MiB objects)

snapshot_count: 0

id: 3ab91c6a62f5

block_name_prefix: rbd_data.3ab91c6a62f5

format: 2

features: layering

op_features:

flags:

create_timestamp: Thu Sep  2 06:48:11 2021

access_timestamp: Thu Sep  2 06:48:11 2021

modify_timestamp: Thu Sep  2 06:48:11 2021

ceph@ceph-deploy:~/ceph-cluster$ rbd --image data-img1 --pool rbd1-data info --format json --pretty-format

{
    "name": "data-img1",
    "id": "3ab91c6a62f5",
    "size": 3221225472,
    "objects": 768,
    "order": 22,
    "object_size": 4194304,
    "snapshot_count": 0,
    "block_name_prefix": "rbd_data.3ab91c6a62f5",
    "format": 2,
    "features": [
        "layering"
    ],
    "op_features": [],
    "flags": [],
    "create_timestamp": "Thu Sep  2 06:48:11 2021",
    "access_timestamp": "Thu Sep  2 06:48:11 2021",
    "modify_timestamp": "Thu Sep  2 06:48:11 2021"
}

Enabling and disabling image features

Image features include:

layering: layered snapshot (clone) support; enabled by default
striping: stripes an image's data across multiple objects
exclusive-lock: exclusive lock support; enabled by default
object-map: object map support, which speeds up import/export and used-space accounting; enabled by default
fast-diff: fast diff calculation between objects and snapshots; enabled by default
deep-flatten: snapshot flattening support; enabled by default
journaling: whether I/O is journaled

Note the dependency chain: object-map requires exclusive-lock, and fast-diff requires object-map (see the sketch below).
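As an illustration of that dependency chain, a minimal sketch of creating an image with an explicit feature set; the image name data-img3 is hypothetical:

# object-map requires exclusive-lock, and fast-diff requires object-map,
# so the three must be enabled together
rbd create data-img3 --size 3G --pool rbd1-data --image-format 2 \
    --image-feature layering,exclusive-lock,object-map,fast-diff

# confirm the resulting feature set
rbd --image data-img3 --pool rbd1-data info | grep features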

Enable features:

ceph@ceph-deploy:~/ceph-cluster$ rbd feature enable object-map --pool rbd1-data --image data-img1

ceph@ceph-deploy:~/ceph-cluster$ rbd feature enable fast-diff --pool rbd1-data --image data-img1

ceph@ceph-deploy:~/ceph-cluster$ rbd feature enable exclusive-lock --pool rbd1-data --image data-img1

Disable features:

ceph@ceph-deploy:~/ceph-cluster$ rbd feature disable object-map --pool rbd1-data --image data-img1

ceph@ceph-deploy:~/ceph-cluster$ rbd feature disable fast-diff --pool rbd1-data --image data-img1

ceph@ceph-deploy:~/ceph-cluster$ rbd feature disable exclusive-lock --pool rbd1-data --image data-img1

Using the block device from a client:

First install ceph-common and set up authentication.

[root@ceph-client1 ceph_data]# yum install -y http://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm

[root@ceph-client1 ceph_data]# yum install ceph-common -y 

Distribute the admin credentials:

ceph@ceph-deploy:/etc/ceph$ sudo -i

root@ceph-deploy:~# cd /etc/ceph/           

root@ceph-deploy:/etc/ceph# scp ceph.conf ceph.client.admin.keyring root@192.168.241.21:/etc/ceph

On an Ubuntu system:

root@ceph-client2:/var/lib/ceph# apt install -y ceph-common

root@ceph-deploy:/etc/ceph# sudo scp ceph.conf ceph.client.admin.keyring ceph@192.168.241.22:/tmp

ceph@192.168.241.22's password:

ceph.conf                                                                                                                  100%  270  117.7KB/s  00:00   

ceph.client.admin.keyring

root@ceph-client2:/var/lib/ceph# cd /etc/ceph/

root@ceph-client2:/etc/ceph# cp /tmp/ceph.c* /etc/ceph/

root@ceph-client2:/etc/ceph# ll /etc/ceph/

total 20

drwxr-xr-x  2 root root 4096 Aug 26 07:58 ./

drwxr-xr-x 84 root root 4096 Aug 26 07:49 ../

-rw-------  1 root root  151 Sep  2 07:24 ceph.client.admin.keyring

-rw-r--r--  1 root root  270 Sep  2 07:24 ceph.conf

-rw-r--r--  1 root root  92 Jul  8 07:17 rbdmap

-rw-------  1 root root    0 Aug 26 07:58 tmpmhFvZ7

Map the image on the client:

root@ceph-client2:/etc/ceph# rbd -p rbd1-data map data-img1

rbd: sysfs write failed

RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable rbd1-data/data-img1 object-map fast-diff".

In some cases useful info is found in syslog - try "dmesg | tail".

rbd: map failed: (6) No such device or address

root@ceph-client2:/etc/ceph# rbd feature disable rbd1-data/data-img1 object-map fast-diff

root@ceph-client2:/etc/ceph# rbd -p rbd1-data map data-img1

/dev/rbd0

root@ceph-client2:/etc/ceph# rbd -p rbd1-data map data-img2

Format the block devices (the images mapped with the admin credentials).

Check the block devices:

root@ceph-client2:/etc/ceph# lsblk

NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT

sda      8:0    0  20G  0 disk

└─sda1  8:1    0  20G  0 part /

sr0    11:0    1 1024M  0 rom 

rbd0  252:0    0    3G  0 disk

rbd1  252:16  0    5G  0 disk

root@ceph-client2:/etc/ceph# mkfs.ext4 /dev/rbd1

mke2fs 1.44.1 (24-Mar-2018)

Discarding device blocks: done                           

Creating filesystem with 1310720 4k blocks and 327680 inodes

Filesystem UUID: 168b99e6-a3d7-4dc6-9c69-76ce8b42f636

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                           

Writing inode tables: done                           

Creating journal (16384 blocks): done

Writing superblocks and filesystem accounting information: done

Mount the block device:

root@ceph-client2:/etc/ceph# mkdir /data/data1 -p

root@ceph-client2:/etc/ceph# mount /dev/rbd1 /data/data1/

Verify by writing data:

root@ceph-client2:/etc/ceph# cd /data/data1/

root@ceph-client2:/data/data1# cp /var/log/ . -r

root@ceph-client2:/data/data1# ceph df

--- RAW STORAGE ---

CLASS    SIZE    AVAIL    USED  RAW USED  %RAW USED

hdd    220 GiB  213 GiB  7.4 GiB  7.4 GiB      3.37

TOTAL  220 GiB  213 GiB  7.4 GiB  7.4 GiB      3.37

--- POOLS ---

POOL                  ID  PGS  STORED  OBJECTS    USED  %USED  MAX AVAIL

device_health_metrics  1    1      0 B        0      0 B      0    66 GiB

mypool                  2  32  1.2 MiB        1  3.5 MiB      0    66 GiB

.rgw.root              3  32  1.3 KiB        4  48 KiB      0    66 GiB

default.rgw.log        4  32  3.6 KiB      209  408 KiB      0    66 GiB

default.rgw.control    5  32      0 B        8      0 B      0    66 GiB

default.rgw.meta        6    8      0 B        0      0 B      0    66 GiB

myrbd1                  7  64  829 MiB      223  2.4 GiB  1.20    66 GiB

cephfs-metadata        8  32  563 KiB      23  1.7 MiB      0    66 GiB

cephfs-data            9  64  455 MiB      129  1.3 GiB  0.66    66 GiB

rbd1-data              10  32  124 MiB      51  373 MiB  0.18    66 GiB

Create a regular user and grant permissions

root@ceph-deploy:/etc/ceph# ceph auth add client.huahualin mon "allow rw"  osd "allow rwx pool=rbd1-data"

added key for client.huahualin

root@ceph-deploy:/etc/ceph# ceph-authtool --create-keyring ceph.client.huahualin.keyring

creating ceph.client.huahualin.keyring

root@ceph-deploy:/etc/ceph# ceph auth  get client.huahualin -o ceph.client.huahualin.keyring

exported keyring for client.huahualin
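Optionally verify what was granted; capabilities can also be adjusted later with ceph auth caps (the read-only mon cap below is just an example, not a requirement of this setup):

# show the user's key and caps
ceph auth get client.huahualin

# update the user's capabilities in place, e.g. read-only access to the monitors
ceph auth caps client.huahualin mon "allow r" osd "allow rwx pool=rbd1-data"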

Use the RBD as the regular user

Copy ceph.conf and the user's keyring to the client:

root@ceph-deploy:/etc/ceph# scp ceph.conf ceph.client.huahualin.keyring  root@192.168.241.21:/etc/ceph/

Map the image as the regular user:

[root@ceph-client1 ~]# rbd --user huahualin --pool rbd1-data map data-img2

/dev/rbd0

Format and mount the RBD mapped by the regular user:

[root@ceph-client1 ~]# mkfs.ext4 /dev/rbd0

[root@ceph-client1 ~]# fdisk -l /dev/rbd0

[root@ceph-client1 ~]# mkdir /data

[root@ceph-client1 ~]# mount  /dev/rbd0 /data

[root@ceph-client1 ~]# df -Th

Filesystem              Type      Size  Used Avail Use% Mounted on

devtmpfs                devtmpfs  475M    0  475M  0% /dev

tmpfs                  tmpfs    487M    0  487M  0% /dev/shm

tmpfs                  tmpfs    487M  7.7M  479M  2% /run

tmpfs                  tmpfs    487M    0  487M  0% /sys/fs/cgroup

/dev/mapper/centos-root xfs        37G  1.7G  36G  5% /

/dev/sda1              xfs      1014M  138M  877M  14% /boot

tmpfs                  tmpfs      98M    0  98M  0% /run/user/0

192.168.241.12:6789:/  ceph      67G  456M  67G  1% /ceph_data

/dev/rbd0              ext4      4.8G  20M  4.6G  1% /data

After the RBD is mapped, the libceph.ko kernel module is loaded automatically:

[root@ceph-client1 ~]# lsmod |grep ceph

ceph                  363016  1

libceph              306750  2 rbd,ceph

dns_resolver          13140  1 libceph

libcrc32c              12644  4 xfs,libceph,nf_nat,nf_conntrack

[root@ceph-client1 ~]# modinfo libceph

filename:      /lib/modules/3.10.0-1160.el7.x86_64/kernel/net/ceph/libceph.ko.xz

license:        GPL

description:    Ceph core library

author:        Patience Warnick <patience@newdream.net>

author:        Yehuda Sadeh <yehuda@hq.newdream.net>

author:        Sage Weil <sage@newdream.net>

retpoline:      Y

rhelversion:    7.9

srcversion:    D4ABB648AE8130ECF90AA3F

depends:        libcrc32c,dns_resolver

intree:        Y

vermagic:      3.10.0-1160.el7.x86_64 SMP mod_unload modversions

signer:        CentOS Linux kernel signing key

sig_key:        E1:FD:B0:E2:A7:E8:61:A1:D1:CA:80:A2:3D:CF:0D:BA:3A:A4:AD:F5

sig_hashalgo:  sha256

If an image runs out of space, it can be grown; shrinking is also possible (rbd resize --allow-shrink) but generally not recommended.

List the images in the rbd1-data pool:

[root@ceph-client1 ~]# rbd ls -p rbd1-data -l

NAME      SIZE  PARENT  FMT  PROT  LOCK

data-img1  3 GiB            2           

data-img2  5 GiB            2 

For example, if data-img2 runs out of space, grow it to 8G:

[root@ceph-client1 ~]# rbd resize --pool rbd1-data --image data-img2 --size  8G

Resizing image: 100% complete...done.

fdisk -l now shows the new size, but df -h does not, because only the block device grew and the filesystem has not been resized yet (see the sketch after the fdisk output):

[root@ceph-client1 ~]# lsblk

NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT

sda              8:0    0  40G  0 disk

├─sda1            8:1    0    1G  0 part /boot

└─sda2            8:2    0  39G  0 part

  ├─centos-root 253:0    0  37G  0 lvm  /

  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]

sr0              11:0    1 1024M  0 rom 

rbd0            252:0    0    8G  0 disk /data

[root@ceph-client1 ~]# fdisk -l /dev/rbd0

Disk /dev/rbd0: 8589 MB, 8589934592 bytes, 16777216 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
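To let the mounted filesystem use the new space, grow it online; a minimal sketch for the ext4 filesystem created above (an XFS filesystem would use xfs_growfs instead):

# ext4 supports online growth, so this works while /data is mounted
resize2fs /dev/rbd0

# the mounted filesystem should now report roughly 8G
df -h /data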

Make the mapping and the mount persist across reboots:

[root@ceph-client1 ~]# vi /etc/rc.d/rc.local

rbd --user huahualin --pool rbd1-data map data-img2

mount /dev/rbd0 /data

[root@ceph-client1 ~]# chmod a+x  /etc/rc.d/rc.local

[root@ceph-client1 ~]# reboot
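As an alternative to rc.local, ceph-common ships an rbdmap helper (the /etc/ceph/rbdmap file listed earlier) with a matching systemd service; a sketch, assuming the huahualin keyring sits in /etc/ceph on the client:

# /etc/ceph/rbdmap: one image per line, with the user and keyring used to map it
rbd1-data/data-img2 id=huahualin,keyring=/etc/ceph/ceph.client.huahualin.keyring

# /etc/fstab: _netdev defers the mount until the network (and thus the mapping) is up
/dev/rbd/rbd1-data/data-img2 /data ext4 defaults,noatime,_netdev 0 0

# map the listed images at boot
systemctl enable rbdmap.service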

Ceph v12.2 Luminous block storage (RBD) setup

1. Create a pool

Create a pool: ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] [crush-ruleset-name] [expected-num-objects]
Delete a pool: ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]
Rename a pool: ceph osd pool rename {current-pool-name} {new-pool-name}

ceph osd pool create test_pool 128 128 replicated
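Note that pool deletion is guarded by the monitors. One common way is to enable deletion temporarily (destructive, use with care); test_pool here is the pool created above:

# allow deletion, delete the pool, then restore the guard
ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
ceph osd pool delete test_pool test_pool --yes-i-really-really-mean-it
ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'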

2. List pools

ceph osd lspools

3. Create a block device image

The command to create a block device image is rbd create --size {megabytes} {pool-name}/{image-name}; if the pool name is omitted, the default pool rbd is used. The following command creates a 10 GB block device:

rbd create --size 10240 test_image -p test_pool

Delete an image: rbd rm test_pool/test_image

4. Inspect a block device image

The command to inspect a block device is rbd info {pool-name}/{image-name}:

hcy@admin_server:~/my-cluster$ rbd info test_pool/test_image
rbd image 'test_image':
    size 1024 MB in 256 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.372674b0dc51
    format: 2
    features: layering
    flags: 
    create_timestamp: Sat Sep 23 18:16:28 2017

Note that the rbd info output above shows the image format as 2; format 2 RBD images support RBD layering, which is the prerequisite for copy-on-write cloning.

5. Map the block device into the kernel

The command to map a block device into the operating system is rbd map {image-name}:

sudo rbd map test_pool/test_image

Unmap: rbd unmap test_pool/test_image

If the map command instead prints:

rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (6) No such device or address

it means the current kernel does not support some of the image's features; disable the unsupported features:

rbd feature disable test_pool/test_image exclusive-lock object-map fast-diff deep-flatten

Map again:

hcy@admin_server:~/my-cluster$ sudo rbd map test_pool/test_image
/dev/rbd0

6. Format the block device image

sudo mkfs.ext4 /dev/rbd/test_pool/test_image

7. Mount the filesystem

sudo mkdir /mnt/ceph-block-device
sudo chmod 777  /mnt/ceph-block-device
sudo mount /dev/rbd/test_pool/test_image /mnt/ceph-block-device
cd /mnt/ceph-block-device

This completes the Ceph block device setup. One issue remains: I mounted the same image on two client machines, and their file listings do not stay in sync. This is actually expected, because ext4 is not a cluster-aware filesystem and cannot safely be mounted read-write on two hosts at once; truly shared read-write access needs a cluster filesystem or CephFS rather than rbd mirror (which replicates images between clusters). A read-only workaround is sketched below.
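If the second client only needs read access, one hedged workaround is to map and mount the image read-only there (a sketch; even this is only reliable once the writing client has flushed, and ideally unmounted):

# on the second client: map the image read-only, then mount read-only
rbd map test_pool/test_image --read-only
mount -o ro /dev/rbd0 /mnt/ceph-block-device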
