Manual Ceph Deployment

Posted by ygtff

Cluster Planning

  • Production environment
    • At least 3 physical machines in the Ceph cluster
    • Dual NICs (separate public and cluster networks); see the sketch after this list
  • Test environment
    • A single host is enough
    • A single NIC is also fine
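    As a purely illustrative example (the hostnames and addresses below are assumptions, not from the original), a minimal production layout could look like:

    node1  public 10.10.8.11  cluster 192.168.8.11  mon + osd
    node2  public 10.10.8.12  cluster 192.168.8.12  mon + osd
    node3  public 10.10.8.13  cluster 192.168.8.13  mon + osd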

Preparation

  • Install NTP on all Ceph nodes

    [root@test ~]# yum install ntp
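    Installing the package alone does not start time sync; assuming the stock ntpd configuration is acceptable, the service can be enabled, started, and checked like this:

    [root@test ~]# systemctl enable ntpd
    [root@test ~]# systemctl start ntpd
    [root@test ~]# ntpq -p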
  • Check the iptables rules on all Ceph nodes and make sure port 6789 and the 6800:7300 range are open

    [root@test ~]# iptables -A INPUT -i eth0 -p tcp -s 10.10.8.0/24 --dport 6789 -j ACCEPT
    [root@test ~]# iptables -A INPUT -i eth0 -p tcp -s 10.10.8.0/24 --dport 6800:7300 -j ACCEPT
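    Rules added with iptables -A are lost on reboot; one way to persist them on CentOS 7 (assuming the iptables-services package is in use rather than firewalld) is:

    [root@test ~]# iptables-save > /etc/sysconfig/iptables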
  • Disable SELinux

    [root@test ~]# setenforce 0
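    setenforce 0 only lasts until the next reboot; to make the change permanent, set SELINUX=permissive (or disabled) in /etc/selinux/config, for example:

    [root@test ~]# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config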

Get the Ceph Packages

  • Add the repository: create ceph.repo under /etc/yum.repos.d/

    [root@test ~]# cd /etc/yum.repos.d/
    [root@test ~]# touch ceph.repo

    Copy the content below into ceph.repo, replacing the ceph-release field with the Ceph release you want to install (jewel here) and distro with your distribution release (el7 here):

    [ceph]
    name=Ceph packages for $basearch
    baseurl=http://download.ceph.com/rpm-ceph-release/distro/$basearch
    enabled=1
    priority=2
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc

    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://download.ceph.com/rpm-ceph-release/distro/noarch
    enabled=1
    priority=2
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc

    [ceph-source]
    name=Ceph source packages
    baseurl=http://download.ceph.com/rpm-ceph-release/distro/SRPMS
    enabled=0
    priority=2
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc

    The resulting file for jewel on el7:

    [root@test ~]# cat ceph.repo
    [ceph]
    name=Ceph packages for $basearch
    baseurl=http://download.ceph.com/rpm-jewel/el7/$basearch
    enabled=1
    priority=2
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc

    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
    enabled=1
    priority=2
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc

    [ceph-source]
    name=Ceph source packages
    baseurl=http://download.ceph.com/rpm-jewel/el7/SRPMS
    enabled=0
    priority=2
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
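    The jewel packages pull in dependencies that live in EPEL; if EPEL is not already configured, adding it and refreshing the metadata (an extra step, not part of the original text) looks like:

    [root@test ~]# yum install epel-release
    [root@test ~]# yum clean all && yum makecache
    [root@test ~]# yum repolist | grep -i ceph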
  • Install the Ceph packages

    [root@test ~]# yum install ceph
    [root@test ~]# rpm -qa | grep ceph
    ceph-mds-10.2.10-0.el7.x86_64
    ceph-10.2.10-0.el7.x86_64
    libcephfs1-10.2.10-0.el7.x86_64
    python-cephfs-10.2.10-0.el7.x86_64
    ceph-common-10.2.10-0.el7.x86_64
    ceph-base-10.2.10-0.el7.x86_64
    ceph-osd-10.2.10-0.el7.x86_64
    ceph-mon-10.2.10-0.el7.x86_64
    ceph-selinux-10.2.10-0.el7.x86_64
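    A quick sanity check that the expected release was installed (the output should report 10.2.10, i.e. jewel):

    [root@test ~]# ceph --version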

Manual Deployment

Configure Ceph

  • Create ceph.conf

    [root@test ~]# touch /etc/ceph/ceph.conf
  • Generate the Ceph cluster ID

    [root@test ~]# uuidgen
    1ea51317-3ccf-4a59-88ef-7d86f7ee1c3b
    [root@test ~]# echo "fsid = 1ea51317-3ccf-4a59-88ef-7d86f7ee1c3b" >> /etc/ceph/ceph.conf

    After adding the [global] section header by hand, the file looks like this:

    [root@test ~]# cat ceph.conf
    [global]
    fsid = 1ea51317-3ccf-4a59-88ef-7d86f7ee1c3b
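    Keeping the UUID in a shell variable avoids copy-paste mistakes, since the same fsid is needed again for monmaptool below (a small sketch; the variable name FSID is only illustrative):

    [root@test ~]# FSID=$(uuidgen)
    [root@test ~]# echo "fsid = ${FSID}" >> /etc/ceph/ceph.conf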

Deploy the Monitor

  • Add the initial monitor hostname to the config

    [root@test ~]# echo "mon_initial_members = test" >> /etc/ceph/ceph.conf
  • Add the initial monitor IP address to the config

    [root@test ~]# echo "mon_host = 10.10.8.19" >> /etc/ceph/ceph.conf
  • Generate the monitor keyring

    [root@test ~]# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
  • Generate the client admin keyring

    [root@test ~]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
  • Import the client keyring into the monitor keyring

    [root@test ~]# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
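    To confirm that both keys ended up in the monitor keyring before moving on, the file can simply be listed:

    [root@test ~]# cat /tmp/ceph.mon.keyring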
  • Generate the monmap

    [root@test ~]# monmaptool --create --add test 10.10.8.19 --fsid 1ea51317-3ccf-4a59-88ef-7d86f7ee1c3b /tmp/monmap
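    The freshly created map can be printed back to double-check the fsid and the monitor address:

    [root@test ~]# monmaptool --print /tmp/monmap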
  • Create the monitor directory

    [root@test ~]# mkdir /var/lib/ceph/mon/ceph-test
  • Create the monitor

    [root@test ~]# ceph-mon --mkfs -i test --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
  • Fill in ceph.conf

    Add the following to ceph.conf, replacing the placeholder fields with your own values:

    public network = network[, network]
    cluster network = network[, network]
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx
    osd journal size = n

    [root@test ~]# cat /etc/ceph/ceph.conf
    [global]
    fsid = 1ea51317-3ccf-4a59-88ef-7d86f7ee1c3b
    mon_initial_members = test
    mon_host = 10.10.8.19
    public_network = 10.10.8.0/24
    cluster_network = 10.10.8.0/24
    osd_journal_size = 2048
  • Create the done file

    [root@test ~]# touch /var/lib/ceph/mon/ceph-test/done
  • Fix the ownership

    [root@test ~]# chown ceph:ceph /var/lib/ceph/mon/ceph-test/ -R
  • Start the monitor service

    [root@test ~]# systemctl start ceph-mon@test.service
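    So that the monitor also comes back after a reboot, the unit (and the ceph.target it belongs to) can be enabled as well:

    [root@test ~]# systemctl enable ceph-mon@test.service
    [root@test ~]# systemctl enable ceph.target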
  • Verify

    [root@test ~]# ceph -s
        cluster 1ea51317-3ccf-4a59-88ef-7d86f7ee1c3b
         health HEALTH_ERR
                no osds
         monmap e1: 1 mons at test=10.10.8.19:6789/0
                election epoch 3, quorum 0 test
         osdmap e1: 0 osds: 0 up, 0 in
                flags sortbitwise,require_jewel_osds
          pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
                0 kB used, 0 kB / 0 kB avail
                      64 creating

Deploy the OSD

  • Generate a UUID for the OSD

    [root@test ~]# uuidgen
    8ff77024-27dd-423b-9f66-0a7cefcd3f53
  • Create the OSD

    The command prints the id allocated to the new OSD (0 here):

    [root@test ~]# ceph osd create 8ff77024-27dd-423b-9f66-0a7cefcd3f53
    0
  • Create the OSD directory

    [root@test ~]# mkdir /var/lib/ceph/osd/ceph-0
  • Format the OSD disk

    [root@test ~]# mkfs -t xfs /dev/vdb
  • Mount the OSD disk

    [root@test ~]# mount /dev/vdb /var/lib/ceph/osd/ceph-0/
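    As written, the mount does not survive a reboot; one option (the mount options here are an assumption, adjust them and the device path to your environment) is to add an fstab entry:

    [root@test ~]# echo "/dev/vdb /var/lib/ceph/osd/ceph-0 xfs defaults,noatime 0 0" >> /etc/fstab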
  • Initialize the OSD

    [root@test ~]# ceph-osd -i 0 --mkfs --mkkey --osd-uuid 8ff77024-27dd-423b-9f66-0a7cefcd3f53
  • Register the OSD keyring

    [root@test ~]# ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring
  • Add the host to the CRUSH map

    [root@test ~]# ceph osd tree
    ID WEIGHT TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1      0 root default
     0      0 osd.0           down        0          1.00000
    [root@test ~]# ceph osd crush add-bucket test host
  • Move the host under the root of the CRUSH map

    [root@test ~]# ceph osd crush move test root=default
  • Add the OSD to the host in the CRUSH map

    [root@test ~]# ceph osd crush add osd.0 1.0 host=test
  • Fix the ownership

    [root@test ~]# chown ceph:ceph /var/lib/ceph/osd/ceph-0/ -R
  • Start the OSD service

    [root@test ~]# systemctl start ceph-osd@0.service
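    As with the monitor, the OSD unit can be enabled so it starts automatically after a reboot:

    [root@test ~]# systemctl enable ceph-osd@0.service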
  • Verify

    [root@test ~]# ceph -s
        cluster 1ea51317-3ccf-4a59-88ef-7d86f7ee1c3b
         health HEALTH_OK
         monmap e1: 1 mons at test=10.10.8.19:6789/0
                election epoch 3, quorum 0 test
         osdmap e12: 1 osds: 1 up, 1 in
                flags sortbitwise,require_jewel_osds
          pgmap v16: 64 pgs, 1 pools, 0 bytes data, 0 objects
                2080 MB used, 18389 MB / 20470 MB avail
                      64 active+clean
    [root@test ~]# ceph osd tree
    ID WEIGHT  TYPE NAME        UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 1.00000 root default
    -2 1.00000     host test
     0 1.00000         osd.0         up  1.00000          1.00000
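    Beyond ceph -s, a quick way to confirm the OSD actually accepts I/O is to write and list back a single object in the default rbd pool (the object and file names below are arbitrary):

    [root@test ~]# echo "hello ceph" > /tmp/obj.txt
    [root@test ~]# rados -p rbd put test-obj /tmp/obj.txt
    [root@test ~]# rados -p rbd ls
    test-obj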
