Setting Up Ceph on CentOS 7.4



 This article uses the ceph-deploy tool to quickly stand up a Ceph cluster.


1. Environment Preparation


  • Set the hostname on each node (this guide uses CentOS 7.4)


  [root@admin-node ~]# cat /etc/redhat-release
  CentOS Linux release 7.4.1708 (Core)
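
  The hostname change itself is not shown above; a minimal sketch using systemd's hostnamectl, run once on each node with that node's own name:

  [root@admin-node ~]# hostnamectl set-hostname admin-node
  [root@node1 ~]# hostnamectl set-hostname node1
  [root@node2 ~]# hostnamectl set-hostname node2
  [root@node3 ~]# hostnamectl set-hostname node3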


IP            Hostname     Role
10.10.10.20   admin-node   ceph-deploy
10.10.10.21   node1        mon
10.10.10.22   node2        osd
10.10.10.23   node3        osd


  • Set up name resolution (here we edit /etc/hosts instead of using DNS)

  • Configure this on every node

   

  [root@admin-node ~]# cat /etc/hosts
  127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
  10.10.10.20 admin-node
  10.10.10.21 node1
  10.10.10.22 node2
  10.10.10.23 node3


  • Configure the yum repositories

  • Configure this on every node


  [root@admin-node ~]# mv /etc/yum.repos.d{,.bak}
  [root@admin-node ~]# mkdir /etc/yum.repos.d
  [root@admin-node ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
  [root@admin-node ~]# cat /etc/yum.repos.d/ceph.repo
  [Ceph]
  name=Ceph
  baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
  enabled=1
  gpgcheck=0
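
  This repo file only covers the noarch packages. If you also want the Ceph binaries themselves pulled from the Aliyun mirror rather than download.ceph.com, a sketch of an extra x86_64 section (not part of the original article; the mirror path is an assumption):

  [Ceph-x86_64]
  name=Ceph x86_64
  baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
  enabled=1
  gpgcheck=0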


  • Disable the firewall and SELinux

  • Configure this on every node


  [root@admin-node ~]# systemctl stop firewalld.service
  [root@admin-node ~]# systemctl disable firewalld.service
  [root@admin-node ~]# setenforce 0
  [root@admin-node ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config


  • Set up passwordless SSH login between the nodes

  • Configure this on every node

  [root@admin-node ~]# ssh-keygen
  [root@admin-node ~]# ssh-copy-id 10.10.10.21
  [root@admin-node ~]# ssh-copy-id 10.10.10.22
  [root@admin-node ~]# ssh-copy-id 10.10.10.23
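
  A quick way to confirm passwordless login works from the admin node (a sketch; it assumes the /etc/hosts entries above are in place):

  [root@admin-node ~]# for h in node1 node2 node3; do ssh $h hostname; done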


  • Synchronize time with chrony

  • Configure this on every node


  [root@admin-node ~]# yum install chrony -y
  [root@admin-node ~]# systemctl restart chronyd
  [root@admin-node ~]# systemctl enable chronyd
  [root@admin-node ~]# chronyc sources -v    (check whether time is synchronized; a * marks the source currently synced to)
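
  If a node has not synchronized yet, the clock can be stepped immediately (a sketch; chronyd must already be running):

  [root@admin-node ~]# chronyc makestep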


2. Installing Ceph (Jewel)


  • Install ceph-deploy

  • Only on the admin-node


  [root@admin-node ~]# yum install ceph-deploy -y


  • On the admin node, create a directory to hold the configuration files and keys that ceph-deploy generates

  • Only on the admin-node


  [root@admin-node ~]# mkdir /etc/ceph
  [root@admin-node ~]# cd /etc/ceph/


  • Purge old configuration (run the following if you want to start the installation over)

  • Only on the admin-node


  [root@admin-node ceph]# ceph-deploy purgedata node1 node2 node3
  [root@admin-node ceph]# ceph-deploy forgetkeys
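
  purgedata only wipes the data under /var/lib/ceph; if you also want the Ceph packages removed before reinstalling, ceph-deploy has a purge subcommand as well (a sketch, not used in this walkthrough):

  [root@admin-node ceph]# ceph-deploy purge node1 node2 node3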


  • Create the cluster

  • Only on the admin-node


  [root@admin-node ceph]# ceph-deploy new node1


  • Edit the Ceph configuration and set the replica count to 2

  • Only on the admin-node


  [root@admin-node ceph]# vi ceph.conf
  [global]
  fsid = 183e441b-c8cd-40fa-9b1a-0387cb8e8735
  mon_initial_members = node1
  mon_host = 10.10.10.21
  auth_cluster_required = cephx
  auth_service_required = cephx
  auth_client_required = cephx
  filestore_xattr_use_omap = true
  osd pool default size = 2
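
  If you edit ceph.conf again after the cluster is already deployed, the updated file also has to reach the other nodes; one way is ceph-deploy's config push (a sketch, not part of the original walkthrough):

  [root@admin-node ceph]# ceph-deploy --overwrite-conf config push node1 node2 node3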

    

  • Install Ceph on all nodes

  • Run only from the admin-node


  [root@admin-node ceph]# ceph-deploy install admin-node node1 node2 node3


    [node3][DEBUG ] Configure Yum priorities to include obsoletes

    [node3][WARNIN] check_obsoletes has been enabled for Yum priorities plugin

    [node3][INFO  ] Running command: rpm --import https://download.ceph.com/keys/release.asc

    [node3][INFO  ] Running command: rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm

    [node3][DEBUG ] Retrieving https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm

    [node3][WARNIN] warning: /etc/yum.repos.d/ceph.repo created as /etc/yum.repos.d/ceph.repo.rpmnew

    [node3][DEBUG ] Preparing...                          ########################################

    [node3][DEBUG ] Updating / installing...

    [node3][DEBUG ] ceph-release-1-1.el7                  ########################################

    [node3][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority

    [ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph-noarch'

  The run failed here: a newer ceph-release package was pulled in and its repo file clashed with the one we configured.

    Fix: yum remove ceph-release

    Remove ceph-release on every node, then re-run the previous command; one way to do the removal on all nodes is sketched below.
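
  A sketch of removing the package on all nodes from admin-node (it assumes the passwordless SSH set up earlier):

  [root@admin-node ceph]# for h in admin-node node1 node2 node3; do ssh $h yum remove -y ceph-release; done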

        

  • Deploy the initial monitor(s) and gather all keys

  • Only on the admin-node


  [root@admin-node ceph]# ceph-deploy mon create-initial
  [root@admin-node ceph]# ls
  ceph.bootstrap-mds.keyring  ceph.bootstrap-rgw.keyring  ceph-deploy-ceph.log
  ceph.bootstrap-mgr.keyring  ceph.client.admin.keyring   ceph.mon.keyring
  ceph.bootstrap-osd.keyring  ceph.conf                   rbdmap
  [root@admin-node ceph]# ceph -s    (check the cluster status)
      cluster 8d395c8f-6ac5-4bca-bbb9-2e0120159ed9
       health HEALTH_ERR
              no osds
       monmap e1: 1 mons at {node1=10.10.10.21:6789/0}
              election epoch 3, quorum 0 node1
       osdmap e1: 0 osds: 0 up, 0 in
              flags sortbitwise,require_jewel_osds
        pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
              0 kB used, 0 kB / 0 kB avail
                    64 creating


  • Create the OSDs


  [root@node2 ~]# lsblk    (node2 and node3 provide the OSDs)
  NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  fd0           2:0    1    4K  0 disk 
  sda           8:0    0   20G  0 disk 
  ├─sda1        8:1    0    1G  0 part /boot
  └─sda2        8:2    0   19G  0 part 
    ├─cl-root 253:0    0   17G  0 lvm  /
    └─cl-swap 253:1    0    2G  0 lvm  [SWAP]
  sdb           8:16   0   50G  0 disk /var/local/osd0
  sdc           8:32   0    5G  0 disk 
  sr0          11:0    1  4.1G  0 rom  
  [root@node2 ~]# mkfs.xfs /dev/sdb
  [root@node2 ~]# mkdir /var/local/osd0
  [root@node2 ~]# mount /dev/sdb /var/local/osd0
  [root@node2 ~]# chown ceph:ceph /var/local/osd0
  [root@node3 ~]# mkdir /var/local/osd1
  [root@node3 ~]# mkfs.xfs /dev/sdb
  [root@node3 ~]# mount /dev/sdb /var/local/osd1/
  [root@node3 ~]# chown ceph:ceph /var/local/osd1
  [root@admin-node ceph]# ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1    (run on admin-node)
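
  Note that these mounts are not persistent; for the OSD directories to survive a reboot, each OSD node also needs an /etc/fstab entry. A sketch for node2 (it assumes /dev/sdb stays the OSD disk's device name):

  [root@node2 ~]# echo '/dev/sdb /var/local/osd0 xfs defaults 0 0' >> /etc/fstab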


  • Copy the key and configuration file from admin-node to every node

  • Only on the admin-node


  [root@admin-node ceph]# ceph-deploy admin admin-node node1 node2 node3


  • Make sure ceph.client.admin.keyring has the correct permissions

  • Run on the OSD nodes only


  [root@node2 ~]# chmod +r /etc/ceph/ceph.client.admin.keyring


  • From the admin node, run ceph-deploy to prepare the OSDs


  [root@admin-node ceph]# ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1


  • Activate the OSDs


  [root@admin-node ceph]# ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
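
  To confirm the OSDs registered and came up after activation, a quick check from any node holding the admin keyring (a sketch):

  [root@admin-node ceph]# ceph osd tree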


  • Check the cluster health



  [root@admin-node ceph]# ceph health
  HEALTH_OK
  [root@admin-node ceph]# ceph -s
      cluster 69f64f6d-f084-4b5e-8ba8-7ba3cec9d927
       health HEALTH_OK
       monmap e1: 1 mons at {node1=10.10.10.21:6789/0}
              election epoch 3, quorum 0 node1
       osdmap e14: 3 osds: 3 up, 3 in
              flags sortbitwise,require_jewel_osds
        pgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects
              15459 MB used, 45950 MB / 61410 MB avail
                    64 active+clean
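
  To exercise the cluster end to end, a quick sanity test (a sketch; the pool name, pg count, and object are arbitrary choices, not from the original article):

  [root@admin-node ceph]# ceph osd pool create test 64
  [root@admin-node ceph]# echo hello > /tmp/obj.txt
  [root@admin-node ceph]# rados -p test put obj1 /tmp/obj.txt
  [root@admin-node ceph]# rados -p test ls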




This article comes from the "若不奋斗,何以称王" blog; please keep this attribution: http://wangzc.blog.51cto.com/12875919/1966109
