Ceph Cluster Deployment
Posted by 涛子GE哥
Environment: three CentOS 7.1 hosts; kernel 3.10.0-229.el7.x86_64

IP | Role | Hostname
192.168.1.10 | osd mon admin | ceph-01
192.168.1.11 | osd mon | ceph-02
192.168.1.12 | osd mon | ceph-03
1. Configure passwordless authentication between the admin node and the OSD nodes (SSH keys)

1.1 Change the hostname (do this on all three hosts)
[root@192.168.1.10 root]# vim /etc/hostname
ceph-01
[root@192.168.1.10 root]# hostname ceph-01
1.2 Edit the hosts file so the nodes can reach each other by hostname
1.2.1 On the admin node (this enables passwordless login to all OSD nodes later)
[root@192.168.1.10 root]# vim /etc/hosts
192.168.1.10 ceph-01
192.168.1.11 ceph-02
192.168.1.12 ceph-03
1.2.2 On the other OSD and MON nodes, add the entries according to the plan above (each node should be able to resolve all three hostnames), for example:
[root@192.168.1.11 root]# vim /etc/hosts
192.168.1.11 ceph-02
1.2.3 Ping test (from the admin node)
[root@192.168.1.10 root]# ping ceph-01
PING ceph-01 (192.168.1.10) 56(84) bytes of data.
64 bytes from ceph-01 (192.168.1.10): icmp_seq=1 ttl=64 time=0.036 ms
64 bytes from ceph-01 (192.168.1.10): icmp_seq=2 ttl=64 time=0.028 ms
^C
--- ceph-01 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.028/0.032/0.036/0.004 ms
[root@192.168.1.10 root]# ping ceph-02
PING ceph-02 (192.168.1.11) 56(84) bytes of data.
64 bytes from ceph-02 (192.168.1.11): icmp_seq=1 ttl=64 time=0.248 ms
64 bytes from ceph-02 (192.168.1.11): icmp_seq=2 ttl=64 time=0.250 ms
^C
--- ceph-02 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.248/0.249/0.250/0.001 ms
[root@192.168.1.10 root]# ping ceph-03
PING ceph-03 (192.168.1.12) 56(84) bytes of data.
64 bytes from ceph-03 (192.168.1.12): icmp_seq=1 ttl=64 time=0.174 ms
64 bytes from ceph-03 (192.168.1.12): icmp_seq=2 ttl=64 time=0.172 ms
^C
--- ceph-03 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.172/0.173/0.174/0.001 ms
1.2.4 Add a whitelist entry (on all nodes; the IP in hosts.allow is the admin node's IP)
[root@192.168.1.10 root]# vim /etc/hosts.allow
sshd:192.168.1.10:allow
1.2.5 Prepare the base environment: stop the firewall and disable SELinux (all nodes)
[root@192.168.1.10 root]# systemctl stop firewalld
[root@192.168.1.10 root]# systemctl disable firewalld
[root@192.168.1.10 root]# vim /etc/selinux/config
SELINUX=disabled
[root@192.168.1.10 root]# setenforce 0
1.2.6 Configure passwordless authentication
1.2.6.1 Edit the SSH daemon configuration (all nodes)
[root@192.168.1.10 root]# vim /etc/ssh/sshd_config
PermitRootLogin yes    # allow root login (comment this out again after deployment; direct root login is risky)
[root@192.168.1.10 root]# systemctl restart sshd
1.2.6.2 Generate the key pair (admin node)
ssh-keygen    # press Enter through all prompts
1.2.6.3 Copy the public key to each Ceph node (admin node)
[root@192.168.1.10 root]# ssh-copy-id root@ceph-01
[root@192.168.1.10 root]# ssh-copy-id root@ceph-02
[root@192.168.1.10 root]# ssh-copy-id root@ceph-03
Verify:
[root@192.168.1.10 root]# ssh ceph-01
Last login: Tue Mar 20 09:46:13 2018 from 192.168.1.10
[root@192.168.1.10 ~]#
[root@192.168.1.10 root]# ssh ceph-02
Last login: Tue Mar 20 09:45:40 2018 from 192.168.1.10
[root@192.168.1.11 ~]#
[root@192.168.1.10 root]# ssh ceph-03
Last login: Mon Mar 19 16:01:10 2018 from 192.168.1.10
[root@192.168.1.12 ~]#
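Once the keys are copied, a short loop can confirm that every node really accepts passwordless root logins. This is a minimal sketch using the hostnames defined in /etc/hosts above; it is not part of the original transcript:

for h in ceph-01 ceph-02 ceph-03; do
    ssh -o BatchMode=yes root@$h hostname    # BatchMode makes ssh fail instead of prompting, so a missing key shows up immediately
done

Each line of output should simply be the remote hostname; any password prompt or error means the key for that node was not installed correctly.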
2. Configure NTP (all nodes)
2.1 Install ntp with yum
[root@192.168.1.10 root]# yum -y install ntp
2.2 Back up ntp.conf and point it at the NTP servers (here ceph-01 and ceph-02 act as the NTP servers)
[root@192.168.1.10 root]# cp /etc/ntp.conf /etc/ntp.conf.bak
[root@192.168.1.10 root]# grep -v "^#" /etc/ntp.conf | grep -v "^$"
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
restrict 0.0.0.0 mask 0.0.0.0 nomodify notrap
server 192.168.1.10 prefer
server 192.168.1.11
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor
2.3 Restart ntpd and enable it at boot
[root@192.168.1.10 root]# systemctl restart ntpd
[root@192.168.1.10 root]# systemctl enable ntpd
2.4 Write the system time to the hardware clock
[root@192.168.1.10 root]# hwclock -w
2.5 Verify
[root@192.168.1.10 root]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*192.168.1.10    LOCAL(0)         2 u  254 1024  377    0.209   -0.475   1.544
 192.168.1.11    .INIT.          16 u    - 1024    0    0.000    0.000   0.000
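Clock drift between monitors is a common cause of HEALTH_WARN, so it is worth checking every node at once. A small sketch, assuming the SSH keys from section 1 are in place and the ntp package (which ships ntpstat) is installed on each host:

for h in ceph-01 ceph-02 ceph-03; do
    echo "== $h =="
    ssh root@$h 'date; ntpstat || ntpq -p'    # ntpstat reports whether the clock is synchronised; fall back to ntpq -p for detail
done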
3. Deploy the Ceph environment
3.1 Create the partitions; the example below uses one host with one journal/data disk pair
# sdb: 500G SSD, used here as the journal disk
# sdd: 4T spinning disk, used as the data disk
[root@192.168.1.10 root]# parted -s /dev/sdb mklabel gpt
[root@192.168.1.10 root]# parted -s /dev/sdd mklabel gpt
[root@192.168.1.10 root]# parted -s /dev/sdb mkpart primary 1 15000    # partition 1, roughly 15G
[root@192.168.1.10 root]# parted -s /dev/sdd mkpart primary 1 100%
# To label many disks at once: for i in {b..m}; do parted -s /dev/sd$i mklabel gpt; done
Check:
[root@192.168.1.10 root]# lsblk
sdb      8:16   0 446.6G  0 disk
└─sdb1   8:17   0    14G  0 part
sdd      8:48   0   3.7T  0 disk
└─sdd1   8:49   0   3.7T  0 part
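On hosts with several data disks, the one-liner hinted at above can be extended so that each data disk gets a full-size data partition and a matching ~15G journal partition on the SSD. The device names below are hypothetical; adjust them to the actual layout before running:

JOURNAL=/dev/sdb
parted -s $JOURNAL mklabel gpt
n=1
for d in /dev/sdc /dev/sdd /dev/sde; do
    parted -s $d mklabel gpt
    parted -s $d mkpart primary 1 100%                 # one data partition spanning the whole disk
    start=$(( (n - 1) * 15000 + 1 ))
    end=$(( n * 15000 ))
    parted -s $JOURNAL mkpart primary $start $end      # ~15G journal partition per OSD on the SSD
    n=$(( n + 1 ))
done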
3.2 Configure the yum repository; a self-hosted repo is used here (all nodes)
# If you do not know how to build one, see the previous post《搭建本地Ceph yum源》(building a local Ceph yum repository)
3.2.1 Check the repo file
[root@192.168.1.10 root]# cat ceph.repo
[Ceph-10.2.9]
name=Ceph-10.2.9
baseurl=http://<yum-server-IP>/yum/x86_64/ceph
gpgcheck=0
enabled=1
# The Aliyun mirror can also be used instead.
3.2.2 Clear the old cache and build a fresh yum cache
[root@192.168.1.10 root]# yum clean all
[root@192.168.1.10 root]# yum makecache
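Rather than editing the repo file on every node by hand, it can be copied out from the admin node. A sketch, assuming the file lives at /etc/yum.repos.d/ceph.repo on the admin node (the path is an assumption, not stated in the original):

for h in ceph-02 ceph-03; do
    scp /etc/yum.repos.d/ceph.repo root@$h:/etc/yum.repos.d/
    ssh root@$h 'yum clean all && yum makecache'    # rebuild the cache on each remote node
done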
3.3 Installation
3.3.1 Kernel tuning (all nodes)
[root@192.168.1.11 root]# echo 'kernel.pid_max = 4194303' >> /etc/sysctl.conf
[root@192.168.1.11 root]# echo 'fs.file-max = 26234859' >> /etc/sysctl.conf
[root@192.168.1.11 root]# echo '* soft nofile 65536' >> /etc/security/limits.conf
[root@192.168.1.11 root]# echo '* hard nofile 65536' >> /etc/security/limits.conf
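The sysctl entries only take effect once they are loaded, and the limits.conf change only applies to new login sessions. A quick check (standard commands, nothing cluster-specific):

sysctl -p                             # load the new kernel parameters from /etc/sysctl.conf
sysctl kernel.pid_max fs.file-max     # confirm the values were applied
ulimit -n                             # shows 65536 only in a fresh login shell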
3.3.2 Deploy on the admin node
[root@192.168.1.10 root]# yum -y install ceph-deploy
[root@192.168.1.10 root]# cd
[root@192.168.1.10 ~]# mkdir my-cluster
[root@192.168.1.10 ~]# cd my-cluster/
3.3.3 Deploy on the OSD nodes (every node that will carry OSDs, i.e. all three hosts here)
[root@192.168.1.11 root]# yum install ceph
3.3.4 Initialize the monitors
3.3.4.1 Step one; these commands must be run from the my-cluster directory
[root@192.168.1.10 my-cluster]# ceph-deploy new ceph-01 ceph-02 ceph-03
[root@192.168.1.10 my-cluster]# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
# A default ceph.conf is generated; edit it if you have additional requirements
3.3.4.2 Step two
[root@192.168.1.10 my-cluster]# ceph-deploy --overwrite-conf mon create-initial
# --overwrite-conf overwrites any configuration already present on the target nodes
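The original transcript moves straight on to OSDs, but a common follow-up at this point is to push the configuration and admin keyring to every node and confirm monitor quorum. These are standard ceph-deploy / ceph CLI commands of the Jewel era, run from the my-cluster directory:

ceph-deploy admin ceph-01 ceph-02 ceph-03   # distribute ceph.conf and the admin keyring to each node
ceph mon stat                               # should list all three monitors in quorum
ceph quorum_status --format json-pretty     # more detail on the monitor quorum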
3.4 Add the OSDs (in HOST:DATA:JOURNAL form: the first partition holds data, the second holds the journal)
The journal partitions must be owned by ceph:ceph:
[root@192.168.1.10 my-cluster]# chown ceph:ceph /dev/sdb*
Prepare the OSD:
[root@192.168.1.10 my-cluster]# ceph-deploy --overwrite-conf osd prepare ceph-01:/dev/sdd1:/dev/sdb1
Activate the OSD:
[root@192.168.1.10 my-cluster]# ceph-deploy --overwrite-conf osd activate ceph-01:/dev/sdd1:/dev/sdb1
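The same pair of commands has to be repeated for ceph-02 and ceph-03. A sketch that loops over all three nodes, assuming every host uses the same device layout (sdd1 for data, sdb1 for the journal; adjust if the layout differs):

for h in ceph-01 ceph-02 ceph-03; do
    ssh root@$h 'chown ceph:ceph /dev/sdb*'                           # journal partitions must be owned by ceph:ceph
    ceph-deploy --overwrite-conf osd prepare  $h:/dev/sdd1:/dev/sdb1
    ceph-deploy --overwrite-conf osd activate $h:/dev/sdd1:/dev/sdb1
done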
Check:
[root@192.168.1.10 my-cluster]# ceph -s
    cluster fba764dc-998a-4acb-ac23-2d0a405e59f7
     health HEALTH_OK
     monmap e1: 3 mons at {ceph-01=192.168.1.10:6789/0,ceph-02=192.168.1.11:6789/0,ceph-03=192.168.1.12:6789/0}
            election epoch 6, quorum 0,1,2 ceph-01,ceph-02,ceph-03
     osdmap e21: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v46: 64 pgs, 1 pools, 181 bytes data, 1 objects
            100 MB used, 11170 GB / 11171 GB avail
                  64 active+clean
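A few more read-only checks are useful once the cluster reports HEALTH_OK (standard ceph CLI commands, nothing specific to this cluster):

ceph osd tree        # shows each OSD, its host, weight, and up/in state
ceph df              # overall and per-pool capacity usage
ceph health detail   # explains any warning that ceph -s reports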
Quick reformat of a disk (wipe the partition table, re-create the partition, then make the filesystem):
sgdisk -o /dev/sdd                          # zap the existing GPT/MBR structures
parted -s /dev/sdd mklabel gpt
parted -s /dev/sdd mkpart primary 1 100%
mkfs.xfs -f -i size=2048 /dev/sdd1
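To wipe and reformat several data disks in one pass, the same sequence can be wrapped in a loop. The device letters here are hypothetical and the operation is destructive, so double-check them first:

for i in d e f; do
    sgdisk -o /dev/sd$i                         # destroy existing partition tables on /dev/sd$i
    parted -s /dev/sd$i mklabel gpt
    parted -s /dev/sd$i mkpart primary 1 100%
    mkfs.xfs -f -i size=2048 /dev/sd${i}1
done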