Ceph Cluster Deployment
Posted wshenjin
Environment
Two nodes: ceph1 and ceph2
- ceph1: mon, mds, osd.0, osd.1
- ceph2: osd.2, osd.3
Network configuration:
ceph1: management network, eth0, 10.0.0.20
       storage network, eth1, 10.0.1.20
ceph2: management network, eth0, 10.0.0.21
       storage network, eth1, 10.0.1.21
Installation
root@ceph1:~# wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
root@ceph1:~# echo deb http://download.ceph.com/debian-jewel/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
root@ceph1:~# apt-get update && apt-get upgrade
root@ceph1:~# apt-get install ceph-common ceph-fs-common ceph-mds ceph
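The same repository and packages are needed on ceph2 as well; a sketch, assuming ceph2 runs the same Debian/Ubuntu release (ceph2 hosts only OSDs, so ceph-mds can be omitted there):
root@ceph2:~# wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
root@ceph2:~# echo deb http://download.ceph.com/debian-jewel/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
root@ceph2:~# apt-get update && apt-get upgrade
root@ceph2:~# apt-get install ceph-common ceph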
Configuration
/etc/ceph/ceph.conf:
[global]
max open files = 131072
fsid = eb3d751f-829f-4eae-8d69-0423b81f88f4
auth cluster required = none
auth service required = none
auth client required = none
osd pool default size = 2
osd pool default min size = 1
mon osd full ratio = .95
mon osd nearfull ratio = .85
[mon]
mon data = /var/lib/ceph/mon/$cluster-$name
[osd]
osd journal size = 1024
osd mkfs type = xfs
osd mkfs options xfs = -f
osd mount options xfs = rw,noatime
[mon.a]
host = ceph1
mon addr = 10.0.1.20:6789
[osd.0]
host = ceph1
devs = /dev/sdb1
[osd.1]
host = ceph1
devs = /dev/sdc1
[osd.2]
host = ceph2
devs = /dev/sdb1
[osd.3]
host = ceph2
devs = /dev/sdc1
Deploy the mon
root@ceph1:~# ceph-authtool /etc/ceph/ceph.mon.keyring --create-keyring --gen-key -n mon.
root@ceph1:~# ceph-mon -i a --mkfs --keyring /etc/ceph/ceph.mon.keyring
root@ceph1:~# chown ceph:ceph -R /var/lib/ceph/mon/ /var/run/ceph
root@ceph1:~# /etc/init.d/ceph start mon.a
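Once mon.a is running, the monitor can be verified with the cluster status command (it will report a non-healthy state until OSDs join, which is expected at this stage):
root@ceph1:~# ceph -s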
Deploy the OSDs
root@ceph1:~# mkfs.xfs /dev/sdb1
root@ceph1:~# mkfs.xfs /dev/sdc1
root@ceph1:~# mkdir -p /var/lib/ceph/osd/ceph-0/
root@ceph1:~# mkdir -p /var/lib/ceph/osd/ceph-1/
root@ceph1:~# mount /dev/sdb1 /var/lib/ceph/osd/ceph-0/
root@ceph1:~# mount /dev/sdc1 /var/lib/ceph/osd/ceph-1/
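To keep these mounts across reboots, matching entries can be added to /etc/fstab; a sketch using the mount options from the config above:
root@ceph1:~# echo '/dev/sdb1 /var/lib/ceph/osd/ceph-0 xfs rw,noatime 0 0' >> /etc/fstab
root@ceph1:~# echo '/dev/sdc1 /var/lib/ceph/osd/ceph-1 xfs rw,noatime 0 0' >> /etc/fstab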
Deploy osd.0
Note: no matter which server an OSD lives on, ceph osd create must be run on the server hosting the mon.
root@ceph1:~# ceph osd create
root@ceph1:~# ceph-osd -i 0 --mkfs --mkkey
root@ceph1:~# ceph auth add osd.0 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-0/keyring
root@ceph1:~# ceph osd crush add osd.0 0.2 root=default host=ceph1
root@ceph1:~# ceph-osd -i 0
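osd.0 should now report as up and in; a quick check:
root@ceph1:~# ceph osd stat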
Deploy the other OSDs the same way.
Note: when an OSD is on a different server, copy ceph.conf to that server first.
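A sketch, assuming the hostname ceph2 resolves and root SSH access between the nodes:
root@ceph1:~# scp /etc/ceph/ceph.conf ceph2:/etc/ceph/ceph.conf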
root@ceph1:~# ceph osd tree
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.79999 root default
-2 0.39999     host ceph1
 0 0.20000         osd.0       up  1.00000          1.00000
 1 0.20000         osd.1       up  1.00000          1.00000
-3 0.39999     host ceph2
 2 0.20000         osd.2       up  1.00000          1.00000
 3 0.20000         osd.3       up  1.00000          1.00000
Deploy the mds
root@ceph1:~# ceph-mds -i a -n mds.a -c /etc/ceph/ceph.conf -m 10.0.1.20:6789
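On Jewel the mds stays in standby until a filesystem exists. A sketch of creating one and mounting it with the kernel client; the pool names, PG count of 64, and mount point are examples, not from the original (with auth disabled in ceph.conf, no secret is needed on mount):
root@ceph1:~# ceph osd pool create cephfs_data 64
root@ceph1:~# ceph osd pool create cephfs_metadata 64
root@ceph1:~# ceph fs new cephfs cephfs_metadata cephfs_data
root@ceph1:~# ceph mds stat
root@ceph1:~# mkdir -p /mnt/cephfs && mount -t ceph 10.0.1.20:6789:/ /mnt/cephfs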