Ceph: Notes on Installing and Configuring a Ceph Cluster on VMware Workstation Virtual Machines


1. Preface

This article describes the process of building a three-node Ceph cluster.

I spent the last couple of days tinkering with Ceph, planning to build NAS storage on top of it. I worked through several online tutorials and cross-checked a book, but still ran into plenty of problems during installation and configuration, so I am writing this post to record the issues I hit and the steps I followed, in the hope that it helps others set up a Ceph cluster of their own.


2. Installation Environment

Linux version (minimal installation)

[root@node1 ~]# cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)

Ceph version: Giant

[root@node1 ~]# ceph -v
ceph version 0.87.2 (87a7cec9ab11c677de2ab23a7668a77d2f5b955e)

VMware Workstation version

Network (bridged to the physical host)

node1 192.168.68.11

node2 192.168.68.12

node3 192.168.68.13

Gateway 192.168.68.1
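These addresses have to be applied inside each guest. A minimal sketch for node1 using nmcli is shown below; the interface name ens33 and the gateway also acting as DNS server are assumptions, so check the real interface name with ip addr first.

# Assumed interface name: ens33; adjust to whatever "ip addr" actually shows.
nmcli connection modify ens33 ipv4.method manual \
    ipv4.addresses 192.168.68.11/24 \
    ipv4.gateway 192.168.68.1 \
    ipv4.dns 192.168.68.1    # assumes the gateway also provides DNS
nmcli connection modify ens33 connection.autoconnect yes
nmcli connection up ens33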

3. Installation Steps

3.1 Creating the Virtual Machines in VMware Workstation

(Screenshots of creating the virtual machine in VMware Workstation, omitted here.)

3.2 CentOS 7 Installation

(Screenshots of the CentOS 7 installation steps, omitted here.)

Repeat the same steps for node2 and node3, changing every "1" to "2" or "3" where it appears (hostname and IP address), or simply use the VMware Workstation clone feature; this is not shown again here. A sketch of the per-node adjustments follows below.
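As a rough sketch of those per-node adjustments (node3 is the same with 192.168.68.13; ens33 is the same assumed interface name as above):

# On node2 after cloning or installing:
hostnamectl set-hostname node2
nmcli connection modify ens33 ipv4.addresses 192.168.68.12/24
nmcli connection up ens33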

Add three extra disks to each of the three node VMs.

(Screenshots of adding the disks in VMware Workstation, omitted here.)
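Once the VMs are powered back on, it is worth checking that each node actually sees the three new disks; the ceph-deploy commands later on assume they appear as sdb, sdc and sdd.

# Expect sdb, sdc and sdd listed alongside the system disk sda.
lsblk -d -o NAME,SIZE,TYPE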

3.3 VM Preparation Before Installing Ceph (required on all three nodes)

Configure name resolution via /etc/hosts

vi /etc/hosts

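Based on the addresses listed in section 2, the entries added to /etc/hosts on every node would be:

192.168.68.11 node1
192.168.68.12 node2
192.168.68.13 node3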

Configure passwordless SSH from node1 to node2 and node3

ssh-keygen


ssh-copy-id node2
ssh-copy-id node3

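A quick sanity check that key-based login works from node1 (no password prompt should appear):

# Prints the remote hostname of each node without asking for a password.
for n in node2 node3; do ssh "$n" hostname; done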

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux

setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
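To confirm the change for the current session and for future boots:

getenforce                            # should now report Permissive
grep '^SELINUX=' /etc/selinux/config  # should show SELINUX=disabled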

Install and configure the NTP time service

yum install -y ntp ntpdate
ntpdate pool.ntp.org
systemctl restart ntpd.service
systemctl enable ntpd.service
systemctl enable ntpdate.service
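After ntpd has been running for a minute or two, the peer list should show a selected time source:

# Query the local ntpd; '*' in the first column marks the currently selected server.
ntpq -p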

Add the Ceph Giant release package and refresh yum

rpm -Uhv http://download.ceph.com/rpm-giant/el7/noarch/ceph-release-1-0.el7.noarch.rpm

Edit the ceph.repo configuration file (tip: in vi, :%d quickly deletes the whole file contents so you can paste the block below)

vi /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-giant/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-giant/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-giant/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
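Before updating, you can confirm that yum picked up the new repositories:

yum clean all
yum repolist | grep -i ceph   # should list the Ceph, Ceph-noarch and ceph-source repos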

Update the system packages

yum -y update

3.4 Installing and Configuring Ceph

It is strongly recommended to take snapshots of all three VMs before proceeding with this step.

3.4.1 Creating the Ceph Cluster on node1

Install ceph-deploy
yum install ceph-deploy -y
Create a new Ceph cluster
mkdir /etc/ceph
cd /etc/ceph
ceph-deploy new node1
From node1, install Ceph on every node
ceph-deploy install node1 node2 node3
Check the installed Ceph version
ceph -v
Create the first Ceph monitor on node1
ceph-deploy mon create-initial
Check the cluster status
ceph -s
Create the OSDs on node1 (run from the /etc/ceph directory)
List the available disks
ceph-deploy disk list node1
Wipe the partitions and contents of the selected disks
ceph-deploy disk zap node1:sdb node1:sdc node1:sdd
Create the OSDs
ceph-deploy osd create node1:sdb node1:sdc node1:sdd
Check the cluster status
ceph -s


At this point the single-node setup is complete.

3.4.2 Expanding the Ceph Cluster

On node1, add the public network address to /etc/ceph/ceph.conf (a sketch of where the line goes follows this block)
public network = 192.168.68.0/24
Create monitors on node2 and node3
ceph-deploy mon create node2
ceph-deploy mon create node3
Check the cluster status
ceph -s
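For reference, the line belongs in the [global] section of the ceph.conf that ceph-deploy new generated on node1. A rough sketch of the resulting file is shown below; the fsid matches the cluster shown later in this post, while the other generated lines may differ slightly on your system.

[global]
fsid = 5e563d2f-94e6-4d9b-8aaf-6b2c76f856e4
mon_initial_members = node1
mon_host = 192.168.68.11
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.68.0/24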


The other two nodes have now joined the cluster.

Prepare the disks on node2 and node3 and create their OSDs
ceph-deploy disk zap node2:sdb node2:sdc node2:sdd
ceph-deploy disk zap node3:sdb node3:sdc node3:sdd
ceph-deploy osd create node2:sdb node2:sdc node2:sdd
ceph-deploy osd create node3:sdb node3:sdc node3:sdd
Adjust the pg_num and pgp_num values of the rbd pool so the cluster reaches HEALTH_OK (see the sizing note after this list)
ceph osd pool set rbd pg_num 256
ceph osd pool set rbd pgp_num 256
Check the cluster status; if it reports HEALTH_OK, the cluster is fully built and healthy
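The value 256 roughly follows a common placement-group sizing rule of thumb: total PGs ~= (number of OSDs x 100) / replica count, taken to a nearby power of two. For this cluster:

# 9 OSDs * 100 / 3 replicas = 300; 256 is the nearby power of two used here.
# In Ceph releases of this era pg_num can only be increased later, never decreased.
echo $(( 9 * 100 / 3 ))   # 300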


4. Some Errors Encountered

Cause: the ceph.conf file was modified on the admin node (the public network line above), but the updated file was never pushed to the other nodes, so the configuration has to be pushed out first:

(Screenshot of the ceph-deploy error output, omitted here.)

ceph-deploy --overwrite-conf config push node2

ceph-deploy --overwrite-conf mon create node2

5. Some Commonly Used Commands

Check the Ceph cluster status

ceph -s
ceph status
[root@node1 ceph]# ceph -s
cluster 5e563d2f-94e6-4d9b-8aaf-6b2c76f856e4
health HEALTH_OK
monmap e3: 3 mons at node1=192.168.68.11:6789/0,node2=192.168.68.12:6789/0,node3=192.168.68.13:6789/0, election epoch 4, quorum 0,1,2 node1,node2,node3
osdmap e53: 9 osds: 9 up, 9 in
pgmap v122: 256 pgs, 1 pools, 0 bytes data, 0 objects
318 MB used, 134 GB / 134 GB avail
256 active+clean


Check the Ceph version

ceph -v
[root@node1 ceph]# ceph -v
ceph version 0.87.2 (87a7cec9ab11c677de2ab23a7668a77d2f5b955e)

Watch the cluster health in real time

ceph -w
[root@node1 ceph]# ceph -w
cluster 5e563d2f-94e6-4d9b-8aaf-6b2c76f856e4
health HEALTH_OK
monmap e3: 3 mons at node1=192.168.68.11:6789/0,node2=192.168.68.12:6789/0,node3=192.168.68.13:6789/0, election epoch 4, quorum 0,1,2 node1,node2,node3
osdmap e53: 9 osds: 9 up, 9 in
pgmap v122: 256 pgs, 1 pools, 0 bytes data, 0 objects
318 MB used, 134 GB / 134 GB avail
256 active+clean

2022-04-25 10:27:09.830678 mon.0 [INF] pgmap v122: 256 pgs: 256 active+clean; 0 bytes data, 318 MB used, 134 GB / 134 GB avail

Check the Ceph monitor quorum status

ceph quorum_status --format json-pretty
[root@node1 ceph]# ceph quorum_status --format json-pretty

"election_epoch": 4,
"quorum": [
0,
1,
2],
"quorum_names": [
"node1",
"node2",
"node3"],
"quorum_leader_name": "node1",
"monmap": "epoch": 3,
"fsid": "5e563d2f-94e6-4d9b-8aaf-6b2c76f856e4",
"modified": "2022-04-25 10:06:48.209985",
"created": "0.000000",
"mons": [
"rank": 0,
"name": "node1",
"addr": "192.168.68.11:6789\\/0",
"rank": 1,
"name": "node2",
"addr": "192.168.68.12:6789\\/0",
"rank": 2,
"name": "node3",
"addr": "192.168.68.13:6789\\/0"]

List the placement groups (PGs)

ceph pg dump

List the Ceph storage pools

ceph osd lspools
[root@node1 ceph]# ceph osd lspools
0 rbd,

List the cluster's authentication keys

ceph auth list
[root@node1 ceph]# ceph auth list
installed auth entries:

osd.0
key: AQA7GGViMOKvBhAApRSMC8DDLnlOQXmAD7UUDQ==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.1
key: AQBEGGVioIu6IxAAtFI6GkzHH86f5DbZcFLP+Q==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.2
key: AQBMGGViMHbdKxAAlahPljoMpYC5gRoJBPwmcg==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.3
key: AQB9BWZiwNzrOhAADsHBX/QZgBgZ/5SbJ9wFlg==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.4
key: AQCHBWZiECT2IBAApALFn7F7IDMW/ctkL8BAsA==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.5
key: AQCPBWZisNn3NxAAiDcZGUPWY+e3lflW+7c6AQ==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.6
key: AQCdBWZiQLATHxAAc7z2NE3FmFUx28dIXeHN2g==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.7
key: AQCnBWZiIPh+CRAA/hDBfG/iwChiNcVvB4lw2Q==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.8
key: AQCvBWZiQE9vARAAroqxul/dQRsDnN7Cz9pkAA==
caps: [mon] allow profile osd
caps: [osd] allow *
client.admin
key: AQCRF2ViOAyLMBAAip1+6gqV2wJmxYUlrBzdFQ==
caps: [mds] allow
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-mds
key: AQCSF2ViiMcjChAAncKTNo7o5sGaKFvoJHEFmA==
caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
key: AQCSF2ViUB/CABAAPXRQtcShw39kI6xYr51Cdw==
caps: [mon] allow profile bootstrap-osd

Check the cluster's space usage

ceph df
[root@node1 ceph]# ceph df
GLOBAL:
    SIZE     AVAIL    RAW USED    %RAW USED
    134G     134G     318M        0.23
POOLS:
    NAME    ID    USED    %USED    MAX AVAIL    OBJECTS
    rbd     0     0       0        45938M       0

Check the OSD CRUSH map

ceph osd tree
[root@node1 ceph]# ceph osd tree
# id    weight      type name          up/down  reweight
-1      0.08995     root default
-2      0.02998         host node1
0       0.009995            osd.0      up       1
1       0.009995            osd.1      up       1
2       0.009995            osd.2      up       1
-3      0.02998         host node2
3       0.009995            osd.3      up       1
4       0.009995            osd.4      up       1
5       0.009995            osd.5      up       1
-4      0.02998         host node3
6       0.009995            osd.6      up       1
7       0.009995            osd.7      up       1
8       0.009995            osd.8      up       1





(The author's knowledge is limited; if this article contains mistakes, corrections are welcome.)
