Error when adding an OSD during Ceph installation


I set up three virtual machines (CentOS 7, latest kernel): ceph1, ceph2, and ceph3, with ceph2 and ceph3 acting as OSD nodes. While configuring the OSDs, running ceph-deploy disk zap ceph2:/dev/vdb ceph3:/dev/vdb on ceph1 produces:

[ceph_deploy.osd][DEBUG ] zapping ceph3:/dev/vdb on ceph2:/dev/vdb
ssh: Could not resolve hostname ceph2:/dev/vdb: Name or service not known
[ceph_deploy][ERROR ] RuntimeError: connecting to host: ceph2:/dev/vdb resulted in errors: HostNotFound ceph2:/dev/vdb

I can't tell where the problem is; any hints would be appreciated.
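The error shows ssh treating the entire argument `ceph2:/dev/vdb` as a hostname, which is what ceph-deploy 2.x does: it dropped the old `host:disk` colon syntax and now takes the host and the device as separate arguments. A hedged sketch of both forms, using the hostnames and device from the question:

```shell
# ceph-deploy 1.5.x (jewel era) accepted the host:disk form:
ceph-deploy disk zap ceph2:/dev/vdb ceph3:/dev/vdb

# ceph-deploy 2.x takes one host followed by the device(s),
# so each host gets its own invocation:
ceph-deploy disk zap ceph2 /dev/vdb
ceph-deploy disk zap ceph3 /dev/vdb
```

If the bare hostnames also fail to resolve, check /etc/hosts on the deploy node; `ceph-deploy --version` will confirm which syntax applies.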


Ceph: Adding a New OSD Node

1. Operations on the new OSD node

1.1 Configure the Ceph yum repository

cat /etc/yum.repos.d/ceph-aliyun.repo

[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1

[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
priority=1

1.2 Install the Ceph packages

yum -y install ceph ceph-radosgw
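A quick sanity check after the install; the expectation of a 10.2.x build is an assumption based on the jewel baseurl in the repo file above:

```shell
# Verify the packages landed and came from the jewel (10.2.x) repo:
ceph --version
rpm -q ceph ceph-radosgw
```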

2. Operations on the admin (ceph-deploy) node

ceph-deploy reads ceph.conf from its working directory, so change into the cluster directory first:

# cd /my-cluster
# ssh-copy-id hz01-dev-ops-wanl-01
# ceph-deploy disk list hz01-dev-ops-wanl-01
# ceph-deploy disk zap hz01-dev-ops-wanl-01:vdb
# ceph-deploy osd prepare hz01-dev-ops-wanl-01:vdb
# ceph-deploy osd activate hz01-dev-ops-wanl-01:vdb1
# ceph-deploy admin hz01-dev-ops-wanl-01
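The `host:disk` form and the separate prepare/activate steps above are jewel-era ceph-deploy 1.5.x. On ceph-deploy 2.x both changed; a hedged equivalent for the same host and device:

```shell
# ceph-deploy 2.x: zap takes host and device as separate arguments,
# and prepare+activate collapse into a single "osd create":
ceph-deploy disk zap hz01-dev-ops-wanl-01 /dev/vdb
ceph-deploy osd create --data /dev/vdb hz01-dev-ops-wanl-01
```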

2.1 Check cluster status: ceph -s

cluster e2ca994a-00c4-477f-9390-ea3f931c5062
 health HEALTH_OK
 monmap e1: 3 mons at {hz-01-ops-tc-ceph-02=172.16.2.231:6789/0,hz-01-ops-tc-ceph-03=172.16.2.172:6789/0,hz-01-ops-tc-ceph-04=172.16.2.181:6789/0}
        election epoch 14, quorum 0,1,2 hz-01-ops-tc-ceph-03,hz-01-ops-tc-ceph-04,hz-01-ops-tc-ceph-02
 osdmap e45: 5 osds: 5 up, 5 in
        flags sortbitwise,require_jewel_osds
  pgmap v688: 64 pgs, 1 pools, 0 bytes data, 0 objects
        170 MB used, 224 GB / 224 GB avail
              64 active+clean
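For scripting, the same check can be made non-interactive; `HEALTH_OK` and the up/in counts are the fields worth asserting on (a sketch using only standard ceph CLI subcommands):

```shell
# Fail fast if the cluster is not healthy:
ceph health | grep -q HEALTH_OK || { echo "cluster not healthy"; exit 1; }

# One-line OSD summary, e.g. "5 osds: 5 up, 5 in":
ceph osd stat
```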

2.2 Check OSD status: ceph osd tree

ID WEIGHT  TYPE NAME                     UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.21950 root default                                                    
-2 0.04390     host hz-01-ops-tc-ceph-01                                   
 0 0.04390         osd.0                      up  1.00000          1.00000 
-3 0.04390     host hz-01-ops-tc-ceph-02                                   
 1 0.04390         osd.1                      up  1.00000          1.00000 
-4 0.04390     host hz-01-ops-tc-ceph-03                                   
 2 0.04390         osd.2                      up  1.00000          1.00000 
-5 0.04390     host hz-01-ops-tc-ceph-04                                   
 3 0.04390         osd.3                      up  1.00000          1.00000 
-6 0.04390     host hz01-dev-ops-wanl-01                                   
 4 0.04390         osd.4                      up  1.00000          1.00000 
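In the tree above, the new host hz01-dev-ops-wanl-01 now carries osd.4 with a nonzero weight. Two follow-up checks, sketched with standard ceph CLI subcommands:

```shell
# Confirm the new host and its OSD are in the CRUSH tree and up:
ceph osd tree | grep -A1 hz01-dev-ops-wanl-01

# Per-OSD usage; handy for watching data rebalance onto osd.4:
ceph osd df
```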
