[Ceph] Issue: mgr cannot start the dashboard


Editor's note: this article, compiled by cha138.com, covers the problem of Ceph mgr failing to start the dashboard; hopefully it is of some reference value.

Environment:
OS: CentOS 7.5
Ceph: 12.2.10

Start command:
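The original command did not survive scraping; on Luminous the dashboard is normally brought up by (re)starting the mgr daemon and enabling the module, so a plausible sketch is (the mgr instance name here is an assumption):

```shell
# Restart the mgr daemon, then enable the dashboard module.
systemctl restart ceph-mgr@node232   # "node232" is an example mgr name
ceph mgr module enable dashboard
```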

Error message:

Analysis:
I had disabled IPv6 on CentOS 7, which caused the error: by default mgr tries to listen on both IPv4 and IPv6. The fix is to start mgr bound to IPv4 only.

Solution:
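The concrete commands were also lost in scraping; given the analysis above, the usual fix on Luminous is to bind the dashboard to an explicit IPv4 address with config-key (the same setting described at the end of this post) and restart the mgr. A sketch, where $IP is the mgr host's IPv4 address and the mgr name is an example:

```shell
# Bind the dashboard to an IPv4 address instead of the default "::",
# which fails to bind when IPv6 is disabled.
ceph config-key set mgr/dashboard/server_addr $IP
# Restart the mgr so the dashboard rebinds.
systemctl restart ceph-mgr@node232
```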

Reference:
https://ivirt-it.ru/ceph-mgr-dashboard-zabbix-restful-status-balancer/

Ceph deployment manual

Deploying Luminous Ceph 12.2.0 on CentOS 7.2

This walkthrough installs and deploys Luminous Ceph 12.2.0 on CentOS 7.2. Since the Luminous release uses bluestore as the default backend store and adds the mgr daemon, ceph-deploy 1.5.38 is used to deploy the cluster and create the MONs, OSDs, and MGR.

Environment

Each host:

  • CentOS Linux release 7.2.1511 (Core), minimal install
  • Two 100 GB disks for OSDs


[root@node232 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@node232 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0              11:0    1 1024M  0 rom
xvda            202:0    0   10G  0 disk
├─xvda1         202:1    0  500M  0 part /boot
└─xvda2         202:2    0  9.5G  0 part
  ├─centos-root 253:0    0  8.5G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]
xvdb            202:16   0  100G  0 disk
xvdc            202:32   0  100G  0 disk

Host node232 acts as the admin node and runs ceph-deploy. The three hosts are configured as follows:

Host     IP               Components
node232  192.168.217.232  ceph-deploy, mon, osd, mgr, ntp
node233  192.168.217.233  mon, osd, ntpdate
node234  192.168.217.234  mon, osd, ntpdate

Set up passwordless SSH login
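The details are missing from the scraped page; a minimal sketch, assuming root logins from node232 to all three nodes:

```shell
# Generate a key pair (no passphrase) and push the public key to each node.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for host in node232 node233 node234; do
  ssh-copy-id root@$host
done
```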

 

Yum repository configuration

 

Download the Aliyun base repo:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

 

Download the Aliyun EPEL repo:

wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

 

Strip the aliyuncs mirror entries; the release version can also be changed to 7.3.1611, since the yum mirrors for the current CentOS 7.2.1511 release have already been emptied:

[root@node232 ~]# sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
[root@node232 ~]# sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
#[root@node232 ~]# sed -i 's/$releasever/7.3.1611/g' /etc/yum.repos.d/CentOS-Base.repo

 

Configure the Ceph repo (e.g. in /etc/yum.repos.d/ceph.repo):

[ceph]
name=ceph
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/noarch/
gpgcheck=0

 

yum makecache

 

http://download.ceph.com/ is the official Ceph yum repository.

Install Ceph

Download the Ceph RPMs locally:

[root@node232 ~]# yum install --downloadonly --downloaddir=/tmp/ceph ceph

Install Ceph on every host:

[root@node232 ~]# yum localinstall -C -y --disablerepo=* /tmp/ceph/*.rpm

After installation, check the Ceph version:

[root@node232 ~]# ceph -v
ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)

 

 

Deploy Ceph

Run the following on the admin node, node232.

Install ceph-deploy

Download ceph-deploy-1.5.38:

[root@node232 ~]# yum install --downloadonly --downloaddir=/tmp/ceph-deploy/ ceph-deploy

yum localinstall -C -y --disablerepo=* /tmp/ceph-deploy/*.rpm

 

After installation, check the ceph-deploy version:

[root@node232 ~]# ceph-deploy --version
1.5.38

 

 

Deploy the cluster

Create a working directory and initialize the cluster, which registers the monitor nodes:

[root@node232 ~]# mkdir ceph-cluster
[root@node232 ~]# cd ceph-cluster
[root@node232 ceph-cluster]# ceph-deploy new node232 node233 node234

 

 

Deploy the mons

Initialize the monitor nodes:

[root@node232 ceph-cluster]# ceph-deploy mon create-initial

 

At this point ceph -s fails, because /etc/ceph/ceph.client.admin.keyring is missing:

[root@node232 ceph-cluster]# ceph -s
2017-09-13 12:12:18.772214 7f3d3fc3f700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2017-09-13 12:12:18.772260 7f3d3fc3f700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
2017-09-13 12:12:18.772263 7f3d3fc3f700  0 librados: client.admin initialization error (2) No such file or directory
[errno 2] error connecting to the cluster

Either copy the ceph.client.admin.keyring file from the ceph-cluster directory to /etc/ceph/ by hand, or run ceph-deploy admin to distribute it automatically:

[root@node232 ceph-cluster]# ceph-deploy admin node232 node233 node234
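The manual-copy alternative can be sketched as follows, using the working directory from this walkthrough:

```shell
# Copy the admin keyring generated by ceph-deploy into /etc/ceph/.
cp ~/ceph-cluster/ceph.client.admin.keyring /etc/ceph/
# Keep the keyring readable by root only (a sensible precaution).
chmod 600 /etc/ceph/ceph.client.admin.keyring
```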

 

Check the cluster:

[root@node232 ceph-cluster]# ceph -s

  cluster:

    id:     988e29ea-8b2c-4fa7-a808-e199f2e6a334

    health: HEALTH_OK

  services:

    mon: 3 daemons, quorum node232,node233,node234

    mgr: no daemons active

    osd: 0 osds: 0 up, 0 in

  data:

    pools:   0 pools, 0 pgs

    objects: 0 objects, 0 bytes

    usage:   0 kB used, 0 kB / 0 kB avail

    pgs:

Create the OSDs

[root@node232 ceph-cluster]# ceph-deploy --overwrite-conf osd prepare node232:/dev/xvdb node232:/dev/xvdc node233:/dev/xvdb node233:/dev/xvdc node234:/dev/xvdb node234:/dev/xvdc --zap-disk

Activate the OSDs:

[root@node232 ceph-cluster]# ceph-deploy --overwrite-conf osd activate node232:/dev/xvdb1 node232:/dev/xvdc1 node233:/dev/xvdb1 node233:/dev/xvdc1 node234:/dev/xvdb1 node234:/dev/xvdc1

 

Check the cluster again:

[root@node232 ceph-cluster]# ceph -s

  cluster:

    id:     988e29ea-8b2c-4fa7-a808-e199f2e6a334

    health: HEALTH_WARN

            no active mgr

  services:

    mon: 3 daemons, quorum node232,node233,node234

    mgr: no daemons active

    osd: 6 osds: 6 up, 6 in

  data:

    pools:   0 pools, 0 pgs

    objects: 0 objects, 0 bytes

    usage:   0 kB used, 0 kB / 0 kB avail

    pgs:

 

Configure the mgr

Create an mgr daemon named foo on node232:

[root@node232 ceph-cluster]# ceph-deploy mgr create node232:foo

 

Enable the dashboard:

[root@node232 ceph-cluster]# ceph mgr module enable dashboard

 

The dashboard is now reachable at http://192.168.217.232:7000.

 

The dashboard listens on port 7000 by default; run ceph config-key set mgr/dashboard/server_port $PORT to change it.
You can also run ceph config-key set mgr/dashboard/server_addr $IP to set the address the dashboard binds to.
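For example, to serve the dashboard on 192.168.217.232:8080 (port 8080 is an arbitrary choice), set both keys and restart the mgr created above:

```shell
ceph config-key set mgr/dashboard/server_addr 192.168.217.232
ceph config-key set mgr/dashboard/server_port 8080
# Restart the mgr daemon created earlier (named foo).
systemctl restart ceph-mgr@foo
```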







