Integrating Ceph with an OpenStack Environment
Posted by zzzynx
I. Use RBD to provide storage for the following data: Glance images, Nova VM disks, and Cinder volumes.
II. Implementation steps:
(1) The client also needs a cent user:
useradd cent && echo "123" | passwd --stdin cent
echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph
(2) On the OpenStack nodes that will use Ceph (e.g. compute-node and storage-node), install the downloaded packages:
yum localinstall ./* -y
Or install the client packages on every node that needs to access the Ceph cluster:
yum install python-rbd
yum install ceph-common
(3) On the deploy node, install Ceph onto the OpenStack nodes:
ceph-deploy install controller
ceph-deploy admin controller
(4) Run on the client:
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
5) Create storage pools named images, vms, and volumes:
# ceph osd pool create images 128
pool 'images' created
# ceph osd pool create vms 128
pool 'vms' created
# ceph osd pool create volumes 128
pool 'volumes' created
6) View the pool list:
ceph osd lspools
0 rbd,1 images,2 vms,3 volumes,
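If the cluster runs Ceph Luminous (12.x) or newer, each new pool should also be tagged with the rbd application, otherwise the cluster reports a health warning about pools without an application enabled. A minimal sketch, assuming the three pools created above (skip this on pre-Luminous releases such as the one in this post):
ceph osd pool application enable images rbd
ceph osd pool application enable vms rbd
ceph osd pool application enable volumes rbd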
7) Create the glance and cinder users. Since this is an all-in-one environment, it is enough to create these two users on the deploy node.
useradd glance
useradd cinder
8) Grant these two users access to the storage pools below:
# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
        key = AQCZggNd3TrTDBAAFgWrEAXhXt7xv4xcnn0eWA==
# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[client.cinder]
        key = AQCtggNdHrFuHhAAsI/rt4cVujt8QEYZOODRFw==
9) Export the Ceph keyrings for these two users and send them to the nodes that need them. Here we export them on the controller node and then send them to the compute and storage nodes:
# ceph auth get-or-create client.glance > /etc/ceph/ceph.client.glance.keyring
# ceph auth get-or-create client.cinder > /etc/ceph/ceph.client.cinder.keyring
# scp ceph.client.glance.keyring ceph.client.cinder.keyring compute:/etc/ceph/
ceph.client.glance.keyring                 100%   64    15.4KB/s   00:00
ceph.client.cinder.keyring                 100%   64    31.4KB/s   00:00
# scp ceph.client.glance.keyring ceph.client.cinder.keyring storage:/etc/ceph/
ceph.client.glance.keyring                 100%   64    14.1KB/s   00:00
ceph.client.cinder.keyring                 100%   64    28.0KB/s   00:00
10) Change the owner and group of the keyring files below, otherwise the services have no permission to read them.
# chown glance:glance /etc/ceph/ceph.client.glance.keyring
# chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
11) Update libvirt permissions; this only needs to be done on the nova-compute node.
Generate a UUID and write it into /etc/ceph/uuid:
# uuidgen
3e3314c9-bfb0-439e-8764-61896c621b7e
# vim uuid
3e3314c9-bfb0-439e-8764-61896c621b7e
Create a secret file under /etc/ceph with the following content:
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>3e3314c9-bfb0-439e-8764-61896c621b7e</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
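libvirt also has to be told about this secret before a value can be attached to it; the original post omits this step, so here is a minimal sketch using the secret.xml created above:
virsh secret-define --file secret.xml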
Send the secret file to the other compute nodes, then run the following:
# ceph auth get-key client.cinder > ./client.cinder.key
# ls
ceph.client.admin.keyring   ceph.conf          secret.xml
ceph.client.cinder.keyring  client.cinder.key  tmpJKjseK
ceph.client.glance.keyring  rbdmap             uuid
# virsh secret-set-value --secret 3e3314c9-bfb0-439e-8764-61896c621b7e --base64 $(cat ./client.cinder.key)
Secret value set
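Every other compute node that received secret.xml needs the same secret defined and populated. A hedged sketch of the sequence on such a node, assuming secret.xml and client.cinder.key were both copied into /etc/ceph:
virsh secret-define --file /etc/ceph/secret.xml
virsh secret-set-value --secret 3e3314c9-bfb0-439e-8764-61896c621b7e --base64 $(cat /etc/ceph/client.cinder.key)
virsh secret-list    # the uuid should now appear in the list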
15) In the Horizon dashboard, delete the existing images and instances.
16) On the controller node, modify the glance-api.conf configuration file, then restart the service:
[DEFAULT]
default_store = rbd

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
# systemctl restart openstack-glance-api
17) Recreate the image and check it:
# openstack image create "cirros" --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --public
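To confirm that Glance wrote the new image into Ceph rather than onto local disk, the images pool can be inspected from any node holding a suitable keyring; a quick check (standard rbd/openstack CLI commands, not from the original post):
rbd -p images ls          # the new image's ID should show up as an RBD image
openstack image list      # the image should be listed as active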
18) On the storage node, modify the Cinder configuration file, then restart the related services on the controller node and on the storage node.
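The original post does not show the cinder.conf changes themselves. A hedged sketch of the usual settings for the Cinder RBD driver, reusing the pool, user, and uuid created earlier (the backend name "ceph" is an assumption):
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 3e3314c9-bfb0-439e-8764-61896c621b7e
After editing, restarting openstack-cinder-api and openstack-cinder-scheduler on the controller and openstack-cinder-volume on the storage node matches the restart instruction above.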
19) Create a volume in the Horizon dashboard to verify.
20) On the nova-compute node, modify the Nova configuration file, then restart the Nova services on the controller node and the related services on the storage node.
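The nova.conf changes are likewise not shown in the original. A hedged sketch of the typical [libvirt] settings for booting instances from the vms pool, reusing the cinder user and the libvirt secret uuid from step 11:
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 3e3314c9-bfb0-439e-8764-61896c621b7e
A restart of openstack-nova-compute on the compute node (plus the Nova services on the controller, as the step says) picks up the change.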
21) Create a virtual machine in the Horizon dashboard to verify.