OpenStack Installation and Configuration: Block Node Setup
For cloud instances, the machine itself can be destroyed and recreated at any time, but the data must not be lost, so persistent storage is required. OpenStack's official answer is the cinder module. Cinder consists of the cinder server components plus one or more cinder storage nodes. In this walkthrough the server components are installed on the controller node, and a separate cinder storage node is configured: our block node.
Basic configuration of the block node
[root@block ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 8
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 44
Model name: Westmere E56xx/L56xx/X56xx (Nehalem-C)
Stepping: 1
CPU MHz: 2400.084
BogoMIPS: 4800.16
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
NUMA node0 CPU(s): 0-7
[root@block ~]# free -h
total used free shared buff/cache available
Mem: 7.8G 86M 7.6G 8.3M 81M 7.6G
Swap: 0B 0B 0B
[root@block ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 400G 0 disk
├─vda1 252:1 0 500M 0 part /boot
└─vda2 252:2 0 399.5G 0 part
├─centos-root 253:0 0 50G 0 lvm /
├─centos-swap 253:1 0 3.9G 0 lvm
└─centos-data 253:2 0 345.6G 0 lvm /data
[root@block ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.10.20 netmask 255.255.255.0 broadcast 192.168.10.255
inet6 fe80::5054:ff:fe77:e86e prefixlen 64 scopeid 0x20<link>
ether 52:54:00:77:e8:6e txqueuelen 1000 (Ethernet)
RX packets 45039 bytes 3894559 (3.7 MiB)
RX errors 0 dropped 4641 overruns 0 frame 0
TX packets 51 bytes 3458 (3.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.0.20 netmask 255.255.0.0 broadcast 10.0.255.255
inet6 fe80::5054:ff:fef2:70e4 prefixlen 64 scopeid 0x20<link>
ether 52:54:00:f2:70:e4 txqueuelen 1000 (Ethernet)
RX packets 3418 bytes 293716 (286.8 KiB)
RX errors 0 dropped 354 overruns 0 frame 0
TX packets 12 bytes 788 (788.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 111.40.215.10 netmask 255.255.255.240 broadcast 111.40.215.15
inet6 fe80::5054:ff:fef2:704e prefixlen 64 scopeid 0x20<link>
ether 52:54:00:f2:70:4e txqueuelen 1000 (Ethernet)
RX packets 74 bytes 10274 (10.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 11 bytes 746 (746.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@block ~]# getenforce
Disabled
[root@block ~]# iptables -vnL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
[root@block ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.10 controller
192.168.10.20 block
192.168.10.31 compute1
192.168.10.32 compute2
[root@block ~]#
Configure the time synchronization service
[root@block ~]# yum install -y chrony
[root@block ~]# cp /etc/chrony.conf{,.bak}
[root@block ~]# vim /etc/chrony.conf
[root@block ~]# grep -v ^# /etc/chrony.conf | tr -s [[:space:]]
server controller iburst
stratumweight 0
driftfile /var/lib/chrony/drift
rtcsync
makestep 10 3
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
noclientlog
logchange 0.5
logdir /var/log/chrony
[root@block ~]# systemctl enable chronyd.service
[root@block ~]# systemctl start chronyd.service
[root@block ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller 3 6 37 47 -645ns[-1049us] +/- 136ms
[root@block ~]#
Install the OpenStack client
[root@block ~]# yum install -y python-openstackclient
Cinder configuration on the controller node
Prepare a database for cinder
[root@controller ~]# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 542
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
-> IDENTIFIED BY 'CINDER_DBPASS';
Query OK, 0 rows affected (0.01 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
-> IDENTIFIED BY 'CINDER_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> quit
Bye
[root@controller ~]#
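Before creating the keystone user, it is worth a quick sanity check that the grants above actually work; this one-liner (a suggested verification, not part of the original transcript) should list the cinder database when run on the controller:
mysql -u cinder -pCINDER_DBPASS -e 'SHOW DATABASES;'    # cinder should appear in the list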
As the admin user, create a cinder user in the default domain and grant it the admin role
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 3ad6ac5f704c494e9f16b9e04ef745fe |
| enabled | True |
| id | 44207784dbfe4d47be039fa1670b3105 |
| name | cinder |
+-----------+----------------------------------+
[root@controller ~]# openstack role add --project service --user cinder admin
[root@controller ~]#
Create the cinder and cinderv2 service entities
[root@controller ~]# openstack service create --name cinder \
> --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 34951aecd43c4019b5ccb8d3546968b6 |
| name | cinder |
| type | volume |
+-------------+----------------------------------+
[root@controller ~]# openstack service create --name cinderv2 \
> --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 353d5e3c7d844bdb9f2fa5b0bd054154 |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+
[root@controller ~]#
Create the Block Storage service API endpoints
v1 API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne \
> volume public http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | ee567d4643354a5c917246885b81fd3e |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 34951aecd43c4019b5ccb8d3546968b6 |
| service_name | cinder |
| service_type | volume |
| url | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> volume internal http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | e8170f7726954a9e8271168643a4e6f0 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 34951aecd43c4019b5ccb8d3546968b6 |
| service_name | cinder |
| service_type | volume |
| url | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> volume admin http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | ee91417901f64837b6df639492dbed47 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 34951aecd43c4019b5ccb8d3546968b6 |
| service_name | cinder |
| service_type | volume |
| url | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]#
v2 API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne \
> volumev2 public http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 53239152bd51439793d2ed8e4401edfb |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 353d5e3c7d844bdb9f2fa5b0bd054154 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | aa30ce1ab9244604a76a33365b865d75 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 353d5e3c7d844bdb9f2fa5b0bd054154 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | a83ac4dc6af2472989bc2a859d0104f9 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 353d5e3c7d844bdb9f2fa5b0bd054154 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]#
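To double-check the catalog before moving on, the endpoint list can be filtered by service (a suggested verification step; output omitted):
openstack endpoint list --service volume      # should show the three v1 endpoints
openstack endpoint list --service volumev2    # should show the three v2 endpoints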
Install cinder on the controller node and edit its configuration file
[root@controller ~]# yum -y install openstack-cinder
[root@controller ~]# cp /etc/cinder/cinder.conf{,.bak}
[root@controller ~]# vim /etc/cinder/cinder.conf
[root@controller ~]# grep -v ^# /etc/cinder/cinder.conf | tr -s [[:space:]]
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.10.10
[BACKEND]
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[COORDINATION]
[FC-ZONE-MANAGER]
[KEYMGR]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[ssl]
[root@controller ~]#
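Editing cinder.conf by hand works, but the same settings can also be applied non-interactively, which helps with the automation mentioned at the end of this post. A minimal sketch using the crudini utility (an assumption: crudini is installed, e.g. via yum install -y crudini):
crudini --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
crudini --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
crudini --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.10.10
crudini --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
crudini --set /etc/cinder/cinder.conf keystone_authtoken username cinder
crudini --set /etc/cinder/cinder.conf keystone_authtoken password CINDER_PASS
...and so on for the remaining keys shown above.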
Initialize the cinder database
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT".
2017-07-25 00:00:48.333 13486 WARNING py.warnings [-] /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:
241: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
2017-07-25 00:00:48.624 13486 INFO migrate.versioning.api [-] 0 -> 1...
2017-07-25 00:00:50.850 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:50.850 13486 INFO migrate.versioning.api [-] 1 -> 2...
2017-07-25 00:00:51.550 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:51.551 13486 INFO migrate.versioning.api [-] 2 -> 3...
2017-07-25 00:00:51.742 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:51.743 13486 INFO migrate.versioning.api [-] 3 -> 4...
2017-07-25 00:00:53.052 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:53.052 13486 INFO migrate.versioning.api [-] 4 -> 5...
2017-07-25 00:00:53.260 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:53.261 13486 INFO migrate.versioning.api [-] 5 -> 6...
2017-07-25 00:00:53.492 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:53.493 13486 INFO migrate.versioning.api [-] 6 -> 7...
2017-07-25 00:00:53.835 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:53.836 13486 INFO migrate.versioning.api [-] 7 -> 8...
2017-07-25 00:00:54.010 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:54.011 13486 INFO migrate.versioning.api [-] 8 -> 9...
2017-07-25 00:00:54.251 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:54.252 13486 INFO migrate.versioning.api [-] 9 -> 10...
2017-07-25 00:00:54.448 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:54.448 13486 INFO migrate.versioning.api [-] 10 -> 11...
2017-07-25 00:00:54.694 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:54.695 13486 INFO migrate.versioning.api [-] 11 -> 12...
2017-07-25 00:00:54.901 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:54.902 13486 INFO migrate.versioning.api [-] 12 -> 13...
2017-07-25 00:00:55.143 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:55.143 13486 INFO migrate.versioning.api [-] 13 -> 14...
2017-07-25 00:00:55.351 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:55.351 13486 INFO migrate.versioning.api [-] 14 -> 15...
2017-07-25 00:00:55.434 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:55.435 13486 INFO migrate.versioning.api [-] 15 -> 16...
2017-07-25 00:00:55.611 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:55.612 13486 INFO migrate.versioning.api [-] 16 -> 17...
2017-07-25 00:00:56.328 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:56.328 13486 INFO migrate.versioning.api [-] 17 -> 18...
2017-07-25 00:00:56.876 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:56.877 13486 INFO migrate.versioning.api [-] 18 -> 19...
2017-07-25 00:00:57.119 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:57.119 13486 INFO migrate.versioning.api [-] 19 -> 20...
2017-07-25 00:00:57.303 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:57.304 13486 INFO migrate.versioning.api [-] 20 -> 21...
2017-07-25 00:00:57.437 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:57.437 13486 INFO migrate.versioning.api [-] 21 -> 22...
2017-07-25 00:00:57.638 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:57.639 13486 INFO migrate.versioning.api [-] 22 -> 23...
2017-07-25 00:00:57.771 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:57.771 13486 INFO migrate.versioning.api [-] 23 -> 24...
2017-07-25 00:00:58.377 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:58.378 13486 INFO migrate.versioning.api [-] 24 -> 25...
2017-07-25 00:00:59.520 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:59.521 13486 INFO migrate.versioning.api [-] 25 -> 26...
2017-07-25 00:00:59.578 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:59.579 13486 INFO migrate.versioning.api [-] 26 -> 27...
2017-07-25 00:00:59.603 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:59.604 13486 INFO migrate.versioning.api [-] 27 -> 28...
2017-07-25 00:00:59.628 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:59.629 13486 INFO migrate.versioning.api [-] 28 -> 29...
2017-07-25 00:00:59.653 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:59.654 13486 INFO migrate.versioning.api [-] 29 -> 30...
2017-07-25 00:00:59.735 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:59.736 13486 INFO migrate.versioning.api [-] 30 -> 31...
2017-07-25 00:00:59.760 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:00:59.761 13486 INFO migrate.versioning.api [-] 31 -> 32...
2017-07-25 00:01:00.148 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:00.149 13486 INFO migrate.versioning.api [-] 32 -> 33...
2017-07-25 00:01:00.665 13486 WARNING py.warnings [-] /usr/lib64/python2.7/site-packages/sqlalchemy/sql/schema.py:2999:
SAWarning: Table 'encryption' specifies columns 'volume_type_id' as primary_key=True, not matching locally specified columns
'encryption_id'; setting the current primary key columns to 'encryption_id'. This warning may become an exception in a future release
", ".join("'%s'" % c.name for c in self.columns)
2017-07-25 00:01:00.946 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:00.947 13486 INFO migrate.versioning.api [-] 33 -> 34...
2017-07-25 00:01:01.189 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:01.190 13486 INFO migrate.versioning.api [-] 34 -> 35...
2017-07-25 00:01:01.423 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:01.423 13486 INFO migrate.versioning.api [-] 35 -> 36...
2017-07-25 00:01:01.670 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:01.671 13486 INFO migrate.versioning.api [-] 36 -> 37...
2017-07-25 00:01:01.821 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:01.822 13486 INFO migrate.versioning.api [-] 37 -> 38...
2017-07-25 00:01:02.030 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:02.030 13486 INFO migrate.versioning.api [-] 38 -> 39...
2017-07-25 00:01:02.256 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:02.257 13486 INFO migrate.versioning.api [-] 39 -> 40...
2017-07-25 00:01:03.408 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:03.408 13486 INFO migrate.versioning.api [-] 40 -> 41...
2017-07-25 00:01:03.646 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:03.647 13486 INFO migrate.versioning.api [-] 41 -> 42...
2017-07-25 00:01:03.671 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:03.672 13486 INFO migrate.versioning.api [-] 42 -> 43...
2017-07-25 00:01:03.696 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:03.697 13486 INFO migrate.versioning.api [-] 43 -> 44...
2017-07-25 00:01:03.721 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:03.721 13486 INFO migrate.versioning.api [-] 44 -> 45...
2017-07-25 00:01:03.746 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:03.747 13486 INFO migrate.versioning.api [-] 45 -> 46...
2017-07-25 00:01:03.774 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:03.774 13486 INFO migrate.versioning.api [-] 46 -> 47...
2017-07-25 00:01:03.838 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:03.839 13486 INFO migrate.versioning.api [-] 47 -> 48...
2017-07-25 00:01:04.064 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:04.065 13486 INFO migrate.versioning.api [-] 48 -> 49...
2017-07-25 00:01:04.482 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:04.483 13486 INFO migrate.versioning.api [-] 49 -> 50...
2017-07-25 00:01:04.864 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:04.864 13486 INFO migrate.versioning.api [-] 50 -> 51...
2017-07-25 00:01:05.047 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:05.047 13486 INFO migrate.versioning.api [-] 51 -> 52...
2017-07-25 00:01:05.307 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:05.307 13486 INFO migrate.versioning.api [-] 52 -> 53...
2017-07-25 00:01:06.032 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:06.032 13486 INFO migrate.versioning.api [-] 53 -> 54...
2017-07-25 00:01:06.265 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:06.266 13486 INFO migrate.versioning.api [-] 54 -> 55...
2017-07-25 00:01:06.600 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:06.600 13486 INFO migrate.versioning.api [-] 55 -> 56...
2017-07-25 00:01:06.624 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:06.625 13486 INFO migrate.versioning.api [-] 56 -> 57...
2017-07-25 00:01:06.649 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:06.650 13486 INFO migrate.versioning.api [-] 57 -> 58...
2017-07-25 00:01:06.674 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:06.675 13486 INFO migrate.versioning.api [-] 58 -> 59...
2017-07-25 00:01:06.699 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:06.700 13486 INFO migrate.versioning.api [-] 59 -> 60...
2017-07-25 00:01:06.725 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:06.726 13486 INFO migrate.versioning.api [-] 60 -> 61...
2017-07-25 00:01:07.140 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:07.140 13486 INFO migrate.versioning.api [-] 61 -> 62...
2017-07-25 00:01:07.474 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:07.475 13486 INFO migrate.versioning.api [-] 62 -> 63...
2017-07-25 00:01:07.506 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:07.506 13486 INFO migrate.versioning.api [-] 63 -> 64...
2017-07-25 00:01:07.717 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:07.717 13486 INFO migrate.versioning.api [-] 64 -> 65...
2017-07-25 00:01:08.332 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:08.333 13486 INFO migrate.versioning.api [-] 65 -> 66...
2017-07-25 00:01:09.259 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:09.260 13486 INFO migrate.versioning.api [-] 66 -> 67...
2017-07-25 00:01:09.309 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:09.309 13486 INFO migrate.versioning.api [-] 67 -> 68...
2017-07-25 00:01:09.339 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:09.340 13486 INFO migrate.versioning.api [-] 68 -> 69...
2017-07-25 00:01:09.357 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:09.357 13486 INFO migrate.versioning.api [-] 69 -> 70...
2017-07-25 00:01:09.382 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:09.382 13486 INFO migrate.versioning.api [-] 70 -> 71...
2017-07-25 00:01:09.408 13486 INFO migrate.versioning.api [-] done
2017-07-25 00:01:09.408 13486 INFO migrate.versioning.api [-] 71 -> 72...
2017-07-25 00:01:09.432 13486 INFO migrate.versioning.api [-] done
[root@controller ~]#
Update the compute node configuration so that it can use the cinder module
Edit /etc/nova/nova.conf and add the following to it:
[root@compute1 ~]# vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
Restart the nova compute service on the compute node
[root@compute1 ~]# systemctl restart openstack-nova-compute
Restart the nova API service on the controller
[root@controller ~]# systemctl restart openstack-nova-api.service
Enable and start the cinder services
[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.
[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]# ss -tnl    # port 8776 appears once the cinder services are running
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:8776 *:*
LISTEN 0 128 *:25672 *:*
LISTEN 0 128 192.168.10.10:3306 *:*
LISTEN 0 128 127.0.0.1:11211 *:*
LISTEN 0 128 *:9292 *:*
LISTEN 0 128 *:4369 *:*
LISTEN 0 128 *:9696 *:*
LISTEN 0 100 *:6080 *:*
LISTEN 0 128 *:8774 *:*
LISTEN 0 128 *:22022 *:*
LISTEN 0 128 *:8775 *:*
LISTEN 0 128 *:9191 *:*
LISTEN 0 128 :::5000 :::*
LISTEN 0 128 :::5672 :::*
LISTEN 0 128 ::1:11211 :::*
LISTEN 0 128 :::80 :::*
LISTEN 0 128 :::35357 :::*
LISTEN 0 128 :::22022 :::*
[root@controller ~]#
Cinder storage node configuration
Operations on the KVM host
Add a disk to the block storage node: shut down the block node first, then go to the KVM host and attach a new virtual disk.
[root@kvm_test ~]# qemu-img create -q -f qcow2 /kvm/images/block_lvm.qcow2 100G
[root@kvm_test ~]# ll -h /kvm/images/block_lvm.qcow2
-rw-r--r-- 1 root root 194K Jul 25 00:49 /kvm/images/block_lvm.qcow2
[root@kvm_test ~]# virsh edit block
Add the following below the existing <disk> configuration block:
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/kvm/images/block_lvm.qcow2'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/>
</disk>
After saving and exiting, start the block VM.
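As an alternative to editing the domain XML by hand, virsh can attach the disk in one step; something like the following (a sketch, using the same paths and device names as above) should produce an equivalent <disk> entry:
virsh attach-disk block /kvm/images/block_lvm.qcow2 vdb --driver qemu --subdriver qcow2 --targetbus virtio --config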
Continue on the block node
[root@block ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 400G 0 disk
├─vda1 252:1 0 500M 0 part /boot
└─vda2 252:2 0 399.5G 0 part
├─centos-root 253:0 0 50G 0 lvm /
├─centos-swap 253:1 0 3.9G 0 lvm
└─centos-data 253:2 0 345.6G 0 lvm /data
vdb 252:16 0 100G 0 disk
[root@block ~]# pvcreate /dev/vdb
Physical volume "/dev/vdb" successfully created
[root@block ~]# vgcreate cinder-volumes /dev/vdb
Volume group "cinder-volumes" successfully created
[root@block ~]# vim /etc/lvm/lvm.conf
In the devices section, add the following filter so that LVM scans only the system disk and the cinder disk (each "a" entry accepts a device, and "r/.*/" rejects everything else, including the logical volumes cinder will itself create on vdb):
filter = [ "a/vda/", "a/vdb/", "r/.*/"]
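After saving lvm.conf, a quick check (suggested, not in the original transcript) confirms that LVM still sees both the system disk and the new cinder disk:
pvs    # should list /dev/vda2 and /dev/vdb
vgs    # should list the centos and cinder-volumes volume groups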
[root@block ~]# yum install -y openstack-cinder targetcli python-keystone
[root@block ~]# cp /etc/cinder/cinder.conf{,.bak}
[root@block ~]# vim /etc/cinder/cinder.conf
[root@block ~]# grep -v ^# /etc/cinder/cinder.conf | tr -s [[:space:]]
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.10.20
enabled_backends = lvm
glance_api_servers = http://controller:9292
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[BACKEND]
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[COORDINATION]
[FC-ZONE-MANAGER]
[KEYMGR]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[ssl]
[root@block ~]# systemctl start openstack-cinder-volume.service target.service
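Note that the transcript only starts the services; to have them survive a reboot you would also enable them, just as was done on the controller:
systemctl enable openstack-cinder-volume.service target.service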
Back on the controller node
[root@controller ~]# . admin-openrc
[root@controller ~]# cinder-service-list
-bash: cinder-service-list: command not found
[root@controller ~]# cinder service-list
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled | up | 2017-07-25T16:17:10.000000 | - |
| cinder-volume | block@lvm | nova | enabled | up | 2017-07-25T16:17:17.000000 | - |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
[root@controller ~]#
Create and attach a volume
[root@controller ~]# . demo-openrc
[root@controller ~]# openstack volume create --size 5 volume1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2017-07-26T00:06:22.549611 |
| description | None |
| encrypted | False |
| id | 826e2ea7-4557-4dd5-b540-f213fbedcda4 |
| multiattach | False |
| name | volume1 |
| properties | |
| replication_status | disabled |
| size | 5 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | None |
| updated_at | None |
| user_id | deb3adea97e34fee9161a47940762a53 |
+---------------------+--------------------------------------+
[root@controller ~]# openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 826e2ea7-4557-4dd5-b540-f213fbedcda4 | volume1 | available | 5 | |
+--------------------------------------+--------------+-----------+------+-------------+
[root@controller ~]# openstack volume show 826e2ea7-4557-4dd5-b540-f213fbedcda4
+------------------------------+--------------------------------------+
| Field | Value |
+------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2017-07-26T00:06:22.000000 |
| description | None |
| encrypted | False |
| id | 826e2ea7-4557-4dd5-b540-f213fbedcda4 |
| multiattach | False |
| name | volume1 |
| os-vol-tenant-attr:tenant_id | 0200f6457da84abd9055a5c192386747 |
| properties | |
| replication_status | disabled |
| size | 5 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| type | None |
| updated_at | 2017-07-26T00:06:23.000000 |
| user_id | deb3adea97e34fee9161a47940762a53 |
+------------------------------+--------------------------------------+
[root@controller ~]# openstack server add volume provider-instance volume1
[root@controller ~]# openstack server start provider-instance
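After the attach, the volume status should change from available to in-use; a quick way to confirm (output omitted):
openstack volume list    # Status should now read in-use, with provider-instance in the Attached to column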
What changed on the block node
[root@block ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 400G 0 disk
├─vda1 252:1 0 500M 0 part /boot
└─vda2 252:2 0 399.5G 0 part
├─centos-root 253:0 0 50G 0 lvm /
├─centos-swap 253:1 0 3.9G 0 lvm
└─centos-data 253:2 0 345.6G 0 lvm /data
vdb 252:16 0 100G 0 disk
└─cinder--volumes-volume--826e2ea7--4557--4dd5--b540--f213fbedcda4 253:3 0 5G 0 lvm
[root@block ~]#
Mounting the data volume inside the instance
Log in to the instance over SSH and check whether the data volume is visible
[root@controller ~]# ssh -p 22 cirros@<instance-ip>    # CirrOS default user assumed; substitute your instance address
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 1G 0 disk
`-vda1 253:1 0 1011.9M 0 part /
vdb 253:16 0 5G 0 disk
$ sudo fdisk /dev/vdb    # partition the disk
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x344bf1f3.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
Using default value 1
First sector (2048-10485759, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759):
Using default value 10485759
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
$ sudo mkfs.ext4 -L data /dev/vdb1
mke2fs 1.42.2 (27-Mar-2012)
Filesystem label=data
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1310464 blocks
65523 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
$ sudo mkdir /data    # create the mount point
$ sudo vi /etc/fstab    # add an entry so the volume remounts automatically after reboot
$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# <file system> <mount pt> <type> <options> <dump> <pass>
/dev/root / auto rw,noauto 0 1
proc /proc proc defaults 0 0
devpts /dev/pts devpts defaults,gid=5,mode=620 0 0
tmpfs /dev/shm tmpfs mode=0777 0 0
sysfs /sys sysfs defaults 0 0
tmpfs /run tmpfs rw,nosuid,relatime,size=200k,mode=755 0 0
/dev/vdb1 /data ext4 defaults 0 0
$ sudo mount -a
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 1G 0 disk
`-vda1 253:1 0 1011.9M 0 part /
vdb 253:16 0 5G 0 disk
`-vdb1 253:17 0 5G 0 part /data
$ ls /data/
lost+found
$
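One caveat on the fstab entry above: the disk was formatted with -L data, so referencing it by label is a little more robust than /dev/vdb1, since virtio device names can shift if disks are later added or removed. A suggested alternative entry:
LABEL=data /data ext4 defaults 0 0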
Block nodes are also numerous in real production environments, so once one node has been configured successfully, the steps should be captured in an automation script for large-scale deployment; a sketch follows.
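As a rough illustration, a bootstrap script for a fresh storage node might look like the following (a sketch only, assuming the vdb disk is already attached, hostnames resolve as in /etc/hosts, and the passwords match those used above):

#!/bin/bash
# Hypothetical bootstrap for a new cinder storage node (sketch; adjust to your environment).
set -e

# Derive this node's management IP from eth0
MY_IP=$(ip -4 addr show eth0 | awk '/inet /{sub(/\/.*/,"",$2); print $2}')

yum install -y openstack-cinder targetcli python-keystone

# LVM backing store for cinder (assumes /dev/vdb is the dedicated disk)
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb
# Remember to also set the LVM filter in /etc/lvm/lvm.conf as shown earlier.

# Write the cinder configuration in one go
cat > /etc/cinder/cinder.conf <<EOF
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = ${MY_IP}
enabled_backends = lvm
glance_api_servers = http://controller:9292

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
EOF

systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service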
This post originally appeared on the "爱情防火墙" blog; please retain the source: http://183530300.blog.51cto.com/894387/1957815