OpenStack Installation Notes
After spending a few days learning OpenStack, I am writing up what I have learned so far. If anything here is wrong, please point it out so we can improve together.
OpenStack is an open-source cloud platform built from several cooperating components. Here is a brief summary of what each component does.
rabbitmq: the OpenStack message queue
keystone: the OpenStack identity service; every component authenticates through Keystone
glance: OpenStack image management
nova: creates and manages cloud instances
neutron: handles the OpenStack networking layer
cinder: block storage
This OpenStack deployment uses two hosts:
Controller (management) node: Marvin-node1, IP: 192.168.203.21/24
Compute node: Marvin-node2, IP: 192.168.203.22/24
OS: CentOS 7.2
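It also helps if the two hostnames resolve on both nodes. The original steps do not show this, so the following /etc/hosts entries are only an assumed sketch (run the same thing on Marvin-node2):
[root@Marvin-node1 ~]# cat >> /etc/hosts << 'EOF'
192.168.203.21  Marvin-node1
192.168.203.22  Marvin-node2
EOF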
Note that OpenStack always creates the virtual machines on the compute node.
Now for the installation.
I. Prepare both hosts: install the OpenStack package repository, the OpenStack client, and so on. The following commands must be run on both hosts.
Marvin-node1:
[root@Marvin-node1 ~]# yum install centos-release-openstack-newton -y   ## install the OpenStack package repository
[root@Marvin-node1 ~]# yum install python-openstackclient -y            ## install the OpenStack client
[root@Marvin-node1 ~]# yum install openstack-selinux -y                 ## SELinux policy management for OpenStack
Marvin-node2:
[root@Marvin-node2 ~]# yum install centos-release-openstack-newton -y
[root@Marvin-node2 ~]# yum install python-openstackclient -y
[root@Marvin-node2 ~]# yum install openstack-selinux -y
II. Install the required components on the controller node (Marvin-node1)
a. Install and configure the database
a.1 Install the database
[root@Marvin-node1 ~]# yum install mariadb mariadb-server python2-PyMySQL -y
a.2 Create the database config file openstack.cnf under /etc/my.cnf.d
[root@Marvin-node1 ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.203.21
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
a.3 Enable MariaDB at boot, start it, confirm that it is listening, and secure the installation
[root@Marvin-node1 ~]# systemctl enable mariadb && systemctl start mariadb   ## start the database and enable it at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
[root@Marvin-node1 ~]# lsof -i:3306   ## check that the database is listening
COMMAND  PID  USER  FD   TYPE DEVICE SIZE/OFF NODE NAME
mysqld  4099 mysql  17u  IPv4  21843      0t0  TCP Marvin-node1:mysql (LISTEN)
[root@Marvin-node1 ~]# mysql_secure_installation   ## secure the installation; set the root password to 123456 and accept the defaults for everything else
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!
Remove anonymous users? [Y/n]
 ... Success!
Disallow root login remotely? [Y/n]
 ... Success!
Remove test database and access to it? [Y/n]
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!
Reload privilege tables now? [Y/n]
 ... Success!
Cleaning up...
All done!  If you've completed all of the above steps, your MariaDB installation should now be secure.
Thanks for using MariaDB!
[root@Marvin-node1 ~]# mysql -uroot -p   ## after securing the database, log in to confirm it works
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 10
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> exit
Bye
b. Install and configure the message queue, RabbitMQ
b.1 Install RabbitMQ, start it, and enable it at boot
[root@Marvin-node1 ~]# yum install rabbitmq-server -y
[root@Marvin-node1 ~]# systemctl enable rabbitmq-server && systemctl start rabbitmq-server
Created symlink from /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service to /usr/lib/systemd/system/rabbitmq-server.service.
b.2 Create the openstack user, grant it permissions, and enable the RabbitMQ web management plugin
[root@Marvin-node1 ~]# rabbitmqctl add_user openstack openstack   ## create the openstack user, password also openstack
Creating user "openstack" ...
[root@Marvin-node1 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"   ## grant the openstack user full permissions
Setting permissions for user "openstack" in vhost "/" ...
[root@Marvin-node1 ~]# rabbitmq-plugins enable rabbitmq_management   ## enable the web management plugin; rabbitmq-plugins list shows all available plugins
The following plugins have been enabled:
  mochiweb
  webmachine
  rabbitmq_web_dispatch
  amqp_client
  rabbitmq_management_agent
  rabbitmq_management
Applying plugin configuration to rabbit@Marvin-node1... started 6 plugins.
[root@Marvin-node1 ~]# lsof -i:15672   ## check that the web plugin is up; 15672 is the management web port
COMMAND    PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
beam.smp  4285 rabbitmq   54u  IPv4  30328      0t0  TCP *:15672 (LISTEN)
b.3 Browse to http://192.168.203.21:15672 and try logging in; the default username and password are both guest.
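The same check can also be done from the command line through RabbitMQ's management HTTP API; a minimal sketch, run locally on the node and assuming the default guest account is still enabled for loopback access:
[root@Marvin-node1 ~]# curl -s -u guest:guest http://127.0.0.1:15672/api/overview   ## returns a JSON summary if the management plugin is reachable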
RabbitMQ is now set up.
c. Install and configure the identity service, Keystone
Before installing Keystone, create its database. While we are at it, we create all of the databases the later components will need, granting both local and remote access. Each database user's password is the same as its username, which is easy to remember and hard to mix up (a beginner's convenience; experts can ignore this, and it is not recommended in production).
The databases to create are:
keystone, glance, nova, nova_api, neutron, cinder
Create them:
[root@Marvin-node1 ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database keystone;
MariaDB [(none)]> grant all on keystone.* to 'keystone'@'localhost' identified by 'keystone';
MariaDB [(none)]> grant all on keystone.* to 'keystone'@'%' identified by 'keystone';
MariaDB [(none)]> create database glance;
MariaDB [(none)]> grant all on glance.* to 'glance'@'localhost' identified by 'glance';
MariaDB [(none)]> grant all on glance.* to 'glance'@'%' identified by 'glance';
MariaDB [(none)]> create database nova;
MariaDB [(none)]> grant all on nova.* to 'nova'@'localhost' identified by 'nova';
MariaDB [(none)]> grant all on nova.* to 'nova'@'%' identified by 'nova';
MariaDB [(none)]> create database nova_api;
MariaDB [(none)]> grant all on nova_api.* to 'nova'@'localhost' identified by 'nova';
MariaDB [(none)]> grant all on nova_api.* to 'nova'@'%' identified by 'nova';
MariaDB [(none)]> create database neutron;
MariaDB [(none)]> grant all on neutron.* to 'neutron'@'localhost' identified by 'neutron';
MariaDB [(none)]> grant all on neutron.* to 'neutron'@'%' identified by 'neutron';
MariaDB [(none)]> create database cinder;
MariaDB [(none)]> grant all on cinder.* to 'cinder'@'localhost' identified by 'cinder';
MariaDB [(none)]> grant all on cinder.* to 'cinder'@'%' identified by 'cinder';
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| cinder             |
| glance             |
| information_schema |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| nova_api           |
| performance_schema |
+--------------------+
9 rows in set (0.00 sec)
MariaDB [(none)]> exit
Bye
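For reference, the same databases and grants can also be created non-interactively in a small loop. This is only a sketch under the conventions used above (username = password, MariaDB root password 123456), not part of the original steps:
[root@Marvin-node1 ~]# for svc in keystone glance neutron cinder; do
>   mysql -uroot -p123456 -e "create database if not exists $svc;
>     grant all on $svc.* to '$svc'@'localhost' identified by '$svc';
>     grant all on $svc.* to '$svc'@'%' identified by '$svc';"
> done
[root@Marvin-node1 ~]# mysql -uroot -p123456 -e "create database if not exists nova; create database if not exists nova_api;
>   grant all on nova.* to 'nova'@'localhost' identified by 'nova'; grant all on nova.* to 'nova'@'%' identified by 'nova';
>   grant all on nova_api.* to 'nova'@'localhost' identified by 'nova'; grant all on nova_api.* to 'nova'@'%' identified by 'nova';"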
c.1 Install Keystone
[root@Marvin-node1 ~]# yum install openstack-keystone httpd mod_wsgi -y
c.2 Edit the configuration file and set the database connection; Keystone's config file is /etc/keystone/keystone.conf
[root@Marvin-node1 ~]# cd /etc/keystone/
[root@Marvin-node1 keystone]# vim keystone.conf   ## in the [database] section, add the database connection
613 [database]
640 connection = mysql+pymysql://keystone:keystone@192.168.203.21/keystone
c.3 With the database connection in place, populate the Keystone database, then check the result in MariaDB
[root@Marvin-node1 keystone]# su -s /bin/sh -c "keystone-manage db_sync" keystone   ## populate the keystone database
[root@Marvin-node1 keystone]# mysql -h 192.168.203.21 -ukeystone -pkeystone -e "use keystone;show tables;"   ## verify
+------------------------+
| Tables_in_keystone     |
+------------------------+
| access_token           |
| assignment             |
| config_register        |
| consumer               |
| credential             |
| endpoint               |
| endpoint_group         |
| federated_user         |
| federation_protocol    |
| group                  |
| id_mapping             |
| identity_provider      |
| idp_remote_ids         |
| implied_role           |
| local_user             |
| mapping                |
| migrate_version        |
| nonlocal_user          |
| password               |
| policy                 |
| policy_association     |
| project                |
| project_endpoint       |
| project_endpoint_group |
| region                 |
| request_token          |
| revocation_event       |
| role                   |
| sensitive_config       |
| service                |
| service_provider       |
| token                  |
| trust                  |
| trust_role             |
| user                   |
| user_group_membership  |
| whitelisted_config     |
+------------------------+
To speed up authentication we can use memcached. Install and configure it, start it, and enable it at boot.
[root@Marvin-node1 keystone]# yum install memcached python-memcached -y
[root@Marvin-node1 keystone]# vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.203.21,::1"
[root@Marvin-node1 keystone]# systemctl enable memcached && systemctl start memcached
Created symlink from /etc/systemd/system/multi-user.target.wants/memcached.service to /usr/lib/systemd/system/memcached.service.
[root@Marvin-node1 keystone]# lsof -i:11211
COMMAND    PID      USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
memcached 5671 memcached   26u  IPv4  33922      0t0  TCP Marvin-node1:memcache (LISTEN)
memcached 5671 memcached   27u  IPv6  33923      0t0  TCP localhost:memcache (LISTEN)
memcached 5671 memcached   28u  IPv4  33924      0t0  UDP Marvin-node1:memcache
memcached 5671 memcached   29u  IPv6  33925      0t0  UDP localhost:memcache
c.4 Edit keystone.conf and point it at memcached
[root@Marvin-node1 keystone]# vim keystone.conf   ## in the [memcache] section, set the server address as follows
1462 [memcache]
1476 servers = 192.168.203.21:11211
c.5 Configure the token provider
[root@Marvin-node1 keystone]# vim keystone.conf
2614 [token]             ## section to edit
2659 provider = fernet   ## token provider
2669 driver = memcache   ## where tokens are stored
Review how much of keystone.conf has been changed in total:
[root@Marvin-node1 keystone]# grep '^[a-z]' keystone.conf
connection = mysql+pymysql://keystone:keystone@192.168.203.21/keystone   ## database
servers = 192.168.203.21:11211   ## memcached server address
provider = fernet                ## token provider
driver = memcache                ## where tokens are stored
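The same edits can be made without opening an editor. A sketch using crudini, which is not part of the original steps and is assumed to be installed (yum install crudini):
[root@Marvin-node1 keystone]# crudini --set /etc/keystone/keystone.conf database connection "mysql+pymysql://keystone:keystone@192.168.203.21/keystone"
[root@Marvin-node1 keystone]# crudini --set /etc/keystone/keystone.conf memcache servers 192.168.203.21:11211
[root@Marvin-node1 keystone]# crudini --set /etc/keystone/keystone.conf token provider fernet
[root@Marvin-node1 keystone]# crudini --set /etc/keystone/keystone.conf token driver memcache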
c.6 Initialize Keystone and bootstrap the identity service
[root@Marvin-node1 keystone]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@Marvin-node1 keystone]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
[root@Marvin-node1 keystone]# keystone-manage bootstrap --bootstrap-password admin \
>   --bootstrap-admin-url http://192.168.203.21:35357/v3/ \
>   --bootstrap-internal-url http://192.168.203.21:35357/v3/ \
>   --bootstrap-public-url http://192.168.203.21:5000/v3/ \
>   --bootstrap-region-id RegionOne
c.7 When that is done, check the database; if it matches the output below, Keystone is configured correctly.
[root@Marvin-node1 keystone]# mysql -ukeystone -pkeystone
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 15
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> use keystone;
Database changed
MariaDB [keystone]> show tables;
(the same 37 keystone tables as created by db_sync in step c.3)
37 rows in set (0.00 sec)
MariaDB [keystone]> select * from user\G
*************************** 1. row ***************************
                id: 0778433dcab742ea948d45e328c6a958
             extra: {}
           enabled: 1
default_project_id: NULL
        created_at: 2017-08-30 00:35:53
    last_active_at: NULL
1 row in set (0.00 sec)
MariaDB [keystone]> select * from role\G
*************************** 1. row ***************************
       id: 078de1fee26446a682d8a65eb7baa75a
     name: admin
    extra: {}
domain_id: <<null>>
*************************** 2. row ***************************
       id: 9fe2ff9ee4384b1894a90878d3e92bab
     name: _member_
    extra: {}
domain_id: <<null>>
2 rows in set (0.00 sec)
MariaDB [keystone]> select * from endpoint\G
*************************** 1. row ***************************
                id: 18cb9cc5ed8a432a8741775051d458ea
legacy_endpoint_id: NULL
         interface: public
        service_id: 5097ac0280414a1b99be529dfbb80efd
               url: http://192.168.203.21:5000/v3/
             extra: {}
           enabled: 1
         region_id: RegionOne
*************************** 2. row ***************************
                id: 5f4ce486d42547479b697165fb87addf
legacy_endpoint_id: NULL
         interface: internal
        service_id: 5097ac0280414a1b99be529dfbb80efd
               url: http://192.168.203.21:35357/v3/
             extra: {}
           enabled: 1
         region_id: RegionOne
*************************** 3. row ***************************
                id: 6decdc54138d43b5ad0b481d17b9163b
legacy_endpoint_id: NULL
         interface: admin
        service_id: 5097ac0280414a1b99be529dfbb80efd
               url: http://192.168.203.21:35357/v3/
             extra: {}
           enabled: 1
         region_id: RegionOne
3 rows in set (0.00 sec)
MariaDB [keystone]> exit
Bye
c.8 Configure the Apache service
[root@Marvin-node1 ~]# vim /etc/httpd/conf/httpd.conf
95 ServerName 192.168.203.21:80   ## line 95: set the Apache server name/address
[root@Marvin-node1 ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/   ## symlink the Keystone WSGI config into Apache's conf.d
[root@Marvin-node1 ~]# systemctl enable httpd && systemctl start httpd   ## start Apache and enable it at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@Marvin-node1 ~]# lsof -i:80   ## check that the service is up
COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
httpd   6010   root    4u  IPv6  34039      0t0  TCP *:http (LISTEN)
httpd   6021 apache    4u  IPv6  34039      0t0  TCP *:http (LISTEN)
httpd   6022 apache    4u  IPv6  34039      0t0  TCP *:http (LISTEN)
httpd   6023 apache    4u  IPv6  34039      0t0  TCP *:http (LISTEN)
httpd   6024 apache    4u  IPv6  34039      0t0  TCP *:http (LISTEN)
httpd   6025 apache    4u  IPv6  34039      0t0  TCP *:http (LISTEN)
c.9 Verify
[root@Marvin-node1 ~]# openstack user list
Missing value auth-url required for auth plugin password   ## this fails because the environment variables are not set yet
[root@Marvin-node1 ~]# export OS_USERNAME=admin
[root@Marvin-node1 ~]# export OS_PASSWORD=admin
[root@Marvin-node1 ~]# export OS_PROJECT_NAME=admin
[root@Marvin-node1 ~]# export OS_USER_DOMAIN_NAME=default
[root@Marvin-node1 ~]# export OS_PROJECT_DOMAIN_NAME=default
[root@Marvin-node1 ~]# export OS_AUTH_URL=http://192.168.203.21:35357/v3
[root@Marvin-node1 ~]# export OS_IDENTITY_API_VERSION=3   ## variables set; run the command again
[root@Marvin-node1 ~]# openstack user list   ## the admin user is listed, so authentication works
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 0778433dcab742ea948d45e328c6a958 | admin |
+----------------------------------+-------+
d. Create projects, users, and roles, and assign each user a role and a project
## create the service project
[root@Marvin-node1 ~]# openstack project create --domain default \
>   --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | beca05659ae949169c5311ed8ab4f841 |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | default                          |
+-------------+----------------------------------+
## create the demo project
[root@Marvin-node1 ~]# openstack project create --domain default \
>   --description "Demo Project" demo
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| domain_id   | default                          |
| enabled     | True                             |
| id          | e3c68befc5494752bf297066513db5aa |
| is_domain   | False                            |
| name        | demo                             |
| parent_id   | default                          |
+-------------+----------------------------------+
## create the demo user, with password demo
[root@Marvin-node1 ~]# openstack user create --domain default \
>   --password-prompt demo
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 6f131e4afeaf4e7b8dd594c388cf74e4 |
| name                | demo                             |
| password_expires_at | None                             |
+---------------------+----------------------------------+
## create a user role
[root@Marvin-node1 ~]# openstack role create user
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | 7d5a5b91b9ca40e4834c159ca7697523 |
| name      | user                             |
+-----------+----------------------------------+
## add the demo user to the demo project with the user role
[root@Marvin-node1 ~]# openstack role add --project demo --user demo user
Create the glance user, add it to the service project, and grant it the admin role
[root@Marvin-node1 ~]# openstack user create --domain default --password-prompt glance   ## password same as the user name, easy to remember and hard to mix up
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 0ed0ae42a25d4435831621e9315484d7 |
| name                | glance                           |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@Marvin-node1 ~]# openstack role add --project service --user glance admin
Create the nova, neutron, and cinder users the same way, add them to the service project, and grant them the admin role
[root@Marvin-node1 ~]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 71cd43e16b1a41358232f0f81f6d859c |
| name                | nova                             |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@Marvin-node1 ~]# openstack role add --project service --user nova admin
[root@Marvin-node1 ~]# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | cbbdbcdfc03343a29c91dc55a9b601f0 |
| name                | neutron                          |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@Marvin-node1 ~]# openstack role add --project service --user neutron admin
[root@Marvin-node1 ~]# openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 43fa18cb7c8244ecb49b7dfc74eef98b |
| name                | cinder                           |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@Marvin-node1 ~]# openstack role add --project service --user cinder admin
Unset the temporary variables and request tokens for both admin and demo to confirm everything works. Note that they use different ports: admin uses 35357, demo uses 5000.
[root@Marvin-node1 ~]# unset OS_AUTH_URL OS_PASSWORD   ## unset the temporary variables
[root@Marvin-node1 ~]# openstack --os-auth-url http://192.168.203.21:35357/v3 \
>   --os-project-domain-name default --os-user-domain-name default \
>   --os-project-name admin --os-username admin token issue
Password:   ## enter the admin password (admin)
+------------+---------------------------------------------------------------------------------------+
| Field      | Value                                                                                 |
+------------+---------------------------------------------------------------------------------------+
| expires    | 2017-08-30 02:06:55+00:00                                                             |
| id         | gAAAAABZpg-vc5CfiQ5ny46CDfUFeluqvpIoAgtut7Zjn8B3YVVG8Xd6g-                            |
|            | 7RvzuiTusYMvsivW_0Qxum7Y_oRgtcQvyGAkAtJL_a5vlBF8njBpeqBhZlRB3enNED-                   |
|            | 0sCVYJJyJtgt7pzwb1v_vdzAp3gca0B7mfr80tpg4hAH5Zo2pjOjyWjqds                            |
| project_id | 12dbd56ae8f04d56b4ade27d01618ae6                                                      |
| user_id    | 0778433dcab742ea948d45e328c6a958                                                      |
+------------+---------------------------------------------------------------------------------------+
[root@Marvin-node1 ~]# openstack --os-auth-url http://192.168.203.21:5000/v3 \
>   --os-project-domain-name default --os-user-domain-name default \
>   --os-project-name demo --os-username demo token issue
Password:   ## enter the demo password (demo)
+------------+---------------------------------------------------------------------------------------+
| Field      | Value                                                                                 |
+------------+---------------------------------------------------------------------------------------+
| expires    | 2017-08-30 02:08:16+00:00                                                             |
| id         | gAAAAABZphAA_Igt-PKcW4nX5QPEZFMA4sAGP-X-t3XrS5z6e4wPl-mWB9B1WLpw0btDzyK-cWhiK1Dba7rFV |
|            | 8LVgSC_gAC2NOTb_bjsKA1z7dLiCZxCZxgg84rsrlgvi6Snzp5L9-zTgiseT6lCcbyYjJknUpuo333HjQycvi |
|            | 1fLdFYkZiYiuc                                                                         |
| project_id | e3c68befc5494752bf297066513db5aa                                                      |
| user_id    | 6f131e4afeaf4e7b8dd594c388cf74e4                                                      |
+------------+---------------------------------------------------------------------------------------+
Create environment variable scripts for admin and demo
[root@Marvin-node1 ~]# vim admin-openstack
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.203.21:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
[root@Marvin-node1 ~]# vim demo-openstack
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.203.21:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Test:
[root@Marvin-node1 ~]# source admin-openstack
[root@Marvin-node1 ~]# openstack token issue
+------------+---------------------------------------------------------------------------------------+
| Field      | Value                                                                                 |
+------------+---------------------------------------------------------------------------------------+
| expires    | 2017-08-30 02:12:15+00:00                                                             |
| id         | gAAAAABZphDvi2c0CogBgKIN1rWmOl7VzTjlyQuXZL-pesceWkb4hGoYtHSlWFdFY9JhPwV0iypAXLh0_FmKb |
|            | edOcJikFz3IjEsafh728EdUPisCmMYZhRyX_T9_v8sjPLOeZdxmyq89j-XHwfAx7t-                    |
|            | d69Er_Gr4oJPwOlS7Ww71l1gHQ-WBLmo                                                      |
| project_id | 12dbd56ae8f04d56b4ade27d01618ae6                                                      |
| user_id    | 0778433dcab742ea948d45e328c6a958                                                      |
+------------+---------------------------------------------------------------------------------------+
[root@Marvin-node1 ~]# source demo-openstack
[root@Marvin-node1 ~]# openstack token issue
+------------+---------------------------------------------------------------------------------------+
| Field      | Value                                                                                 |
+------------+---------------------------------------------------------------------------------------+
| expires    | 2017-08-30 02:13:32+00:00                                                             |
| id         | gAAAAABZphE8OsSoJOPlr1Ut6Rb5nCRP1pcKwIwSOLrd8lNBMe0oeJcr2HdG0me_nysjZrR_cK3f4m0Dze5kx |
|            | tFfSsZ-8QFIeNq8XA_x3vAgqycY8W43hzEwTSRGRUpF4kCWEtGlgdiJnw2ocNCFcXTUB3hMgRAW-          |
|            | 3jtZXjHy577kUiqbhFR57g                                                                |
| project_id | e3c68befc5494752bf297066513db5aa                                                      |
| user_id    | 6f131e4afeaf4e7b8dd594c388cf74e4                                                      |
+------------+---------------------------------------------------------------------------------------+
## compare the two tokens: different credentials yield different tokens
e. Install and configure the Glance image service
e.1 Create the image service entity
[root@Marvin-node1 ~]# source admin-openstack
[root@Marvin-node1 ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | 0c1364b4636e486b9a2d7ffd17f0d47c |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
e.2 Create the Glance API endpoints: public, internal, and admin
[root@Marvin-node1 ~]# openstack endpoint create --region RegionOne \
>   image public http://192.168.203.21:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 94f9fd8d266b4566bab4dd90ce15fff1 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 0c1364b4636e486b9a2d7ffd17f0d47c |
| service_name | glance                           |
| service_type | image                            |
| url          | http://192.168.203.21:9292       |
+--------------+----------------------------------+
[root@Marvin-node1 ~]# openstack endpoint create --region RegionOne \
>   image internal http://192.168.203.21:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 5128b49d5cb14000b5ba23fe389d84ad |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 0c1364b4636e486b9a2d7ffd17f0d47c |
| service_name | glance                           |
| service_type | image                            |
| url          | http://192.168.203.21:9292       |
+--------------+----------------------------------+
[root@Marvin-node1 ~]# openstack endpoint create --region RegionOne \
>   image admin http://192.168.203.21:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | aabda6edf07d4091838386691a21a066 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 0c1364b4636e486b9a2d7ffd17f0d47c |
| service_name | glance                           |
| service_type | image                            |
| url          | http://192.168.203.21:9292       |
+--------------+----------------------------------+
Review what has been created:
[root@Marvin-node1 ~]# openstack service list
+----------------------------------+----------+----------+
| ID                               | Name     | Type     |
+----------------------------------+----------+----------+
| 0c1364b4636e486b9a2d7ffd17f0d47c | glance   | image    |
| 5097ac0280414a1b99be529dfbb80efd | keystone | identity |
+----------------------------------+----------+----------+
[root@Marvin-node1 ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                             |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| 18cb9cc5ed8a432a8741775051d458ea | RegionOne | keystone     | identity     | True    | public    | http://192.168.203.21:5000/v3/  |
| 5128b49d5cb14000b5ba23fe389d84ad | RegionOne | glance       | image        | True    | internal  | http://192.168.203.21:9292      |
| 5f4ce486d42547479b697165fb87addf | RegionOne | keystone     | identity     | True    | internal  | http://192.168.203.21:35357/v3/ |
| 6decdc54138d43b5ad0b481d17b9163b | RegionOne | keystone     | identity     | True    | admin     | http://192.168.203.21:35357/v3/ |
| 94f9fd8d266b4566bab4dd90ce15fff1 | RegionOne | glance       | image        | True    | public    | http://192.168.203.21:9292      |
| aabda6edf07d4091838386691a21a066 | RegionOne | glance       | image        | True    | admin     | http://192.168.203.21:9292      |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
e.3 Install and configure Glance
Install Glance and configure and sync its database
[root@Marvin-node1 ~]# yum install openstack-glance -y
[root@Marvin-node1 ~]# cd /etc/glance/
[root@Marvin-node1 glance]# ll
total 448
-rw-r----- 1 root glance 140377 Oct  6  2016 glance-api.conf
-rw-r----- 1 root glance  74933 Oct  6  2016 glance-cache.conf
-rw-r----- 1 root glance  71932 Oct  6  2016 glance-glare.conf
-rw-r----- 1 root glance  66745 Oct  6  2016 glance-registry.conf
-rw-r----- 1 root glance  79707 Oct  6  2016 glance-scrubber.conf
drwxr-xr-x 2 root root     4096 Aug 30 09:25 metadefs
-rw-r----- 1 root glance   1361 Oct  6  2016 policy.json
-rw-r----- 1 root glance   1380 Oct  6  2016 schema-image.json
## set the database connection in glance-api.conf and glance-registry.conf
[root@Marvin-node1 glance]# vim glance-api.conf
1721 [database]   ## section to edit
1748 connection = mysql+pymysql://glance:glance@192.168.203.21/glance   ## database connection to set
[root@Marvin-node1 glance]# vim glance-registry.conf
1011 [database]   ## section to edit
1038 connection = mysql+pymysql://glance:glance@192.168.203.21/glance   ## database connection to set
[root@Marvin-node1 glance]# su -s /bin/sh -c "glance-manage db_sync" glance   ## populate the database
Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1171: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade
  expire_on_commit=expire_on_commit, _conf=conf)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `ix_image_properties_image_id_name`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
## the warnings can be ignored
[root@Marvin-node1 glance]# mysql -h 192.168.203.21 -uglance -pglance -e "use glance;show tables;"   ## check that the tables were created
+----------------------------------+
| Tables_in_glance                 |
+----------------------------------+
| artifact_blob_locations          |
| artifact_blobs                   |
| artifact_dependencies            |
| artifact_properties              |
| artifact_tags                    |
| artifacts                        |
| image_locations                  |
| image_members                    |
| image_properties                 |
| image_tags                       |
| images                           |
| metadef_namespace_resource_types |
| metadef_namespaces               |
| metadef_objects                  |
| metadef_properties               |
| metadef_resource_types           |
| metadef_tags                     |
| migrate_version                  |
| task_info                        |
| tasks                            |
+----------------------------------+
Continue editing the Glance config files to configure authentication
[root@Marvin-node1 glance]# vim glance-api.conf
3178 [keystone_authtoken]   ## section to edit; add the following
auth_uri = http://192.168.203.21:5000
auth_url = http://192.168.203.21:35357
memcached_servers = 192.168.203.21:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
3965 [paste_deploy]   ## section to edit
3990 flavor = keystone   ## value to set
1837 [glance_store]   ## section to edit
1864 stores = file,http
1896 default_store = file
2196 filesystem_store_datadir = /var/lib/glance/images   ## value to set
[root@Marvin-node1 glance]# vim glance-registry.conf
1127 [keystone_authtoken]
auth_uri = http://192.168.203.21:5000
auth_url = http://192.168.203.21:35357
memcached_servers = 192.168.203.21:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
1885 [paste_deploy]
1910 flavor = keystone
## review everything changed in glance-api.conf
[root@Marvin-node1 glance]# grep '^[a-z]' glance-api.conf
connection = mysql+pymysql://glance:glance@192.168.203.21/glance
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images
auth_uri = http://192.168.203.21:5000
auth_url = http://192.168.203.21:35357
memcached_servers = 192.168.203.21:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
flavor = keystone
## review everything changed in glance-registry.conf
[root@Marvin-node1 glance]# grep '^[a-z]' glance-registry.conf
connection = mysql+pymysql://glance:glance@192.168.203.21/glance
auth_uri = http://192.168.203.21:5000
auth_url = http://192.168.203.21:35357
memcached_servers = 192.168.203.21:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
flavor = keystone
Start the Glance services and enable them at boot
[root@Marvin-node1 glance]# systemctl enable openstack-glance-api.service \
>   openstack-glance-registry.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-api.service to /usr/lib/systemd/system/openstack-glance-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service to /usr/lib/systemd/system/openstack-glance-registry.service.
[root@Marvin-node1 glance]# systemctl start openstack-glance-api.service \
>   openstack-glance-registry.service
Upload the official cirros image as a test
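If the image file is not already on the node, it can be downloaded first; a hedged example, assuming the usual cirros download location is reachable:
[root@Marvin-node1 ~]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img   ## fetch the small cirros test image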
[root@Marvin-node1 ~]# ll
total 12992
-rw-r--r--  1 root root      265 Aug 30 09:10 admin-openstack
-rw-------. 1 root root      931 Aug 30 03:02 anaconda-ks.cfg
-rw-r--r--  1 root root 13287936 May  8  2015 cirros-0.3.4-x86_64-disk.img
-rw-r--r--  1 root root      261 Aug 30 09:11 demo-openstack
[root@Marvin-node1 ~]# openstack image create "cirros" \
>   --file cirros-0.3.4-x86_64-disk.img \
>   --disk-format qcow2 --container-format bare \
>   --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                     |
| container_format | bare                                                 |
| created_at       | 2017-08-30T01:55:22Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/272fa290-1cfa-4d98-bce9-fe4401d3a15d/file |
| id               | 272fa290-1cfa-4d98-bce9-fe4401d3a15d                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | 12dbd56ae8f04d56b4ade27d01618ae6                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13287936                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2017-08-30T01:55:22Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
[root@Marvin-node1 ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 272fa290-1cfa-4d98-bce9-fe4401d3a15d | cirros | active |
+--------------------------------------+--------+--------+
[root@Marvin-node1 ~]# glance image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 272fa290-1cfa-4d98-bce9-fe4401d3a15d | cirros |
+--------------------------------------+--------+
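As an extra check not shown in the original steps, the uploaded image should now exist on disk under the filesystem_store_datadir configured earlier, stored under its image ID:
[root@Marvin-node1 ~]# ls -lh /var/lib/glance/images/   ## expect a ~13 MB file named after the image ID, e.g. 272fa290-1cfa-4d98-bce9-fe4401d3a15d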
f. Nova is installed on both the controller node and the compute node
f.1 Install and configure Nova on the controller node
Install
[root@Marvin-node1 ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler -y
Configure
[root@Marvin-node1 ~]# cd /etc/nova/
[root@Marvin-node1 nova]# ll
total 300
-rw-r----- 1 root nova   2717 May 31 00:07 api-paste.ini
-rw-r----- 1 root nova 289748 Aug  3 17:52 nova.conf
-rw-r----- 1 root nova      4 May 31 00:07 policy.json
-rw-r--r-- 1 root root     64 Aug  3 17:52 release
-rw-r----- 1 root nova    966 May 31 00:07 rootwrap.conf
[root@Marvin-node1 nova]# vim nova.conf
[DEFAULT]
14   auth_strategy=keystone
2062 use_neutron=True
3052 enabled_apis=osapi_compute,metadata
3265 firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
3601 transport_url=rabbit://openstack:openstack@192.168.203.21
[api_database]
3661 connection=mysql+pymysql://nova:nova@192.168.203.21/nova_api
[database]
4678 connection=mysql+pymysql://nova:nova@192.168.203.21/nova
5429 [keystone_authtoken]
auth_uri = http://192.168.203.21:5000
auth_url = http://192.168.203.21:35357
memcached_servers = 192.168.203.21:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
8345 [vnc]
8384 vncserver_listen=0.0.0.0
8396 vncserver_proxyclient_address=192.168.203.21
4802 [glance]
4813 api_servers=192.168.203.21:9292
6690 [oslo_concurrency]
6705 lock_path=/var/lib/nova/tmp
## review everything changed in nova.conf
[root@Marvin-node1 nova]# grep '^[a-z]' nova.conf
auth_strategy=keystone
use_neutron=True
enabled_apis=osapi_compute,metadata
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
transport_url=rabbit://openstack:openstack@192.168.203.21
connection=mysql+pymysql://nova:nova@192.168.203.21/nova_api
connection=mysql+pymysql://nova:nova@192.168.203.21/nova
api_servers=192.168.203.21:9292
auth_uri = http://192.168.203.21:5000
auth_url = http://192.168.203.21:35357
memcached_servers = 192.168.203.21:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
lock_path=/var/lib/nova/tmp
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.203.21
Populate the Nova databases and check the result
[root@Marvin-node1 nova]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@Marvin-node1 nova]# su -s /bin/sh -c "nova-manage db sync" nova
WARNING: cell0 mapping not found - not syncing cell0.
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
## the warnings can be ignored
[root@Marvin-node1 nova]# mysql -h 192.168.203.21 -unova -pnova -e "use nova;show tables;"
## the nova database should now contain the compute tables: agent_builds, aggregate_hosts, aggregate_metadata, aggregates, allocations, block_device_mapping, bw_usage_cache, cells, certificates, compute_nodes, console_auth_tokens, console_pools, consoles, dns_domains, fixed_ips, floating_ips, instance_actions, instance_actions_events, instance_extra, instance_faults, instance_group_member, instance_group_policy, instance_groups, instance_id_mappings, instance_info_caches, instance_metadata, instance_system_metadata, instance_type_extra_specs, instance_type_projects, instance_types, instances, inventories, key_pairs, migrate_version, migrations, networks, pci_devices, project_user_quotas, provider_fw_rules, quota_classes, quota_usages, quotas, reservations, resource_provider_aggregates, resource_providers, s3_images, security_group_default_rules, security_group_instance_association, security_group_rules, security_groups, services, snapshot_id_mappings, snapshots, tags, task_log, virtual_interfaces, volume_id_mappings, volume_usage_cache, plus the matching shadow_* copies of most of them
[root@Marvin-node1 nova]# mysql -h 192.168.203.21 -unova -pnova -e "use nova_api;show tables;"
+------------------------------+
| Tables_in_nova_api           |
+------------------------------+
| aggregate_hosts              |
| aggregate_metadata           |
| aggregates                   |
| allocations                  |
| build_requests               |
| cell_mappings                |
| flavor_extra_specs           |
| flavor_projects              |
| flavors                      |
| host_mappings                |
| instance_group_member        |
| instance_group_policy        |
| instance_groups              |
| instance_mappings            |
| inventories                  |
| key_pairs                    |
| migrate_version              |
| request_specs                |
| resource_provider_aggregates |
| resource_providers           |
+------------------------------+
Start the Nova services and enable them at boot
[root@Marvin-node1 nova]# systemctl enable openstack-nova-api.service \
>   openstack-nova-consoleauth.service openstack-nova-scheduler.service \
>   openstack-nova-conductor.service openstack-nova-novncproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service to /usr/lib/systemd/system/openstack-nova-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service to /usr/lib/systemd/system/openstack-nova-consoleauth.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service to /usr/lib/systemd/system/openstack-nova-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service to /usr/lib/systemd/system/openstack-nova-conductor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service to /usr/lib/systemd/system/openstack-nova-novncproxy.service.
[root@Marvin-node1 nova]# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Create the compute service entity
[root@Marvin-node1 nova]# openstack service create --name nova \
>   --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 2ddd52004c904179a9d4da3dbe3c53f7 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
Add the three Nova endpoint records
[root@Marvin-node1 nova]# openstack endpoint create --region RegionOne \
>   compute public http://192.168.203.21:8774/v2.1/%\(tenant_id\)s
+--------------+-----------------------------------------------+
| Field        | Value                                         |
+--------------+-----------------------------------------------+
| enabled      | True                                          |
| id           | 6ff405467f924e54b52d236a69f596ef              |
| interface    | public                                        |
| region       | RegionOne                                     |
| region_id    | RegionOne                                     |
| service_id   | 2ddd52004c904179a9d4da3dbe3c53f7              |
| service_name | nova                                          |
| service_type | compute                                       |
| url          | http://192.168.203.21:8774/v2.1/%(tenant_id)s |
+--------------+-----------------------------------------------+
[root@Marvin-node1 nova]# openstack endpoint create --region RegionOne \
>   compute internal http://192.168.203.21:8774/v2.1/%\(tenant_id\)s
+--------------+-----------------------------------------------+
| Field        | Value                                         |
+--------------+-----------------------------------------------+
| enabled      | True                                          |
| id           | 4a0b1ea353f148fe9fefd65d942a8ed6              |
| interface    | internal                                      |
| region       | RegionOne                                     |
| region_id    | RegionOne                                     |
| service_id   | 2ddd52004c904179a9d4da3dbe3c53f7              |
| service_name | nova                                          |
| service_type | compute                                       |
| url          | http://192.168.203.21:8774/v2.1/%(tenant_id)s |
+--------------+-----------------------------------------------+
[root@Marvin-node1 nova]# openstack endpoint create --region RegionOne \
>   compute admin http://192.168.203.21:8774/v2.1/%\(tenant_id\)s
+--------------+-----------------------------------------------+
| Field        | Value                                         |
+--------------+-----------------------------------------------+
| enabled      | True                                          |
| id           | 6b06166ae51d4120bf765e3fdeef2943              |
| interface    | admin                                         |
| region       | RegionOne                                     |
| region_id    | RegionOne                                     |
| service_id   | 2ddd52004c904179a9d4da3dbe3c53f7              |
| service_name | nova                                          |
| service_type | compute                                       |
| url          | http://192.168.203.21:8774/v2.1/%(tenant_id)s |
+--------------+-----------------------------------------------+
Verify that Nova on the controller node is configured correctly
[root@Marvin-node1 nova]# openstack host list
+--------------+-------------+----------+
| Host Name    | Service     | Zone     |
+--------------+-------------+----------+
| Marvin-node1 | conductor   | internal |
| Marvin-node1 | consoleauth | internal |
| Marvin-node1 | scheduler   | internal |
+--------------+-------------+----------+
[root@Marvin-node1 nova]# nova service-list
+----+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host         | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | Marvin-node1 | internal | enabled | up    | 2017-08-30T02:24:34.000000 | -               |
| 2  | nova-consoleauth | Marvin-node1 | internal | enabled | up    | 2017-08-30T02:24:34.000000 | -               |
| 3  | nova-scheduler   | Marvin-node1 | internal | enabled | up    | 2017-08-30T02:24:34.000000 | -               |
+----+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
[root@Marvin-node1 nova]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                           |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+
| 18cb9cc5ed8a432a8741775051d458ea | RegionOne | keystone     | identity     | True    | public    | http://192.168.203.21:5000/v3/                |
| 4a0b1ea353f148fe9fefd65d942a8ed6 | RegionOne | nova         | compute      | True    | internal  | http://192.168.203.21:8774/v2.1/%(tenant_id)s |
| 5128b49d5cb14000b5ba23fe389d84ad | RegionOne | glance       | image        | True    | internal  | http://192.168.203.21:9292                    |
| 5f4ce486d42547479b697165fb87addf | RegionOne | keystone     | identity     | True    | internal  | http://192.168.203.21:35357/v3/               |
| 6b06166ae51d4120bf765e3fdeef2943 | RegionOne | nova         | compute      | True    | admin     | http://192.168.203.21:8774/v2.1/%(tenant_id)s |
| 6decdc54138d43b5ad0b481d17b9163b | RegionOne | keystone     | identity     | True    | admin     | http://192.168.203.21:35357/v3/               |
| 6ff405467f924e54b52d236a69f596ef | RegionOne | nova         | compute      | True    | public    | http://192.168.203.21:8774/v2.1/%(tenant_id)s |
| 94f9fd8d266b4566bab4dd90ce15fff1 | RegionOne | glance       | image        | True    | public    | http://192.168.203.21:9292                    |
| aabda6edf07d4091838386691a21a066 | RegionOne | glance       | image        | True    | admin     | http://192.168.203.21:9292                    |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+
f.2 Install and configure Nova on the compute node
Install
[root@Marvin-node2 ~]# yum install openstack-nova-compute -y
Configure: simply copy the config file from the controller node and modify the local copy
[root@Marvin-node2 ~]# cd /etc/nova/
[root@Marvin-node2 nova]# ll
total 300
-rw-r-----. 1 root nova   2717 May 31 00:07 api-paste.ini
-rw-r-----. 1 root nova 289748 Aug  3 17:52 nova.conf
-rw-r-----. 1 root nova      4 May 31 00:07 policy.json
-rw-r--r--. 1 root root     64 Aug  3 17:52 release
-rw-r-----. 1 root nova    966 May 31 00:07 rootwrap.conf
[root@Marvin-node2 nova]# mv nova.conf nova.conf.marvin20170830
[root@Marvin-node2 nova]# ls
api-paste.ini  nova.conf.marvin20170830  policy.json  release  rootwrap.conf
[root@Marvin-node2 nova]# scp 192.168.203.21:/etc/nova/nova.conf /etc/nova/
The authenticity of host '192.168.203.21 (192.168.203.21)' can't be established.
ECDSA key fingerprint is 0a:18:ad:f3:ce:27:27:a2:89:4b:7a:36:01:e5:f1:3c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.203.21' (ECDSA) to the list of known hosts.
root@192.168.203.21's password:
nova.conf                                           100%  283KB 283.3KB/s   00:00
[root@Marvin-node2 nova]# ll
total 584
-rw-r-----. 1 root nova   2717 May 31 00:07 api-paste.ini
-rw-r-----. 1 root root 290096 Aug 30 10:37 nova.conf
-rw-r-----. 1 root nova 289748 Aug  3 17:52 nova.conf.marvin20170830
-rw-r-----. 1 root nova      4 May 31 00:07 policy.json
-rw-r--r--. 1 root root     64 Aug  3 17:52 release
-rw-r-----. 1 root nova    966 May 31 00:07 rootwrap.conf
[root@Marvin-node2 nova]# chgrp nova nova.conf
[root@Marvin-node2 nova]# ll
total 584
-rw-r-----. 1 root nova   2717 May 31 00:07 api-paste.ini
-rw-r-----. 1 root nova 290096 Aug 30 10:37 nova.conf
-rw-r-----. 1 root nova 289748 Aug  3 17:52 nova.conf.marvin20170830
-rw-r-----. 1 root nova      4 May 31 00:07 policy.json
-rw-r--r--. 1 root root     64 Aug  3 17:52 release
-rw-r-----. 1 root nova    966 May 31 00:07 rootwrap.conf
Edit nova.conf
[root@Marvin-node2 nova]# vim nova.conf
3649 [api_database]
3661 connection=mysql+pymysql://nova:nova@192.168.203.21/nova_api   ## delete this line on the compute node
4650 [database]
4677 connection=mysql+pymysql://nova:nova@192.168.203.21/nova       ## delete this line on the compute node
8343 [vnc]
8394 vncserver_proxyclient_address=192.168.203.22
8413 novncproxy_base_url=http://192.168.203.21:6080/vnc_auto.html
5579 [libvirt]
5672 virt_type=kvm   ## note: if the node has no hardware virtualization support, change this to qemu
8359 enabled=true
8375 keymap=en-us
## review everything changed in nova.conf
[root@Marvin-node2 nova]# grep '^[a-z]' nova.conf
auth_strategy=keystone
use_neutron=True
enabled_apis=osapi_compute,metadata
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
transport_url=rabbit://openstack:openstack@192.168.203.21
api_servers=192.168.203.21:9292
auth_uri = http://192.168.203.21:5000
auth_url = http://192.168.203.21:35357
memcached_servers = 192.168.203.21:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
virt_type=kvm
lock_path=/var/lib/nova/tmp
enabled=true
keymap=en-us
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.203.22
novncproxy_base_url=http://192.168.203.21:6080/vnc_auto.html
Check whether the compute node supports hardware virtualization; if it does, set virt_type to kvm, otherwise use qemu.
[root@Marvin-node2 ~]# egrep '(vmx|svm)' /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf eagerfpu pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 invpcid rtm rdseed adx smap
(the same flags line is printed once per CPU core)
[root@Marvin-node2 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
8   ## a non-zero count means virtualization is supported and kvm can be used
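If that count is 0, the node cannot use kvm and virt_type should be switched to qemu. A sketch of that change, again assuming crudini is available (it is not used in the original steps):
[root@Marvin-node2 nova]# crudini --set /etc/nova/nova.conf libvirt virt_type qemu   ## fall back to software emulation when vmx/svm is absent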
Start Nova on the compute node and enable it at boot
[root@Marvin-node2 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@Marvin-node2 ~]# systemctl start libvirtd.service openstack-nova-compute.service
Verify from the controller node
[root@Marvin-node1 ~]# nova service-list
+----+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host         | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | Marvin-node1 | internal | enabled | up    | 2017-08-30T02:58:14.000000 | -               |
| 2  | nova-consoleauth | Marvin-node1 | internal | enabled | up    | 2017-08-30T02:58:15.000000 | -               |
| 3  | nova-scheduler   | Marvin-node1 | internal | enabled | up    | 2017-08-30T02:58:15.000000 | -               |
| 14 | nova-compute     | Marvin-node2 | nova     | enabled | up    | 2017-08-30T02:58:13.000000 | -               |
+----+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
[root@Marvin-node1 ~]# openstack compute service list
+----+------------------+--------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host         | Zone     | Status  | State | Updated At                 |
+----+------------------+--------------+----------+---------+-------+----------------------------+
| 1  | nova-conductor   | Marvin-node1 | internal | enabled | up    | 2017-08-30T02:58:34.000000 |
| 2  | nova-consoleauth | Marvin-node1 | internal | enabled | up    | 2017-08-30T02:58:35.000000 |
| 3  | nova-scheduler   | Marvin-node1 | internal | enabled | up    | 2017-08-30T02:58:35.000000 |
| 14 | nova-compute     | Marvin-node2 | nova     | enabled | up    | 2017-08-30T02:58:33.000000 |
+----+------------------+--------------+----------+---------+-------+----------------------------+
g. Configure the neutron service
g.1 Install neutron on the controller node
Install
[root@Marvin-node1 ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
Edit the configuration file
[root@Marvin-node1 neutron]# vim neutron.conf
722 connection = mysql+pymysql://neutron:[email protected]/neutron
27 auth_strategy = keystone
30 core_plugin = ml2
33 service_plugins =
802 [keystone_authtoken]
803 auth_uri = http://192.168.203.21:5000
804 auth_url = http://192.168.203.21:35357
805 memcached_servers = 192.168.203.21:11211
806 auth_type = password
807 project_domain_name = default
808 user_domain_name = default
809 project_name = service
810 username = neutron
811 password = neutron
530 transport_url = rabbit://openstack:[email protected]
118 notify_nova_on_port_status_changes = true
122 notify_nova_on_port_data_changes = true
1001 [nova]
1002 auth_url = http://192.168.203.21:35357
1003 auth_type = password
1004 project_domain_name = default
1005 user_domain_name = default
1006 region_name = RegionOne
1007 project_name = service
1008 username = nova
1009 password = nova
1123 lock_path = /var/lib/neutron/tmp
[root@Marvin-node1 neutron]# grep '^[a-z]' neutron.conf
auth_strategy = keystone
core_plugin = ml2
service_plugins =
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
transport_url = rabbit://openstack:[email protected]
connection = mysql+pymysql://neutron:[email protected]/neutron
auth_uri = http://192.168.203.21:5000
auth_url = http://192.168.203.21:35357
memcached_servers = 192.168.203.21:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
auth_url = http://192.168.203.21:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
lock_path = /var/lib/neutron/tmp
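The connection line above assumes a neutron database and user already exist in MariaDB; that step is not shown in this part of the article, so if it was not done earlier, a sketch of the usual grants looks like this (NEUTRON_DBPASS is a placeholder and must match whatever password is used in the connection line):
[root@Marvin-node1 ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> FLUSH PRIVILEGES;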
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file
[root@Marvin-node1 neutron]# cd plugins/ml2/
[root@Marvin-node1 ml2]# ll
total 40
-rw-r----- 1 root neutron 8313 Aug 16 02:09 linuxbridge_agent.ini
-rw-r----- 1 root neutron 8730 Aug 16 02:09 ml2_conf.ini
-rw-r----- 1 root neutron 5008 Aug 16 02:09 ml2_conf_sriov.ini
-rw-r----- 1 root neutron 5425 Aug 16 02:09 sriov_agent.ini
[root@Marvin-node1 ml2]# vim ml2_conf.ini
101 [ml2]
109 type_drivers = flat,vlan,gre,vxlan,geneve
114 tenant_network_types = flat,vlan,gre,vxlan,geneve
118 mechanism_drivers = linuxbridge
123 extension_drivers = port_security
150 [ml2_type_flat]
159 flat_networks = public
220 [securitygroup]
236 enable_ipset = true
[root@Marvin-node1 ml2]# grep '^[a-z]' ml2_conf.ini
type_drivers = flat,vlan,gre,vxlan,geneve
tenant_network_types = flat,vlan,gre,vxlan,geneve
mechanism_drivers = linuxbridge
extension_drivers = port_security
flat_networks = public
enable_ipset = true
Configure the Linux bridge agent
[root@Marvin-node1 ml2]# vim linuxbridge_agent.ini
132 [linux_bridge]
143 physical_interface_mappings = public:eth0
149 [securitygroup]
156 firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
161 enable_security_group = true
168 [vxlan]
176 enable_vxlan = False
[root@Marvin-node1 ml2]# grep '^[a-z]' linuxbridge_agent.ini
physical_interface_mappings = public:eth0
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
enable_vxlan = False
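The mapping public:eth0 ties the provider network name to a physical NIC, so eth0 must actually exist on this node (on CentOS 7 the interface is often named ens33, eno1, and so on). A quick sanity check, as a sketch:
[root@Marvin-node1 ml2]# ip addr show eth0    ## if this errors, substitute the real interface name in physical_interface_mappings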
Configure the DHCP agent
[root@Marvin-node1 ml2]# cd /etc/neutron/
[root@Marvin-node1 neutron]# vim dhcp_agent.ini
16 interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
32 dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
41 enable_isolated_metadata = True
[root@Marvin-node1 neutron]# grep '^[a-z]' dhcp_agent.ini
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
Configure the metadata agent
[root@Marvin-node1 neutron]# vim metadata_agent.ini
22 nova_metadata_ip = 192.168.203.21
34 metadata_proxy_shared_secret = marvin    ## shared secret
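The shared secret is an arbitrary string, but it must match the metadata_proxy_shared_secret set in nova.conf in the next step. Instead of a fixed word, a random value can be generated; a sketch, not part of the original setup:
[root@Marvin-node1 neutron]# openssl rand -hex 10    ## use the output as metadata_proxy_shared_secret in both metadata_agent.ini and nova.conf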
Configure the nova service to use the network
[root@Marvin-node1 neutron]# cd /etc/nova/
[root@Marvin-node1 nova]# vim nova.conf
6469 [neutron]
url = http://192.168.203.21:9696
auth_url = http://192.168.203.21:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = marvin
## Create the ml2 plugin symlink
[root@Marvin-node1 nova]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
## Sync the neutron database
[root@Marvin-node1 nova]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
  Running upgrade for neutron ...
INFO [alembic.runtime.migration] Running upgrade  -> kilo, kilo_initial
INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225, nsxv_vdr_metadata.py
  ... (the long list of alembic migration steps is trimmed here) ...
INFO [alembic.runtime.migration] Running upgrade 6b461a21bcfc -> 5cd92597d11d, Add ip_allocation to port
  OK
Restart the nova-api service, then start the neutron services and enable them at boot
[root@Marvin-node1 ~]# systemctl restart openstack-nova-api.service
[root@Marvin-node1 ~]# systemctl enable neutron-server.service \
> neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
> neutron-metadata-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.
[root@Marvin-node1 ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
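A quick way to confirm neutron-server actually came up is to check that it is listening on its API port and that the neutron client can talk to it (run the second command with the admin credentials loaded); a sketch:
[root@Marvin-node1 ~]# ss -tnlp | grep 9696    ## neutron-server listens on TCP 9696
[root@Marvin-node1 ~]# neutron ext-list        ## should print the table of loaded extensions; failures usually point at the keystone_authtoken or rabbitmq settings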
Create the neutron service entity
[root@Marvin-node1 ~]# openstack service create --name neutron \
> --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | 9712a794bcc74bf0a6e2aa6a0af5aff5 |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
Register the endpoints
[[email protected] ~]# openstack endpoint create --region RegionOne > network public http://192.168.203.21:9696 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 03a2acaa6e404012b4c50ae629362820 | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 9712a794bcc74bf0a6e2aa6a0af5aff5 | | service_name | neutron | | service_type | network | | url | http://192.168.203.21:9696 | +--------------+----------------------------------+ [[email protected]-node1 ~]# openstack endpoint create --region RegionOne > network internal http://192.168.203.21:9696 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 4efd080b14374602a16ddb490620d923 | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | 9712a794bcc74bf0a6e2aa6a0af5aff5 | | service_name | neutron | | service_type | network | | url | http://192.168.203.21:9696 | +--------------+----------------------------------+ [[email protected]-node1 ~]# openstack endpoint create --region RegionOne > network admin http://192.168.203.21:9696 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 868744f92d3f40c38fabeb17830c4ae1 | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | 9712a794bcc74bf0a6e2aa6a0af5aff5 | | service_name | neutron | | service_type | network | | url | http://192.168.203.21:9696 | +--------------+----------------------------------+
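After creating the three endpoints it is worth listing them, since a typo in a URL here breaks every later neutron call; a sketch (run with admin credentials):
[root@Marvin-node1 ~]# openstack endpoint list --service network    ## expect public, internal and admin endpoints, all pointing at http://192.168.203.21:9696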
Verify neutron on the controller node
[[email protected] ~]# neutron agent-list +--------------+--------------+--------------+-------------------+-------+----------------+-----------------+ | id | agent_type | host | availability_zone | alive | admin_state_up | binary | +--------------+--------------+--------------+-------------------+-------+----------------+-----------------+ | 28e20e39-a72 | Linux bridge | Marvin-node1 | | :-) | True | neutron- | | 2-4310-8851- | agent | | | | | linuxbridge- | | ef364910995e | | | | | | agent | | 57b63dab- | Metadata | Marvin-node1 | | :-) | True | neutron- | | 01e9-4f90 | agent | | | | | metadata-agent | | -b37f- | | | | | | | | 444e875d5e25 | | | | | | | | dc7ef613 | DHCP agent | Marvin-node1 | nova | :-) | True | neutron-dhcp- | | -9abc-4b2e-9 | | | | | | agent | | 5f8-40a1019f | | | | | | | | e4fe | | | | | | | +--------------+--------------+--------------+-------------------+-------+----------------+-----------------+
g.2 Install the neutron service on the compute node
Install
[root@Marvin-node2 ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y
Configure neutron.conf
[root@Marvin-node2 ~]# cd /etc/neutron/
[root@Marvin-node2 neutron]# ll
total 68
drwxr-xr-x. 4 root root      64 Aug 30 11:58 conf.d
-rw-r-----. 1 root neutron 63378 Aug 16 02:09 neutron.conf
drwxr-xr-x. 3 root root      16 Aug 30 11:58 plugins
-rw-r--r--. 1 root root     1195 Jun  1 23:39 rootwrap.conf
[root@Marvin-node2 neutron]# vim neutron.conf
27 auth_strategy = keystone
530 transport_url = rabbit://openstack:[email protected]
802 [keystone_authtoken]
auth_uri = http://192.168.203.21:5000
auth_url = http://192.168.203.21:35357
memcached_servers = 192.168.203.21:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
1115 lock_path = /var/lib/neutron/tmp
[root@Marvin-node2 neutron]# grep '^[a-z]' neutron.conf
auth_strategy = keystone
transport_url = rabbit://openstack:[email protected]
auth_uri = http://192.168.203.21:5000
auth_url = http://192.168.203.21:35357
memcached_servers = 192.168.203.21:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
lock_path = /var/lib/neutron/tmp
Configure the Linux bridge agent. Since the settings are identical on the controller and compute nodes, simply copy the file over from the controller node and fix its group ownership.
[root@Marvin-node2 neutron]# cd plugins/ml2/
[root@Marvin-node2 ml2]# ls
linuxbridge_agent.ini
[root@Marvin-node2 ml2]# mv linuxbridge_agent.ini linuxbridge_agent.ini.marvin20170830
[root@Marvin-node2 ml2]# scp 192.168.203.21:/etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/
root@192.168.203.21's password:
linuxbridge_agent.ini                          100% 8376     8.2KB/s   00:00
[root@Marvin-node2 ml2]# ls
linuxbridge_agent.ini  linuxbridge_agent.ini.marvin20170830
[root@Marvin-node2 ml2]# ll
total 24
-rw-r-----. 1 root root    8376 Aug 30 12:06 linuxbridge_agent.ini
-rw-r-----. 1 root neutron 8313 Aug 16 02:09 linuxbridge_agent.ini.marvin20170830
[root@Marvin-node2 ml2]# chgrp neutron linuxbridge_agent.ini
[root@Marvin-node2 ml2]# ll
total 24
-rw-r-----. 1 root neutron 8376 Aug 30 12:06 linuxbridge_agent.ini
-rw-r-----. 1 root neutron 8313 Aug 16 02:09 linuxbridge_agent.ini.marvin20170830
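Copying the file unchanged only works if the compute node's NIC has the same name as the controller's (eth0 here); if the names differ, physical_interface_mappings must be adjusted on the compute node. A quick sanity check, as a sketch:
[root@Marvin-node2 ml2]# ip link show eth0    ## if the device does not exist, edit physical_interface_mappings = public:<real-interface>
[root@Marvin-node2 ml2]# grep physical_interface_mappings linuxbridge_agent.ini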
Configure the network settings in nova on the compute node
[root@Marvin-node2 ml2]# cd /etc/nova/
[root@Marvin-node2 nova]# vim nova.conf
6467 [neutron]
url = http://192.168.203.21:9696
auth_url = http://192.168.203.21:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Restart the nova compute service, then start the Linux bridge agent and enable it at boot
[root@Marvin-node2 nova]# systemctl restart openstack-nova-compute.service
[root@Marvin-node2 nova]# systemctl enable neutron-linuxbridge-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@Marvin-node2 nova]# systemctl start neutron-linuxbridge-agent.service
Verify on the controller node
[[email protected] ~]# neutron agent-list +--------------+--------------+--------------+-------------------+-------+----------------+-----------------+ | id | agent_type | host | availability_zone | alive | admin_state_up | binary | +--------------+--------------+--------------+-------------------+-------+----------------+-----------------+ | 28e20e39-a72 | Linux bridge | Marvin-node1 | | :-) | True | neutron- | | 2-4310-8851- | agent | | | | | linuxbridge- | | ef364910995e | | | | | | agent | | 57b63dab- | Metadata | Marvin-node1 | | :-) | True | neutron- | | 01e9-4f90 | agent | | | | | metadata-agent | | -b37f- | | | | | | | | 444e875d5e25 | | | | | | | | 8a64bb40-d70 | Linux bridge | Marvin-node2 | | :-) | True | neutron- | | 0-4021-83ce- | agent | | | | | linuxbridge- | | 749c5459568d | | | | | | agent | | dc7ef613 | DHCP agent | Marvin-node1 | nova | :-) | True | neutron-dhcp- | | -9abc-4b2e-9 | | | | | | agent | | 5f8-40a1019f | | | | | | | | e4fe | | | | | | | +--------------+--------------+--------------+-------------------+-------+----------------+-----------------+
[root@Marvin-node1 ~]# nova service-list
+----+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-conductor | Marvin-node1 | internal | enabled | up | 2017-08-30T04:14:53.000000 | - |
| 2 | nova-consoleauth | Marvin-node1 | internal | enabled | up | 2017-08-30T04:14:56.000000 | - |
| 3 | nova-scheduler | Marvin-node1 | internal | enabled | up | 2017-08-30T04:14:55.000000 | - |
| 14 | nova-compute | Marvin-node2 | nova | enabled | up | 2017-08-30T04:14:52.000000 | - |
+----+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
h. Steps to create an instance
h.1 Create the provider network
[[email protected] ~]# neutron net-create --shared --provider:physical_network public > --provider:network_type flat public Created a new network: +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | True | | availability_zone_hints | | | availability_zones | | | created_at | 2017-08-30T10:33:14Z | | description | | | id | c2cc92db-b701-4ab3-97f4-87f7732135a9 | | ipv4_address_scope | | | ipv6_address_scope | | | mtu | 1500 | | name | public | | port_security_enabled | True | | project_id | 12dbd56ae8f04d56b4ade27d01618ae6 | | provider:network_type | flat | | provider:physical_network | public | | provider:segmentation_id | | | revision_number | 3 | | router:external | False | | shared | True | | status | ACTIVE | | subnets | | | tags | | | tenant_id | 12dbd56ae8f04d56b4ade27d01618ae6 | | updated_at | 2017-08-30T10:33:14Z | +---------------------------+--------------------------------------+ [[email protected]-node1 ~]# neutron net-list +--------------------------------------+--------+---------+ | id | name | subnets | +--------------------------------------+--------+---------+ | c2cc92db-b701-4ab3-97f4-87f7732135a9 | public | | +--------------------------------------+--------+---------+
h.2 Create a subnet
[[email protected] ~]# openstack subnet create --network public > --allocation-pool start=192.168.203.100,end=192.168.203.200 > --dns-nameserver 192.168.203.254 --gateway 192.168.203.254 > --subnet-range 192.168.203.0/24 public-subnet +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | allocation_pools | 192.168.203.100-192.168.203.200 | | cidr | 192.168.203.0/24 | | created_at | 2017-08-30T10:34:29Z | | description | | | dns_nameservers | 192.168.203.254 | | enable_dhcp | True | | gateway_ip | 192.168.203.254 | | headers | | | host_routes | | | id | 039f41f1-30ad-4bc7-ae9d-72a0f7238833 | | ip_version | 4 | | ipv6_address_mode | None | | ipv6_ra_mode | None | | name | public-subnet | | network_id | c2cc92db-b701-4ab3-97f4-87f7732135a9 | | project_id | 12dbd56ae8f04d56b4ade27d01618ae6 | | project_id | 12dbd56ae8f04d56b4ade27d01618ae6 | | revision_number | 2 | | service_types | [] | | subnetpool_id | None | | updated_at | 2017-08-30T10:34:29Z | +-------------------+--------------------------------------+
[root@Marvin-node1 ~]# neutron subnet-list
+------------------------------------+---------------+------------------+-------------------------------------+
| id | name | cidr | allocation_pools |
+------------------------------------+---------------+------------------+-------------------------------------+
| 039f41f1-30ad-4bc7-ae9d- | public-subnet | 192.168.203.0/24 | {"start": "192.168.203.100", "end": |
| 72a0f7238833 | | | "192.168.203.200"} |
+------------------------------------+---------------+------------------+-------------------------------------+
Prerequisites for creating an instance: an image and a flavor
[root@Marvin-node1 ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano    ## create an instance flavor
+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| disk                       | 1       |
| id                         | 0       |
| name                       | m1.nano |
| os-flavor-access:is_public | True    |
| properties                 |         |
| ram                        | 64      |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+
Generate a key pair
[root@Marvin-node1 ~]# source demo-openstack
[root@Marvin-node1 ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
Create a key named mykey by uploading /root/.ssh/id_rsa.pub to OpenStack
[root@Marvin-node1 ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey    ## upload the key to the OpenStack server
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| fingerprint | 77:ee:e3:42:84:f8:76:06:b3:72:98:0b:49:3a:71:a9 |
| name        | mykey                                           |
| user_id     | 5802189929594d8bb9b5862a24c45bb2                |
+-------------+-------------------------------------------------+
[root@Marvin-node1 ~]# openstack keypair list    ## list the uploaded keys
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | 77:ee:e3:42:84:f8:76:06:b3:72:98:0b:49:3a:71:a9 |
+-------+-------------------------------------------------+
Add a security group rule to allow ICMP
[root@Marvin-node1 ~]# openstack security group rule create --proto icmp default
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| created_at        | 2017-08-28T06:02:42Z                 |
| description       |                                      |
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| headers           |                                      |
| id                | 434acf54-fecc-43f7-94a4-d62c99e11bf5 |
| port_range_max    | None                                 |
| port_range_min    | None                                 |
| project_id        | e625a8c794ad4039ae379af3a4101935     |
| project_id        | e625a8c794ad4039ae379af3a4101935     |
| protocol          | icmp                                 |
| remote_group_id   | None                                 |
| remote_ip_prefix  | 0.0.0.0/0                            |
| revision_number   | 1                                    |
| security_group_id | ca9dc733-f1d4-49b7-80d9-f3b1ab862553 |
| updated_at        | 2017-08-28T06:02:42Z                 |
+-------------------+--------------------------------------+
Open port 22
[root@Marvin-node1 ~]# openstack security group rule create --proto tcp --dst-port 22 default
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| created_at        | 2017-08-28T06:03:17Z                 |
| description       |                                      |
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| headers           |                                      |
| id                | f89d4f9c-0c1e-4f53-b728-686615531b4b |
| port_range_max    | 22                                   |
| port_range_min    | 22                                   |
| project_id        | e625a8c794ad4039ae379af3a4101935     |
| project_id        | e625a8c794ad4039ae379af3a4101935     |
| protocol          | tcp                                  |
| remote_group_id   | None                                 |
| remote_ip_prefix  | 0.0.0.0/0                            |
| revision_number   | 1                                    |
| security_group_id | ca9dc733-f1d4-49b7-80d9-f3b1ab862553 |
| updated_at        | 2017-08-28T06:03:17Z                 |
+-------------------+--------------------------------------+
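Before launching, it can help to double-check that the key pair and the two security-group rules really landed in the demo project; a short verification sketch:
[root@Marvin-node1 ~]# openstack security group rule list default    ## expect one icmp rule and one tcp port-22 rule from 0.0.0.0/0
[root@Marvin-node1 ~]# openstack keypair list                        ## mykey should be listed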
Information to check before launching an instance
[root@Marvin-node1 ~]# openstack flavor list    ## available flavors
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name    | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0  | m1.nano | 64  | 1    | 0         | 1     | True      |
+----+---------+-----+------+-----------+-------+-----------+
[root@Marvin-node1 ~]# openstack image list    ## available images
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 272fa290-1cfa-4d98-bce9-fe4401d3a15d | cirros | active |
+--------------------------------------+--------+--------+
[root@Marvin-node1 ~]# openstack network list    ## available networks
+--------------------------------------+--------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+--------+--------------------------------------+
| c2cc92db-b701-4ab3-97f4-87f7732135a9 | public | 039f41f1-30ad-4bc7-ae9d-72a0f7238833 |
+--------------------------------------+--------+--------------------------------------+
[root@Marvin-node1 ~]# openstack security group list    ## available security groups
+--------------------------------------+---------+------------------------+----------------------------------+
| ID                                   | Name    | Description            | Project                          |
+--------------------------------------+---------+------------------------+----------------------------------+
| d0b6be21-3521-4329-8960-d8abec427b01 | default | Default security group | e3c68befc5494752bf297066513db5aa |
+--------------------------------------+---------+------------------------+----------------------------------+
Launch the instance
[[email protected] ~]# source demo-openstack [[email protected]-node1 ~]# openstack server create --flavor m1.nano --image cirros > --nic net-id=c2cc92db-b701-4ab3-97f4-87f7732135a9 --security-group default > --key-name mykey dom1-instance +--------------------------------------+-----------------------------------------------+ | Field | Value | +--------------------------------------+-----------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | | | OS-EXT-STS:power_state | NOSTATE | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | None | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | | | adminPass | oqMAjR9j2LDg | | config_drive | | | created | 2017-08-30T10:43:09Z | | flavor | m1.nano (0) | | hostId | | | id | a4f0c3a6-fe5d-4520-b605-1ed474f02269 | | image | cirros (272fa290-1cfa-4d98-bce9-fe4401d3a15d) | | key_name | mykey | | name | dom1-instance | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | project_id | e3c68befc5494752bf297066513db5aa | | properties | | | security_groups | [{u‘name‘: u‘default‘}] | | status | BUILD | | updated | 2017-08-30T10:43:09Z | | user_id | 6f131e4afeaf4e7b8dd594c388cf74e4 | +--------------------------------------+-----------------------------------------------+
Check the instance status
[root@Marvin-node1 ~]# openstack server list
+--------------------------------------+---------------+--------+------------------------+------------+
| ID                                   | Name          | Status | Networks               | Image Name |
+--------------------------------------+---------------+--------+------------------------+------------+
| a4f0c3a6-fe5d-4520-b605-1ed474f02269 | dom1-instance | ACTIVE | public=192.168.203.103 | cirros     |
+--------------------------------------+---------------+--------+------------------------+------------+
Get the VNC console URL
[root@Marvin-node1 ~]# openstack console url show dom1-instance
+-------+-------------------------------------------------------------------------------------+
| Field | Value                                                                               |
+-------+-------------------------------------------------------------------------------------+
| type  | novnc                                                                               |
| url   | http://192.168.203.21:6080/vnc_auto.html?token=3c41be8e-a89b-4f8b-bf42-43e6419d13f1 |
+-------+-------------------------------------------------------------------------------------+
Open the URL in a browser
http://192.168.203.21:6080/vnc_auto.html?token=3c41be8e-a89b-4f8b-bf42-43e6419d13f1
I got stuck here and could not find the cause: openstack server list shows that the instance has been assigned an IP address, yet after logging in through VNC the guest has no address at all. When I built the same environment on my own laptop with an identical configuration it worked fine; with virtual machines on vSphere I installed twice and hit the same problem both times. I will stop here for now and update this post if I find the cause; if you know what the problem is, feel free to contact me so we can figure it out together.
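A few hedged pointers that are worth checking in this situation (general debugging steps, not something verified in this article): confirm the DHCP namespace and dnsmasq are alive on the controller, watch for DHCP traffic on the provider interface, and, since the nodes are vSphere VMs, check the vSwitch/port-group security policy. Nested instances only receive DHCP replies if Promiscuous Mode, MAC Address Changes and Forged Transmits are set to Accept, which is a very common cause of exactly this symptom.
## Sketch of DHCP debugging on the controller node; the qdhcp namespace name comes from the network ID created above
[root@Marvin-node1 ~]# ip netns list                                                       ## expect qdhcp-c2cc92db-b701-4ab3-97f4-87f7732135a9
[root@Marvin-node1 ~]# ip netns exec qdhcp-c2cc92db-b701-4ab3-97f4-87f7732135a9 ip addr    ## the namespace should hold an address from the public subnet
[root@Marvin-node1 ~]# ps -ef | grep dnsmasq                                               ## dnsmasq must be running for this network
[root@Marvin-node1 ~]# tcpdump -i eth0 -n port 67 or port 68                               ## watch whether DHCP requests from the instance ever arrive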
---------- Author's QQ: 779734791, Marvin. Everyone is welcome to join the discussion.