OpenStack Mitaka HA Deployment Plan (Notes)


[Toc]

---
title: Openstack Mitaka Cluster Installation and Deployment
date: 2017-03-04-14 23:37
tags: Openstack
---


### Openstack Mitaka HA Deployment and Test Document

#### 1. Environment

##### 1. Host Environment

```
controller(VIP) 192.168.10.100
controller01 192.168.10.101, 10.0.0.1
controller02 192.168.10.102, 10.0.0.2
controller03 192.168.10.103, 10.0.0.3
compute01 192.168.10.104, 10.0.0.4
compute02 192.168.10.105, 10.0.0.5
```
This environment is for testing only, mainly to verify the HA functionality. For production, segment the networks properly.

 

#### 2. Basic Environment Configuration


##### 1. Hostname Resolution


```
# Set the hostname on the corresponding node:

hostnamectl set-hostname controller01
hostname controller01

hostnamectl set-hostname controller02
hostname controller02

hostnamectl set-hostname controller03
hostname controller03

hostnamectl set-hostname compute01
hostname compute01

hostnamectl set-hostname compute02
hostname compute02
```


```
# Configure host resolution on controller01:

[root@controller01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.100 controller
192.168.10.101 controller01
192.168.10.102 controller02
192.168.10.103 controller03

192.168.10.104 compute01
192.168.10.105 compute02
```

##### 2. SSH Trust Configuration


```
# Configure on controller01:

ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
ssh-copy-id -i .ssh/id_rsa.pub root@controller02
ssh-copy-id -i .ssh/id_rsa.pub root@controller03
ssh-copy-id -i .ssh/id_rsa.pub root@compute01
ssh-copy-id -i .ssh/id_rsa.pub root@compute02
```


```
# Copy the hosts file to the other nodes
scp /etc/hosts controller02:/etc/hosts
scp /etc/hosts controller03:/etc/hosts
scp /etc/hosts compute01:/etc/hosts
scp /etc/hosts compute02:/etc/hosts
```
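
Before moving on, it is worth confirming that the SSH trust and the copied hosts file actually work. A minimal check, run from controller01 with the node names above:

```
# Confirm passwordless SSH and hostname resolution to every other node
for host in controller02 controller03 compute01 compute02; do
    echo "== $host =="
    ssh -o BatchMode=yes "$host" hostname
done
# Name resolution for the VIP name and the node names
getent hosts controller controller01 compute02
```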

##### 3. Yum Repository Configuration

All nodes in this test environment have Internet access, so the Aliyun OpenStack and base repositories are used.


```
# Enable the yum cache on all controller and compute nodes
[root@controller01 ~]# cat /etc/yum.conf 
[main]
cachedir=/var/cache/yum/$basearch/$releasever
# keepcache=1 enables the package cache, keepcache=0 disables it (the default is 0)
keepcache=1
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5
bugtracker_url=http://bugs.centos.org/set_project.php?project_id=23&ref=http://bugs.centos.org/bug_report_page.php?category=yum
distroverpkg=centos-release

# Base repository
yum install wget -y
rm -rf /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

# OpenStack Mitaka repository
yum install centos-release-openstack-mitaka -y
# The default baseurl points at the CentOS mirror; switching to the Aliyun mirror is recommended because it is faster
[root@controller01 yum.repos.d]# vim CentOS-OpenStack-mitaka.repo 
# CentOS-OpenStack-mitaka.repo
#
# Please see http://wiki.centos.org/SpecialInterestGroup/Cloud for more
# information

[centos-openstack-mitaka]
name=CentOS-7 - OpenStack mitaka
baseurl=http://mirrors.aliyun.com//centos/7/cloud/$basearch/openstack-mitaka/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud


# Galera (MariaDB) repository
vim mariadb.repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos7-amd64/
enabled=1
gpgcheck=1
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
```

scp the repo files to all other nodes

```
scp CentOS-OpenStack-mitaka.repo controller02:/etc/yum.repos.d/CentOS-OpenStack-mitaka.repo
scp CentOS-OpenStack-mitaka.repo controller03:/etc/yum.repos.d/CentOS-OpenStack-mitaka.repo
scp CentOS-OpenStack-mitaka.repo compute01:/etc/yum.repos.d/CentOS-OpenStack-mitaka.repo
scp CentOS-OpenStack-mitaka.repo compute02:/etc/yum.repos.d/CentOS-OpenStack-mitaka.repo

scp mariadb.repo controller02:/etc/yum.repos.d/mariadb.repo
scp mariadb.repo controller03:/etc/yum.repos.d/mariadb.repo
scp mariadb.repo compute01:/etc/yum.repos.d/mariadb.repo
scp mariadb.repo compute02:/etc/yum.repos.d/mariadb.repo
```

 

##### 4. NTP Configuration
An NTP server (192.168.2.161) already exists in this environment, so it is used directly. If you do not have one, using the controller as the NTP server is recommended.

```
yum install ntpdate -y
echo "*/5 * * * * /usr/sbin/ntpdate 192.168.2.161 >/dev/null 2>&1" >> /var/spool/cron/root
/usr/sbin/ntpdate 192.168.2.161
```
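
A quick way to verify that every node is actually in sync (a sketch assuming the same 192.168.2.161 server and the SSH trust configured earlier; `-q` only queries, it does not step the clock):

```
# Show the offset against the NTP server on every node
for host in controller01 controller02 controller03 compute01 compute02; do
    echo "== $host =="
    ssh "$host" /usr/sbin/ntpdate -q 192.168.2.161
done
```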

##### 5. Disable the Firewall and SELinux

```
systemctl disable firewalld.service
systemctl stop firewalld.service
sed -i -e "s#SELINUX=enforcing#SELINUX=disabled#g" /etc/selinux/config
sed -i -e "s#SELINUXTYPE=targeted#\#SELINUXTYPE=targeted#g" /etc/selinux/config
setenforce 0
systemctl stop NetworkManager
systemctl disable NetworkManager
```

##### 6. Install and Configure Pacemaker


```
# Install the following packages on all controller nodes
yum install -y pcs pacemaker corosync fence-agents-all resource-agents
# Edit the corosync configuration file
[root@controller01 ~]# cat /etc/corosync/corosync.conf
totem {
version: 2
secauth: off
cluster_name: openstack-cluster
transport: udpu
}

nodelist {
node {
ring0_addr: controller01
nodeid: 1
}
node {
ring0_addr: controller02
nodeid: 2
}
node {
ring0_addr: controller03
nodeid: 3
}
}

quorum {
provider: corosync_votequorum
# two_node is only valid for 2-node clusters and is not set here (3 nodes)
}

logging {
to_syslog: yes
}
```

```
# Copy the configuration file to the other controller nodes
scp /etc/corosync/corosync.conf controller02:/etc/corosync/corosync.conf
scp /etc/corosync/corosync.conf controller03:/etc/corosync/corosync.conf
```


```
# View cluster membership (run after the cluster has been started)
corosync-cmapctl runtime.totem.pg.mrp.srp.members
```

 

```


# Start the pcsd service on all controller nodes
systemctl enable pcsd
systemctl start pcsd

# Set the hacluster user's password on all controller nodes
echo hacluster | passwd --stdin hacluster

# [controller01] Authenticate to the cluster nodes
pcs cluster auth controller01 controller02 controller03 -u hacluster -p hacluster --force
# [controller01] Create and start the cluster
pcs cluster setup --force --name openstack-cluster controller01 controller02 controller03
pcs cluster start --all
# [controller01] Set cluster properties
pcs property set pe-warn-series-max=1000 pe-input-series-max=1000 pe-error-series-max=1000 cluster-recheck-interval=5min
# [controller01] Temporarily disable STONITH, otherwise resources cannot start
pcs property set stonith-enabled=false

# [controller01] Ignore quorum loss
pcs property set no-quorum-policy=ignore

# [controller01] Configure the VIP resource; the VIP floats between the cluster nodes
pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.10.100 cidr_netmask="24" op monitor interval="30s"
```
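
At this point the cluster and the VIP can be verified from any controller node. A minimal check (the VIP should be plumbed on exactly one node):

```
# Cluster membership and resource state
pcs status
pcs resource show vip
# The node currently holding the VIP shows it on its interface
ip addr | grep 192.168.10.100
```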

##### 7. Install HAProxy


```
# [All controller nodes] Install the package
yum install -y haproxy

# [All controller nodes] Create /etc/rsyslog.d/haproxy.conf
echo "\$ModLoad imudp" >> /etc/rsyslog.d/haproxy.conf;
echo "\$UDPServerRun 514" >> /etc/rsyslog.d/haproxy.conf;
echo "local3.* /var/log/haproxy.log" >> /etc/rsyslog.d/haproxy.conf;
echo "&~" >> /etc/rsyslog.d/haproxy.conf;

# [All controller nodes] Edit /etc/sysconfig/rsyslog
sed -i -e s#SYSLOGD_OPTIONS=\"\"#SYSLOGD_OPTIONS=\"-c 2 -r -m 0\"#g /etc/sysconfig/rsyslog

# [All controller nodes] Restart rsyslog
systemctl restart rsyslog

# Create the base haproxy configuration
vim /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the -r option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local3
chroot /var/lib/haproxy
daemon
group haproxy
maxconn 4000
pidfile /var/run/haproxy.pid
user haproxy


#---------------------------------------------------------------------
# common defaults that all the listen and backend sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
log global
maxconn 4000
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout check 10s

# Note: stock HAProxy has no "include" directive; the service-specific
# listen sections from the following steps are appended directly to this file.
```

```
# Copy to the other controller nodes
scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg
```

```
# [controller01] Add the haproxy resource to the pacemaker cluster
pcs resource create haproxy systemd:haproxy --clone
# kind=Optional: the order constraint only applies when both resources are started and/or stopped in the same transition; a change to the first resource alone does not force action on the second, but when both start, vip is started first.
pcs constraint order start vip then haproxy-clone kind=Optional
# The vip resource constrains where haproxy-clone is placed (colocation)
pcs constraint colocation add haproxy-clone with vip
ping -c 3 192.168.10.100
```
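
Besides the ping above, confirm that the clone resource and its constraints were recorded. Note that haproxy may not stay up until at least one listen section is added in the following steps, so failures on the clone at this stage are not necessarily a problem:

```
# vip plus haproxy-clone, with the order and colocation constraints from above
pcs resource
pcs constraint show
```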


##### 8. Install and Configure Galera


```
# Basic steps on all controller nodes: install the packages and prepare the configuration files
yum install -y MariaDB-server xinetd

# Configure on all controller nodes
vim /usr/lib/systemd/system/mariadb.service
# Add the following two lines under the [Service] section:
LimitNOFILE=10000
LimitNPROC=10000

systemctl --system daemon-reload 
systemctl restart mariadb.service

# Initialize the database; run on controller01 only
systemctl start mariadb
mysql_secure_installation

# Check the connection limit
show variables like 'max_connections';

# Stop the service before editing the configuration file
systemctl stop mariadb

# Back up the original configuration file
cp /etc/my.cnf.d/server.cnf /etc/my.cnf.d/bak.server.cnf

```


```
# Configuration file on controller01
cat /etc/my.cnf.d/server.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
binlog_format=ROW
max_connections = 4096
bind-address= 192.168.10.101

default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=122M

wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_provider_options="pc.recovery=TRUE;gcache.size=300M"
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://controller01,controller02,controller03"
wsrep_node_name= controller01
wsrep_node_address= 192.168.10.101
wsrep_sst_method=rsync
```


```
# Configuration file on controller02
cat /etc/my.cnf.d/server.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
binlog_format=ROW
max_connections = 4096
bind-address= 192.168.10.102

default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=122M

wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_provider_options="pc.recovery=TRUE;gcache.size=300M"
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://controller01,controller02,controller03"
wsrep_node_name= controller02
wsrep_node_address= 192.168.10.102
wsrep_sst_method=rsync
```


```
# Configuration file on controller03
cat /etc/my.cnf.d/server.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
binlog_format=ROW
max_connections = 4096
bind-address= 192.168.10.103

default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=122M

wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_provider_options="pc.recovery=TRUE;gcache.size=300M"
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://controller01,controller02,controller03"
wsrep_node_name= controller03
wsrep_node_address= 192.168.10.103
wsrep_sst_method=rsync
```


```
# Run on controller01
galera_new_cluster

# Watch the log
tail -f /var/log/messages

# Start MariaDB on the other controller nodes
systemctl enable mariadb
systemctl start mariadb
```


```
# Add the check user and verify the cluster
mysql -uroot -popenstack -e "use mysql;INSERT INTO user(Host, User) VALUES('192.168.10.100', 'haproxy_check');FLUSH PRIVILEGES;"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller01' IDENTIFIED BY '"openstack"'";
mysql -uroot -popenstack -h 192.168.10.100 -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
```


```
# Configure haproxy for galera
# On all controller nodes, append the following to /etc/haproxy/haproxy.cfg

cat /etc/haproxy/haproxy.cfg
listen galera_cluster
bind 192.168.10.100:3306
balance source
#option mysql-check user haproxy_check
server controller01 192.168.10.101:3306 check port 9200 inter 2000 rise 2 fall 5
server controller02 192.168.10.102:3306 check port 9200 inter 2000 rise 2 fall 5
server controller03 192.168.10.103:3306 check port 9200 inter 2000 rise 2 fall 5


# Copy the configuration file to the other controller nodes
scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/
```
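
The galera_cluster backends above are health-checked on port 9200, which MariaDB itself does not serve; this is normally the clustercheck script exposed through xinetd (xinetd was installed together with MariaDB-server earlier, but this piece is not shown in the document). A minimal sketch, assuming a clustercheck script (for example the community Percona clustercheck script) is installed at /usr/bin/clustercheck and its database check user has been created:

```
# On all controller nodes: register the check port and expose clustercheck via xinetd
grep -q mysqlchk /etc/services || echo "mysqlchk 9200/tcp  # Galera cluster check" >> /etc/services

cat > /etc/xinetd.d/mysqlchk <<'EOF'
service mysqlchk
{
    disable        = no
    flags          = REUSE
    socket_type    = stream
    port           = 9200
    wait           = no
    user           = nobody
    server         = /usr/bin/clustercheck
    log_on_failure += USERID
    only_from      = 0.0.0.0/0
    per_source     = UNLIMITED
}
EOF

systemctl enable xinetd
systemctl restart xinetd

# Expect an HTTP 200 response when the local node is synced
curl -s -i http://192.168.10.101:9200 | head -n 1
```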


```
# Script to restart the pacemaker/corosync cluster
vim restart-pcs-cluster.sh
#!/bin/sh
pcs cluster stop --all
sleep 10
#ps aux|grep "pcs cluster stop --all"|grep -v grep|awk '{print $2 }'|xargs kill
for i in 01 02 03; do ssh controller$i pcs cluster kill; done
pcs cluster stop --all
pcs cluster start --all
sleep 5
watch -n 0.5 pcs resource
echo "pcs resource"
pcs resource
pcs resource|grep Stop
pcs resource|grep FAILED


# Run the script
bash restart-pcs-cluster.sh 
```

##### 9. Install and Configure the RabbitMQ Cluster

```
# All controller nodes
yum install -y rabbitmq-server


# Copy the Erlang cookie from controller01 to the other controller nodes
scp /var/lib/rabbitmq/.erlang.cookie root@controller02:/var/lib/rabbitmq/.erlang.cookie
scp /var/lib/rabbitmq/.erlang.cookie root@controller03:/var/lib/rabbitmq/.erlang.cookie

# On the nodes other than controller01, fix ownership and permissions
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie


# Start the service
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

# Check the cluster status on any controller node
rabbitmqctl cluster_status

# On the nodes other than controller01, join the cluster
rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@controller01
rabbitmqctl start_app
rabbitmqctl start_app


# On any node, set the ha-mode policy
rabbitmqctl cluster_status;
rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'

# On any node, create the openstack user
rabbitmqctl add_user openstack openstack
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```
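
A quick sanity check of the RabbitMQ cluster from any controller: all three nodes should appear under running_nodes, the ha-all policy should be listed, and the openstack user should exist.

```
rabbitmqctl cluster_status
rabbitmqctl list_policies
rabbitmqctl list_users
```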

##### 10. Install and Configure Memcached

```
yum install -y memcached

# Configuration on controller01
cat /etc/sysconfig/memcached 
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.10.101,::1"

# Configuration on controller02
cat /etc/sysconfig/memcached 
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.10.102,::1"

# Configuration on controller03
cat /etc/sysconfig/memcached 
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.10.103,::1"

# Start the service on all nodes
systemctl enable memcached.service
systemctl start memcached.service
```
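
A small check that each memcached instance answers on its own address (a sketch using nc; the trailing `quit` makes memcached close the connection so the loop does not hang):

```
for ip in 192.168.10.101 192.168.10.102 192.168.10.103; do
    echo "== $ip =="
    printf 'stats\r\nquit\r\n' | nc "$ip" 11211 | head -n 5
done
```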

#### 3. Install and Configure the OpenStack Services


```
# Install the OpenStack base packages on all controller and compute nodes
yum upgrade -y
yum install -y python-openstackclient openstack-selinux openstack-utils
```

##### 1. Install OpenStack Identity (Keystone)

```

# Create the keystone database on any node
mysql -uroot -popenstack -e "CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '"keystone"';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '"keystone"';
FLUSH PRIVILEGES;"


# Install the keystone packages on all controller nodes
yum install -y openstack-keystone httpd mod_wsgi

# Generate a temporary admin token on any node
openssl rand -hex 10
8464d030a1f7ac3f7207

# Edit the keystone configuration file
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token 8464d030a1f7ac3f7207
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:keystone@controller/keystone
#openstack-config --set /etc/keystone/keystone.conf token provider fernet

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_durable_queues true

# Copy the configuration file to the other controller nodes
scp /etc/keystone/keystone.conf controller02:/etc/keystone/keystone.conf
scp /etc/keystone/keystone.conf controller03:/etc/keystone/keystone.conf


# Run the matching line on each controller to set ServerName:
sed -i -e 's#\#ServerName www.example.com:80#ServerName '"controller01"'#g' /etc/httpd/conf/httpd.conf
sed -i -e 's#\#ServerName www.example.com:80#ServerName '"controller02"'#g' /etc/httpd/conf/httpd.conf
sed -i -e 's#\#ServerName www.example.com:80#ServerName '"controller03"'#g' /etc/httpd/conf/httpd.conf

 

# Configuration on controller01
vim /etc/httpd/conf.d/wsgi-keystone.conf
Listen 192.168.10.101:5000
Listen 192.168.10.101:35357
<VirtualHost 192.168.10.101:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>

<VirtualHost 192.168.10.101:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>

# Configuration on controller02
vim /etc/httpd/conf.d/wsgi-keystone.conf
Listen 192.168.10.102:5000
Listen 192.168.10.102:35357
<VirtualHost 192.168.10.102:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>

<VirtualHost 192.168.10.102:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>

# Configuration on controller03
vim /etc/httpd/conf.d/wsgi-keystone.conf
Listen 192.168.10.103:5000
Listen 192.168.10.103:35357
<VirtualHost 192.168.10.103:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>

<VirtualHost 192.168.10.103:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>

 

 

# Add the haproxy configuration
vim /etc/haproxy/haproxy.cfg
listen keystone_admin_cluster
bind 192.168.10.100:35357
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.10.101:35357 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:35357 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:35357 check inter 2000 rise 2 fall 5
listen keystone_public_internal_cluster
bind 192.168.10.100:5000
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.10.101:5000 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:5000 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:5000 check inter 2000 rise 2 fall 5

# Copy the haproxy configuration to the other controller nodes
scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg

# [Any node] Populate the database
su -s /bin/sh -c "keystone-manage db_sync" keystone


# [Any node / controller01] Initialize the Fernet keys and share them with the other nodes (not used here, since the admin_token provider is kept)
#keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

# On the other controller nodes
#mkdir -p /etc/keystone/fernet-keys/

# On controller01
#scp /etc/keystone/fernet-keys/* root@controller02:/etc/keystone/fernet-keys/
#scp /etc/keystone/fernet-keys/* root@controller03:/etc/keystone/fernet-keys/

# On the other controller nodes
chown keystone:keystone /etc/keystone/fernet-keys/*

# [Any node] Add the pacemaker resource; the OpenStack resources are independent of the haproxy resource and can run active/active
# interleave=true lets the clone instances start/stop independently: it changes the order constraints between clones so each instance can start/stop as soon as it is ready, without waiting for instances on other nodes.
# With interleave=false, the order constraints are evaluated cluster-wide, so an instance is affected by the other nodes; with true, each node is handled independently.
pcs resource create openstack-keystone systemd:httpd --clone interleave=true
bash restart-pcs-cluster.sh

# On any node, export the temporary token
export OS_TOKEN=8464d030a1f7ac3f7207
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

# [Any node] Create the service entity and API endpoints
openstack service create --name keystone --description "OpenStack Identity" identity

openstack endpoint create --region RegionOne identity public http://controller:5000/v3
openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
openstack endpoint create --region RegionOne identity admin http://controller:35357/v3

# [Any node] Create the project and users
openstack domain create --description "Default Domain" default
openstack project create --domain default --description "Admin Project" admin
openstack user create --domain default --password admin admin
openstack role create admin
openstack role add --project admin --user admin admin

### [Any node] Create the service project
openstack project create --domain default --description "Service Project" service

# On any node, create the demo project and user
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password demo demo
openstack role create user
openstack role add --project demo --user demo user


# Generate the keystonerc_admin script
echo "export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1='[\u@\h \W(keystone_admin)]\$ '
">/root/keystonerc_admin
chmod +x /root/keystonerc_admin

# Generate the keystonerc_demo script
echo "export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1='[\u@\h \W(keystone_demo)]\$ '
">/root/keystonerc_demo
chmod +x /root/keystonerc_demo


source keystonerc_admin
### check
openstack token issue

source keystonerc_demo
### check
openstack token issue
```
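
In addition to `openstack token issue`, the Identity API can be checked directly through the haproxy VIP on both ports; each should return the v3 version document:

```
curl -s http://controller:5000/v3 | python -m json.tool
curl -s http://controller:35357/v3 | python -m json.tool
```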

##### 2. Install the OpenStack Image (Glance) Cluster

```
# [Any node] Create the database
mysql -uroot -popenstack -e "CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '"glance"';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '"glance"';
FLUSH PRIVILEGES;"

 

# [Any node] Create the user, service, and endpoints
source keystonerc_admin 
openstack user create --domain default --password glance glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

# Install the glance packages on all controller nodes
yum install -y openstack-glance

# [All controller nodes] Configure /etc/glance/glance-api.conf
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:glance@controller/glance

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password glance

openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone

openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_host controller
openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host controller01

# [All controller nodes] Configure /etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:glance@controller/glance

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password glance

openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_durable_queues true

openstack-config --set /etc/glance/glance-registry.conf DEFAULT registry_host controller
openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host controller01

scp /etc/glance/glance-api.conf controller02:/etc/glance/glance-api.conf
scp /etc/glance/glance-api.conf controller03:/etc/glance/glance-api.conf
# Change bind_host to controller02 / controller03 on the corresponding node

scp /etc/glance/glance-registry.conf controller02:/etc/glance/glance-registry.conf
scp /etc/glance/glance-registry.conf controller03:/etc/glance/glance-registry.conf
# Change bind_host to controller02 / controller03 on the corresponding node

vim /etc/haproxy/haproxy.cfg
# Add the following sections
listen glance_api_cluster
bind 192.168.10.100:9292
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.10.101:9292 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:9292 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:9292 check inter 2000 rise 2 fall 5
listen glance_registry_cluster
bind 192.168.10.100:9191
balance source
option tcpka
option tcplog
server controller01 192.168.10.101:9191 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:9191 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:9191 check inter 2000 rise 2 fall 5

scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg

# [Any node] Populate the database
su -s /bin/sh -c "glance-manage db_sync" glance

# [Any node] Add the pacemaker resources
pcs resource create openstack-glance-registry systemd:openstack-glance-registry --clone interleave=true
pcs resource create openstack-glance-api systemd:openstack-glance-api --clone interleave=true
# The next two lines start openstack-keystone-clone first, then openstack-glance-registry-clone, then openstack-glance-api-clone
pcs constraint order start openstack-keystone-clone then openstack-glance-registry-clone
pcs constraint order start openstack-glance-registry-clone then openstack-glance-api-clone
# api is colocated with registry: if registry cannot start, api will not start either
pcs constraint colocation add openstack-glance-api-clone with openstack-glance-registry-clone

# Restart the pacemaker cluster from any node
bash restart-pcs-cluster.sh

# Upload a test image
openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack image list
```
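
One caveat with this glance setup: the file backend stores images under /var/lib/glance/images/ on whichever controller happened to serve the upload, and that directory is not shared between nodes, so downloads can fail when haproxy sends the request to a different controller. In production a shared backend (NFS, Ceph, etc.) is normally used. A quick way to see which node is actually holding the test image:

```
for host in controller01 controller02 controller03; do
    echo "== $host =="
    ssh "$host" ls -lh /var/lib/glance/images/
done
```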


##### 3. Install the OpenStack Compute Cluster (Controller Nodes)

```
# Install the packages on all controller nodes
yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler


# [Any node] Create the databases
mysql -uroot -popenstack -e "CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '"nova"';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '"nova"';
CREATE DATABASE nova_api;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '"nova"';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '"nova"';
FLUSH PRIVILEGES;"

# [Any node] Create the user, service, and endpoints
source keystonerc_admin
openstack user create --domain default --password nova nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s

# [All controller nodes] Configure nova in /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
# openstack-config --set /etc/nova/nova.conf DEFAULT memcached_servers controller01:11211,controller02:11211,controller03:11211

openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:nova@controller/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:nova@controller/nova

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_durable_queues true
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password openstack

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password nova

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.10.101
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 192.168.10.101
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.10.101
openstack-config --set /etc/nova/nova.conf vnc novncproxy_host 192.168.10.101
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen 192.168.10.101
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen 192.168.10.101

scp /etc/nova/nova.conf controller02:/etc/nova/nova.conf
scp /etc/nova/nova.conf controller03:/etc/nova/nova.conf
# On the other nodes, change my_ip, vncserver_listen, vncserver_proxyclient_address, novncproxy_host, osapi_compute_listen and metadata_listen accordingly (controller02 below, controller03 after the separator)
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.10.102
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 192.168.10.102
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.10.102
openstack-config --set /etc/nova/nova.conf vnc novncproxy_host 192.168.10.102
openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen 192.168.10.102
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen 192.168.10.102


################################
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.10.103
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 192.168.10.103
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.10.103
openstack-config --set /etc/nova/nova.conf vnc novncproxy_host 192.168.10.103
openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen 192.168.10.103
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen 192.168.10.103
##################################
# Configure haproxy
vim /etc/haproxy/haproxy.cfg
listen nova_compute_api_cluster
bind 192.168.10.100:8774
balance source
option tcpka
option httpchk
option tcplog

server controller01 192.168.10.101:8774 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:8774 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:8774 check inter 2000 rise 2 fall 5
listen nova_metadata_api_cluster
bind 192.168.10.100:8775
balance source
option tcpka
option tcplog
server controller01 192.168.10.101:8775 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:8775 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:8775 check inter 2000 rise 2 fall 5
listen nova_vncproxy_cluster
bind 192.168.10.100:6080
balance source
option tcpka
option tcplog
server controller01 192.168.10.101:6080 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:6080 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:6080 check inter 2000 rise 2 fall 5

scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg


# [Any node] Populate the databases
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova

# [Any node] Add the pacemaker resources
pcs resource create openstack-nova-consoleauth systemd:openstack-nova-consoleauth --clone interleave=true
pcs resource create openstack-nova-novncproxy systemd:openstack-nova-novncproxy --clone interleave=true
pcs resource create openstack-nova-api systemd:openstack-nova-api --clone interleave=true
pcs resource create openstack-nova-scheduler systemd:openstack-nova-scheduler --clone interleave=true
pcs resource create openstack-nova-conductor systemd:openstack-nova-conductor --clone interleave=true
# The order constraints below start openstack-keystone-clone first, then openstack-nova-consoleauth-clone,
# then openstack-nova-novncproxy-clone, openstack-nova-api-clone, openstack-nova-scheduler-clone,
# and finally openstack-nova-conductor-clone.
# The colocation constraints mean consoleauth anchors novncproxy's placement: if consoleauth stops,
# novncproxy stops as well, and so on down the chain.
pcs constraint order start openstack-keystone-clone then openstack-nova-consoleauth-clone

pcs constraint order start openstack-nova-consoleauth-clone then openstack-nova-novncproxy-clone
pcs constraint colocation add openstack-nova-novncproxy-clone with openstack-nova-consoleauth-clone

pcs constraint order start openstack-nova-novncproxy-clone then openstack-nova-api-clone
pcs constraint colocation add openstack-nova-api-clone with openstack-nova-novncproxy-clone

pcs constraint order start openstack-nova-api-clone then openstack-nova-scheduler-clone
pcs constraint colocation add openstack-nova-scheduler-clone with openstack-nova-api-clone

pcs constraint order start openstack-nova-scheduler-clone then openstack-nova-conductor-clone
pcs constraint colocation add openstack-nova-conductor-clone with openstack-nova-scheduler-clone

bash restart-pcs-cluster.sh

### [Any node] Test
source keystonerc_admin
openstack compute service list
```

##### 4. Install and Configure the Neutron Cluster (Controller Nodes)


```
# [Any node] Create the database
mysql -uroot -popenstack -e "CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '"neutron"';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '"neutron"';
FLUSH PRIVILEGES;"

# [Any node] Create the user, service, and endpoints
source /root/keystonerc_admin
openstack user create --domain default --password neutron neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

# All controller nodes
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch ebtables


# [All controller nodes] Configure the neutron server, /etc/neutron/neutron.conf (bind_host is node-specific)
openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host 192.168.10.101
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:neutron@controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_durable_queues true
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password openstack

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password nova

openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp


# [All controller nodes] Configure the ML2 plugin, /etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks external

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges external:1:4090
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver iptables_hybrid

# [All controller nodes] Configure the Open vSwitch agent, /etc/neutron/plugins/ml2/openvswitch_agent.ini; note that local_ip uses the second NIC (tunnel network) and is node-specific

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_ipset True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver iptables_hybrid
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs integration_bridge br-int
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs tunnel_bridge br-tun
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip 10.0.0.1
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings external:br-ex

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population True

# [All controller nodes] Configure the L3 agent, /etc/neutron/l3_agent.ini
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge

# [All controller nodes] Configure the DHCP agent, /etc/neutron/dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True

# [All controller nodes] Configure the metadata agent, /etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip 192.168.10.100
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret openstack

# [All controller nodes] Configure the nova/neutron integration, /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutron

openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret openstack

# [All controller nodes] Configure L3 agent HA, /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT l3_ha True
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_automatic_l3agent_failover True
openstack-config --set /etc/neutron/neutron.conf DEFAULT max_l3_agents_per_router 3
openstack-config --set /etc/neutron/neutron.conf DEFAULT min_l3_agents_per_router 2

# [All controller nodes] Configure DHCP agent HA, /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 3

# [All controller nodes] Enable and start the Open vSwitch (OVS) service; the bridge and port are created below
systemctl enable openvswitch.service
systemctl start openvswitch.service

# [All controller nodes] Create the ML2 plugin configuration symlink
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

vim /etc/haproxy/haproxy.cfg
listen neutron_api_cluster
bind 192.168.10.100:9696
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.10.101:9696 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:9696 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:9696 check inter 2000 rise 2 fall 5

scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg


# Back up the original interface configuration file
cp /etc/sysconfig/network-scripts/ifcfg-ens160 /etc/sysconfig/network-scripts/bak-ifcfg-ens160
echo "DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=$(cat /etc/sysconfig/network-scripts/ifcfg-ens160 |grep IPADDR|awk -F '=' '{print $2}')
NETMASK=$(cat /etc/sysconfig/network-scripts/ifcfg-ens160 |grep NETMASK|awk -F '=' '{print $2}')
GATEWAY=$(cat /etc/sysconfig/network-scripts/ifcfg-ens160 |grep GATEWAY|awk -F '=' '{print $2}')
DNS1=$(cat /etc/sysconfig/network-scripts/ifcfg-ens160 |grep DNS1|awk -F '=' '{print $2}')
DNS2=218.2.2.2
ONBOOT=yes">/etc/sysconfig/network-scripts/ifcfg-br-ex

echo "TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
NAME=ens160
DEVICE=ens160
ONBOOT=yes">/etc/sysconfig/network-scripts/ifcfg-ens160

ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex ens160

systemctl restart network.service

# Copy the configuration files to the other controller nodes and adjust the node-specific values (bind_host, local_ip)
scp /etc/neutron/neutron.conf controller02:/etc/neutron/neutron.conf
scp /etc/neutron/neutron.conf controller03:/etc/neutron/neutron.conf

scp /etc/neutron/plugins/ml2/ml2_conf.ini controller02:/etc/neutron/plugins/ml2/ml2_conf.ini
scp /etc/neutron/plugins/ml2/ml2_conf.ini controller03:/etc/neutron/plugins/ml2/ml2_conf.ini

scp /etc/neutron/plugins/ml2/openvswitch_agent.ini controller02:/etc/neutron/plugins/ml2/openvswitch_agent.ini
scp /etc/neutron/plugins/ml2/openvswitch_agent.ini controller03:/etc/neutron/plugins/ml2/openvswitch_agent.ini

scp /etc/neutron/l3_agent.ini controller02:/etc/neutron/l3_agent.ini
scp /etc/neutron/l3_agent.ini controller03:/etc/neutron/l3_agent.ini

scp /etc/neutron/dhcp_agent.ini controller02:/etc/neutron/dhcp_agent.ini
scp /etc/neutron/dhcp_agent.ini controller03:/etc/neutron/dhcp_agent.ini

scp /etc/neutron/metadata_agent.ini controller02:/etc/neutron/metadata_agent.ini
scp /etc/neutron/metadata_agent.ini controller03:/etc/neutron/metadata_agent.ini

 

# [Any node] Populate the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

# [Any node] Add the pacemaker resources
pcs resource create neutron-server systemd:neutron-server op start timeout=90 --clone interleave=true
pcs constraint order start openstack-keystone-clone then neutron-server-clone

# Globally unique clone: globally-unique=true. Each clone instance is distinct: an instance on one node differs from those on other nodes, and two instances on the same node also differ.
# clone-max: maximum number of clone copies in the cluster (defaults to the number of nodes); clone-node-max: maximum copies per node (default 1)
pcs resource create neutron-scale ocf:neutron:NeutronScale --clone globally-unique=true clone-max=3 interleave=true
pcs constraint order start neutron-server-clone then neutron-scale-clone

pcs resource create neutron-ovs-cleanup ocf:neutron:OVSCleanup --clone interleave=true
pcs resource create neutron-netns-cleanup ocf:neutron:NetnsCleanup --clone interleave=true
pcs resource create neutron-openvswitch-agent systemd:neutron-openvswitch-agent --clone interleave=true
pcs resource create neutron-dhcp-agent systemd:neutron-dhcp-agent --clone interleave=true
pcs resource create neutron-l3-agent systemd:neutron-l3-agent --clone interleave=true
pcs resource create neutron-metadata-agent systemd:neutron-metadata-agent --clone interleave=true

pcs constraint order start neutron-scale-clone then neutron-ovs-cleanup-clone
pcs constraint colocation add neutron-ovs-cleanup-clone with neutron-scale-clone
pcs constraint order start neutron-ovs-cleanup-clone then neutron-netns-cleanup-clone
pcs constraint colocation add neutron-netns-cleanup-clone with neutron-ovs-cleanup-clone
pcs constraint order start neutron-netns-cleanup-clone then neutron-openvswitch-agent-clone
pcs constraint colocation add neutron-openvswitch-agent-clone with neutron-netns-cleanup-clone
pcs constraint order start neutron-openvswitch-agent-clone then neutron-dhcp-agent-clone
pcs constraint colocation add neutron-dhcp-agent-clone with neutron-openvswitch-agent-clone
pcs constraint order start neutron-dhcp-agent-clone then neutron-l3-agent-clone
pcs constraint colocation add neutron-l3-agent-clone with neutron-dhcp-agent-clone
pcs constraint order start neutron-l3-agent-clone then neutron-metadata-agent-clone
pcs constraint colocation add neutron-metadata-agent-clone with neutron-l3-agent-clone

bash restart-pcs-cluster.sh

# [Any node] Test
source keystonerc_admin
neutron ext-list
neutron agent-list
ovs-vsctl show
neutron agent-list
```

##### 5. Install and Configure the Dashboard Cluster


```
# Install on all controller nodes
yum install -y openstack-dashboard


# [All controller nodes] Edit /etc/openstack-dashboard/local_settings
sed -i -e 's#OPENSTACK_HOST =.*#OPENSTACK_HOST = "'"192.168.10.101"'"#g' -e "s#ALLOWED_HOSTS.*#ALLOWED_HOSTS = ['*',]#g" -e "s#^CACHES#SESSION_ENGINE = 'django.contrib.sessions.backends.cache'\nCACHES#g" -e "s#locmem.LocMemCache'#memcached.MemcachedCache',\n 'LOCATION' : [ 'controller01:11211', 'controller02:11211', 'controller03:11211', ]#g" -e 's#^OPENSTACK_KEYSTONE_URL =.*#OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST#g' -e "s/^#OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT.*/OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True/g" \
-e 's/^#OPENSTACK_API_VERSIONS.*/OPENSTACK_API_VERSIONS = {\n "identity": 3,\n "image": 2,\n "volume": 2,\n}\n#OPENSTACK_API_VERSIONS = {/g' -e "s/^#OPENSTACK_KEYSTONE_DEFAULT_DOMAIN.*/OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'/g" -e 's#^OPENSTACK_KEYSTONE_DEFAULT_ROLE.*#OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"#g' -e "s#^LOCAL_PATH.*#LOCAL_PATH = '/var/lib/openstack-dashboard'#g" -e "s#^SECRET_KEY.*#SECRET_KEY = '4050e76a15dfb7755fe3'#g" -e "s#'enable_ha_router'.*#'enable_ha_router': True,#g" -e 's#TIME_ZONE = .*#TIME_ZONE = "Asia/Shanghai"#g' /etc/openstack-dashboard/local_settings

scp /etc/openstack-dashboard/local_settings controller02:/etc/openstack-dashboard/local_settings
scp /etc/openstack-dashboard/local_settings controller03:/etc/openstack-dashboard/local_settings

# On controller02
sed -i -e 's#OPENSTACK_HOST =.*#OPENSTACK_HOST = "'"192.168.10.102"'"#g' /etc/openstack-dashboard/local_settings
# On controller03
sed -i -e 's#OPENSTACK_HOST =.*#OPENSTACK_HOST = "'"192.168.10.103"'"#g' /etc/openstack-dashboard/local_settings

 

# [All controller nodes]
echo "COMPRESS_OFFLINE = True" >> /etc/openstack-dashboard/local_settings
python /usr/share/openstack-dashboard/manage.py compress

# [All controller nodes] Make httpd listen on the node's own (br-ex) IP
sed -i -e "s/^Listen.*/Listen $(ip addr show dev br-ex scope global | grep 'inet ' | sed -e 's#.*inet ##g' -e 's#/.*##g' | head -n 1):80/g" /etc/httpd/conf/httpd.conf

 

vim /etc/haproxy/haproxy.cfg
listen dashboard_cluster
bind 192.168.10.100:80
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.10.101:80 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:80 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:80 check inter 2000 rise 2 fall 5

scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg
```
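
Horizon runs inside the same httpd instance that pacemaker already manages as openstack-keystone-clone, so restarting that clone picks up the new settings; afterwards the dashboard should answer through the VIP (a minimal check, assuming the default /dashboard URL of the CentOS package):

```
# Restart the pacemaker-managed httpd to load the dashboard configuration
pcs resource restart openstack-keystone-clone
# The login page should be reachable through the haproxy VIP
curl -sI http://192.168.10.100/dashboard/ | head -n 1
```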


##### 6. Install and Configure Cinder


```
# All controller nodes
yum install -y openstack-cinder

# [Any node] Create the database
mysql -uroot -popenstack -e "CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '"cinder"';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '"cinder"';
FLUSH PRIVILEGES;"

# [Any node] Load the admin credentials
. /root/keystonerc_admin

# [Any node] Create the user and services
openstack user create --domain default --password cinder cinder
openstack role add --project service --user cinder admin
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

# Create the cinder service API endpoints
openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s

# [All controller nodes] Edit /etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:cinder@controller/cinder
openstack-config --set /etc/cinder/cinder.conf database max_retries -1

openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password cinder

openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_durable_queues true
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password openstack

openstack-config --set /etc/cinder/cinder.conf DEFAULT control_exchange cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen $(ip addr show dev br-ex scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g' | head -n 1)
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip $(ip addr show dev br-ex scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g' | head -n 1)
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://controller:9292


# [Any node] populate the database
su -s /bin/sh -c "cinder-manage db sync" cinder

# [All controller nodes] point nova at cinder
openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne

# Restart nova-api (managed by pacemaker)
# pcs resource restart openstack-nova-api-clone


# Install and configure the storage nodes; here the controller nodes double as storage nodes
# All storage (controller) nodes
yum install lvm2 -y
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

yum install openstack-cinder targetcli python-keystone -y


# [All controller nodes] add the LVM backend settings
openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
openstack-config --set /etc/cinder/cinder.conf lvm iscsi_protocol iscsi
openstack-config --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm
openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm

# Add the cinder-api section to /etc/haproxy/haproxy.cfg
vim /etc/haproxy/haproxy.cfg
listen cinder_api_cluster
bind 192.168.10.100:8776
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.10.101:8776 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:8776 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:8776 check inter 2000 rise 2 fall 5

scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg


# [Any node] add the pacemaker resources
pcs resource create openstack-cinder-api systemd:openstack-cinder-api --clone interleave=true
pcs resource create openstack-cinder-scheduler systemd:openstack-cinder-scheduler --clone interleave=true
pcs resource create openstack-cinder-volume systemd:openstack-cinder-volume

pcs constraint order start openstack-keystone-clone then openstack-cinder-api-clone
pcs constraint order start openstack-cinder-api-clone then openstack-cinder-scheduler-clone
pcs constraint colocation add openstack-cinder-scheduler-clone with openstack-cinder-api-clone
pcs constraint order start openstack-cinder-scheduler-clone then openstack-cinder-volume
pcs constraint colocation add openstack-cinder-volume with openstack-cinder-scheduler-clone

# Restart the cluster
bash restart-pcs-cluster.sh
# [Any node] verify
. /root/keystonerc_admin
cinder service-list
```
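Beyond cinder service-list, an end-to-end check of the LVM backend is to create and remove a small volume; a sketch (the volume name test01 is arbitrary):

```
. /root/keystonerc_admin
# Create a 1 GB test volume on the lvm backend and wait for status "available"
openstack volume create --size 1 test01
openstack volume list
# Clean up afterwards
openstack volume delete test01
```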
##### 7、Install and configure the ceilometer and aodh clusters
##### 7.1 Install and configure the ceilometer cluster

Honestly, I have no energy left to rant about this project, so I am not writing it up.

##### 7.2 Install and configure the aodh cluster

Honestly, I have no energy left to rant about this project, so I am not writing it up.


#### 四、Install and configure the compute nodes
##### 4.1 OpenStack Compute service
```

# All compute nodes
yum install -y openstack-nova-compute

# Edit /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $(ip addr show dev ens160 scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g')
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT memcached_servers controller01:11211,controller02:11211,controller03:11211

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_durable_queues true
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password openstack

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password nova

openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp

openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address $(ip addr show dev ens160 scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g')
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://192.168.10.100:6080/vnc_auto.html

openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf libvirt virt_type $(count=$(egrep -c '(vmx|svm)' /proc/cpuinfo); if [ "$count" -eq 0 ]; then echo "qemu"; else echo "kvm"; fi)


# Open the libvirt listening ports needed for live migration
sed -i -e "s#\#listen_tls *= *0#listen_tls = 0#g" /etc/libvirt/libvirtd.conf
sed -i -e "s#\#listen_tcp *= *1#listen_tcp = 1#g" /etc/libvirt/libvirtd.conf
sed -i -e "s#\#auth_tcp *= *\"sasl\"#auth_tcp = \"none\"#g" /etc/libvirt/libvirtd.conf
sed -i -e "s#\#LIBVIRTD_ARGS *= *\"--listen\"#LIBVIRTD_ARGS=\"--listen\"#g" /etc/sysconfig/libvirtd

# Start the services
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
```
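Once nova-compute is up, the new hypervisors should register with the controllers; a minimal check from any controller node, using the admin credentials file from earlier:

```
. /root/keystonerc_admin
# compute01 and compute02 should be listed with state "up"
openstack compute service list
nova hypervisor-list
```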


##### 4.2 OpenStack Network service


```
# Install the components
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch ebtables ipset


# Edit /etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_durable_queues true
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password openstack

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron

openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

### Configure the Open vSwitch agent, /etc/neutron/plugins/ml2/openvswitch_agent.ini; note: local_ip uses the second NIC (ens192)
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_ipset True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver iptables_hybrid

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip $(ip addr show dev ens192 scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g')

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population True

### Configure nova to use neutron, /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutron

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

systemctl restart openstack-nova-compute.service
systemctl start openvswitch.service
systemctl restart neutron-openvswitch-agent.service

systemctl enable openvswitch.service
systemctl enable neutron-openvswitch-agent.service
```
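After the agent starts, it should report in on the controllers; a minimal check (neutron agent-list on any controller, ovs-vsctl show on the compute node itself):

```
. /root/keystonerc_admin
# Each compute node should show an alive "Open vSwitch agent"
neutron agent-list

# On the compute node: the agent creates br-int and br-tun automatically
ovs-vsctl show
```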

#### 五、Miscellaneous fixes
On the controller nodes:

```
GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller01' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller02' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller03' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.10.101' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.10.102' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.10.103' IDENTIFIED BY 'openstack';
```
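To confirm the grants took effect, a quick check (assuming the root password openstack used above):

```
# List the root accounts and the hosts they may connect from
mysql -uroot -popenstack -e "SELECT user,host FROM mysql.user WHERE user='root';"
```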

RabbitMQ cluster:

```
/sbin/service rabbitmq-server stop
/sbin/service rabbitmq-server start
```
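After a restart it is worth checking that the node rejoined the cluster:

```
# All three controllers should appear under running_nodes
rabbitmqctl cluster_status
```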

 

```
# Set the default resource operation timeout
pcs resource op defaults timeout=90s

# Clear failed actions for a resource
pcs resource cleanup openstack-keystone-clone
```
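A quick way to see whether the cleanup worked and all clones are started:

```
# Any remaining failed actions are listed at the end of the output
pcs status
```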

 


##### MariaDB (Galera) cluster troubleshooting

```
Symptom: a node will not start; tailf /var/log/messages shows the following error:
[ERROR] WSREP: gcs/src/gcs_group.cpp:group_post_state_exchange():321
Fix: rm -f /var/lib/mysql/grastate.dat
then restart the service on that node.
```
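The fix above is for a single node that refuses to rejoin. If the whole Galera cluster is down, one node has to be bootstrapped first; a sketch for MariaDB 10.1 (pick the node with the highest seqno in grastate.dat; if mariadb is managed by pacemaker in your setup, use the corresponding pcs resource instead of systemctl):

```
# On the node chosen to bootstrap the cluster
cat /var/lib/mysql/grastate.dat      # check seqno before deciding
galera_new_cluster
# Then start MariaDB normally on the remaining nodes
systemctl start mariadb
```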


#### 六、Add DVR support
##### 6.1 Controller node configuration

```
vim /etc/neutron/neutron.conf
[DEFAULT]
router_distributed = true

vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
mechanism_drivers = openvswitch,linuxbridge,l2population

vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
enable_distributed_routing = True
l2_population = True

vim /etc/neutron/l3_agent.ini 
[DEFAULT]
agent_mode = dvr_snat


vim /etc/openstack-dashboard/local_settings
# in the OPENSTACK_NEUTRON_NETWORK dict:
'enable_distributed_router': True,

Restart the neutron-related services on the controller nodes, then restart httpd.
```
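A sketch of that restart, assuming the default RDO systemd unit names; if the neutron services are managed by pacemaker, restart the corresponding clone resources instead:

```
# Controller nodes: pick up router_distributed, l2_population and dvr_snat
systemctl restart neutron-server neutron-l3-agent neutron-openvswitch-agent
systemctl restart httpd
```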


##### 6.2 Compute node configuration

```
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
mechanism_drivers = openvswitch,l2population


vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
enable_distributed_routing = True
l2_population = True

vim /etc/neutron/l3_agent.ini 
[DEFAULT]
interface_driver = openvswitch
external_network_bridge =
agent_mode = dvr


Restart the neutron-related services

ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex ens160
openstack-service restart neutron
```
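To confirm DVR is active, check the agents from any controller; a sketch (the agent id below is whatever neutron agent-list prints for a compute node's L3 agent):

```
. /root/keystonerc_admin
# Compute nodes should now run an L3 agent as well
neutron agent-list
# agent_mode should be reported as "dvr" on the compute nodes
neutron agent-show <L3_AGENT_ID>
```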


On the RabbitMQ connection / file-descriptor limit:


```
cat /etc/security/limits.d/20-nproc.conf 
# Default limit for number of users processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
*       soft    nproc     4096
root    soft    nproc     unlimited
*       soft    nofile    10240
*       hard    nofile    10240

ulimit -n 10240

# Raise the file-descriptor limit of the rabbitmq-server unit
vim /usr/lib/systemd/system/rabbitmq-server.service
[Service]
LimitNOFILE=10240        # add this parameter to the unit file

systemctl daemon-reload
systemctl restart rabbitmq-server.service

rabbitmqctl status
{file_descriptors,[{total_limit,10140},
{total_used,2135},
{sockets_limit,9124},
{sockets_used,2133}]}
```


#### On highly available routers
HA or DVR (distributed) routers can only be created from the system administrator pages.


#### On sharing the glance images directory
Share the /var/lib/glance/images image directory used by the controller nodes over NFS; the steps below export /opt/glance/images from the NFS server (10.128.247.153) and mount it on the controllers.


yum install -y nfs-utils rpcbind
mkdir /opt/glance/images/ -p
vim /etc/exports
/opt/glance/images/ 10.128.246.0/23(rw,no_root_squash,no_all_squash,sync)

exportfs -r
systemctl enable rpcbind.service
systemctl start rpcbind.service
systemctl enable nfs-server.service 
systemctl start nfs-server.service


# Check the export from the two nova (compute) nodes
showmount -e 10.128.247.153

# Mount on the three controller nodes
mount -t nfs 10.128.247.153:/opt/glance/images/ /var/lib/glance/images/

chown -R glance.glance /opt/glance/images/
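To make the mount persistent across reboots, an /etc/fstab entry can be added on each controller; a sketch reusing the NFS server address from above:

```
# /etc/fstab on each controller node
10.128.247.153:/opt/glance/images  /var/lib/glance/images  nfs  defaults,_netdev  0  0
```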


#### Creating an HA router as a regular user
```
neutron router-create router_demo --ha True
```
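To see which L3 agents host the router and which instance is active, a quick check:

```
# For an HA router one agent shows ha_state "active", the others "standby"
neutron l3-agent-list-hosting-router router_demo
```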

 
