OpenStack with GlusterFS Storage
1. Machine information
2. Preparation
2.1 Stop the NetworkManager service
2.2 Upload the repo file
2.3 Synchronize time on every machine
2.4 Disable SELinux and configure the firewall on every machine
2.5 Configure the hosts file on every machine
3. Deploy the OpenStack + GlusterFS environment
3.1 Install the GlusterFS components on every machine
3.2 Install packstack on YUN21 and deploy OpenStack
3.3 Create and mount the GlusterFS volume
3.4 Open the dashboard: upload an image, create networks, adjust quotas
3.5 Create and mount the glance and cinder volumes
4. System tuning
1. Machine information
Host    NIC    inet addr
YUN21   eth1   10.0.0.21
        eth2   192.168.0.121
        eth4   20.0.0.21
YUN22   eth2   192.168.0.122
        eth3   10.0.0.22
        eth7   20.0.0.22
YUN23   eth2   192.168.0.123
        eth3   10.0.0.23
        eth7   20.0.0.23
YUN24   eth0   192.168.0.124
        eth1   10.0.0.24
        eth6   20.0.0.24
All four machines run the desktop edition of CentOS 6.5.
(The desktop edition was chosen because hardware support varies between physical machines: on the Lenovo servers used here, for example, a desktop install recognizes the Intel 10GbE NICs without any additional drivers, which is not the case with a minimal install.)
2. Preparation
2.1 Stop the NetworkManager service
If NetworkManager is left running, the machines cannot ping one another. On a minimal CentOS install the service is not present, so this step can be skipped there.
Run the following on every machine:
# service NetworkManager stop
# chkconfig NetworkManager off
2.2 Upload the repo file
On every machine:
# yum makecache
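The content of the repo file is not shown in the original. A minimal sketch of what such a file (for example /etc/yum.repos.d/local.repo) might look like, assuming an internal mirror; the baseurl is a placeholder:
[local-base]
name=Local CentOS 6.5 mirror
baseurl=http://192.168.0.100/centos65/
enabled=1
gpgcheck=0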
2.3 Synchronize time on every machine
# yum install -y ntp ntpdate ntp-doc
Configure NTP:
# vi /etc/ntp.conf
Comment out the following lines to disable them:
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
Add one line:
server 192.168.0.100
# service ntpd start
# chkconfig ntpd on
# ntpdate -u 192.168.0.124
(The physical machine at 192.168.0.124 is the prepared NTP server.)
2.4 Disable SELinux and configure the firewall on every machine
# setenforce 0
# vi /etc/sysconfig/selinux
SELINUX=disabled
# vi /etc/sysconfig/iptables
Below the ssh rule, add:
-A INPUT -p tcp -m multiport --dports 24007:24047 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 38465:38485 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 16509 -j ACCEPT
# service iptables restart
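A quick way to confirm that both changes took effect (a verification step, not in the original):
# getenforce
# iptables -L INPUT -n | grep -E '111|24007|49152'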
2.5 Configure the hosts file on every machine
# vi /etc/hosts
Add:
192.168.0.121 YUN21
192.168.0.122 YUN22
192.168.0.123 YUN23
192.168.0.124 YUN24
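Name resolution and connectivity can then be verified on each machine (not part of the original steps):
# for h in YUN21 YUN22 YUN23 YUN24; do ping -c 1 $h >/dev/null && echo "$h ok"; done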
3. Deploy the OpenStack + GlusterFS environment
3.1 Install the GlusterFS components on every machine
# yum install -y glusterfs-server
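glusterd must be running before peers are probed in section 3.3, so presumably it is started and enabled at this point:
# service glusterd start
# chkconfig glusterd on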
3.2 Install packstack on YUN21 and deploy OpenStack
First update every machine:
# yum update -y && reboot
Because of differences between the desktop edition and a minimal install, two packages whose names begin with "google" were not found in the CentOS repositories during the update. They can be downloaded from the installation media repository with wget, installed locally, and the command rerun.
# yum install -y openstack-packstack
# packstack --gen-answer-file answers.txt
# vi answers.txt
Set the admin password and disable the demo project:
CONFIG_KEYSTONE_ADMIN_PW=openstack
CONFIG_PROVISION_DEMO=n
Adjust the network interfaces:
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
to
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_PUBIF=eth2
CONFIG_NOVA_NETWORK_PRIVIF=eth1
(These interface names depend on the physical environment and must be adapted accordingly. The two parameters containing "PRIVIF" refer to the internal network; the one containing "PUBIF" refers to the external network, i.e. the network from which floating IPs are allocated.)
Add the compute nodes:
CONFIG_COMPUTE_HOSTS=192.168.0.121
to
CONFIG_COMPUTE_HOSTS=192.168.0.121,192.168.0.122,192.168.0.123,192.168.0.124
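Before applying the answer file, the edits can be double-checked (a convenience step, not in the original):
# grep -E 'ADMIN_PW|PROVISION_DEMO|PRIVIF|PUBIF|COMPUTE_HOSTS' answers.txt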
# packstack --answer-file answers.txt
Configure the external bridge:
[root@YUN21 ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth2 ifcfg-eth2.bak
[root@YUN21 ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth2 /etc/sysconfig/network-scripts/ifcfg-br-ex
[root@YUN21 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
HWADDR=xx:xx:xx:xx:xx:xx
TYPE=OVSPort
OVS_BRIDGE=br-ex
DEVICETYPE=ovs
ONBOOT=yes
[root@YUN21 ~]# vi /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
TYPE=OVSBridge
DEVICETYPE=ovs
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.0.121
NETMASK=255.255.255.128
GATEWAY=10.231.29.1
[root@YUN21 ~]# vi /etc/neutron/plugin.ini
Add:
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-ex
[root@YUN21 ~]# service network restart
[root@YUN21 ~]# ifconfig
br-ex Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx
inet addr:192.168.0.121 Bcast:192.168.0.127 Mask:255.255.255.128
inet6 addr: fe80::49b:36ff:fed3:bb5e/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:1407 errors:0 dropped:0 overruns:0 frame:0
TX packets:856 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:309542 (302.2 KiB) TX bytes:171147 (167.1 KiB)
eth1 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx
inet addr:10.0.0.21 Bcast:10.0.0.255 Mask:255.255.255.0
inet6 addr: fe80::6e92:bfff:fe0b:de45/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:142 errors:0 dropped:0 overruns:0 frame:0
TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:10730 (10.4 KiB) TX bytes:1128 (1.1 KiB)
Memory:dfa20000-dfa3ffff
eth2 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx
inet6 addr: fe80::6e92:bfff:fe0b:de44/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:176062 errors:0 dropped:0 overruns:0 frame:0
TX packets:80147 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:231167565 (220.4 MiB) TX bytes:9536425 (9.0 MiB)
Memory:dfa00000-dfa1ffff
eth4 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx
inet addr:20.0.0.21 Bcast:20.0.0.255 Mask:255.255.255.0
inet6 addr: fe80::7a24:afff:fe85:3a32/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:670 (670.0 b)
Interrupt:68 Memory:fa000000-fa7fffff
Proceed only after br-ex has taken over the IP address previously assigned to eth2.
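One way to verify this, assuming the Open vSwitch tools installed by packstack are available:
[root@YUN21 ~]# ip addr show br-ex
[root@YUN21 ~]# ovs-vsctl show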
3.3 Create and mount the GlusterFS volume
[root@YUN21 ~]# service glusterd status
glusterd (pid 3124) is running...
On any one machine, add the other machines to the trusted storage pool:
[root@YUN21 ~]# gluster peer probe 20.0.0.22
peer probe: success.
[root@YUN21 ~]# gluster peer probe 20.0.0.23
peer probe: success.
[root@YUN21 ~]# gluster peer probe 20.0.0.24
peer probe: success.
[root@YUN21 ~]# gluster peer status
Number of Peers: 3
Hostname: 20.0.0.22
Uuid: 434fc5dd-22c9-49c8-9e42-4962279cdca6
State: Peer in Cluster (Connected)
Hostname: 20.0.0.23
Uuid: a3c6770a-0b3b-4dc5-ad94-37e8c06da3b5
State: Peer in Cluster (Connected)
Hostname: 20.0.0.24
Uuid: 13905ea7-0c32-4be0-9708-b6788033070c
State: Peer in Cluster (Connected)
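The bricks live on XFS logical volumes mounted at /gv0, /gv1 and /gv2 (visible in the mount output further below). Those filesystems are assumed to exist already; a hedged sketch of how one of them might have been prepared, with the volume group name taken from that output and a placeholder size:
# lvcreate -L 100G -n lv_gv0 vg_YUN21
# mkfs.xfs -i size=512 /dev/vg_YUN21/lv_gv0
# echo "/dev/vg_YUN21/lv_gv0 /gv0 xfs defaults,nobarrier 0 0" >> /etc/fstab
# mkdir /gv0 && mount /gv0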
On every machine, create the second-level directories that will hold the bricks:
# mkdir /gv0/brick
# mkdir /gv1/brick
# mkdir /gv2/brick
Create the nova volume (replica 2 across four bricks, i.e. a distributed-replicated volume):
[root@YUN21 ~]# gluster volume create nova replica 2 20.0.0.21:/gv0/brick/ 20.0.0.22:/gv0/brick/ 20.0.0.23:/gv0/brick/ 20.0.0.24:/gv0/brick/
volume create: nova: success: please start the volume to access data
[root@YUN21 ~]# gluster volume start nova
volume start: nova: success
[root@YUN21 ~]# gluster volume status nova
Status of volume: nova
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick 20.0.0.21:/gv0/brick 49152 Y 7672
Brick 20.0.0.22:/gv0/brick 49152 Y 30221
Brick 20.0.0.23:/gv0/brick 49152 Y 30432
Brick 20.0.0.24:/gv0/brick 49152 Y 22918
NFS Server on localhost 2049 Y 7687
Self-heal Daemon on localhost N/A Y 7693
NFS Server on 20.0.0.24 2049 Y 22933
Self-heal Daemon on 20.0.0.24 N/A Y 22938
NFS Server on 20.0.0.22 2049 Y 30236
Self-heal Daemon on 20.0.0.22 N/A Y 30242
NFS Server on 20.0.0.23 2049 Y 30447
Self-heal Daemon on 20.0.0.23 N/A Y 30453
Task Status of Volume nova
------------------------------------------------------------------------------
There are no active volume tasks
Configure automatic mounting on every machine:
[root@YUN21 ~]# echo "20.0.0.21:/nova /var/lib/nova/instances/ glusterfs defaults,_netdev 0 0" >> /etc/fstab
[root@YUN21 ~]# mount -a
[root@YUN21 ~]# mount
/dev/mapper/vg_YUN21-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
/dev/mapper/vg_YUN21-lv_gv0 on /gv0 type xfs (rw,nobarrier)
/dev/mapper/vg_YUN21-lv_gv1 on /gv1 type xfs (rw,nobarrier)
/dev/mapper/vg_YUN21-lv_gv2 on /gv2 type xfs (rw,nobarrier)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/srv/loopback-device/swiftloopback on /srv/node/swiftloopback type ext4 (rw,noatime,nodiratime,loop=/dev/loop1,nobarrier,user_xattr)
gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev)
20.0.0.21:/nova on /var/lib/nova/instances type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[root@YUN22 ~]# echo "20.0.0.22:/nova /var/lib/nova/instances/ glusterfs defaults,_netdev 0 0" >> /etc/fstab
[root@YUN22 ~]# mount -a && mount
/dev/mapper/vg_YUN13-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
/dev/mapper/vg_YUN13-lv_gv0 on /gv0 type xfs (rw)
/dev/mapper/vg_YUN13-lv_gv1 on /gv1 type xfs (rw)
/dev/mapper/vg_YUN13-lv_gv2 on /gv2 type xfs (rw)
/dev/mapper/vg_YUN13-lv_gv3 on /gv3 type xfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
20.0.0.22:/nova on /var/lib/nova/instances type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[root@YUN23 ~]# echo "20.0.0.23:/nova /var/lib/nova/instances/ glusterfs defaults,_netdev 0 0" >> /etc/fstab
[root@YUN23 ~]# mount -a && mount
/dev/mapper/vg_YUN23-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
/dev/mapper/vg_YUN23-lv_gv0 on /gv0 type xfs (rw)
/dev/mapper/vg_YUN23-lv_gv1 on /gv1 type xfs (rw)
/dev/mapper/vg_YUN23-lv_gv2 on /gv2 type xfs (rw)
/dev/mapper/vg_YUN23-lv_gv3 on /gv3 type xfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
20.0.0.23:/nova on /var/lib/nova/instances type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[root@YUN24 ~]# echo "20.0.0.24:/nova /var/lib/nova/instances/ glusterfs defaults,_netdev 0 0" >> /etc/fstab
[root@YUN24 ~]# mount -a && mount
/dev/mapper/vg_YUN17-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
/dev/mapper/vg_YUN17-lv_gv0 on /gv0 type xfs (rw)
/dev/mapper/vg_YUN17-lv_gv1 on /gv1 type xfs (rw)
/dev/mapper/vg_YUN17-lv_gv2 on /gv2 type xfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
20.0.0.24:/nova on /var/lib/nova/instances type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
On every machine:
# vi .bash_profile
Append at the end:
export PS1='[\u@\h \W]\$'
mount -a
This ensures the GlusterFS volumes get mounted automatically after a reboot: the network may not be up yet when fstab is processed at boot, so mount -a is retried at login.
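After a reboot the mount can be confirmed with (a quick check, not in the original):
# df -hT /var/lib/nova/instances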
Check and fix the directory ownership:
[root@YUN21 ~]# ll -d /var/lib/nova/instances/
drwxr-xr-x 3 root root 46 Dec 25 17:18 /var/lib/nova/instances/
[root@YUN21 ~]# chown -R nova:nova /var/lib/nova/instances/
[root@YUN21 ~]# ll -d /var/lib/nova/instances/
drwxr-xr-x 3 nova nova 46 Dec 25 17:18 /var/lib/nova/instances/
Repeat the ownership change on the other three machines, for example as sketched below.
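Assuming passwordless root ssh between the nodes, one possible shortcut (otherwise run the chown on each machine by hand):
[root@YUN21 ~]# for h in YUN22 YUN23 YUN24; do ssh $h chown -R nova:nova /var/lib/nova/instances/; done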
Restart the compute service on every machine:
# service openstack-nova-compute restart
Stopping openstack-nova-compute: [ OK ]
Starting openstack-nova-compute: [ OK ]
3.4 Open the dashboard: upload an image, create networks, adjust quotas
After these steps, instances can be created successfully.
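The dashboard steps are not reproduced here. Roughly equivalent CLI commands, assuming a locally downloaded cirros image and the keystonerc_admin file that packstack generates (the image file, network name and quota values are examples, and <tenant-id> is a placeholder):
# source keystonerc_admin
# glance image-create --name cirros --disk-format qcow2 --container-format bare --is-public True --file cirros-0.3.4-x86_64-disk.img
# neutron net-create ext-net --provider:network_type flat --provider:physical_network physnet1 --router:external=True
# nova quota-update --instances 20 --cores 40 <tenant-id>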
3.5 Create and mount the glance and cinder volumes
[root@YUN21 ~]# gluster volume create glance replica 2 20.0.0.21:/gv1/brick 20.0.0.22:/gv1/brick 20.0.0.23:/gv1/brick 20.0.0.24:/gv1/brick
volume create: glance: success: please start the volume to access data
[root@YUN21 ~]# gluster volume create cinder replica 2 20.0.0.21:/gv2/brick 20.0.0.22:/gv2/brick 20.0.0.23:/gv2/brick 20.0.0.24:/gv2/brick
volume create: cinder: success: please start the volume to access data
[root@YUN21 ~]# gluster volume start glance
volume start: glance: success
[root@YUN21 ~]# gluster volume start cinder
volume start: cinder: success
[root@YUN21 ~]# gluster volume status glance
Status of volume: glance
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick 20.0.0.21:/gv1/brick 49153 Y 18269
Brick 20.0.0.22:/gv1/brick 49153 Y 39924
Brick 20.0.0.23:/gv1/brick 49153 Y 40300
Brick 20.0.0.24:/gv1/brick 49153 Y 30920
NFS Server on localhost 2049 Y 18374
Self-heal Daemon on localhost N/A Y 18389
NFS Server on 20.0.0.24 2049 Y 31005
Self-heal Daemon on 20.0.0.24 N/A Y 31015
NFS Server on 20.0.0.22 2049 Y 40010
Self-heal Daemon on 20.0.0.22 N/A Y 40020
NFS Server on 20.0.0.23 2049 Y 40385
Self-heal Daemon on 20.0.0.23 N/A Y 40395
Task Status of Volume glance
------------------------------------------------------------------------------
There are no active volume tasks
[root@YUN21 ~]# gluster volume status cinder
Status of volume: cinder
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick 20.0.0.21:/gv2/brick 49154 Y 18362
Brick 20.0.0.22:/gv2/brick 49154 Y 39993
Brick 20.0.0.23:/gv2/brick 49154 Y 40369
Brick 20.0.0.24:/gv2/brick 49154 Y 30989
NFS Server on localhost 2049 Y 18374
Self-heal Daemon on localhost N/A Y 18389
NFS Server on 20.0.0.24 2049 Y 31005
Self-heal Daemon on 20.0.0.24 N/A Y 31015
NFS Server on 20.0.0.23 2049 Y 40385
Self-heal Daemon on 20.0.0.23 N/A Y 40395
NFS Server on 20.0.0.22 2049 Y 40010
Self-heal Daemon on 20.0.0.22 N/A Y 40020
Task Status of Volume cinder
------------------------------------------------------------------------------
There are no active volume tasks
Configure automatic mounting of the glance and cinder volumes (only needed on YUN21):
[root@YUN21 ~]# echo "20.0.0.21:/glance /var/lib/glance/images/ glusterfs defaults,_netdev 0 0" >> /etc/fstab
[root@YUN21 ~]# mount -a && mount
/dev/mapper/vg_YUN21-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
/dev/mapper/vg_YUN21-lv_gv0 on /gv0 type xfs (rw,nobarrier)
/dev/mapper/vg_YUN21-lv_gv1 on /gv1 type xfs (rw,nobarrier)
/dev/mapper/vg_YUN21-lv_gv2 on /gv2 type xfs (rw,nobarrier)
/srv/loopback-device/swiftloopback on /srv/node/swiftloopback type ext4 (rw,noatime,nodiratime,nobarrier,user_xattr,nobarrier,loop=/dev/loop0)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
20.0.0.21:/nova on /var/lib/nova/instances type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
20.0.0.21:/glance on /var/lib/glance/images type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[root@YUN21 ~]# service openstack-glance-api restart
Stopping openstack-glance-api: [ OK ]
Starting openstack-glance-api: [ OK ]
Fix the directory ownership:
[root@YUN21 ~]# ll -d /var/lib/glance/images/
drwxr-xr-x 3 root root 46 Dec 25 18:20 /var/lib/glance/images/
[root@YUN21 ~]# chown -R glance:glance /var/lib/glance/images/
[root@YUN21 ~]# ll -d /var/lib/glance/images/
drwxr-xr-x 3 glance glance 46 Dec 25 18:20 /var/lib/glance/images/
Configure the cinder volume:
[root@YUN21 ~]# vi /etc/cinder/share.conf
20.0.0.21:/cinder
[root@YUN21 ~]# chmod 0640 /etc/cinder/share.conf
[root@YUN21 ~]# chown root:cinder /etc/cinder/share.conf
[root@YUN21 ~]# vi /etc/cinder/cinder.conf
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
to
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
Add:
glusterfs_shares_config=/etc/cinder/share.conf
glusterfs_mount_point_base=/var/lib/cinder/volumes
[root@YUN21 ~]# for i in api scheduler volume; do sudo service openstack-cinder-${i} restart; done
Stopping openstack-cinder-api: [ OK ]
Starting openstack-cinder-api: [ OK ]
Stopping openstack-cinder-scheduler: [ OK ]
Starting openstack-cinder-scheduler: [ OK ]
Stopping openstack-cinder-volume: [ OK ]
Starting openstack-cinder-volume: [ OK ]
[root@YUN21 ~]# mount
/dev/mapper/vg_YUN21-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
/dev/mapper/vg_YUN21-lv_gv0 on /gv0 type xfs (rw,nobarrier)
/dev/mapper/vg_YUN21-lv_gv1 on /gv1 type xfs (rw,nobarrier)
/dev/mapper/vg_YUN21-lv_gv2 on /gv2 type xfs (rw,nobarrier)
/srv/loopback-device/swiftloopback on /srv/node/swiftloopback type ext4 (rw,noatime,nodiratime,nobarrier,user_xattr,nobarrier,loop=/dev/loop0)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
20.0.0.21:/nova on /var/lib/nova/instances type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
20.0.0.21:/glance on /var/lib/glance/images type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
20.0.0.21:/cinder on /var/lib/cinder/volumes/6c05f25454fce4801c6aae690faff3dc type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
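To confirm that the GlusterFS backend works end to end, a test volume can be created (a verification step, not in the original):
[root@YUN21 ~]# source keystonerc_admin
[root@YUN21 ~]# cinder create --display-name test 1
[root@YUN21 ~]# cinder list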
4. System tuning
Increase the network bandwidth of the cloud instances.
On the controller node:
[root@YUN21 ~]# vi /etc/neutron/dhcp_agent.ini
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
[root@YUN21 ~]# vi /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1400
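The new dnsmasq option only takes effect once the DHCP agent rereads its configuration; on an RDO install the service would presumably be restarted with:
[root@YUN21 ~]# service neutron-dhcp-agent restart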
[root@YUN21 ~]# ethtool -K eth1 tso off
[root@YUN21 ~]# ethtool -K eth2 tso off
[root@YUN21 ~]# ethtool -K eth4 tso off
[root@YUN21 ~]# ethtool -K eth1 gro off
[root@YUN21 ~]# ethtool -K eth2 gro off
[root@YUN21 ~]# ethtool -K eth4 gro off
[root@YUN21 ~]# vi /etc/rc.d/rc.local
ethtool -K eth1 tso off
ethtool -K eth2 tso off
ethtool -K eth4 tso off
ethtool -K eth1 gro off
ethtool -K eth2 gro off
ethtool -K eth4 gro off
Disable tso and gro on the corresponding NICs inside the created virtual machines in the same way (tso and gro are NIC offload features).