(WIP) my cloud test bed (by quqi99)

Author: Zhang Hua (张华)  Published: 2023-03-10
Copyright: this article may be reproduced freely, provided the original source, the author information, and this copyright notice are clearly indicated with a hyperlink

Problem

I want to create a local test bed for conveniently running all kinds of cloud experiments (openstack, k8s, ovn, lxd, etc.), under the following constraints:

  • only one physical machine, and it has a single NIC
  • use all kinds of caches as much as possible to cope with the restricted network

apt cache

The first thing is the apt cache:

sudo apt install apt-cacher-ng -y
echo 'PassThroughPattern: .*' |sudo tee -a /etc/apt-cacher-ng/acng.conf
sudo systemctl restart apt-cacher-ng.service && sudo systemctl enable apt-cacher-ng.service
du -sh /var/cache/apt-cacher-ng/
#vim /var/lib/dpkg/info/apt-cacher-ng.postinst
#dpkg --configure apt-cacher-ng

#change the dir from /var/cache/apt-cacher-ng/ to /mnt/udisk/apt-cacher-ng
sudo mkfs.ext4 -F -L udisk /dev/sdb1
cat << EOF |sudo tee -a /etc/fstab
#use blkid to see uuid
UUID="d63d7251-ec3d-4ef5-aa92-f3d4c480f20c" /mnt/udisk   ext4    defaults    0  2
EOF
sudo mkdir -p /mnt/udisk && sudo mount -a
sudo mkdir -p /mnt/udisk/apt-cacher-ng
sudo chown -R apt-cacher-ng:apt-cacher-ng /mnt/udisk/apt-cacher-ng
sudo sed -i 's/CacheDir: \/var\/cache\/apt-cacher-ng/CacheDir: \/mnt\/udisk\/apt-cacher-ng/g' /etc/apt-cacher-ng/acng.conf
sudo systemctl restart apt-cacher-ng.service
du -sh /mnt/udisk/apt-cacher-ng

#Use apt cache proxy
echo 'Acquire::http::Proxy "http://proxy:3142";' | sudo tee /etc/apt/apt.conf.d/01acng
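
To confirm clients really go through the cache, apt-cacher-ng exposes a built-in report page on its port, and its log shows hits and misses (paths per the default Ubuntu packaging; replace proxy with the cache host):

apt-config dump | grep -i proxy
curl http://proxy:3142/acng-report.html
sudo tail -f /var/log/apt-cacher-ng/apt-cacher.log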

pip mirror

#use pip mirror, or use this instead: PYPI_ALTERNATIVE_URL=http://mirrors.aliyun.com/pypi/simple
mkdir -p ~/.pip
cat << EOF |tee ~/.pip/pip.conf
[global]
trusted-host=mirrors.aliyun.com
index-url = http://mirrors.aliyun.com/pypi/simple
disable-pip-version-check = true
timeout = 120
EOF
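
A quick sanity check that pip picked up the mirror (pip config list prints the effective configuration; requests is just an example package):

pip config list
pip download --no-deps -d /tmp/pip-test requests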

[Optional] image mirror

Note: when using sstream-mirror the progress always stayed at 0%. Making sure the IP of images.maas.io is in the whitelist ipset fixes this, so an image mirror does not really seem necessary. Note also that the IP only shows up in the whitelist after running 'ping images.maas.io' to make the router resolve the domain (ipset test whitelist 185.125.190.37).

dig images.maas.io
ipset list |grep Name
ipset test whitelist 185.125.190.37
# iptables-save |grep -i set |grep whitelist
-A SS_SPEC_WAN_AC -m set --match-set whitelist dst -j RETURN
  • archive.ubuntu.com can be replaced with http://mirrors.cloud.tencent.com/ubuntu (see the sed example below)
  • ports.ubuntu.com can be replaced with https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports
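
For example, to switch an existing sources.list to the Tencent mirror (a minimal sketch; adjust the path if the host uses deb822-style entries under /etc/apt/sources.list.d):

sudo sed -i 's|http://archive.ubuntu.com/ubuntu|http://mirrors.cloud.tencent.com/ubuntu|g' /etc/apt/sources.list
sudo apt update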

For images.maas.io and cloud-images.ubuntu.com, build the mirror yourself as follows:

sudo apt install -y simplestreams
#for cloud-images.ubuntu.com
sudo sstream-mirror --keyring=/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg --progress --max=1 --path=streams/v1/index.json \
  https://cloud-images.ubuntu.com/releases/ /images/simplestreams 'arch=amd64' 'release~(focal|jammy)' \
  'ftype~(lxd.tar.xz|squashfs|root.tar.xz|root.tar.gz|disk1.img|.json|.sjson)'
#for images.maas.io
KEYRING_FILE=/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg
IMAGE_SRC=https://images.maas.io/ephemeral-v3/stable
IMAGE_DIR=/images/simplestreams/maas/images/ephemeral-v3/stable
sudo mkdir -p /images/simplestreams/maas/images/ephemeral-v3/stable
sudo http_proxy=http://192.168.99.186:7890 https_proxy=http://192.168.99.186:7890 sstream-mirror --keyring=$KEYRING_FILE $IMAGE_SRC $IMAGE_DIR \
    'arch=amd64' 'release~(jammy|focal)' --max=1 --progress
sudo sstream-mirror --keyring=$KEYRING_FILE $IMAGE_SRC $IMAGE_DIR \
    'os~(grub*|pxelinux)' --max=1 --progress

#how to use the images.maas.io mirror; remember to import the CA cert first
scp /home/hua/ca/ca.crt root@192.168.99.221:/usr/local/share/ca-certificates/ca.crt
lxc exec maas -- chmod 644 /usr/local/share/ca-certificates/ca.crt
lxc exec maas -- update-ca-certificates --fresh
lxc exec maas -- wget https://node1.lan/maas/images/ephemeral-v3/stable/streams/v1/index.sjson
#NOTE: we should use /snap/maas/current/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg rather than /usr/share/keyrings/ubuntu-cloudimage-keyring.gpg, so don't need to change keyring_filename
apt install jq -y
BOOT_SOURCE_ID=$(maas admin boot-sources read | jq '.[] | select(.url | contains("images.maas.io/ephemeral-v3")) | .id')
maas admin boot-source update $BOOT_SOURCE_ID url=https://node1.lan:443/maas/images/ephemeral-v3/stable/
maas admin boot-resources import

Then sort out the certificates:

#https://goharbor.io/docs/2.6.0/install-config/configure-https/
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -sha512 -days 3650 -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=node1.lan" -key ca.key -out ca.crt
openssl genrsa -out node1.lan.key 4096
openssl req -sha512 -new -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=node1.lan" -key node1.lan.key -out node1.lan.csr
#complies with the Subject Alternative Name (SAN) and x509 v3 extension requirements to avoid 'x509: certificate relies on legacy Common Name field, use SANs instead'
cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=node1.lan
DNS.2=node1
DNS.3=hostname
EOF
openssl x509 -req -sha512 -days 3650 -extfile v3.ext -CA ca.crt -CAkey ca.key -CAcreateserial -in node1.lan.csr -out node1.lan.crt
#for docker, the Docker daemon interprets .crt files as CA certificates and .cert files as client certificates.
openssl x509 -inform PEM -in node1.lan.crt -out node1.lan.cert
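
Before wiring the certificate into nginx, it can be sanity-checked with standard openssl commands (run in the same directory as ca.crt and node1.lan.crt):

openssl verify -CAfile ca.crt node1.lan.crt
openssl x509 -in node1.lan.crt -noout -text | grep -A1 'Subject Alternative Name'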

Configure nginx for https. Also, since the new directory /images/simplestreams is used as the document root, add 'user root;' to /etc/nginx/nginx.conf to avoid permission problems.

$ cat /etc/nginx/sites-available/default
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name node1.lan;
    ssl_certificate /home/hua/ca/node1.lan.crt;
    ssl_certificate_key /home/hua/ca/node1.lan.key;
    #ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    location / {
       root /images/simplestreams;
       index index.html;
    }
}

Test:

curl --resolve node1.lan:443:192.168.99.235 --cacert ~/ca/ca.crt https://node1.lan:443/streams/v1/index.json
sudo cp ~/ca/ca.crt /usr/local/share/ca-certificates/ca.crt
sudo chmod 644 /usr/local/share/ca-certificates/ca.crt
sudo update-ca-certificates --fresh
curl --resolve node1.lan:443:192.168.99.235 https://node1.lan:443/streams/v1/index.json

Physical host network design

There is only one machine (node1) and it has a single NIC (eno1):

  • br-eth0: although lxd also supports an ovs bridge, for easier WoL and a more stable management network we create a linux bridge (br-eth0) on the only NIC eno1; br-eth0 is enough for all kinds of lxd experiments
  • br-data: for openstack experiments an ovs bridge (br-data) is also needed. With only node1 in the experiment it needs no physical NIC; if more physical machines are added later, a linux veth pair can connect br-eth0 and br-data, with one end of the pair added to br-data. Besides, even a multi-node environment can be built with vagrant or lxd, so there is no real need for more physical machines

The netplan config below creates br-eth0, enables WoL magic-packet wake-up on it, and also creates br-maas, a bridge without dhcp, for maas experiments:

cat << EOF |sudo tee /etc/netplan/90-local.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
      match:
        macaddress: f8:32:e4:be:87:cd
      wakeonlan: true
  bridges:
    br-eth0:
      dhcp4: yes
      interfaces:
      - eno1
      #Use 'etherwake F8:32:E4:BE:87:CD' to wol in bridge
      macaddress: f8:32:e4:be:87:cd
    br-maas:
      #br-maas without dhcp enabled so it's for maas
      dhcp4: false
      addresses:
      - 192.168.9.1/24
      routes:
        - to: default
          via: 192.168.99.1
      nameservers:
        addresses:
        - 192.168.99.1
EOF
sudo netplan generate
sudo netplan apply
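
A few checks to confirm the bridge came up and WoL is really enabled on the NIC (ethtool may need to be installed first):

networkctl status br-eth0
bridge link show
sudo ethtool eno1 | grep Wake-on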

A drawback of configuring with netplan is that it is inconvenient to run post-script hooks. The following workaround using networkd-dispatcher hooks has not been tested:

sudo systemctl stop NetworkManager.service
sudo systemctl disable NetworkManager.service
sudo systemctl stop NetworkManager-wait-online.service
sudo systemctl disable NetworkManager-wait-online.service
sudo systemctl stop NetworkManager-dispatcher.service
sudo systemctl disable NetworkManager-dispatcher.service
sudo apt install netplan.io openvswitch-switch -y
sudo apt install -y networkd-dispatcher
cat << EOF |sudo tee /etc/networkd-dispatcher/off.d/start.sh
#!/bin/bash -e
#IFACE='eno1'
if [ "\$IFACE" = "eno1" -o "\$IFACE" = "br-eth0" ]; then
	if ip link show eno1 | grep "state DOWN" > /dev/null && ! (arp -ni br-data | grep "ether" > /dev/null); then
		date > /tmp/start.txt;
		/usr/bin/ovs-vsctl --may-exist add-port br-eth0 eno1
		ip l add name veth-br-eth0 type veth peer name veth-ex
		ip l set dev veth-br-eth0 up
		ip l set dev veth-ex up
		ip l set veth-br-eth0 master br-eth0
	fi
fi
EOF
cat << EOF |sudo tee /etc/networkd-dispatcher/routable.d/stop.sh
#!/bin/bash -e
if [ "\$IFACE" = "eno1" -o "\$IFACE" = "br-eth0" ]; then
	if ip link show eno1 | grep "state UP" > /dev/null || arp -ni br-data | grep "ether" > /dev/null; then
		date > /tmp/stop.txt;
		systemctl stop hostapd;
	fi
fi
EOF
sudo chmod +x /etc/networkd-dispatcher/off.d/start.sh
sudo chmod +x /etc/networkd-dispatcher/routable.d/stop.sh
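
The hook can be dry-run without waiting for a real link event, since networkd-dispatcher passes the interface name in the IFACE environment variable (untested, like the hooks themselves):

sudo IFACE=eno1 /etc/networkd-dispatcher/off.d/start.sh
cat /tmp/start.txt   # only written when the hook's conditions matched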

The way to create an ovs bridge directly is shown below, although our design has no need for an ovs bridge:

auto br-eth0
allow-ovs br-eth0
iface br-eth0 inet static
    pre-up /usr/bin/ovs-vsctl -- --may-exist add-br br-eth0
    pre-up /usr/bin/ovs-vsctl -- --may-exist add-port br-eth0 eno1
    address 192.168.99.125
    gateway 192.168.99.1
    network 192.168.99.0
    netmask 255.255.255.0
    broadcast 192.168.99.255
    ovs_type OVSBridge
    ovs_ports eno1

#sudo ip -6 addr add 2001:2:3:4500:fa32:e4ff:febe:87cd/64 dev br-eth0
iface br-eth0 inet6 static
    pre-up modprobe ipv6
    address 2001:2:3:4500:fa32:e4ff:febe:87cd
    netmask 64
    gateway 2001:2:3:4500::1

auto eno1
allow-br-eth0 eno1
iface eno1 inet manual
    ovs_bridge br-eth0
    ovs_type OVSPort

Using ifupdown (/etc/network/interfaces) instead of netplan makes post-script hooks easy to support:

root@node1:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
auto br-eth0
iface br-eth0 inet static
    address 192.168.99.124/24
    gateway 192.168.99.1
    bridge_ports eth0
    dns-nameservers 192.168.99.1
    bridge_stp on
    bridge_fd 0
    bridge_maxwait 0
    up echo -n 0 > /sys/devices/virtual/net/$IFACE/bridge/multicast_snooping
# for stateless it's 'inet6 auto', for stateful it's 'inet6 dhcp'
iface br-eth0 inet6 auto
    #iface eth0 inet6 static
    #address 2001:192:168:99::135                                                                                            
    #gateway 2001:192:168:99::1
    #netmask 64
    # use SLAAC to get global IPv6 address from the router
    # we may not enable ipv6 forwarding, otherwise SLAAC gets disabled
    # sleep 5 is due to a bug, and 'dhcp 1' indicates that info should be obtained from the dhcpv6 server for stateless
    up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/disable_ipv6
    up sleep 5
    autoconf 1
    accept_ra 2
    dhcp 1

devstack experiment

For a single-node experiment, br-data works without veth-ex; attaching veth-ex also works (in which case br-data can double as br-ex):

  • OVS_PHYSICAL_BRIDGE=br-data is used here (ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-data)
  • if it needs to reach the external network, additionally set PUBLIC_INTERFACE=veth-ex, where veth-ex is the veth pair between br-eth0 (linux bridge) and br-data (ovs bridge)
  • PUBLIC_INTERFACE belongs to br-ex, and br-data can also be used as br-ex

The configuration is as follows (untested, for reference only):

cat << EOF |tee local.conf
[[local|localrc]]
#make rabbitmq-server to run well
#echo '10.0.1.1 node1' |sudo tee -a /etc/hosts
#sudo pip install --upgrade setuptools
#when USE_VENV=True and hitting pip issue, eg: install_ipip.sh related issues, can try:
#find /bak/openstack -name '*.venv' |xargs rm -rf 
#https://docs.openstack.org/devstack/latest/configuration.html
#TARGET_BRANCH=stable/zed
#PYPI_ALTERNATIVE_URL=http://mirrors.aliyun.com/pypi/simple
sudo ovs-vsctl show
sudo ip l add name veth-br-eth0 type veth peer name veth-ex >/dev/null 2>&1
sudo ip l set dev veth-br-eth0 up
sudo ip l set dev veth-ex up
sudo ip l set veth-br-eth0 master br-eth0
sudo ovs-vsctl --may-exist add-br br-data
sudo ovs-vsctl --may-exist add-port br-data veth-ex
sudo ip addr add 10.0.1.1/24 dev br-data >/dev/null 2>&1
USE_VENV=False
OFFLINE=False
DEST=/bak/openstack
PUBLIC_INTERFACE=veth-ex
OVS_PHYSICAL_BRIDGE=br-data
PUBLIC_BRIDGE=br-data
HOST_IP=10.0.1.1
FIXED_RANGE=10.0.1.0/24
NETWORK_GATEWAY=10.0.1.1
PUBLIC_NETWORK_GATEWAY=192.168.99.1
FLOATING_RANGE=192.168.99.0/24
Q_FLOATING_ALLOCATION_POOL=start=192.168.99.240,end=192.168.99.249
disable_service tempest
disable_service horizon
disable_service memory_tracker
ADMIN_PASSWORD=password
DATABASE_PASSWORD=\$ADMIN_PASSWORD
RABBIT_PASSWORD=\$ADMIN_PASSWORD
SERVICE_PASSWORD=\$ADMIN_PASSWORD
IP_VERSION=4
SYSLOG=False
VERBOSE=True
LOGFILE=\$DEST/logs/stack.log
ENABLE_DEBUG_LOG_LEVEL=False
SCREEN_LOGDIR=\$DEST/logs
LOG_COLOR=False
LOGDAYS=5
Q_USE_DEBUG_COMMAND=False
WSGI_MODE=mod_wsgi
KEYSTONE_USE_MOD_WSGI=False
NOVA_USE_MOD_WSGI=False
CINDER_USE_MOD_WSGI=False
MYSQL_GATHER_PERFORMANCE=False
DOWNLOAD_DEFAULT_IMAGES=False
IMAGE_URLS="http://download.cirros-cloud.net/0.6.1/cirros-0.6.1-x86_64-disk.img"
heartbeat_timeout_threshold=7200
#GIT_BASE=http://git.trystack.cn
EOF
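
To use this local.conf, drop it into a devstack checkout and run stack.sh as usual (standard upstream steps; DEST=/bak/openstack must exist and be writable by the stack user first):

sudo mkdir -p /bak/openstack && sudo chown $USER /bak/openstack
git clone https://opendev.org/openstack/devstack
cp local.conf devstack/ && cd devstack
./stack.sh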

Install lxd

Configure lxd so containers get two NICs by default:

  • eth0: br-eth0
  • eth1: lxdbr0 with dhcp

The maas lxd container may also need a NIC without dhcp (br-maas was created in the netplan config above); it can be added like this: lxc config device add maas eth2 nic name=eth2 nictype=bridged parent=br-maas

sudo snap install lxd --classic
sudo usermod -aG lxd $USER
sudo chown -R $USER ~/.config/
export EDITOR=vim
# MUST NOT use sudo, so must cd to home dir to run it
cd ~ && lxd init --auto
#lxc network set lxdbr0 ipv4.address=10.10.10.1/24
#lxc network set lxdbr0 ipv6.address none

#Change the default storage
lxc profile device remove default root
lxc storage delete default
cat << EOF | sudo tee -a /etc/fstab
#mount -o bind /images/lxd /var/snap/lxd/common/lxd/storage-pools
/images/lxd /var/snap/lxd/common/lxd/storage-pools none bind 0 0
EOF
sudo mkdir -p /images/lxd && sudo mount -a
sudo systemctl restart snap.lxd.daemon
lxc storage create default dir && lxc storage show default
lxc profile device add default root disk path=/ pool=default
lxd sql global "SELECT * FROM storage_pools_config"

#Use br-data for lxd containers
cat << EOF |tee /tmp/default.yaml
config:
  boot.autostart: "true"
  linux.kernel_modules: openvswitch,nbd,ip_tables,ip6_tables
  security.nesting: "true"
  security.privileged: "true"
description: ""
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br-data
    type: nic
  eth1:
    mtu: "9000"
    name: eth1
    nictype: bridged
    parent: lxdbr0
    type: nic
  kvm:
    path: /dev/kvm
    type: unix-char
  mem:
    path: /dev/mem
    type: unix-char
  root:
    path: /
    pool: default
    type: disk
  tun:
    path: /dev/net/tun
    type: unix-char
name: default
EOF
cat /tmp/default.yaml |lxc profile edit default
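
After feeding the yaml in, confirm the profile actually took effect:

lxc profile show default
lxc profile device list default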

wget https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64-lxd.tar.xz
wget https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.squashfs
lxc image import ./ubuntu-22.04-server-cloudimg-amd64-lxd.tar.xz ./ubuntu-22.04-server-cloudimg-amd64.squashfs --alias jammy
lxc image list

lxc launch jammy maas
lxc config show maas --expanded
lxc exec maas bash

Note: with the profile above, installing the maas snap fails with: security profiles (cannot setup udev for snap "maas": cannot reload udev rules: exit status 1).
Continue with 'lxc profile edit default' and add:

#https://discourse.maas.io/t/install-with-lxd/757/2
config:
  raw.lxc: |-
    lxc.mount.auto=sys:rw
    lxc.cgroup.devices.allow = c 10:237 rwm
    lxc.apparmor.profile = unconfined
    lxc.cgroup.devices.allow = b 7:* rwm
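
Profile changes under raw.lxc only take effect once the container is restarted:

lxc restart maas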

If the container cannot reach the network (e.g. api.snapcraft.io), it is because lxd containers by default use dns=10.10.10.1 from eth1. The config below makes eth0, eth1 and eth2 all default to dns=192.168.99.1, keeping the restricted network from polluting api.snapcraft.io:

lxc exec maas bash
cat << EOF |sudo tee /etc/netplan/50-cloud-init.yaml
#make 192.168.99.1 the default dns instead of 10.10.10.1
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: false
      addresses:
      - 192.168.99.221/24
      routes:
        - to: default
          via: 192.168.99.1
      nameservers:
        addresses:
        - 192.168.99.1
    eth1:
      dhcp4: true
      nameservers:
        addresses:
        - 192.168.99.1
    eth2:
      dhcp4: false
      addresses:
      - 192.168.9.3/24
      nameservers:
        addresses:
        - 192.168.9.3
EOF
sudo netplan apply
#In systemd 239 systemd-resolve has been renamed to resolvectl
resolvectl status
cat /run/systemd/netif/leases/*
nslookup api.snapcraft.io

Install maas

sudo snap install maas --channel=3.3/stable
sudo apt install -y postgresql
sudo -iu postgres psql -d template1 -U postgres
CREATE USER maas WITH ENCRYPTED PASSWORD 'password';
CREATE DATABASE maasdb;
GRANT all privileges on database maasdb to maas;
\c maasdb
cat << EOF | sudo tee -a /etc/postgresql/14/main/pg_hba.conf
host    maasdb  maas    0/0     md5
EOF
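
pg_hba.conf changes take effect only after a reload; the new rule can then be verified (the service name assumes Ubuntu's default postgresql packaging):

sudo systemctl reload postgresql
psql -h localhost -U maas maasdb -c 'SELECT 1;'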
#This maas container has 3 IPs: eth0=192.168.99.221 eth1=10.10.10.238 eth2=192.168.9.3
sudo /snap/bin/maas init region+rack --maas-url http://192.168.99.221:5240/MAAS --database-uri "postgres://maas:password@localhost/maasdb"
sudo /snap/bin/maas createadmin --username admin --password password --email admin@example.com --ssh-import lp:zhhuabj
sudo /snap/bin/maas apikey --username admin |tee ~/admin-api-key
sudo /snap/bin/maas status
#login into http://192.168.99.221:5240/MAAS/r/
#change mirror: http://mirrors.cloud.tencent.com/ubuntu/ for http://archive.ubuntu.com/ubuntu
#change mirror: https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ for http://ports.ubuntu.com/ubuntu-ports
apikey=$(sudo maas apikey --username admin)
maas login admin http://127.0.0.1:5240/MAAS $apikey

A freshly installed maas has a default boot-source that uses maas.io. To add a custom image mirror, manage it with the following commands:

maas admin boot-sources read
maas admin boot-source delete 1
maas admin boot-resources read
maas admin boot-source-selections create <boot-source-id> os="ubuntu" release="jammy" arches="amd64" subarches="*" labels="*"
maas admin boot-source-selections read <boot-source-id>
maas admin boot-resources import
tail -f /var/snap/maas/common/log/*

Note that we do not actually need a custom image mirror: as long as the router does not proxy images.maas.io, images download fine from images.maas.io. But if you do want a mirror, use the method below. For the snap version of maas there is no need to change keyring_filename; the default /snap/maas/current/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg works as-is.

scp /home/hua/ca/ca.crt root@192.168.99.221:/usr/local/share/ca-certificates/ca.crt
lxc exec maas -- chmod 644 /usr/local/share/ca-certificates/ca.crt
lxc exec maas -- update-ca-certificates --fresh
lxc exec maas -- wget https://node1.lan/maas/images/ephemeral-v3/stable/streams/v1/index.sjson
#NOTE: we should use /snap/maas/current/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg rather than /usr/share/keyrings/ubuntu-cloudimage-keyring.gpg, so don't need to change keyring_filename
apt install jq -y
BOOT_SOURCE_ID=$(maas admin boot-sources read | jq '.[] | select(.url | contains("images.maas.io/ephemeral-v3")) | .id')
maas admin boot-source update $BOOT_SOURCE_ID url=https://node1.lan:443/maas/images/ephemeral-v3/stable/
maas admin boot-resources import

#verify that your network can reach the images.maas.io mirror
from simplestreams.contentsource import RequestsUrlReader
url = "https://node1.lan/ephemeral-v3/stable/streams/v1/index.sjson"
r = RequestsUrlReader(url)
print(r.read(100))  # assumed read() per simplestreams' ContentSource API; prints the first bytes if TLS and the URL are OK

Create an openstack test environment

Ways to create an openstack test environment on a single machine:

  • juju/charm + lxd: deploy directly into lxd containers; parallel installation seems slow, so VMs are preferable
  • with VMs, vagrant is one option; multipass works too (but multipass uses a lot of CPU with the qemu provider)
  • for an openstack-over-openstack environment, build the underlying openstack with devstack or microstack, then use the juju openstack provider to deploy the upper openstack inside a tenant. In this case there is no need to change the machine configuration in the existing bundle, since a new VM is started for each openstack component. The problem is that this one physical machine probably cannot run that many VMs
  • this physical machine has 4 cores, so about three VMs is right: one controller and two computes. Install maas in an lxd container and let maas install these three VMs automatically via pxe, then use the juju maas provider to manage the upper openstack environment, with the control services running in lxd containers on the controller VM

To be continued. The main problem at the moment is that the restricted network prevents images from downloading, and sstream-mirror cannot download them when building the mirror either.
