Docker Multi-Host Networking with Open vSwitch
Posted by liujunjun
Open vSwitch
Open vSwitch (hereafter OVS), short for Open Virtual Switch, is, as the name suggests, an open virtual switch. It can be thought of as a standard: through programmatic extension it aims to automate large networks (configuration, management, maintenance) while still supporting standard management interfaces and protocols.
OVS can also be seen as an open-source virtual switch that runs on various virtualization platforms (such as KVM and Xen). On these platforms, OVS provides layer-2 switching for dynamically changing endpoints and gives good control over access policies, network isolation, traffic monitoring, and so on within the virtual network.
Building a Docker multi-host network with Open vSwitch
Plan the Docker subnets
The default docker0 subnet is 172.17.0.0/16; we give each Docker host's docker0 its own distinct subnet within it:
Node          | IP            | docker0 subnet
openvswitch01 | 192.168.1.220 | 172.17.1.0/24
openvswitch02 | 192.168.1.221 | 172.17.2.0/24
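The plan above only works if every node's docker0 subnet falls inside the 172.17.0.0/16 supernet that the static route (added later) points at, and the subnets do not overlap. A small pure-bash sanity check of this plan (a sketch, not part of the original article):

```shell
#!/usr/bin/env bash
# Verify that each node's docker0 address sits inside 172.17.0.0/16,
# using plain bash integer arithmetic on dotted-quad addresses.

ip_to_int() {           # 172.17.1.1 -> 32-bit integer
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

in_subnet() {           # usage: in_subnet <ip> <network> <prefixlen>
  local ip net mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

in_subnet 172.17.1.1 172.17.0.0 16 && echo "node1 OK"
in_subnet 172.17.2.1 172.17.0.0 16 && echo "node2 OK"
```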
Install OVS (see the installation guide for OVS and Docker).
Both machines need them installed.
Change the default docker0 subnet
Node 1
[root@localhost ~]# vi /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --bip=172.17.1.1/24
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker

Node 2
[root@localhost ~]# vi /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --bip=172.17.2.1/24
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker
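As an alternative to patching the systemd unit, Docker also reads the bridge IP from `/etc/docker/daemon.json` via the `bip` key; a sketch for node 1 (use 172.17.2.1/24 on node 2 — the daemon must be restarted either way):

```shell
# Alternative: set the docker0 bridge IP in daemon.json instead of
# editing docker.service (shown for node 1).
cat > /etc/docker/daemon.json <<'EOF'
{
  "bip": "172.17.1.1/24"
}
EOF
systemctl restart docker
```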
Create the bridge and bring it up
Run on both nodes
Node 1
[root@localhost ~]# ovs-vsctl add-br br0
[root@localhost ~]# ip link set dev br0 up

Node 2
[root@localhost ~]# ovs-vsctl add-br br0
[root@localhost ~]# ip link set dev br0 up
Create the GRE tunnel
Node 1
[root@localhost ~]# ovs-vsctl add-port br0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.1.221
# With more than two nodes, add one greX port per remote peer (gre0, gre1, ...)
## Attach br0 to the docker0 bridge
[root@localhost ~]# yum install bridge-utils
[root@localhost ~]# brctl addif docker0 br0

Node 2
[root@localhost ~]# ovs-vsctl add-port br0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.1.220
# With more than two nodes, add one greX port per remote peer (gre0, gre1, ...)
## Attach br0 to the docker0 bridge
[root@localhost ~]# yum install bridge-utils
[root@localhost ~]# brctl addif docker0 br0
[root@localhost yum.repos.d]# ovs-vsctl show
523304ce-9283-4cbf-bb4c-83c92506a3ea
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {remote_ip="192.168.1.220"}
    ovs_version: "2.5.2"
Check the docker0 subnets
Node 1
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:80:63:3f:36 brd ff:ff:ff:ff:ff:ff
    inet 172.17.1.1/24 brd 172.17.1.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:80ff:fe63:3f36/64 scope link
       valid_lft forever preferred_lft forever

Node 2
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:03:c4:ce:3c brd ff:ff:ff:ff:ff:ff
    inet 172.17.2.1/24 brd 172.17.2.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:3ff:fec4:ce3c/64 scope link
       valid_lft forever preferred_lft forever
Start test containers and add static routes
[root@localhost SOURCES]# docker run -it busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
322973677ef5: Pull complete
Digest: sha256:1828edd60c5efd34b2bf5dd3282ec0cc04d47b2ff9caa0b6d4f07a21d1c08084
Status: Downloaded newer image for busybox:latest
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:01:02
inet addr:172.17.1.2 Bcast:172.17.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:648 (648.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
[root@localhost yum.repos.d]# docker run -it busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
322973677ef5: Pull complete
Digest: sha256:1828edd60c5efd34b2bf5dd3282ec0cc04d47b2ff9caa0b6d4f07a21d1c08084
Status: Downloaded newer image for busybox:latest
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:02:02
inet addr:172.17.2.2 Bcast:172.17.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:648 (648.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ #
# On both nodes
[root@localhost SOURCES]# ip route add 172.17.0.0/16 dev docker0
Test
/ # ping 172.17.2.2
PING 172.17.2.2 (172.17.2.2): 56 data bytes
64 bytes from 172.17.2.2: seq=0 ttl=63 time=2.379 ms
64 bytes from 172.17.2.2: seq=1 ttl=63 time=7.660 ms
64 bytes from 172.17.2.2: seq=2 ttl=63 time=0.465 ms
64 bytes from 172.17.2.2: seq=3 ttl=63 time=0.757 ms
64 bytes from 172.17.2.2: seq=4 ttl=63 time=0.615 ms
64 bytes from 172.17.2.2: seq=5 ttl=63 time=0.928 ms
64 bytes from 172.17.2.2: seq=6 ttl=63 time=0.610 ms
64 bytes from 172.17.2.2: seq=7 ttl=63 time=0.688 ms
64 bytes from 172.17.2.2: seq=8 ttl=63 time=1.127 ms
64 bytes from 172.17.2.2: seq=9 ttl=63 time=1.152 ms
^C
--- 172.17.2.2 ping statistics ---
10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max = 0.465/1.638/7.660 ms

/ # ping 172.17.1.2
PING 172.17.1.2 (172.17.1.2): 56 data bytes
64 bytes from 172.17.1.2: seq=0 ttl=63 time=4.920 ms
64 bytes from 172.17.1.2: seq=1 ttl=63 time=0.494 ms
64 bytes from 172.17.1.2: seq=2 ttl=63 time=1.217 ms
64 bytes from 172.17.1.2: seq=3 ttl=63 time=1.236 ms
64 bytes from 172.17.1.2: seq=4 ttl=63 time=0.536 ms
^C
--- 172.17.1.2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.494/1.680/4.920 ms
Note: parts of the configuration above do not survive a reboot; you can re-apply them from a startup script.
# Bring up the br0 bridge
ip link set dev br0 up
# Attach br0 to the docker0 bridge
brctl addif docker0 br0
# Add the static route
ip route add 172.17.0.0/16 dev docker0
# Append these commands to /etc/rc.local
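Instead of /etc/rc.local, the same three commands can be run at boot from a oneshot systemd unit. A sketch under the assumption that the script path and unit name below are our own choices, not from the original setup:

```shell
# Hypothetical persistence setup: store the re-apply commands in a script
# and run it once per boot via a oneshot systemd unit.
cat > /usr/local/bin/ovs-docker-net.sh <<'EOF'
#!/bin/sh
ip link set dev br0 up
brctl addif docker0 br0
ip route add 172.17.0.0/16 dev docker0
EOF
chmod +x /usr/local/bin/ovs-docker-net.sh

cat > /etc/systemd/system/ovs-docker-net.service <<'EOF'
[Unit]
Description=Re-attach docker0 to the OVS bridge after boot
After=network.target openvswitch.service docker.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/ovs-docker-net.sh

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable ovs-docker-net.service
```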
That covers the main points of Docker multi-host networking with Open vSwitch. Related articles:
docker + openvswitch: networking between host and containers
Cross-host container networking in Docker with bridge/openvswitch