Docker Networking: Overlay Mode
This article is translated from the official Docker documentation: https://docs.docker.com/network/overlay/
The overlay network driver creates a distributed network among multiple
Docker daemon hosts. This network sits on top of (overlays) the host-specific
networks, allowing containers connected to it (including swarm service
containers) to communicate securely when encryption is enabled. Docker
transparently handles routing of each packet to and from the correct Docker
daemon host and the correct destination container.
When you initialize a swarm or join a Docker host to an existing swarm, two
new networks are created on that Docker host:
- an overlay network called ingress, which handles control and data traffic related to swarm services, and
- a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
You can create user-defined overlay networks using docker network create,
in the same way that you can create user-defined bridge networks. Services
or containers can be connected to more than one network at a time. Services or
containers can only communicate across networks they are each connected to.
Although you can connect both swarm services and standalone containers to an
overlay network, the default behaviors and configuration concerns are different.
For that reason, the rest of this topic is divided into operations that apply to
all overlay networks, those that apply to swarm service networks, and those that
apply to overlay networks used by standalone containers.
To create an overlay network for use with swarm services, use a command like
the following:
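For example (the network name my-overlay is illustrative):

```shell
# Create an overlay network for use by swarm services
# ("my-overlay" is an example name)
docker network create -d overlay my-overlay
```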
To create an overlay network which can be used by swarm services or
standalone containers to communicate with other standalone containers running on
other Docker daemons, add the --attachable flag:
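For example (the network name is again illustrative):

```shell
# Create an attachable overlay network, usable by standalone containers
# as well as swarm services ("my-attachable-overlay" is an example name)
docker network create -d overlay --attachable my-attachable-overlay
```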
You can specify the IP address range, subnet, gateway, and other options. See
docker network create --help for details.
All swarm service management traffic is encrypted by default, using the
AES algorithm in
GCM mode. Manager nodes in the swarm rotate the key used to encrypt gossip data
every 12 hours.
To encrypt application data as well, add --opt encrypted when creating the
overlay network. This enables IPSEC encryption at the level of the vxlan. This
encryption imposes a non-negligible performance penalty, so you should test this
option before using it in production.
When you enable overlay encryption, Docker creates IPSEC tunnels between all the
nodes where tasks are scheduled for services attached to the overlay network.
These tunnels also use the AES algorithm in GCM mode and manager nodes
automatically rotate the keys every 12 hours.
You can use the overlay network feature with both --opt encrypted --attachable
and attach unmanaged containers to that network:
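For example (the network name is illustrative):

```shell
# Create an encrypted overlay network that standalone (unmanaged)
# containers can also join ("my-encrypted-net" is an example name)
docker network create -d overlay --opt encrypted --attachable my-encrypted-net
```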
Most users never need to configure the ingress network, but Docker allows you
to do so. This can be useful if the automatically-chosen subnet conflicts with
one that already exists on your network, or you need to customize other low-level
network settings such as the MTU.
Customizing the ingress network involves removing and recreating it. This is
usually done before you create any services in the swarm. If you have existing
services which publish ports, those services need to be removed before you can
remove the ingress network.
During the time that no ingress network exists, existing services which do not
publish ports continue to function but are not load-balanced. This affects
services which publish ports, such as a WordPress service which publishes port 80.
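The procedure can be sketched as follows; run it on a manager node, and note that the subnet, gateway, and MTU values below are only examples:

```shell
# 1. Remove any services that publish ports, then remove the ingress network
#    (Docker asks for confirmation before removing it)
docker network rm ingress

# 2. Recreate it with custom settings; --ingress marks the new network
#    as the swarm routing-mesh network
docker network create \
  --driver overlay \
  --ingress \
  --subnet=10.11.0.0/16 \
  --gateway=10.11.0.2 \
  --opt com.docker.network.driver.mtu=1200 \
  my-ingress

# 3. Recreate the services you removed in step 1
```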
The docker_gwbridge is a virtual bridge that connects the overlay networks
(including the ingress network) to an individual Docker daemon's physical
network. Docker creates it automatically when you initialize a swarm or join a
Docker host to a swarm, but it is not a Docker device. It exists in the kernel
of the Docker host. If you need to customize its settings, you must do so before
joining the Docker host to the swarm, or after temporarily removing the host
from the swarm.
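A sketch of creating docker_gwbridge with custom settings before (re)joining the swarm; the subnet and option values are illustrative:

```shell
# On a host that is not currently in the swarm (or after `docker swarm leave`),
# create docker_gwbridge manually so the swarm uses these settings on join
docker network create \
  --subnet 10.11.0.0/16 \
  --opt com.docker.network.bridge.name=docker_gwbridge \
  --opt com.docker.network.bridge.enable_icc=false \
  --opt com.docker.network.bridge.enable_ip_masquerade=true \
  docker_gwbridge
```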
Swarm services connected to the same overlay network effectively expose all
ports to each other. For a port to be accessible outside of the service, that
port must be published using the -p or --publish flag on docker service create or docker service update. Both the legacy colon-separated syntax and
the newer comma-separated value syntax are supported. The longer syntax is
preferred because it is somewhat self-documenting.
<table>
<thead>
<tr>
<th>Flag value</th>
<th>Description</th>
</tr>
</thead>
<tr>
<td><tt>-p 8080:80</tt> or<br /><tt>-p published=8080,target=80</tt></td>
<td>Map TCP port 80 on the service to port 8080 on the routing mesh.</td>
</tr>
<tr>
<td><tt>-p 8080:80/udp</tt> or<br /><tt>-p published=8080,target=80,protocol=udp</tt></td>
<td>Map UDP port 80 on the service to port 8080 on the routing mesh.</td>
</tr>
<tr>
<td><tt>-p 8080:80/tcp -p 8080:80/udp</tt> or <br /><tt>-p published=8080,target=80,protocol=tcp -p published=8080,target=80,protocol=udp</tt></td>
<td>Map TCP port 80 on the service to TCP port 8080 on the routing mesh, and map UDP port 80 on the service to UDP port 8080 on the routing mesh.</td>
</tr>
</table>
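The flags in the table above are used on service creation; for instance (the service name my-web and the nginx image are illustrative):

```shell
# Publish TCP port 80 in the service as port 8080 on the routing mesh,
# using the longer, self-documenting syntax
docker service create \
  --name my-web \
  --publish published=8080,target=80 \
  --replicas 2 \
  nginx
```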
By default, swarm services which publish ports do so using the routing mesh.
When you connect to a published port on any swarm node (whether it is running a
given service or not), you are redirected to a worker which is running that
service, transparently. Effectively, Docker acts as a load balancer for your
swarm services. Services using the routing mesh are running in virtual IP (VIP)
mode. Even a service running on each node (by means of the --mode global
flag) uses the routing mesh. When using the routing mesh, there is no guarantee
about which Docker node services client requests.
To bypass the routing mesh, you can start a service using DNS Round Robin
(DNSRR) mode, by setting the --endpoint-mode flag to dnsrr. You must run
your own load balancer in front of the service. A DNS query for the service name
on the Docker host returns a list of IP addresses for the nodes running the
service. Configure your load balancer to consume this list and balance the
traffic across the nodes.
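For example (the service name, network name, and image are illustrative):

```shell
# Run a service in DNSRR mode; DNS queries for the service name then return
# the IPs of the individual tasks instead of a single virtual IP
docker service create \
  --name my-dnsrr-service \
  --endpoint-mode dnsrr \
  --network my-overlay \
  --replicas 2 \
  nginx
```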
By default, control traffic relating to swarm management and traffic to and from
your applications runs over the same network, though the swarm control traffic
is encrypted. You can configure Docker to use separate network interfaces for
handling the two different types of traffic. When you initialize or join the
swarm, specify --advertise-addr and --datapath-addr separately. You must do
this for each node joining the swarm.
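For example, when initializing the swarm (both addresses are illustrative and must belong to different interfaces on the node):

```shell
# Use one interface for swarm control traffic and another for
# application data traffic
docker swarm init \
  --advertise-addr 10.0.0.1 \
  --datapath-addr 192.168.1.1
```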
The ingress network is created without the --attachable flag, which means
that only swarm services can use it, and not standalone containers. You can
connect standalone containers to user-defined overlay networks which are created
with the --attachable flag. This gives standalone containers running on
different Docker daemons the ability to communicate without the need to set up
routing on the individual Docker daemon hosts.
For most situations, you should connect to the service name, which is load-balanced and handled by all containers ("tasks") backing the service. To get a list of all tasks backing the service, do a DNS lookup for tasks.<service-name>.
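For example, from inside a container attached to the same overlay network (the service name my-web is illustrative):

```shell
# Resolve the load-balanced service name, then list the IPs of all
# individual tasks backing the service
nslookup my-web
nslookup tasks.my-web
```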
A Quick Test of Docker CE Overlay Networking
Among Docker's networking modes, bridge mode provides connectivity between containers on the same host. To let containers on different hosts communicate across nodes, you need an overlay network.
In docker swarm mode, containers created via docker service create use the overlay network named ingress by default. In this mode, a service starts containers on different nodes (hosts), and the container IPs across those nodes fall within the same subnet.
Likewise, if you create multiple services, say nginx and viz, the containers of both services also land in that same subnet. As shown below, on one node the container of service nginx has IP 10.255.0.4 and the container of service viz has IP 10.255.0.6; both are on the ingress network.
#docker network inspect ingress
..............
        "Internal": false,
        "Attachable": false,
        "Ingress": true,
        "Containers": {
            "00bf0cc88d8363581b10a6a64a34cc2864d51926ecaa445fba7af0bc488d553d": {
                "Name": "nginxtest.1.5yukmeotwnl2v0smmhy26bwkg",
                "EndpointID": "064080c4efc9048bf0b0a44ab1d52d63c627f277d9d589be8cc9723c081e2616",
                "MacAddress": "02:42:0a:ff:00:04",
                "IPv4Address": "10.255.0.4/16",
                "IPv6Address": ""
            },
            "ac7ec55f931e1a4c1ece6e56a935ac0871ab6fe88e9eae35e1671513c9204b77": {
                "Name": "viz.1.zhmcw7mtvzzrma31l3letnmxp",
                "EndpointID": "0477642232e30c34c9bdc6cb8e83b0d2726a5169df8daa8c47225b8d16163ec7",
                "MacAddress": "02:42:0a:ff:00:06",
                "IPv4Address": "10.255.0.6/16",
                "IPv6Address": ""
            },
            "ingress-sbox": {
                "Name": "ingress-endpoint",
                "EndpointID": "61ae637e13284274480a1f9928bd7c627543336875a64dbdd272850285252136",
                "MacAddress": "02:42:0a:ff:00:02",
                "IPv4Address": "10.255.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "256"
..............
If you do not want multiple services to share one subnet, for example in a multi-tenant scenario, you need to create a custom overlay network so that each tenant's services live in their own subnet.
Create an overlay network named mynet:
# docker network create mynet -d overlay
7njqr6p45krfw6msq8wgxdqu3
You can also pass other options, such as --subnet to define the subnet range.
Inspect the basic information of mynet:
# docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "7njqr6p45krfw6msq8wgxdqu3",
        "Created": "0001-01-01T00:00:00Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4096"
        },
        "Labels": null
    }
]
As shown above, the newly created mynet network has VXLAN ID 4096, different from the ingress network's VXLAN ID 256. Also, since no container has joined mynet yet, no IP address range has been allocated to it.
Create a service that uses the mynet network:
docker service create --replicas 2 --name nginx_test01 --network mynet nginx
Once the service is up, inspect mynet again:
#docker network inspect mynet
..............
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "Containers": {
            "a67b21bdc3d1bb144816e436f5cc5a303539ae3db8a7564236740fc46233a665": {
                "Name": "nginx_test01.1.xscom3xofubdgzp1xixt69r93",
                "EndpointID": "0dbd0fca51d0c477ee653e6f0f12048e38acb6e1a404fe1f9ae4e6506563cfce",
                "MacAddress": "02:42:0a:00:00:03",
                "IPv4Address": "10.0.0.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4096"
..............
You can see that a container has joined mynet, and its subnet has accordingly become 10.0.0.0/24.
Now verify whether containers on the two different subnets can reach each other.
Enter the container on the mynet network:
docker exec -it a67b21bdc3d1 bash
[root@a67b21bdc3d1 /]# ping 10.255.0.6   # ping a container on the ingress network
PING 10.255.0.6 (10.255.0.6) 56(84) bytes of data.
The ping fails, which shows that VXLAN isolation between the two networks is in effect. (If the two networks turn out to be mutually reachable, you may need to upgrade your system kernel.)