Cloud Native, Part 10: Docker Host Clustering with Docker Swarm
Posted by 孙和龚
Docker Host Clustering with Docker Swarm
- 1. Introduction to Docker Swarm
- 2. Docker Swarm concepts and architecture
- 3. Docker Swarm cluster deployment
- 4. Docker Swarm cluster applications
- 5. docker stack
1. Introduction to Docker Swarm
Docker Swarm is Docker's official cluster-management tool. It abstracts a group of Docker hosts into a single virtual host and manages all of the Docker resources on those hosts through one entry point. Swarm is similar to Kubernetes, but it is lighter-weight and offers fewer features.
- A cluster-management tool for Docker hosts
- Provided officially by Docker
- Built in since Docker 1.12
- Provides unified cluster management, scheduling the whole cluster's resources as one pool
- Lighter-weight than Kubernetes
- Supports scaling (growing or shrinking the number of replicas)
- Supports rolling updates and version rollback
- Supports service discovery
- Supports load balancing
- Supports a routing mesh for service traffic
2. Docker Swarm concepts and architecture
2.1 Architecture
2.2 Concepts
Node: a Docker host running the Docker Engine. Nodes come in two types:
- Manager node: manages the nodes in the cluster and assigns tasks to worker nodes
- Worker node: receives tasks from a manager node and runs them
# docker node ls
Service: a workload running on worker nodes, composed of one or more tasks
# docker service ls
Task: a container (and the application inside it) running on a worker node; the smallest scheduling unit in the cluster
3. Docker Swarm cluster deployment
Deploy a cluster with 3 manager nodes and 2 worker nodes. In addition, prepare one local container image registry server (Harbor) in advance.
3.1 Preparing the Harbor container image registry
3.2 Host preparation
3.2.1 Hostnames
# hostnamectl set-hostname xxx
Notes:
sm1  manager node 1
sm2  manager node 2
sm3  manager node 3
sw1  worker node 1
sw2  worker node 2
3.2.2 IP addresses
Edit the NIC configuration file:
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"    # changed to a static address
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
Append the following:
IPADDR="192.168.10.xxx"
PREFIX="24"
GATEWAY="192.168.10.2"
DNS1="119.29.29.29"
Notes:
sm1  manager node 1  192.168.10.10
sm2  manager node 2  192.168.10.11
sm3  manager node 3  192.168.10.12
sw1  worker node 1   192.168.10.13
sw2  worker node 2   192.168.10.14
3.2.3 Hostname and IP address resolution
Edit /etc/hosts on each host and add the hostname entries:
# vim /etc/hosts
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.10 sm1
192.168.10.11 sm2
192.168.10.12 sm3
192.168.10.13 sw1
192.168.10.14 sw2
3.2.4 Host time synchronization
Add a cron job to synchronize time against time1.aliyun.com:
# crontab -e
no crontab for root - using an empty one
crontab: installing new crontab
Verify the newly added cron job:
# crontab -l
0 */1 * * * ntpdate time1.aliyun.com
3.2.5 Host security settings
Stop and disable the firewall, then check its state:
# systemctl stop firewalld;systemctl disable firewalld
# firewall-cmd --state
not running
Modify the SELinux configuration file non-interactively:
# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
Reboot all hosts:
# reboot
After rebooting, verify that SELinux is disabled:
# sestatus
SELinux status: disabled
3.3 Docker installation
3.3.1 Installing docker-ce
Download the YUM repository file:
# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install docker-ce:
# yum -y install docker-ce
Enable the docker service at boot and start it:
# systemctl enable docker;systemctl start docker
3.3.2 Configuring the docker daemon to use Harbor
Add a daemon.json file so the docker daemon can use the Harbor registry:
# vim /etc/docker/daemon.json
# cat /etc/docker/daemon.json
{
  "insecure-registries": ["http://192.168.10.15"]
}
Restart the docker service:
# systemctl restart docker
Log in to Harbor:
# docker login 192.168.10.15
Username: admin
Password: 12345
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
3.4 Initializing the docker swarm cluster
3.4.1 Getting docker swarm command help
Get usage help for the docker swarm command:
# docker swarm --help
Usage: docker swarm COMMAND
Manage Swarm
Commands:
ca Display and rotate the root CA
init Initialize a swarm
join Join a swarm as a node and/or manager
join-token Manage join tokens
leave Leave the swarm
unlock Unlock swarm
unlock-key Manage the unlock key
update Update the swarm
3.4.2 Initializing on a manager node
This initialization is performed on sm1.
Initialize the cluster:
# docker swarm init --advertise-addr 192.168.10.10 --listen-addr 192.168.10.10:2377
Swarm initialized: current node (j42cwubrr70pwxdpmesn1cuo6) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-297iry1n2jeh30oopsjecvsco1uuvl15t2jz6jxabdpf0xkry4-6pddlyiq5f1i35w8d7q4bl1co 192.168.10.10:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
Notes:
--advertise-addr: when the host has multiple NICs, selects the address advertised for other nodes to connect to this manager
--listen-addr: the listen address used to carry cluster traffic
3.4.3 Adding worker nodes to the cluster
Join the cluster using the token generated during initialization:
[root@sw1 ~]# docker swarm join --token SWMTKN-1-297iry1n2jeh30oopsjecvsco1uuvl15t2jz6jxabdpf0xkry4-6pddlyiq5f1i35w8d7q4bl1co 192.168.10.10:2377
This node joined a swarm as a worker.
View the nodes that have joined the cluster:
# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
j42cwubrr70pwxdpmesn1cuo6 * sm1 Ready Active Leader 20.10.12
4yb34kuma6i9g5hf30vkxm9yc sw1 Ready Active 20.10.12
If the token has expired, a new join token can be generated with the following command.
Regenerate the token used for adding worker nodes:
[root@sm1 ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-297iry1n2jeh30oopsjecvsco1uuvl15t2jz6jxabdpf0xkry4-6pddlyiq5f1i35w8d7q4bl1co 192.168.10.10:2377
Join the cluster:
[root@sw2 ~]# docker swarm join --token SWMTKN-1-297iry1n2jeh30oopsjecvsco1uuvl15t2jz6jxabdpf0xkry4-6pddlyiq5f1i35w8d7q4bl1co 192.168.10.10:2377
This node joined a swarm as a worker.
Check node status:
# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
j42cwubrr70pwxdpmesn1cuo6 * sm1 Ready Active Leader 20.10.12
4yb34kuma6i9g5hf30vkxm9yc sw1 Ready Active 20.10.12
mekitdu1xbpcttgupwuoiwg91 sw2 Ready Active 20.10.12
3.4.4 Adding manager nodes to the cluster
Generate the token used for adding manager nodes:
[root@sm1 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join --token SWMTKN-1-297iry1n2jeh30oopsjecvsco1uuvl15t2jz6jxabdpf0xkry4-7g85apo82mwz8ttmgdr7onfhu 192.168.10.10:2377
Join the cluster:
[root@sm2 ~]# docker swarm join --token SWMTKN-1-297iry1n2jeh30oopsjecvsco1uuvl15t2jz6jxabdpf0xkry4-7g85apo82mwz8ttmgdr7onfhu 192.168.10.10:2377
This node joined a swarm as a manager.
Join the cluster:
[root@sm3 ~]# docker swarm join --token SWMTKN-1-297iry1n2jeh30oopsjecvsco1uuvl15t2jz6jxabdpf0xkry4-7g85apo82mwz8ttmgdr7onfhu 192.168.10.10:2377
This node joined a swarm as a manager.
Check node status:
# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
j42cwubrr70pwxdpmesn1cuo6 * sm1 Ready Active Leader 20.10.12
nzpmehm8n87b9a17or2el10lc sm2 Ready Active Reachable 20.10.12
xc2x9z1b33rwdfxc5sdpobf0i sm3 Ready Active Reachable 20.10.12
4yb34kuma6i9g5hf30vkxm9yc sw1 Ready Active 20.10.12
mekitdu1xbpcttgupwuoiwg91 sw2 Ready Active 20.10.12
3.4.5 Simulating a manager node failure
3.4.5.1 Stop the docker service and check the result
Stop the docker service:
[root@sm1 ~]# systemctl stop docker
Check node status: sm1 is unreachable with status Unknown, and a new leader has been elected.
[root@sm2 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
j42cwubrr70pwxdpmesn1cuo6 sm1 Unknown Active Unreachable 20.10.12
nzpmehm8n87b9a17or2el10lc * sm2 Ready Active Leader 20.10.12
xc2x9z1b33rwdfxc5sdpobf0i sm3 Ready Active Reachable 20.10.12
4yb34kuma6i9g5hf30vkxm9yc sw1 Ready Active 20.10.12
mekitdu1xbpcttgupwuoiwg91 sw2 Ready Active 20.10.12
3.4.5.2 Start the docker service and check the result
Start docker again:
[root@sm1 ~]# systemctl start docker
sm1 is reachable again, but it is no longer the leader:
[root@sm1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
j42cwubrr70pwxdpmesn1cuo6 * sm1 Ready Active Reachable 20.10.12
nzpmehm8n87b9a17or2el10lc sm2 Ready Active Leader 20.10.12
xc2x9z1b33rwdfxc5sdpobf0i sm3 Ready Active Reachable 20.10.12
4yb34kuma6i9g5hf30vkxm9yc sw1 Ready Active 20.10.12
mekitdu1xbpcttgupwuoiwg91 sw2 Ready Active 20.10.12
4. Docker Swarm cluster applications
4.1 Preparing container images
Prepare container images in several versions for later testing.
4.1.1 Version v1
Generate the v1 website file:
[root@harbor nginximg]# vim index.html
[root@harbor nginximg]# cat index.html
v1
Write a Dockerfile for building the container image:
[root@harbor nginximg]# vim Dockerfile
[root@harbor nginximg]# cat Dockerfile
FROM nginx:latest
MAINTAINER 'tom<tom@kubemsb.com>'
ADD index.html /usr/share/nginx/html
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
CMD /usr/sbin/nginx
Build the container image with docker build:
[root@harbor nginximg]# docker build -t 192.168.10.15/library/nginx:v1 .
Log in to Harbor:
# docker login 192.168.10.15
Username: admin
Password: 12345
Push the container image to Harbor:
# docker push 192.168.10.15/library/nginx:v1
4.1.2 Version v2
Generate the v2 website file:
[root@harbor nginximg]# vim index.html
[root@harbor nginximg]# cat index.html
v2
Write a Dockerfile for building the container image:
[root@harbor nginximg]# vim Dockerfile
[root@harbor nginximg]# cat Dockerfile
FROM nginx:latest
MAINTAINER 'tom<tom@kubemsb.com>'
ADD index.html /usr/share/nginx/html
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
CMD /usr/sbin/nginx
Build the container image with docker build:
[root@harbor nginximg]# docker build -t 192.168.10.15/library/nginx:v2 .
Push the image to Harbor:
[root@harbor nginximg]# docker push 192.168.10.15/library/nginx:v2
4.2 Publishing a service
In docker swarm, what is exposed to the outside is a service, not a container.
For high availability, multiple containers are allowed to run at the same time to back one service; if one container dies, another takes over automatically.
4.2.1 Using docker service ls
List services.
Run this on a manager node:
[root@sm1 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
4.2.2 Publishing a service
[root@sm1 ~]# docker service create --name nginx-svc-1 --replicas 1 --publish 80:80 192.168.10.15/library/nginx:v1
ucif0ibkjqrd7meal6vqwnduz
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
Notes
* Creates a service named nginx-svc-1
* --replicas 1 requests one replica
* --publish 80:80 publishes the service's internal port 80 to port 80 on the external network
* The image used is `192.168.10.15/library/nginx:v1`
4.2.3 Listing the published service
[root@sm1 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ucif0ibkjqrd nginx-svc-1 replicated 1/1 192.168.10.15/library/nginx:v1 *:80->80/tcp
4.2.4 Viewing the published service's containers
[root@sm1 ~]# docker service ps nginx-svc-1
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
47t0s0egf6xf nginx-svc-1.1 192.168.10.15/library/nginx:v1 sw1 Running Running 48 minutes ago
[root@sw1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1bdf8981f511 192.168.10.15/library/nginx:v1 "/docker-entrypoint.…" 53 minutes ago Up 53 minutes 80/tcp nginx-svc-1.1.47t0s0egf6xf1n8m0c0jez3q0
4.2.5 Accessing the published service
[root@sm1 ~]# curl http://192.168.10.10
v1
[root@sm1 ~]# curl http://192.168.10.11
v1
[root@sm1 ~]# curl http://192.168.10.12
v1
[root@sm1 ~]# curl http://192.168.10.13
v1
[root@sm1 ~]# curl http://192.168.10.14
v1
The service can also be accessed from hosts outside the cluster.
4.3 Scaling a service up
Use scale to set the number of replicas:
[root@sm1 ~]# docker service scale nginx-svc-1=2
nginx-svc-1 scaled to 2
overall progress: 2 out of 2 tasks
1/2: running [==================================================>]
2/2: running [==================================================>]
verify: Service converged
[root@sm1 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ucif0ibkjqrd nginx-svc-1 replicated 2/2 192.168.10.15/library/nginx:v1 *:80->80/tcp
[root@sm1 ~]# docker service ps nginx-svc-1
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
47t0s0egf6xf nginx-svc-1.1 192.168.10.15/library/nginx:v1 sw1 Running Running about an hour ago
oy16nuh5udn0 nginx-svc-1.2 192.168.10.15/library/nginx:v1 sw2 Running Running 57 seconds ago
[root@sw1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1bdf8981f511 192.168.10.15/library/nginx:v1 "/docker-entrypoint.…" About an hour ago Up About an hour 80/tcp nginx-svc-1.1.47t0s0egf6xf1n8m0c0jez3q0
[root@sw2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0923c0d10223 192.168.10.15/library/nginx:v1 "/docker-entrypoint.…" About a minute ago Up About a minute 80/tcp nginx-svc-1.2.oy16nuh5udn0s1hda5bcpr9hd
Question: the service currently has only 2 replicas. If it is scaled to 3 replicas, how will the cluster place them?
[root@sm1 ~]# docker service scale nginx-svc-1=3
nginx-svc-1 scaled to 3
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service converged
[root@sm1 ~]# docker service ps nginx-svc-1
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
47t0s0egf6xf nginx-svc-1.1 192.168.10.15/library/nginx:v1 sw1 Running Running about an hour ago
oy16nuh5udn0 nginx-svc-1.2 192.168.10.15/library/nginx:v1 sw2 Running Running 12 minutes ago
mn9fwxqbc9d1 nginx-svc-1.3 192.168.10.15/library/nginx:v1 sm1 Running Running 9 minutes ago
Note:
Once a service is scaled far enough, manager nodes also take part in running the workload.
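The placement seen above can be pictured as a simple "spread" strategy: each new task goes to the eligible node currently running the fewest tasks. A minimal Python sketch of that idea (an illustration only, not Swarm's actual scheduler code):

```python
# Toy "spread" scheduler: place each new task on the node that currently
# runs the fewest tasks (ties broken by node name). Illustration only --
# not Docker Swarm's real scheduling code.
def spread_schedule(nodes, num_tasks):
    load = {node: 0 for node in nodes}
    placements = []
    for _ in range(num_tasks):
        target = min(load, key=lambda n: (load[n], n))  # least-loaded node
        load[target] += 1
        placements.append(target)
    return placements

# With 3 replicas and 3 eligible nodes, every node gets one task,
# which is why the manager sm1 ends up running a replica too.
print(spread_schedule(["sm1", "sw1", "sw2"], 3))  # → ['sm1', 'sw1', 'sw2']
```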
4.4 Scaling a service down
[root@sm1 ~]# docker service scale nginx-svc-1=2
nginx-svc-1 scaled to 2
overall progress: 2 out of 2 tasks
1/2: running [==================================================>]
2/2: running [==================================================>]
verify: Service converged
[root@sm1 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ucif0ibkjqrd nginx-svc-1 replicated 2/2 192.168.10.15/library/nginx:v1 *:80->80/tcp
[root@sm1 ~]# docker service ps nginx-svc-1
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
47t0s0egf6xf nginx-svc-1.1 192.168.10.15/library/nginx:v1 sw1 Running Running 2 hours ago
oy16nuh5udn0 nginx-svc-1.2 192.168.10.15/library/nginx:v1 sw2 Running Running 29 minutes ago
4.5 Load balancing
When a service contains multiple containers, successive requests are routed to each container in round-robin fashion.
Modify the web page in the container on host sw1:
[root@sw1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1bdf8981f511 192.168.10.15/library/nginx:v1 "/docker-entrypoint.…" About an hour ago Up About an hour 80/tcp nginx-svc-1.1.47t0s0egf6xf1n8m0c0jez3q0
[root@sw1 ~]# docker exec -it 1bdf bash
root@1bdf8981f511:/# echo "sw1 web" > /usr/share/nginx/html/index.html
root@1bdf8981f511:/# exit
Modify the web page in the container on host sw2:
[root@sw2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0923c0d10223 192.168.10.15/library/nginx:v1 "/docker-entrypoint.…" 42 minutes ago Up 42 minutes 80/tcp nginx-svc-1.2.oy16nuh5udn0s1hda5bcpr9hd
[root@sw2 ~]# docker exec -it 0923 bash
root@0923c0d10223:/# echo "sw2 web" > /usr/share/nginx/html/index.html
root@0923c0d10223:/# exit
[root@sm1 ~]# curl http://192.168.10.10
sw1 web
[root@sm1 ~]# curl http://192.168.10.10
sw2 web
[root@sm1 ~]# curl http://192.168.10.10
sw1 web
[root@sm1 ~]# curl http://192.168.10.10
sw2 web
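The alternating responses come from the routing mesh: a request to any node's published port is answered by the service's tasks in rotation. The round-robin selection itself can be sketched conceptually as:

```python
from itertools import cycle

# Conceptual round-robin selection over a service's tasks, as the
# routing mesh behaves above. The two response strings mirror the
# pages written into the containers on sw1 and sw2.
backends = cycle(["sw1 web", "sw2 web"])
responses = [next(backends) for _ in range(4)]
print(responses)  # → ['sw1 web', 'sw2 web', 'sw1 web', 'sw2 web']
```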
4.6 Deleting a service
[root@sm1 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ucif0ibkjqrd nginx-svc-1 replicated 2/2 192.168.10.15/library/nginx:v1 *:80->80/tcp
[root@sm1 ~]# docker service rm nginx-svc-1
nginx-svc-1
[root@sm1 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
4.7 Updating a service's version
[root@sm1 ~]# docker service create --name nginx-svc --replicas=1 --publish 80:80 192.168.10.15/library/nginx:v1
yz3wq6f1cgf10vtq5ne4qfwjz
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
[root@sm1 ~]# curl http://192.168.10.10
v1
[root@sm1 ~]# docker service update nginx-svc --image 192.168.10.15/library/nginx:v2
nginx-svc
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
[root@sm1 ~]# curl http://192.168.10.10
v2
4.8 Rolling back a service's version
[root@sm1 ~]# docker service update nginx-svc --image 192.168.10.15/library/nginx:v1
nginx-svc
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
4.9 Rolling updates with batch intervals
# docker service create --name nginx-svc --replicas 60 --publish 80:80 192.168.10.15/library/nginx:v1
pqrt561dckg2wfpect3vf9ll0
overall progress: 60 out of 60 tasks
verify: Service converged
[root@sm1 ~]# docker service update --replicas 60 --image 192.168.10.15/library/nginx:v2 --update-parallelism 5 --update-delay 30s nginx-svc
nginx-svc
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service converged
Notes
* --update-parallelism 5 sets the number of tasks updated in parallel
* --update-delay 30s sets the interval between update batches
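With these flags, 60 replicas are updated in batches of 5 with a 30-second pause between batches: 12 batches and 11 pauses, about 330 seconds of delay in total. A rough sketch of that batching arithmetic (an illustration, not Docker's implementation):

```python
# Batched rolling-update plan: tasks are updated `parallelism` at a
# time, with `delay_s` seconds between consecutive batches.
def rolling_update_plan(replicas, parallelism, delay_s):
    tasks = list(range(1, replicas + 1))
    batches = [tasks[i:i + parallelism]
               for i in range(0, len(tasks), parallelism)]
    total_delay = delay_s * (len(batches) - 1)  # no pause after the last batch
    return batches, total_delay

# Numbers from the example above: 60 replicas, 5 in parallel, 30 s delay.
batches, total_delay = rolling_update_plan(60, 5, 30)
print(len(batches), total_delay)  # → 12 330
```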
Rolling updates in docker swarm leave containers in the exited state on the nodes; consider cleaning them up.
The command is as follows:
[root@sw1 ~]# docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
4.10 The replica controller
If one of a service's containers is stopped and removed, swarm recreates it to maintain the desired replica count:
[root@sm1 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
yz3wq6f1cgf1 nginx-svc replicated 3/3 192.168.10.15/library/nginx:v2 *:80->80/tcp
[root@sm1 ~]# docker service ps nginx-svc
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
x78l0santsbb nginx-svc.1 192.168.10.15/library/nginx:v2 sw2 Running Running 3 hours ago
ura9isskfxku \_ nginx-svc.1 192.168.10.15/library/nginx:v1 sm1 Shutdown Shutdown 3 hours ago
z738gvgazish \_ nginx-svc.1 192.168.10.15/library/nginx:v2 sw1 Shutdown Shutdown 3 hours ago
3qsrkkxn32bl \_ nginx-svc.1 192.168.10.15/library/nginx:v1 sm3 Shutdown Shutdown 3 hours ago
psbi0mxu3amy nginx-svc.2 192.168.10.15/library/nginx:v2 sw1 Running Running 3 hours ago
zpjw39bwhd78 nginx-svc.3 192.168.10.15/library/nginx:v2 sm1 Running Running 3 hours ago
[root@sm1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
81fffd9132d8 192.168.10.15/library/nginx:v2 "/docker-entrypoint.…" 3 hours ago Up 3 hours 80/tcp nginx-svc.3.zpjw39bwhd78pw49svpy4q8zd
[root@sm1 ~]# docker stop 81fffd9132d8;docker rm 81fffd9132d8
81fffd9132d8
81fffd9132d8
[root@sm1 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
yz3wq6f1cgf1 nginx-svc replicated 3/3 192.168.10.15/library/nginx:v2 *:80->80/tcp
[root@sm1 ~]# docker service ps nginx-svc
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
x78l0santsbb nginx-svc.1 192.168.10.15/library/nginx:v2 sw2 Running Running 3 hours ago
ura9isskfxku \_ nginx-svc.1 192.168.10.15/library/nginx:v1 sm1 Shutdown Shutdown 3 hours ago
z738gvgazish \_ nginx-svc.1 192.168.10.15/library/nginx:v2 sw1 Shutdown Shutdown 3 hours ago
3qsrkkxn32bl \_ nginx-svc.1 192.168.10.15/library/nginx:v1 sm3 Shutdown Shutdown 3 hours ago
psbi0mxu3amy nginx-svc.2 192.168.10.15/library/nginx:v2 sw1 Running Running 3 hours ago
qv6ya3crz1fj nginx-svc.3 192.168.10.15/library/nginx:v2 sm1 Running Running 13 seconds ago
zpjw39bwhd78 \_ nginx-svc.3 192.168.10.15/library/nginx:v2 sm1 Shutdown Failed 19 seconds ago "task: non-zero exit (137)"
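The output shows swarm acting as a replica controller: the manager compares the desired replica count against the tasks actually running and starts a replacement for any shortfall. A conceptual sketch of one reconciliation step (illustration only):

```python
# One reconciliation step: if fewer tasks are running than desired,
# start replacements to close the gap. Swarm's real controller runs
# on the manager nodes; this only illustrates the idea.
def reconcile(desired, running):
    shortfall = desired - len(running)
    replacements = [f"replacement-{i + 1}" for i in range(shortfall)]
    return running + replacements

# nginx-svc.3's container was stopped and removed above, leaving 2 of 3:
state = reconcile(desired=3, running=["nginx-svc.1", "nginx-svc.2"])
print(state)  # → ['nginx-svc.1', 'nginx-svc.2', 'replacement-1']
```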
4.11 Publishing a service on a specific network
[root@sm1 ~]# docker network create -d overlay tomcat-net
mrkgccdfddy8zg92ja6fpox7p
[root@sm1 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
5ba369c13795 bridge bridge local
54568abb541a docker_gwbridge bridge local
4edcb5c4a324 host host local
l6xmfxiiseqk ingress overlay swarm
5d06d748c9c7 none null local
mrkgccdfddy8 tomcat-net overlay swarm
[root@sm1 ~]# docker network inspect tomcat-net
[
    {
        "Name": "tomcat-net",
        "Id": "mrkgccdfddy8zg92ja6fpox7p",
        "Created": "2022-02-16T13:56:52.338589006Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.1.0/24",
                    "Gateway": "10.0.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": null
    }
]
Notes:
This creates an overlay network named tomcat-net, a layer-2 network: containers attached to it can reach one another even when they run on different hosts.
# docker service create --name tomcat \
    --network tomcat-net \
    -p 8080:8080 \
    --replicas 2 \
    tomcat:7.0.96-jdk8-openjdk
Notes:
Creates a service named tomcat that uses the overlay network created above.
[root@sm1 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
wgqkz8vymxkr tomcat replicated 2/2 tomcat:7.0.96-jdk8-openjdk *:8080->8080/tcp
[root@sm1 ~]# docker service ps tomcat
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
fsx1fnssbmtg tomcat.1 tomcat:7.0.96-jdk8-openjdk sm3 Running Running 49 seconds ago
gq0ogycj7orb tomcat.2 tomcat:7.0.96-jdk8-openjdk sm2 Running Running 58 seconds ago