Deploying Applications on a Docker Swarm Cluster
In the past we created containers with docker run; to run them on a Swarm cluster, we simply replace that prefix with docker service create.
It is best to set up a registry that serves images to all of the Docker hosts; otherwise every host must keep a local copy of each image. We therefore build a private registry to provide the required images.
In this lab, node1 doubles as the registry server. Below we load the registry image locally and verify it.
Base environment
All hosts run CentOS 7 with Docker 1.12.6.
node1 192.168.1.107 NTP server, registry
node2 192.168.1.136 builds the httpd image from a Dockerfile, manager node
node3 192.168.1.137 worker node
Configure time synchronization
[root@node1 ~]# yum -y install ntp
[root@node1 ~]# vim /etc/ntp.conf
Add the following two lines:
server 127.127.1.0
fudge 127.127.1.0 stratum 8
[root@node1 ~]# systemctl restart ntpd
[root@node1 ~]# firewall-cmd --permanent --add-port=123/udp
[root@node1 ~]# firewall-cmd --reload
Synchronize time on node2 and node3 against node1:
[root@node2 ~]# /usr/sbin/ntpdate 192.168.1.107
21 Aug 15:31:23 ntpdate[4304]: step time server 192.168.1.107 offset 0.621419 sec
[root@node3 ~]# /usr/sbin/ntpdate 192.168.1.107
21 Aug 15:31:52 ntpdate[4239]: adjust time server 192.168.1.107 offset -0.004892 sec
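A one-shot ntpdate will drift again over time. One common approach, not part of the original lab but shown here as a sketch, is to re-run the sync from cron on the worker nodes (the 30-minute interval is an arbitrary choice):
[root@node2 ~]# crontab -e
*/30 * * * * /usr/sbin/ntpdate 192.168.1.107 >/dev/null 2>&1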
Set the hostnames of the three hosts to node1, node2, and node3 respectively (node1 shown as an example):
[root@node1 ~]# vim /etc/hostname
node1
Enable IP forwarding on all servers:
[root@node1 ~]# vim /etc/sysctl.conf
Add:
net.ipv4.ip_forward=1
[root@node1 ~]# sysctl -p
net.ipv4.ip_forward = 1
Disable SELinux on all servers:
[root@node1 ~]# setenforce 0
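Note that setenforce 0 only lasts until the next reboot. To make the change permanent you would also edit /etc/selinux/config, for example (a sketch):
[root@node1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config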
Configure name resolution on all servers:
[root@node1 ~]# vim /etc/hosts
Add:
192.168.1.107 node1
192.168.1.136 node2
192.168.1.137 node3
Stop the firewall on all servers (or leave it running and open the required ports):
[root@node1 ~]# systemctl stop firewalld.service
Configure SSH key-based login for all nodes
This only needs to be done on node1 (press Enter at every prompt):
[root@node1 ~]# ssh-keygen
Distribute the public key to each node
Again, run this on node1:
[root@node1 ~]# ssh-copy-id node1
[root@node1 ~]# ssh-copy-id node2
[root@node1 ~]# ssh-copy-id node3
Test the key-based login:
[root@node1 ~]# for N in $(seq 1 3); do ssh node$N hostname; done
node1
node2
node3
Install Docker on all servers:
[root@node1 ~]# yum -y install docker
Check the version (must be 1.12 or later):
[root@node1 ~]# docker -v
Docker version 1.12.6, build 88a4867/1.12.6
Enable Docker at boot and start it:
[root@node1 ~]# systemctl enable docker
[root@node1 ~]# systemctl start docker
Initialize the Swarm cluster on node1:
[root@node1 ~]# docker swarm init --advertise-addr 192.168.1.107
Swarm initialized: current node (9cxfsbt5294ya0wn7ji0gs3ji) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-0mlhcveie1hpqsj8m9recv40bmv6h3nvvf05rqjqkvdxs4dqpp-7z2advdggin0g9yqvsr4bg4q9 \
192.168.1.107:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
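If you lose the join token later, any manager can reprint the full join command, or just the token with -q:
[root@node1 ~]# docker swarm join-token worker
[root@node1 ~]# docker swarm join-token -q worker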
Join node2 to the cluster:
[root@node2 ~]# docker swarm join \
> --token SWMTKN-1-0mlhcveie1hpqsj8m9recv40bmv6h3nvvf05rqjqkvdxs4dqpp-7z2advdggin0g9yqvsr4bg4q9 \
> 192.168.1.107:2377
This node joined a swarm as a worker.
Join node3 to the cluster:
[root@node3 ~]# docker swarm join \
> --token SWMTKN-1-0mlhcveie1hpqsj8m9recv40bmv6h3nvvf05rqjqkvdxs4dqpp-7z2advdggin0g9yqvsr4bg4q9 \
> 192.168.1.107:2377
This node joined a swarm as a worker.
Promote node2 to a manager node:
[root@node1 ~]# docker node promote node2
Node node2 promoted to a manager in the swarm.
Check the cluster nodes:
[root@node1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
6qfftgsw2ba2eq0j0ztvcyjna node3 Ready Active
9cxfsbt5294ya0wn7ji0gs3ji * node1 Ready Active Leader
c1lpchr06vjl7pg5s3leezxrq node2 Ready Active Reachable
At this point the base environment is fully configured.
Build the private registry
Upload the registry:2 image to node1 (or simply run docker pull registry:2):
[root@node1 src]# docker load < registry2.tar
[root@node1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry 2 c6c14b3960bd 13 months ago 33.28 MB
Note: registry v1 was written in Python; the current v2 (Docker Distribution) is written in Go and is both faster and more secure.
Run a container from the registry image
By default, registry:2 stores its repositories under /var/lib/registry inside the container, so if the container is removed, the images stored in it are lost as well. We therefore mount a local host directory onto the container's /var/lib/registry, so the data exists in both places.
·The registry's default storage path /var/lib/registry lives only inside the container and disappears with it.
·So use the -v flag to map it to a persistent path on the host.
Run the container (port mapping, restart together with the Docker daemon, volume mapping, container name):
[root@node1 ~]# mkdir -p /opt/data/registry
[root@node1 ~]# docker run -d -p 5000:5000 --restart=always -v /opt/data/registry/:/var/lib/registry --name registry2 registry:2
35639a4b8c60d877488d1bb79e71af288fbb2694cda9aa7d6ae9811b467eb906
[root@node1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
35639a4b8c60 registry:2 "/entrypoint.sh /etc/" 34 seconds ago Up 31 seconds 0.0.0.0:5000->5000/tcp registry2
[root@node1 ~]# curl 192.168.1.107:5000/v2/_catalog
{"repositories":[]}
The response {"repositories":[]} shows that the registry service is working.
Note: image data is stored under /var/lib/registry, which is why the host directory is mapped onto that path.
Point every host at the registry server.
Stop the Docker service (node1 shown as an example):
[root@node1 ~]# systemctl stop docker
Edit /usr/lib/systemd/system/docker.service, then save and exit:
[root@node1 ~]# vim /usr/lib/systemd/system/docker.service
Modify the ExecStart line as follows (add the --insecure-registry flag):
ExecStart=/usr/bin/dockerd-current --insecure-registry 192.168.1.107:5000 \
Reload the systemd configuration and start Docker:
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl start docker
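As an alternative to editing the unit file, Docker (since 1.12) can also read this setting from /etc/docker/daemon.json; a sketch of the equivalent configuration, creating the file if it does not exist:
[root@node1 ~]# cat /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.1.107:5000"]
}
This survives package upgrades that overwrite the unit file; the unit-file edit above is what this lab uses.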
Test the local registry
With the local registry in place, we now push a test image to it to verify the service.
Upload the base image (on node2):
[root@node2 src]# docker load < centos7.tar
Write a Dockerfile to build a simple image:
[root@node2 ~]# mkdir /httpd
[root@node2 ~]# cd /httpd/
[root@node2 httpd]# vim Dockerfile
FROM 50dae1ee8677
RUN yum -y install httpd net-tools
RUN sed 's/#ServerName /ServerName /g' -i /etc/httpd/conf/httpd.conf
EXPOSE 80
CMD ["/usr/sbin/httpd","-DFOREGROUND"]
[root@node2 httpd]# docker build -t 192.168.1.107:5000/centos:httpd .
[root@node2 httpd]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
192.168.1.107:5000/centos httpd e500f78679e6 28 seconds ago 334.8 MB
docker.io/centos latest 50dae1ee8677 13 months ago 196.7 MB
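Before pushing, a quick local smoke test of the freshly built image can save a round trip; the container name httpd-test and host port 8080 below are arbitrary choices for this sketch (the exact HTTP status depends on the default page httpd serves):
[root@node2 httpd]# docker run -d --name httpd-test -p 8080:80 192.168.1.107:5000/centos:httpd
[root@node2 httpd]# curl -I 127.0.0.1:8080
[root@node2 httpd]# docker rm -f httpd-test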
Test: push the image from node2 to the registry:
[root@node2 httpd]# docker push 192.168.1.107:5000/centos:httpd
The push refers to a repository [192.168.1.107:5000/centos]
8214cbd58105: Pushed
1cba5315c37c: Pushed
0fe55794a0f7: Pushed
httpd: digest: sha256:57aa90b2fb375d33f6e7ddd2a6f446082a8e43720abeb2e738c179e52060b11c size: 949
After a successful push, query the registry API to list the images it now holds:
[root@node2 httpd]# curl 192.168.1.107:5000/v2/_catalog
{"repositories":["centos"]}
The test succeeded; delete the local images:
[root@node2 httpd]# docker rmi -f $(docker images -q)
On node3, test pulling the image from the registry:
[root@node3 ~]# docker pull 192.168.1.107:5000/centos:httpd
Trying to pull repository 192.168.1.107:5000/centos ...
httpd: Pulling from 192.168.1.107:5000/centos
015eb01e8c8a: Pull complete
d779d3709bc9: Pull complete
b992bb2524dd: Pull complete
Digest: sha256:57aa90b2fb375d33f6e7ddd2a6f446082a8e43720abeb2e738c179e52060b11c
[root@node3 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
192.168.1.107:5000/centos httpd fb438be1e9dc 7 minutes ago 334.8 MB
The test succeeded; delete the image:
[root@node3 ~]# docker rmi 192.168.1.107:5000/centos:httpd
Overlay networking
With image distribution solved, we still need to solve container-to-container networking before the application can run on the Swarm cluster.
On a single server, all of the application's containers run on one host, so their networks are naturally reachable from one another. Our cluster has three hosts, so the application's containers will be spread across all three.
How do we make containers on different hosts reachable from each other?
The Swarm cluster already solves this for us: overlay networks.
Before Docker 1.12, a Swarm cluster needed an external key-value store (Consul, etcd) to synchronize network configuration and keep all containers in the same subnet. Docker 1.12 builds that store in and supports overlay networks natively.
Below we demonstrate how to create an overlay network.
Note: Swarm ships with a default overlay network named ingress that can be used directly, but here we create a new one.
Create an overlay network named dockercoins for our application:
[root@node1 ~]# docker network create --driver overlay dockercoins
czrggbi9bzr93hj41xk7wyi7t
List the Docker networks:
[root@node1 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
243db7b3b091 bridge bridge local
8af0a3999e12 docker_gwbridge bridge local
6eucw2o38a54 dockercoins overlay swarm
406da66db113 host host local
6dxsbfttclfv ingress overlay swarm
f5e4dd3297cc none null local
In the list, the dockercoins network has SCOPE swarm, meaning it spans the entire Swarm cluster; the networks marked local exist only on this host.
You only need to create the network on a manager node; the Swarm cluster propagates the configuration to the other nodes. Checking the other manager (node2), the dockercoins network is already there:
Note: on worker nodes, the overlay network is created on demand once a task is scheduled there.
[root@node2 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
9ee608df7b31 bridge bridge local
f9ec38ea9c8f docker_gwbridge bridge local
6eucw2o38a54 dockercoins overlay swarm
6d5d39f8176e host host local
6dxsbfttclfv ingress overlay swarm
b96fe7a0e42c none null local
node3 is not a manager and has no tasks on this network yet, so it does not see it:
[root@node1 ~]# ssh node3 docker network ls
NETWORK ID NAME DRIVER SCOPE
57d951547d1b bridge bridge local
7268e014f58b docker_gwbridge bridge local
63eb157bfb4f host host local
6dxsbfttclfv ingress overlay swarm
9c2af9cd387c none null local
Run a Docker application on the Swarm cluster
Concept: service
Docker 1.12 Swarm introduces the concept of a service: a service consists of one or more tasks, and each task is one running container.
Services come in two types:
Replicated services: similar to a ReplicaSet in Kubernetes, they keep a given number of identical tasks running across the cluster.
Global services: similar to a DaemonSet in Kubernetes, they run one task on every node.
Publish a service
Now we can start a service from the image pushed to the local registry earlier, using centos:httpd as the example. First pull the image on the manager:
[root@node1 ~]# docker pull 192.168.1.107:5000/centos:httpd
Trying to pull repository 192.168.1.107:5000/centos ...
httpd: Pulling from 192.168.1.107:5000/centos
015eb01e8c8a: Pull complete
d779d3709bc9: Pull complete
b992bb2524dd: Pull complete
Digest: sha256:57aa90b2fb375d33f6e7ddd2a6f446082a8e43720abeb2e738c179e52060b11c
[root@node1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
192.168.1.107:5000/centos httpd fb438be1e9dc 31 minutes ago 334.8 MB
registry 2 c6c14b3960bd 13 months ago 33.28 MB
Run the following on the manager:
[root@node1 ~]# docker service create --replicas 1 --network dockercoins --name ceshi1 -p 8080:80 192.168.1.107:5000/centos:httpd
9zipvng0tsdpdfnhprzmdymmz
Flags:
docker service create creates a service.
--name names the service, here ceshi1.
--replicas declares the number of running instances (container replicas), here 1.
Note that the image name 192.168.1.107:5000/centos:httpd points at our local registry, so any host that does not yet have the image pulls it from the registry automatically.
Use docker service ls to view the services:
[root@node1 ~]# docker service ls
ID NAME REPLICAS IMAGE COMMAND
2syr9grsirji ceshi1 1/1 192.168.1.107:5000/centos:httpd
The docker service inspect command shows detailed information about a service.
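By default inspect prints JSON; for human-readable output, add the --pretty flag:
[root@node1 ~]# docker service inspect --pretty ceshi1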
Use docker service ps <SERVICE-ID/NAME> to see which nodes a service's tasks run on:
[root@node1 ~]# docker service ps ceshi1
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
35s0e9ebps6oijty0klhsc6sy ceshi1.1 192.168.1.107:5000/centos:httpd node3 Running Running about a minute ago
You can now browse to http://192.168.1.107:8080 and see the test page.
In fact, port 8080 on every node in the Swarm cluster (192.168.1.136, 192.168.1.137) serves the same page. (Note: set the firewalld default zone to trusted, or open the port, if the firewall is running.)
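This works because Swarm's routing mesh publishes the port on every node and forwards each request to a running task. A quick sketch of a check from the shell; all three addresses should return the same HTTP status:
[root@node1 ~]# for ip in 192.168.1.107 192.168.1.136 192.168.1.137; do curl -s -o /dev/null -w "$ip: %{http_code}\n" http://$ip:8080/; done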
Run the following on the manager Leader:
[root@node1 ~]# docker service create --replicas 2 --network dockercoins --name ceshi2 -p 8000:80 192.168.1.107:5000/centos:httpd
3ljbgwjrgtiw0njmbby76aqmm
--replicas 2 declares two running instances.
View the services:
[root@node1 ~]# docker service ls
ID NAME REPLICAS IMAGE COMMAND
2syr9grsirji ceshi1 1/1 192.168.1.107:5000/centos:httpd
3ljbgwjrgtiw ceshi2 2/2 192.168.1.107:5000/centos:httpd
[root@node1 ~]# docker service ps ceshi2
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
1nop65bw2tx5n8oz4ydas1shv ceshi2.1 192.168.1.107:5000/centos:httpd node2 Running Running 4 minutes ago
dspxbg8yd42ftdx0k9p61ld1p ceshi2.2 192.168.1.107:5000/centos:httpd node1 Running Running 4 minutes ago
As shown above, the ceshi2 service has two replicas, one running on node1 and one on node2.
Run a service in global mode:
[root@node1 ~]# docker service create --mode global --name ceshi3 -p 8020:80 192.168.1.107:5000/centos:httpd
3tm3krvcg3xoe6tlobbn4xp8x
[root@node1 ~]# docker service ls
ID NAME REPLICAS IMAGE COMMAND
2syr9grsirji ceshi1 1/1 192.168.1.107:5000/centos:httpd
3ljbgwjrgtiw ceshi2 2/2 192.168.1.107:5000/centos:httpd
3tm3krvcg3xo ceshi3 global 192.168.1.107:5000/centos:httpd
As shown below, ceshi3 runs one task on every node:
[root@node1 ~]# docker service ps ceshi3
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
28qcqbocq3fssq6prry8b1yhr ceshi3 192.168.1.107:5000/centos:httpd node3 Running Running about a minute ago
601t61m1a9vhjsx2enbwrwkk1 \_ ceshi3 192.168.1.107:5000/centos:httpd node2 Running Running about a minute ago
5zip3tapzobnxit0bhi07zvxl \_ ceshi3 192.168.1.107:5000/centos:httpd node1 Running Running about a minute ago
Next we scale an existing service. As shown below, ceshi1 currently has only one replica:
[root@node1 ~]# docker service ls
ID NAME REPLICAS IMAGE COMMAND
2syr9grsirji ceshi1 1/1 192.168.1.107:5000/centos:httpd
3ljbgwjrgtiw ceshi2 2/2 192.168.1.107:5000/centos:httpd
3tm3krvcg3xo ceshi3 global 192.168.1.107:5000/centos:httpd
Scale up an existing service
Here we scale ceshi1 up to three replicas:
[root@node1 ~]# docker service scale ceshi1=3
ceshi1 scaled to 3
[root@node1 ~]# docker service ls
ID NAME REPLICAS IMAGE COMMAND
2syr9grsirji ceshi1 3/3 192.168.1.107:5000/centos:httpd
3ljbgwjrgtiw ceshi2 2/2 192.168.1.107:5000/centos:httpd
3tm3krvcg3xo ceshi3 global 192.168.1.107:5000/centos:httpd
Scale down an existing service
Here we scale ceshi1 back down to two replicas:
[root@node1 ~]# docker service scale ceshi1=2
ceshi1 scaled to 2
[root@node1 ~]# docker service ls
ID NAME REPLICAS IMAGE COMMAND
2syr9grsirji ceshi1 2/2 192.168.1.107:5000/centos:httpd
3ljbgwjrgtiw ceshi2 2/2 192.168.1.107:5000/centos:httpd
3tm3krvcg3xo ceshi3 global 192.168.1.107:5000/centos:httpd
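docker service scale is shorthand for docker service update --replicas; the scale-down above could equally be written as:
[root@node1 ~]# docker service update --replicas 2 ceshi1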
Swarm nodes are self-organizing and self-healing. What does that mean? Whenever a node or container goes down, the Swarm engine tries to repair the damage. Let's look at this in detail.
Self-healing
After the steps above we have the following three nodes:
[root@node1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
6qfftgsw2ba2eq0j0ztvcyjna node3 Ready Active
9cxfsbt5294ya0wn7ji0gs3ji * node1 Ready Active Reachable
c1lpchr06vjl7pg5s3leezxrq node2 Ready Active Leader
They are running three services with seven tasks (containers) in total:
[root@node1 ~]# docker service ls
ID NAME REPLICAS IMAGE COMMAND
2syr9grsirji ceshi1 2/2 192.168.1.107:5000/centos:httpd
3ljbgwjrgtiw ceshi2 2/2 192.168.1.107:5000/centos:httpd
3tm3krvcg3xo ceshi3 global 192.168.1.107:5000/centos:httpd
node1 runs two service containers plus the private registry container:
[root@node1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
00ea67d76d15 192.168.1.107:5000/centos:httpd "/usr/sbin/httpd -DFO" 16 minutes ago Up 16 minutes 80/tcp ceshi3.0.5zip3tapzobnxit0bhi07zvxl
cef6786aa5bf 192.168.1.107:5000/centos:httpd "/usr/sbin/httpd -DFO" 25 minutes ago Up 25 minutes 80/tcp ceshi2.2.dspxbg8yd42ftdx0k9p61ld1p
35639a4b8c60 registry:2 "/entrypoint.sh /etc/" 51 minutes ago Up 47 minutes 0.0.0.0:5000->5000/tcp registry2
node2 runs three containers:
[root@node2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c81a2f073941 192.168.1.107:5000/centos:httpd "/usr/sbin/httpd -DFO" 14 minutes ago Up 13 minutes 80/tcp ceshi1.2.avu47j8eim8gclzkbws9lgvuc
9cb3704e8221 192.168.1.107:5000/centos:httpd "/usr/sbin/httpd -DFO" 17 minutes ago Up 17 minutes 80/tcp ceshi3.0.601t61m1a9vhjsx2enbwrwkk1
35c95d81dc1b 192.168.1.107:5000/centos:httpd "/usr/sbin/httpd -DFO" 26 minutes ago Up 26 minutes 80/tcp ceshi2.1.1nop65bw2tx5n8oz4ydas1shv
node3 runs two containers:
[root@node3 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0542ea9089e7 192.168.1.107:5000/centos:httpd "/usr/sbin/httpd -DFO" 18 minutes ago Up 18 minutes 80/tcp ceshi3.0.28qcqbocq3fssq6prry8b1yhr
cd52075eea06 192.168.1.107:5000/centos:httpd "/usr/sbin/httpd -DFO" 35 minutes ago Up 34 minutes 80/tcp ceshi1.1.35s0e9ebps6oijty0klhsc6sy
Now let's stop all of the containers on node3:
[root@node3 ~]# docker stop $(docker ps -aq)
As soon as the containers on node3 stop, Docker tries to start two replacement containers, with new IDs, on the same node.
This is the Docker Swarm engine's self-healing feature.
[root@node3 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ad2f0a30f991 192.168.1.107:5000/centos:httpd "/usr/sbin/httpd -DFO" 27 seconds ago Up 21 seconds 80/tcp ceshi1.1.7qxmmvfbej3w9yekrqpf4gptf
c6ae75728aa0 192.168.1.107:5000/centos:httpd "/usr/sbin/httpd -DFO" 28 seconds ago Up 22 seconds 80/tcp ceshi3.0.exxmoz1443gybq4txds2vu4v1
Self-organizing
Now take node3 down entirely; its containers are automatically started on the other nodes.
On a manager node, run docker service ps <service>:
[root@node1 ~]# docker service ps ceshi3
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
exxmoz1443gybq4txds2vu4v1 ceshi3 192.168.1.107:5000/centos:httpd node3 Running Running 2 minutes ago
28qcqbocq3fssq6prry8b1yhr \_ ceshi3 192.168.1.107:5000/centos:httpd node3 Shutdown Complete 3 minutes ago
601t61m1a9vhjsx2enbwrwkk1 \_ ceshi3 192.168.1.107:5000/centos:httpd node2 Running Running 23 minutes ago
5zip3tapzobnxit0bhi07zvxl \_ ceshi3 192.168.1.107:5000/centos:httpd node1 Running Running 23 minutes ago
[root@node1 ~]# docker service ps ceshi1
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
7qxmmvfbej3w9yekrqpf4gptf ceshi1.1 192.168.1.107:5000/centos:httpd node3 Running Running 3 minutes ago
35s0e9ebps6oijty0klhsc6sy \_ ceshi1.1 192.168.1.107:5000/centos:httpd node3 Shutdown Complete 3 minutes ago
avu47j8eim8gclzkbws9lgvuc ceshi1.2 192.168.1.107:5000/centos:httpd node2 Running Running 19 minutes ago
a4samjx85fw7v4cqsm3asr09j ceshi1.3 192.168.1.107:5000/centos:httpd node3 Shutdown Shutdown 18 minutes ago
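Rather than powering a node off, a manager can also drain it, which reschedules its tasks onto the remaining nodes; this is the graceful way to take a node out for maintenance, and it can be reactivated afterwards:
[root@node1 ~]# docker node update --availability drain node3
[root@node1 ~]# docker node update --availability active node3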
This article is from the "duyuheng" blog.