Kubernetes 1.3 Installation and Cluster Deployment
Introduction:
Docker: an open-source application container engine that packages an application into a lightweight, portable, self-sufficient container.
Kubernetes: a Docker container cluster management system open-sourced by Google, providing resource scheduling, deployment, service discovery, and scale-up/scale-down for containerized applications.
Etcd: a highly available key-value store developed and maintained by CoreOS, used mainly for shared configuration and service discovery.
Flannel: an overlay-network tool designed by the CoreOS team for Kubernetes; its purpose is to give every host in a Kubernetes cluster a complete subnet of its own.
Goals:
- build an etcd cluster;
- install and configure docker (brief);
- install and configure flannel (brief);
- deploy the k8s cluster.
Preparation:
| Host | Services | Role |
|---|---|---|
| 172.20.30.19 (CentOS 7.1) | etcd, docker, flannel, kube-apiserver, kube-controller-manager, kube-scheduler | k8s-master |
| 172.20.30.21 (CentOS 7.1) | etcd, docker, flannel, kubelet, kube-proxy | minion |
| 172.20.30.18 (CentOS 7.1) | etcd, docker, flannel, kubelet, kube-proxy | minion |
| 172.20.30.20 (CentOS 7.1) | etcd, docker, flannel, kubelet, kube-proxy | minion |
Installation:
Download the rpm packages for etcd, docker, and flannel, for example:
```
etcd:
  etcd-2.2.5-2.el7.0.1.x86_64.rpm
flannel:
  flannel-0.5.3-9.el7.x86_64.rpm
docker:
  device-mapper-1.02.107-5.el7_2.5.x86_64.rpm
  device-mapper-event-1.02.107-5.el7_2.5.x86_64.rpm
  device-mapper-event-libs-1.02.107-5.el7_2.5.x86_64.rpm
  device-mapper-libs-1.02.107-5.el7_2.5.x86_64.rpm
  device-mapper-persistent-data-0.5.5-1.el7.x86_64.rpm
  docker-1.10.3-44.el7.centos.x86_64.rpm
  docker-common-1.10.3-44.el7.centos.x86_64.rpm
  docker-forward-journald-1.10.3-44.el7.centos.x86_64.rpm
  docker-selinux-1.10.3-44.el7.centos.x86_64.rpm
  libseccomp-2.2.1-1.el7.x86_64.rpm
  lvm2-2.02.130-5.el7_2.5.x86_64.rpm
  lvm2-libs-2.02.130-5.el7_2.5.x86_64.rpm
  oci-register-machine-1.10.3-44.el7.centos.x86_64.rpm
  oci-systemd-hook-1.10.3-44.el7.centos.x86_64.rpm
  yajl-2.0.4-4.el7.x86_64.rpm
```
etcd and flannel install cleanly on their own, with no dependencies. docker does have dependencies, so its dependency packages must be installed first for the install to succeed. This is not the focus of this article, so it is not covered in detail.
All four machines must have etcd, docker, and flannel installed.
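If you would rather not chase the docker dependency chain by hand, one sketch is to let yum resolve the dependencies among the local rpm files; the directory path below is hypothetical, standing in for wherever you saved the rpms listed above:

```
# cd /path/to/rpms            # hypothetical directory holding the rpms listed above
# yum localinstall -y etcd-2.2.5-2.el7.0.1.x86_64.rpm flannel-0.5.3-9.el7.x86_64.rpm
# yum localinstall -y *.rpm   # installs docker plus all its dependency rpms in one shot
```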
Next, download the Kubernetes 1.3 binary release.
After downloading, perform the following steps, taking 172.20.30.19 as the example:
```
# tar zxvf kubernetes1.3.tar.gz                    # unpack the release tarball
# cd kubernetes/server
# tar zxvf kubernetes-server-linux-amd64.tar.gz    # unpack the server package the master needs
# cd kubernetes/server/bin/
# cp kube-apiserver kube-controller-manager kubectl kube-scheduler /usr/bin    # copy the master binaries into /usr/bin (adjusting PATH achieves the same)
# scp kubelet kube-proxy root@172.20.30.21:~       # ship the minion binaries to each minion
# scp kubelet kube-proxy root@172.20.30.18:~
# scp kubelet kube-proxy root@172.20.30.20:~
```
Configuration and deployment:
1. etcd configuration and deployment
Edit /etc/etcd/etcd.conf on each of the four hosts. The file below is the one used on etcd-2 (172.20.30.21):
```
# [member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/data/etcd/"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"      # default, replaced below
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:7001"
#ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"    # default, replaced below
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.20.30.21:7001"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
# the line below defines the full etcd cluster of 4 machines
ETCD_INITIAL_CLUSTER="etcd-1=http://172.20.30.19:7001,etcd-2=http://172.20.30.21:7001,etcd-3=http://172.20.30.18:7001,etcd-4=http://172.20.30.20:7001"
ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://172.20.30.21:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
```
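On the other three hosts only ETCD_NAME and the two advertise URLs change, following the member names and addresses declared in ETCD_INITIAL_CLUSTER above. For example, on 172.20.30.19 the differing lines are:

```
ETCD_NAME="etcd-1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.20.30.19:7001"
ETCD_ADVERTISE_CLIENT_URLS="http://172.20.30.19:4001"
```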
On all four hosts, edit the etcd service unit /usr/lib/systemd/system/etcd.service so that it reads:
```
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
Run on each host:
```
# systemctl enable etcd.service
# systemctl start etcd.service
```
Then pick one host and run:
```
# etcdctl set /cluster "example-k8s"
```
Pick a different host and run:
```
# etcdctl get /cluster
```
If it returns "example-k8s", the etcd cluster has been deployed successfully.
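Beyond the set/get round trip, etcdctl (v2) can also report membership and health directly, which makes a useful extra check:

```
# etcdctl member list      # should list all four members with their peer and client URLs
# etcdctl cluster-health   # should report every member healthy and "cluster is healthy"
```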
2. docker configuration and deployment
Add the local registry settings to /etc/sysconfig/docker:
```
ADD_REGISTRY="--add-registry docker.midea.registry.hub:10050"
DOCKER_OPTS="--insecure-registry docker.midea.registry.hub:10050"
INSECURE_REGISTRY="--insecure-registry docker.midea.registry.hub:10050"
```
These options give the address and service port of the local registry; they are picked up by the docker service unit below. For how to build the registry itself, see the previous post. Next, edit the docker service unit /usr/lib/systemd/system/docker.service:
```
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service

[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
# NOTE: a pitfall on CentOS. Without the "exec -a docker" part (highlighted in
# red in the original post), systemd cannot obtain docker's pid when docker
# starts, which can prevent the flannel service from starting later; adding it
# lets systemd track docker's pid.
ExecStart=/bin/sh -c 'exec -a docker /usr/bin/docker-current daemon \
          --exec-opt native.cgroupdriver=systemd \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
          2>&1 | /usr/bin/forward-journald -tag docker'
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
MountFlags=slave
Restart=on-abnormal
StandardOutput=null
StandardError=null

[Install]
WantedBy=multi-user.target
```
Then run on each host:
```
# systemctl enable docker.service
# systemctl start docker
```
Checking that docker is running is simple: run docker ps and confirm that it prints the usual column headers (no containers are running yet, so only the headers appear):
```
# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
```
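As a further smoke test you can launch a throwaway container. A minimal sketch, assuming a busybox image is reachable (from Docker Hub, or mirrored into the local registry configured above):

```
# docker run --rm busybox echo "docker is working"   # pulls the image on first run, prints, then removes the container
```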
3. flannel configuration and deployment
Edit the flannel configuration /etc/sysconfig/flanneld on each host:
```
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="http://172.20.30.21:4001"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/k8s/network"    # this is a directory inside etcd

# Any additional options that you want to pass
FLANNEL_OPTIONS="--logtostderr=false --log_dir=/var/log/k8s/flannel/ --etcd-endpoints=http://172.20.30.21:4001"
```
Then run:
```
# etcdctl mkdir /k8s/network
# etcdctl set /k8s/network/config '{"Network":"172.100.0.0/16"}'
```
This declares that every container instance docker runs should receive an address within the 172.100.0.0/16 range.
```
# systemctl enable flanneld.service
# systemctl stop docker             # stop docker for now; starting flanneld pulls docker back up
# systemctl start flanneld.service
```
If all went well, these commands bring docker back up on the flannel overlay. Check the interfaces with ifconfig; docker0 and flannel0 should both carry addresses inside 172.100.0.0/16:
```
# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1472
        inet 172.100.28.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::42:86ff:fe81:6892  prefixlen 64  scopeid 0x20<link>
        ether 02:42:86:81:68:92  txqueuelen 0  (Ethernet)
        RX packets 29  bytes 2013 (1.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 25  bytes 1994 (1.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.20.30.21  netmask 255.255.255.0  broadcast 172.20.30.255
        inet6 fe80::f816:3eff:fe43:21ac  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:43:21:ac  txqueuelen 1000  (Ethernet)
        RX packets 13790001  bytes 3573763877 (3.3 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13919888  bytes 1320674626 (1.2 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.100.28.0  netmask 255.255.0.0  destination 172.100.28.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 120 (120.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 65311  bytes 5768287 (5.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 65311  bytes 5768287 (5.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
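Two more checks are possible, under the assumptions of the configuration above: flanneld records one subnet lease per host under the etcd key prefix and writes its own lease to /run/flannel/subnet.env, and container subnets should be reachable across hosts:

```
# etcdctl ls /k8s/network/subnets   # one lease per host, e.g. /k8s/network/subnets/172.100.28.0-24
# cat /run/flannel/subnet.env       # the FLANNEL_SUBNET and FLANNEL_MTU that flannel hands to docker
# ping -c 3 172.100.28.1            # run from another host: this host's docker0 should answer across the overlay
```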
With the above in place, the base environment is ready; next, deploy and start the Kubernetes services.
4. Kubernetes deployment
On the master (172.20.30.19), create a start script, start_k8s_master.sh:
```
#! /bin/sh

# firstly, start etcd
systemctl restart etcd

# secondly, start flanneld
systemctl restart flanneld

# then, start docker
systemctl restart docker

# start the main server of k8s master
nohup kube-apiserver --insecure-bind-address=0.0.0.0 --insecure-port=8080 --cors_allowed_origins=.* --etcd_servers=http://172.20.30.19:4001 --v=1 --logtostderr=false --log_dir=/var/log/k8s/apiserver --service-cluster-ip-range=172.100.0.0/16 &

nohup kube-controller-manager --master=172.20.30.19:8080 --enable-hostpath-provisioner=false --v=1 --logtostderr=false --log_dir=/var/log/k8s/controller-manager &

nohup kube-scheduler --master=172.20.30.19:8080 --v=1 --logtostderr=false --log_dir=/var/log/k8s/scheduler &
```
Then make it executable:
```
# chmod u+x start_k8s_master.sh
```
The installation step already shipped kubelet and kube-proxy to the minion machines (quietly defining the k8s cluster), so the minion side only needs its own start script, start_k8s_minion.sh:
```
#! /bin/sh

# firstly, start etcd
systemctl restart etcd

# secondly, start flanneld
systemctl restart flanneld

# then, start docker
systemctl restart docker

# start the minion
nohup kubelet --address=0.0.0.0 --port=10250 --v=1 --log_dir=/var/log/k8s/kubelet --hostname_override=172.20.30.21 --api_servers=http://172.20.30.19:8080 --logtostderr=false &

nohup kube-proxy --master=172.20.30.19:8080 --log_dir=/var/log/k8s/proxy --v=1 --logtostderr=false &
```
Then make it executable:
```
# chmod u+x start_k8s_minion.sh
```
Send the script to every minion host, adjusting --hostname_override to each minion's own IP.
Running k8s
On the master host, run:
```
# ./start_k8s_master.sh
```
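Before starting the minions, you can confirm the master components answer on the insecure port configured above; a quick sketch:

```
# curl http://172.20.30.19:8080/version                        # apiserver build info, should report a v1.3.x version
# kubectl -s http://172.20.30.19:8080 get componentstatuses    # scheduler, controller-manager, and etcd should be Healthy
```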
On each minion host, run:
```
# ./start_k8s_minion.sh
```
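On a minion, a quick local sanity check that both daemons came up (assuming kubelet's default healthz port, 10248):

```
# curl http://127.0.0.1:10248/healthz     # should print "ok"
# ps -ef | egrep "kubelet|kube-proxy"     # both processes should be running
```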
Back on the master host, run:
```
# kubectl get node
NAME           STATUS    AGE
172.20.30.18   Ready     5h
172.20.30.20   Ready     5h
172.20.30.21   Ready     5h
```
If the nodes are listed like this, the k8s cluster has been deployed successfully.
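As a final smoke test, schedule something onto the cluster. A sketch assuming an nginx image the minions can pull (from Docker Hub, or mirrored into the local registry):

```
# kubectl run nginx --image=nginx --replicas=2   # in 1.3 this creates a deployment with 2 pods
# kubectl get pods -o wide                       # pods should reach Running, spread across the minions
# kubectl delete deployment nginx                # clean up the test deployment
```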