k8s-day13: K8S Cluster Deployment


3. K8S Cluster Deployment

3.1 Base Environment

Environment: CentOS 7.3 with Internet access.

Disable the firewall:

iptables -F
systemctl disable firewalld

Permanently disable SELinux:

sed -i 's/enforcing/disabled/' /etc/selinux/config

Temporarily disable SELinux (current session only):

setenforce 0

Hosts:

master  192.168.89.20
node1   192.168.89.30
node2   192.168.89.40
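A quick check that these prerequisites took effect on each machine (assuming firewalld is the firewall service shipped with CentOS 7.3):

systemctl is-enabled firewalld   # should report disabled
getenforce                       # Permissive after setenforce 0, Disabled after a reboot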

3.1.1 Set the Hostnames

hostnamectl --static set-hostname k8s-master   # on 192.168.89.20
hostnamectl --static set-hostname k8s-node-1   # on 192.168.89.30
hostnamectl --static set-hostname k8s-node-2   # on 192.168.89.40


3.1.2 Add IP-to-Hostname Resolution

echo '192.168.89.20 k8s-master
192.168.89.20 etcd
192.168.89.20 registry
192.168.89.30 k8s-node-1
192.168.89.40 k8s-node-2' >> /etc/hosts
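These entries are needed on all three machines, since the configurations below refer to k8s-master, etcd and registry by name. A quick resolution check (getent reads /etc/hosts directly):

getent hosts k8s-master etcd registry k8s-node-1 k8s-node-2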

3.1.3 Enable IP Forwarding

vim /etc/sysctl.conf
net.ipv4.ip_forward=1

sysctl -p
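After sysctl -p applies the setting, it can be verified directly:

sysctl net.ipv4.ip_forward   # expected output: net.ipv4.ip_forward = 1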

3.2 Master Deployment

3.2.1 Install etcd on the Master Node

yum -y install etcd

Edit the etcd configuration:

vi /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="master"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"

Start etcd:

systemctl restart etcd

Verify that keys can be written and read:

etcdctl set testdir/testkey0 0
etcdctl get testdir/testkey0

Health check:

etcdctl -C http://etcd:2379 cluster-health
etcdctl -C http://etcd:4001 cluster-health
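Both endpoints should report the cluster as healthy. Listing members is another quick sanity check (etcdctl v2 syntax, as shipped by the CentOS etcd package):

etcdctl -C http://etcd:2379 member list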


3.2.2 Install Docker on the Master

yum -y install docker

Edit the Docker configuration file so that Docker is allowed to pull images from the private registry:

vi /etc/sysconfig/docker
# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
OPTIONS='--insecure-registry registry:5000'

# Do not add registries in this file anymore. Use /etc/containers/registries.conf...

Enable Docker on boot and start it:

systemctl enable docker
systemctl start docker

Note: the registry here is simply an instance of the registry image running inside Docker.

Registry: the registry server, which manages image repositories; it plays the role of the server.

Repository: an image repository, which stores the actual Docker images; it plays the role of storage.
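The Docker options above point at registry:5000, which /etc/hosts resolves to the master. A minimal sketch of running such a registry on the master; the registry:2 image tag and the /opt/registry storage path are illustrative choices, not taken from the original text:

docker pull registry:2
docker run -d --name registry --restart=always -p 5000:5000 -v /opt/registry:/var/lib/registry registry:2

# confirm the registry answers
curl http://registry:5000/v2/_catalog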

3.2.3 Install Kubernetes

yum -y install kubernetes

Modify the configuration and start Kubernetes. The Kubernetes master needs to run the following components:

Kubernetes API Server

Kubernetes Controller Manager

Kubernetes Scheduler

Accordingly, change the values shown below in the following configuration files (the colored highlighting in the original post marked the lines to edit):

A. Edit the apiserver configuration

vi /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
B. Edit the config file

vi /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://127.0.0.1:8080"
KUBE_MASTER="--master=http://k8s-master:8080"


C. Start the services and enable them on boot

systemctl start kube-apiserver
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl enable kube-scheduler
systemctl start kube-scheduler
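With all three services running, a quick health check from the master (a sketch; kubectl here talks to the insecure apiserver port configured above):

kubectl -s http://k8s-master:8080 get componentstatuses
# scheduler, controller-manager and etcd-0 should all report Healthy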

3.3 Node Deployment

3.3.1 Install Docker on the Nodes

Same as in 3.2.2 (Docker deployment on the master).

3.3.2 Install Kubernetes on the Nodes

yum -y install kubernetes

Each Kubernetes node needs to run the following components:

Kubelet

Kubernetes Proxy

Accordingly, change the values shown below in the following configuration files (the colored highlighting in the original post marked the lines to edit):

A. Edit the config file

vi /etc/kubernetes/config
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://127.0.0.1:8080"
KUBE_MASTER="--master=http://k8s-master:8080"
B. Edit the kubelet file

vi /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"   # on k8s-node-2, set this to k8s-node-2

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
C. Enable the services on boot and start them

systemctl enable kubelet
systemctl start kubelet
systemctl enable kube-proxy
systemctl start kube-proxy

D. Check node status from the master

kubectl -s http://k8s-master:8080 get node
kubectl get nodes

3.4 Deploy the Cluster Network Plugin: Flannel

Kubernetes defines a network model but leaves its implementation to network plugins. The main job of a CNI network plugin is to let Pod resources communicate across hosts. Common CNI network plugins:

Flannel
Calico
Canal
Contiv
OpenContrail
NSX-T
Kube-router

Network policy: a mechanism that provides multi-tenant network isolation by controlling ingress and egress traffic and which IPs may be reached inbound and outbound.

A. Install Flannel and edit its configuration file

Install Flannel on both the master and the nodes, then edit /etc/sysconfig/flanneld and change the values shown below (marked in red in the original post).

yum -y install flannel

Edit the configuration file /etc/sysconfig/flanneld:

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
B. On the master, set the Flannel key in etcd

Flannel stores its configuration in etcd so that multiple Flannel instances stay consistent, so the following key must be created in etcd. (The key /atomic.io/network/config corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if they do not match, flanneld will fail to start.)

etcdctl mk /atomic.io/network/config '{ "Network": "192.168.0.0/16" }'

The network range used when creating this key must be consistent with the network range the Kubernetes configuration expects to use.

Note: to update the value of an existing key:

etcdctl update /atomic.io/network/config '{ "Network": "192.168.0.0/16" }'
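Either way, the stored value can be read back to confirm it is what flanneld will see:

etcdctl get /atomic.io/network/config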


C. After starting Flannel, restart Docker and the Kubernetes services in turn

On the master:

systemctl enable flanneld
systemctl restart flanneld
service docker restart
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler

On the nodes:

systemctl enable flanneld
systemctl start flanneld
service docker restart
systemctl restart kubelet
systemctl restart kube-proxy
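To confirm Flannel handed each machine a subnet and that Docker picked it up, the following checks can be run on any host (file and interface names are those used by the CentOS flannel package with its default udp backend; a sketch, not from the original text):

cat /run/flannel/subnet.env   # shows FLANNEL_NETWORK and the per-host FLANNEL_SUBNET
ip addr show flannel0         # the flannel overlay interface
ip addr show docker0          # docker0 should now sit inside FLANNEL_SUBNET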

Check node status:

[root@k8s-master ~]# kubectl get node
NAME         STATUS    AGE
k8s-node-1   Ready     21h
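As a final smoke test, a small workload can be scheduled to make sure Pods actually start on the nodes (the nginx image and replica count are illustrative; on this Kubernetes version kubectl run creates a Deployment):

kubectl run nginx --image=nginx --replicas=2 --port=80
kubectl get pods -o wide          # Pods should reach Running with IPs from the Flannel range
kubectl delete deployment nginx   # clean up the test workload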

At this point, the basic k8s cluster deployment is complete.

