2. Quick Deployment of a Kubernetes (v1.15.0) Cluster with kubeadm (190623)

Posted by l-dongf


I. Network Planning

  • Node network: 192.168.100.0/24
  • Service network: 10.96.0.0/12
  • Pod network (default): 10.244.0.0/16

II. Component Layout and Node Planning

  • master (192.168.100.51): API Server / etcd / controller-manager / scheduler
  • node (192.168.100.61, 62, ...): kube-proxy / kubelet / docker / flannel

III. Base Environment

  1. Kernel 3.10+ or 4+
  2. Docker version <= 17.03
  3. Disable swap (see the sketch below)
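A minimal sketch of disabling swap, assuming the swap entry is declared in /etc/fstab:
# swapoff -a                                # turn swap off for the running system
# sed -i 's/^[^#].*swap.*/#&/' /etc/fstab   # comment out the swap entry so it stays off after reboot
# free -m                                   # the Swap line should now read 0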
  4. Configure hosts resolution
192.168.100.51  master
192.168.100.61  node01
192.168.100.62  node02
  5. Configure time synchronization
# echo "0 */1 * * * /usr/sbin/ntpdate ntp.aliyun.com" >> /var/spool/cron/root
# date; ssh node01 'date'; ssh node02 'date'
  6. Stop iptables/firewalld and disable SELinux
# systemctl stop firewalld
# systemctl disable firewalld
# iptables -vnL
# vim /etc/selinux/config
SELINUX=disabled
# reboot
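If you would rather not reboot immediately, SELinux can also be switched to permissive mode at runtime; the config change above still makes it permanent:
# setenforce 0    # effective immediately, until the next reboot
# getenforce      # should report Permissive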
  7. Install the Docker environment
# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum list docker-ce --showduplicates | sort -r  # list the available docker-ce versions
# yum install -y --setopt=obsoletes=0 docker-ce-17.03.0.ce-1.el7.centos
# systemctl start docker
# systemctl enable docker
# docker version
 Version:      17.03.0-ce
  8. Import the pre-downloaded Kubernetes images (all nodes)
# vim pull-k8s-images.sh
#!/bin/bash
img_list='k8s.gcr.io/kube-apiserver:v1.15.0
k8s.gcr.io/kube-controller-manager:v1.15.0
k8s.gcr.io/kube-scheduler:v1.15.0
k8s.gcr.io/kube-proxy:v1.15.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
quay.io/coreos/flannel:v0.11.0-amd64'
for img in $img_list; do  # pull each image
        docker pull $img
done
docker save -o k8s-img-v1.15.0.tar.gz $img_list  # bundle all images into one tarball
# bash pull-k8s-images.sh
# docker load -i k8s-img-v1.15.0.tar.gz  # import the images

The five steps above (4–8) must be executed on all nodes.
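If the image tarball was built on a single machine (for example the master), one way to distribute and import it is a small scp/ssh loop; this sketch assumes the node01/node02 hostnames from the hosts file above and SSH access to them:
# for n in node01 node02; do scp k8s-img-v1.15.0.tar.gz $n:/root/; ssh $n 'docker load -i /root/k8s-img-v1.15.0.tar.gz'; done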

IV. Deploying the Master Node

  1. Install kubelet/kubeadm/kubectl
[root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@master ~]# yum install kubelet kubeadm kubectl -y
[root@master ~]# systemctl enable kubelet
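yum installs the newest packages in the repository; to be sure they match the v1.15.0 images imported earlier, the versions can be pinned explicitly (a sketch, assuming the 1.15.0 packages are present in the mirror):
[root@master ~]# yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
[root@master ~]# kubeadm version -o short    # should print v1.15.0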
  2. Initialize the master
[root@master ~]# kubeadm init --kubernetes-version=v1.15.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
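The same parameters can also be kept in a kubeadm configuration file, which is handy for version control; this is a sketch using the v1beta2 config API shipped with kubeadm 1.15 (the file name kubeadm-config.yaml is arbitrary):
[root@master ~]# cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
EOF
[root@master ~]# kubeadm init --config kubeadm-config.yaml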
  3. Configure cluster administrator access
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
  4. Record the join command (token) printed by kubeadm init
kubeadm join 192.168.100.51:6443 --token nag8y9.9vllybijsnn7xrzd     --discovery-token-ca-cert-hash sha256:0f8e9cec4c19ca004fd7c9a906691e5295dd5e38e5265e0edcba0b06cc2a7e14
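Bootstrap tokens expire after 24 hours by default; if this one is no longer valid when a node joins later, a fresh join command can be printed on the master:
[root@master ~]# kubeadm token create --print-join-command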
  5. Deploy flannel
[root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@master ~]# kubectl apply -f kube-flannel.yml
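The manifest assumes the 10.244.0.0/16 Pod network passed to kubeadm init. To wait for the flannel DaemonSet to finish rolling out (a sketch, using the DaemonSet name from the manifest of that era):
[root@master ~]# kubectl -n kube-system rollout status ds/kube-flannel-ds-amd64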
  6. Verification
[root@master ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   54m   v1.15.0
[root@master ~]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-4vwgt         1/1     Running   0          54m
coredns-5c98db65d4-72l8v         1/1     Running   0          54m
etcd-master                      1/1     Running   0          53m
kube-apiserver-master            1/1     Running   0          53m
kube-controller-manager-master   1/1     Running   0          53m
kube-flannel-ds-amd64-8wznx      1/1     Running   0          9m22s
kube-proxy-wb86v                 1/1     Running   0          54m
kube-scheduler-master            1/1     Running   0          53m

V. Deploying the Worker Nodes

  1. Install kubelet/kubeadm
[root@node01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@node01 ~]# yum install kubeadm kubelet -y
[root@node01 ~]# systemctl enable kubelet
  2. Join the node to the cluster
[root@node01 ~]# kubeadm join 192.168.100.51:6443 --token nag8y9.9vllybijsnn7xrzd     --discovery-token-ca-cert-hash sha256:0f8e9cec4c19ca004fd7c9a906691e5295dd5e38e5265e0edcba0b06cc2a7e14
  3. On the master, verify that the nodes have joined the cluster
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   Ready      master   72m     v1.15.0
node01   Ready      <none>   5m33s   v1.15.0
node02   NotReady   <none>   14s     v1.15.0
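A freshly joined node such as node02 normally turns Ready once its flannel and kube-proxy pods are running; if it stays NotReady, a few commands help narrow down the cause (a sketch):
[root@master ~]# kubectl describe node node02                            # check the Conditions and Events sections
[root@master ~]# kubectl get pods -n kube-system -o wide | grep node02   # pods scheduled to that node
[root@node02 ~]# journalctl -u kubelet -f                                # kubelet logs on the node itself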

VI. Supplementary Notes

  • Configure Kubernetes (kubelet) to ignore enabled swap
# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
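This kubelet flag only covers the kubelet itself; kubeadm's preflight checks still reject enabled swap unless told to ignore it, roughly like this (a sketch, appended to the init/join commands shown earlier):
# kubeadm init --ignore-preflight-errors=Swap ...
# kubeadm join --ignore-preflight-errors=Swap ...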
  • Configure a proxy for Docker
# vim /usr/lib/systemd/system/docker.service
[Service]
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8"
# systemctl daemon-reload
# systemctl restart docker
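To confirm that the proxy variables were actually picked up after the restart (a sketch):
# systemctl show --property=Environment docker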
