Deploying a Kubernetes Cluster (v1.17.17) on CentOS 7.4

Posted by 青衫解衣


Linux distribution and kernel version:

[root@master ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
[root@master ~]# uname -a
Linux master 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

Role assignment for the three hosts:

10.40.42.103   master  2 CPU / 4 GB
10.40.42.105   node1   2 CPU / 4 GB
10.40.42.127   node2   4 CPU / 8 GB

Set the hostname on each host:

hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2

Add /etc/hosts entries on all three hosts:

cat >> /etc/hosts << EOF
10.40.42.103    master
10.40.42.105    node1
10.40.42.127    node2
EOF

Install the EPEL repository:

yum -y install epel-release

Stop and disable firewalld:

systemctl stop firewalld && systemctl disable firewalld

Disable SELinux:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config  # permanent, takes effect after reboot
setenforce 0   # takes effect immediately
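A quick check that SELinux is no longer enforcing:

```shell
# Prints "Permissive" right after setenforce 0, or "Disabled" after a
# reboot with the config change in place
getenforce
```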

Install Docker dependencies:

yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Aliyun Docker repository:

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo


Install Docker 18.09.6:

Kubernetes and Docker versions have compatibility constraints; installing the latest Docker caused problems, so a known-good version is pinned instead.

yum list docker-ce --showduplicates | sort -r | grep 18.09.6
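The listing above only shows the candidate versions; the install command itself is not shown in the original. A plausible version-pinned install, assuming the standard docker-ce repo package names (the exact release suffix, e.g. 18.09.6-3.el7, may vary by mirror):

```shell
# Pin docker-ce and its CLI to 18.09.6; containerd.io comes in as a dependency
yum -y install docker-ce-18.09.6 docker-ce-cli-18.09.6
```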


Start Docker and enable it at boot:

systemctl restart docker && systemctl enable docker


After installing bash-completion, the Tab key can complete almost anything, including options, files, directories, and even package names.

yum -y install bash-completion
source /etc/profile.d/bash_completion.sh  # activate in the current shell


Registry mirror:

Docker Hub's servers are overseas, so image pulls can be slow from China. Aliyun offers a free mirror proxy to anyone with an Aliyun account; sharing one endpoint is fine, since it stores no sensitive data and costs nothing.


Configure the registry mirror. The exec-opts entry switches Docker's cgroup driver to systemd; note that JSON does not allow inline comments, so the file must stay comment-free:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://4z7jtuuf.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker
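After the restart, the running daemon can be inspected to confirm the driver change took effect (the grep pattern assumes the standard docker info output format):

```shell
# Should report "Cgroup Driver: systemd"
docker info 2>/dev/null | grep -i "cgroup driver"
```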


Disable swap:

swapoff -a     # takes effect immediately
sed -i '/swap/s/^/#/' /etc/fstab    # comment out swap entries so the change survives reboot
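A quick check that swap really is gone:

```shell
# With swap disabled, the Swap row of free(1) should show 0 total
free -m | awk '/^Swap:/ {print $2}'
```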

Kubernetes requires that traffic crossing Linux bridges be visible to iptables; adjust the kernel parameters:

Take effect immediately:

[root@master ~]# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
[root@master ~]# sysctl net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-ip6tables = 1

Persist across reboots:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Apply the file now:

sysctl -p /etc/sysctl.d/k8s.conf
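On a fresh host these bridge-nf sysctl keys only exist once the br_netfilter kernel module is loaded, so it is worth loading it explicitly (the module name is standard; the modules-load.d path assumes systemd conventions):

```shell
modprobe br_netfilter                                      # load now
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf  # load at boot
```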

For background on why these bridge parameters are needed, see: https://zhuanlan.zhihu.com/p/374919190

Add the Kubernetes yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Rebuild the yum cache:

yum clean all
yum -y makecache

List the available Kubernetes versions and install a specific one:

Why 1.17.5? I tried going below 1.15, but that meant untangling kubectl CLI dependency problems, which quickly felt like wasted effort on an already-old release (Aliyun was on 1.16 by then). I also tried 1.22, but several add-ons failed to install cleanly, which is more troubleshooting than a beginner needs. 1.17.5 sits comfortably in between.

yum list kubelet --showduplicates | sort -r


yum -y install kubeadm-1.17.5 kubectl-1.17.5 kubelet-1.17.5


Start kubelet and enable it at boot:

systemctl enable kubelet && systemctl restart kubelet

kubectl command completion:

echo "source <(kubectl completion bash)" >> ~/.bash_profile
source ~/.bash_profile


Download the Kubernetes images:

My three machines happen to sit outside China, so pulls were never slow for me to begin with; the Aliyun registry below is configured because it is much faster from inside China.

[root@master ~]# cat image.sh
#!/bin/bash
# Pull each control-plane image from the Aliyun mirror, retag it as
# k8s.gcr.io/... so kubeadm finds it, then drop the mirror tag.
url=registry.cn-hangzhou.aliyuncs.com/google_containers
version=v1.17.5
images=(`kubeadm config images list --kubernetes-version=$version | awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
  docker pull $url/$imagename
  docker tag $url/$imagename k8s.gcr.io/$imagename
  docker rmi -f $url/$imagename
done

Here url is the Aliyun mirror registry address and version is the Kubernetes version being installed.
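The awk step in the script strips the registry prefix from each image reference, for example:

```shell
# k8s.gcr.io/kube-apiserver:v1.17.5 -> kube-apiserver:v1.17.5
echo "k8s.gcr.io/kube-apiserver:v1.17.5" | awk -F '/' '{print $2}'
```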

 

Initialize the master

The apiserver-advertise-address flag sets the master's IP, and pod-network-cidr sets the Pod network range; flannel is used as the network add-on later, and 10.244.0.0/16 is flannel's default.

kubeadm init --apiserver-advertise-address 10.40.42.103 --pod-network-cidr=10.244.0.0/16

Output like the following indicates success.

Because this is my second local k8s deployment on these machines (they were not reinstalled), I did not see the usual hint telling the root user how to set the environment variable:

Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf


Troubleshooting master init errors:

The errors below are from an earlier attempt, not from this run:

1. kubeadm requires the VM to have at least 2 CPUs:

error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[ERROR CRI]: container runtime is not running: output: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
, error: exit status 1
[ERROR Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
[ERROR IsDockerSystemdCheck]: cannot execute 'docker info': exit status 1
[ERROR SystemVerification]: failed to get docker info: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

 

2. Docker was not running; fix with: systemctl restart docker

error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
, error: exit status 1
[ERROR Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
[ERROR IsDockerSystemdCheck]: cannot execute 'docker info': exit status 1

 

3. The extra lines I added to kubeadm-config.yaml for the KubeProxyConfiguration (proxy.config.k8s.io/v1alpha1) must follow the schema; I wrote the field with the wrong name and indentation, producing the unmarshaling error below:

[root@master ~]# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.

Flag --experimental-upload-certs has been deprecated, use --upload-certs instead

W0815 20:55:21.474805    2432 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Groupproxy.config.k8s.io", Version:"v1alpha1", Kind:"KubeProxyConfiguration"}: error unmarshaling JSON: while dg JSON: json: unknown field "SupportIPVSProxyMode"

[init] Using Kubernetes version: v1.15.1

 

After the master initializes successfully, load the environment variable:

[root@master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master ~]# source .bash_profile

If running as a non-root user, do the following instead:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Run the join command printed by kubeadm on each of the two node hosts:

kubeadm join 10.40.42.103:6443 --token cwrlpa.yzvsbkecolxjprg3 \
    --discovery-token-ca-cert-hash sha256:cf53f436d7051c40f38a19ddf8369440d67e3e28ea1c6287529a9d4df7e909b4
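The bootstrap token in the join command expires after 24 hours by default; if a node joins later, a fresh command can be printed on the master:

```shell
# Prints a complete "kubeadm join ..." line with a new token and the CA hash
kubeadm token create --print-join-command
```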

Check the nodes from the master:

All nodes already show Ready. Normally they would be NotReady before the kube-flannel add-on is installed, but this environment is not a fresh one, so cached k8s state from the earlier deployment remains.

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   138m   v1.17.5
node1    Ready    <none>   102m   v1.17.5
node2    Ready    <none>   101m   v1.17.5

List the pods running across the cluster:

[root@master ~]# kubectl get pod -A
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-9r77g         1/1     Running   0          138m
kube-system   coredns-6955765f44-9wsl9         1/1     Running   0          138m
kube-system   etcd-master                      1/1     Running   0          138m
kube-system   kube-apiserver-master            1/1     Running   0          138m
kube-system   kube-controller-manager-master   1/1     Running   0          138m
kube-system   kube-proxy-fzz4x                 1/1     Running   0          102m
kube-system   kube-proxy-p45tc                 1/1     Running   0          101m
kube-system   kube-proxy-zrq6p                 1/1     Running   0          138m
kube-system   kube-scheduler-master            1/1     Running   0          138m

Check the health of the control-plane components:

[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   

Install the kube-flannel network add-on, which provides the cluster's internal Pod network:

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created


Install the Kubernetes dashboard:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.1/aio/deploy/recommended.yaml

In recommended.yaml, modify the kubernetes-dashboard Service, adding the two lines type: NodePort and nodePort: 31443:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31443
  selector:
    k8s-app: kubernetes-dashboard

Apply the dashboard manifest:

[root@master ~]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
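To confirm the dashboard Service is exposed on the chosen NodePort:

```shell
# The PORT(S) column should show 443:31443/TCP
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard
```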

Create the dashboard user:

[root@master ~]# kubectl apply -f dashboard-adminuser.yaml
serviceaccount/admin-user created

[root@master ~]# cat dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard


Grant the dashboard user cluster-admin via a ClusterRoleBinding:

[root@master ~]# kubectl apply -f dashboard-ClusterRoleBinding.yaml
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

[root@master ~]# cat dashboard-ClusterRoleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

 

Retrieve the login token:

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')


Log in from a browser at https://<node-IP>:31443


Copy the generated token and paste it into the login form:


The dashboard login succeeds.

