[Cloud Native Essay Contest] Kubernetes HA Deployment with Three Master Nodes (Illustrated)

Posted by 大数据陈浩


nginx must be deployed on every machine.

1. Prerequisites: the Kubernetes components must already be installed.

Disable the firewall, postfix, SELinux, and swap on every machine:

```shell
systemctl disable firewalld && systemctl stop firewalld
systemctl disable postfix && systemctl stop postfix
setenforce 0 && sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
swapoff -a && sed -i 's/.*swap.*/#&/' /etc/fstab
```

Update the kernel parameters so that bridged IPv4 traffic is passed to the iptables chains:

```shell
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# apply the settings
sysctl --system
```

Install and configure Docker

```shell
mkdir -p /etc/docker/

vim /etc/docker/daemon.json
```

Add the following configuration:

```json
{
  "hosts": [
    "tcp://0.0.0.0:9998",
    "unix:///var/run/docker.sock"
  ],
  "insecure-registries": ["192.168.146.101:5005"],
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
```
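A malformed daemon.json silently prevents Docker from starting, so it is worth validating the JSON before restarting the daemon. A quick sketch (using a scratch copy of the config above; `python3 -m json.tool` is just one convenient validator):

```shell
# Write the config shown above to a scratch file and validate its JSON syntax
# (on a real host, point the validator at /etc/docker/daemon.json instead)
cat > /tmp/daemon.json << 'EOF'
{
  "hosts": [
    "tcp://0.0.0.0:9998",
    "unix:///var/run/docker.sock"
  ],
  "insecure-registries": ["192.168.146.101:5005"],
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```

One caveat: on distributions where docker.service already passes a `-H` flag in the systemd unit, defining `hosts` in daemon.json conflicts with it and Docker will refuse to start; in that case remove the flag from the unit file first.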

```shell
yum install -y yum-utils

yum-config-manager --add-repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo

yum install -y docker-ce docker-ce-cli containerd.io

systemctl enable docker && systemctl start docker
```

Configure the Kubernetes yum repository

```shell
vim /etc/yum.repos.d/kubernetes.repo
```

Add the following configuration:

```
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
```

Install kubeadm, kubelet, and kubectl

```shell
yum install -y kubelet-1.18.6 kubeadm-1.18.6 kubectl-1.18.6

systemctl enable kubelet
```

Deploying the three k8s masters

The three master nodes are:

10.0.0.128

10.0.0.215

10.0.0.29

#### Regenerate the api-server certificate

Run the following on a master node:

```shell
# Export the live kubeadm configuration
kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml
```


Add the `certSANs` parameter under `apiServer`:

```yaml
apiServer:
  certSANs:
  - localhost
  - 10.
  - 10.
  - 10.
  - hw-
  - hw-
  - hw-
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.6
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
```

Update the certificate:

```shell
# Back up the whole /etc/kubernetes directory first
cp -r /etc/kubernetes ~/backups

# Remove the old api-server certificate
rm /etc/kubernetes/pki/apiserver.{crt,key}

# Generate a new certificate directly with kubeadm
kubeadm init phase certs apiserver --config kubeadm.yaml

# Restart the APIServer so it picks up the new certificate;
# the simplest way is to kill the APIServer container
docker kill $(docker ps | grep kube-apiserver | grep -v pause | cut -d' ' -f1)

# Verify the certificate
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text

# Save the cluster configuration above back into the kubeadm-config ConfigMap
kubeadm config upload from-file --config kubeadm.yaml

# Verify that it was saved
kubectl -n kube-system get configmap kubeadm-config -o yaml
```
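To see exactly which SANs ended up in the regenerated certificate, filter the openssl output for the Subject Alternative Name extension. The demo below generates a throwaway self-signed certificate (hypothetical CN and SANs) just to show the filtering; on a real master, point openssl at /etc/kubernetes/pki/apiserver.crt instead:

```shell
# Generate a throwaway cert carrying two SANs (requires openssl >= 1.1.1 for -addext)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 1 -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=DNS:localhost,IP:10.0.0.128"

# List the SANs the certificate actually contains
openssl x509 -in /tmp/demo.crt -noout -text | grep -A1 "Subject Alternative Name"
```

Every hostname and IP through which the apiserver will be reached (including the load balancer address) must appear in this list, or client TLS verification will fail.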

Load balancing

(This assumes the Kubernetes deployment steps above are already complete.)

Run the following on all nodes.

Install the nginx and keepalived components:

```shell

yum install nginx keepalived -y

```

Use nginx as a load balancer on every node. Note that the config is written to /etc/kubernetes/nginx.conf rather than nginx's default path, so nginx must be started against it explicitly (e.g. `nginx -c /etc/kubernetes/nginx.conf`).

```shell

vim /etc/kubernetes/nginx.conf

```

Add the following content (the syntax can be checked afterwards with `nginx -t -c /etc/kubernetes/nginx.conf`):

```nginx
error_log stderr notice;

worker_processes 2;
worker_rlimit_nofile 130048;
worker_shutdown_timeout 10s;

events {
    multi_accept on;
    use epoll;
    worker_connections 16384;
}

stream {
    upstream kube_apiserver {
        least_conn;
        server 10.0.0.215:6443;
        server 10.0.0.128:6443;
        server 10.0.0.29:6443;
    }

    server {
        listen 8443;
        proxy_pass kube_apiserver;
        proxy_timeout 10m;
        proxy_connect_timeout 1s;
    }
}

http {
    aio threads;
    aio_write on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 5m;
    keepalive_requests 100;
    reset_timedout_connection on;
    server_tokens off;
    autoindex off;

    server {
        listen 8081;
        location /stub_status {
            stub_status on;
            access_log off;
        }
    }
}
```

Deploy the keepalived service (installed together with nginx above):

```shell
yum install keepalived -y
```
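The article installs keepalived but never shows its configuration. keepalived's role here is to float a virtual IP across the masters so that a single node failure does not take the entry point down. A minimal sketch of /etc/keepalived/keepalived.conf, assuming a hypothetical virtual IP 10.0.0.100 and NIC name eth0 (both must be adjusted to the actual environment):

```
global_defs {
    router_id k8s-master        # unique per node
}

vrrp_instance VI_1 {
    state MASTER                # BACKUP on the other two masters
    interface eth0              # adjust to the host NIC (assumption)
    virtual_router_id 51
    priority 100                # lower values (e.g. 90, 80) on the backups
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s_vip
    }
    virtual_ipaddress {
        10.0.0.100              # hypothetical floating VIP
    }
}
```

After editing the config, enable and start the service on each node with `systemctl enable keepalived && systemctl start keepalived`.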

2. Update the master node configuration

**Update the kubelet config:**

```shell

vim /etc/kubernetes/kubelet.conf

```

Replace the original IP with the nginx proxy address:

```yaml

......

server: https://localhost:8443

name: kubernetes

......


```

Restart the service:

```shell

systemctl restart kubelet

```

**Update the controller-manager config:**

```shell

vim /etc/kubernetes/controller-manager.conf

```


Replace the original IP with the nginx proxy address:

```yaml

......

server: https://localhost:8443

name: kubernetes

......

```

Restart the service:

```shell

docker kill $(docker ps | grep kube-controller-manager | grep -v pause | cut -d' ' -f1)

```

**Update the scheduler config:**

```shell

vim /etc/kubernetes/scheduler.conf

```

Replace the original IP with the nginx proxy address:

```yaml

......

server: https://localhost:8443

name: kubernetes

......

```

Restart the service:

```shell

docker kill $(docker ps | grep kube-scheduler | grep -v pause | cut -d' ' -f1)

```

**Update the kubectl client config**

```

vim ~/.kube/config

```

Replace the original IP with the nginx proxy address:


```yaml

......

server: https://localhost:8443

name: kubernetes

......

```

**Update the kube-proxy config**

```shell

kubectl -n kube-system edit cm kube-proxy

```

Replace the original IP with the nginx proxy address:

```yaml

......

kubeconfig.conf: |-

apiVersion: v1

kind: Config

clusters:

- cluster:

certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

server: https://localhost:8443

name: default

......


```

Restart kube-proxy on every node, e.g. by deleting the pods so that the DaemonSet recreates them: `kubectl -n kube-system delete pod -l k8s-app=kube-proxy` (the label assumes a kubeadm-deployed cluster).

3. Update the control-plane (master) configuration

Fetch the current configuration from the cluster's ConfigMap:

```shell

kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml

```


Then add the `controlPlaneEndpoint` property to the configuration file to specify the address of the control plane's load balancer.

```yaml

controlPlaneEndpoint: localhost:8443  # add this line near the top of the file

```

Upload it back to the cluster with:

```shell

kubeadm config upload from-file --config kubeadm.yaml

```

Next, the `cluster-info` ConfigMap in the `kube-public` namespace needs updating: it contains a kubeconfig file whose `server:` line points at a single control-plane node. Simply edit that `server:` line with `kubectl -n kube-public edit cm cluster-info` so it points at the control plane's load balancer instead.

```shell

kubectl -n kube-public edit cm cluster-info

```

Replace the original IP with the nginx proxy address:


```yaml

......

server: https://localhost:8443

name: ""

......

```

Once updated, the cluster-info output shows the load balancer address:

```shell

kubectl cluster-info

```

4. Generate the join token

```shell
kubeadm init phase upload-certs --upload-certs

kubeadm token create --print-join-command --config kubeadm.yaml
```

The first command uploads the control-plane certificates and prints the certificate key; the second prints the join command (to join as a master, append `--control-plane --certificate-key <key>` as shown below).

5. Add the master nodes

Run the following on each new master node:

```shell
kubeadm reset

rm -rf /var/lib/etcd

kubeadm join localhost:8443 --token 4pi1b4.ngn8krw0aonwpnzd --discovery-token-ca-cert-hash sha256:e94427a152103d795535f5ec783f5f4dbaf2f92419682326d8716332d493f683 --control-plane --certificate-key 653c8a46198e675bee0b7b0183049b7e9ee08a2ff567bc5c36b82c28553ad484
```
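The long `--discovery-token-ca-cert-hash` value is simply the SHA-256 of the cluster CA's public key, so it can be recomputed on any master to verify a join command. The sketch below demonstrates the derivation with a throwaway CA; on a real node, run the same pipeline against /etc/kubernetes/pki/ca.crt:

```shell
# Create a throwaway CA certificate for demonstration purposes
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key -out /tmp/ca.crt \
  -days 1 -subj "/CN=kubernetes-ca"

# Hash of the CA public key, as expected by --discovery-token-ca-cert-hash sha256:<hash>
openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```

If the recomputed hash does not match the one in the join command, the token was generated against a different CA and the join will be rejected.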


6. Update the etcd configuration

Log in to each master node and edit the etcd manifest:

```shell

vim /etc/kubernetes/manifests/etcd.yaml

```

Add all masters to the initial cluster list:

```yaml
......
    - --initial-cluster=hw-prd-dtp-hue-server-10-4-46-215=https://10.4.46.215:2380,hw-prd-dtp-k8s-master-10-4-46-128=https://10.4.46.128:2380,hw-prd-dtp-k8s-master-10-4-46-29=https://10.4.46.29:2380
......
```

[This article is taking part in the Cloud Native essay activity]; activity link: https://ost.51cto.com/posts/12598

