Kubernetes Series: Deploying a High-Availability Cluster with kubeadm
Posted by 范桂飓
Table of Contents
- HA Cluster Deployment Topology
- 1. Network Proxy Configuration
- 2. Load Balancer Preparation
- 3. Kubernetes Cluster Preparation
- 4. Initializing the First Master Control-Plane Node
- (Optional) Cleaning Up and Re-initializing
- 5. Adding Redundant Master Control-Plane Nodes
- 6. Adding Worker Nodes
- 7. Installing a CNI Network Plugin
- 8. Installing Metrics Server
- 9. Installing the Dashboard GUI
- 10. Accessing the Dashboard UI
- 11. Persistent Storage via NFS
HA Cluster Deployment Topology
Official docs: https://kubernetes.io/zh/docs/setup/production-environment/
- Infrastructure: OpenStack
- VM cluster: 3 Masters, 2 Nodes, 2 Load Balancers
- Compute resources: x86-64 processor, 2 CPUs, 2 GB RAM, 20 GB free disk space
- Operating system: CentOS 7.x+
- Version: Kubernetes 1.18.14
- Container Runtime: Docker
1. Network Proxy Configuration
Because the required packages and images have to be fetched through a proxy, the HTTP/S Proxy and No Proxy settings must be configured carefully; otherwise downloads fail or connectivity errors appear.
export https_proxy=http://proxy_ip:7890 http_proxy=http://proxy_ip:7890 all_proxy=socks5://proxy_ip:7890 no_proxy=localhost,127.0.0.1,apiserver_endpoint_ip,k8s_mgmt_network_ip_pool,pod_network_ip_pool,service_network_ip_pool
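The one-line export above is easy to get wrong. A more readable sketch follows; `PROXY_IP` and the management CIDR are placeholders (assumptions), while the apiserver endpoint and pod/service CIDRs match the values passed to kubeadm init later in this article:

```shell
# Placeholders: replace PROXY_IP with your proxy host and MGMT_CIDR with the
# actual management network of your nodes.
PROXY_IP="proxy_ip"
APISERVER_ENDPOINT="192.168.0.100"
MGMT_CIDR="192.168.0.0/24"
POD_CIDR="10.0.0.0/8"
SERVICE_CIDR="172.16.0.0/16"

export https_proxy="http://${PROXY_IP}:7890"
export http_proxy="http://${PROXY_IP}:7890"
export all_proxy="socks5://${PROXY_IP}:7890"
# Cluster-internal addresses must bypass the proxy, or traffic from kubeadm,
# kubelet, and kubectl to the apiserver and pod/service networks will be
# proxied and fail.
export no_proxy="localhost,127.0.0.1,${APISERVER_ENDPOINT},${MGMT_CIDR},${POD_CIDR},${SERVICE_CIDR}"
```

Note that not every tool honors CIDR notation in `no_proxy`; verify against the tools you actually use.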
2. Load Balancer Preparation
OpenStack Octavia LBaaS provides the HA Load Balancer here; you can also configure keepalived and haproxy manually (https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#options-for-software-load-balancing).
- VIP: allocated from kube-mgmt-subnet
- Listener: TCP :6443 (the port kube-apiserver listens on)
- Members: the 3 k8s-master nodes
- Monitor: TCP :6443 as well
Note: once the Load Balancer is created, first verify that the TCP reverse proxy works. Since the apiserver is not running yet, a connection-refused error is expected. Remember to test again after the first control-plane node has been initialized.
# nc -v LOAD_BALANCER_IP PORT
nc -v 192.168.0.100 6443
3. Kubernetes Cluster Preparation
Note: perform all of the following on every node.
- Configure proxy access as in section 1.
- Add hostname resolution for every node:
# vi /etc/hosts
192.168.0.100 kube-apiserver-endpoint
192.168.0.148 k8s-master-1
192.168.0.112 k8s-master-2
192.168.0.193 k8s-master-3
192.168.0.208 k8s-node-1
192.168.0.174 k8s-node-2
- Enable passwordless SSH between all nodes.
- Disable the swap partition so that kubelet works properly.
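The article gives no commands for this step; a minimal sketch is below. `FSTAB` is parameterized only so the edit can be rehearsed on a copy of the file first:

```shell
FSTAB="${FSTAB:-/etc/fstab}"
# Turn off all active swap immediately; kubelet refuses to run with swap on.
swapoff -a 2>/dev/null || true
# Comment out any uncommented fstab entry that mounts swap, so the setting
# survives a reboot.
if [ -f "$FSTAB" ]; then
    sed -ri 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' "$FSTAB"
fi
```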
- Make sure the iptables tool does not use the nftables backend: it is incompatible with the current kubeadm packages, causes duplicated firewall rules, and breaks kube-proxy.
- Verify network connectivity between the nodes.
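A quick reachability loop, using the hostnames from the /etc/hosts block above, is one way to check this:

```shell
# Ping every node once; any UNREACHABLE line needs investigating before
# continuing with the installation.
for host in k8s-master-1 k8s-master-2 k8s-master-3 k8s-node-1 k8s-node-2; do
    ping -c1 -W1 "$host" >/dev/null 2>&1 && echo "$host: ok" || echo "$host: UNREACHABLE"
done
```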
- Disable SELinux so that containers are allowed to access the host filesystem.
# Set SELinux to permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
- On RHEL/CentOS 7, traffic handled by kube-proxy must pass through iptables for local routing, so make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration.
# Make sure the br_netfilter module is loaded
modprobe br_netfilter
lsmod | grep br_netfilter
# Make sure sysctl passes bridged IPv4 traffic to the iptables chains
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
- Install base dependencies:
yum install ebtables ethtool ipvsadm -y
Installing the Container Runtime
Note: when Linux uses systemd, systemd creates and manages the cgroup hierarchy. The Container Runtime, kubelet, and systemd must all use the same cgroup driver, or unpredictable problems can occur. We therefore configure the Container Runtime and kubelet to use systemd as the cgroup driver, which makes the system more stable.
For Docker, this means setting native.cgroupdriver=systemd.
- Install:
# Install dependency packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker repository
sudo yum-config-manager --add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker CE
sudo yum update -y && sudo yum install -y \
containerd.io-1.2.13 \
docker-ce-19.03.11 \
docker-ce-cli-19.03.11
- Configure:
# Create the /etc/docker directory
sudo mkdir /etc/docker
# Set up the Docker daemon
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
- Restart:
# Create /etc/systemd/system/docker.service.d
sudo mkdir -p /etc/systemd/system/docker.service.d
# Restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
sudo systemctl status docker
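After the restart, it is worth confirming the cgroup driver actually took effect. A sketch of that check (`CONF` is parameterized only so it can be rehearsed without a live daemon):

```shell
CONF="${CONF:-/etc/docker/daemon.json}"
# daemon.json should request the systemd cgroup driver.
if grep -q 'native.cgroupdriver=systemd' "$CONF" 2>/dev/null; then
    echo "daemon.json requests the systemd cgroup driver"
fi
# On a node with Docker running, this should print: Cgroup Driver: systemd
docker info 2>/dev/null | grep -i 'cgroup driver' || true
```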
Installing kubeadm, kubelet and kubectl
Note: kubeadm is the deployment tool for the Kubernetes Cluster, but it cannot install or manage kubelet or kubectl, so we need to install them ourselves, making sure all three are at matching versions.
- Add the Kubernetes YUM repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
- Install:
# List the available versions
$ yum list kubelet kubeadm kubectl --showduplicates | grep 1.18.14 | sort -r
kubelet.x86_64 1.18.14-0 kubernetes
kubectl.x86_64 1.18.14-0 kubernetes
kubeadm.x86_64 1.18.14-0 kubernetes
# Install the pinned version
yum install -y kubelet-1.18.14 kubeadm-1.18.14 kubectl-1.18.14 --disableexcludes=kubernetes
# Confirm the versions match
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.14", GitCommit:"89182bdd065fbcaffefec691908a739d161efc03", GitTreeState:"clean", BuildDate:"2020-12-18T12:08:45Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.14", GitCommit:"89182bdd065fbcaffefec691908a739d161efc03", GitTreeState:"clean", BuildDate:"2020-12-18T12:11:25Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
$ kubelet --version
Kubernetes v1.18.14
- Configure: as mentioned above, the Container Runtime and kubelet should both use systemd as the cgroup driver, which makes the system more stable.
# vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
- Start:
$ systemctl daemon-reload
$ systemctl restart kubelet
$ systemctl enable --now kubelet
$ systemctl status kubelet
Note: kubelet.service will restart every few seconds, crash-looping as it waits for instructions from kubeadm.
4. Initializing the First Master Control-Plane Node
The kubeadm init workflow
The kubeadm init command bootstraps a Kubernetes Master by performing the following steps:
- Preflight checks: kubeadm exits on any ERROR unless the problem is resolved or --ignore-preflight-errors=<list> is passed explicitly; WARNINGs may also be printed.
- Generates a self-signed CA to establish an identity for each system component: --cert-dir can override the CA directory (default /etc/kubernetes/pki), where the CA certificate, keys, and related files are placed. The API Server certificate gets an additional SAN entry for every --apiserver-cert-extra-sans value, lowercased where necessary.
- Writes kubeconfig files into /etc/kubernetes/ so that the kubelet, Controller Manager, and Scheduler can connect to the API Server, each with its own identity; a standalone kubeconfig named admin.conf is also generated for administrative use.
- Generates static Pod manifests for the API Server, Controller Manager, and Scheduler under /etc/kubernetes/manifests; the kubelet watches this directory and creates the system component Pods at startup. If no external etcd service is supplied, an additional static Pod manifest is generated for etcd.
The kubeadm init workflow only continues once all of the Master's static Pods are up and running.
- Applies labels and taints to the Master so that production workloads are never scheduled onto it.
- Generates a token that other Nodes can later use to register themselves with the Master; a token string can also be supplied explicitly via --token.
- Performs all of the configuration required for Nodes to join the Cluster via the mechanisms described in the Bootstrap Tokens and TLS bootstrapping documents:
  - creates a ConfigMap with the information needed to add a Node to the Cluster, and sets up the related RBAC access rules;
  - allows Bootstrap Tokens to access the CSR signing API;
  - configures automatic approval of new CSR requests.
- Installs a DNS server (CoreDNS) and kube-proxy through the API Server. Note that although the DNS server is deployed at this point, it is not scheduled until a CNI is installed.
Running the initialization
Note 1: because we are deploying an HA cluster, we must pass --control-plane-endpoint with the API Server's HA endpoint.
Note 2: kubeadm pulls the required images from k8s.gcr.io by default; --image-repository can point it at the Aliyun mirror instead.
Note 3: without --upload-certs, you must manually copy the CA certificates from the first control-plane node to every redundant control-plane node that joins; passing it (recommended) uploads the certificates to a Secret so joining nodes can fetch them.
- Initialize:
kubeadm init \
--control-plane-endpoint "192.168.0.100" \
--kubernetes-version "1.18.14" \
--pod-network-cidr "10.0.0.0/8" \
--service-cidr "172.16.0.0/16" \
--token "abcdef.0123456789abcdef" \
--token-ttl "0" \
--image-repository registry.aliyuncs.com/google_containers \
--upload-certs
W1221 00:02:43.240309 10942 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.14
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.16.0.1 192.168.0.148 192.168.0.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [192.168.0.148 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [192.168.0.148 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1221 00:02:47.773223 10942 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1221 00:02:47.774303 10942 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.117265 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
463868e92236803eb8fdeaa3d7b0ada67cf0f882c45974682c6ac2f20be1d544
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:88dc9773b5dfc0cde6082314a1a4a9bbdb6ddfd3f1f84a7113581a3b07e839e1 \
--control-plane --certificate-key 463868e92236803eb8fdeaa3d7b0ada67cf0f882c45974682c6ac2f20be1d544
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:88dc9773b5dfc0cde6082314a1a4a9bbdb6ddfd3f1f84a7113581a3b07e839e1
- Check the Pods: verify that the Master's components are all present.
# Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7ff77c879f-fh9vb 0/1 Pending 0 23m
coredns-7ff77c879f-qmk7z 0/1 Pending 0 23m
etcd-k8s-master-1 1/1 Running 0 24m
kube-apiserver-k8s-master-1 1/1 Running 0 24m
kube-controller-manager-k8s-master-1 1/1 Running 0 24m
kube-proxy-7hx55 1/1 Running 0 23m
kube-scheduler-k8s-master-1 1/1 Running 0 24m
- Check the images:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.18.14 8e6bca1d4e68 2 days ago 117MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.18.14 f17e261f4c8a 2 days ago 173MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.18.14 b734a959c6fb 2 days ago 162MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.18.14 95660d582e82 2 days ago 95.3MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 10 months ago 683kB
registry.aliyuncs.com/google_containers/coredns 1.6.7 67da37a9a360 10 months ago 43.8MB
registry.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 14 months ago 288MB
- Check the containers:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f9a068b890d7 8e6bca1d4e68 "/usr/local/bin/kube…" 2 minutes ago Up 2 minutes k8s_kube-proxy_kube-proxy-7hx55_kube-system_aacb0da3-16ec-414c-b138-856e2b470bb9_0
3b6adfa0b1a5 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-proxy-7hx55_kube-system_aacb0da3-16ec-414c-b138-856e2b470bb9_0
dcc47de63e50 f17e261f4c8a "kube-apiserver --ad…" 3 minutes ago Up 3 minutes k8s_kube-apiserver_kube-apiserver-k8s-master-1_kube-system_c693bd1fadf036d8e2e4df0afd49f062_0
53afb7fbe8c0 b734a959c6fb "kube-controller-man…" 3 minutes ago Up 3 minutes k8s_kube-controller-manager_kube-controller-manager-k8s-master-1_kube-system_f75424d466cd7197fb8095b0f59ea8d9_0
a4101a231c1b 303ce5db0e90 "etcd --advertise-cl…" 3 minutes ago Up 3 minutes k8s_etcd_etcd-k8s-master-1_kube-system_f85e02734d6479f3bb3e468eea87fd3a_0
197f510ff6c5 95660d582e82 "kube-scheduler --au…" 3 minutes ago Up 3 minutes k8s_kube-scheduler_kube-scheduler-k8s-master-1_kube-system_0213a889f9350758ac9847629f75db19_0
3a4590590093 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-controller-manager-k8s-master-1_kube-system_f75424d466cd7197fb8095b0f59ea8d9_0
4bbdc99a7a68 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-apiserver-k8s-master-1_kube-system_c693bd1fadf036d8e2e4df0afd49f062_0
19488127c269 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 3 minutes ago Up 3 minutes k8s_POD_etcd-k8s-master-1_kube-system_f85e02734d6479f3bb3e468eea87fd3a_0
e67d2f7a27b0 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-scheduler-k8s-master-1_kube-system_0213a889f9350758ac9847629f75db19_0
- Verify that the API Server LB works:
$ nc -v 192.168.0.100 6443
Connection to 192.168.0.100 port 6443 [tcp/sun-sr-https] succeeded!
Note: the token above expires after 24 hours. If you want to keep adding new nodes after that, generate a new one:
# Create a new token
kubeadm token create
# output: 5didvk.d09sbcov8ph2amjw
# Compute a new --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
# output: 8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
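As a sketch of what the pipeline above computes: the --discovery-token-ca-cert-hash value is just the SHA-256 digest of the cluster CA's public key in DER form, so it can be wrapped in a small function and recomputed for any certificate file:

```shell
# Compute the kubeadm discovery hash for the certificate at $1.
ca_cert_hash() {
    openssl x509 -pubkey -in "$1" \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'
}
# On a master: ca_cert_hash /etc/kubernetes/pki/ca.crt
```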
(Optional) Cleaning Up and Re-initializing
To run kubeadm init again, you must first tear down the cluster. On the Master, trigger a best-effort cleanup:
kubeadm reset
The reset process does not flush iptables rules or IPVS tables. If you want to reset them, do so manually:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm -C
Adjust the arguments as needed and run the initialization again:
kubeadm init <args>
Or, remove the node from the cluster entirely:
kubectl delete node <node name>
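A hedged sketch of removing a node cleanly: drain it first so its workloads are rescheduled, then delete the Node object. `KUBECTL` is parameterized only so the sequence can be rehearsed without a live cluster (on Kubernetes 1.18, the emptyDir flag is still spelled --delete-local-data):

```shell
KUBECTL="${KUBECTL:-kubectl}"
# Drain the node (evicting pods, ignoring DaemonSets), then delete it.
remove_node() {
    $KUBECTL drain "$1" --ignore-daemonsets --delete-local-data --force
    $KUBECTL delete node "$1"
}
# Example: remove_node k8s-node-2   # then run `kubeadm reset` on that node
```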
5. Adding Redundant Master Control-Plane Nodes
Once the first Master has been initialized, we can go on to add the redundant Master nodes.
- Add k8s-master-2:
kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:88dc9773b5dfc0cde6082314a1a4a9bbdb6ddfd3f1f84a7113581a3b07e839e1 \
--control-plane --certificate-key 463868e92236803eb8fdeaa3d7b0ada67cf0f882c45974682c6ac2f20be1d544
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-2 localhost] and IPs [192.168.0.112 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-2 localhost] and IPs [192.168.0.112 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.16.0.1 192.168.0.112 192.168.0.100]
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W1221 00:30:18.978564 27668 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1221 00:30:18.986650 27668 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1221 00:30:18.987613 27668 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
"level":"warn","ts":"2020-12-21T00:30:34.018+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.0.112:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master-2 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
- Add k8s-master-3:
kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:88dc9773b5dfc0cde6082314a1a4a9bbdb6ddfd3f1f84a7113581a3b07e839e1 \
--control-plane --certificate-key 463868e92236803eb8fdeaa3d7b0ada67cf0f882c45974682c6ac2f20be1d544
- Check the Master node count:
# Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master-1 NotReady master 35m v1.18.14
k8s-master-2 NotReady master 8m14s v1.18.14
k8s-master-3 NotReady master 2m30s v1.18.14
6. Adding Worker Nodes
With the highly available Master control plane deployed, we can register any number of worker Nodes.
- Add a Node:
kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:88dc9773b5dfc0cde6082314a1a4a9bbdb6ddfd3f1f84a7113581a3b07e839e1
W1221 00:39:36.256784 29495 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was以上是关于Kubernetes 系列使用 kubeadm 部署高可用集群的主要内容,如果未能解决你的问题,请参考以下文章