Kubernetes Binary Installation

Posted by 礁之


I. Lab Environment

OS | Hostname | IP | Spec | Services | Role
CentOS 7.4 | master01 | 192.168.100.202 | 4 GB RAM, 2 cores | docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, kube-nginx, flannel | master node
CentOS 7.4 | master02 | 192.168.100.203 | 4 GB RAM, 2 cores | docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, kube-nginx, flannel | master node
CentOS 7.4 | worker01 | 192.168.100.205 | 2 GB RAM, 1 core | docker, etcd, kubelet, kube-proxy, flannel | worker node
CentOS 7.4 | worker02 | 192.168.100.206 | 2 GB RAM, 1 core | docker, etcd, kubelet, kube-proxy, flannel | worker node

All of the hosts above use bridged NICs. The virtual IP is 192.168.100.204, and two masters are deployed to make the Kubernetes control plane highly available.

II. Procedure

1. Start with basic configuration

These operations must be performed on all four servers.

#master01
[root@Centos7 ~]# hostnamectl set-hostname master01
[root@Centos7 ~]# su
[root@master01 ~]# cat <<aaa>> /etc/hosts
192.168.100.202 master01
192.168.100.203 master02
192.168.100.205 worker01
192.168.100.206 worker02
aaa
#master02
[root@Centos7 ~]# hostnamectl set-hostname master02
[root@Centos7 ~]# su
[root@master02 ~]# cat <<aaa>> /etc/hosts
> 192.168.100.202 master01
> 192.168.100.203 master02
> 192.168.100.205 worker01
> 192.168.100.206 worker02
> aaa
#worker01
[root@Centos7 ~]# hostnamectl set-hostname worker01
[root@Centos7 ~]# su
[root@worker01 ~]# cat <<aaa>> /etc/hosts
> 192.168.100.202 master01
> 192.168.100.203 master02
> 192.168.100.205 worker01
> 192.168.100.206 worker02
> aaa
#worker02
[root@Centos7 ~]# hostnamectl set-hostname worker02
[root@Centos7 ~]# su
[root@worker02 ~]# cat <<aaa>> /etc/hosts
> 192.168.100.202 master01
> 192.168.100.203 master02
> 192.168.100.205 worker01
> 192.168.100.206 worker02
> aaa
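
An optional sanity check, not part of the original procedure: once /etc/hosts is in place on a node, you can confirm that every hostname resolves and is reachable, for example:

# quick resolution/reachability check; run on any of the four nodes
for h in master01 master02 worker01 worker02; do
  ping -c 1 -W 1 $h >/dev/null && echo "$h OK" || echo "$h FAILED"
done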

2. Write the initialization script

All operations in step 2 are performed on master01 only!

#Write the script on master01
[root@master01 ~]# vim k8sinit.sh
#!/bin/sh
#****************************************************************#
# ScriptName: k8sinit.sh
# Initialize the machine. This needs to be executed on every machine.
# Mkdir k8s directory
yum -y install wget ntpdate && ntpdate ntp1.aliyun.com
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum -y install epel-release
mkdir -p /opt/k8s/bin/
mkdir -p /data/k8s/k8s
mkdir -p /data/k8s/docker
# Disable the SELinux.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Turn off and disable the firewalld.
systemctl stop firewalld
systemctl disable firewalld
# Modify related kernel parameters & Disable the swap.
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.tcp_tw_recycle = 0
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
net.ipv6.conf.all.disable_ipv6 = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf >&/dev/null
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
modprobe br_netfilter

# Add ipvs modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
modprobe -- nf_conntrack
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
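
# (Optional check, not in the original script) confirm the ipvs and conntrack
# modules actually loaded; each loaded module should appear in the output:
lsmod | egrep 'ip_vs|nf_conntrack'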

# Install rpm
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget gcc gcc-c++ make libnl libnl-devel libnfnetlink-devel openssl-devel vim
# ADD k8s bin to PATH
echo 'export PATH=/opt/k8s/bin:$PATH' >> /root/.bashrc
#save and exit
[root@master01 ~]# chmod +x k8sinit.sh
#Configure passwordless SSH from master01 to the other hosts
[root@master01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:hjslVhnFN3ZeWAhJR0xXQavf1L1OyF0L2USEqoELTgo root@master01
The key's randomart image is:
+---[RSA 2048]----+
|        .o..o*BO*|
|         o. =o*.o|
|        +  o.+ + |
| E   o + . .  * o|
|  . + = S o  + .=|
|   . o * .  . =.=|
|      o      o *.|
|       .      o  |
|               . |
+----[SHA256]-----+
[root@master01 ~]# ssh-copy-id 192.168.100.202
[root@master01 ~]# ssh-copy-id 192.168.100.203
[root@master01 ~]# ssh-copy-id 192.168.100.205
[root@master01 ~]# ssh-copy-id 192.168.100.206
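An optional check, not in the original article, to confirm the key was copied correctly: each command below should print the remote hostname without prompting for a password.

for ip in 192.168.100.202 192.168.100.203 192.168.100.205 192.168.100.206; do
  ssh -o BatchMode=yes root@$ip hostname
done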
#Write the script that sets the environment variables
[root@master01 ~]# vim environment.sh   #remember to adjust the node IPs and NIC name below; if they already match your environment, no changes are needed
#!/bin/bash
# Encryption key used to generate the EncryptionConfig
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

# Array of master node IPs
export MASTER_IPS=(192.168.100.202 192.168.100.203)

# Hostnames corresponding to the master IPs
export MASTER_NAMES=(master01 master02)

# Array of worker node IPs
export NODE_IPS=(192.168.100.205 192.168.100.206)

# Hostnames corresponding to the worker node IPs
export NODE_NAMES=(worker01 worker02)

# Array of all cluster node IPs
export ALL_IPS=(192.168.100.202 192.168.100.203 192.168.100.205 192.168.100.206)

# Hostnames corresponding to all cluster IPs
export ALL_NAMES=(master01 master02 worker01 worker02)

# etcd cluster client endpoints
export ETCD_ENDPOINTS="https://192.168.100.202:2379,https://192.168.100.203:2379"

# IPs and ports used for etcd peer (cluster-internal) communication
export ETCD_NODES="master01=https://192.168.100.202:2380,master02=https://192.168.100.203:2380"

# Address and port of the kube-apiserver reverse proxy (kube-nginx); use the virtual IP here
export KUBE_APISERVER="https://192.168.100.204:16443"

# Network interface used for inter-node communication
export IFACE="ens32"

# etcd data directory
export ETCD_DATA_DIR="/data/k8s/etcd/data"

# etcd WAL directory; an SSD partition, or at least a different partition from ETCD_DATA_DIR, is recommended
export ETCD_WAL_DIR="/data/k8s/etcd/wal"

# Data directory for the k8s components
export K8S_DIR="/data/k8s/k8s"

# Docker data directory
export DOCKER_DIR="/data/k8s/docker"

## The parameters below normally do not need to be changed
# Token used for TLS bootstrapping; it can be generated with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
BOOTSTRAP_TOKEN="41f7e4ba8b7be874fcff18bf5cf41a7c"

# Use currently unused subnets for the service and Pod networks
# Service CIDR; unroutable before deployment, routable inside the cluster afterwards (guaranteed by kube-proxy)
SERVICE_CIDR="10.20.0.0/16"

# Pod CIDR; a /16 is recommended; unroutable before deployment, routable inside the cluster afterwards (guaranteed by flanneld)
CLUSTER_CIDR="10.10.0.0/16"

# Service port range (NodePort range)
export NODE_PORT_RANGE="1-65535"

# flanneld network configuration prefix in etcd
export FLANNEL_ETCD_PREFIX="/kubernetes/network"

# kubernetes service IP (normally the first IP in SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.20.0.1"

# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.20.0.254"

# Cluster DNS domain (no trailing dot)
export CLUSTER_DNS_DOMAIN="cluster.local"

# Add the binary directory /opt/k8s/bin to PATH
export PATH=/opt/k8s/bin:$PATH
#save and exit
[root@master01 ~]# chmod +x environment.sh 
[root@master01 ~]# source /root/environment.sh  #load the environment variables
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}; do echo $all_ip; done  #run this first to make sure every server's IP is printed
[root@master01 ~]# ll
总用量 12
-rw-------. 1 root root 1264 1月12 2021 anaconda-ks.cfg
-rwxr-xr-x  1 root root 2470 8月 5 16:28 environment.sh
-rwxr-xr-x  1 root root 1627 8月 5 16:19 k8sinit.sh
[root@master01 ~]# source environment.sh   #if the IPs were not printed, source the script again
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}; do echo $all_ip; done  #like this: every server's IP should be printed
192.168.100.202
192.168.100.203
192.168.100.205
192.168.100.206

#Run the loop below to prepare all four servers in one pass
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> $all_ip"
    scp -rp /etc/hosts root@$all_ip:/etc/hosts
    scp -rp k8sinit.sh root@$all_ip:/root/
    ssh root@$all_ip "bash /root/k8sinit.sh"
  done

This takes quite a while to run, and all hosts must have Internet access!
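
Once the loop finishes, an optional way to spot-check the result on every node (these commands are not part of the original article) is to confirm that SELinux is disabled in the config, swap is off, and IP forwarding is on:

source /root/environment.sh
for all_ip in ${ALL_IPS[@]}; do
  echo ">>> $all_ip"
  # expect SELINUX=disabled, no swap entries, and ip_forward = 1
  ssh root@$all_ip "grep ^SELINUX= /etc/selinux/config; swapon -s; sysctl -n net.ipv4.ip_forward"
done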

3. Create the CA certificate and key

All operations in step 3 are performed on master01.

#Install the cfssl toolset
[root@master01 ~]# mkdir -p /opt/k8s/cert
[root@master01 ~]# curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /opt/k8s/bin/cfssl   #download the cfssl binary
[root@master01 ~]# curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /opt/k8s/bin/cfssljson #download cfssljson
[root@master01 ~]# curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /opt/k8s/bin/cfssl-certinfo
[root@master01 ~]# chmod u+x /opt/k8s/bin/*
[root@master01 ~]# cd /opt/k8s/bin/
[root@master01 bin]# ll
总用量 18808
-rwxr--r-- 1 root root 10376657 8月 6 10:09 cfssl
-rwxr--r-- 1 root root  6595195 8月 6 10:10 cfssl-certinfo
-rwxr--r-- 1 root root  2277873 8月 6 10:10 cfssljson
#Create the root certificate config file
[root@master01 bin]# cd
[root@master01 ~]# mkdir -p /opt/k8s/work
[root@master01 ~]#  cd /opt/k8s/work
[root@master01 work]# cfssl print-defaults config > config.json
[root@master01 work]# cfssl print-defaults csr > csr.json
[root@master01 work]# cp config.json ca-config.json
[root@master01 work]# cat > ca-config.json <<EOF
{
    "signing": {
        "default": {
            "expiry": "876000h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "876000h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF
 
#Field notes:
config.json: multiple profiles can be defined, each with its own expiry time, usages, and so on; a specific profile is selected later when signing certificates;
•	signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
•	server auth: clients can use this CA to verify certificates presented by servers;
•	client auth: servers can use this CA to verify certificates presented by clients;
•	"expiry": "876000h": the certificate is valid for 100 years.
# Create the root certificate signing request (CSR) file
[root@master01 work]# cp csr.json ca-csr.json
[root@master01 work]# cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shanghai",
            "L": "Shanghai",
            "O": "k8s",
            "OU": "System"
        }
    ],
    "ca": {
        "expiry": "876000h"
    }
}
EOF
#Field notes:
•	CN: Common Name; kube-apiserver extracts this field from the certificate as the requesting User Name; browsers use it to verify that a site is legitimate;
•	C: country;
•	ST: state;
•	L: city;
•	O: Organization; kube-apiserver extracts this field as the Group the requesting user belongs to;
•	OU: organization unit.

#Generate the CA key (ca-key.pem) and certificate (ca.pem)
[root@master01 work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca 
2021/08/06 10:15:01 [INFO] generating a new CA key and certificate from CSR
2021/08/06 10:15:01 [INFO] generate received request
2021/08/06 10:15:01 [INFO] received CSR
2021/08/06 10:15:01 [INFO] generating key: rsa-2048
2021/08/06 10:15:02 [INFO] encoded CSR
2021/08/06 10:15:02 [INFO] signed certificate with serial number 671027392584519656097263783341319452729816665502
[root@master01 work]# echo $?
0

Note: since the Kubernetes cluster uses mutual TLS authentication, after generating the certificates copy ca-key.pem and ca.pem into the certificate directory (/etc/kubernetes/cert in this deployment) on every machine. The CN, C, ST, L, O, OU combination in each certificate's CSR file must be unique, otherwise a PEER'S CERTIFICATE HAS AN INVALID SIGNATURE error may occur;
in the CSR files created later, the CN differs for each certificate (while C, ST, L, O, OU stay the same) so that they can be told apart;
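
Before distributing the CA it can be worth inspecting what was generated; a minimal check (not in the original article) using openssl:

# the subject should show CN=kubernetes, O=k8s, OU=System, and the extension should show CA:TRUE
openssl x509 -noout -subject -dates -in ca.pem
openssl x509 -noout -text -in ca.pem | grep -A1 'Basic Constraints'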
#Distribute the certificates
[root@master01 work]# source /root/environment.sh
[root@master01 work]#  for all_ip in ${ALL_IPS[@]};   do     echo ">>> $all_ip";     ssh root@$all_ip "mkdir -p /etc/kubernetes/cert";  scp ca*.pem ca-config.json root@$all_ip:/etc/kubernetes/cert; done
>>> 192.168.100.202
ca-key.pem                                                                                             100% 1679     1.6MB/s   00:00    
ca.pem                                                                                                 100% 1367    56.8KB/s   00:00    
ca-config.json                                                                                         100%  388    75.1KB/s   00:00    
>>> 192.168.100.203
ca-key.pem                                                                                             100% 1679     1.1MB/s   00:00    
ca.pem                                                                                                 100% 1367     1.5MB/s   00:00    
ca-config.json                                                                                         100%  388   594.7KB/s   00:00    
>>> 192.168.100.205
ca-key.pem                                                                                             100% 1679     1.6MB/s   00:00    
ca.pem                                                                                                 100% 1367     1.4MB/s   00:00    
ca-config.json                                                                                         100%  388   429.7KB/s   00:00    
>>> 192.168.100.206
ca-key.pem                                                                                             100% 1679     1.6MB/s   00:00    
ca.pem                                                                                                 100% 1367     1.5MB/s   00:00    
ca-config.json                                                                                         100%  388   629.1KB/s   00:00    
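
An optional follow-up check (not in the original article), with environment.sh sourced, to confirm the files landed on every node:

for all_ip in ${ALL_IPS[@]}; do
  echo ">>> $all_ip"
  # ca.pem, ca-key.pem and ca-config.json should all be listed
  ssh root@$all_ip "ls -l /etc/kubernetes/cert"
done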

4. Deploy the etcd cluster

All operations in step 4 are performed on master01.

#Install etcd
etcd is a Raft-based distributed key-value store developed by CoreOS, commonly used for service discovery, shared configuration, and concurrency control (such as leader election and distributed locks). Kubernetes uses etcd to store all of its runtime data.
[root@master01 ~]# cd /opt/k8s/work  
[root@master01 work]# wget https://github.com/coreos/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
[root@master01 work]# ll
总用量 11116
-rw-r--r-- 1 root    root       388 8月 6 10:12 ca-config.json
-rw-r--r-- 1 root    root      1005 8月 6 10:15 ca.csr
-rw-r--r-- 1 root    root       310 8月 6 10:13 ca-csr.json
-rw------- 1 root    root      1679 8月 6 10:15 ca-key.pem
-rw-r--r-- 1 root    root      1367 8月 6 10:15 ca.pem
-rw-r--r-- 1 root    root       567 8月 6 10:12 config.json
-rw-r--r-- 1 root    root       287 8月 6 10:12 csr.json
drwxr-xr-x 3 6810230 users      123 10月11 2018 etcd-v3.3.10-linux-amd64
-rw-r--r-- 1 root    root  11353259 3月25 2020 etcd-v3.3.10-linux-amd64.tar.gz
[root@master01 work]# tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
#Note: flanneld (v0.11.0/v0.12.0) does not support etcd v3.4.x, so this deployment uses etcd v3.3.10.
#Distribute etcd to the master nodes
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
   do
     echo ">>> $master_ip"
     scp etcd-v3.3.10-linux-amd64/etcd* root@$master_ip:/opt/k8s/bin
     ssh root@$master_ip "chmod +x /opt/k8s/bin/*"
   done
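
An optional check (not in the original article), with environment.sh sourced, to confirm the binaries are in place and executable on both masters:

for master_ip in ${MASTER_IPS[@]}; do
  echo ">>> $master_ip"
  # should print "etcd Version: 3.3.10"
  ssh root@$master_ip "/opt/k8s/bin/etcd --version | head -1"
done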
#Create the etcd certificate and key, starting with the etcd certificate signing request file
[root@master01 work]#  cat > etcd-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
        "127.0.0.1",
        "192.168.100.202",
        "192.168.100.203"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shanghai",
            "L": "Shanghai",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
#Explanation:
hosts: the list of node IPs or domain names authorized to use this certificate; every etcd cluster member IP (here the two master nodes) must be listed.

##Generate the key and certificate
[root@master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
2021/08/06 10:23:29 [INFO] generate received request
2021/08/06 10:23:29 [INFO] received CSR
2021/08/06 10:23:29 [INFO] generating key: rsa-2048
2021/08/06 10:23:29 [INFO] encoded CSR
2021/08/06 10:23:29 [INFO] signed certificate with serial number 613228402925097686112501293991749855067805987177
2021/08/06 10:23:29 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
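An optional check (not in the original article) that the generated etcd certificate carries the expected SANs before distributing it:

# the Subject Alternative Name extension should list 127.0.0.1 and both master IPs
openssl x509 -noout -text -in etcd.pem | grep -A1 'Subject Alternative Name'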
#Distribute the certificate and private key
[root@master01 work]# source /root/environment.sh
[root@master01 work]#  for master_ip in ${MASTER_IPS[@]};   do     echo ">>> $master_ip";     ssh root@$master_ip "mkdir -p /etc/etcd/cert";     scp etcd*.pem root@$master_ip:/etc/etcd/cert/;  done
#Create the etcd systemd unit template
[root@master01 work]#  source /root/environment.sh
[root@master01 work]# cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=$ETCD_DATA_DIR
ExecStart=/opt/k8s/bin/etcd \\
  --enable-v2=true \\
  --data-dir=$ETCD_DATA_DIR \\
  --wal-dir=$ETCD_WAL_DIR \\
  --name=##MASTER_NAME## \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##MASTER_IP##:2380 \\
  --initial-advertise-peer-urls=https://##MASTER_IP##:2380 \\
  --listen-client-urls=https://##MASTER_IP##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##MASTER_IP##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=$ETCD_NODES \\
  --initial-cluster-state=new \\
  --auto-compaction-mode=periodic \\
  --auto-compaction-retention=1 \\
  --max-request-bytes=33554432 \\
  --quota-backend-bytes=6442450944 \\
  --heartbeat-interval=250 \\
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
#Explanation:
WorkingDirectory, --data-dir: the working and data directories are set to $ETCD_DATA_DIR; this directory must be created before the service starts;
--wal-dir: the WAL directory; for better performance use an SSD or a different disk from --data-dir;
--name: the node name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
--cert-file, --key-file: certificate and private key used for etcd server and client communication;
--trusted-ca-file: the CA certificate that signed the client certificates, used to verify them;
--peer-cert-file, --peer-key-file: certificate and private key used for etcd peer communication;
--peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify them.
#Substitute each master's name and IP into the systemd template
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for (( i=0; i < 2; i++ ))
   do
     sed -e "s/##MASTER_NAME##/${MASTER_NAMES[i]}/" -e "s/##MASTER_IP##/${MASTER_IPS[i]}/" etcd.service.template > etcd-${MASTER_IPS[i]}.service
   done
[root@master01 work]# ll
总用量 11144
-rw-r--r-- 1 root    root       388 8月 6 10:12 ca-config.json
-rw-r--r-- 1 root    root      1005 8月 6 10:15 ca.csr
-rw-r--r-- 1 root    root       310 8月 6 10:13 ca-csr.json
-rw------- 1 root    root      1679 8月 6 10:15 ca-key.pem
-rw-r--r-- 1 root    root      1367 8月 6 10:15 ca.pem
-rw-r--r-- 1 root    root       567 8月 6 10:12 config.json
-rw-r--r-- 1 root    root       287 8月 6 10:12 csr.json
-rw-r--r-- 1 root    root      1383 8月 6 10:26 etcd-192.168.100.202.service  #the service files for the two master nodes have been generated
-rw-r--r-- 1 root    root      1383 8月 6 10:26 etcd-192.168.100.203.service
-rw-r--r-- 1 root    root      1058 8月 6 10:23 etcd.csr
-rw-r--r-- 1 root    root       354 8月 6 10:21 etcd-csr.json
-rw------- 1 root    root      1679 8月 6 10:23 etcd-key.pem
-rw-r--r-- 1 root    root      1436 8月 6 10:23 etcd.pem
-rw-r--r-- 1 root    root      1382 8月 6 10:25 etcd.service.template
drwxr-xr-x 3 6810230 users      123 10月11 2018 etcd-v3.3.10-linux-amd64
-rw-r--r-- 1 root    root  11353259 3月25 2020 etcd-v3.3.10-linux-amd64.tar.gz
#Distribute the etcd systemd unit files
[root@master01 work]#  source /root/environment.sh
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
   do
     echo ">>> $master_ip"
     scp etcd-$master_ip.service root@$master_ip:/etc/systemd/system/etcd.service
   done
#Start etcd

[root@master01 work]# source /root/environment.sh
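
A minimal sketch of starting etcd on both masters and checking cluster health, assuming the directories and endpoints defined in environment.sh; these commands are an illustration, not the original author's exact steps:

for master_ip in ${MASTER_IPS[@]}; do
  echo ">>> $master_ip"
  # the data and WAL directories must exist before the service starts
  ssh root@$master_ip "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
  # start in the background: the first member blocks until the second joins and quorum forms
  ssh root@$master_ip "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd" &
done
wait

# check cluster health from master01 (etcd 3.3 exposes the v3 etcdctl API via ETCDCTL_API=3)
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  endpoint health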
