Lab: Deploying K8S (v1.20) from Binaries

Posted by kiroct


Environment:
```
k8s cluster master01: 192.168.206.3    kube-apiserver kube-controller-manager kube-scheduler etcd
k8s cluster master02: 192.168.206.4

k8s cluster node01: 192.168.206.5    kubelet kube-proxy docker
k8s cluster node02: 192.168.206.6

etcd cluster member 1: 192.168.206.3    etcd
etcd cluster member 2: 192.168.206.5
etcd cluster member 3: 192.168.206.6

Load balancer nginx+keepalived01 (master): 192.168.206.14
Load balancer nginx+keepalived02 (backup): 192.168.206.15
```


**First, the OS initialization**
(Run this on all nodes!)
```shell
# Disable the firewall and flush iptables
systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

# Disable SELinux
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Set the hostnames according to the plan
hostnamectl set-hostname master01
hostnamectl set-hostname node01
hostnamectl set-hostname node02
su -

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.206.3 master01
192.168.206.5 node01
192.168.206.6 node02
EOF

# Tune kernel parameters:
# enable bridge mode (pass bridged traffic to the iptables chains) and disable IPv6

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF

sysctl --system

# Synchronize the clock
yum install ntpdate -y
ntpdate time.windows.com
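
# Optional sanity checks (not in the original steps): confirm the settings took effect
getenforce      # prints Permissive now, Disabled after a reboot
free -h         # the Swap line should show 0B
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should print 1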

```

First, the firewall and SELinux are disabled.

Next, the swap partition is turned off and the hosts entries are added.

Then the kernel parameters are tuned: bridge mode on, IPv6 off.

Last comes time synchronization.

Next, deploy the etcd cluster:
```shell
(Run on the master01 node!)

# Prepare the cfssl certificate-generation tools (downloaded in advance here; just drag them onto the VM)
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo

chmod +x /usr/local/bin/cfssl*

# Generate the etcd certificates
mkdir /opt/k8s
cd /opt/k8s/

# Upload etcd-cert.sh and etcd.sh into /opt/k8s/
chmod +x etcd-cert.sh etcd.sh

# Create the directory for the CA certificate, the etcd server certificate, and the private keys
# (edit the IP addresses in etcd-cert.sh first, then run the script!)
mkdir /opt/k8s/etcd-cert
mv etcd-cert.sh etcd-cert/
cd /opt/k8s/etcd-cert/
./etcd-cert.sh
......
ls    # check: on success you will see the following files
ca-config.json  ca-csr.json  ca.pem  server.csr  server-key.pem
ca.csr  ca-key.pem  etcd-cert.sh  server-csr.json  server.pem
......

# Upload etcd-v3.4.9-linux-amd64.tar.gz into /opt/k8s, then start the etcd service
cd /opt/k8s/
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz

mkdir -p /opt/etcd/{cfg,bin,ssl}

cd /opt/k8s/etcd-v3.4.9-linux-amd64/
mv etcd etcdctl /opt/etcd/bin/
cp /opt/k8s/etcd-cert/*.pem /opt/etcd/ssl/

# Start the etcd cluster on the master; the script appears to hang while it waits for the other nodes to join
cd /opt/k8s/
./etcd.sh etcd01 192.168.206.3 etcd02=https://192.168.206.5:2380,etcd03=https://192.168.206.6:2380

ps -ef | grep etcd

scp -r /opt/etcd/ root@192.168.206.5:/opt/
scp -r /opt/etcd/ root@192.168.206.6:/opt/

scp /usr/lib/systemd/system/etcd.service root@192.168.206.5:/usr/lib/systemd/system/

scp /usr/lib/systemd/system/etcd.service root@192.168.206.6:/usr/lib/systemd/system/

// On the node01 node
vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02" # modified
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.206.5:2380" # modified
ETCD_LISTEN_CLIENT_URLS="https://192.168.206.5:2379" # modified

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.206.5:2380" # modified
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.206.5:2379" # modified
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.206.3:2380,etcd02=https://192.168.206.5:2380,etcd03=https://192.168.206.6:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

systemctl start etcd
systemctl enable etcd
systemctl status etcd

// On the node02 node
vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03" # modified
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.206.6:2380" # modified
ETCD_LISTEN_CLIENT_URLS="https://192.168.206.6:2379" # modified

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.206.6:2380" # modified
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.206.6:2379" # modified
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.206.3:2380,etcd02=https://192.168.206.5:2380,etcd03=https://192.168.206.6:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

systemctl start etcd
systemctl enable etcd
systemctl status etcd

# Check the etcd cluster health
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.206.3:2379,https://192.168.206.5:2379,https://192.168.206.6:2379" endpoint health --write-out=table

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.206.3:2379,https://192.168.206.5:2379,https://192.168.206.6:2379" --write-out=table member list
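
# Optional smoke test (not in the original steps): write a key and read it back to confirm the cluster has quorum
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.206.3:2379" put /test/smoke "ok"
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.206.3:2379" get /test/smoke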


```

Prepare the cfssl certificate-generation tools and set their permissions:
![1.png](https://s2.51cto.com/images/20220322/1647961417202424.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
Generate the CA certificates. Remember to edit the IP addresses in etcd-cert.sh before running the script!
![1.1.png](https://s2.51cto.com/images/20220322/1647961442769752.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![1.4.png](https://s2.51cto.com/images/20220322/1647961477888414.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![1.41.png](https://s2.51cto.com/images/20220322/1647961482218268.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![1.5.png](https://s2.51cto.com/images/20220322/1647961487973477.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![1.51.png](https://s2.51cto.com/images/20220322/1647961491309404.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)

After starting and configuring the etcd service, remember to copy the files to the two node machines:
![1.6.png](https://s2.51cto.com/images/20220322/1647961524264477.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![1.61.png](https://s2.51cto.com/images/20220322/1647961528734977.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![1.62.png](https://s2.51cto.com/images/20220322/1647961533682827.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![1.63.png](https://s2.51cto.com/images/20220322/1647961548442430.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![1.64.png](https://s2.51cto.com/images/20220322/1647961553252341.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)

Changes for etcd member 2 (node01): vim /opt/etcd/cfg/etcd (remember to start the etcd service on the node)
![2.png](https://s2.51cto.com/images/20220322/1647961631337098.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![2.1.png](https://s2.51cto.com/images/20220322/1647961633695815.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
Changes for etcd member 3 (node02): vim /opt/etcd/cfg/etcd (remember to start the etcd service on the node)
![3.png](https://s2.51cto.com/images/20220322/1647961667226788.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![3.1.png](https://s2.51cto.com/images/20220322/1647961669593070.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
Final step: start the etcd service. Make sure all the nodes are up; bring up the master last!
![1.65.png](https://s2.51cto.com/images/20220322/1647961558602847.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
Verification: use these commands to check whether the nodes are up
```shell
1.
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.206.3:2379,https://192.168.206.5:2379,https://192.168.206.6:2379" endpoint health --write-out=table

2.
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.206.3:2379,https://192.168.206.5:2379,https://192.168.206.6:2379" --write-out=table member list



```

That completes the etcd part!

**Deploy the Docker engine**
```shell
// Deploy the Docker engine on all node machines
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io

systemctl start docker.service
systemctl enable docker.service
```
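
A quick check (not in the original steps) that the Docker daemon is actually up before moving on:
```shell
docker version                          # both the client and server sections should print
docker info | grep -i 'cgroup driver'   # shows the cgroup driver the daemon is using
```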

![1.png](https://s2.51cto.com/images/20220323/1648042697698116.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![2.png](https://s2.51cto.com/images/20220323/1648042700581933.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)

**Deploy the Master components**
```shell
// On the master01 node
# Upload master.zip and k8s-cert.sh into /opt/k8s, then unpack master.zip
cd /opt/k8s/
unzip master.zip
chmod +x *.sh

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

# Create the directory for generating the CA certificate and the component certificates and private keys
mkdir /opt/k8s/k8s-cert
mv /opt/k8s/k8s-cert.sh /opt/k8s/k8s-cert
cd /opt/k8s/k8s-cert/
./k8s-cert.sh

ls *pem
...........................
admin-key.pem  apiserver-key.pem  ca-key.pem  kube-proxy-key.pem  
admin.pem      apiserver.pem      ca.pem      kube-proxy.pem
...........................
cp ca*pem apiserver*pem /opt/kubernetes/ssl/

# Upload kubernetes-server-linux-amd64.tar.gz into /opt/k8s/ and unpack it
cd /opt/k8s/
tar zxvf kubernetes-server-linux-amd64.tar.gz

cd /opt/k8s/kubernetes/server/bin
cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
ln -s /opt/kubernetes/bin/* /usr/local/bin/

# Create the bootstrap token auth file. kube-apiserver loads it at startup, which effectively creates this user inside the cluster; RBAC can then grant it permissions
cd /opt/k8s/
vim token.sh
#!/bin/bash
# Take the first 16 bytes of /dev/urandom, print them as hex, and strip the spaces
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# Generate token.csv in the format: token,username,UID,user group
cat > /opt/kubernetes/cfg/token.csv <<EOF
$BOOTSTRAP_TOKEN,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

chmod +x token.sh
./token.sh

cat /opt/kubernetes/cfg/token.csv

cd /opt/k8s/
./apiserver.sh 192.168.206.3 https://192.168.206.3:2379,https://192.168.206.5:2379,https://192.168.206.6:2379

ps aux | grep kube-apiserver

netstat -natp | grep 6443   # the secure port 6443 accepts HTTPS requests authenticated by token file or client certificate

cd /opt/k8s/

# Start the scheduler service
./scheduler.sh
ps aux | grep kube-scheduler

# Start the controller-manager service
./controller-manager.sh
ps aux | grep kube-controller-manager

# Generate the certificates kubectl uses to connect to the cluster
./admin.sh

kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous

# Check the current cluster component status with kubectl
kubectl get cs
............................................
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
..............................................
# Check the version information
kubectl version
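
# Optional check (not in the original steps): probe the secure port directly with the admin certs generated by admin.sh
# (assumes the file names shown by `ls *pem` above; /healthz should answer "ok")
curl --cacert /opt/k8s/k8s-cert/ca.pem --cert /opt/k8s/k8s-cert/admin.pem --key /opt/k8s/k8s-cert/admin-key.pem https://192.168.206.3:6443/healthz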



```

**Deploy the Worker Node components**
```shell
// On all node machines
# Create the kubernetes working directory
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

# Upload node.zip into /opt and unpack it to get kubelet.sh and proxy.sh
cd /opt/
unzip node.zip
chmod +x kubelet.sh proxy.sh

// On the master01 node
# Copy kubelet and kube-proxy to the node machines
cd /opt/k8s/kubernetes/server/bin
scp kubelet kube-proxy root@192.168.206.5:/opt/kubernetes/bin/
scp kubelet kube-proxy root@192.168.206.6:/opt/kubernetes/bin/

# Upload kubeconfig.sh into /opt/k8s/kubeconfig and generate the kubeconfig files
mkdir /opt/k8s/kubeconfig

cd /opt/k8s/kubeconfig
chmod +x kubeconfig.sh
./kubeconfig.sh 192.168.206.3 /opt/k8s/k8s-cert/
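
# For reference, a sketch of what kubeconfig.sh typically does (an assumption, not the script's verbatim contents):
# kubectl config set-cluster kubernetes --certificate-authority=/opt/k8s/k8s-cert/ca.pem \
#     --embed-certs=true --server=https://192.168.206.3:6443 --kubeconfig=bootstrap.kubeconfig
# kubectl config set-credentials kubelet-bootstrap --token=<token from token.csv> --kubeconfig=bootstrap.kubeconfig
# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# (kube-proxy.kubeconfig is built the same way, but with the kube-proxy client certificate instead of the token)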

scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.206.5:/opt/kubernetes/cfg/

scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.206.6:/opt/kubernetes/cfg/

# RBAC authorization so the kubelet-bootstrap user may submit CSR requests
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

// On the node01 node
# Start the kubelet service
cd /opt/
./kubelet.sh 192.168.206.5
ps aux | grep kubelet

// On the master01 node, approve the CSR request
# kubelet on node01 has submitted a CSR; Pending means it is waiting for the cluster to sign the node's certificate
kubectl get csr
......
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr--JhanQS6uNG1jeghPr5k-VsRYhKie7enJRd5pBivMXg   12s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
......
# Approve the CSR request
kubectl certificate approve node-csr--JhanQS6uNG1jeghPr5k-VsRYhKie7enJRd5pBivMXg

# Approved,Issued means the CSR was approved and the certificate has been issued
kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-duiobEzQ0R93HsULoS9NT9JaQylMmid_nBF3Ei3NtFE   2m5s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

# Check the nodes; because no network plugin is deployed yet, the node is NotReady
kubectl get node
NAME            STATUS     ROLES    AGE    VERSION
192.168.206.5   NotReady   <none>   108s   v1.20.11

// On the node01 node
# Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
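
# Optional check (not in the original steps): confirm the ip_vs modules are loaded
lsmod | grep ip_vs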

# Start the kube-proxy service
cd /opt/
./proxy.sh 192.168.206.5
ps aux | grep kube-proxy
```
![1.png](https://s2.51cto.com/images/20220323/1648047841940120.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![1.1.png](https://s2.51cto.com/images/20220323/1648047844341262.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![1.2.png](https://s2.51cto.com/images/20220323/1648047846483033.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)

![1.3.png](https://s2.51cto.com/images/20220323/1648047848759512.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![1.4.png](https://s2.51cto.com/images/20220323/1648047851882276.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![1.5.png](https://s2.51cto.com/images/20220323/1648047853454799.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![1.51.png](https://s2.51cto.com/images/20220323/1648047855156106.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![1.6.png](https://s2.51cto.com/images/20220323/1648047857776671.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)

![1.png](https://s2.51cto.com/images/20220325/1648172878161291.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![1.1.png](https://s2.51cto.com/images/20220325/1648172882871083.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)

![1.2.png](https://s2.51cto.com/images/20220325/1648172884417102.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![2.png](https://s2.51cto.com/images/20220325/1648172887354629.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![3.1.png](https://s2.51cto.com/images/20220325/1648172889588040.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![3.2.png](https://s2.51cto.com/images/20220325/1648172891430865.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![4.png](https://s2.51cto.com/images/20220325/1648172894391452.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)
![5.png](https://s2.51cto.com/images/20220325/1648172896994889.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)

**Deploy the network components**
```shell
---------- Deploy flannel ----------
// On the node01 node
# Upload cni-plugins-linux-amd64-v0.8.6.tgz and flannel.tar into /opt
cd /opt/
docker load -i flannel.tar

mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

// On the master01 node
# Upload kube-flannel.yml into /opt/k8s and deploy the CNI network
cd /opt/k8s
kubectl apply -f kube-flannel.yml 

kubectl get pods -n kube-system
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-hjtc7   1/1     Running   0          7s

kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
192.168.206.5   Ready    <none>   81m   v1.20.11

---------- Deploy Calico ----------
// On the master01 node
# Upload calico.yaml into /opt/k8s and deploy the CNI network
cd /opt/k8s
vim calico.yaml
# Edit the Pod network (CALICO_IPV4POOL_CIDR) so it matches the cluster-cidr set in the kube-controller-manager config earlier
    - name: CALICO_IPV4POOL_CIDR
      value: "192.168.0.0/16"

kubectl apply -f calico.yaml

kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-659bd7879c-4h8vk   1/1     Running   0          58s
calico-node-nsm6b                          1/1     Running   0          58s
calico-node-tdt8v                          1/1     Running   0          58s

# Once all the Calico pods are Running, the nodes become Ready as well
kubectl get nodes
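
# Optional pod-network smoke test (not in the original steps; the pod name test1 is illustrative)
kubectl run test1 --image=busybox --restart=Never -- sleep 3600
kubectl get pods -o wide    # note the pod IP, then ping it from a node to verify cross-host networking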

---------- Deploying node02 ----------
// On the node01 node
cd /opt/
scp kubelet.sh proxy.sh root@192.168.206.6:/opt/
scp -r /opt/cni root@192.168.206.6:/opt/

// On the node02 node
# Start the kubelet service
cd /opt/
chmod +x kubelet.sh
./kubelet.sh 192.168.206.6

// On the master01 node
kubectl get csr
NAME                                                   AGE  SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-yAhGFjklDLeGNjlA4LRsMCa17Yy0qu73F4eWs8vIMcc   10s  kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-duiobEzQ0R93HsULoS9NT9JaQylMmid_nBF3Ei3NtFE   85m  kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

# Approve the CSR request
kubectl certificate approve node-csr-yAhGFjklDLeGNjlA4LRsMCa17Yy0qu73F4eWs8vIMcc

kubectl get csr
NAME                                                   AGE  SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-BbqEh6LvhD4R6YdDUeEPthkb6T_CJDcpVsmdvnh81y0   23s  kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-duiobEzQ0R93HsULoS9NT9JaQylMmid_nBF3Ei3NtFE   85m  kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

// On node02
# Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done

# Start kube-proxy via the proxy.sh script
cd /opt/
chmod +x proxy.sh
./proxy.sh 192.168.206.6

# Check the node status in the cluster
kubectl get nodes
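
# Expected result (illustrative values): both nodes are Ready
# NAME            STATUS   ROLES    AGE   VERSION
# 192.168.206.5   Ready    <none>   85m   v1.20.11
# 192.168.206.6   Ready    <none>   3m    v1.20.11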

```

As shown in the screenshots, all the nodes are in the Ready state.

**Deploy the master02 node**
```shell
// Copy the certificate files, the master components' config files, and the service unit files from master01 to master02
scp -r /opt/etcd/ root@192.168.206.4:/opt/
scp -r /opt/kubernetes/ root@192.168.206.4:/opt
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.206.4:/usr/lib/systemd/system/

cd
ls -A
scp -r .ssh/ 192.168.206.4:`pwd`
scp -r .kube/ 192.168.206.4:`pwd`

// Edit the IPs in the kube-apiserver config file
vim /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=https://192.168.206.3:2379,https://192.168.206.5:2379,https://192.168.206.6:2379 \\
--bind-address=192.168.206.4 \\ # modified
--secure-port=6443 \\
--advertise-address=192.168.206.4 \\ # modified
......

// On master02, start each service and enable it at boot
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service

// Check the node status
ln -s /opt/kubernetes/bin/* /usr/local/bin/
kubectl get nodes
kubectl get nodes -o wide   # -o wide prints extra columns; for Pods it also shows the Node each Pod runs on
// The node status seen on master02 right now comes only from etcd; the nodes have not actually established communication with master02 yet, so a VIP is needed to tie the nodes to both masters
```

![1.png](https://s2.51cto.com/images/20220324/1648133480961019.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)![1.1.png](https://s2.51cto.com/images/20220324/1648133484989061.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)![1.2.png](https://s2.51cto.com/images/20220324/1648133488941878.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)![1.3.png](https://s2.51cto.com/images/20220324/1648133490546709.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)![1.4.png](https://s2.51cto.com/images/20220324/1648133492632816.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)![1.5.png](https://s2.51cto.com/images/20220324/1648133495636140.png?x-oss-process=image/watermark,size_14,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_20,type_ZmFuZ3poZW5naGVpdGk=)

**Deploy load balancing**
```shell
// Configure a two-machine hot-standby load-balancer cluster (nginx for load balancing, keepalived for failover)
##### On the lb01 and lb02 nodes #####
// Configure nginx's official online yum repo
cat > /etc/yum.repos.d/nginx.repo << 'EOF'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
EOF
# note: 'EOF' is quoted so that $basearch reaches the repo file literally instead of being expanded by the shell

yum install nginx -y

// Edit the nginx config file: add a layer-4 (TCP) reverse proxy that load-balances across the two k8s master IPs on port 6443
vim /etc/nginx/nginx.conf
events {
    worker_connections  1024;
}

# add this block
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.206.3:6443;
        server 192.168.206.4:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
......

// Check the config file syntax
nginx -t

// Start nginx and confirm it is listening on port 6443
systemctl start nginx
systemctl enable nginx
netstat -natp | grep nginx 

// Deploy the keepalived service
yum install keepalived -y

// Edit the keepalived config file
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   # email addresses to notify
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # sender address for notifications
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER   # NGINX_MASTER on lb01, NGINX_BACKUP on lb02
}

# a script to run periodically
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"  # path of the script that checks whether nginx is alive
}

vrrp_instance VI_1 {
    state MASTER            # MASTER on lb01, BACKUP on lb02
    interface ens33         # NIC name: ens33
    virtual_router_id 51    # VRID; must be the same on both nodes
    priority 100            # 100 on lb01, 90 on lb02
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.206.100/24   # the VIP
    }
    track_script {
        check_nginx         # the script defined in vrrp_script above
    }
}

// Create the nginx health-check script
vim /etc/nginx/check_nginx.sh
#!/bin/bash
# egrep -cv "grep|$$" filters out the grep process itself and the current shell's PID
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

chmod +x /etc/nginx/check_nginx.sh

// Start keepalived (be sure nginx is running first, then start keepalived)
systemctl start keepalived
systemctl enable keepalived
ip a                # check that the VIP has been created
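
# Optional failover test (not in the original steps): stop nginx on lb01 and watch the VIP move to lb02
systemctl stop nginx                                   # on lb01; check_nginx.sh will then stop keepalived
ip a                                                   # on lb02: the VIP 192.168.206.100 should now appear
systemctl start nginx && systemctl start keepalived    # restore lb01 afterwards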

// On the node machines, point bootstrap.kubeconfig, kubelet.kubeconfig, and kube-proxy.kubeconfig at the VIP
cd /opt/kubernetes/cfg/
vim bootstrap.kubeconfig
server: https://192.168.206.100:6443

vim kubelet.kubeconfig
server: https://192.168.206.100:6443

vim kube-proxy.kubeconfig
server: https://192.168.206.100:6443
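
# Equivalent one-liner (an alternative to hand-editing; assumes the three files currently point at master01)
sed -i 's#https://192.168.206.3:6443#https://192.168.206.100:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig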

// Restart the kubelet and kube-proxy services
systemctl restart kubelet.service 
systemctl restart kube-proxy.service

// On lb01, check the connections between nginx and the node / master machines
netstat -natp | grep nginx
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      44904/nginx: master 
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      44904/nginx: master 
tcp        0      0 192.168.80.100:6443     192.168.80.12:46954     ESTABLISHED 44905/nginx: worker 
tcp        0      0 192.168.80.14:45074     192.168.80.10:6443      ESTABLISHED 44905/nginx: worker 
tcp        0      0 192.168.80.14:53308     192.168.80.20:6443      ESTABLISHED 44905/nginx: worker 
tcp        0      0 192.168.80.14:53316     192.168.80.20:6443      ESTABLISHED 44905/nginx: worker 
tcp        0      0 192.168.80.100:6443     192.168.80.11:48784     ESTABLISHED 44905/nginx: worker 
tcp        0      0 192.168.80.14:45070     192.168.80.10:6443      ESTABLISHED 44905/nginx: worker 
tcp        0      0 192.168.80.100:6443     192.168.80.11:48794     ESTABLISHED 44905/nginx: worker 
tcp        0      0 192.168.80.100:6443     192.168.80.12:46968     ESTABLISHED 44905/nginx: worker 

##### On the master01 node #####
// Test creating a pod
kubectl run nginx --image=nginx

// Check the pod's status
kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-nf9sk   0/1     ContainerCreating   0          33s   # still being created

kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-nf9sk   1/1     Running   0          80s            # created and running

kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
nginx-dbddb74b8-26r9l   1/1     Running   0          10m   172.17.36.2   192.168.80.15   <none>
// READY 1/1 means this pod contains 1 container and it is ready

// From a node in the matching network segment, the pod can be reached directly with a browser or curl
curl 172.17.36.2

// Now try to view the nginx logs from master01; viewing is forbidden for lack of permission
kubectl logs nginx-dbddb74b8-nf9sk
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( nginx-dbddb74b8-nf9sk)

// On master01, grant the cluster-admin role to the user system:anonymous (acceptable in a lab, but far too permissive for production)
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created

// View the nginx logs again
kubectl logs nginx-dbddb74b8-nf9sk
```






