Installing a Kubernetes v1.22.8 Cluster from Binaries

Posted by mingxin95

1. Environment Planning

1.1 Server Environment

K8S cluster role | IP | Hostname | Installed components
Control plane | 192.168.10.162 | k8s-master01 | etcd, docker, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kubelet, flanneld, keepalived, nginx
Control plane | 192.168.10.163 | k8s-master02 | etcd, docker, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kubelet, flanneld, keepalived, nginx
Worker node | 192.168.10.190 | k8s-node01 | etcd, docker, kubelet, kube-proxy, flanneld, coredns
Worker node | 192.168.10.191 | k8s-node02 | etcd, docker, kubelet, kube-proxy, flanneld, coredns
Load balancer | 192.168.10.88 | k8s-master-lb | keepalived virtual IP (VIP)

Given the limited resources of a single workstation, running four VMs at once may be too much,
so this K8s HA cluster is built in two phases: first deploy a single-master architecture (3 machines),
then scale it out to a multi-master architecture (4 or 6 machines), which also walks through the master scale-out procedure.
k8s-master02 is not deployed for now.
The keepalived and nginx components are likewise not needed in the non-HA architecture.

1.2 System Configuration

OS: CentOS Linux release 7.9.2009 (Core)
System user: root
Password: root
Spec: 2 GiB RAM / 2 vCPU / 20 GB disk
Network: VMware NAT mode
k8s version: v1.22.8
etcd version: v3.5.1
flanneld version: v0.17.0
docker version: 20.10.9
Host network: 192.168.10.0/24
Pod network: 10.88.0.0/16
Service network: 10.99.0.0/16

The host network, the K8s Service network and the Pod network must not overlap.
The VIP (virtual IP) must not collide with any IP already used on the LAN: ping it first and only use it if there is no reply. The VIP must be on the same subnet as the hosts.
On a public cloud the VIP is the cloud load balancer's address, e.g. the internal SLB address on Alibaba Cloud or the internal ELB address on Tencent Cloud.
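Before committing to the VIP, a quick sanity check can confirm it is unused (a minimal sketch; 192.168.10.88 is the VIP planned above):

# run from any host on the same subnet; the VIP is only usable if it does NOT answer
ping -c 2 -W 1 192.168.10.88 && echo "192.168.10.88 is already in use, pick another VIP" || echo "192.168.10.88 appears to be free"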

2. Environment Initialization

2.1 Configure the hosts file

cat >> /etc/hosts << EOF
192.168.10.162 k8s-master01
192.168.10.163 k8s-master02
192.168.10.88  k8s-master-lb # for a non-HA cluster, use master01's IP here
192.168.10.190 k8s-node01
192.168.10.191 k8s-node02

192.168.10.162 etcd-01
192.168.10.190 etcd-02
192.168.10.191 etcd-03
EOF

2.2 Set up passwordless SSH between hosts


# generate an ssh key pair; press Enter through the prompts and leave the passphrase empty
ssh-keygen -t rsa

# install the local ssh public key into the matching account on each remote host
ssh-copy-id -i .ssh/id_rsa.pub k8s-master01
ssh-copy-id -i .ssh/id_rsa.pub k8s-master02
ssh-copy-id -i .ssh/id_rsa.pub k8s-node01
ssh-copy-id -i .ssh/id_rsa.pub k8s-node02
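To confirm passwordless login works, a quick optional check:

# each command should print the remote hostname without asking for a password
for h in k8s-master01 k8s-master02 k8s-node01 k8s-node02; do ssh -o BatchMode=yes $h hostname; done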

2.3 System initialization

Disable SELinux (disabled by default on Alibaba Cloud ECS)

sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
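The sed above only takes effect after a reboot; to also disable SELinux for the current session (optional, assuming it is currently enforcing):

setenforce 0
getenforce   # should report Permissive now, and Disabled after a reboot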

Time synchronization

yum install ntpdate -y
ntpdate time1.aliyun.com
# turn the time sync into a cron job
crontab -e
# add the following entry (runs once an hour)
    0 */1 * * * /usr/sbin/ntpdate   time1.aliyun.com
# restart crond
systemctl restart crond

Disable swap (disabled by default on Alibaba Cloud ECS)

# disable for the current session
swapoff -a
# disable permanently; note: on a cloned VM, also remove the UUID line
mv /etc/fstab /etc/fstab.bak
cat /etc/fstab.bak | grep -v swap >> /etc/fstab
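To verify swap is fully off, an optional check:

swapon -s               # no entries means swap is disabled
free -h | grep -i swap  # the Swap line should show 0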

Firewall settings

systemctl disable firewalld
systemctl stop firewalld

Install base packages

yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel  python-devel epel-release openssh-server socat  ipvsadm conntrack ntpdate telnet rsync

Install iptables

yum -y install iptables-services
systemctl enable iptables
systemctl start iptables

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
iptables -P FORWARD ACCEPT
service iptables save

Tune kernel parameters

# 1. load the br_netfilter module
modprobe br_netfilter
# 2. verify the module is loaded
lsmod |grep br_netfilter
# 3. enable bridge traffic filtering and IP forwarding
cat >> /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
# 4. apply the new kernel parameters
sysctl -p /etc/sysctl.d/k8s.conf

Notes

Question 1: what does sysctl do?
    # configure kernel parameters at runtime
    -p   load settings from the given file; if no file is given, /etc/sysctl.conf is used
Question 2: why run modprobe br_netfilter?

    After adding the following three lines to /etc/sysctl.d/k8s.conf:
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1

    # sysctl -p /etc/sysctl.d/k8s.conf may fail with:
    sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
    sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

    # fix:
    modprobe br_netfilter
Question 3: why enable the net.bridge.bridge-nf-call-iptables parameter?
    # after installing docker on CentOS, docker info may warn:
    WARNING: bridge-nf-call-iptables is disabled
    WARNING: bridge-nf-call-ip6tables is disabled

    # fix:
    vim  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
Question 4: why set net.ipv4.ip_forward = 1?
    If kubeadm init reports the error below, ip_forward is not enabled and must be turned on:
    /proc/sys/net/ipv4/ip_forward contents are not set to 1
    # net.ipv4.ip_forward controls packet forwarding:
    1) For security, Linux disables packet forwarding by default. Forwarding means that when a host has more than one NIC and one of them receives a packet whose destination IP belongs to another network,
    the packet is handed to another NIC and sent on according to the routing table; this is normally a router's job.
    2) To let Linux forward packets like a router, set the kernel parameter net.ipv4.ip_forward: a value of 0 disables IP forwarding, a value of 1 enables it.
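modprobe only loads br_netfilter until the next reboot; to load it automatically at every boot, a small optional sketch using the same modules-load.d mechanism as the ipvs setup below:

cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF
systemctl restart systemd-modules-load.service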

Configure the Alibaba Cloud yum repo

Not yet verified

# back up the original repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# download the new CentOS-Base.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# rebuild the cache
yum clean all && yum makecache

Enable ipvs

Without ipvs, kube-proxy falls back to iptables for packet forwarding, which is less efficient, so enabling ipvs is recommended.
Not yet verified

# install ipvsadm and conntrack tooling
yum install ipvsadm ipset sysstat conntrack libseccomp -y

cat >> /etc/modules-load.d/ipvs.conf <<EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
    ip_vs_sh               16384  0
    ip_vs_wrr              16384  0
    ip_vs_rr               16384  0
    ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
    nf_conntrack          176128  1 ip_vs
    nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
    nf_defrag_ipv4         16384  1 nf_conntrack
    libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

Install docker

sudo yum update -y
sudo yum remove docker  docker-common docker-selinux docker-engine
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
#sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
#yum-config-manager --add-repo http://download.docker.com/linux/centos/docker-ce.repo # (upstream repo)
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo # (Alibaba Cloud mirror)
# yum list docker-ce --showduplicates | sort -r
# https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz
sudo yum -y install docker-ce-20.10.9-3.el7

# kubelet's cgroup driver defaults to systemd; if you keep that default, docker's cgroup driver (cgroupfs by default) must be changed to systemd as well. The two must match.
mkdir /etc/docker/ -p
touch /etc/docker/daemon.json
#"exec-opts": ["native.cgroupdriver=systemd"]
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
  "log-driver": "json-file",
  "storage-driver": "overlay2"
}
EOF

systemctl daemon-reload && systemctl restart docker && systemctl enable docker
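A quick optional check that docker picked up the configuration:

docker info | grep -E "Cgroup Driver|Storage Driver"
# Storage Driver should be overlay2; Cgroup Driver stays cgroupfs here unless
# "exec-opts": ["native.cgroupdriver=systemd"] is added to daemon.json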

Create the directory layout

mkdir -pv /data/apps/etcd/{ssl,bin,etc,data} && cd /data/apps/etcd/ssl
mkdir -pv /data/apps/kubernetes/{pki,log,etc,certs}
mkdir -pv /data/apps/kubernetes/log/{apiserver,controller-manager,scheduler,kubelet,kube-proxy}

3. Deploying the etcd Cluster

etcd is a distributed key-value store and Kubernetes uses it for all of its data,
so an etcd database has to be prepared first. To avoid a single point of failure,
etcd should run as a cluster: the 3 members used here tolerate the loss of 1 machine,
and a 5-member cluster would tolerate the loss of 2.

3.1 Environment

Cluster layout

To save machines, etcd is co-located with the K8s nodes here.
It can also be deployed outside the k8s cluster, as long as the apiserver can reach it.

Node name | IP address
etcd-01   | 192.168.10.162
etcd-02   | 192.168.10.190
etcd-03   | 192.168.10.191

Prepare the certificate tooling

Run on one machine only (etcd-01).

# install the cfssl certificate tooling on etcd-01
mkdir /root/cfssl -p && cd /root/cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

3.2 Generate certificates

Preparation

mkdir -pv /root/etcd-ssl/ && cd /root/etcd-ssl/

Generate the CA certificate

expiry is the certificate lifetime (87600h = 10 years)

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF


# CA certificate signing request; the ST/L fields can be changed as needed
cat > etcd-ca-csr.json << EOF
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF

# etcd server certificate signing request; the ST/L fields can be changed as needed
# the hosts field must list the internal cluster IPs of every etcd node, without exception; adding a few spare IPs makes later scale-out easier
cat > etcd-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.10.162",
    "192.168.10.163",
    "192.168.10.164",
    "192.168.10.190",
    "192.168.10.191",
    "192.168.10.192",
    "192.168.10.193"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF
# generate the CA certificate
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca
# generate etcd.pem and etcd-key.pem
cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json  -profile=kubernetes  etcd-csr.json | cfssljson -bare etcd
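To confirm the generated certificate carries the expected SANs and lifetime, an optional check:

cfssl-certinfo -cert etcd.pem
# or, with openssl:
openssl x509 -in etcd.pem -noout -text | grep -A1 "Subject Alternative Name"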

Copy the certificates to the deployment directory
Run on all etcd cluster nodes.
The /data/apps/ directory must be created on the other etcd nodes in advance.

mkdir -pv /data/apps/etcd/{ssl,bin,etc,data}
cp etcd*.pem /data/apps/etcd/ssl
scp -r /data/apps/etcd 192.168.10.190:/data/apps/
scp -r /data/apps/etcd 192.168.10.191:/data/apps/

3.3 Install the etcd cluster

Download the etcd binaries

Download page: https://github.com/etcd-io/etcd/releases/

cd ~
wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz
tar zxf etcd-v3.5.1-linux-amd64.tar.gz
cp etcd-v3.5.1-linux-amd64/etcd* /data/apps/etcd/bin/
# copy to the other nodes
scp -r etcd-v3.5.1-linux-amd64/etcd* 192.168.10.190:/data/apps/etcd/bin/
scp -r etcd-v3.5.1-linux-amd64/etcd* 192.168.10.191:/data/apps/etcd/bin/

Create the etcd configuration file

# the etcd VMs here each have two NICs: one serves clients, the other carries cluster traffic
# the 0.0.0.0 placeholders below are replaced later with each node's own internal IP
cat > /data/apps/etcd/etc/etcd.conf << EOF
#[Member]
ETCD_NAME="ename"
ETCD_DATA_DIR="/data/apps/etcd/data/default.etcd"
# change this to the current server's IP
ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"
# change this to the current server's IP
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://0.0.0.0:2379"
#
#[Clustering]
# change this to the current server's IP
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://0.0.0.0:2380"
# change this to the current server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://0.0.0.0:2379"
ETCD_INITIAL_CLUSTER="etcd-01=https://192.168.10.162:2380,etcd-02=https://192.168.10.190:2380,etcd-03=https://192.168.10.191:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
# kube-apiserver uses the etcd v3 API while flannel still uses the v2 API;
# per the etcd v3.4 release notes, the v2 API is disabled by default since 3.4, so it is enabled here (equivalent to --enable-v2=true)
ETCD_ENABLE_V2="true"

#[Security]
ETCD_CERT_FILE="/data/apps/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/data/apps/etcd/ssl/etcd-key.pem"
ETCD_TRUSTED_CA_FILE="/data/apps/etcd/ssl/etcd-ca.pem"
ETCD_PEER_CERT_FILE="/data/apps/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/data/apps/etcd/ssl/etcd-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/data/apps/etcd/ssl/etcd-ca.pem"
#
#[Logging]
ETCD_DEBUG="false"
ETCD_LOG_OUTPUT="default"
EOF

Parameter notes

ETCD_NAME="etcd-01"  定义本服务器的etcd名称
etcd-01,etcd-02,etcd-03 分别为三台服务器上对应ETCD_NAME的值
ETCD_INITIAL_CLUSTER_TOKEN,ETCD_INITIAL_CLUSTER_STATE的值各个etcd节点相同

Copy to the other nodes

scp -r /data/apps/etcd/etc/etcd.conf 192.168.10.190:/data/apps/etcd/etc/
scp -r /data/apps/etcd/etc/etcd.conf 192.168.10.191:/data/apps/etcd/etc/

After copying, adjust the IP and member name on each node

# on 192.168.10.162
sed -i "s/0.0.0.0/192.168.10.162/g" /data/apps/etcd/etc/etcd.conf
sed -i "s/ename/etcd-01/g" /data/apps/etcd/etc/etcd.conf
# on 192.168.10.190
sed -i "s/0.0.0.0/192.168.10.190/g" /data/apps/etcd/etc/etcd.conf
sed -i "s/ename/etcd-02/g" /data/apps/etcd/etc/etcd.conf
# on 192.168.10.191
sed -i "s/0.0.0.0/192.168.10.191/g" /data/apps/etcd/etc/etcd.conf
sed -i "s/ename/etcd-03/g" /data/apps/etcd/etc/etcd.conf

Create etcd.service


cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
#
[Service]
Type=notify
EnvironmentFile=/data/apps/etcd/etc/etcd.conf
ExecStart=/data/apps/etcd/bin/etcd
# etcd 3.4+ automatically reads environment variables prefixed with ETCD_, so parameters already present
# in the EnvironmentFile must not be repeated as ExecStart flags; use one or the other, otherwise etcd fails to start
#--name=\$ETCD_NAME \\
#--data-dir=\$ETCD_DATA_DIR \\
#--listen-peer-urls=\$ETCD_LISTEN_PEER_URLS \\
#--listen-client-urls=\$ETCD_LISTEN_CLIENT_URLS \\
#--advertise-client-urls=\$ETCD_ADVERTISE_CLIENT_URLS \\
#--initial-advertise-peer-urls=\$ETCD_INITIAL_ADVERTISE_PEER_URLS \\
#--initial-cluster=\$ETCD_INITIAL_CLUSTER \\
#--initial-cluster-token=\$ETCD_INITIAL_CLUSTER_TOKEN \\
#--initial-cluster-state=\$ETCD_INITIAL_CLUSTER_STATE
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
#
[Install]
WantedBy=multi-user.target
EOF

Copy to the other nodes

scp -r /usr/lib/systemd/system/etcd.service 192.168.10.190:/usr/lib/systemd/system/
scp -r /usr/lib/systemd/system/etcd.service 192.168.10.191:/usr/lib/systemd/system/

Start the service

useradd -r etcd && chown etcd.etcd -R /data/apps/etcd
systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd && systemctl status etcd

Add etcdctl to the PATH

echo "PATH=$PATH:/data/apps/etcd/bin/" >> /etc/profile.d/etcd.sh
chmod +x /etc/profile.d/etcd.sh
source /etc/profile.d/etcd.sh

Check cluster health

    # etcdctl defaults to the v3 API
    etcdctl --cacert=/data/apps/etcd/ssl/etcd-ca.pem --cert=/data/apps/etcd/ssl/etcd.pem --key=/data/apps/etcd/ssl/etcd-key.pem --endpoints="https://192.168.10.162:2379,https://192.168.10.190:2379,https://192.168.10.191:2379" endpoint health --write-out=table

Result

+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://192.168.10.191:2379 |   true |   16.6952ms |       |
| https://192.168.10.190:2379 |   true | 16.693779ms |       |
| https://192.168.10.162:2379 |   true | 16.289445ms |       |
+-----------------------------+--------+-------------+-------+
cluster is degraded (reported when any one node is unhealthy)
cluster is healthy (reported when all etcd nodes are healthy)

List cluster members

etcdctl --cacert=/data/apps/etcd/ssl/etcd-ca.pem --cert=/data/apps/etcd/ssl/etcd.pem --key=/data/apps/etcd/ssl/etcd-key.pem --endpoints="https://192.168.10.162:2379,https://192.168.10.190:2379,https://192.168.10.191:2379" member list

# ETCDCTL_API=3 etcdctl --cacert=/data/apps/etcd/ssl/etcd-ca.pem --cert=/data/apps/etcd/ssl/etcd.pem --key=/data/apps/etcd/ssl/etcd-key.pem member list

Result

+------------------+---------+---------+-----------------------------+----------------------------------------------------+------------+
|        ID        | STATUS  |  NAME   |         PEER ADDRS          |                    CLIENT ADDRS                    | IS LEARNER |
+------------------+---------+---------+-----------------------------+----------------------------------------------------+------------+
| 4b6699de1466051a | started | etcd-03 | https://192.168.10.191:2380 | https://127.0.0.1:2379,https://192.168.10.191:2379 |      false |
| 7d643d2a75dfeb32 | started | etcd-02 | https://192.168.10.190:2380 | https://127.0.0.1:2379,https://192.168.10.190:2379 |      false |
| b135df4790d40e52 | started | etcd-01 | https://192.168.10.162:2380 | https://127.0.0.1:2379,https://192.168.10.162:2379 |      false |
+------------------+---------+---------+-----------------------------+----------------------------------------------------+------------+

Note: if the ETCDCTL_API environment variable is not set, the v3 API (ETCDCTL_API=3) is used by default.
The command-line flags differ between ETCDCTL_API=2 and ETCDCTL_API=3.
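As a final smoke test, a key can be written and read back through the v3 API (a minimal sketch reusing the certificate flags from above; /test/hello is just an example key):

ENDPOINTS="https://192.168.10.162:2379,https://192.168.10.190:2379,https://192.168.10.191:2379"
CERTS="--cacert=/data/apps/etcd/ssl/etcd-ca.pem --cert=/data/apps/etcd/ssl/etcd.pem --key=/data/apps/etcd/ssl/etcd-key.pem"
etcdctl $CERTS --endpoints="$ENDPOINTS" put /test/hello world   # prints OK
etcdctl $CERTS --endpoints="$ENDPOINTS" get /test/hello         # prints the key and value
etcdctl $CERTS --endpoints="$ENDPOINTS" del /test/hello         # clean up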

Errors that may appear after the cluster starts

the clock difference against peer 97feb1a73a325656 is too high
The node clocks are out of sync; run ntpdate time1.aliyun.com to synchronize them.
Also make sure the firewall and SELinux are disabled.

4. Installing the Kubernetes Components

4.1 Environment

Cluster server layout

The server roles, IPs and components are the same as in section 1.1. This phase still deploys the single-master layout: k8s-master02 is not deployed yet, and keepalived and nginx are only needed once the cluster is scaled out to multiple masters.

k8s network plan

 1. k8s version: v1.22.8
 2. Pod network: 10.88.0.0/16
 3. Service network: 10.99.0.0/16

4.2 Generate the cluster CA and component certificates

Run on the master machine (k8s-master01) only.

CA certificate

mkdir /root/k8s-ssl && cd /root/k8s-ssl
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

kube-apiserver certificate

Note: if the hosts field is not empty, it must list every IP or domain name authorized to use this certificate.
Since this certificate will be used by the whole kubernetes master cluster, the IPs of all master nodes must be listed,
together with the first IP of the service network (normally the first IP of the
service-cluster-ip-range passed to kube-apiserver, here 10.99.0.1).
The load balancer IP must be listed as well.

cat > kube-apiserver-csr.json  << EOF
{
  "CN": "kube-apiserver",
  "hosts": [
    "127.0.0.1",
    "192.168.10.162",
    "192.168.10.163",
    "192.168.10.164",
    "192.168.10.165",
    "192.168.10.88",
    "10.99.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local",
    "kubernetes.default.svc.lgh",
    "kubernetes.default.svc.lgh.work"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=/root/etcd-ssl/ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
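To double-check that every required address made it into the certificate's SANs, an optional check:

openssl x509 -in kube-apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"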

kube-controller-manager certificate

hosts may be left empty

cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "127.0.0.1",
    "192.168.10.162",
    "192.168.10.163",
    "192.168.10.164",
    "192.168.10.165",
    "192.168.10.88",
    "10.88.0.1",
    "10.99.0.1"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=/root/etcd-ssl/ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

kube-scheduler certificate

The hosts list contains the IPs of all kube-scheduler nodes, i.e. the master node IPs; a few spare IPs can be added.
CN is system:kube-scheduler and
O is system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.

cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.10.162",
    "192.168.10.163",
    "192.168.10.164",
    "192.168.10.165"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=/root/etcd-ssl/ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
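At this point the certificates generated so far can be listed, as an optional check:

ls /root/k8s-ssl/*.pem
# expect key pairs for ca, kube-apiserver, kube-controller-manager and kube-scheduler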
