Binary Installation of a Multi-Master Kubernetes Cluster v1.23.4 (High-Availability Architecture) -- to be continued
Posted by 风干工程师肉要不要
Binary deployment of a highly available Kubernetes cluster - v1.23.4
1. Deployment plan:
- Pod CIDR: 10.0.0.0/16
- Service CIDR: 10.255.0.0/16
- OS: CentOS 7.6
- Node sizing: 6 vCPU / 4 GB RAM / 100 GB disk
- Networking: static IP addresses
Hostname | IP | docker | calico | keepalived | nginx | Specs | Components |
---|---|---|---|---|---|---|---|
master1 | 192.168.225.138 | 20.10.13 | v0.11.0 | v1.3.5 | v1.18.0 | 6C4G | apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx |
master2 | 192.168.225.139 | 20.10.13 | v0.11.0 | v1.3.5 | v1.18.0 | 6C4G | apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx |
master3 | 192.168.225.140 | 20.10.13 | v0.11.0 | v1.3.5 | v1.18.0 | 6C4G | apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx |
work1 | 192.168.225.141 | 20.10.13 | v0.11.0 | / | / | 6C4G | kubelet, kube-proxy, docker, calico, coredns |
work2 | 192.168.225.142 | 20.10.13 | v0.11.0 | / | / | 6C4G | kubelet, kube-proxy, docker, calico, coredns |
work3 | 192.168.225.143 | 20.10.13 | v0.11.0 | / | / | 6C4G | kubelet, kube-proxy, docker, calico, coredns |
VIP | 192.168.225.150 | / | / | v1.3.5 | v1.18.0 | / | / |
client | 192.168.225.145 | / | / | / | / | 2C2G | kubectl |
2. High-availability architecture:
- Active/standby HA design:
Core component | HA mode | HA mechanism |
---|---|---|
apiserver | active/standby | keepalived + nginx |
controller-manager | active/standby | leader election |
scheduler | active/standby | leader election |
etcd | cluster | raft quorum (odd number of members) |
- apiserver: made highly available with nginx + keepalived; when the active node fails, keepalived moves the VIP to a healthy node.
- controller-manager: Kubernetes elects a leader internally (controlled by the --leader-elect option, default true), so only one controller-manager instance is active in the cluster at any moment.
- scheduler: Kubernetes elects a leader internally (controlled by the --leader-elect option, default true), so only one scheduler instance is active in the cluster at any moment.
- etcd: high availability comes from running etcd itself as a cluster with an odd number of members. As long as more than half of the members remain available, the cluster keeps working with almost no impact (a 3-node cluster tolerates the loss of one machine).
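Once the control plane is up (sections 6 and onward), an optional way to confirm the leader-election behaviour described above is to look at the Lease objects in kube-system; this assumes the kubeconfig from section 7 is already configured:
# In v1.23 leader election uses Lease objects by default; holderIdentity shows which master currently holds the lock.
kubectl -n kube-system get lease kube-controller-manager -o yaml | grep holderIdentity
kubectl -n kube-system get lease kube-scheduler -o yaml | grep holderIdentity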
3. When to use kubeadm vs. a binary install:
- kubeadm: kubeadm is the official open-source tool for standing up a Kubernetes cluster quickly, and it is currently the most convenient and commonly recommended approach. The kubeadm init and kubeadm join commands can create a cluster in minutes (a minimal sketch follows this list). With kubeadm, all control-plane components run as pods, so they can recover from failures automatically. Because kubeadm automates the deployment, it hides a lot of detail; if you do not understand the individual components well, problems can be hard to troubleshoot. kubeadm suits teams that deploy Kubernetes frequently or want a high degree of automation.
- Binary install: download the binary packages of each component from the official site and install them by hand. The manual process gives a much more complete understanding of how Kubernetes fits together.
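For reference only (this guide uses the binary method), a minimal kubeadm bootstrap looks roughly like the following; the pod CIDR is just an example, and the token and CA hash are placeholders that kubeadm init prints:
# On the first control-plane node (illustrative sketch, not part of this guide's procedure):
kubeadm init --kubernetes-version=v1.23.4 --pod-network-cidr=10.0.0.0/16
# On every other node, using the token and hash printed by kubeadm init:
kubeadm join <apiserver-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>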
4. Initialize the cluster:
4.1 Configure static IPs and hostnames according to the plan
4.2 Configure the hosts file:
- Run on every node in the cluster (masters and workers); add the following entries:
[root@master1 ~]# vim /etc/hosts
192.168.225.138 master1
192.168.225.139 master2
192.168.225.140 master3
192.168.225.141 work1
192.168.225.142 work2
192.168.225.143 work3
4.3 Configure passwordless SSH between hosts:
- Run on every node in the cluster (masters and workers):
# Generate an SSH key pair:
[root@master1 ~]# ssh-keygen -t rsa #press Enter through the prompts and leave the passphrase empty
# Install the local public key into the matching account on each remote host:
[root@master1 ~]# ssh-copy-id -i .ssh/id_rsa.pub master2
[root@master1 ~]# ssh-copy-id -i .ssh/id_rsa.pub master3
[root@master1 ~]# ssh-copy-id -i .ssh/id_rsa.pub work1
[root@master1 ~]# ssh-copy-id -i .ssh/id_rsa.pub work2
[root@master1 ~]# ssh-copy-id -i .ssh/id_rsa.pub work3
# The other nodes follow exactly the same steps as master1; only master1 is shown here.
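To cut down on repetition, the key distribution can also be wrapped in a small loop; this is a convenience sketch only (you are still prompted for each host's root password):
# Hypothetical helper: push the public key to every other node in one go.
for host in master2 master3 work1 work2 work3; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"
done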
4.4 Disable firewalld:
- Run on every node in the cluster (masters and workers):
[root@master1 ~]# systemctl stop firewalld ; systemctl disable firewalld
4.5 Disable SELinux:
- Run on every node in the cluster (masters and workers):
# After changing the SELinux config file, reboot the machine for the change to become permanent:
[root@master1 ~]# sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
# After the reboot, log in again and verify the change:
[root@master1 ~]# getenforce
Disabled
# "Disabled" confirms that SELinux is off
4.6 Disable the swap partition:
- Run on every node in the cluster (masters and workers):
# Disable swap immediately (not persistent)
swapoff -a
# Disable permanently: comment out the swap entry in /etc/fstab
vim /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
# If the VM is a clone, also remove the UUID entry
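Editing /etc/fstab by hand works fine; as an optional shortcut (not in the original steps) the swap line can be commented out with sed and the result verified:
# Comment out any uncommented fstab line that mounts swap, then confirm swap is gone.
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
swapoff -a
free -m    #the Swap row should show 0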
4.7 Adjust kernel parameters:
- Run on every node in the cluster (masters and workers):
# Load the br_netfilter module
modprobe br_netfilter
# Verify the module loaded:
lsmod |grep br_netfilter
# Set the kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the new settings
sysctl -p /etc/sysctl.d/k8s.conf
4.8 Configure the Aliyun repo mirrors:
- Run on every node in the cluster (masters and workers):
# Install openssh-clients and lrzsz
[root@master1 ~]# yum install openssh-clients lrzsz -y
# Back up the stock repo files:
[root@master1 ~]# mkdir /root/repo.bak
[root@master1 ~]# mv /etc/yum.repos.d/* ./repo.bak/
# Add the Aliyun docker-ce repo (yum-config-manager is provided by the yum-utils package):
[root@master1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Write the Aliyun CentOS-Base.repo (note the quoted 'EOF', which stops the shell from expanding $releasever and $basearch):
cat > /etc/yum.repos.d/CentOS-Base.repo <<'EOF'
[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/updates/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/contrib/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/contrib/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/contrib/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
EOF
4.9 Configure time synchronization:
- Run on every node in the cluster (masters and workers):
# Install the ntpdate command
yum install ntpdate -y
# Sync against a public NTP source
ntpdate cn.pool.ntp.org
# Run the sync periodically from cron (hourly)
crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
# Restart the crond service
service crond restart
# Set the timezone to Asia/Shanghai
[root@master1 yum.repos.d]# timedatectl set-timezone Asia/Shanghai
4.10 Install iptables:
- Run on every node in the cluster (masters and workers):
# Install iptables-services
[root@master1 ~]# yum install iptables-services -y
# Stop and disable the iptables service
[root@master1 ~]# service iptables stop && systemctl disable iptables
# Flush any existing firewall rules
[root@master1 ~]# iptables -F
4.11 Enable IPVS:
- Run on every node in the cluster (masters and workers):
# Install the IPVS userspace tools:
[root@master1 modules]# yum install ipset ipvsadm.x86_64 -y
# Write the modules that need to be loaded into a script:
[root@master1 modules]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
# Make the script executable:
[root@master1 modules]# chmod u+x /etc/sysconfig/modules/ipvs.modules
# Run the script:
[root@master1 modules]# /bin/bash /etc/sysconfig/modules/ipvs.modules
# Verify the modules loaded:
[root@master1 modules]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145497 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4 15053 1
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
nf_conntrack 133095 6 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_ipv4
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
4.12 Install docker-ce:
- Run on every node in the cluster (masters and workers):
[root@master1 modules]# yum install docker-ce docker-ce-cli containerd.io -y
[root@master1 modules]# systemctl start docker && systemctl enable docker.service && systemctl status docker
4.13 Configure Docker registry mirrors and the cgroup driver:
- Run on every node in the cluster (masters and workers):
tee /etc/docker/daemon.json << EOF
{
"registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
systemctl status docker
# "native.cgroupdriver=systemd" switches Docker's cgroup driver from the default cgroupfs to systemd; the kubelet will be configured to use systemd as well, and the two must match.
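A quick optional check that the driver change took effect after the restart:
# Should print "Cgroup Driver: systemd".
docker info 2>/dev/null | grep -i "cgroup driver"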
5. Set up the etcd cluster:
5.1 Create the etcd working directories:
- Run on all master nodes:
[root@master1 ~]# mkdir -p /etc/etcd
[root@master1 ~]# mkdir -p /etc/etcd/ssl
5.2 Install the certificate tool cfssl:
- Run on master1 only:
[root@master1 ~]# mkdir /data/work -p
[root@master1 ~]# cd /data/work/
# Upload cfssl-certinfo_linux-amd64, cfssljson_linux-amd64 and cfssl_linux-amd64 to /data/work/
[root@master1 work]# ls
cfssl-certinfo_linux-amd64 cfssljson_linux-amd64 cfssl_linux-amd64
# Make the binaries executable
[root@master1 work]# chmod +x *
[root@master1 work]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@master1 work]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@master1 work]# mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
5.3 Generate the root CA (used later to issue the etcd, apiserver and other cluster certificates):
- Create the CA certificate signing request file; run on master1 only:
[root@master1 ca]# vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
[root@master1 ca]# cfssl gencert -initca /data/ca/ca-csr.json | cfssljson -bare ca
[root@master1 ca]# ll
total 16
-rw-r--r-- 1 root root 1001 Mar 15 17:58 ca.csr        #CA certificate signing request
-rw-r--r-- 1 root root  256 Mar 15 17:57 ca-csr.json   #file used to generate the signing request
-rw------- 1 root root 1679 Mar 15 17:58 ca-key.pem    #CA private key
-rw-r--r-- 1 root root 1359 Mar 15 17:58 ca.pem        #CA certificate
#Notes:
#CN: Common Name. kube-apiserver extracts this field from a client certificate and uses it as the request's user name. Browsers use it to validate a website; for an SSL certificate it is usually the site's domain name, for a code-signing certificate the applicant organization, and for a client certificate the applicant's name.
#O: Organization. kube-apiserver extracts this field from a client certificate and uses it as the user's group. For an SSL certificate it is usually the site's domain name, for a code-signing certificate the applicant organization, and for a client certificate the applicant's organization.
#L: city
#ST: province/state
#C: two-letter country code only, e.g. CN for China
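An optional way to double-check which subject fields ended up in the CA certificate, using the cfssl-certinfo binary installed in step 5.2:
# Prints the parsed certificate as JSON; check the subject's CN, O and OU values.
cfssl-certinfo -cert ca.pem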
5.4 Generate the etcd server certificate:
- Create the CA signing configuration (its "kubernetes" profile is reused for the other cluster certificates); run on master1:
[root@master1 etcd-ssl]# vim etcd-server-ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
- Create the etcd server certificate signing request file; run on master1:
[root@master1 etcd-ssl]# vim etcd-server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.225.138",
    "192.168.225.139",
    "192.168.225.140",
    "192.168.225.150"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
#The hosts field above lists the cluster-internal IPs of all etcd nodes; reserving a few spare IPs leaves room for future expansion.
- Issue the etcd server certificate:
[root@master1 etcd-ssl]# cfssl gencert -ca=/data/ca/ca.pem -ca-key=/data/ca/ca-key.pem -config=/data/etcd-work/etcd-ssl/etcd-server-ca-config.json -profile=kubernetes /data/etcd-work/etcd-ssl/etcd-server-csr.json | cfssljson -bare etcd-server
[root@master1 etcd-ssl]# ls etcd-server*.pem
etcd-server-key.pem etcd-server.pem
#The etcd server certificate has now been issued.
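Optionally confirm that the Subject Alternative Names in the new certificate cover the etcd node IPs and the VIP (openssl ships with CentOS):
# Should list 127.0.0.1, the three etcd node IPs and 192.168.225.150.
openssl x509 -in etcd-server.pem -noout -text | grep -A1 "Subject Alternative Name"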
5.5 Deploy the etcd cluster:
5.5.1 Deploy etcd on master1:
- Place etcd-v3.4.13-linux-amd64.tar.gz in the working directory (download it or upload it to /data/work):
[root@master1 etcd-v3.4.13]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
[root@master1 etcd-v3.4.13]# ll etcd-v3.4.13-linux-amd64.tar.gz
-rw-r--r-- 1 root root 17373136 Mar 15 10:07 etcd-v3.4.13-linux-amd64.tar.gz
[root@master1 etcd-v3.4.13]# tar -xf etcd-v3.4.13-linux-amd64.tar.gz
[root@master1 etcd-v3.4.13]# cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
- Create the etcd configuration file:
[root@master1 work]# vim etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.225.138:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.225.138:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.225.138:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.225.138:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.225.138:2380,etcd2=https://192.168.225.139:2380,etcd3=https://192.168.225.140:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
#Parameter reference:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) traffic
ETCD_LISTEN_CLIENT_URLS: listen address for client traffic
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
ETCD_INITIAL_CLUSTER: initial list of cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: initial cluster state; "new" for a brand-new cluster, "existing" when joining one that already exists
- Copy the certificates and the configuration file into place:
[root@master1 ca]# cp ca*.pem /etc/etcd/ssl/
[root@master1 etcd-ssl]# cp etcd-server*.pem /etc/etcd/ssl/
[root@master1 etcd-ssl]# cp etcd.conf /etc/etcd/
- Create the systemd unit file:
[root@master1 work]# vim etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
--cert-file=/etc/etcd/ssl/etcd-server.pem \\
--key-file=/etc/etcd/ssl/etcd-server-key.pem \\
--trusted-ca-file=/etc/etcd/ssl/ca.pem \\
--peer-cert-file=/etc/etcd/ssl/etcd-server.pem \\
--peer-key-file=/etc/etcd/ssl/etcd-server-key.pem \\
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
- Copy the etcd unit file into /usr/lib/systemd/system/:
[root@master1 etcd-work]# cp etcd.service /usr/lib/systemd/system
- Create the etcd data directory:
[root@master1 work]# mkdir -p /var/lib/etcd/default.etcd
5.5.2 Deploy etcd on master2:
- Copy the certificates, unit file and configuration file from master1:
[root@master2 ~]# mkdir /etc/etcd/ssl/ -p
[root@master2 ssl]# scp root@192.168.225.138:/etc/etcd/ssl/* .
[root@master2 ssl]# ll
total 16
-rw------- 1 root root 1679 Mar 15 18:23 ca-key.pem
-rw-r--r-- 1 root root 1359 Mar 15 18:23 ca.pem
-rw------- 1 root root 1679 Mar 15 18:23 etcd-server-key.pem
-rw-r--r-- 1 root root 1444 Mar 15 18:23 etcd-server.pem
[root@master2 etcd]# scp root@192.168.225.138:/etc/etcd/etcd.conf .
[root@master2 etcd]# ll
total 4
-rw-r--r-- 1 root root 540 Mar 15 14:30 etcd.conf
[root@master2 system]# scp root@192.168.225.138:/usr/lib/systemd/system/etcd.service .
[root@master2 system]# ll etcd.service
-rw-r--r-- 1 root root 686 Mar 15 14:31 etcd.service
- Create the etcd data directory:
[root@master2 etcd-work]# mkdir -p /var/lib/etcd/default.etcd
- Adjust the configuration file for this node:
[root@master2 system]# vim /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.225.139:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.225.139:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.225.139:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.225.139:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.225.138:2380,etcd2=https://192.168.225.139:2380,etcd3=https://192.168.225.140:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
5.5.3 Deploy etcd on master3:
- Copy the certificates, unit file and configuration file from master1:
[root@master3 ~]# mkdir /etc/etcd/ssl/ -p
[root@master3 ssl]# scp root@192.168.225.138:/etc/etcd/ssl/* .
[root@master3 ssl]# ll
total 16
-rw------- 1 root root 1679 Mar 15 18:26 ca-key.pem
-rw-r--r-- 1 root root 1359 Mar 15 18:26 ca.pem
-rw------- 1 root root 1679 Mar 15 18:26 etcd-server-key.pem
-rw-r--r-- 1 root root 1444 Mar 15 18:26 etcd-server.pem
[root@master3 etcd]# scp root@192.168.225.138:/etc/etcd/etcd.conf .
[root@master3 etcd]# ll
total 4
-rw-r--r-- 1 root root 540 Mar 15 14:30 etcd.conf
[root@master3 system]# scp root@192.168.225.138:/usr/lib/systemd/system/etcd.service .
[root@master3 system]# ll etcd.service
-rw-r--r-- 1 root root 686 Mar 15 14:31 etcd.service
- Create the etcd data directory:
[root@master3 etcd-work]# mkdir -p /var/lib/etcd/default.etcd
- Adjust the configuration file for this node:
[root@master3 system]# vim /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.225.140:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.225.140:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.225.140:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.225.140:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.225.138:2380,etcd2=https://192.168.225.139:2380,etcd3=https://192.168.225.140:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
5.5.4 Start the etcd service on all three nodes (note: with Type=notify the first node may appear to hang until a second member joins, so start all three in quick succession):
[root@master1 work]# systemctl daemon-reload
[root@master1 work]# systemctl enable etcd.service
[root@master1 work]# systemctl start etcd.service
[root@master2 work]# systemctl daemon-reload
[root@master2 work]# systemctl enable etcd.service
[root@master2 work]# systemctl start etcd.service
[root@master3 work]# systemctl daemon-reload
[root@master3 work]# systemctl enable etcd.service
[root@master3 work]# systemctl start etcd.service
#Check the service status on each node:
[root@master1]# systemctl status etcd
[root@master2]# systemctl status etcd
[root@master3]# systemctl status etcd
5.5.5 Check the etcd cluster health:
[root@master1 etcd]# ETCDCTL_API=3 /usr/local/bin/etcdctl --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd-server.pem --key=/etc/etcd/ssl/etcd-server-key.pem --endpoints="https://192.168.225.138:2379,https://192.168.225.139:2379,https://192.168.225.140:2379" endpoint health --write-out=table
+------------------------------+--------+-------------+-------+
| ENDPOINT | HEALTH | TOOK | ERROR |
+------------------------------+--------+-------------+-------+
| https://192.168.225.138:2379 | true | 19.377865ms | |
| https://192.168.225.140:2379 | true | 20.559902ms | |
| https://192.168.225.139:2379 | true | 24.452918ms | |
+------------------------------+--------+-------------+-------+
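For more detail (which member is the leader, DB size, raft term), the same client flags also work with the endpoint status and member list subcommands; the output below will vary per environment:
ETCDCTL_API=3 /usr/local/bin/etcdctl --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd-server.pem --key=/etc/etcd/ssl/etcd-server-key.pem --endpoints="https://192.168.225.138:2379,https://192.168.225.139:2379,https://192.168.225.140:2379" endpoint status --write-out=table
ETCDCTL_API=3 /usr/local/bin/etcdctl --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd-server.pem --key=/etc/etcd/ssl/etcd-server-key.pem --endpoints="https://192.168.225.138:2379" member list --write-out=table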
6. Install the Kubernetes components:
- apiserver
- controller manager
- scheduler
- kubelet
- kube-proxy
6.1 Download the server binaries:
#Upload kubernetes-server-linux-amd64.tar.gz to the /data/kubernetes-work directory on master1:
[root@master1 work]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@master1 work]# cd kubernetes/server/bin/
[root@master1 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
#Copy these four binaries to /usr/local/bin on master2 and master3 as well:
[root@master1 kubernetes-work]# scp kube-apiserver kube-controller-manager kube-scheduler kubectl root@192.168.225.139:/usr/local/bin
[root@master1 kubernetes-work]# scp kube-apiserver kube-controller-manager kube-scheduler kubectl root@192.168.225.140:/usr/local/bin
#Copy kubelet and kube-proxy to /usr/local/bin on work1-work3:
[root@master1 kubernetes-work]# scp kubelet kube-proxy work1:/usr/local/bin
[root@master1 kubernetes-work]# scp kubelet kube-proxy work2:/usr/local/bin
[root@master1 kubernetes-work]# scp kubelet kube-proxy work3:/usr/local/bin
#Create the kubernetes config, certificate and log directories (on every node in the cluster, masters and workers):
[root@master1 kubernetes-work]# mkdir -p /etc/kubernetes/ssl
[root@master1 kubernetes-work]# mkdir /var/log/kubernetes
6.2 Deploy the kube-apiserver component:
6.2.1 How the TLS bootstrapping mechanism works:
- Bootstrap kubeconfig used to start TLS bootstrapping:
apiVersion: v1
clusters: null
contexts:
- context:
cluster: kubernetes
user: kubelet-bootstrap
name: default
current-context: default
kind: Config
preferences:
users:
- name: kubelet-bootstrap
user:
- The TLS bootstrapping flow in detail:
- Role of TLS: TLS encrypts the traffic and prevents man-in-the-middle eavesdropping; a client whose certificate is not trusted cannot even establish a connection to the apiserver, never mind having permission to request anything from it.
- RBAC: once TLS has secured the connection, authorization is handled by RBAC (other authorization modes such as ABAC can also be used). RBAC defines which APIs a user or group (subject) is allowed to call. When client certificates are used together with TLS, the apiserver reads the certificate's CN field as the user name and the O field as the group.
- In short: first, to talk to the apiserver a client must present a certificate issued by the apiserver's CA, which establishes trust and the TLS connection; second, the certificate's CN and O fields supply the user and group that RBAC needs. Permissions are never granted to users directly; they are granted to roles, and users are bound to those roles.
- kubelet first-start bootstrap flow:
6.2.2 Deploy kube-apiserver on master1
- Create the token.csv file:
[root@master1 kubernetes-work]# cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
#Format: token,user name,UID,user group
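The generated file should look roughly like this (the token below is a made-up placeholder; yours will differ):
# Example contents of token.csv:
0fb61c46f8991b718eb38d27b605b008,kubelet-bootstrap,10001,"system:kubelet-bootstrap"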
- Create the CSR file, replacing the IP addresses with your own:
[root@master1 kubernetes-work]# vim kube-apiserver-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.225.138",
    "192.168.225.139",
    "192.168.225.140",
    "192.168.225.141",
    "192.168.225.142",
    "192.168.225.143",
    "192.168.225.144",
    "192.168.225.150",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
#Note: when the hosts field is not empty it must list every IP or domain name allowed to use this certificate. Because the certificate is used by the whole kubernetes master cluster, include all master node IPs, the VIP, any spare IPs reserved for expansion, and the first IP of the service network (the first address of the --service-cluster-ip-range passed to kube-apiserver, here 10.255.0.1).
- Issue the kube-apiserver certificate:
[root@master1 kubernetes-ssl]# cfssl gencert -ca=/data/ca/ca.pem -ca-key=/data/ca/ca-key.pem -config=/data/etcd-work/etcd-ssl/etcd-server-ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
[root@master1 kubernetes-ssl]# ll
total 16
-rw-r--r-- 1 root root 1301 Mar 15 21:15 kube-apiserver.csr
-rw-r--r-- 1 root root 600 Mar 15 20:32 kube-apiserver-csr.json
-rw------- 1 root root 1679 Mar 15 21:15 kube-apiserver-key.pem
-rw-r--r-- 1 root root 1667 Mar 15 21:15 kube-apiserver.pem
- Create the apiserver configuration file, replacing the IPs with your own:
[root@master1 kubernetes-work]# vim kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--anonymous-auth=false \\
--bind-address=192.168.225.138 \\
--secure-port=6443 \\
--advertise-address=192.168.225.138 \\
--insecure-port=0 \\
--authorization-mode=Node,RBAC \\
--runtime-config=api/all=true \\
--enable-bootstrap-token-auth \\
--service-cluster-ip-range=10.255.0.0/16 \\
--token-auth-file=/etc/kubernetes/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
--client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--etcd-cafile=/etc/etcd/ssl/ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd-server.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-server-key.pem \\
--etcd-servers=https://192.168.225.138:2379,https://192.168.225.139:2379,https://192.168.225.140:2379 \\
--enable-swagger-ui=true \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kube-apiserver-audit.log \\
--event-ttl=1h \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=4"
#Parameter reference:
--logtostderr: log to stderr instead of files
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS port
--advertise-address: address advertised to the rest of the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: virtual IP range for Services
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC and Node authorization
--enable-bootstrap-token-auth: enable the TLS bootstrap token mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: port range allocated to NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses when talking to kubelets
--tls-xxx-file: apiserver HTTPS serving certificate and key
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings
- Create the apiserver systemd unit file:
[root@master1 kubernetes-work]# vim kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
- Copy the certificates, token file, configuration file and unit file into place:
[root@master1 ca]# cp ca*.pem /etc/kubernetes/ssl
[root@master1 kubernetes-ssl]# cp kube-apiserver*.pem /etc/kubernetes/ssl/
[root@master1 kubernetes-work]# cp token.csv /etc/kubernetes/
[root@master1 kubernetes-work]# cp kube-apiserver.conf /etc/kubernetes/
[root@master1 kubernetes-work]# cp kube-apiserver.service /usr/lib/systemd/system/
- Start the kube-apiserver service:
[root@master1 ~]# systemctl daemon-reload
[root@master1 ~]# systemctl enable kube-apiserver
[root@master1 ~]# systemctl start kube-apiserver
[root@master1 kubernetes]# systemctl is-active kube-apiserver.service
active
6.2.3 Deploy kube-apiserver on master2:
- Copy the certificates, kube-apiserver unit file, configuration file and token file from master1:
[root@master2 ~]# mkdir /etc/kubernetes/ssl/ -p
[root@master2 ssl]# scp root@192.168.225.138:/etc/kubernetes/ssl/* .
[root@master2 ssl]# ll
total 16
-rw------- 1 root root 1679 Mar 15 21:47 ca-key.pem
-rw-r--r-- 1 root root 1359 Mar 15 21:47 ca.pem
-rw------- 1 root root 1679 Mar 15 21:47 kube-apiserver-key.pem
-rw-r--r-- 1 root root 1667 Mar 15 21:47 kube-apiserver.pem
[root@master2 kubernetes]# scp root@192.168.225.138:/etc/kubernetes/kube-apiserver.conf .
[root@master2 kubernetes]# ll
total 4
-rw-r--r-- 1 root root 1635 Mar 15 21:49 kube-apiserver.conf
[root@master2 system]# scp root@192.168.225.138:/usr/lib/systemd/system/kube-apiserver.service .
[root@master2 system]# ll kube-apiserver.service
-rw-r--r-- 1 root root 686 Mar 15 14:31 kube-apiserver.service
[root@master2 kubernetes]# scp root@192.168.225.138:/etc/kubernetes/token.csv .
[root@master2 kubernetes]# ll
total 8
-rw-r--r-- 1 root root 1635 Mar 15 21:52 kube-apiserver.conf
drwxr-xr-x 2 root root 94 Mar 15 21:47 ssl
-rw-r--r-- 1 root root 84 Mar 15 21:55 token.csv
- Edit the kube-apiserver.conf file:
#Change the bind and advertise addresses to this node's IP:
--bind-address=192.168.225.139 \\
--advertise-address=192.168.225.139 \\
- Start the kube-apiserver service:
[root@master2 ~]# systemctl daemon-reload
[root@master2 ~]# systemctl enable kube-apiserver
[root@master2 ~]# systemctl start kube-apiserver
[root@master2 kubernetes]# systemctl is-active kube-apiserver.service
active
6.2.4 Deploy kube-apiserver on master3:
- Copy the certificates, kube-apiserver unit file, configuration file and token file from master1:
[root@master3 ~]# mkdir /etc/kubernetes/ssl/ -p
[root@master3 ssl]# scp root@192.168.225.138:/etc/kubernetes/ssl/* .
[root@master3 ssl]# ll
total 16
-rw------- 1 root root 1679 Mar 15 21:47 ca-key.pem
-rw-r--r-- 1 root root 1359 Mar 15 21:47 ca.pem
-rw------- 1 root root 1679 Mar 15 21:47 kube-apiserver-key.pem
-rw-r--r-- 1 root root 1667 Mar 15 21:47 kube-apiserver.pem
[root@master3 kubernetes]# scp root@192.168.225.138:/etc/kubernetes/kube-apiserver.conf .
[root@master3 kubernetes]# ll
total 4
-rw-r--r-- 1 root root 1635 Mar 15 21:49 kube-apiserver.conf
[root@master3 system]# scp root@192.168.225.138:/usr/lib/systemd/system/kube-apiserver.service .
[root@master3 system]# ll kube-apiserver.service
-rw-r--r-- 1 root root 686 Mar 15 14:31 kube-apiserver.service
[root@master3 kubernetes]# scp root@192.168.225.138:/etc/kubernetes/token.csv .
[root@master3 kubernetes]# ll
total 8
-rw-r--r-- 1 root root 1635 Mar 15 21:52 kube-apiserver.conf
drwxr-xr-x 2 root root 94 Mar 15 21:47 ssl
-rw-r--r-- 1 root root 84 Mar 15 21:55 token.csv
- Edit the kube-apiserver.conf file:
#Change the bind and advertise addresses to this node's IP:
--bind-address=192.168.225.140 \\
--advertise-address=192.168.225.140 \\
- Start the kube-apiserver service:
[root@master3 ~]# systemctl daemon-reload
[root@master3 ~]# systemctl enable kube-apiserver
[root@master3 ~]# systemctl start kube-apiserver
[root@master3 kubernetes]# systemctl is-active kube-apiserver.service
active
6.2.5 Test the kube-apiserver endpoints:
- Log in to any worker node and test:
[root@work1 ~]# curl --insecure https://192.168.225.138:6443/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
#The 401 above is expected at this stage: the request reaches the apiserver but has not been authenticated yet.
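As a further optional check (run on a master node, where token.csv exists), passing the bootstrap token as a bearer token should change the answer from 401 to 403: the request then authenticates as kubelet-bootstrap but has no RBAC permissions yet.
#Read the token created in 6.2.2 and send it as a bearer token.
TOKEN=$(cut -d, -f1 /etc/kubernetes/token.csv)
curl --insecure -H "Authorization: Bearer ${TOKEN}" https://192.168.225.138:6443/api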
7. Deploy the kubectl component:
- One option is to set the KUBECONFIG environment variable:
[root@master1 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
#kubectl then loads KUBECONFIG automatically and knows which cluster's resources to manage
- The other option, which kubeadm also suggests after initializing a cluster, is to copy the file into place:
[root@master1 ~]# cp /etc/kubernetes/admin.conf /root/.kube/config
#kubectl then loads /root/.kube/config whenever it runs
7.1 Create the CSR file:
[root@master1 kubectl]# vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
7.2 Issue the certificate:
[root@master1 kubectl]# cfssl gencert -ca=/data/ca/ca.pem -ca-key=/data/ca/ca-key.pem -config=/data/etcd-work/etcd-ssl/etcd-server-ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@master1 kubectl]# cp admin*.pem /etc/kubernetes/ssl/
7.3 Build the kubeconfig (security context):
- Set the cluster parameters:
[root@master1 kubectl]# kubectl config set-cluster kubernetes --certificate-authority=/data/ca/ca.pem --embed-certs=true --server=https://192.168.225.138:6443 --kubeconfig=/opt/kube.config
- Set the client credentials:
[root@master1 kubectl]# kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ssl/admin.pem --client-key=/etc/kubernetes/ssl/admin-key.pem --embed-certs=true --kubeconfig=/opt/kube.config
- Set the context:
[root@master1 kubectl]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=/opt/kube.config
- Switch to the new context:
[root@master1 kubectl]# kubectl config use-context kubernetes --kubeconfig=/opt/kube.config
[root@master1 kubectl]# mkdir ~/.kube -p
[root@master1 kubectl]# cp /opt/kube.config ~/.kube/config
- Grant the kubernetes user permission to access the kubelet API:
[root@master1 kubectl]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
7.4 Check the cluster component status:
[root@master1 kubectl]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.225.138:6443
[root@master2 .kube]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
controller-manager Unhealthy Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
#scheduler and controller-manager report Unhealthy here because they have not been deployed yet; they are set up in the later sections.
[root@master1 kubectl]# kubectl get all --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.255.0.1 <none> 443/TCP
7.5 Copy the kubectl config file to the other two masters:
[root@master2 kubectl]# mkdir /root/.kube
[root@master3 kubectl]# mkdir /root/.kube
[root@master1 kubectl]# scp ~/.kube/config root@192.168.225.139:/root/.kube
[root@master1 kubectl]# scp ~/.kube/config root@192.168.225.140:/root/.kube
7.6 Configure kubectl command completion:
#Run on all three master nodes:
[root@master1 ~]# yum install -y bash-completion
[root@master1 ~]# source /usr/share/bash-completion/bash_completion
[root@master1 ~]# source <(kubectl completion bash)
[root@master1 ~]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@master1 ~]# source /root/.kube/completion.bash.inc
[root@master1 ~]# source $HOME/.bash_profile
#Official kubectl cheat sheet:
https://kubernetes.io/zh/docs/reference/kubectl/cheatsheet/
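The source commands above only affect the current shell; to make completion persist across logins (an optional tweak, not in the original steps), append it to the shell profile:
cat >> ~/.bashrc << 'EOF'
source /usr/share/bash-completion/bash_completion
source /root/.kube/completion.bash.inc
EOF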
8. Deploy the kube-controller-manager component: