Installing a Highly Available Kubernetes Cluster from Binaries
Environment: CentOS 7.7
Cluster nodes:
10.33.250.164 | k8s-master01
10.33.250.165 | k8s-master02
10.33.250.167 | k8s-master03
10.33.250.171 | k8s-node01
10.33.250.166 | k8s-node02
10.33.250.168 | k8s-node03
10.33.250.199 | vip
All machines are 4 vCPU / 8 GB RAM.
High availability is provided by keepalived + nginx.
Kubernetes version: 1.20.4
Component versions:
etcd: etcd-v3.4.13-linux-amd64.tar.gz
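The keepalived + nginx layer itself is not detailed in this section. For reference only, the sketch below shows a typical arrangement for the VIP 10.33.250.199: nginx (built with the stream module) doing layer-4 proxying to the three kube-apiservers, and keepalived floating the VIP between the masters. The listen port 16443, the interface name eth0, and the priorities are assumptions, not values from this guide.
# /etc/nginx/nginx.conf (stream block) -- L4 proxy to the three apiservers
stream {
    upstream k8s-apiserver {
        server 10.33.250.164:6443 max_fails=3 fail_timeout=30s;
        server 10.33.250.165:6443 max_fails=3 fail_timeout=30s;
        server 10.33.250.167:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 16443;        # assumed port; 6443 is taken by the local apiserver on the masters
        proxy_pass k8s-apiserver;
    }
}
# /etc/keepalived/keepalived.conf -- floats the VIP between the masters
vrrp_instance VI_1 {
    state MASTER             # BACKUP on the other two masters
    interface eth0           # assumed interface name
    virtual_router_id 51
    priority 100             # use lower priorities on the backups
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s
    }
    virtual_ipaddress {
        10.33.250.199
    }
}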
Initialize the environment
1. Install Ansible on k8s-master01
yum install epel-release -y
yum install ansible -y
2. Configure /etc/hosts and the Ansible inventory
cat /etc/hosts
10.33.250.164 k8s-master01
10.33.250.165 k8s-master02
10.33.250.167 k8s-master03
10.33.250.171 k8s-node01
10.33.250.166 k8s-node02
10.33.250.168 k8s-node03
cat hosts
[ALL]
k8s-master02
k8s-master03
k8s-node01
k8s-node02
k8s-node03
[MASTER]
k8s-master02
k8s-master03
[NODE]
k8s-node01
k8s-node02
k8s-node03
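Ansible reaches the other nodes over SSH, so passwordless root login from k8s-master01 is assumed, and every node needs the same /etc/hosts entries. A quick sketch to set that up and verify connectivity with the inventory above:
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa      # skip if a key pair already exists
for h in k8s-master02 k8s-master03 k8s-node01 k8s-node02 k8s-node03; do
    ssh-copy-id root@$h                        # enter each node's root password once
done
ansible ALL -m copy -a "src=/etc/hosts dest=/etc/hosts" -i hosts   # push the same /etc/hosts to every node
ansible ALL -m ping -i hosts                   # every host should answer "pong"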
3. Disable the firewall, SELinux, and swap, and adjust kernel parameters
ansible ALL -m shell -a "systemctl stop firewalld && systemctl disable firewalld && setenforce 0 && swapoff -a" -i hosts
ansible ALL -m shell -a "modprobe br_netfilter && echo modprobe br_netfilter >> /etc/profile" -i hosts
vi /etc/fstab   # comment out the swap line so swap stays off after a reboot
cat > /etc/sysctl.d/k8s.conf <<EOF   # run on every machine
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
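The heredoc above only writes the file on k8s-master01, and setenforce 0 / swapoff -a do not survive a reboot. A hedged way to push the sysctl file to the remaining nodes and make the SELinux and swap changes persistent, using the same inventory (the sed patterns are illustrative):
ansible ALL -m copy -a "src=/etc/sysctl.d/k8s.conf dest=/etc/sysctl.d/k8s.conf" -i hosts
ansible ALL -m shell -a "sysctl -p /etc/sysctl.d/k8s.conf" -i hosts
ansible ALL -m shell -a "sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config" -i hosts   # keep SELinux off after reboot
ansible ALL -m shell -a "sed -ri 's/.*swap.*/#&/' /etc/fstab" -i hosts                                   # comment out the swap entry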
4. Configure the Aliyun yum repos
Configure the Aliyun base repo
ansible ALL -m shell -a "mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup && wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo" -i hosts
Configure the Aliyun Docker repo
ansible ALL -m shell -a "yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo" -i hosts
If yum-config-manager is not available, install it first:
yum -y install yum-utils
Install iptables
yum install iptables-services -y
# disable the iptables service and flush its rules
ansible ALL -m shell -a "systemctl stop iptables && systemctl disable iptables&& iptables -F" -i hosts
Enable IPVS
# Without IPVS, kube-proxy falls back to iptables for packet forwarding, which is less efficient; the official documentation recommends enabling IPVS.
cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in $ipvs_modules; do
/sbin/modinfo -F filename $kernel_module > /dev/null 2>&1
if [ $? -eq 0 ]; then
/sbin/modprobe $kernel_module
fi
done
ansible ALL -m copy -a "src=/etc/sysconfig/modules/ipvs.modules dest=/etc/sysconfig/modules/ipvs.modules" -i hosts
ansible ALL -m shell -a "chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs" -i hosts
5. Install base packages and Docker
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet rsync
yum install docker-ce docker-ce-cli containerd.io -y &&systemctl start docker && systemctl enable docker.service && systemctl status docker
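The two yum commands above are shown as run on k8s-master01 only; the same packages are presumably needed on every node. A hedged way to fan them out with the inventory used earlier (the ALL group covers every node except k8s-master01; the package list below is the essential subset):
ansible ALL -m shell -a "yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils socat ipvsadm conntrack ntpdate telnet rsync" -i hosts
ansible ALL -m shell -a "yum install -y docker-ce docker-ce-cli containerd.io && systemctl enable --now docker" -i hosts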
Configure the Docker registry mirror accelerators
tee /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com", "https://registry.docker-cn.com", "https://docker.mirrors.ustc.edu.cn", "https://dockerhub.azk8s.cn", "http://hub-mirror.c.163.com", "http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload &&systemctl restart docker &&systemctl status docker
ansible ALL -m copy -a "src=/etc/docker/daemon.json dest=/etc/docker/daemon.json" -i hosts
ansible ALL -m shell -a "systemctl daemon-reload &&systemctl restart docker &&systemctl status docker" -i hosts
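Since daemon.json switches the cgroup driver to systemd, it is worth confirming the setting took effect everywhere before going further; a quick check, using the same inventory:
docker info 2>/dev/null | grep -i "cgroup driver"                                 # on k8s-master01, expect: Cgroup Driver: systemd
ansible ALL -m shell -a "docker info 2>/dev/null | grep -i 'cgroup driver'" -i hosts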
Install the etcd cluster
Configure the etcd working directories
Create directories for the configuration and certificate files
On all masters:
mkdir -p /etc/etcd/ssl
mkdir /data/work -p
ansible MASTER -m shell -a "mkdir -p /etc/etcd/ssl && mkdir /data/work -p" -i hosts
Install the certificate-issuing tool cfssl
Run this on the primary master (k8s-master01); the generated certificates are later synced to the other masters.
cd /data/work
# upload cfssl-certinfo_linux-amd64, cfssljson_linux-amd64 and cfssl_linux-amd64 to /data/work/
chmod +x *   # make the binaries executable
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
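A quick sanity check that the tools are on PATH and executable (the exact output depends on the cfssl release you uploaded):
cfssl version                     # prints the cfssl revision and runtime
which cfssljson cfssl-certinfo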
Create the CA certificate
Generate the CA certificate signing request file
vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Notes:
CN: Common Name. kube-apiserver extracts this field from the certificate and uses it as the requesting user name; browsers use it to validate a site. For SSL certificates it is usually the site's domain name; for code-signing certificates it is the applicant organization; for client certificates it is the applicant's name.
O: Organization. kube-apiserver extracts this field and uses it as the group the requesting user belongs to. For SSL certificates it is usually the site's domain name; for code-signing certificates it is the applicant organization; for client certificates it is the applicant's organization.
L: city
ST: state or province
C: country code (two letters only), e.g. CN for China
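Generating the CA leaves ca.pem, ca-key.pem and ca.csr in /data/work. To double-check the subject fields described above, the cfssl-certinfo tool installed earlier (or plain openssl) can dump the certificate:
cfssl-certinfo -cert ca.pem                       # prints subject, SANs and validity as JSON
openssl x509 -in ca.pem -noout -subject -dates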
# Create the CA signing configuration (ca-config.json)
vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
Generate the etcd certificates
# etcd certificate request; replace the IPs in the hosts field with your own etcd node IPs
vim etcd-csr.json
"CN": "etcd",
"hosts": [
"10.33.250.164",
"10.33.250.165",
"10.33.250.167",
"127.0.0.1",
"10.33.250.199"
],
"key":
"algo": "rsa",
"size": 2048
,
"names": [
"C": "CN",
"ST": "Shanghai",
"L": "Shanghai",
"O": "k8s",
"OU": "system"
]
# The IPs in the hosts field above are the internal IPs of all etcd nodes; you can reserve a few extra ones for future expansion.
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
Two certificate files are generated: etcd.pem and etcd-key.pem.
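Before deploying, it is worth confirming that every IP from the hosts field actually landed in the certificate's SAN list; a quick check with openssl:
openssl x509 -in etcd.pem -noout -text | grep -A1 "Subject Alternative Name"
# expect all IPs from the request above: 10.33.250.164, 10.33.250.165, 10.33.250.167, 127.0.0.1, 10.33.250.199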
Deploy the etcd cluster
tar -xvzf etcd-v3.4.13-linux-amd64.tar.gz
cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
The tarball provides two binaries: etcd and etcdctl.
Copy these two binaries to the same location on the other master nodes:
ansible MASTER -m copy -a "src=etcd-v3.4.13-linux-amd64/etcdctl dest=/usr/local/bin/" -i /root/hosts
ansible MASTER -m copy -a "src=etcd-v3.4.13-linux-amd64/etcd dest=/usr/local/bin/" -i /root/hosts
# Create the configuration file
vi etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.33.250.164:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.33.250.164:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.33.250.164:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.33.250.164:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.33.250.164:2380,etcd2=https://10.33.250.165:2380,etcd3=https://10.33.250.167:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_NAME: member name, unique within the cluster
#ETCD_DATA_DIR: data directory
#ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
#ETCD_LISTEN_CLIENT_URLS: client listen address
#ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
#ETCD_ADVERTISE_CLIENT_URLS: advertised client address
#ETCD_INITIAL_CLUSTER: addresses of all cluster members
#ETCD_INITIAL_CLUSTER_TOKEN: cluster token
#ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a brand-new cluster, "existing" to join an existing one
When copying etcd.conf to the other nodes, change five settings to that node's own values: ETCD_NAME, ETCD_LISTEN_PEER_URLS, ETCD_LISTEN_CLIENT_URLS, ETCD_INITIAL_ADVERTISE_PEER_URLS and ETCD_ADVERTISE_CLIENT_URLS (see the example for etcd2 below).
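For reference, this is roughly what /etc/etcd/etcd.conf on k8s-master02 (10.33.250.165) looks like after those five edits; k8s-master03 follows the same pattern with etcd3 and 10.33.250.167:
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.33.250.165:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.33.250.165:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.33.250.165:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.33.250.165:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.33.250.164:2380,etcd2=https://10.33.250.165:2380,etcd3=https://10.33.250.167:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"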
# Create the systemd unit file
vim etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
# Copy the certificates, configuration file, and unit file to the other nodes
cp ca*.pem /etc/etcd/ssl/
cp etcd*.pem /etc/etcd/ssl/
cp etcd.conf /etc/etcd/
cp etcd.service /usr/lib/systemd/system/
for i in k8s-master02 k8s-master03; do rsync -vaz etcd.conf $i:/etc/etcd/; done
for i in k8s-master02 k8s-master03; do rsync -vaz etcd*.pem ca*.pem $i:/etc/etcd/ssl/; done
for i in k8s-master02 k8s-master03; do rsync -vaz etcd.service $i:/usr/lib/systemd/system/; done
# Start the etcd cluster
On all etcd nodes: mkdir -p /var/lib/etcd/default.etcd
systemctl daemon-reload && systemctl enable etcd.service && systemctl start etcd.service && systemctl status etcd.service
When starting the cluster, start etcd on master01 first; it will appear stuck in the starting state because it is waiting for quorum. Then start etcd on master02, and finally on master03, after which all three members come up normally.
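An alternative, sketched below, is to bring the members up nearly simultaneously: start the local member on k8s-master01 without blocking, then start the remaining masters through Ansible. With ETCD_INITIAL_CLUSTER_STATE="new", a member only reports as started once a quorum (2 of 3) is reachable, which is why a single member appears to hang.
mkdir -p /var/lib/etcd/default.etcd
systemctl daemon-reload && systemctl enable etcd.service && systemctl --no-block start etcd.service
ansible MASTER -m shell -a "mkdir -p /var/lib/etcd/default.etcd && systemctl daemon-reload && systemctl enable etcd.service && systemctl start etcd.service" -i /root/hosts
systemctl status etcd.service        # should show active (running) on all three masters shortly afterwards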
# Check the etcd cluster
ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://10.33.250.164:2379,https://10.33.250.165:2379,https://10.33.250.167:2379 endpoint health
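Beyond health, "endpoint status" shows which member is the leader and whether DB sizes and raft terms agree, and "member list" confirms all three members registered (same certificates and endpoints as above):
ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://10.33.250.164:2379,https://10.33.250.165:2379,https://10.33.250.167:2379 endpoint status
ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://10.33.250.164:2379 member list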