Quickly Building a Highly Available Kubernetes Cluster (3 masters + 3 workers + load balancer)
kubeadm is the tool Kubernetes officially provides for quickly installing a Kubernetes cluster; installing with kubeadm is considerably more efficient than installing from binaries. First-time k8s users are advised to install this way — the binary route will quickly sap your confidence.
Before starting, the machines used to deploy the Kubernetes cluster must meet the following requirements:
For installing dnsmasq, see my other article.
ha1 node configuration
ha2 node configuration
Run on both ha nodes
After starting, check the NIC information on the ha nodes (the VIP is visible on one of them)
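The HA-node configuration itself did not survive extraction; from the VIP on the NIC this is keepalived by all appearances. A minimal keepalived.conf sketch for ha1 follows — the VIP, interface and priority are placeholders, and ha2 is identical except for state BACKUP and a lower priority:

# /etc/keepalived/keepalived.conf on ha1 -- VIP and interface are hypothetical
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s
    }
    virtual_ipaddress {
        192.168.44.158    # hypothetical VIP
    }
}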
The configuration is identical on both ha nodes. It declares the backend master servers being proxied and sets haproxy to listen on port 16443, so port 16443 is the entry point of the cluster.
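A minimal haproxy.cfg sketch matching that description (master names and addresses are placeholders; with three masters, add a third server line):

# appended to /etc/haproxy/haproxy.cfg on both ha nodes
frontend kubernetes-apiserver
    mode            tcp
    bind            *:16443
    default_backend kubernetes-apiserver
backend kubernetes-apiserver
    mode    tcp
    balance roundrobin
    server  master1 192.168.44.155:6443 check
    server  master2 192.168.44.156:6443 check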
Start haproxy on both ha nodes
Check the port
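For example (ss is assumed to be installed; netstat -lntp works equally well):

systemctl enable haproxy && systemctl start haproxy
ss -lnt | grep 16443    # the 16443 listener should show up on both ha nodes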
Kubernetes defaults to Docker as its CRI (container runtime), so install Docker first. kubelet manages the containers; kubeadm handles joining nodes to the control plane.
Registry mirror
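A sketch of the install plus mirror setup — the mirror URL is only an example, substitute whichever accelerator you use:

yum install -y docker-ce
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl enable docker && systemctl restart docker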
Releases change frequently, so pin the version to deploy:
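For instance (the version number is illustrative — pin whichever release you are deploying, with the Kubernetes yum repo already configured):

yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet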
Run on master1
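The original init command was lost; a hedged sketch, assuming the haproxy entry point from above as the control-plane endpoint (address, version, image repository and CIDRs are all placeholders to adjust):

kubeadm init \
  --control-plane-endpoint "192.168.44.158:16443" \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr 10.96.0.0/12 \
  --pod-network-cidr 10.244.0.0/16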
Configure the environment as the output prompts, so the kubectl tool can be used:
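kubeadm's output prompts these standard steps:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config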
Save the following from the output as prompted; it will be needed shortly:
Check the cluster status
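For example:

kubectl get cs
kubectl get nodes    # master1 stays NotReady until the network plugin is installed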
Fetch the flannel YAML from the official address and run it on master1
Install the flannel network
Check
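A sketch of the apply-and-check step; the URL is the upstream manifest commonly used at the time — pin it to a concrete release in practice:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get pods -n kube-system    # wait for the kube-flannel-ds pods to reach Running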
Copy the keys and related files from master1 to master2
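A sketch of the copy, assuming kubeadm's default certificate locations (master2 here is a placeholder hostname):

ssh root@master2 "mkdir -p /etc/kubernetes/pki/etcd"
scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub,front-proxy-ca.crt,front-proxy-ca.key} root@master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} root@master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf root@master2:/etc/kubernetes/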
The same steps apply on master3.
Run the join command printed by the init on master1, adding the --control-plane flag to join the node as a control-plane (master) node
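Shaped like this — token, hash and endpoint are placeholders, use the values your own init printed:

kubeadm join 192.168.44.158:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane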
Check the status
Run on node1, node2 and node3
To add worker nodes to the cluster, run the kubeadm join command printed by kubeadm init:
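The same join line without --control-plane (values again placeholders from your own init output):

kubeadm join 192.168.44.158:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash>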
Check the status
Create a pod in the Kubernetes cluster and verify that it runs normally:
Access it at: http://NodeIP:Port
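A quick sketch of that verification using nginx:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc    # read the mapped NodePort from the svc line, then open http://NodeIP:Port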
Building a highly available k8s cluster, 1.17.0 (ipvs networking)
Reference: https://jimmysong.io/kubernetes-handbook/practice/create-tls-and-secret-key.html
Installation environment
Master high availability
- Official documentation
- References: https://blog.51cto.com/ylw6006/2164981 and https://yq.aliyun.com/articles/679600
Pre-installation system configuration
Run on every machine that will join the k8s cluster
- Set up passwordless SSH between all cluster machines (a distribution sketch follows below)
# ssh-keygen
# copy the contents of id_rsa.pub into ~/.ssh/authorized_keys on each of the other machines
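A minimal sketch of distributing the key, assuming the node names below already resolve (see the hosts setup that follows):

for host in kube-node1 kube-node2 kube-node3; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host
done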
- Set up hosts
(set the permanent hostname, then log back in)
# set the hostname
# hostnamectl set-hostname kube-master
# cat /etc/hosts
10.2.33.5   kube-node1 nginx.btcexa.com test.btcexa.com k8s.grafana.btcexa.com
10.2.33.127 kube-node2 nginx.btcexa.com test.btcexa.com
10.2.33.65  kube-node3 nginx.btcexa.com test.btcexa.com
10.2.33.5 nginx.btcexa.com test.btcexa.com test-ning.btcexa.com k8s.grafana.btcexa.com k8s.prometheus.btcexa.com traefik-admin.btcexa.com traefik-nginx.btcexa.com
Kernel configuration
- Upgrade CentOS packages and the kernel
yum -y update
yum -y install yum-plugin-fastestmirror
yum install -y epel-release
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum -y --enablerepo=elrepo-kernel install kernel-ml
- Set the newest installed kernel as the default boot entry
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
- Stop the firewall and disable SELinux
systemctl stop firewalld && systemctl disable firewalld
sed -i "s/SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
setenforce 0
- Disable swap
swapoff -a
sed -i '/^.*swap.*/d' /etc/fstab
- System kernel parameters
- iptables-related kernel parameters
# modprobe overlay
# modprobe br_netfilter
# Setup required sysctl params, these persist across reboots.
cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
Or, alternatively, run the following
# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
# echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl -p /etc/sysctl.d/k8s.conf
[ -f /proc/sys/fs/may_detach_mounts ] && sed -i "/fs.may_detach_mounts/ d" /etc/sysctl.conf
[ -f /proc/sys/fs/may_detach_mounts ] && echo "fs.may_detach_mounts=1" >> /etc/sysctl.conf
sysctl -p
# very important
sysctl -w net.ipv6.conf.all.disable_ipv6=0
- Configure limits.conf
cat >> /etc/security/limits.conf << EOF
* soft nproc 1024000
* hard nproc 1024000
* soft nofile 1024000
* hard nofile 1024000
* soft core 1024000
* hard core 1024000
######big mem ########
#* hard memlock unlimited
#* soft memlock unlimited
EOF
- Configure 20-nproc.conf
sed -i s/4096/1024000/ /etc/security/limits.d/20-nproc.conf
- Set the journal log size and storage path
echo SystemMaxUse=600M >>/etc/systemd/journald.conf
mkdir -p /var/log/journal
chown root:systemd-journal /var/log/journal
chmod 2755 /var/log/journal
systemctl restart systemd-journald
Enable ipvs (kube-proxy)
- Install the dependency tools
# yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp socat fuse fuse-libs nfs-utils nfs-utils-lib pciutils ebtables ethtool
- Load ipvs for the current boot
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
- Load ipvs persistently
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# sysctl -p
- Verify that ipvs loaded successfully
# lsmod|grep ip_vs
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145497 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 133095 7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
# confirm the br_netfilter module
# lsmod | grep br_netfilter
# enable this kernel module so packets traversing the bridge are processed by iptables for filtering and port forwarding, and the kubernetes components in the cluster can communicate with each other
modprobe br_netfilter
# if kube-proxy is to use ipvs, the following modules must be present
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
Optional: if kube-proxy is to use ipvs, the modules above must be present on every Kubernetes node.
kubernetes installation
- Set up the global environment variables
# mkdir -p /opt/k8s/{bin,ssl,cfg}
- Generate the apiserver token file
# date|sha1sum|awk '{print $1}'
b681138df1a8e0c2ddb8daff35490435caa5ff7a
# cd /opt/k8s/ssl
# cat > /opt/k8s/ssl/token.csv <<EOF
b681138df1a8e0c2ddb8daff35490435caa5ff7a,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
- basic-auth configuration (can be skipped)
cat > /opt/k8s/ssl/basic-auth.csv <<EOF
admin,admin,1
readonly,readonly,2
EOF
# vim /opt/k8s/env.sh
export BOOTSTRAP_TOKEN=b681138df1a8e0c2ddb8daff35490435caa5ff7a
# it is best to use currently unused network ranges for the service and pod networks
# service network: unroutable before deployment, routable inside the cluster afterwards (guaranteed by kube-proxy and ipvs)
SERVICE_CIDR="10.254.0.0/16"
# pod network: a /16 is recommended; unroutable before deployment, routable inside the cluster afterwards (guaranteed by flanneld)
CLUSTER_CIDR="10.10.0.0/16"
# service port range (NodePort range)
export NODE_PORT_RANGE="30000-50000"
# array of cluster machine IPs
export NODE_IPS=(10.2.33.5 10.2.33.127 10.2.33.65)
# array of hostnames matching those IPs
export NODE_NAMES=(kube-node1 kube-node2 kube-node3)
# kube-apiserver node IP
export MASTER_IP=0.0.0.0
# internal https address for kube-apiserver
export KUBE_APISERVER="https://kubernetes.exa.local:6443"
# external https address for kube-apiserver
export KUBE_PUBLIC_APISERVER="https://kubernetes.btcexa.com:6443"
# etcd cluster endpoint list
export ETCD_ENDPOINTS="https://10.2.33.5:2379,https://10.2.33.127:2379,https://10.2.33.65:2379"
# etcd prefix for the flanneld network configuration
export FLANNEL_ETCD_PREFIX="/kubernetes/network"
# kubernetes service IP (normally the first IP in SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"
# cluster DNS service IP (preallocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.254.0.2"
export CLUSTER_DNS_DOMAIN="cluster.local."
Install the cfssl tools, which are used to sign certificates.
- Installing them on the master node alone is enough
cd /opt/k8s/
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/bin/cfssl
mv cfssljson_linux-amd64 /usr/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
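To confirm the binaries landed on PATH:

cfssl version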
Install docker and configure its registry mirror
# install Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce -y
# set the docker registry mirror
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io
# restart docker
(
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
)
Generate the etcd certificates
- Create a directory to hold the certificates and change into it.
cd /opt/k8s/ssl
- Create the JSON file used to generate the CA certificate, as below. Set expiry generously; replacing expired certificates is a real pain.
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
- Create the JSON file for the certificate signing request
cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shanghai",
      "ST": "Shanghai"
    }
  ]
}
EOF
- Generate the CA certificate (ca.pem) and key (ca-key.pem)
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2019/12/26 09:33:53 [INFO] generating a new CA key and certificate from CSR
2019/12/26 09:33:53 [INFO] generate received request
2019/12/26 09:33:53 [INFO] received CSR
2019/12/26 09:33:53 [INFO] generating key: rsa-2048
2019/12/26 09:33:53 [INFO] encoded CSR
2019/12/26 09:33:53 [INFO] signed certificate with serial number 76090837348387020865481584188520719234232827929
- The resulting files
ls ./
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
- Generate the certificate for etcd
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.254.0.1",
    "kubernetes.exa.local",
    "kubernetes.btcexa.com",
    "harbor.btcexa.com",
    "10.2.33.5",
    "10.2.33.127",
    "10.2.33.65"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai"
    }
  ]
}
EOF
An important note: because flanneld uses the etcd certificates, the first address of the planned service network must be added here; otherwise in-cluster access to https://10.254.0.1:443 fails with a certificate error.
- Generate
cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem -config=./ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
2019/12/26 09:34:26 [INFO] generate received request
2019/12/26 09:34:26 [INFO] received CSR
2019/12/26 09:34:26 [INFO] generating key: rsa-2048
2019/12/26 09:34:26 [INFO] encoded CSR
2019/12/26 09:34:26 [INFO] signed certificate with serial number 680872829262173782320244647098818402787647586534
2019/12/26 09:34:26 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Install etcd
- Download and unpack, then copy into the installation directory.
Official releases: https://github.com/etcd-io/etcd/releases
# cd /opt/k8s && wget https://github.com/coreos/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
# tar xf etcd-v3.3.13-linux-amd64.tar.gz
- Generate etcd's configuration file and systemd unit with the following setup script.
# vim init-etcd.sh
#!/bin/bash
source /opt/env.sh
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="$ETCD_NAME"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$ETCD_IP:2380"
ETCD_LISTEN_CLIENT_URLS="https://$ETCD_IP:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$ETCD_IP:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$ETCD_IP:2379"
ETCD_INITIAL_CLUSTER="$ETCD_CLUSTER"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=$WORK_DIR/cfg/etcd
ExecStart=$WORK_DIR/bin/etcd \
--name=\$ETCD_NAME \
--data-dir=\$ETCD_DATA_DIR \
--listen-peer-urls=\$ETCD_LISTEN_PEER_URLS \
--listen-client-urls=\$ETCD_LISTEN_CLIENT_URLS,http://127.0.0.1:2379 \
--advertise-client-urls=\$ETCD_ADVERTISE_CLIENT_URLS \
--initial-advertise-peer-urls=\$ETCD_INITIAL_ADVERTISE_PEER_URLS \
--initial-cluster=\$ETCD_INITIAL_CLUSTER \
--initial-cluster-token=\$ETCD_INITIAL_CLUSTER_TOKEN \
--initial-cluster-state=new \
--cert-file=$WORK_DIR/ssl/etcd.pem \
--key-file=$WORK_DIR/ssl/etcd-key.pem \
--peer-cert-file=$WORK_DIR/ssl/etcd.pem \
--peer-key-file=$WORK_DIR/ssl/etcd-key.pem \
--trusted-ca-file=$WORK_DIR/ssl/ca.pem \
--peer-trusted-ca-file=$WORK_DIR/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
- Deploy and install etcd
vim etcd_install.sh
#!/bin/bash
cp -avr /opt/k8s/env.sh /opt/env.sh
source /opt/env.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> $node_ip"
#####etcd
# create the etcd directories
ssh root@$node_ip "mkdir -p /opt/etcd/{cfg,bin,ssl}"
# copy the binaries
scp /opt/k8s/etcd-v3.3.13-linux-amd64/{etcd,etcdctl} root@$node_ip:/opt/etcd/bin/
scp /opt/k8s/env.sh root@$node_ip:/opt/
# copy the config-generation script
scp /opt/k8s/init-etcd.sh root@$node_ip:/opt/
# copy the certificates
cd /opt/k8s/ssl/
scp etcd*pem ca*.pem root@$node_ip:/opt/etcd/ssl/
#####
done
ssh root@10.2.33.5 "cd /opt/ && sh init-etcd.sh etcd01 10.2.33.5 etcd01=https://10.2.33.5:2380,etcd02=https://10.2.33.127:2380,etcd03=https://10.2.33.65:2380"
ssh root@10.2.33.127 "cd /opt && sh init-etcd.sh etcd02 10.2.33.127 etcd01=https://10.2.33.5:2380,etcd02=https://10.2.33.127:2380,etcd03=https://10.2.33.65:2380"
ssh root@10.2.33.65 "cd /opt/ && sh init-etcd.sh etcd03 10.2.33.65 etcd01=https://10.2.33.5:2380,etcd02=https://10.2.33.127:2380,etcd03=https://10.2.33.65:2380"
# sh etcd_install.sh
Start etcd on the first node; its terminal stays occupied until etcd has also been started on the other two nodes, at which point it is released. Run systemctl start etcd on all three nodes.
systemctl daemon-reload && systemctl enable etcd && systemctl start etcd
# test once all three etcd instances are running; correct output looks like this:
/opt/etcd/bin/etcdctl --endpoints=https://10.2.33.5:2379 --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/etcd.pem --key-file=/opt/etcd/ssl/etcd-key.pem cluster-health
member 255b6ed818720e20 is healthy: got healthy result from https://10.2.33.65:2379
member cbc6185ed5ac53ae is healthy: got healthy result from https://10.2.33.127:2379
member ccdbf5bbe09e862d is healthy: got healthy result from https://10.2.33.5:2379
cluster is healthy
Install kubernetes
Master node installation
Generate the master certificates
- Generate the apiserver certificate
/opt/k8s/ssl # the same CA used when generating the etcd certificates
cd /opt/k8s/ssl
cat > /opt/k8s/ssl/kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.2.33.5",
    "10.2.33.127",
    "10.2.33.65",
    "10.254.0.1",
    "kubernetes.exa.local",
    "kubernetes.btcexa.com",
    "harbor.btcexa.com",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Explanation: all master-related IPs must be added above, together with the cluster's first service address, 10.254.0.1.
# cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem -config=./ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
2019/12/26 09:40:28 [INFO] generate received request
2019/12/26 09:40:28 [INFO] received CSR
2019/12/26 09:40:28 [INFO] generating key: rsa-2048
2019/12/26 09:40:29 [INFO] encoded CSR
2019/12/26 09:40:29 [INFO] signed certificate with serial number 79307740170237095958081306786566929940321574452
2019/12/26 09:40:29 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
- List the files
ls ./
ca-config.json ca-csr.json ca.pem etcd.pem kubernetes.csr kubernetes-key.pem
ca.csr ca-key.pem etcd-key.pem init-etcd.sh kubernetes-csr.json kubernetes.pem
- Generate the kubectl certificate
cd /opt/k8s/ssl
# cat > /opt/k8s/ssl/admin-csr.json <<EOF
"CN": "admin",
"hosts": [],
"key":
"algo": "rsa",
"size": 2048
,
"names": [
"C": "CN",
"ST": "Shanghai",
"L": "Shanghai",
"O": "system:masters",
"OU": "System"
]
EOF
- Generate the certificate for the kubectl admin tool
# cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem -config=./ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2019/12/26 09:40:53 [INFO] generate received request
2019/12/26 09:40:53 [INFO] received CSR
2019/12/26 09:40:53 [INFO] generating key: rsa-2048
2019/12/26 09:40:53 [INFO] encoded CSR
2019/12/26 09:40:53 [INFO] signed certificate with serial number 232498819813658091378247501835328406476549876286
2019/12/26 09:40:53 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Download, unpack, and copy the files
Download page: https://kubernetes.io/docs/setup/release/notes/
cd /opt/k8s && wget https://storage.googleapis.com/kubernetes-release/release/v1.16.4/kubernetes-server-linux-amd64.tar.gz
cd /opt/k8s && wget https://storage.googleapis.com/kubernetes-release/release/v1.17.0/kubernetes-server-linux-amd64.tar.gz
tar -xf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{cfg,bin,ssl}
- Copy the binaries into the installation directory
cd /opt/k8s/kubernetes/server/bin/
\cp -avr kube-apiserver kube-controller-manager kube-scheduler kubectl /opt/kubernetes/bin/
- Copy the binaries to the other HA master's installation directory
scp kube-apiserver kube-controller-manager kube-scheduler kubectl root@10.2.33.127:/opt/kubernetes/bin/
- Copy the certificate files into the kubernetes ssl directory
cd /opt/k8s/ssl
\cp -avr kubernetes*pem ca*pem adm* token.csv /opt/kubernetes/ssl/
scp kubernetes*pem ca*pem adm* token.csv root@10.2.33.127:/opt/kubernetes/ssl/
Install the apiserver
- Install the apiserver with the following script
cd /opt/k8s
vim install-apiserver.sh
#!/bin/bash
source /opt/k8s/env.sh
#MASTER_ADDRESS=${1:-"10.2.33.5"}
#ETCD_SERVERS=${2:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=$ETCD_ENDPOINTS \\
--insecure-bind-address=127.0.0.1 \\
--bind-address=$MASTER_IP \\
--insecure-port=8080 \\
--secure-port=6443 \\
--advertise-address=$MASTER_IP \\
--allow-privileged=true \\
--service-cluster-ip-range=$SERVICE_CIDR \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/ssl/token.csv \\
--service-node-port-range=$NODE_PORT_RANGE \\
--tls-cert-file=/opt/etcd/ssl/etcd.pem \\
--tls-private-key-file=/opt/etcd/ssl/etcd-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/etcd.pem \\
--etcd-keyfile=/opt/etcd/ssl/etcd-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF
- Run the script above
sh install-apiserver.sh
- Copy the config file and unit file to the other master nodes
# scp /opt/kubernetes/cfg/kube-apiserver root@10.2.33.127:/opt/kubernetes/cfg/
# scp /usr/lib/systemd/system/kube-apiserver.service root@10.2.33.127:/usr/lib/systemd/system/
Install the controller-manager
- Install it with the following script
# vim install-controller-manager.sh
#!/bin/bash
source /opt/k8s/env.sh
MASTER_ADDRESS=${1:-"127.0.0.1"}
cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=$MASTER_ADDRESS:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=$SERVICE_CIDR \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
- Run the installation
# sh install-controller-manager.sh 127.0.0.1
- Copy the config file and unit file to the other master nodes
# scp /opt/kubernetes/cfg/kube-controller-manager root@10.2.33.127:/opt/kubernetes/cfg/
# scp /usr/lib/systemd/system/kube-controller-manager.service root@10.2.33.127:/usr/lib/systemd/system/
Install the scheduler
- Install the kube-scheduler service with the following script
# vim install_kube-scheduler.sh
#!/bin/bash
#
MASTER_ADDRESS=${1:-"127.0.0.1"}
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=$MASTER_ADDRESS:8080 \\
--leader-elect=true"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
- Install
# sh install_kube-scheduler.sh 127.0.0.1
- Copy the config file and unit file to the other master nodes
# scp /opt/kubernetes/cfg/kube-scheduler root@10.2.33.127:/opt/kubernetes/cfg/
# scp /usr/lib/systemd/system/kube-scheduler.service root@10.2.33.127:/usr/lib/systemd/system/
- Start the master node services
(
systemctl daemon-reload
systemctl enable kube-apiserver && systemctl restart kube-apiserver && systemctl status kube-apiserver
systemctl enable kube-controller-manager && systemctl restart kube-controller-manager && systemctl status kube-controller-manager
systemctl enable kube-scheduler && systemctl restart kube-scheduler && systemctl status kube-scheduler
)
(
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler
)
Install kubectl
- 1) Install the kubectl service with the following script (generates the internal kubeconfig)
Internal path: AWS DNS (kubernetes.exa.local) -> AWS internal ALB in TCP mode -> target group in TCP mode -> k8s master nodes (port 6443)
cat > /opt/k8s/kubectl_private_install.sh << EOF
# load the environment variables
source /opt/k8s/env.sh
# apiserver address
#KUBE_APISERVER=https://kubernetes.exa.local:6443
# set cluster parameters
/opt/kubernetes/bin/kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=\$KUBE_APISERVER --kubeconfig=admin_private.kubeconfig
# set client credentials
/opt/kubernetes/bin/kubectl config set-credentials admin --client-certificate=/opt/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/opt/kubernetes/ssl/admin-key.pem --kubeconfig=admin_private.kubeconfig
# set context parameters
# /opt/kubernetes/bin/kubectl config set-context kubernetes --cluster=kubernetes --user=admin --namespace=kube-system --kubeconfig=admin_private.kubeconfig
/opt/kubernetes/bin/kubectl config set-context kubernetes --cluster=kubernetes --user=admin --namespace=default --kubeconfig=admin_private.kubeconfig
# set the default context
/opt/kubernetes/bin/kubectl config use-context kubernetes --kubeconfig=admin_private.kubeconfig
EOF
Configure the kubectl service (this only needs to run once, on a single master node)
# sh /opt/k8s/kubectl_private_install.sh
Cluster "kubernetes" set.
User "admin" set.
Context "kubernetes" created.
Switched to context "kubernetes".
# copy admin_private.kubeconfig to /root/.kube/config
cp /opt/k8s/admin_private.kubeconfig /root/.kube/config
- 2) Install the kubectl service with the following script (generates the external kubeconfig)
External path: AWS DNS -> AWS external ALB in TCP mode -> target group in TCP mode -> k8s master nodes (port 6443)
cat > /opt/k8s/kubectl_public_install.sh << EOF
# load the environment variables
#source /opt/k8s/env.sh
# apiserver address
KUBE_APISERVER=https://kubernetes.btcexa.com:6443
# set cluster parameters
/opt/kubernetes/bin/kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=\$KUBE_APISERVER --kubeconfig=admin_public.kubeconfig
# set client credentials
/opt/kubernetes/bin/kubectl config set-credentials admin --client-certificate=/opt/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/opt/kubernetes/ssl/admin-key.pem --kubeconfig=admin_public.kubeconfig
# set context parameters
# /opt/kubernetes/bin/kubectl config set-context kubernetes --cluster=kubernetes --user=admin --namespace=kube-system --kubeconfig=admin_public.kubeconfig
/opt/kubernetes/bin/kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=admin_public.kubeconfig
# set the default context
/opt/kubernetes/bin/kubectl config use-context kubernetes --kubeconfig=admin_public.kubeconfig
EOF
Configure the kubectl service (again, once on a single master node is enough)
# sh /opt/k8s/kubectl_public_install.sh
Cluster "kubernetes" set.
User "admin" set.
Context "kubernetes" created.
Switched to context "kubernetes".
# copy admin_public.kubeconfig to /root/.kube/config
cp /opt/k8s/admin_public.kubeconfig /root/.kube/config
# if you need to manage the cluster over the public network (tested: public-network operations are slow!)
scp /opt/k8s/admin_public.kubeconfig root@10.2.33.127:/root/.kube/config
- Add the environment variable on all master nodes
- Add the kubernetes binaries to PATH (on every master node)
cat > /etc/profile.d/k8s.sh <<EOF
#!/bin/bash
export PATH=\$PATH:/opt/kubernetes/bin/
EOF
source /etc/profile.d/k8s.sh
- Use kubectl to check whether the multi-master setup succeeded; run this on every master node.
# kubectl get cs    // (an 'unknown' status issue remains unresolved on 1.16.0 and 1.16.4; 1.17.0 is fine)
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
# kubectl cluster-info
Kubernetes master is running at https://kubernetes.exa.local:6443
To further debug and diagnose cluster problems, use kubectl cluster-info dump.
Install node nodes and add them to the cluster (machine initialization, docker install, etc.)
Install the kubelet, kube-proxy and flannel components
- Generate the kube-proxy certificate
cat > /opt/k8s/ssl/kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# generate the certificate
# cd /opt/k8s/ssl/
# cfssl gencert -ca=/opt/k8s/ssl/ca.pem -ca-key=/opt/k8s/ssl/ca-key.pem -config=/opt/k8s/ssl/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2019/12/26 09:59:43 [INFO] generate received request
2019/12/26 09:59:43 [INFO] received CSR
2019/12/26 09:59:43 [INFO] generating key: rsa-2048
2019/12/26 09:59:43 [INFO] encoded CSR
2019/12/26 09:59:43 [INFO] signed certificate with serial number 157028017693635972642773375308791716823103748513
2019/12/26 09:59:43 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
- Generate the flannel certificate
cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "shanghai",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
- Generate the certificate
# cd /opt/k8s/ssl
# cfssl gencert -ca=/opt/k8s/ssl/ca.pem \
-ca-key=/opt/k8s/ssl/ca-key.pem \
-config=/opt/k8s/ssl/ca-config.json \
-profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
2019/12/26 10:00:09 [INFO] generate received request
2019/12/26 10:00:09 [INFO] received CSR
2019/12/26 10:00:09 [INFO] generating key: rsa-2048
2019/12/26 10:00:09 [INFO] encoded CSR
2019/12/26 10:00:09 [INFO] signed certificate with serial number 113796707096533245041379767771722538790347756007
2019/12/26 10:00:09 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
- Create the cluster role binding
# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
- On the master node, generate the bootstrap.kubeconfig and kube-proxy.kubeconfig files.
cd /opt/k8s/
vim gen-kubeconfig.sh
# load the environment variables
source /opt/k8s/env.sh
#---------create the kubelet bootstrapping kubeconfig------------
#BOOTSTRAP_TOKEN=c76835f029914e3693a9834295bb840910211916 # must match /opt/kubernetes/ssl/token.csv
# set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=$KUBE_APISERVER \
--kubeconfig=bootstrap.kubeconfig
# set client credentials
kubectl config set-credentials kubelet-bootstrap \
--token=$BOOTSTRAP_TOKEN \
--kubeconfig=bootstrap.kubeconfig
# set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#---------create the kube-proxy kubeconfig-------------
# set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=$KUBE_APISERVER \
--kubeconfig=kube-proxy.kubeconfig
# set client credentials
kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
--client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
# set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig