linux12k8s --> 12 Deploying a Highly Available k8s Cluster with kubeadm
Installing k8s with kubeadm
1. Cluster types
# Kubernetes clusters broadly fall into two categories: single-master and multi-master.
# 1. Single master, multiple nodes:
One Master node with multiple Node nodes. Simple to set up, but the master is a single point of failure; suitable for test environments.
# 2. Multiple masters, multiple nodes:
Multiple Master nodes with multiple Node nodes. More work to set up, but far more resilient; suitable for production environments.
2. Installation methods
Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
# Option 1: kubeadm
kubeadm is a k8s deployment tool that provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster.
# Option 2: binary packages
Download the release binaries from GitHub and deploy each component by hand to assemble the cluster.
kubeadm lowers the barrier to entry but hides many details, so problems are harder to trace. If you want more control, deploy from binaries: it is more manual work, but you learn how the pieces fit together, which also helps with later maintenance.
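A minimal sketch of the kubeadm workflow (the exact commands and config this guide uses appear in section 7 below):
# on the first master
kubeadm init
# on every other node, using the token and CA hash printed by init
kubeadm join <apiserver-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>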
3. High-availability architecture diagram
I. Preparing the Environment (host machine with 16 GB+ RAM)
1. Software and system requirements

| Software | Version |
| --- | --- |
| CentOS | CentOS Linux release 7.5 or later |
| Docker | 19.03.12 |
| Kubernetes | v1.21.3 |
| Flannel | v0.14.0 |
| kernel-lt | kernel-lt-5.4.137-1.el7.elrepo.x86_64.rpm |
| kernel-lt-devel | kernel-lt-devel-5.4.137-1.el7.elrepo.x86_64.rpm |
2. Node plan

- Use 192.168.x.x addresses to avoid conflicts with the Kubernetes internal networks.

| Machine | IP | Spec | Kernel version |
| --- | --- | --- | --- |
| k8s-master1 | 192.168.15.111 | 2 CPU / 2 GB | 4.4+ |
| k8s-master2 | 192.168.15.112 | 2 CPU / 2 GB | 4.4+ |
| k8s-master3 | 192.168.15.113 | 2 CPU / 2 GB | 4.4+ |
| k8s-node1 | 192.168.15.114 | 2 CPU / 2 GB | 4.4+ |
| k8s-node2 | 192.168.15.115 | 2 CPU / 2 GB | 4.4+ |
II. Installing k8s with kubeadm
Servers need at least 2 cores and 2 GB of RAM. If a machine falls short, add --ignore-preflight-errors=NumCPU to the cluster init command, as sketched below.
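For example (a sketch; the real init command appears in section 7):
kubeadm init --config init-config.yaml --upload-certs --ignore-preflight-errors=NumCPU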
1. Kernel optimization script (all machines)
[root@k8s-m-01 ~]# vim base.sh
#!/bin/bash
# 1. Set the hostname and NIC config ($1 = hostname, $2 = host part of the IP)
hostnamectl set-hostname $1 &&\
sed -i "s#111#$2#g" /etc/sysconfig/network-scripts/ifcfg-eth[01] &&\
systemctl restart network &&\
# 2. Disable SELinux and the firewall, and speed up SSH logins
setenforce 0 &&\
sed -i 's#enforcing#disabled#g' /etc/selinux/config &&\
systemctl disable --now firewalld &&\
# skip the next line if iptables is not installed as a service
# systemctl disable --now iptables &&\
sed -i 's/#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config &&\
systemctl restart sshd &&\
# 3. Disable the swap partition
# Once swap kicks in, system performance drops sharply, so k8s requires swap to be off.
# cat /etc/fstab
# comment out the swap line at the end; skip if no swap is configured
swapoff -a &&\
# tell kubelet to tolerate swap
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet &&\
# 4. Add all cluster machines to /etc/hosts (matching the node plan above)
cat >>/etc/hosts <<EOF
192.168.15.111 k8s-m-01 m1
192.168.15.112 k8s-m-02 m2
192.168.15.113 k8s-m-03 m3
192.168.15.114 k8s-n-01 n1
192.168.15.115 k8s-n-02 n2
EOF
# 5. Configure domestic yum mirrors
# The default CentOS repos are very slow from inside China; replace them with a mature domestic mirror such as Tsinghua, NetEase, or (here) Aliyun.
rm -rf /etc/yum.repos.d/* &&\
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo &&\
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo &&\
yum clean all &&\
yum makecache &&\
# 6. Update the system
# check the kernel version first; if it is already above 4.0, the --exclude option can be dropped
yum update -y --exclude=kernel* &&\
# Docker needs newer kernel features such as ipvs, so a 4.0+ kernel is required (4.18+ recommended); CentOS 8 needs no kernel update
# 7. Install common base tools for day-to-day use
yum install wget expect vim net-tools ntp bash-completion ipvsadm ipset jq iptables conntrack sysstat libseccomp ntpdate -y &&\
# 8. Download a newer kernel (not needed on CentOS 8)
cd /opt/ &&\
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-5.4.137-1.el7.elrepo.x86_64.rpm &&\
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-devel-5.4.137-1.el7.elrepo.x86_64.rpm &&\
# kernels below 4.0 have bugs that can cause traffic jitter under heavy production load
# mirror: https://elrepo.org/linux/kernel/el7/x86_64/RPMS/
# 9. Install the new kernel
yum localinstall /opt/kernel-lt* -y &&\
# 10. Make it the default boot entry
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg &&\
# 11. Show the default kernel, then reboot
grubby --default-kernel &&\
reboot
# after the reboot the machine runs the 5.4 kernel
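A hypothetical invocation (assumption: $1 is the new hostname and $2 is the host part that replaces "111" in the eth0/eth1 configs):
[root@k8s-m-01 ~]# bash base.sh k8s-m-01 111
# on the second master: bash base.sh k8s-m-02 112, and so on for the other machines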
2. Passwordless SSH and time sync (all machines)
# 1. Passwordless login
[root@k8s-master-01 ~]# ssh-keygen -t rsa
[root@k8s-master-01 ~]# for i in m1 m2 m3 n1 n2;do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i;done
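If typing the password five times is a bother, a hypothetical non-interactive variant using expect (installed by base.sh earlier) could look like this, assuming every host shares the root password stored in $PASS:
PASS='your-root-password'
for i in m1 m2 m3 n1 n2; do
  expect -c "
  spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
  expect {
    \"yes/no\" { send \"yes\r\"; exp_continue }
    \"password\" { send \"$PASS\r\" }
  }
  expect eof
  "
done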
# In a cluster, time is critical: if one machine's clock drifts from the rest, the cluster can run into all sorts of problems, so synchronize every machine before deploying.
Option 1: time sync with ntpdate
# 2. Add the sync to a cron job (crontab -e)
# refresh every 5 minutes
*/5 * * * * /usr/sbin/ntpdate ntp.aliyun.com &> /dev/null
Option 2: time sync with chrony
[root@k8s-m-01 ~]# yum -y install chrony
[root@k8s-m-01 ~]# systemctl enable --now chronyd
[root@k8s-m-01 ~]# date   # check that all machines show the same time
Mon Aug 2 10:44:18 CST 2021
3. Install IPVS and tune kernel parameters (all machines)
Kubernetes Services have two proxy modes: iptables and ipvs.
ipvs performs better of the two, but to use it the ipvs kernel modules must be loaded by hand; a sketch for switching modes follows below.
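Once the cluster is running, kube-proxy can be switched from the default iptables mode to ipvs; a sketch (assumption: the cluster was built with kubeadm, so kube-proxy's settings live in a ConfigMap):
# set mode: "ipvs" in the kube-proxy configmap
kubectl -n kube-system edit configmap kube-proxy
# restart the kube-proxy pods so they pick up the change
kubectl -n kube-system delete pods -l k8s-app=kube-proxy
# verify: Services now appear as ipvs virtual servers
ipvsadm -Ln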
# 1. Install IPVS and load its modules (all nodes)
[root@k8s-m-01 ~]# yum install ipset ipvsadm   # only if these two commands are missing
ipvs is a kernel module with very high forwarding performance; it is usually the first choice.
[root@k8s-n-01 ~]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
/sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
if [ $? -eq 0 ]; then
/sbin/modprobe ${kernel_module}
fi
done
# 2. Make the script executable and load the modules (all nodes)
[root@k8s-n-01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
# 3. Kernel parameter tuning (all nodes)
Load the IPVS modules, then apply the configuration below.
The tuning makes the system better suited to running Kubernetes.
[root@k8s-n-01 ~]# vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1   # required so bridged pod traffic passes through iptables/ipvs
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1   # do not check whether enough physical memory is available
vm.swappiness=0          # avoid swap; it may only be used when the system is about to OOM
vm.panic_on_oom=0        # do not panic on OOM; let the OOM killer act
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.netfilter.nf_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
# Apply immediately
sysctl --system
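A quick check that the modules and settings took effect:
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
lsmod | grep -e ip_vs -e nf_conntrack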
4. Install Docker (all machines)
1. Docker install script
Option 1: Huawei Cloud mirror
[root@k8s-m-01 ~]# vim docker.sh
# 1. Remove any previously installed Docker
sudo yum remove docker docker-common docker-selinux docker-engine &&\
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 &&\
# 2. Install the Docker repo
wget -O /etc/yum.repos.d/docker-ce.repo https://repo.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo &&\
# 3. Point the repo at the Huawei Cloud mirror
sudo sed -i 's+download.docker.com+repo.huaweicloud.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo &&\
# 4. Rebuild the yum cache
yum clean all &&\
yum makecache &&\
# 5. Install Docker
sudo yum makecache fast &&\
sudo yum install docker-ce -y &&\
# 6. Enable and start Docker on boot
systemctl enable --now docker.service
# 7. Create the Docker config dir and set a registry mirror (all nodes) -- run this part separately; it speeds up image pulls
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://k7eoap03.mirror.aliyuncs.com"]
}
EOF
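Restart Docker and confirm the mirror took effect (a quick check, not part of the original script):
sudo systemctl daemon-reload && sudo systemctl restart docker
docker info | grep -A1 'Registry Mirrors'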
Option 2: Aliyun mirror
[root@k8s-n-01 ~]# vim docker.sh
# Step 1: install required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 &&\
# Step 2: add the repo
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo &&\
# Step 3: point the repo at the Aliyun mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo &&\
# Step 4: refresh the cache and install Docker CE
sudo yum makecache fast &&\
sudo yum -y install docker-ce &&\
# Step 5: enable and start the Docker service
systemctl enable --now docker.service &&\
# Step 6: registry mirror for faster image pulls
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://k7eoap03.mirror.aliyuncs.com"]
}
EOF
2. Uninstalling Docker
# 1. Remove older versions
sudo yum remove docker \
               docker-client \
               docker-client-latest \
               docker-common \
               docker-latest \
               docker-latest-logrotate \
               docker-logrotate \
               docker-engine
# 2. Remove the packages and dependencies
yum remove docker-ce docker-ce-cli containerd.io -y
# 3. Delete the data directory
rm -rf /var/lib/docker   # Docker's default working directory
# 4. Registry mirror (Docker optimization)
- Log in to Aliyun and open the Container Registry service
- Find your personal accelerator address
- Configure it in /etc/docker/daemon.json as shown above
5. Install Kubernetes components (all machines)
# 1. Aliyun Kubernetes yum repo
[root@k8s-n-02 yum.repos.d]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# 2. yum install -y kubelet kubeadm kubectl would pull the latest version; this guide pins 1.21.3
yum install kubectl-1.21.3 kubeadm-1.21.3 kubelet-1.21.3 -y
# 3. Enable kubelet on boot; it will crash-loop until the cluster is initialized, which is expected
systemctl enable --now kubelet.service
# 4. Check the version
[root@k8s-m-01 ~]# kubectl version
[root@k8s-m-01 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3",
6. Making kubeadm highly available
1. Install the HA software (all master nodes)
# Any load balancer works, as long as it makes the api-server highly available
# Officially recommended: keepalived + haproxy
[root@k8s-m-01 ~]# yum install -y keepalived haproxy
2. Configure keepalived (all master nodes)
# 1. The configuration differs per node; see the inline comments
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak
cd /etc/keepalived
KUBE_APISERVER_IP=`hostname -i`
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_kubernetes {
script "/etc/keepalived/check_kubernetes.sh"
interval 2
weight -5
fall 3
rise 2
}
vrrp_instance VI_1 {
state MASTER # change to BACKUP on m2 and m3
interface eth1
mcast_src_ip ${KUBE_APISERVER_IP}
virtual_router_id 51
priority 100 # priority: use 90 on m2 and 80 on m3
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
172.16.1.116
}
}
EOF
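The vrrp_script block above calls /etc/keepalived/check_kubernetes.sh, which this guide never shows. A minimal sketch, assuming the intent is to drop this node's priority (and release the VIP) when haproxy dies:
cat > /etc/keepalived/check_kubernetes.sh <<'EOF'
#!/bin/bash
# exit non-zero when haproxy is not running, so keepalived subtracts "weight" from the priority
pgrep -x haproxy &>/dev/null || exit 1
EOF
chmod +x /etc/keepalived/check_kubernetes.sh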
# 2. Reload systemd and start keepalived
[root@k8s-m-01 keepalived]# systemctl daemon-reload
[root@k8s-m-01 /etc/keepalived]# systemctl enable --now keepalived
# 3. Verify that keepalived is running and holds the VIP
[root@k8s-m-01 keepalived]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2021-08-01 14:48:23 CST; 27s ago
[root@k8s-m-01 keepalived]# ip a |grep 116
inet 172.16.1.116/32 scope global eth1
3. Configure haproxy (all master nodes)
# 1. haproxy load-balances across the api-servers; on a public cloud an SLB would fill this role
[root@k8s-m-01 keepalived]# vim /etc/haproxy/haproxy.cfg
global
maxconn 2000
ulimit-n 16384
log 127.0.0.1 local0 err
stats timeout 30s
defaults
log global
mode http
option httplog
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-request 15s
timeout http-keep-alive 15s
frontend monitor-in
bind *:33305
mode http
option httplog
monitor-uri /monitor
listen stats
bind *:8006
mode http
stats enable
stats hide-version
stats uri /stats
stats refresh 30s
stats realm Haproxy\ Statistics
stats auth admin:admin
frontend k8s-master
bind 0.0.0.0:8443
bind 127.0.0.1:8443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-master
backend k8s-master
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server k8s-m-01 172.16.1.111:6443 check inter 2000 fall 2 rise 2 weight 100
server k8s-m-02 172.16.1.112:6443 check inter 2000 fall 2 rise 2 weight 100
server k8s-m-03 172.16.1.113:6443 check inter 2000 fall 2 rise 2 weight 100
# 2. Start haproxy
[root@k8s-m-01 keepalived]# systemctl daemon-reload
[root@k8s-m-01 /etc/keepalived]# systemctl enable --now haproxy.service
# 3. Check the service status
[root@k8s-m-01 keepalived]# systemctl status haproxy.service
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2021-07-16 21:12:00 CST; 27s ago
Main PID: 4997 (haproxy-systemd)
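Optional sanity checks against the frontends defined above (run on a master node):
[root@k8s-m-01 ~]# curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:33305/monitor   # expect 200
# the stats page from the listen block is at http://<master-ip>:8006/stats (admin:admin)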
7. Initializing the m01 master node
1. Images required by Kubernetes
# 1. List the required images
[root@k8s-m-01 ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
quay.io/coreos/flannel:v0.14.0
# 2. The same image list against an Aliyun repository
[root@k8s-m-01 ~]# kubeadm config images list --image-repository=registry.cn-shanghai.aliyuncs.com/mmk8s
registry.cn-shanghai.aliyuncs.com/mmk8s/kube-apiserver:v1.21.3
registry.cn-shanghai.aliyuncs.com/mmk8s/kube-controller-manager:v1.21.3
registry.cn-shanghai.aliyuncs.com/mmk8s/kube-scheduler:v1.21.3
registry.cn-shanghai.aliyuncs.com/mmk8s/kube-proxy:v1.21.3
registry.cn-shanghai.aliyuncs.com/mmk8s/pause:3.4.1
registry.cn-shanghai.aliyuncs.com/mmk8s/etcd:3.4.13-0
registry.cn-shanghai.aliyuncs.com/mmk8s/coredns:v1.8.0
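Optionally pre-pull the images on every master before initializing, to avoid pull timeouts during kubeadm init (same repository and version as above):
[root@k8s-m-01 ~]# kubeadm config images pull --image-repository=registry.cn-shanghai.aliyuncs.com/mmk8s --kubernetes-version=v1.21.3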
2. Deploy the m01 master node
# 1. Generate a default init configuration file
[root@k8s-m-01 ~]# kubeadm config print init-defaults >init-config.yaml
# 2. Edit init-config.yaml
[root@k8s-m-01 ~]# vim init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef   # everyone's token differs
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.1.111   # this host's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-m-01                   # this host's name
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 172.16.1.116                   # the HA virtual IP
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
controlPlaneEndpoint: 172.16.1.116:8443   # the HA virtual IP (haproxy frontend port)
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-shanghai.aliyuncs.com/baim0os   # you can point this at your own image repository
kind: ClusterConfiguration
kubernetesVersion: 1.21.3          # version number
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16         # pod network CIDR
  serviceSubnet: 10.96.0.0/12
scheduler: {}
# 3. Initialize the cluster
[root@k8s-m-01 ~]# kubeadm init --config init-config.yaml --upload-certs
You can now join any number of the control-plane node running the following command on each as root:
# copy this command to join the other master nodes
kubeadm join 172.16.1.116:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:3c24cf3218a243148f20c6804d3766d2b6cd5dadc620313d0cf2dcbfd1626c5d \
    --control-plane --certificate-key 1e852aa82be85e8b1b4776cce3a0519b1d0b1f76e5633e5262e2436e8f165993
Then you can join any number of worker nodes by running the following on each as root:
# copy this command to join the worker nodes
kubeadm join 172.16.1.116:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:3c24cf3218a243148f20c6804d3766d2b6cd5dadc620313d0cf2dcbfd1626c5d
# 4. Looking up the join command again
# worker nodes need the token; re-running the command below on a master prints it without changing it
[root@k8s-m-01 ~]# kubeadm token create --print-join-command
kubeadm join 172.16.1.116:8443 --token pfu0ek.ndis39t916v9clq1 --discovery-token-ca-cert-hash sha256:3c24cf3218a243148f20c6804d3766d2b6cd5dadc620313d0cf2dcbfd1626c5d
# 5. After initialization completes, restart kubelet
[root@k8s-m-01 ~]# systemctl restart kubelet.service
# 6. Label the worker nodes (run on master01 after they join; requires the kubeconfig set up in step 7)
[root@k8s-m-01 ~]# kubectl label nodes k8s-n-01 node-role.kubernetes.io/node=n01
node/k8s-n-01 labeled
[root@k8s-m-01 ~]# kubectl label nodes k8s-n-02 node-role.kubernetes.io/node=n02
node/k8s-n-02 labeled
[root@k8s-m-01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-m-01 Ready control-plane,master 73m v1.21.3
k8s-m-02 Ready control-plane,master 63m v1.21.3
k8s-m-03 Ready control-plane,master 63m v1.21.3
k8s-n-01 Ready node 2m40s v1.21.3
k8s-n-02 Ready node 62m v1.21.3
# 7. Set up the cluster kubeconfig for your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# 8. If using the root user, you can instead point kubectl at the admin config (optional)
# 临时生效
[root@k8s-m-01 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
# 永久生效
[root@k8s-m-01 ~]# vim /etc/profile.d/kubernetes.sh
export KUBECONFIG=/etc/kubernetes/admin.conf
[root@k8s-m-01 ~]# source /etc/profile
# 9. Enable kubectl command completion (all nodes)
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
3. Troubleshooting
# 1. Joining a node to the cluster may fail with an error like:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
PS: make sure Docker is installed and running before retrying the join.
# 1. Cause:
The br_netfilter module is not loaded, so /proc/sys/net/bridge/bridge-nf-call-iptables is missing or not set to 1.
# 2. Fix:
1> Run the following three commands, then retry the join command (a persistence note follows below):
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward
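2> To make the fix survive reboots (the sysctl keys are already persisted in /etc/sysctl.d/k8s.conf from the IPVS section; what may be missing is loading br_netfilter at boot):
echo 'br_netfilter' > /etc/modules-load.d/k8s.conf
sysctl --system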
# 2. kubectl get cs shows scheduler and controller-manager as Unhealthy
[root@k8s-m-01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0 Healthy {"health":"true"}
1> Fix: comment out the --port=0 flag in both static-pod manifests, then restart kubelet
[root@k8s-m-01 ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
#- --port=0
[root@k8s-m-01 ~]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
#- --port=0
[root@k8s-m-01 ~]# systemctl restart kubelet.service
2> Check the status again
[root@k8s-m-01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
8. The Calico network plugin for Kubernetes
Calico is a pure layer-3 solution that provides multi-host networking for OpenStack VMs and Docker containers. Unlike overlay networks such as flannel or the libnetwork overlay driver, it uses virtual routing instead of virtual switching: every virtual router advertises reachability information (routes) to the rest of the data center over BGP.
1. Install the Calico network manifest (master node)
# 1. Download the manifest and apply it
[root@k8s-m-01 ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
[root@k8s-m-01 ~]# kubectl apply -f calico.yaml
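Watch the rollout; the nodes turn Ready once calico-node is Running on every machine:
[root@k8s-m-01 ~]# kubectl get pods -n kube-system -l k8s-app=calico-node -w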
2. Check the cluster state
# Option 1: check the nodes
[root@k8s-m-01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-m-01 Ready control-plane,master 14m v1.21.3
k8s-m-02 Ready control-plane,master 4m43s v1.21.3
k8s-m-03 Ready control-plane,master 4m36s v1.21.3
k8s-n-01 Ready node 3m2s v1.21.3
k8s-n-02 Ready node 3m2s v1.21.3
# Option 2: DNS test
[root@k8s-m-01 ~]# kubectl run test -it --rm --image=busybox:1.28.3
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes   # run this; a successful lookup prints the following
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ #
# output like the above means the cluster DNS works
III. Installing the cluster web UI (Dashboard)
Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to the cluster, troubleshoot them, and manage the cluster itself and its resources: view an overview of running applications, and create or modify Kubernetes resources such as Deployments, Jobs, and DaemonSets.
1. Install the web UI
Through the Dashboard you can scale a Deployment, start a rolling update, restart a Pod, or use a wizard to create new applications.
# 1. Download the manifest and apply it
Option 1: download from GitHub
[root@k8s-m-01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
Option 2: download from the author's mirror site and apply
[root@k8s-m-01 ~]# wget http://www.mmin.xyz:81/package/k8s/recommended.yaml
[root@k8s-m-01 ~]# kubectl apply -f recommended.yaml
Option 3: download and apply in one step
[root@k8s-m-01 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
# 2. Check the Service ports
[root@k8s-m-01 ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.109.68.74 <none> 8000/TCP 30s
kubernetes-dashboard ClusterIP 10.105.125.10 <none> 443/TCP 34s
# 3. Expose a port for external access
[root@k8s-m-01 ~]# kubectl edit svc -n kubernetes-dashboard kubernetes-dashboard
type: ClusterIP => type: NodePort   # change ClusterIP to NodePort
# 4. Check the ports again (note the NodePort, 40927 here)
[root@k8s-m-01 ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.96.44.119 <none> 8000/TCP 12m
kubernetes-dashboard NodePort 10.96.42.127 <none> 443:40927/TCP 12m
# 5. Create the token (ServiceAccount + ClusterRoleBinding) manifest
[root@k8s-m-01 ~]# vim token.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
# 6. Apply it to the cluster
[root@k8s-m-01 ~]# kubectl apply -f token.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
# 7. Retrieve the login token
[root@k8s-m-01 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep token: | awk '{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1NeTJxSDZmaFc1a00zWVRXTHdQSlZlQnNjWUdQMW1zMjg5OTBZQ1JxNVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWpxMm56Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyN2Q4MjIzYi1jYmY1LTQ5ZTUtYjAxMS1hZTAzMzM2MzVhYzQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Q4gC_Kr_Ltl_zG0xkhSri7FQrXxdA5Zjb4ELd7-bVbc_9kAe292w0VM_fVJky5FtldsY0XOp6zbiDVCPkmJi9NXT-P09WvPc9g-ISbbQB_QRIWrEWF544TmRSTZJW5rvafhbfONtqZ_3vWtMkCiDsf7EAwDWLLqA5T46bAn-fncehiV0pf0x_X16t72Qqa-aizHBrVcMsXQU0wnYC7jt373pnhnFHYdcJXx_LgHaC1LgCzx5BfkuphiYOaj_dVB6tAlRkQo3QkFP9GIBW3LcVfhOQBmMQl8KeHvBW4QC67PQRv55IUaUDJ_lRC2QKbeJzaUto-ER4YxFwr4tncBwZQ
# 8. Verify that the cluster works
[root@k8s-m-01 kubernetes]# kubectl run test01 -it --rm --image=busybox:1.28.3
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Address 1: 10.96.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/
# 9. Log in with the token
https://192.168.15.111:40927   # the NodePort from step 4; the Dashboard serves HTTPS only