Kubernetes v1.23.6: fixing the initialization error "[kubelet-check] It seems like the kubelet isn't running or healthy"


While initializing the master node of a Kubernetes v1.23.6 cluster with the following command:

[root@k8s-master qq-5201351]# kubeadm init \
> --apiserver-advertise-address 192.18.106.87 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.23.6 \
> --service-cidr=10.96.0.0/12 \
> --pod-network-cidr=10.224.0.0/16

the following error was reported:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

The core of the report comes down to the following lines; the key message is that the kubelet isn't running or healthy:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

The error was a little puzzling at first glance, but on closer reading the output lists troubleshooting commands and likely causes, and one of them turns out to be the useful hint:

- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

This points to a Docker configuration problem: recent kubeadm versions (v1.22 and later) default the kubelet to the systemd cgroup driver, while Docker defaults to cgroupfs, and that mismatch keeps the kubelet from starting. So the following fix was tried.

In the /etc/docker/daemon.json configuration file, add at least the following entry (other configuration options can be added inside the same braces):


"exec-opts":["native.cgroupdriver=systemd"]

Note: Docker's default Cgroup Driver is cgroupfs; the setting above switches it to systemd, which can be verified with docker info.
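For example, a quick before/after check of the active driver (a minimal sketch; the exact wording of the docker info output may vary slightly between Docker versions):

docker info | grep -i "cgroup driver"
# after the change and a Docker restart this should report: Cgroup Driver: systemd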

Then restart Docker so the configuration takes effect. Since the master node setup had only just started, the author also ran kubeadm reset to wipe the failed initialization.
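A minimal command sequence for this step might look like the following (assuming a systemd-managed Docker service; kubeadm reset -f skips the interactive confirmation):

systemctl daemon-reload
systemctl restart docker
kubeadm reset -f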

After that, re-running kubeadm init on the master node completes without this error.
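Once the init succeeds, the usual follow-up (also printed by kubeadm itself on success) is to copy the admin kubeconfig and confirm the node has registered:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes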

 

 

 

Please respect others' work. If you repost this article, be sure to credit the original source: https://www.cnblogs.com/5201351/p/17378926.html

 

Deploying a multi-node Kubernetes v1.18.6 and KubeSphere v3.0.0 cluster with KubeSphere's kk: a record of the pitfalls, and some reflections

Preface

KubeSphere® is one of the CNCF-certified mainstream open-source Kubernetes distributions. On top of Kubernetes it provides a range of container-centric business features, such as multi-tenant management, cluster operations, application management, DevOps, and microservice governance.

Recently our microservices needed to be deployed on Kubernetes, with KubeSphere as the application-centric container management platform. The first deployment succeeded but seemed unstable, so the four servers were restored from images and everything was redeployed. Many problems and setbacks came up along the way; they are recorded here for reference.

Preparing the servers

master: 172.16.7.12  CentOS 7.5 + 8 CPU + 16G RAM + 20G (/) + 200G (data)
node5: 172.16.7.15  CentOS 7.5 + 8 CPU + 16G RAM + 20G (/) + 1T (data)
node3: 172.16.7.16  CentOS 7.5 + 8 CPU + 16G RAM + 20G (/) + 1T (data)
node4: 172.16.7.17  CentOS 7.5 + 8 CPU + 16G RAM + 20G (/) + 1T (data)

System tuning and prerequisite software

Automated system tuning

Following the earlier article "Deploying a CDH cluster with ansible vs. manual installation, plus CM and user permission configuration", first tune the systems:

sh deploy_robot.sh init_ssh
sh deploy_robot.sh init_sys

The deploy_robot.sh script is available at https://github.com/fleapx/cdh-deploy-robot.git

Installing the ansible tooling

# Install ansible on the control machine
yum install -y ansible

Edit /etc/ansible/hosts to list the hosts to be managed.

[all]
172.16.7.12  master
172.16.7.15  node5
172.16.7.16  node3
172.16.7.17  node4
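Before running the ad-hoc commands below, it is worth confirming that the inventory and passwordless SSH both work (a quick check, assuming SSH keys were already distributed by the init_ssh step above):

ansible all -m ping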

The default configuration also needs a tweak; edit /etc/ansible/ansible.cfg:

# uncomment this to disable SSH key host checking
host_key_checking = False

Installing other system packages

ansible all -m shell -a "wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo"
ansible all -m shell -a "sed -i 's/^.*aliyuncs*/#&/g' /etc/yum.repos.d/CentOS-Base.repo"
ansible all -m shell -a "wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo"
ansible all -m shell -a "yum -y install ebtables socat ipset conntrack nfs-utils rpcbind"
ansible all -m shell -a "yum install -y vim wget yum-utils device-mapper-persistent-data lvm2"

Synchronizing cluster time

ansible all -m shell -a "yum install chrony -y"
ansible all -m shell -a "systemctl  start chronyd"
ansible all -m shell -a "sed -i -e '/^server/s/^/#/' -e '1a server ntp.aliyun.com iburst' /etc/chrony.conf"
ansible all -m shell -a "systemctl  restart chronyd"
ansible all -m shell -a "timedatectl set-timezone Asia/Shanghai"
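To confirm every node is actually syncing against ntp.aliyun.com, the chrony sources can be inspected on all hosts (a verification sketch; chronyc is installed together with the chrony package above):

ansible all -m shell -a "chronyc sources -v"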

Additional system tuning

ansible all -m shell -a "echo '* soft nofile 655360' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* hard nofile 655360' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* soft nproc 655360'  >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* hard nproc 655360'  >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* soft  memlock  unlimited'  >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* hard memlock  unlimited'  >> /etc/security/limits.conf"
ansible all -m shell -a "echo 'DefaultLimitNOFILE=1024000'  >> /etc/systemd/system.conf" 
ansible all -m shell -a "echo 'DefaultLimitNPROC=1024000'  >> /etc/systemd/system.conf"

Opening all ports

ansible all -m shell -a "iptables -P INPUT ACCEPT"
ansible all -m shell -a "iptables -P FORWARD ACCEPT"
ansible all -m shell -a "iptables -P OUTPUT ACCEPT"
ansible all -m shell -a "iptables -F"
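These commands only set the default policies to ACCEPT and flush the filter table; to double-check the result on every node, the active rules can be listed (verification sketch):

ansible all -m shell -a "iptables -S"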

Installing Docker

For installing Docker on CentOS 7, configuring the Aliyun registry mirror, and the rest of the Docker configuration, see the dedicated article on that topic.

ansible all -m shell -a "yum remove docker  docker-common docker-selinux docker-engine"
ansible all -m shell -a "yum -y install docker-ce-19.03.8-3.el7"
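After the packages are installed, Docker still has to be enabled and started on every node (a small follow-up sketch; the Docker article referenced above presumably also covers the daemon.json registry-mirror settings):

ansible all -m shell -a "systemctl enable docker && systemctl start docker"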

Adding the Kubernetes yum repository

tee /etc/yum.repos.d/kubernetes.repo <<-'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
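The tee command above only writes the repo file on the machine where it is run. If the other nodes need it as well, ansible can distribute it and refresh the yum cache (a sketch; strictly speaking kk downloads kubeadm/kubelet/kubectl itself, so this repo is mainly a convenience):

ansible all -m copy -a "src=/etc/yum.repos.d/kubernetes.repo dest=/etc/yum.repos.d/kubernetes.repo"
ansible all -m shell -a "yum makecache fast"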

Installing Kubernetes v1.18.6 and KubeSphere v3.0.0

Official site: https://kubesphere.com.cn/. Multi-node installation documentation: https://kubesphere.com.cn/docs/multicluster-management/enable-multicluster/direct-connection/

Downloading the kk installer

# In mainland China, set this environment variable first
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -

Then make kk a globally available executable:

mv ./kk /usr/local/bin 
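If the binary is not marked executable yet, fix that and confirm the version before continuing (a quick check; the version subcommand appears in kk's own help output shown later in this post):

chmod +x /usr/local/bin/kk
kk version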

Generating the multi-node cluster configuration

# Create a configuration file template
kk create config --with-kubernetes v1.18.6 --with-kubesphere v3.0.0 -f ./config-kubesphere.yaml

Modifying the configuration file

Edit the generated configuration file config-kubesphere.yaml:

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 172.16.7.12, internalAddress: 172.16.7.12, user: root, password: azdebug_it}
  - {name: node3, address: 172.16.7.16, internalAddress: 172.16.7.16, user: root, password: azdebug_it}
  - {name: node4, address: 172.16.7.17, internalAddress: 172.16.7.17, user: root, password: azdebug_it}
  - {name: node5, address: 172.16.7.15, internalAddress: 172.16.7.15, user: root, password: azdebug_it}
  roleGroups:
    etcd:
    - node3
    master:
    - master
    worker:
    - node4
    - node3
    - node5
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "172.16.7.12"
    port: "6443"
  kubernetes:
    version: v1.18.6
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
    #privateRegistry: dockerhub.kubekey.local
  addons: []


---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  local_registry: ""
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  etcd:
    monitoring: true
    endpointIps: 172.16.7.16
    port: 2379
    tlsEnable: true
  common:
    es:
      elasticsearchDataVolumeSize: 20Gi
      elasticsearchMasterVolumeSize: 4Gi
      elkPrefix: logstash
      logMaxAge: 7
    mysqlVolumeSize: 20Gi
    minioVolumeSize: 20Gi
    etcdVolumeSize: 20Gi
    openldapVolumeSize: 2Gi
    redisVolumSize: 2Gi
  console:
    enableMultiLogin: true  # enable/disable multi login
    port: 30880
  alerting:
    enabled: true
  auditing:
    enabled: true
  devops:
    enabled: true
    jenkinsMemoryLim: 5Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 1024m
    jenkinsJavaOpts_Xmx: 1024m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: true
    logsidecarReplicas: 2
  metrics_server:
    enabled: true
  monitoring:
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none  # host | member | none
  networkpolicy:
    enabled: true
  notification:
    enabled: true
  openpitrix:
    enabled: true
  servicemesh:
    enabled: true

Running the installation of Kubernetes v1.18.6 and KubeSphere v3.0.0

# After filling in the node information (node names, IPs, etc.) in the config file, create the cluster
kk create cluster -f ./config-kubesphere.yaml

Execution output (the console history below begins with an earlier failed kubeadm attempt, followed by the successful kk run):

[root@master ~]# sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"
W0105 22:45:15.009277   22248 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0105 22:45:15.009521   22248 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ExternalEtcdVersion]: couldn't load external etcd's certificate and key pair /etc/ssl/etcd/ssl/node-node3.pem, /etc/ssl/etcd/ssl/node-node3-key.pem: open /etc/ssl/etcd/ssl/node-node3.pem: no such file or directory
	[ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3.pem doesn't exist
	[ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3-key.pem doesn't exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@master ~]# cd /home/k
k8s-script/                                       kubesphere-all-v3.0.0-offline-linux-amd64.tar.gz  
[root@master ~]# cd /home/k
k8s-script/                                       kubesphere-all-v3.0.0-offline-linux-amd64.tar.gz  
[root@master ~]# cd /home/k8s-script/
[root@master k8s-script]# export KKZONE=cn
[root@master k8s-script]# ./kk create cluster -f ./k8s-config.yaml
+--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name   | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node5  | y    | y    | y       | y        | y     | y     | y         | y      | y          |             |                  | CST 22:55:35 |
| node4  | y    | y    | y       | y        | y     | y     | y         | y      | y          |             |                  | CST 22:55:35 |
| node3  | y    | y    | y       | y        | y     | y     | y         | y      | y          |             |                  | CST 22:55:35 |
| master | y    | y    | y       | y        | y     | y     | y         | y      | y          |             |                  | CST 22:55:35 |
+--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[22:57:10 CST] Downloading Installation Files               
INFO[22:57:10 CST] Downloading kubeadm ...                      
INFO[22:57:10 CST] Downloading kubelet ...                      
INFO[22:57:10 CST] Downloading kubectl ...                      
INFO[22:57:11 CST] Downloading helm ...                         
INFO[22:57:11 CST] Downloading kubecni ...                      
INFO[22:57:11 CST] Configurating operating system ...           
[node5 172.16.7.15] MSG:
vm.swappiness = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.ip_local_port_range = 10000 65000
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000
fs.file-max = 655350
net.ipv4.route.gc_timeout = 100
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_synack_retries = 1
net.core.netdev_max_backlog = 16384
net.ipv4.tcp_max_orphans = 16384
net.ipv4.tcp_fin_timeout = 2
net.core.somaxconn = 32768
kernel.threads-max = 655360
kernel.pid_max = 655360
vm.max_map_count = 393210
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[node4 172.16.7.17] MSG:
vm.swappiness = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.ip_local_port_range = 10000 65000
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000
fs.file-max = 655350
net.ipv4.route.gc_timeout = 100
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_synack_retries = 1
net.core.netdev_max_backlog = 16384
net.ipv4.tcp_max_orphans = 16384
net.ipv4.tcp_fin_timeout = 2
net.core.somaxconn = 32768
kernel.threads-max = 655360
kernel.pid_max = 655360
vm.max_map_count = 393210
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[master 172.16.7.12] MSG:
vm.swappiness = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.ip_local_port_range = 10000 65000
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000
fs.file-max = 655350
net.ipv4.route.gc_timeout = 100
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_synack_retries = 1
net.core.netdev_max_backlog = 16384
net.ipv4.tcp_max_orphans = 16384
net.ipv4.tcp_fin_timeout = 2
net.core.somaxconn = 32768
kernel.threads-max = 655360
kernel.pid_max = 655360
vm.max_map_count = 393210
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[node3 172.16.7.16] MSG:
vm.swappiness = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.ip_local_port_range = 10000 65000
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000
fs.file-max = 655350
net.ipv4.route.gc_timeout = 100
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_synack_retries = 1
net.core.netdev_max_backlog = 16384
net.ipv4.tcp_max_orphans = 16384
net.ipv4.tcp_fin_timeout = 2
net.core.somaxconn = 32768
kernel.threads-max = 655360
kernel.pid_max = 655360
vm.max_map_count = 393210
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
INFO[22:57:25 CST] Installing docker ...                        
INFO[22:57:35 CST] Start to download images on all nodes        
[node5] Downloading image: kubesphere/pause:3.2
[master] Downloading image: kubesphere/pause:3.2
[node3] Downloading image: kubesphere/etcd:v3.3.12
[node4] Downloading image: kubesphere/pause:3.2
[node5] Downloading image: kubesphere/kube-proxy:v1.18.6
[node4] Downloading image: kubesphere/kube-proxy:v1.18.6
[node3] Downloading image: kubesphere/pause:3.2
[master] Downloading image: kubesphere/kube-apiserver:v1.18.6
[node5] Downloading image: coredns/coredns:1.6.9
[node4] Downloading image: coredns/coredns:1.6.9
[master] Downloading image: kubesphere/kube-controller-manager:v1.18.6
[node5] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[node3] Downloading image: kubesphere/kube-proxy:v1.18.6
[master] Downloading image: kubesphere/kube-scheduler:v1.18.6
[node5] Downloading image: calico/kube-controllers:v3.15.1
[node3] Downloading image: coredns/coredns:1.6.9
[node4] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[node3] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[master] Downloading image: kubesphere/kube-proxy:v1.18.6
[node5] Downloading image: calico/cni:v3.15.1
[node4] Downloading image: calico/kube-controllers:v3.15.1
[master] Downloading image: coredns/coredns:1.6.9
[node5] Downloading image: calico/node:v3.15.1
[node4] Downloading image: calico/cni:v3.15.1
[node3] Downloading image: calico/kube-controllers:v3.15.1
[node3] Downloading image: calico/cni:v3.15.1
[master] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[node5] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[node4] Downloading image: calico/node:v3.15.1
[master] Downloading image: calico/kube-controllers:v3.15.1
[node3] Downloading image: calico/node:v3.15.1
[node4] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[node3] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[master] Downloading image: calico/cni:v3.15.1
[master] Downloading image: calico/node:v3.15.1
[master] Downloading image: calico/pod2daemon-flexvol:v3.15.1
INFO[23:01:26 CST] Generating etcd certs                        
INFO[23:01:32 CST] Synchronizing etcd certs                     
INFO[23:01:36 CST] Creating etcd service                        
INFO[23:01:49 CST] Starting etcd cluster                        
[node3 172.16.7.16] MSG:
Configuration file already exists
Waiting for etcd to start
INFO[23:01:58 CST] Refreshing etcd configuration                
INFO[23:01:59 CST] Backup etcd data regularly                   
INFO[23:02:00 CST] Get cluster status                           
[master 172.16.7.12] MSG:
Cluster will be created.
INFO[23:02:01 CST] Installing kube binaries                     
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubeadm to 172.16.7.16:/tmp/kubekey/kubeadm   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubeadm to 172.16.7.15:/tmp/kubekey/kubeadm   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubeadm to 172.16.7.17:/tmp/kubekey/kubeadm   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubeadm to 172.16.7.12:/tmp/kubekey/kubeadm   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubelet to 172.16.7.16:/tmp/kubekey/kubelet   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubelet to 172.16.7.12:/tmp/kubekey/kubelet   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubelet to 172.16.7.15:/tmp/kubekey/kubelet   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubelet to 172.16.7.17:/tmp/kubekey/kubelet   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubectl to 172.16.7.16:/tmp/kubekey/kubectl   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubectl to 172.16.7.17:/tmp/kubekey/kubectl   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubectl to 172.16.7.15:/tmp/kubekey/kubectl   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubectl to 172.16.7.12:/tmp/kubekey/kubectl   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/helm to 172.16.7.12:/tmp/kubekey/helm   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/helm to 172.16.7.16:/tmp/kubekey/helm   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/helm to 172.16.7.15:/tmp/kubekey/helm   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/helm to 172.16.7.17:/tmp/kubekey/helm   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.7.15:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.7.16:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.7.17:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.7.12:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
INFO[23:02:50 CST] Initializing kubernetes cluster              
[master 172.16.7.12] MSG:
W0105 23:02:51.587457   23847 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0105 23:02:51.587685   23847 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost lb.kubesphere.local master master.cluster.local node3 node3.cluster.local node4 node4.cluster.local node5 node5.cluster.local] and IPs [10.233.0.1 172.16.7.12 127.0.0.1 172.16.7.12 172.16.7.16 172.16.7.17 172.16.7.15 10.233.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0105 23:03:00.466175   23847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0105 23:03:00.474746   23847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0105 23:03:00.476002   23847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 32.002873 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 6zsarg.gxg5eijglkupq85j
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token 6zsarg.gxg5eijglkupq85j \
    --discovery-token-ca-cert-hash sha256:8e1405a3da9e80413ab9aec1952a8259490cb174dcc74ecb96c0c5eafa429fd9 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token 6zsarg.gxg5eijglkupq85j \
    --discovery-token-ca-cert-hash sha256:8e1405a3da9e80413ab9aec1952a8259490cb174dcc74ecb96c0c5eafa429fd9
[master 172.16.7.12] MSG:
service "kube-dns" deleted
[master 172.16.7.12] MSG:
service/coredns created
[master 172.16.7.12] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[master 172.16.7.12] MSG:
configmap/nodelocaldns created
[master 172.16.7.12] MSG:
I0105 23:04:05.247536   26174 version.go:252] remote version is much newer: v1.20.1; falling back to: stable-1.18
W0105 23:04:06.468801   26174 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
13a993ef56fb292d7ecb9947a3095a0eca6c419dfa148569699c474a8d6c28df
[master 172.16.7.12] MSG:
secret/kubeadm-certs patched
[master 172.16.7.12] MSG:
secret/kubeadm-certs patched
[master 172.16.7.12] MSG:
secret/kubeadm-certs patched
[master 172.16.7.12] MSG:
W0105 23:04:08.563212   26292 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join lb.kubesphere.local:6443 --token cfoibt.jdyzk3oc1aze53ri     --discovery-token-ca-cert-hash sha256:8e1405a3da9e80413ab9aec1952a8259490cb174dcc74ecb96c0c5eafa429fd9
[master 172.16.7.12] MSG:
NAME     STATUS     ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master   NotReady   master   39s   v1.18.6   172.16.7.12   <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.9
INFO[23:04:09 CST] Deploying network plugin ...                 
[master 172.16.7.12] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
INFO[23:04:13 CST] Joining nodes to cluster                     
[node5 172.16.7.15] MSG:
W0105 23:04:14.035833   52825 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0105 23:04:20.030576   52825 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[node4 172.16.7.17] MSG:
W0105 23:04:14.432448   53936 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0105 23:04:19.870838   53936 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[node3 172.16.7.16] MSG:
W0105 23:04:14.376894   57091 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0105 23:04:20.949568   57091 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[node3 172.16.7.16] MSG:
node/node3 labeled
[node5 172.16.7.15] MSG:
node/node5 labeled
[node4 172.16.7.17] MSG:
node/node4 labeled
[master 172.16.7.12] MSG:
storageclass.storage.k8s.io/local created
serviceaccount/openebs-maya-operator created
clusterrole.rbac.authorization.k8s.io/openebs-maya-operator created
clusterrolebinding.rbac.authorization.k8s.io/openebs-maya-operator created
configmap/openebs-ndm-config created
daemonset.apps/openebs-ndm created
deployment.apps/openebs-ndm-operator created
deployment.apps/openebs-localpv-provisioner created
INFO[23:04:51 CST] Deploying KubeSphere ...                     
v3.0.0
[master 172.16.7.12] MSG:
namespace/kubesphere-system created
namespace/kubesphere-monitoring-system created
[master 172.16.7.12] MSG:
secret/kube-etcd-client-certs created
[master 172.16.7.12] MSG:
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created


INFO[23:10:23 CST] Installation is complete.

Please check the result using the command:

       kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Problems encountered and solutions

Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.8.35:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.8.36:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
INFO[16:20:31 CST] Initializing kubernetes cluster              
[master 172.16.8.36] MSG:
[preflight] Running pre-flight checks
W0105 16:20:38.445541   19396 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0105 16:20:38.453323   19396 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[master 172.16.8.36] MSG:
[preflight] Running pre-flight checks
W0105 16:20:40.305612   19617 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0105 16:20:40.310273   19617 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
ERRO[16:20:41 CST] Failed to init kubernetes cluster: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml" 
W0105 16:20:40.826437   19657 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0105 16:20:40.826682   19657 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ExternalEtcdVersion]: couldn't load external etcd's certificate and key pair /etc/ssl/etcd/ssl/node-node3.pem, /etc/ssl/etcd/ssl/node-node3-key.pem: open /etc/ssl/etcd/ssl/node-node3.pem: no such file or directory
	[ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3.pem doesn't exist
	[ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3-key.pem doesn't exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1  node=172.16.8.36
WARN[16:20:41 CST] Task failed ...                              
WARN[16:20:41 CST] error: interrupted by error                  
Error: Failed to init kubernetes cluster: interrupted by error
Usage:
  kk create cluster [flags]

Flags:
  -f, --filename string          Path to a configuration file
  -h, --help                     help for cluster
      --skip-pull-images         Skip pre pull images
      --with-kubernetes string   Specify a supported version of kubernetes
      --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)
  -y, --yes                      Skip pre-check of the installation

Global Flags:
      --debug   Print detailed information (default true)

Failed to init kubernetes cluster: interrupted by error

Run on the master node:

 sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"

The key errors extracted:

[ERROR ExternalEtcdVersion]: couldn't load external etcd's certificate and key pair /etc/ssl/etcd/ssl/node-node3.pem, /etc/ssl/etcd/ssl/node-node3-key.pem: open /etc/ssl/etcd/ssl/node-node3.pem: no such file or directory
	[ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3.pem doesn't exist
	[ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3-key.pem doesn't exist

Check again on the master node:

[root@master ~]# ls -lh /etc/ssl/etcd/ssl/
total 32K
-rw-r--r--. 1 root root 1.7K Jan  5 19:59 admin-node3-key.pem
-rw-r--r--. 1 root root 1.4K Jan  5 19:59 admin-node3.pem
-rw-r--r--. 1 root root 1.7K Jan  5 19:59 ca-key.pem
-rw-r--r--. 1 root root 1.1K Jan  5 19:59 ca.pem
-rw-r--r--. 1 root root 1.7K Jan  5 19:59 member-node3-key.pem
-rw-r--r--. 1 root root 1.4K Jan  5 19:59 member-node3.pem
-rw-r--r--. 1 root root 1.7K Jan  5 19:59 node-master-key.pem
-rw-r--r--. 1 root root 1.4K Jan  5 19:59 node-master.pem

Sure enough, node-node3-key.pem and node-node3.pem really do not exist in /etc/ssl/etcd/ssl/. What now? It turns out the etcd certificates for node3 were generated under the member naming scheme (member mode had been selected).

Solution

[root@master ~]# cp  /etc/ssl/etcd/ssl/member-node3-key.pem /etc/ssl/etcd/ssl/node-node3-key.pem
[root@master ~]# cp /etc/ssl/etcd/ssl/member-node3.pem /etc/ssl/etcd/ssl/node-node3.pem
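Since the node-node3 files are plain copies of the member certificates, it can be worth confirming that the copied certificate is valid and carries the expected subject before re-running the installer (a verification sketch using openssl):

openssl x509 -in /etc/ssl/etcd/ssl/node-node3.pem -noout -subject -dates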

Run the installer again:

export KKZONE=cn
./kk create cluster -f ./k8s-config.yaml

Check the KubeSphere installation log:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
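Besides tailing the installer log, overall progress can be watched by listing the pods in the KubeSphere namespaces until everything is Running (a simple additional check):

kubectl get pods -n kubesphere-system
kubectl get pods --all-namespaces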

KubeSphere 3.0 installed successfully.

Logging in to KubeSphere

# Console address: http://<node-ip>:30880
# Username: admin
# Default password: P@88w0rd

Uninstalling the cluster

[root@master k8s-script]# ./kk -h
Deploy a Kubernetes or KubeSphere cluster efficiently, flexibly and easily. There are three scenarios to use KubeKey.
1. Install Kubernetes only
2. Install Kubernetes and KubeSphere together in one command
3. Install Kubernetes first, then deploy KubeSphere on it using https://github.com/kubesphere/ks-installer

Usage:
  kk [command]

Available Commands:
  add         Add nodes to kubernetes cluster
  create      Create a cluster or a cluster configuration file
  delete      Delete nodes or cluster
  help        Help about any command
  init        Initializes the installation environment
  upgrade     Upgrade your cluster smoothly to a newer version with this command
  version     print the client version information

Flags:
      --debug   Print detailed information (default true)
  -h, --help    help for kk

Use "kk [command] --help" for more information about a command.
[root@master k8s-script]# ./kk delete  cluster -f k8s-config.yaml
Are you sure to delete this cluster? [yes/no]: yes
INFO[11:04:30 CST] Resetting kubernetes cluster ...             
[master 172.16.7.12] MSG:
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0111 11:04:31.302465  473615 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[preflight] Running pre-flight checks
[reset] Removing info for node "master" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
W0111 11:04:31.309786  473615 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[master 172.16.7.12] MSG:
sudo -E /bin/sh -c "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && ip link del kube-ipvs0 && ip link del nodelocaldns"
[node5 172.16.7.35] MSG:
[preflight] Running pre-flight checks
W0111 11:04:33.406508   39593 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[node5 172.16.7.35] MSG:
sudo -E /bin/sh -c "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && ip link del kube-ipvs0 && ip link del nodelocaldns"
[node3 172.16.7.33] MSG:
[preflight] Running pre-flight checks
W0111 11:04:32.718406  132582 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[node3 172.16.7.33] MSG:
sudo -E /bin/sh -c "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && ip link del kube-ipvs0 && ip link del nodelocaldns"
[node4 172.16.7.34] MSG:
[preflight] Running pre-flight checks
W0111 11:04:33.340200   28825 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[node4 172.16.7.34] MSG:
sudo -E /bin/sh -c "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && ip link del kube-ipvs0 && ip link del nodelocaldns"
INFO[11:05:28 CST] Successful. 

Recommended reference

This post is recommended: https://blog.csdn.net/weixin_43141746/article/details/110261158

 
