k8s: KubeCube, NetEase's open-source k8s management platform — very convenient, but my install attempt failed and I never saw the startup screen; I plan to retry on another machine.

Posted by freewebsys

KubeCube

Product overview

KubeCube is an open-source, enterprise-grade container platform that provides visual management of Kubernetes resources together with unified multi-cluster, multi-tenant management. It simplifies application deployment, manages the application lifecycle, and offers rich monitoring dashboards and audit logging, helping enterprises quickly build a powerful, feature-rich container cloud management platform.

Features

  • Out of the box
    • Gentle learning curve; unified authentication and authorization, multi-cluster management, monitoring, logging, and alerting are built in, freeing up productivity
    • Ops-friendly: visual management and unified operations for Kubernetes resources, with comprehensive self-monitoring
    • Fast deployment: a one-click All-in-One mode, plus a production-grade high-availability deployment
  • Multi-tenant management
    • A multi-level tenant / project / space model that meets enterprise needs for resource isolation and software project management
    • Permission control and resource sharing/isolation built on top of the multi-tenant model
  • Unified multi-cluster K8s management
    • A central management panel for multiple Kubernetes clusters, with support for importing clusters
    • Unified authentication across clusters, extending native Kubernetes RBAC for access control
    • Fast cluster resource management through WebConsole and CloudShell
  • Cluster autonomy
    • When the KubeCube management cluster is down for maintenance, each business cluster stays autonomous: access control keeps working and business Pods are unaffected
  • Hot-pluggable features
    • A minimal install is available, and features can be toggled on and off on demand
    • Hot-pluggable: no service restart required
  • Multiple integration options
    • Open API support, for easy integration with existing systems
    • Compatible with the native Kubernetes API, so the existing toolchain such as kubectl works unchanged (see the sketch after this list)
  • No vendor lock-in
    • Any standard Kubernetes cluster can be imported, for better multi-cloud/hybrid-cloud support
  • Other features
    • Operation audit
    • Rich observability
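
On the kubectl-compatibility point above, the workflow should be nothing more than ordinary kubectl once you have credentials for a managed cluster. A minimal sketch, assuming a kubeconfig exported from the KubeCube console (the file name and export step are my assumptions, not from this post):

# Hypothetical workflow: point kubectl at a kubeconfig obtained from the
# KubeCube console for a member cluster (file name made up for illustration).
export KUBECONFIG=$HOME/kubecube-member.kubeconfig
kubectl get nodes                  # plain kubectl, no KubeCube-specific CLI
kubectl get pods --all-namespaces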

Problems it solves

  • Enterprise cloud adoption: flattens the learning curve so an enterprise can stand up a container cloud platform at low cost and move applications to the cloud quickly.
  • Resource isolation: multi-tenant management provides resource isolation, quota management, and permission control at three levels (tenant, project, space), matching the resource and permission governance needs of enterprise private clouds.
  • Cluster scale limits: one unified platform can manage any number of business Kubernetes clusters. Scaling out by adding clusters sidesteps the size limit of a single cluster, and business lines that need a dedicated cluster can have one.
  • Rich observability: monitoring, alerting, and log collection, with detailed workload metric dashboards, cluster-level dashboards, and flexible log queries.

I then ran the installer, and it failed with:

2022-11-24 09:15:42 INFO environment checking
|---------------------------------------------------------------------|
|     sshpass     |    conntrack    |      unzip    |  libseccomp     |
|---------------------------------------------------------------------|
|     ✓           |    ✓            |      ✓        |         ✓       |
|---------------------------------------------------------------------|
2022-11-24 09:15:42 INFO downloading manifests for kubecube
--2022-11-24 09:15:42--  https://kubecube.nos-eastchina1.126.net/kubecube-installer/v1.4/manifests.tar.gz
Resolving kubecube.nos-eastchina1.126.net (kubecube.nos-eastchina1.126.net)... 59.111.35.1, 59.111.35.2
Connecting to kubecube.nos-eastchina1.126.net (kubecube.nos-eastchina1.126.net)|59.111.35.1|:443... connected.
ERROR: cannot verify kubecube.nos-eastchina1.126.net's certificate, issued by "CN=GeoTrust RSA CN CA G2,O=DigiCert Inc,C=US":
  Unable to locally verify the issuer's authority.
To connect to kubecube.nos-eastchina1.126.net insecurely, use "--no-check-certificate".
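
The error means wget could not build a chain of trust up to the intermediate CA "GeoTrust RSA CN CA G2". On an older system this usually comes down to a stale local CA store, or a server that does not send the full certificate chain. A quick diagnostic sketch of my own (not part of the installer) to see what the server actually presents:

# Dump the certificate chain and the verification result; a failure such as
# "unable to get local issuer certificate" points at the local CA store.
openssl s_client -connect kubecube.nos-eastchina1.126.net:443 \
  -servername kubecube.nos-eastchina1.126.net -showcerts </dev/null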

I downloaded the install script locally and edited line 70, adding the --no-check-certificate flag to wget:

     66 if [ -e "./manifests" ]; then
     67   echo -e "$(date +'%Y-%m-%d %H:%M:%S') \033[32mINFO\033[0m manifests already exist"
     68 else
     69   echo -e "$(date +'%Y-%m-%d %H:%M:%S') \033[32mINFO\033[0m downloading manifests for kubecube"
     70   wget --no-check-certificate https://kubecube.nos-eastchina1.126.net/kubecube-installer/v1.4/manifests.tar.gz -O manifests.tar.gz
     71 
     72   tar -xzvf manifests.tar.gz > /dev/null
     73 fi
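
Skipping verification works, but it is insecure. A cleaner alternative, assuming the root cause is an out-of-date CA bundle on the machine rather than a server-side problem, is to refresh the system certificates and retry without the flag:

# Hedged alternative fix: refresh Ubuntu's CA store so the DigiCert/GeoTrust
# intermediates can be validated, then retry the original download.
sudo apt-get update
sudo apt-get install --reinstall ca-certificates
sudo update-ca-certificates --fresh
wget https://kubecube.nos-eastchina1.126.net/kubecube-installer/v1.4/manifests.tar.gz -O manifests.tar.gz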

After that, the download and install could proceed. The script runs through its steps in order, pulling the container images from NetEase's servers.

# bash entry.sh 
2022-11-24 09:35:59 INFO environment checking
|---------------------------------------------------------------------|
|     sshpass     |    conntrack    |      unzip    |  libseccomp     |
|---------------------------------------------------------------------|
|     ✓           |    ✓            |      ✓        |         ✓       |
|---------------------------------------------------------------------|
2022-11-24 09:35:59 INFO manifests already exist
-------------System Infomation-------------
 System running time:11 days,19 hours, 19 minutes 
 IP: 192.168.1.110 
 CPU model: Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz 
 CPU arch:x86_64  
 CPU cores: 4 
  
  
 CPU load: 0.31 0.55 0.75 
--------------------------------------------
2022-11-24 09:35:59 WARN docker is already running.
2022-11-24 09:35:59 WARN docker is already running.
2022-11-24 09:35:59 WARN kubernetes binaries existed
2022-11-24 09:35:59 INFO downloading images
v3.19.1-m: Pulling from kubecube/calico/node
Digest: sha256:b70bd93db80365b4c57014bdc2ccaaac9d8c09edae94443c396f2174742650dd
Status: Image is up to date for hub.c.163.com/kubecube/calico/node:v3.19.1-m
hub.c.163.com/kubecube/calico/node:v3.19.1-m
v3.19.1-m: Pulling from kubecube/calico/cni
Digest: sha256:b6c282aca28c1da56d607af6d692394bcec3a34d569a71139cf0286a54f5ca69
Status: Image is up to date for hub.c.163.com/kubecube/calico/cni:v3.19.1-m
hub.c.163.com/kubecube/calico/cni:v3.19.1-m
v3.19.1-m: Pulling from kubecube/calico/pod2daemon-flexvol
Digest: sha256:cecf91b6c518bb25f46ff1c44d1c99c1b5ade250485ab9c116f7b4cc14ae59e9
Status: Image is up to date for hub.c.163.com/kubecube/calico/pod2daemon-flexvol:v3.19.1-m
hub.c.163.com/kubecube/calico/pod2daemon-flexvol:v3.19.1-m
v3.19.1-m: Pulling from kubecube/calico/kube-controllers
Digest: sha256:df8155aa54e5f72abe6d8618c9ea5246a0dd2cfa006df79a6e41cf8b4a7a3486
Status: Image is up to date for hub.c.163.com/kubecube/calico/kube-controllers:v3.19.1-m
hub.c.163.com/kubecube/calico/kube-controllers:v3.19.1-m
v1.23.5: Pulling from google_containers/kube-apiserver
Digest: sha256:ddf5bf7196eb534271f9e5d403f4da19838d5610bb5ca191001bde5f32b5492e
Status: Image is up to date for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.5
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.5
v1.23.5: Pulling from google_containers/kube-controller-manager
Digest: sha256:cca0fb3532abedcc95c5f64268d54da9ecc56cc4817ff08d0128941cf2b0e1a4
Status: Image is up to date for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.5
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.5
v1.23.5: Pulling from google_containers/kube-scheduler
Digest: sha256:489efb65da9edc40bf0911f3e6371e5bb6b8ad8fde1d55193a6cc84c2ef36626
Status: Image is up to date for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.5
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.5
v1.23.5: Pulling from google_containers/kube-proxy
Digest: sha256:c1f625d115fbd9a12eac615653fc81c0edb33b2b5a76d1e09d5daed11fa557c1
Status: Image is up to date for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.5
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.5
3.6: Pulling from google_containers/pause
Digest: sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db
Status: Image is up to date for registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
3.5.1-0: Pulling from google_containers/etcd
Digest: sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263
Status: Image is up to date for registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0
v1.8.6: Pulling from google_containers/coredns
Digest: sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
Status: Image is up to date for registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
v1.2.0: Pulling from kubecube/audit
Digest: sha256:a1e4352507ec7ef1d005c18645f6004077373848f942653e905a23a3808ae8a9
Status: Image is up to date for hub.c.163.com/kubecube/audit:v1.2.0
hub.c.163.com/kubecube/audit:v1.2.0
v1.2.1: Pulling from kubecube/cloudshell
Digest: sha256:8a8c335f34c142444d7d11b1b88af41e722634b02abe65fa2366a3f4e8f1b371
Status: Image is up to date for hub.c.163.com/kubecube/cloudshell:v1.2.1
hub.c.163.com/kubecube/cloudshell:v1.2.1
v1.2.4: Pulling from kubecube/webconsole
Digest: sha256:51618de04401628435a17fcc0dba450e0c97a94706cc23141b6b2614f54bbc45
Status: Image is up to date for hub.c.163.com/kubecube/webconsole:v1.2.4
hub.c.163.com/kubecube/webconsole:v1.2.4
v1.2.0: Pulling from kubecube/frontend
Digest: sha256:8102081b357e430dd3f9c12ebfa3ffb1d13942cb37f75513aa9a56defe1a4722
Status: Image is up to date for hub.c.163.com/kubecube/frontend:v1.2.0
hub.c.163.com/kubecube/frontend:v1.2.0
v1.4.1: Pulling from kubecube/cube
Digest: sha256:919b061f7a46f9f0e95d2d99c6ed8351ba9270e30b7ab22c51e1b1e728ac6e10
Status: Image is up to date for hub.c.163.com/kubecube/cube:v1.4.1
hub.c.163.com/kubecube/cube:v1.4.1
v1.4.1: Pulling from kubecube/warden
Digest: sha256:4ecfcdfefa81729cd46724c5f522ab2653ffe40e9450639e3cc2d2c6247e1f81
Status: Image is up to date for hub.c.163.com/kubecube/warden:v1.4.1
hub.c.163.com/kubecube/warden:v1.4.1
v1.4.1: Pulling from kubecube/warden-init
Digest: sha256:186205272e5d3c0b7fa7a7015db7a68246b081376ffc5c3141664b2c34454fd0
Status: Image is up to date for hub.c.163.com/kubecube/warden-init:v1.4.1
hub.c.163.com/kubecube/warden-init:v1.4.1
v1.2.0: Pulling from kubecube/warden-dependence
Digest: sha256:7ae4bd0008197ec141a5e305968d015725901561d708d0d178bfd7c5a38df2f0
Status: Image is up to date for hub.c.163.com/kubecube/warden-dependence:v1.2.0
hub.c.163.com/kubecube/warden-dependence:v1.2.0
v0.21.0: Pulling from kubecube/alertmanager
Digest: sha256:702e01b4c96e4721927a7ae93afefc9b7fa5bc1cd5dcfbd23e70e50cfff2795e
Status: Image is up to date for hub.c.163.com/kubecube/alertmanager:v0.21.0
hub.c.163.com/kubecube/alertmanager:v0.21.0
v0.47.0: Pulling from kubecube/prometheus-config-reloader
Digest: sha256:0029252e7cf8cf38fc58795928d4e1c746b9e609432a2ee5417a9cab4633b864
Status: Image is up to date for hub.c.163.com/kubecube/prometheus-config-reloader:v0.47.0
hub.c.163.com/kubecube/prometheus-config-reloader:v0.47.0
1.10.7: Pulling from kubecube/k8s-sidecar
Digest: sha256:ac60db5cfb11c84f23c81a717463d668c7db9134f9a2283d38e13455f8481a6c
Status: Image is up to date for hub.c.163.com/kubecube/k8s-sidecar:1.10.7
hub.c.163.com/kubecube/k8s-sidecar:1.10.7
7.5.5: Pulling from kubecube/grafana
Digest: sha256:58ea68c27090cee44872800fd15c55592905b1ab86daa8ffbb42fd6cbdfbe3e2
Status: Image is up to date for hub.c.163.com/kubecube/grafana:7.5.5
hub.c.163.com/kubecube/grafana:7.5.5
v1.9.8: Pulling from kubecube/kube-state-metrics
Digest: sha256:de088703b8faab6f293bb2a16931cd814b1e2ddfe786074457946004e81e6fa7
Status: Image is up to date for hub.c.163.com/kubecube/kube-state-metrics:v1.9.8
hub.c.163.com/kubecube/kube-state-metrics:v1.9.8
v0.47.0: Pulling from kubecube/prometheus-operator
Digest: sha256:89a2d121b1a8f9a4a45dd20fdcf081a4468a0a0ad4e0cbe1aa7fd289e5a85cb3
Status: Image is up to date for hub.c.163.com/kubecube/prometheus-operator:v0.47.0
hub.c.163.com/kubecube/prometheus-operator:v0.47.0
v1.1.2: Pulling from kubecube/node-exporter
Digest: sha256:4239af7a8cffcfa003ff624398ff6e78dfea81794f686186282d4ebe99d4a8a1
Status: Image is up to date for hub.c.163.com/kubecube/node-exporter:v1.1.2
hub.c.163.com/kubecube/node-exporter:v1.1.2
v2.26.1: Pulling from kubecube/prometheus
Digest: sha256:fb5ef0e43748499f9803a7806782a4fd358637216f1eab28e315b2f279d70331
Status: Image is up to date for hub.c.163.com/kubecube/prometheus:v2.26.1
hub.c.163.com/kubecube/prometheus:v2.26.1
v0.22.0: Pulling from kubecube/thanos
Digest: sha256:6680c5a66cf4228a961efa31594e25be6a80bda67901633534a918f457392597
Status: Image is up to date for hub.c.163.com/kubecube/thanos:v0.22.0
hub.c.163.com/kubecube/thanos:v0.22.0
1.21.0: Pulling from kubecube/kubectl-tools
Digest: sha256:a8841469c637b699a60ea12e9853dc29624fe3c36742ee00e7bf107ac67e0737
Status: Image is up to date for hub.c.163.com/kubecube/kubectl-tools:1.21.0
hub.c.163.com/kubecube/kubectl-tools:1.21.0
v1.0.0: Pulling from kubecube/hnc/hnc-manager
Digest: sha256:8ecd6af56dfd845257801f1fa3cea8119b78956d6e737352e7d874f1d80daa1f
Status: Image is up to date for hub.c.163.com/kubecube/hnc/hnc-manager:v1.0.0
hub.c.163.com/kubecube/hnc/hnc-manager:v1.0.0
v0.46.0-m: Pulling from kubecube/ingress-nginx/controller
Digest: sha256:710ad4d51a680011d48381fb5b9bb97f3dda0e45cc2c4c73358d86e4c23617a1
Status: Image is up to date for hub.c.163.com/kubecube/ingress-nginx/controller:v0.46.0-m
hub.c.163.com/kubecube/ingress-nginx/controller:v0.46.0-m
v1.2.0: Pulling from kubecube/ingress-nginx/controller
Digest: sha256:314435f9465a7b2973e3aa4f2edad7465cc7bcdc8304be5d146d70e4da136e51
Status: Image is up to date for hub.c.163.com/kubecube/ingress-nginx/controller:v1.2.0
hub.c.163.com/kubecube/ingress-nginx/controller:v1.2.0
v1.1.1: Pulling from kubecube/ingress-nginx/kube-webhook-certgen
Digest: sha256:78351fc9d9b5f835e0809921c029208faeb7fbb6dc2d3b0d1db0a6584195cfed
Status: Image is up to date for hub.c.163.com/kubecube/ingress-nginx/kube-webhook-certgen:v1.1.1
hub.c.163.com/kubecube/ingress-nginx/kube-webhook-certgen:v1.1.1
v1.5.1-m: Pulling from kubecube/jettech/kube-webhook-certgen
Digest: sha256:ead5a540eb86b8e6f82de7394902b427c2856224b5bb98f7335c9d03ce5dd38c
Status: Image is up to date for hub.c.163.com/kubecube/jettech/kube-webhook-certgen:v1.5.1-m
hub.c.163.com/kubecube/jettech/kube-webhook-certgen:v1.5.1-m
v1.5.1: Pulling from kubecube/jettech/kube-webhook-certgen
Digest: sha256:ead5a540eb86b8e6f82de7394902b427c2856224b5bb98f7335c9d03ce5dd38c
Status: Image is up to date for hub.c.163.com/kubecube/jettech/kube-webhook-certgen:v1.5.1
hub.c.163.com/kubecube/jettech/kube-webhook-certgen:v1.5.1
v0.0.19-m: Pulling from kubecube/rancher/local-path-provisioner
Digest: sha256:6bb91f85457463f733b2140ff4fe12afe1b443dc9abee7ca6a231c76ddd2d374
Status: Image is up to date for hub.c.163.com/kubecube/rancher/local-path-provisioner:v0.0.19-m
hub.c.163.com/kubecube/rancher/local-path-provisioner:v0.0.19-m
v0.4.1-m: Pulling from kubecube/rancher/metrics-server
Digest: sha256:fa30c9576d6545a193cd7fe97af450cdaf11f9eda31c76396af4a5e0737f92b8
Status: Image is up to date for hub.c.163.com/kubecube/rancher/metrics-server:v0.4.1-m
hub.c.163.com/kubecube/rancher/metrics-server:v0.4.1-m
latest-m: Pulling from kubecube/busybox
Digest: sha256:bacfbf3788dc26694339403484c710771635dd7c11472f652b31bb30b224b097
Status: Image is up to date for hub.c.163.com/kubecube/busybox:latest-m
hub.c.163.com/kubecube/busybox:latest-m
2022-11-24 09:36:46 INFO doing previous preparation
2022-11-24 09:36:46 DEBUG closing swap
2022-11-24 09:36:46 DEBUG config kernel params, passing bridge flow of IPv4 to iptables chain
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
2022-11-24 09:36:46 INFO enable kubelet service
2022-11-24 09:36:51 INFO make configuration for kubeadm
2022-11-24 09:36:51 DEBUG vip not be set, use node ip
2022-11-24 09:36:51 INFO installing node MODE: master
2022-11-24 09:36:51 INFO init kubernetes, version: 1.23.5
[init] Using Kubernetes version: v1.23.5
[preflight] Running pre-flight checks
	[WARNING FileExisting-ebtables]: ebtables not found in system path
	[WARNING FileExisting-ethtool]: ethtool not found in system path
	[WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
2022-11-24 09:38:47 ERROR install kubernetes failed

It never came up. This machine's Linux is Xubuntu 18; I'll find a machine running version 22 and try again.
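
Before switching machines, the kubelet failure itself may be worth a look. On Ubuntu 18 with Docker as the runtime, the most common cause of exactly this symptom (the kubelet dies immediately and the :10248/healthz probe is refused) is a cgroup-driver mismatch: since 1.22, kubeadm defaults the kubelet to the systemd cgroup driver, while Docker defaults to cgroupfs. A troubleshooting sketch, assuming Docker is the runtime as the install log suggests:

# Step 1: read the kubelet's own logs, as kubeadm recommends.
systemctl status kubelet
journalctl -xeu kubelet | tail -n 50

# Step 2: check Docker's cgroup driver; kubeadm v1.23 expects "systemd".
docker info --format '{{.CgroupDriver}}'

# Step 3: if it prints "cgroupfs", switch Docker to systemd.
# Careful: this overwrites /etc/docker/daemon.json; merge by hand if the
# installer already wrote registry mirrors or other settings there.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker

# Step 4: clear the half-initialized control plane, then rerun the installer.
sudo kubeadm reset -f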
