Kubernetes dashboard issue
【Title】: Kubernetes dashboard issue 【Posted】: 2020-01-23 04:10:09
【Question】: I am unable to access the Kubernetes dashboard. I performed the following steps:
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
kubectl proxy --address="192.168.56.12" -p 8001 --accept-hosts='^*$'
```
Now, trying to access the dashboard at http://192.168.56.12:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ returns the following response:
"kind": "Status",
"apiVersion": "v1",
"metadata":
,
"status": "Failure",
"message": "no endpoints available for service \"https:kubernetes-dashboard:\"",
"reason": "ServiceUnavailable",
"code": 503
```
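A 503 with "no endpoints available" generally means the Service has no Ready pods behind it, which matches the CrashLoopBackOff shown further down. As a quick sanity check (a generic sketch using standard kubectl commands, not taken from the original post), the endpoints and pod state can be inspected directly:

```sh
# An empty ENDPOINTS column confirms that no ready pod is backing the Service.
kubectl get endpoints kubernetes-dashboard -n kubernetes-dashboard

# Check the state of the dashboard pods in the same namespace.
kubectl get pods -n kubernetes-dashboard -o wide
```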
Output of a few commands that may be required:
```
[root@k8s-master ~]# kubectl logs kubernetes-dashboard-6bb65fcc49-zn2c2 --namespace=kubernetes-dashboard
Error from server: Get https://192.168.56.14:10250/containerLogs/kubernetes-dashboard/kubernetes-dashboard-6bb65fcc49-7wz6q/kubernetes-dashboard: dial tcp 192.168.56.14:10250: connect: no route to host
[root@k8s-master ~]#
```
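"No route to host" on port 10250 indicates that the API server cannot reach the kubelet on node1, which is commonly caused by a host firewall on the worker node. Below is a minimal sketch of how to check and open the kubelet port, assuming the nodes run firewalld (typical on CentOS/RHEL); adjust for whatever firewall your distribution uses:

```sh
# Run on node1 (192.168.56.14). First verify the kubelet is listening:
ss -tlnp | grep 10250

# If firewalld is active, open the kubelet port so the control plane can
# fetch container logs, then reload the firewall rules:
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload
```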
```
$ kubectl get pods -o wide --all-namespaces
NAMESPACE              NAME                                        READY   STATUS             RESTARTS   AGE   IP              NODE
kube-system            coredns-5c98db65d4-89c9p                    1/1     Running            0          76m   10.244.0.14     k8s-master
kube-system            coredns-5c98db65d4-ggqfj                    1/1     Running            0          76m   10.244.0.13     k8s-master
kube-system            etcd-k8s-master                             1/1     Running            0          75m   192.168.56.12   k8s-master
kube-system            kube-apiserver-k8s-master                   1/1     Running            0          75m   192.168.56.12   k8s-master
kube-system            kube-controller-manager-k8s-master          1/1     Running            1          75m   192.168.56.12   k8s-master
kube-system            kube-flannel-ds-amd64-74zrn                 1/1     Running            1          74m   192.168.56.14   node1
kube-system            kube-flannel-ds-amd64-hgcp8                 1/1     Running            0          75m   192.168.56.12   k8s-master
kube-system            kube-proxy-2lczb                            1/1     Running            0          74m   192.168.56.14   node1
kube-system            kube-proxy-8dxdm                            1/1     Running            0          76m   192.168.56.12   k8s-master
kube-system            kube-scheduler-k8s-master                   1/1     Running            1          75m   192.168.56.12   k8s-master
kubernetes-dashboard   dashboard-metrics-scraper-fb986f88d-d49sw   1/1     Running            0          71m   10.244.1.21     node1
kubernetes-dashboard   kubernetes-dashboard-6bb65fcc49-7wz6q       0/1     CrashLoopBackOff   18         71m   10.244.1.20     node1
```
```
[root@k8s-master ~]# kubectl describe pod kubernetes-dashboard-6bb65fcc49-7wz6q -n kubernetes-dashboard
Name: kubernetes-dashboard-6bb65fcc49-7wz6q
Namespace: kubernetes-dashboard
Priority: 0
Node: node1/192.168.56.14
Start Time: Mon, 23 Sep 2019 12:56:18 +0530
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=6bb65fcc49
Annotations: <none>
Status: Running
IP: 10.244.1.20
Controlled By: ReplicaSet/kubernetes-dashboard-6bb65fcc49
Containers:
kubernetes-dashboard:
Container ID: docker://2cbbbc9b95a43a5242abe13f8178dc589487abcfccaea06ff4be70781f4c3711
Image: kubernetesui/dashboard:v2.0.0-beta4
Image ID: docker-pullable://docker.io/kubernetesui/dashboard@sha256:a35498beec44376efcf8c4478eebceb57ec3ba39a6579222358a1ebe455ec49e
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
--namespace=kubernetes-dashboard
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Mon, 23 Sep 2019 14:10:27 +0530
Finished: Mon, 23 Sep 2019 14:10:28 +0530
Ready: False
Restart Count: 19
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-q7j4z (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kubernetes-dashboard-token-q7j4z:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-q7j4z
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff <invalid> (x354 over 63m) kubelet, node1 Back-off restarting failed container
[root@k8s-master ~]#
```
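Since the container exits with code 2 a second after starting, the logs of the previous (crashed) container instance are the most useful clue once the kubelet on node1 is reachable again. A generic sketch of how to pull them (standard kubectl flags, not from the original post):

```sh
# Fetch logs from the last crashed instance of the dashboard container
# (works only once port 10250 on node1 is reachable from the master):
kubectl logs kubernetes-dashboard-6bb65fcc49-7wz6q \
  -n kubernetes-dashboard --previous

# The most recent pod events are another quick pointer to the crash reason:
kubectl describe pod kubernetes-dashboard-6bb65fcc49-7wz6q \
  -n kubernetes-dashboard | tail -n 20
```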
【Comments】:
I can see you have installed 3 flannels and weave-net on your kubernetes cluster. First of all, I suggest deleting all of them and creating just one. Also, please provide a screenshot after running this command: kubectl describe pod kubernetes-dashboard-6bb65fcc49-zn2c2 -n kubernetes-dashboard
@jt97 Those are DaemonSets. It means there are 3 nodes.
@jt97 - Pasted the output of kubectl describe pod kubernetes-dashboard and kubectl get pods -o wide --all-namespaces.
@muku Please provide one more thing, the logs. Use this command: kubectl logs kubernetes-dashboard-6bb65fcc49-7wz6q -n kubernetes-dashboard
@jt97 Same error: Error from server: Get 192.168.56.14:10250/containerLogs/kubernetes-dashboard/…: dial tcp 192.168.56.14:10250: connect: no route to host
【Answer 1】:
After realizing that the stable/kubernetes-dashboard chart is outdated, I found that you need to apply this manifest:
```
kubectl apply -f \
  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
```
However, moving from a Helm chart to a hard-coded manifest was not acceptable to me. After some searching, I found that the relevant chart now lives under this Git repo subfolder. Instead of using the stable repo, use the following:
```
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm install kubernetes-dashboard/kubernetes-dashboard --name my-release
```
Good luck! This should solve all of your issues, since this chart takes care of all the dependencies.
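Once the chart is installed, a rough sketch of how you might verify the release and reach the UI without kubectl proxy is shown below. The Service name is an assumption based on Helm's usual `<release>-<chart>` naming, and the 443 target port is an assumption as well, so check the actual values first:

```sh
# Confirm the release came up (Helm 2 syntax, matching the --name flag above):
helm status my-release

# Look up the real Service name and port exposed by the chart.
kubectl get svc

# Port-forward the dashboard Service locally; replace the name/port with
# what `kubectl get svc` actually reports.
kubectl port-forward svc/my-release-kubernetes-dashboard 8443:443
# Then open https://localhost:8443/ in a browser.
```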
By the way:
- Even the image repository has moved: it is no longer k8s.gcr.io/kubernetes-dashboard-amd64, but now lives on Docker Hub under kubernetesui/dashboard.
- There is a sidecar for the metrics scraper, which is not defined in the stable chart.
【Comments】: