K8S Series - 2. Common Commands

Posted by ElfCafe




Get the current K8S version

[root@node1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.9", GitCommit:"4fb7ed12476d57b8437ada90b4f93b17ffaeed99", GitTreeState:"clean", BuildDate:"2020-07-15T16:18:16Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.9", GitCommit:"4fb7ed12476d57b8437ada90b4f93b17ffaeed99", GitTreeState:"clean", BuildDate:"2020-07-15T16:10:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
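
When only the version numbers are needed, the --short flag trims the output:

# Compact form of the same information
kubectl version --short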

Get cluster information

[root@node1 ~]# kubectl cluster-info
Kubernetes master is running at https://lb.kubesphere.local:6443
coredns is running at https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
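
cluster-info dump prints the full cluster state to stdout, which is unwieldy for anything but a tiny cluster; it can write to a directory instead (./cluster-dump below is just an example path):

# Dump cluster state into ./cluster-dump rather than the terminal
kubectl cluster-info dump --output-directory=./cluster-dump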

Get node status

# List all nodes
[root@node1 ~]# kubectl get nodes
NAME    STATUS     ROLES           AGE    VERSION
node1   Ready      master,worker   229d   v1.17.9
node2   NotReady   worker          229d   v1.17.9
node3   NotReady   worker          229d   v1.17.9

View detailed node information; in addition to the above, this includes each node's internal IP, external IP, host OS image, kernel version, and container runtime version.

# List all nodes with detailed information
[root@node1 ~]# kubectl get nodes -o wide
NAME    STATUS     ROLES           AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
node1   Ready      master,worker   229d   v1.17.9   192.168.56.108   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.4
node2   NotReady   worker          229d   v1.17.9   192.168.56.109   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.13
node3   NotReady   worker          229d   v1.17.9   192.168.56.110   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.13
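
node2 and node3 report NotReady above. kubectl describe node prints a node's Conditions and recent Events, which usually name the cause (for example, the kubelet no longer posting status):

# Inspect the conditions and events of a NotReady node
kubectl describe node node2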

Get namespaces

[root@node1 ~]# kubectl get namespaces
NAME              STATUS   AGE
default           Active   3d10h
kube-node-lease   Active   3d10h
kube-public       Active   3d10h
kube-system       Active   3d10h
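
Most kubectl commands operate on the default namespace unless -n/--namespace is given. To avoid repeating the flag, the default namespace of the current kubeconfig context can be switched, for example:

# Make kube-system the default namespace for the current context
kubectl config set-context --current --namespace=kube-system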

Check Pod status

[root@node2 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-677cbc8557-2f9p8   0/1     Error     12         230d
kube-system   calico-node-55nd4                          0/1     Running   13         230d
kube-system   calico-node-cx6hw                          1/1     Running   6          230d
kube-system   calico-node-q6p47                          0/1     Error     12         230d
kube-system   coredns-79878cb9c9-bpt5x                   0/1     Error     12         230d
kube-system   coredns-79878cb9c9-p9d5b                   0/1     Error     12         230d
kube-system   kube-apiserver-node1                       0/1     Error     12         230d
kube-system   kube-controller-manager-node1              0/1     Error     12         230d
kube-system   kube-proxy-cvh9z                           1/1     Running   26         230d
kube-system   kube-proxy-rcklw                           1/1     Running   12         230d
kube-system   kube-proxy-wnzfw                           0/1     Error     24         230d
kube-system   kube-scheduler-node1                       0/1     Error     12         230d
kube-system   nodelocaldns-92qvv                         1/1     Running   6          230d
kube-system   nodelocaldns-rrhpk                         1/1     Running   13         230d
kube-system   nodelocaldns-ws5tp                         0/1     Error     12         230d
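
With two of the three nodes NotReady, it helps to see which node each failing pod is scheduled on; as with nodes, -o wide adds NODE and IP columns:

# Correlate failing pods with the NotReady nodes
kubectl get pods -n kube-system -o wide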

Note: common STATUS values and their meanings

Succeeded: the pod exited successfully and will not be restarted automatically.
Waiting / ContainerCreating: the container is being created or waiting to start. Common causes of a hang here: the image cannot be pulled from a foreign registry, or the image is so large that the pull times out; or a CNI network error prevents configuring the pod network and assigning an IP address.
Running: the pod is running (a pod may still show Running while a process inside a container is restarting).
Failed: at least one container in the pod did not terminate normally.
Pending: the pod is held up by the network or some other cause, e.g. the image is still being pulled.
Unknown: the pod status cannot be obtained, usually because the connection to its Node is abnormal.
Terminating: the pod did not execute its command normally and needs to be deleted and recreated.
CrashLoopBackOff: Kubernetes keeps trying to start the pod, but one or more of its containers have crashed.
ErrImagePull: image error; pulling the image failed.
ImagePullBackOff: the image name or the image pull secret is misconfigured.
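
To surface only the problem pods instead of scanning the whole list, a field selector on status.phase helps (note the pod phases are only Pending, Running, Succeeded, Failed, and Unknown; values such as CrashLoopBackOff are container states and cannot be selected this way):

# Show only pods whose phase is not Running
kubectl get pods --all-namespaces --field-selector=status.phase!=Running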

Check master component status

View a summary of the core services

[root@node1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   

This shows the running status of the k8s core components: kube-scheduler, kube-controller-manager, and etcd. If a component's STATUS is Unhealthy, kubectl describe cs gives more detailed information.

View core service details

[root@node1 ~]# kubectl describe cs
Name:         scheduler
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  v1
Conditions:
  Message:  ok
  Status:   True
  Type:     Healthy
Kind:       ComponentStatus
Metadata:
  Creation Timestamp:  <nil>
  Self Link:           /api/v1/componentstatuses/scheduler
Events:                <none>


Name:         controller-manager
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  v1
Conditions:
  Message:  ok
  Status:   True
  Type:     Healthy
Kind:       ComponentStatus
Metadata:
  Creation Timestamp:  <nil>
  Self Link:           /api/v1/componentstatuses/controller-manager
Events:                <none>


Name:         etcd-0
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  v1
Conditions:
  Message:  {"health":"true"}
  Status:   True
  Type:     Healthy
Kind:       ComponentStatus
Metadata:
  Creation Timestamp:  <nil>
  Self Link:           /api/v1/componentstatuses/etcd-0
Events:                <none>
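
Note that the componentstatuses API behind kubectl get cs / describe cs is deprecated from v1.19 onward; the same checks can also be probed directly through the API server's health endpoints:

# Aggregate health of the API server and its dependencies
kubectl get --raw /healthz
# Individual checks are exposed as sub-paths, e.g. the etcd check
kubectl get --raw /healthz/etcd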

View core service Pod details

[root@node1 ~]# kubectl describe pod kube-apiserver-node1 -n kube-system
Name:                 kube-apiserver-node1
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 node1/192.168.56.111
Start Time:           Mon, 31 May 2021 05:38:35 -0400
Labels:               component=kube-apiserver
                      tier=control-plane
Annotations:          kubernetes.io/config.hash: 4fec45bb10f3b6900c84d295c90665b8
                      kubernetes.io/config.mirror: 4fec45bb10f3b6900c84d295c90665b8
                      kubernetes.io/config.seen: 2020-10-14T23:57:52.157821836-04:00
                      kubernetes.io/config.source: file
Status:               Running
IP:                   192.168.56.111
IPs:
  IP:           192.168.56.111
Controlled By:  Node/node1
Containers:
  kube-apiserver:
    Container ID:  docker://a8d8f9bdbcfb7937d54bae80520f7bb4be556689191af096250c6203f826b9fc
    Image:         dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9
    Image ID:      docker-pullable://dockerhub.kubekey.local/kubesphere/kube-apiserver@sha256:3eb34ba74ad26607f7f20a794771a05d3480e5360fcf3366b7bb8cfcba1de929
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-apiserver
      --advertise-address=192.168.56.111
      --allow-privileged=true
      --anonymous-auth=True
      --apiserver-count=1
      --authorization-mode=Node,RBAC
      --bind-address=0.0.0.0
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --enable-admission-plugins=NodeRestriction
      --enable-aggregator-routing=False
      --enable-bootstrap-token-auth=true
      --endpoint-reconciler-type=lease
      --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem
      --etcd-certfile=/etc/ssl/etcd/ssl/node-node1.pem
      --etcd-keyfile=/etc/ssl/etcd/ssl/node-node1-key.pem
      --etcd-servers=https://192.168.56.111:2379
      --feature-gates=CSINodeInfo=true,VolumeSnapshotDataSource=true,ExpandCSIVolumes=true,RotateKubeletClientCertificate=true
      --insecure-port=0
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --profiling=False
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
      --requestheader-allowed-names=front-proxy-client
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      --requestheader-extra-headers-prefix=X-Remote-Extra-
      --requestheader-group-headers=X-Remote-Group
      --requestheader-username-headers=X-Remote-User
      --secure-port=6443
      --service-account-key-file=/etc/kubernetes/pki/sa.pub
      --service-cluster-ip-range=10.233.0.0/18
      --storage-backend=etcd3
      --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
      --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    State:          Running
      Started:      Mon, 31 May 2021 05:40:06 -0400
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Fri, 28 May 2021 05:49:19 -0400
      Finished:     Mon, 31 May 2021 05:38:22 -0400
    Ready:          True
    Restart Count:  12
    Requests:
      cpu:        250m
    Liveness:     http-get https://192.168.56.111:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/kubernetes/pki from k8s-certs (ro)
      /etc/pki from etc-pki (ro)
      /etc/ssl/certs from ca-certs (ro)
      /etc/ssl/etcd/ssl from etcd-certs-0 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  etc-pki:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/pki
    HostPathType:  DirectoryOrCreate
  etcd-certs-0:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/etcd/ssl
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute

Events:
  Type    Reason          Age                   From            Message
  ----    ------          ----                  ----            -------
  Normal  SandboxChanged  10d                   kubelet, node1  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          10d                   kubelet, node1  Container image "dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9" already present on machine
  Normal  Created         10d                   kubelet, node1  Created container kube-apiserver
  Normal  Started         10d                   kubelet, node1  Started container kube-apiserver
  Normal  SandboxChanged  10d                   kubelet, node1  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          10d                   kubelet, node1  Container image "dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9" already present on machine
  Normal  Created         10d                   kubelet, node1  Created container kube-apiserver
  Normal  Started         10d                   kubelet, node1  Started container kube-apiserver
  Normal  SandboxChanged  9d (x2 over 9d)       kubelet, node1  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          9d                    kubelet, node1  Container image "dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9" already present on machine
  Normal  Created         9d                    kubelet, node1  Created container kube-apiserver
  Normal  Started         9d                    kubelet, node1  Started container kube-apiserver
  Normal  SandboxChanged  3d                    kubelet, node1  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          3d                    kubelet, node1  Container image "dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9" already present on machine
  Normal  Created         3d                    kubelet, node1  Created container kube-apiserver
  Normal  Started         3d                    kubelet, node1  Started container kube-apiserver
  Normal  SandboxChanged  2d23h                 kubelet, node1  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          2d23h                 kubelet, node1  Container image "dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9" already present on machine
  Normal  Created         2d23h                 kubelet, node1  Created container kube-apiserver
  Normal  Started         2d23h                 kubelet, node1  Started container kube-apiserver
  Normal  SandboxChanged  5m4s (x2 over 6m35s)  kubelet, node1  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          5m4s                  kubelet, node1  Container image "dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9" already present on machine
  Normal  Created         5m4s                  kubelet, node1  Created container kube-apiserver
  Normal  Started         5m4s                  kubelet, node1  Started container kube-apiserver
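
The Events list in describe output is per-pod and aged out over time. To review recent events across the whole namespace in chronological order, they can also be listed directly:

# All kube-system events, oldest first
kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp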

View the logs of the master services

kubectl logs --tail 5 -f kube-apiserver-node1 -n kube-system
kubectl logs --tail 5 -f kube-controller-manager-node1 -n kube-system
kubectl logs --tail 5 -f kube-scheduler-node1 -n kube-system
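
The describe output above shows Restart Count: 12 with a Last State of Terminated. When a container has been restarting, --previous (-p) retrieves the log of the crashed instance, which usually contains the actual error:

# Log of the previous, terminated container instance
kubectl logs --previous kube-apiserver-node1 -n kube-system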

Check worker status

Check nodes

See "Get node status" above.

Check node logs

# kube-proxy runs as a pod; use an actual pod name from kubectl get pods -n kube-system (e.g. kube-proxy-cvh9z above)
kubectl/oc logs --tail 100 -f kube-proxy-cvh9z -n kube-system
# kubelet runs as a systemd service on each node, not as a pod, so read its logs with journalctl
journalctl -u kubelet -n 100 -f

Check services

[root@node1 ~]# kubectl get svc -o wide --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
default       kubernetes   ClusterIP   10.233.0.1   <none>        443/TCP                  3d10h   <none>
kube-system   coredns      ClusterIP   10.233.0.3   <none>        53/UDP,53/TCP,9153/TCP   3d10h   k8s-app=kube-dns
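
A Service only forwards traffic when it has ready endpoints, so an empty endpoint list is a common reason a service exists but does not answer:

# ENDPOINTS should list pod IP:port pairs; <none> means no ready backing pods
kubectl get endpoints coredns -n kube-system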
