Kubernetes Cluster Node Management
Posted by 123坤
1. Viewing cluster information
[root@k8s-master01 ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.10.100:6443
CoreDNS is running at https://192.168.10.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
2. Viewing node information
2.1 Listing cluster nodes
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready <none> 36d v1.21.10
k8s-master02 Ready <none> 36d v1.21.10
k8s-master03 Ready <none> 36d v1.21.10
k8s-worker02 Ready <none> 36d v1.21.10
2.2 Listing cluster nodes with extended details
[root@k8s-master01 ~]# kubectl get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master01 Ready <none> 36d v1.21.10 192.168.10.101 <none> CentOS Linux 7 (Core) 6.1.0-1.el7.elrepo.x86_64 docker://20.10.22
k8s-master02 Ready <none> 36d v1.21.10 192.168.10.102 <none> CentOS Linux 7 (Core) 6.1.0-1.el7.elrepo.x86_64 docker://20.10.22
k8s-master03 Ready <none> 36d v1.21.10 192.168.10.103 <none> CentOS Linux 7 (Core) 6.1.0-1.el7.elrepo.x86_64 docker://20.10.22
k8s-worker02 Ready <none> 36d v1.21.10 192.168.10.104 <none> CentOS Linux 7 (Core) 6.1.1-1.el7.elrepo.x86_64 docker://20.10.22
2.3 Describing a node in detail
[root@k8s-master01 ~]# kubectl describe nodes k8s-master01
Name: k8s-master01
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=k8s-master01
kubernetes.io/os=linux
Annotations: node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 192.168.10.101/24
projectcalico.org/IPv4IPIPTunnelAddr: 10.244.32.128
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 24 Dec 2022 23:45:43 +0800
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: k8s-master01
AcquireTime: <unset>
RenewTime: Mon, 30 Jan 2023 11:03:00 +0800
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Sun, 29 Jan 2023 10:12:40 +0800 Sun, 29 Jan 2023 10:12:40 +0800 CalicoIsUp Calico is running on this node
MemoryPressure False Mon, 30 Jan 2023 11:00:34 +0800 Sat, 24 Dec 2022 23:45:42 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 30 Jan 2023 11:00:34 +0800 Sat, 24 Dec 2022 23:45:42 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 30 Jan 2023 11:00:34 +0800 Sat, 24 Dec 2022 23:45:42 +0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 30 Jan 2023 11:00:34 +0800 Sun, 25 Dec 2022 00:06:35 +0800 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.10.101
Hostname: k8s-master01
Capacity:
cpu: 2
ephemeral-storage: 19466Mi
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3995080Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 18370422344
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3892680Ki
pods: 110
System Info:
Machine ID: 0e0a3ea7d11c4165b5eb28435792ad47
System UUID: d3794d56-6573-8633-b1d0-456a80d8ee9a
Boot ID: 09607e08-716a-4834-847b-534c12d3e5de
Kernel Version: 6.1.0-1.el7.elrepo.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.22
Kubelet Version: v1.21.10
Kube-Proxy Version: v1.21.10
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system calico-node-d5qw7 250m (12%) 0 (0%) 0 (0%) 0 (0%) 36d
kubernetes-dashboard dashboard-metrics-scraper-c45b7869d-9c8jj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 250m (12%) 0 (0%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events: <none>
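The percentages in the Allocated resources table above come from dividing each request or limit by the node's allocatable capacity (here 2 CPUs, i.e. 2000m). A minimal sketch of that arithmetic, assuming kubectl truncates to a whole percent:

```python
# A quick check of the percentages in the "Allocated resources" table.
# Assumption: kubectl divides each request/limit by the node's allocatable
# capacity and truncates the result to a whole percent.
def pct(requested_m, allocatable_m):
    """Percent of allocatable capacity, truncated to an integer."""
    return int(requested_m * 100 / allocatable_m)

# calico-node requests 250m CPU; the node has 2 CPUs (2000m) allocatable.
print(pct(250, 2000))  # prints 12, matching the "250m (12%)" line above
```

This is also why the output warns that total limits may exceed 100 percent: limits are not capped by allocatable capacity, so the node can be overcommitted.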
3. Managing the cluster from a worker node
- If the cluster was installed with kubeasz, every node (masters and workers) can already manage the cluster.
- If the cluster was installed with kubeadm, running kubectl on a worker node fails with the following error:
[root@k8s-worker1 ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Copy the admin kubeconfig /etc/kubernetes/admin.conf from the master to $HOME/.kube/config on the worker node, and the worker can then manage the cluster with kubectl as well.
1. On the worker node, create the .kube directory in the user's home directory:
[root@k8s-worker02 ~]# mkdir /root/.kube
2. On the master node, copy the kubeconfig to the worker:
[root@k8s-master01 ~]# scp /etc/kubernetes/admin.conf k8s-worker02:/root/.kube/config
3. Verify on the worker node:
[root@k8s-worker02 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready <none> 2d20h v1.21.10
k8s-master02 Ready <none> 2d20h v1.21.10
k8s-master03 Ready <none> 2d20h v1.21.10
k8s-worker02 Ready <none> 2d20h v1.21.10
4. Node labels
- In a cluster with many nodes, labels let you tag nodes and then filter and inspect them by label, making it easier to select and match resource objects.
4.1 Viewing node labels
[root@k8s-master01 ~]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-master01 Ready <none> 36d v1.21.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux
k8s-master02 Ready <none> 36d v1.21.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=test1,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master02,kubernetes.io/os=linux
k8s-master03 Ready <none> 36d v1.21.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,bussiness=ad,env=test2,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master03,kubernetes.io/os=linux,zone=A
k8s-worker02 Ready <none> 36d v1.21.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker02,kubernetes.io/os=linux
4.2 Setting node labels
4.2.1 Setting a label
Apply a region=huanai label to node k8s-worker02:
[root@k8s-master01 ~]# kubectl label node k8s-worker02 region=huanai
node/k8s-worker02 labeled
4.2.2 Showing the region label for all nodes
[root@k8s-master01 ~]# kubectl get nodes -L region
NAME STATUS ROLES AGE VERSION REGION
k8s-master01 Ready <none> 2d21h v1.21.10
k8s-master02 Ready <none> 2d21h v1.21.10
k8s-master03 Ready <none> 2d21h v1.21.10
k8s-worker02 Ready <none> 2d21h v1.21.10 huanai
4.3 Multi-dimensional labels
4.3.1 Setting multiple labels
You can also add labels along other dimensions to distinguish different scenarios. For example, label k8s-master03 as zone A, test environment, game business:
[root@k8s-master01 ~]# kubectl label node k8s-master03 zone=A env=test bussiness=game
node/k8s-master03 labeled
[root@k8s-master01 ~]# kubectl get nodes k8s-master03 --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-master03 Ready <none> 2d21h v1.21.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,bussiness=game,env=test,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master03,kubernetes.io/os=linux,zone=A
4.3.2 Showing selected labels for all nodes
[root@k8s-master01 ~]# kubectl get nodes -L region,zone
NAME STATUS ROLES AGE VERSION REGION ZONE
k8s-master01 Ready <none> 2d21h v1.21.10
k8s-master02 Ready <none> 2d21h v1.21.10
k8s-master03 Ready <none> 2d21h v1.21.10 A
k8s-worker02 Ready <none> 2d21h v1.21.10 huanai
4.3.3 Finding nodes with region=huanai
[root@k8s-master01 ~]# kubectl get nodes -l region=huanai
NAME STATUS ROLES AGE VERSION
k8s-worker02 Ready <none> 2d21h v1.21.10
4.3.4 Modifying a label
[root@k8s-master01 ~]# kubectl label node k8s-master03 bussiness=ad --overwrite=true
node/k8s-master03 labeled
Adding --overwrite=true makes the command overwrite the label's existing value:
[root@k8s-master01 ~]# kubectl get nodes -L bussiness
NAME STATUS ROLES AGE VERSION BUSSINESS
k8s-master01 Ready <none> 2d21h v1.21.10
k8s-master02 Ready <none> 2d21h v1.21.10
k8s-master03 Ready <none> 2d21h v1.21.10 ad
k8s-worker02 Ready <none> 2d21h v1.21.10
4.3.5 Deleting a label
Append a minus sign to the key to remove the label:
[root@k8s-master02 ~]# kubectl label node k8s-worker02 region-
node/k8s-worker02 labeled
4.3.6 Label selectors
There are two main classes of label selectors:
- Equality-based: =, !=
- Set-based: KEY in (VALUE1, VALUE2, …)
[root@k8s-master01 ~]# kubectl label node k8s-master02 env=test1
node/k8s-master02 labeled
[root@k8s-master01 ~]# kubectl label node k8s-master03 env=test2
node/k8s-master03 labeled
[root@k8s-master01 ~]# kubectl get node -l "env in(test1,test2)"
NAME STATUS ROLES AGE VERSION
k8s-master02 Ready <none> 2d21h v1.21.10
k8s-master03 Ready <none> 2d21h v1.21.10
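The selector semantics demonstrated above can be sketched locally. This is an illustration of how equality-based and set-based matching behave, not kubectl's implementation; the node names and labels mirror the examples in this section:

```python
# Illustration of label-selector matching semantics (not kubectl's code).
# Labels mirror the examples in this section.
nodes = {
    "k8s-master02": {"env": "test1"},
    "k8s-master03": {"env": "test2", "zone": "A", "bussiness": "ad"},
    "k8s-worker02": {"region": "huanai"},
}

def match_eq(labels, key, value):
    """Equality-based selector: key=value."""
    return labels.get(key) == value

def match_in(labels, key, values):
    """Set-based selector: key in (v1, v2, ...)."""
    return labels.get(key) in values

# Equivalent of: kubectl get node -l "env in (test1,test2)"
selected = sorted(n for n, l in nodes.items()
                  if match_in(l, "env", {"test1", "test2"}))
print(selected)  # ['k8s-master02', 'k8s-master03']
```

Note that a node with no value for the key (such as k8s-worker02 here) simply never matches an equality or `in` selector, which is why the filtered output lists only the labeled nodes.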