1 node(s) didn't match node selector, 2 node(s) didn't find available persistent volumes to bind
Posted by Locutus
1. Identifying the problem
- PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ghost
  namespace: ghost
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/mydrive/ghost-data/
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - rpi-mon-k8-worker
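A local PV simply points at a directory that must already exist on the selected node; before applying the manifest it is worth a quick sanity check from the workstation (the pi login is only an assumption about the Raspberry Pi's user):
# ssh pi@rpi-mon-k8-worker ls -ld /mnt/mydrive/ghost-data/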
- PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ghost
  namespace: ghost
  labels:
    pv: pv-ghost
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage
  selector:
    matchLabels:
      name: pv-ghost
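Note that spec.selector.matchLabels expects the PV to carry the label name=pv-ghost, which the PV manifest above does not define; whether a PV actually carries that label can be checked with:
# kubectl get pv pv-ghost --show-labels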
- StorageClass
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
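Because volumeBindingMode is WaitForFirstConsumer, the PVC will stay Pending until a Pod that uses it is actually scheduled; the mode in effect can be double-checked with:
# kubectl describe storageclass local-storage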
- Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-ghost
  namespace: ghost
  labels:
    env: prod
    app: ghost-app
spec:
  template:
    metadata:
      name: ghost-app-pod
      labels:
        app: ghost-app
        env: production
    spec:
      containers:
        - name: ghost
          image: arm32v7/ghost
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /var/lib/ghost/content
              name: ghost-blog-data
          securityContext:
            privileged: true
      volumes:
        - name: ghost-blog-data
          persistentVolumeClaim:
            claimName: pvc-ghost
      nodeSelector:
        beta.kubernetes.io/arch: arm
  replicas: 2
  selector:
    matchLabels:
      app: ghost-app
# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rpi-k8-workernode-2 Ready <none> 92d v1.15.0 192.168.100.50 <none> Raspbian GNU/Linux 9 (stretch) 4.19.42-v7+ docker://18.9.0
rpi-mon-k8-worker Ready <none> 91d v1.15.0 192.168.100.22 <none> Raspbian GNU/Linux 9 (stretch) 4.19.42-v7+ docker://18.9.0
udubuntu Ready master 92d v1.15.1 192.168.100.24 <none> Ubuntu 18.04.3 LTS 4.15.0-55-generic docker://19.3.4
# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
rpi-k8-workernode-2 Ready <none> 93d v1.15.0 beta.kubernetes.io/arch=arm,beta.kubernetes.io/os=linux,kubernetes.io/arch=arm,kubernetes.io/hostname=rpi-k8-workernode-2,kubernetes.io/os=linux
rpi-mon-k8-worker Ready <none> 93d v1.15.0 beta.kubernetes.io/arch=arm,beta.kubernetes.io/os=linux,kubernetes.io/arch=arm,kubernetes.io/hostname=rpi-mon-k8-worker,kubernetes.io/os=linux
udubuntu Ready master 93d v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=udubuntu,kubernetes.io/os=linux,node-role.kubernetes.io/master=
# kubectl describe pvc pvc-ghost -n ghost
Name: pvc-ghost
Namespace: ghost
StorageClass: manual
Status: Pending
Volume:
Labels: pv=pv-ghost
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"labels":{"pv":"pv-ghost"},"name":"pvc-ghost","namespace":"...
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 6s (x2 over 21s) persistentvolume-controller waiting for first consumer to be created before binding
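The PVC itself only reports WaitForFirstConsumer; the error from the title ("1 node(s) didn't match node selector, 2 node(s) didn't find available persistent volumes to bind") appears in the pending Pod's events instead, and can be seen with (the pod name is a placeholder):
# kubectl get pods -n ghost
# kubectl describe pod <pod_name> -n ghost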
2. Analyzing the problem
A nodeSelector is set in deployment-ghost.yaml, and one node (the amd64 master, udubuntu) does not match it, which accounts for the "1 node(s) didn't match node selector" part of the error. If the nodeSelector is removed from deployment-ghost.yaml, the Pod can be scheduled onto the node where the PV was created. As far as I know, Kubernetes will not schedule a Pod onto a worker node when the PV's labels do not match the PVC's selector.matchLabels, which is why the two remaining nodes report "didn't find available persistent volumes to bind".
In the PVC we can specify a label selector so that only PVs carrying the target labels are considered for binding; a PV whose labels do not match the selector will never be bound to that PVC. The PVC's label selector has two fields (a small example follows below):
- matchLabels: the PV must have a label with exactly the key and value given here.
- matchExpressions: a list of requirements, each made up of a key, a list of values, and an operator. Valid operators are In, NotIn, Exists, and DoesNotExist.
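For illustration only (the key environment and its values are made up), a matchExpressions selector in a PVC would look like this:
  selector:
    matchExpressions:
      - key: environment
        operator: In
        values:
          - production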
3. Solving the problem
Remove the nodeSelector from deployment-ghost.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-ghost
  namespace: ghost
  labels:
    env: prod
    app: ghost-app
spec:
  template:
    metadata:
      name: ghost-app-pod
      labels:
        app: ghost-app
        env: production
    spec:
      containers:
        - name: ghost
          image: arm32v7/ghost
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /var/lib/ghost/content
              name: ghost-blog-data
          securityContext:
            privileged: true
      volumes:
        - name: ghost-blog-data
          persistentVolumeClaim:
            claimName: pvc-ghost
  replicas: 2
  selector:
    matchLabels:
      app: ghost-app
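If you would rather keep a nodeSelector, for example to make explicit that the Pods must land on the node holding the local volume, an alternative sketch (not the fix used here) is to select the PV's node by hostname in the Pod template:
      nodeSelector:
        kubernetes.io/hostname: rpi-mon-k8-worker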
In the PV's YAML, add the label name: pv-ghost so that the PVC's selector.matchLabels can match it:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ghost
  labels:
    name: pv-ghost
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/mydrive/ghost-data/
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - rpi-mon-k8-worker
There is no need to set metadata.namespace in the PV's YAML: PersistentVolumes are cluster-scoped resources, so a namespace does not apply to them. The PVC, on the other hand, is namespaced, so its YAML does specify one. To verify the result:
# kubectl get pv
# kubectl get pvc -n <namespace>
# kubectl describe pv <pv_name>
# kubectl describe pvc <pvc_name> -n <namespace>
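To roll the fix out end to end, re-apply the manifests and confirm that the PVC binds once a Pod is scheduled (the file names pv-ghost.yaml and deployment-ghost.yaml are assumptions; use whatever your manifests are actually called):
# kubectl apply -f pv-ghost.yaml
# kubectl apply -f deployment-ghost.yaml
# kubectl get pvc -n ghost
# kubectl get pods -n ghost -o wide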