KubeEdge Deployment Quick Guide: Edge Node Onboarding (Pitfall Notes)
Posted by 青花椒
Test environment
Cloud side:
- OS: Ubuntu Server 20.04.1 LTS 64bit
- Kubernetes: v1.19.8
- Network plugin: calico v3.16.3
- Cloudcore: kubeedge/cloudcore:v1.6.1
Edge side:
- OS: Ubuntu Server 18.04.5 LTS 64bit
- EdgeCore: v1.19.3-kubeedge-v1.6.1
docker:
- version: 20.10.7
- cgroupDriver: systemd
Edge node registration QuickStart:
References:
https://docs.kubeedge.io/en/d...
https://docs.kubeedge.io/en/d...
Get the token from cloudcore
kubectl get secret -nkubeedge tokensecret -o=jsonpath='{.data.tokendata}' | base64 -d
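The token is stored base64-encoded in the secret's .data field, which is why the jsonpath output is piped through base64 -d. A minimal illustration of just the decoding step (the encoded string here is a made-up sample, not a real KubeEdge token):

```shell
# Secret .data fields are base64-encoded; decode to recover the raw token.
# "ZWRnZS10b2tlbg==" is a made-up sample value, not a real token.
encoded="ZWRnZS10b2tlbg=="
token=$(printf '%s' "$encoded" | base64 -d)
echo "$token"   # → edge-token
```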
Configure edgecore
If installing from binaries, first generate the initial, minimal edgecore config file: edgecore --minconfig > edgecore.yaml
This config is well suited to anyone just starting out with KubeEdge; it is about as lean as the configuration gets.
Modify the key settings (only the key ones are listed here):

```yaml
modules:
  edgeHub:
    # ...
    httpServer: https://<cloudcore HttpServer listen address>:<port, default 10002>
    token: <token string obtained in the first step>
    websocket:
      # ...
      server: <cloudcore listen address>:<port, default 10000>
  # ...
  edged:
    cgroupDriver: systemd      # must match docker's native.cgroupdriver
    # ...
    hostnameOverride: edge01   # the name this node registers with cloudcore under
    nodeIP: <this node's IP>   # defaults to the local IP; double-check on multi-NIC hosts
  # ...
  eventBus:
    mqttMode: 0                # use the internal mqtt service
  # ...
```
If installing with keadm, run:
keadm join --cloudcore-ipport=<cloudcore listen IP>:<port, default 10002> --token=<the token string obtained above>
After this runs, the edgecore node is managed via systemctl, added to the boot startup items, and started; at this point its running state is not necessarily healthy.
As above, review and adjust the config file, which is auto-generated at /etc/kubeedge/config/edgecore.yaml
Start the edgecore service
For a binary install: nohup ./edgecore --config edgecore.yaml > edgecore.log 2>&1 &
For a keadm install:
systemctl restart edgecore
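One detail in the binary-install command: shell redirection order matters. Writing cmd 2>&1 > file duplicates stderr onto the terminal before stdout is redirected, so errors never reach the log; the file redirect must come first. A self-contained demonstration:

```shell
# emit writes one line to stdout and one to stderr.
emit() { echo out; echo err >&2; }

emit > both.log 2>&1           # stdout to file first, then stderr follows stdout: both lines logged
emit 2>&1 > stdout-only.log    # stderr copied to the terminal first; only stdout reaches the file

grep -c . both.log          # → 2
grep -c . stdout-only.log   # → 1
```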
Verify the node joined
On the cloud-side master node, run:

root@master01:/home/ubuntu# kubectl get nodes
NAME       STATUS   ROLES        AGE   VERSION
edge01     Ready    agent,edge   10h   v1.19.3-kubeedge-v1.6.1
master01   Ready    master       53d   v1.19.8
master02   Ready    master       53d   v1.19.8
master03   Ready    master       53d   v1.19.8
node01     Ready    worker       53d   v1.19.8
node02     Ready    worker       53d   v1.19.8
I enabled automatic registration on cloudcore, so the edge node shows up as already registered.
Troubleshooting the pitfalls
However, when checking the pods running on the edge node, I found it had automatically started calico, kube-proxy, and nodelocaldns pods:
root@master01:/home/ubuntu# kubectl get pod -A -o wide | grep edge01
kube-system calico-node-l2h8l 0/1 Init:Error 2 52s 172.31.100.15 edge01 <none> <none>
kube-system kube-proxy-m6rbk 1/1 Running 0 2m22s 172.31.100.15 edge01 <none> <none>
kube-system nodelocaldns-hr7fk 0/1 Error 2 30s 172.31.100.15 edge01 <none> <none>
Of these:
- calico failed during initialization with Init:Error
- nodelocaldns hit Error; reason: ContainersNotReady
- kube-proxy deployed successfully
Note: Other articles online claim that a failed network-plugin deployment does not affect using the edge node. In this test environment it does: I pushed a deployment running nginx as a test, and its Pod stayed Pending indefinitely.
Cause analysis:
- The calico init error: an issue from December 2020 says CNI support is still under development and not yet available.
- Still to be investigated; my guess is a compatibility or network-plugin problem.
- kube-proxy must not run on an edge node; if it is installed, the edgecore startup log reports the error:
Failed to check the running environment: Kube-proxy should not running on edge node when running edgecore
https://github.com/kubeedge/k...
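Given that error, a quick pre-flight check on the edge host can save a restart loop. A small sketch of my own (not part of KubeEdge; assumes pgrep from procps is installed):

```shell
# Fail early if a kube-proxy process is running on this host,
# since edgecore refuses to start alongside it.
if pgrep -x kube-proxy > /dev/null 2>&1; then
  echo "kube-proxy is running; stop and remove it before starting edgecore"
else
  echo "no kube-proxy process found; safe to start edgecore"
fi
```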
A closer look shows these pods are managed by daemonsets:

root@master01:/home/ubuntu# kubectl get daemonset -A
NAMESPACE     NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   calico-node    5         5         5       5            5           kubernetes.io/os=linux   53d
kube-system   kube-proxy     5         5         5       5            5           kubernetes.io/os=linux   53d
kube-system   nodelocaldns   5         5         5       5            5           <none>                   53d
Edit their yaml:

kubectl edit daemonset -n kube-system calico-node
kubectl edit daemonset -n kube-system kube-proxy
kubectl edit daemonset -n kube-system nodelocaldns
Add a node affinity rule (it belongs in the pod template, i.e. under spec.template.spec):

```yaml
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/edge
                    operator: DoesNotExist
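The same affinity rule can also be applied non-interactively with kubectl patch instead of kubectl edit. A sketch that prints one patch command per daemonset for review, using the daemonset names from the kube-system listing above (drop the leading echo to actually run them):

```shell
# Merge patch that keeps pods off nodes labeled node-role.kubernetes.io/edge.
patch='{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}}}}'

# Print the patch command for each daemonset; remove `echo` to apply for real.
for ds in calico-node kube-proxy nodelocaldns; do
  echo kubectl -n kube-system patch daemonset "$ds" --type merge -p "$patch"
done
```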
Stop the edgecore service
root@edge01:/usr/local/edge# systemctl stop edgecore
Remove the edge node from the k8s cluster

root@master01:/home/ubuntu# kubectl drain edge01 --delete-local-data --force --ignore-daemonsets
root@master01:/home/ubuntu# kubectl delete node edge01
Restart the docker service on the edge node
root@edge01:/usr/local/edge# systemctl restart docker
Restart edgecore
root@edge01:/usr/local/edge# systemctl start edgecore
Now the edge node re-registers successfully, and no pods are running on it:

root@master01:/home/ubuntu# kubectl get nodes
NAME       STATUS   ROLES        AGE    VERSION
edge01     Ready    agent,edge   8m3s   v1.19.3-kubeedge-v1.6.1
master01   Ready    master       53d    v1.19.8
master02   Ready    master       53d    v1.19.8
master03   Ready    master       53d    v1.19.8
node01     Ready    worker       53d    v1.19.8
node02     Ready    worker       53d    v1.19.8
root@master01:/home/ubuntu# kubectl get pod -A -o wide | grep edge01
root@master01:/home/ubuntu#
Note:
By the same logic, any resource that should not run on edge nodes also needs this affinity configured. When adding new resources (especially daemonsets and cronjobs), choose the target nodes carefully; otherwise pods will error out, and pods with restartPolicy set to Always will restart endlessly. Besides editing by hand, the following script can do it (I have not verified it; personally I think writing a script per resource type is better, so you know exactly what was changed):
https://github.com/kubesphere...

```bash
#!/bin/bash
NodeSelectorPatchJson='{"spec":{"template":{"spec":{"nodeSelector":{"node-role.kubernetes.io/master": "","node-role.kubernetes.io/worker": ""}}}}}'
NoShedulePatchJson='{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}}}}'

edgenode="edgenode"
if [ $1 ]; then
    edgenode="$1"
fi

namespaces=($(kubectl get pods -A -o wide | egrep -i $edgenode | awk '{print $1}'))
pods=($(kubectl get pods -A -o wide | egrep -i $edgenode | awk '{print $2}'))
length=${#namespaces[@]}

for ((i = 0; i < $length; i++)); do
    ns=${namespaces[$i]}
    pod=${pods[$i]}
    resources=$(kubectl -n $ns describe pod $pod | grep "Controlled By" | awk '{print $3}')
    echo "Patching for ns:"${namespaces[$i]}",resources:"$resources
    kubectl -n $ns patch $resources --type merge --patch "$NoShedulePatchJson"
    sleep 1
done
```
Try a deployment on the edge node
- Write a deployment running nginx as a test
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-edge
  namespace: test-ns
  labels:
    app: nginx-edge
  annotations:
    deployment.kubernetes.io/revision: '1'
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-edge
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-edge
    spec:
      containers:
        - name: nginx-edge01
          image: 'nginx:latest'
          ports:
            - name: tcp-80
              containerPort: 80
              protocol: TCP
          resources:
            limits:
              cpu: 300m
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 10Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/hostname: edge01
      serviceAccountName: default
      serviceAccount: default
      securityContext: {}
      affinity: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
```
- Check the deployed nginx

root@master01:/home/ubuntu# kubectl get pod -A -o wide | grep edge01
test-ns   nginx-edge-946d96f44-n2h8v   1/1   Running   0   40s   172.17.0.2   edge01   <none>   <none>
At this point, nginx has been deployed successfully on the edge side.