Kubernetes NodePort connection refused

Posted: 2020-11-06 10:09:07

【Problem description】:

I have a 3-node cluster running in a VirtualBox environment. I created the cluster with the flag

kubeadm init --pod-network-cidr=10.244.0.0/16

Then I installed flannel and joined the remaining two nodes to the cluster. After that I created a new VM to host a private registry for the Docker images. Next, I used this .yaml to create the deployment for my application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gunicorn
spec:
  selector:
    matchLabels:
      app: gunicorn
  replicas: 1
  template:
    metadata:
      labels:
        app: gunicorn
    spec:
      imagePullSecrets:
      - name: my-registry-key
      containers:
      - name: ipcheck2
        image: 192.168.2.4:8083/ipcheck2:1
        imagePullPolicy: Always
        command:
        - sleep
        - "infinity"
        ports:
        - containerPort: 8080
          hostPort: 8080
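
The deployment was applied the usual way, roughly like this (the local file name is an assumption):

kubectl apply -f deployment.yaml
kubectl get pods -o wide            # shows which node the pod landed on and its pod IP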

The image was built from the following Dockerfile and pushed to the repo:

FROM python:3

EXPOSE 8080

ADD /IP_check/ /

WORKDIR /

RUN pip install pip --upgrade

RUN pip install -r requirements.txt

CMD ["gunicorn", "IP_check.wsgi", "-b :8080"]

At this point I can tell that if I run the container directly from the Docker engine and expose this port, I am able to connect to the application.
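
The Docker-side check was roughly the following (the exact flags are an assumption; this is just the shape of the test):

docker run --rm -p 8080:8080 192.168.2.4:8083/ipcheck2:1
curl http://localhost:8080          # responds as long as gunicorn is listening on 8080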

Next I created a NodePort service for my app:

apiVersion: v1
kind: Service
metadata:
  name: ipcheck
spec:
  selector:
    app: gunicorn
  ports:
  - port: 70
    targetPort: 8080
    nodePort: 30000
  type: NodePort
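
A quick sanity check of the service and its endpoints (assuming kubectl access on the master):

kubectl get svc ipcheck
kubectl get endpoints ipcheck       # should list <pod-ip>:8080 once the selector matches a running pod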

And this is where the problem starts. I checked kubectl describe pods to see which node is running the pod with my application. Then I tried to reach the application with curl 192.168.2.3:30000 (the node running the pod), but it does not work:

curl: (7) Failed connect to 192.168.2.3:30000; Connection refused

I also deployed the hello-world application from the kubernetes documentation and exposed it with a NodePort. That did not work either.

Does anyone know why I cannot reach the pod from inside the cluster, or from outside the cluster via the NodePort?
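
One quick way to check whether anything is actually listening on port 8080 inside the pod (pod name and pod IP are taken from the describe output below; the python one-liner is a sketch that relies on the image being based on python:3):

kubectl exec gunicorn-5f7f485585-wjdnf -- python -c "import socket; print(socket.socket().connect_ex(('127.0.0.1', 8080)))"
# prints 0 if something is bound to port 8080, a non-zero errno otherwise

curl http://10.244.1.20:8080        # pod IP, run from any cluster node on the flannel network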

OS: CentOS 7

IP addresses:

Node1 192.168.2.1   -   Master
Node2 192.168.2.2   -   Worker
Node3 192.168.2.3   -   Worker
Node4 192.168.2.4   -   Private repo (outside of cluster)

Pod description:

Name:         gunicorn-5f7f485585-wjdnf
Namespace:    default
Priority:     0
Node:         node3/192.168.2.3
Start Time:   Thu, 16 Jul 2020 18:01:54 +0200
Labels:       app=gunicorn
              pod-template-hash=5f7f485585
Annotations:  <none>
Status:       Running
IP:           10.244.1.20
IPs:
  IP:           10.244.1.20
Controlled By:  ReplicaSet/gunicorn-5f7f485585
Containers:
  ipcheck2:
    Container ID:  docker://9aa18f3fff1d13dfc76355dde72554fd3af304435c9b7fc4f7365b4e6ac9059a
    Image:         192.168.2.4:8083/ipcheck2:1
    Image ID:      docker-pullable://192.168.2.4:8083/ipcheck2@sha256:e48469c6d1bec474b32cd04ca5ccbc057da0377dff60acc37e7fa786cbc39526
    Port:          8080/TCP
    Host Port:     8080/TCP
    Command:
      sleep
      infinity
    State:          Running
      Started:      Thu, 16 Jul 2020 18:01:55 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9q77c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-9q77c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9q77c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  40m   default-scheduler  Successfully assigned default/gunicorn-5f7f485585-wjdnf to node3
  Normal  Pulling    40m   kubelet, node3     Pulling image "192.168.2.4:8083/ipcheck2:1"
  Normal  Pulled     40m   kubelet, node3     Successfully pulled image "192.168.2.4:8083/ipcheck2:1"
  Normal  Created    40m   kubelet, node3     Created container ipcheck2
  Normal  Started    40m   kubelet, node3     Started container ipcheck2

Service description:

Name:                     ipcheck
Namespace:                default
Labels:                   <none>
Annotations:              Selector:  app=gunicorn
Type:                     NodePort
IP:                       10.111.7.129
Port:                     <unset>  70/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30000/TCP
Endpoints:                10.244.1.20:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Node3 iptables:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain FORWARD (policy DROP)
target     prot opt source               destination
KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  10.244.0.0/16        anywhere
ACCEPT     all  --  anywhere             10.244.0.0/16

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain DOCKER (1 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

Chain KUBE-EXTERNAL-SERVICES (1 references)
target     prot opt source               destination
REJECT     tcp  --  anywhere             anywhere             /* default/gunicorn-ipcheck: has no endpoints */ ADDRTYPE match dst-type LOCAL tcp dpt:30384 reject-with icmp-port-unreachable

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             ctstate INVALID
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination

Chain KUBE-PROXY-CANARY (0 references)
target     prot opt source               destination

Chain KUBE-SERVICES (3 references)
target     prot opt source               destination
REJECT     tcp  --  anywhere             10.104.59.152        /* default/gunicorn-ipcheck: has no endpoints */ tcp dpt:webcache reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere             192.168.2.240        /* default/gunicorn-ipcheck: has no endpoints */ tcp dpt:webcache reject-with icmp-port-unreachable

Output of "ip a" on Node3:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:a4:1d:ff brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic enp0s3
       valid_lft 86181sec preferred_lft 86181sec
    inet6 fe80::1272:64b5:b03b:2b75/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:14:7f:ad brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.3/24 brd 192.168.2.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::2704:2b92:cc02:e88/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:a1:17:41:be brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 6e:c6:9c:0f:ab:55 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::6cc6:9cff:fe0f:ab55/64 scope link
       valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 4a:66:88:71:56:6a brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::4866:88ff:fe71:566a/64 scope link
       valid_lft forever preferred_lft forever
7: veth0ded1d29@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether 22:c2:6b:c7:cc:7a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::20c2:6bff:fec7:cc7a/64 scope link
       valid_lft forever preferred_lft forever

Endpoints:

ipcheck            10.244.1.21:8080   51m
kubernetes         192.168.2.1:6443   9d

【Comments】:

What do you mean by "if I run the container from the Docker engine and expose this port I am able to connect with the app"? Does it work with docker run? Could you try deploying the ClusterIP example and check whether it works internally?

Yes, when I use docker run I can reach the application. I tried to deploy that example: I applied the yaml with the nginx deployment, the pod IPs look similar, then I created the service from the given yaml and the svc describe also looks similar. I tried to curl it, but again it just hung the terminal and after a while the connection timed out.

I would say it is a CentOS or VirtualBox networking problem. Look at how they configured the CentOS and VirtualBox networking here to make it work; maybe you will spot something there that is blocking you. As far as I understand, all of your pods are running and healthy?

I just (by accident) found out why the NodePort was not working, or rather why I could not reach the pod even though the service was fine. I simply forgot to remove the "sleep" command from the deployment manifest, so the container ran "sleep" instead of starting gunicorn, which means gunicorn never started at all. Thanks a lot to everyone for the help anyway. (By the way, it is strange that I could not reach the pod from the example either.)
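
Based on that last comment, the fix is simply to drop the command override so the image's own CMD (gunicorn) actually starts; a sketch of the corrected container spec:

      containers:
      - name: ipcheck2
        image: 192.168.2.4:8083/ipcheck2:1
        imagePullPolicy: Always
        # no "command:" override here, so the Dockerfile CMD (gunicorn ... -b :8080) runs
        ports:
        - containerPort: 8080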

【Answer 1】:

I hope you are able to reach it internally via the ClusterIP, i.e. curl http://10.111.7.129:70

It looks like the port is not open. Try opening port 30000 at the VirtualBox level, or open it in the security group if you are using AKS or IBM Cloud.
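
On CentOS 7 with firewalld enabled, opening the NodePort would look roughly like this (only relevant if firewalld is actually running on the worker nodes):

firewall-cmd --permanent --add-port=30000/tcp
firewall-cmd --reload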

Then try the NodePort again.

Thanks, VB

【Discussion】:

curl http://10.111.7.129:70 does not work for me; I tried it from all of the nodes and it just hangs the terminal. I have firewalld turned off, but I added the rule anyway and still nothing.
