Calico IPIP Cross-Node Communication
Posted by whale_life
Current Environment
[root@master ~]# calicoctl get ippool
NAME                  CIDR            SELECTOR
default-ipv4-ippool   10.244.0.0/16   all()
[root@master ~]# calicoctl get ippool default-ipv4-ippool -o yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  creationTimestamp: "2022-05-14T09:21:32Z"
  name: default-ipv4-ippool
  resourceVersion: "10306"
  uid: 89d1b7f3-bacc-4f6b-8409-222fb00a4744
spec:
  allowedUses:
  - Workload
  - Tunnel
  blockSize: 26
  cidr: 10.244.0.0/16
  ipipMode: Always
  natOutgoing: true
  nodeSelector: all()
  vxlanMode: Never
Route Aggregation
blockSize: 26
Although Calico by default limits each block to 64 IP addresses (blockSize: 26), a node that needs more than 64 addresses is simply allocated another block.
On the node's egress side, routes are aggregated, for example by summarizing /26 blocks into a /25 or /24, which greatly reduces the number of routing entries.
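As a purely illustrative sketch (the block addresses and next hop below are made up, not taken from the cluster above), two adjacent /26 routes toward the same next hop can be collapsed into a single /25 entry:
# two adjacent /26 blocks pointing at the same node
ip route add 10.244.1.0/26 via 192.168.100.11
ip route add 10.244.1.64/26 via 192.168.100.11
# one aggregated /25 route covers both blocks
ip route add 10.244.1.0/25 via 192.168.100.11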
Always or CrossSubnet
ipipMode: Always
Always: cross-node traffic is always IPIP-encapsulated.
CrossSubnet: nodes within the same layer-2 subnet reach each other via direct local routing, while traffic between nodes on different subnets (across layer 3) is IPIP-encapsulated.
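For reference, the mode can be switched on a live pool with calicoctl patch; a minimal sketch, assuming the default pool shown above:
calicoctl patch ippool default-ipv4-ippool -p '{"spec":{"ipipMode":"CrossSubnet"}}'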
BGP
IPIP mode relies on BGP to establish adjacencies between nodes; VXLAN mode does not.
[root@master ~]# calicoctl node status
Calico process is running.
IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 192.168.0.81 | node-to-node mesh | up    | 01:19:42 | Established |
| 192.168.0.82 | node-to-node mesh | up    | 01:19:25 | Established |
+--------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
Simulating IPIP Communication on a Server
To make this easier to understand, we simulate the setup directly on a server, where tunl1 and tunl2 communicate with each other in IPIP mode.
1. Enable IP forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
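This setting does not survive a reboot; a common way to make it persistent (our own addition, not one of the original steps) is:
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p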
2. Create two network namespaces
ip netns add ns1
ip netns add ns2
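Both namespaces should now show up in the namespace list (an optional check, not in the original):
ip netns list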
3. Create two veth pairs
ip link add v1 type veth peer name v1_p
ip link add v2 type veth peer name v2_p
ip link set v1 netns ns1
ip link set v2 netns ns2
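At this point v1 and v2 are only visible from inside their namespaces, which can be confirmed with (an optional check, not in the original):
ip netns exec ns1 ip link show v1
ip netns exec ns2 ip link show v2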
4. Assign IP addresses to both veth pairs
ip address add 10.10.10.2/24 dev v1_p
ip address add 10.10.20.2/24 dev v2_p
ip link set v1_p up
ip link set v2_p up
ip netns exec ns1 ip address add 10.10.10.1/24 dev v1
ip netns exec ns1 ip link set v1 up
ip netns exec ns1 ip link set lo up
ip netns exec ns2 ip address add 10.10.20.1/24 dev v2
ip netns exec ns2 ip link set v2 up
ip netns exec ns2 ip link set lo up
Verify that the configuration took effect
[root@70-tem ~]# ifconfig v1_p | grep inet
inet 10.10.10.2 netmask 255.255.255.0 broadcast 0.0.0.0
[root@70-tem ~]# ifconfig v2_p | grep inet
inet 10.10.20.2 netmask 255.255.255.0 broadcast 0.0.0.0
[root@70-tem ~]# ip netns exec ns1 ifconfig | grep inet
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
inet 10.10.10.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::5c56:70ff:fe77:df39 prefixlen 64 scopeid 0x20<link>
[root@70-tem ~]# ip netns exec ns2 ifconfig | grep inet
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
inet 10.10.20.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::4016:75ff:fe64:107b prefixlen 64 scopeid 0x20<link>
5. Add routes in the namespaces
By default, a newly created namespace has no outbound route, so we need to add one ourselves.
Newly created Pods do not have this problem, because the CNI plugin creates the outbound route for us.
[root@70-tem ~]# ip netns exec ns1 route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.10.10.0 0.0.0.0 255.255.255.0 U 0 0 0 v1
Add the routes
ip netns exec ns1 route add -net 10.10.20.0 netmask 255.255.255.0 gateway 10.10.10.2
ip netns exec ns2 route add -net 10.10.10.0 netmask 255.255.255.0 gateway 10.10.20.2
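The commands above use the classic route tool; the equivalent iproute2 commands would be (our own alternative, not part of the original steps):
ip netns exec ns1 ip route add 10.10.20.0/24 via 10.10.10.2
ip netns exec ns2 ip route add 10.10.10.0/24 via 10.10.20.2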
Check the routes
[root@70-tem ~]# ip netns exec ns1 route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.10.20.0 10.10.10.2 255.255.255.0 UG 0 0 0 v1
10.10.10.0 0.0.0.0 255.255.255.0 U 0 0 0 v1
[root@70-tem ~]# ip netns exec ns2 route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.10.10.0 10.10.20.2 255.255.255.0 UG 0 0 0 v2
10.10.20.0 0.0.0.0 255.255.255.0 U 0 0 0 v2
Ping from v1 to v2 to test connectivity
[root@70-tem ~]# ip netns exec ns1 ping -c 1 10.10.20.1
PING 10.10.20.1 (10.10.20.1) 56(84) bytes of data.
64 bytes from 10.10.20.1: icmp_seq=1 ttl=63 time=0.048 ms
--- 10.10.20.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
6. Create the tunnel devices in IPIP mode
In each namespace, create a tunnel device and set the tunnel mode to ipip. The tunnel endpoints also need to be configured: remote and local specify the outer IPs of the tunnel, while the inner addresses are assigned with the ip address add ... peer ... form.
ip netns exec ns1 ip tunnel add tunl1 mode ipip remote 10.10.20.1 local 10.10.10.1
ip netns exec ns1 ip link set tunl1 up
ip netns exec ns1 ip address add 10.10.100.10 peer 10.10.200.20 dev tunl1
ip netns exec ns2 ip tunnel add tunl2 mode ipip remote 10.10.10.1 local 10.10.20.1
ip netns exec ns2 ip link set tunl2 up
ip netns exec ns2 ip address add 10.10.200.20 peer 10.10.100.10 dev tunl2
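Assigning the address with peer also installs a host route to the remote inner address via the tunnel; this can be checked with (an optional verification, not in the original):
ip netns exec ns1 ip route show dev tunl1
ip netns exec ns2 ip route show dev tunl2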
Check the configuration
[root@70-tem ~]# ip netns exec ns1 ifconfig tunl1
tunl1: flags=209<UP,POINTOPOINT,RUNNING,NOARP> mtu 1480
inet 10.10.100.10 netmask 255.255.255.255 destination 10.10.100.10
inet6 fe80::5efe:a0a:a01 prefixlen 64 scopeid 0x20<link>
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 7 dropped 0 overruns 0 carrier 0 collisions 0
[root@70-tem ~]# ip netns exec ns2 ifconfig tunl2
tunl2: flags=209<UP,POINTOPOINT,RUNNING,NOARP> mtu 1480
inet 10.10.200.20 netmask 255.255.255.255 destination 10.10.200.20
inet6 fe80::5efe:a0a:1401 prefixlen 64 scopeid 0x20<link>
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 4 dropped 0 overruns 0 carrier 0 collisions 0
7. Verify with a packet capture
ip netns exec ns1 ping -c 1 10.10.200.20
Capture on the tunl device to see the original (inner) packets
ip netns exec ns1 tcpdump -pne -i tunl1 -w tunl1.cap
We can also capture on v1 or v1_p; the two ends of a veth pair carry identical packets.
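For example (the capture file name v1.cap is our own choice; rerun the ping above while this capture is running):
ip netns exec ns1 tcpdump -pne -i v1 -w v1.cap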
Comparing the two captures shows the original packet and the packet after encapsulation: the outer capture is in IP-in-IP form, with one IP header carried inside another.
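The saved files can then be read back and compared with tcpdump; a minimal sketch using the file names above:
# inner, unencapsulated ICMP packets
tcpdump -nr tunl1.cap
# outer packets, showing the IPIP encapsulation
tcpdump -nr v1.cap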