Cannot use testpmd to send pkts to VM in qemu process through virtio

Posted: 2021-12-31 23:50:02

I am trying to test vhost-user/virtio-net. I use testpmd to send pkts (in txonly mode) to a QEMU VM, but testpmd shows that all pkts are dropped. Here is my environment:

DPDK version: 19.08
(HOST) Hugepagesize=1GB Hugepages=16
# testpmd cmd
testpmd -l 0-3 -n 4 --socket-mem 1024 --vdev 'net_vhost0,iface=/tmp/sock0,queues=1' -- -i
# qemu cmd
qemu-system-x86_64 /opt/vm/centos/vm.img \
        -cpu qemu64,+ssse3,+sse4.1,+sse4.2 \
        --enable-kvm \
        --nographic -vnc :0 \
        -smp 4 \
        -m 4096 -mem-path /dev/hugepages,share=on -mem-prealloc \
        -chardev socket,id=chr0,path=/tmp/sock0 \
        -netdev vhost-user,id=net0,chardev=chr0,queues=1,vhostforce \
        -device virtio-net-pci,netdev=net0,ioeventfd=on,mac=52:54:00:00:00:14 \
        -netdev tap,id=tapnet0,ifname=tap1,script=no,downscript=no \
        -device e1000,netdev=tapnet0
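
A quick sanity check for the host side is to confirm that the 1 GB hugepages are actually reserved and that hugetlbfs is mounted where -mem-path points (standard Linux commands; /dev/hugepages is just the path used above):

# verify reserved/free hugepages on the host
grep -i huge /proc/meminfo
# verify the hugetlbfs mount backing /dev/hugepages
mount | grep hugetlbfs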

The link is established and the virt-queues are initialized:

VHOST_CONFIG: new vhost user connection is 31
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0x7
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:32
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:33
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0

Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1

Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x7820ffc3
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region 0, size: 0x40000000
     guest physical addr: 0x100000000
     guest virtual  addr: 0x7f9f7fe00000
     host  virtual  addr: 0x7f83d8000000
     mmap addr : 0x7f8318000000
     mmap size : 0x100000000
     mmap align: 0x1000
     mmap off  : 0xc0000000
VHOST_CONFIG: guest memory region 1, size: 0xa0000
     guest physical addr: 0x0
     guest virtual  addr: 0x7f9ebfe00000
     host  virtual  addr: 0x7f8421788000
     mmap addr : 0x7f8421788000
     mmap size : 0xa0000
     mmap align: 0x1000
     mmap off  : 0x0
VHOST_CONFIG: guest memory region 2, size: 0xbff40000
     guest physical addr: 0xc0000
     guest virtual  addr: 0x7f9ebfec0000
     host  virtual  addr: 0x7f82580c0000
     mmap addr : 0x7f8258000000
     mmap size : 0xc0000000
     mmap align: 0x1000
     mmap off  : 0xc0000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:37
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:38
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:32
VHOST_CONFIG: virtio is now ready for processing.

Port 0: link state change event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:39

Set the NIC to promiscuous mode inside the VM:

# ifconfig eth0 promisc up
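
For reference, the equivalent iproute2 commands on systems without ifconfig are:

ip link set dev eth0 up
ip link set dev eth0 promisc on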

testpmd shows that the link status is up:

testpmd> show port info all

********************* Infos for port 0  *********************
MAC address: 56:48:4F:53:54:00
Device name: net_vhost0
Driver name: net_vhost
Devargs: iface=/tmp/sock0,queues=1
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: disabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload: 
  strip off 
  filter off 
  qinq(extend) off 
No RSS offload flow type is supported.
Minimum size of RX buffer: 0
Maximum configurable length of RX packet: 4294967295
Current number of RX queues: 1
Max possible RX queues: 1
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 1
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
Max segment number per packet: 65535
Max segment number per MTU/TSO: 65535

testpmd starts sending pkts:

testpmd> set fwd txonly
Set txonly packet forwarding mode
testpmd> start
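
To see whether any of this traffic actually arrives in the guest, the virtio interface can be watched from inside the VM while testpmd is transmitting (standard tools; eth0 is the interface name used above):

# inside the VM: per-interface RX counters
ip -s link show dev eth0
# or capture a few frames directly
tcpdump -ni eth0 -c 10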

Stop testpmd:

Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 17056768      TX-total: 17056768
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 17056768      TX-total: 17056768
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

All the pkts are dropped.

Am I missing something?
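
Before changing the setup, the per-port and extended counters in testpmd can help narrow down where the drops are accounted (standard testpmd commands; how much detail the vhost PMD reports depends on the DPDK version):

testpmd> show port stats 0
testpmd> show port xstats 0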

Comments:

Please update the following information: 1) DPDK version, 2) hugepage size, 3) the reason why QEMU does not use shared memory as the backend?

(1) DPDK version: 19.08, qemu: 2.6.2 (2) hugepages on host: Hugepagesize=1GB Hugepages=16 (3) qemu uses -mem-path /dev/hugepages,share=on, so I thought I was already using shared memory

I can run the same setup with DPDK 19.11 LTS and 20.11 LTS, but with NUMA-backed memory. I can also write this up as an answer.

Forgive my ignorance, but how do I use NUMA-backed memory?

Answer 1:

It looks like an issue with the DPDK version or with the pages not being NUMA/memory-backend backed. The same setup works with DPDK 19.11 LTS and 20.11 LTS.

DPDK application:

rm /tmp/sock0; sudo ./build/l2fwd --legacy-mem -l 1-2 --no-pci --vdev=net_vhost0,iface=/tmp/sock0 --vdev=net_tap0 -m 1024 -- -p 3 -T 1 --no-mac-updating

QEMU:

taskset -c 4-9 qemu-system-x86_64 -cpu host -enable-kvm -m 1024 -smp 4,sockets=1,cores=4,threads=1 \
        -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
        -numa node,memdev=mem,nodeid=0 -mem-prealloc \
        -name test \
        -no-reboot \
        -vnc none \
        -nographic \
        -net user,hostfwd=tcp::10023-:22 -net nic \
        -chardev socket,id=charnet0,path=/tmp/sock0 \
        -netdev type=vhost-user,chardev=charnet0,queues=1,id=hostnet0 \
        -device virtio-net-pci,mq=on,vectors=18,netdev=hostnet0,id=net0,mac=fa:16:3e:52:30:73 \
        -hda [disk name]
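
This command expects hugepages to be mounted at /mnt/huge; if that mount does not exist yet, it can be created with the usual hugetlbfs commands (the path simply matches the mem-path above):

mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge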

After the VM boots, you can log in via ssh on port 10023.
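
The relevant difference from the question's command is that guest RAM comes from an explicit, shared memory-backend-file object bound to a NUMA node instead of the bare -mem-path option. Stripped down to just those options, a sketch looks like this (size, id and path are only examples):

-object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem,nodeid=0 \
-mem-prealloc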

Comments:

I followed your advice. I used `-smp cores=4,sockets=2,threads=2 -m 2048 -object memory-backend-file,id=mem0,size=1024M,share=on,mem-path=/dev/hugepages,prealloc=on -object memory-backend-file,id=mem1,size=1024M,share=on,mem-path=/dev/hugepages,prealloc=on -numa node,nodeid=0,cpus=0-1,memdev=mem0 -numa node,nodeid=1,cpus=2-3,memdev=mem1` and it works! Thanks!

@weiping Thanks for the update, please accept and upvote to close this question.
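
For readability, here are the same working options from the comment above, one per line (content unchanged, only whitespace reflowed):

-smp cores=4,sockets=2,threads=2 \
-m 2048 \
-object memory-backend-file,id=mem0,size=1024M,share=on,mem-path=/dev/hugepages,prealloc=on \
-object memory-backend-file,id=mem1,size=1024M,share=on,mem-path=/dev/hugepages,prealloc=on \
-numa node,nodeid=0,cpus=0-1,memdev=mem0 \
-numa node,nodeid=1,cpus=2-3,memdev=mem1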
