Problem with testpmd on DPDK and OVS in Ubuntu 18.04

Anonymous (unverified), submitted on 2019-12-03 01:39:01

Question:

I have an X520-SR2 10G network card. I want to use it to create two virtual interfaces with Open vSwitch compiled with DPDK (installed from the Ubuntu 18.04 repository) and test these virtual interfaces with testpmd. I did the following:
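For context, OVS must have its DPDK support initialized before any of the steps below. A minimal sketch of that setup, assuming 2 MB hugepages and the standard OVS 2.9 other_config keys (the hugepage counts and dpdk-socket-mem values here are illustrative assumptions, not taken from my machine):

# reserve 2 MB hugepages on each NUMA node and mount hugetlbfs (illustrative values)
$ echo 2048 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
$ echo 2048 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
$ sudo mount -t hugetlbfs none /dev/hugepages
# enable DPDK in OVS and assign memory on both sockets
$ ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
$ ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"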

Create Bridge

$ ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev 

Bind DPDK ports

$ ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:01:00.0 ofport_request=1
$ ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:01:00.1 ofport_request=2
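The dpdk-devargs PCI addresses only attach if both functions of the NIC are already bound to a DPDK-compatible driver. A quick check, assuming the vfio-pci driver and the dpdk-devbind.py helper that ships with DPDK:

# list which driver each port currently uses, then bind both functions to vfio-pci
$ dpdk-devbind.py --status
$ sudo dpdk-devbind.py --bind=vfio-pci 0000:01:00.0 0000:01:00.1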

Create dpdkvhostuser ports

$ ovs-vsctl add-port br0 dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser ofport_request=3
$ ovs-vsctl add-port br0 dpdkvhostuser1 -- set Interface dpdkvhostuser1 type=dpdkvhostuser ofport_request=4
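OVS creates one vhost-user socket per port in its run directory; a quick sanity check that testpmd will have sockets to connect to:

$ ls -l /var/run/openvswitch/dpdkvhostuser0 /var/run/openvswitch/dpdkvhostuser1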

Define flow rules

# clear all existing flows
$ ovs-ofctl del-flows br0

Add new flow rules

$ ovs-ofctl add-flow br0 in_port=3,dl_type=0x800,idle_timeout=0,action=output:4
$ ovs-ofctl add-flow br0 in_port=4,dl_type=0x800,idle_timeout=0,action=output:3

Dump flow rules

$ ovs-ofctl dump-flows br0
 cookie=0x0, duration=851.504s, table=0, n_packets=0, n_bytes=0, ip,in_port=dpdkvhostuser0 actions=output:dpdkvhostuser1
 cookie=0x0, duration=851.500s, table=0, n_packets=0, n_bytes=0, ip,in_port=dpdkvhostuser1 actions=output:dpdkvhostuser0

Now I run testpmd:

$ testpmd -c 0x3 -n 4 --socket-mem 512,512 --proc-type auto --file-prefix testpmd --no-pci \
    --vdev=virtio_user0,path=/var/run/openvswitch/dpdkvhostuser0 \
    --vdev=virtio_user1,path=/var/run/openvswitch/dpdkvhostuser1 \
    -- --burst=64 -i --txqflags=0xf00 --disable-hw-vlan
EAL: Detected 32 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
USER1: create a new mbuf pool <mbuf_pool_socket_1>: n=155456, size=2176, socket=1
Configuring Port 0 (socket 0)
Port 0: DA:17:DC:5E:B0:6F
Configuring Port 1 (socket 0)
Port 1: 3A:74:CF:43:1C:85
Checking link statuses...
Done
testpmd> start tx_first
  io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=64
  nb forwarding cores=1 - nb forwarding ports=2
  port 0:
  CRC stripping enabled
  RX queues=1 - RX desc=128 - RX free threshold=0
  RX threshold registers: pthresh=0 hthresh=0 wthresh=0
  TX queues=1 - TX desc=512 - TX free threshold=0
  TX threshold registers: pthresh=0 hthresh=0 wthresh=0
  TX RS bit threshold=0 - TXQ flags=0xf00
  port 1:
  CRC stripping enabled
  RX queues=1 - RX desc=128 - RX free threshold=0
  RX threshold registers: pthresh=0 hthresh=0 wthresh=0
  TX queues=1 - TX desc=512 - TX free threshold=0
  TX threshold registers: pthresh=0 hthresh=0 wthresh=0
  TX RS bit threshold=0 - TXQ flags=0xf00
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 64             TX-dropped: 0             TX-total: 64
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 64             TX-dropped: 0             TX-total: 64
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 128            TX-dropped: 0             TX-total: 128
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
testpmd>
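As a cross-check, OVS keeps its own per-port counters, so dumping them shows whether the bridge ever saw the 128 packets testpmd transmitted:

$ ovs-ofctl dump-ports br0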

Software versions:
OS: Ubuntu 18.04
Linux Kernel: 4.15
OVS: 2.9
DPDK: 17.11.3

What should I do now? Where does the problem come from?

Answer 1:

I finally caught the problem. The problem was the size of the socket memory allocation. I changed the --socket-mem value to 1024,1024 (1024 MB for each NUMA node) and generated packets with pktgen (likewise using --socket-mem 1024,1024).
Everything works fine.
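For reference, the working invocation differs from the failing one only in the --socket-mem value; a sketch of the corrected command, with the same vdev paths as above:

$ testpmd -c 0x3 -n 4 --socket-mem 1024,1024 --proc-type auto --file-prefix testpmd --no-pci \
    --vdev=virtio_user0,path=/var/run/openvswitch/dpdkvhostuser0 \
    --vdev=virtio_user1,path=/var/run/openvswitch/dpdkvhostuser1 \
    -- --burst=64 -i --txqflags=0xf00 --disable-hw-vlan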


