Packets sent by pktgen are all dropped #288

Open
surajsureshkumar opened this issue Oct 11, 2024 · 19 comments
Comments

@surajsureshkumar

surajsureshkumar commented Oct 11, 2024

Hello, good day to you. I would appreciate any advice or input on an issue I have been trying to debug for the past week.

System Setup:
DPDK Version: 23.11.2
Pktgen Version: 24.07.1
Operating System: Linux 5.4.0-192-generic
CPU: Intel Xeon E5-2640 v4 @ 2.40GHz
Mellanox NICs: ConnectX-5 (MT27800 Family), using mlx5_core driver
Driver Bindings: Mellanox NICs are bound to mlx5_core.
Packet Capture: Performed with Wireshark on an identical server.

My goal was to send traffic from one server to another.
I was able to build Pktgen without any issues.
I am able to run Pktgen from usr/local/bin with the appropriate command-line arguments.
Once Pktgen was running, I set the destination MAC address, and entering the command start all started sending traffic. On the other server I first used tcpdump to capture the packets.
What I observed was that the number of packets transmitted equaled the number of packets dropped.
Then I used Wireshark to capture packets; its capture statistics were:
Packets Captured: 16,044,383
Packets Displayed: 16,044,383
Dropped Packets: Unknown (Wireshark was unable to report this detail).

I followed this link to tune my NICs: https://fasterdata.es.net/host-tuning/linux/

Command used to run Pktgen:
sudo ./usr/local/bin/pktgen -l 2-4 -n 4 --proc-type auto --log-level 7 --file-prefix pg -a 0000:04:00.1 -- -v -T -P -m [3-4].0 -f themes/black-yellow.theme

The command ran without any errors. Below are the commands I entered once Pktgen was running:
set 0 dst mac [MAC_ADDRESS]
set 0 size 512
set 0 rate 10
set 0 proto udp

I am unable to identify the root cause of this issue, since all the settings and configurations appear to be correct.

  1. Could this be a driver or DPDK configuration issue leading to packet loss?
  2. Are there any known issues with the mlx5_core driver and ConnectX-5 that could cause this?
  3. Do you have suggestions for optimizing Pktgen for high-throughput traffic to minimize packet loss?

Any help or further insights or suggestions to troubleshoot and resolve this issue would be greatly appreciated.

@KeithWiles
Collaborator

Please try the fixes-for-release branch and see if that helps. The main branch is older and I have been working on fixes in this new branch.

@surajsureshkumar
Author

I will try the fixes-for-release branch.
Just curious: the packets get dropped on the receiver's side, and this happened with TRex for me too. Could this be due to my driver or some other issue?

@KeithWiles
Collaborator

One more suggestion: please change -m [3-4].0 to -m [3:4].0. This makes sure you have one RX core and one TX core running.
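With that change, the full command from above becomes:

sudo ./usr/local/bin/pktgen -l 2-4 -n 4 --proc-type auto --log-level 7 --file-prefix pg -a 0000:04:00.1 -- -v -T -P -m [3:4].0 -f themes/black-yellow.theme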

@KeithWiles
Collaborator

If TRex also drops the packets, then something else is the problem and Pktgen is not at fault. It could be that the NIC is dropping the packets because it considers them invalid or malicious. A packet might be treated as malicious if, for example, it carries the same source MAC address as the NIC, among many other reasons. Not sure how Wireshark worked??

@surajsureshkumar
Author

So Pktgen is installed on server A, and I am using tcpdump or Wireshark on server B. When Pktgen starts running, I set the destination MAC address with set 0 dst mac [MAC_ADDRESS_OF_DESTINATION]. On server B, if I am using tcpdump, I run:
sudo tcpdump -i enp4s0f1 -nn -e ether src [MAC_ADDRESS_OF_SOURCE] and port 5678
Wireshark lets me connect directly to the interface enp4s0f1 by selecting it, and it shows the incoming traffic.
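To check whether the NIC itself is discarding frames on server B, I can also look at its hardware and kernel counters, for example:

sudo ethtool -S enp4s0f1 | grep -iE 'drop|discard|error'
ip -s link show enp4s0f1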

@KeithWiles
Collaborator

BTW, if you get a build error when doing make buildlua: I have just pushed a patch for that problem, so do a git pull to update.

This seems like a DPDK driver problem; please contact the maintainers listed in the MAINTAINERS file in DPDK. Then again, it may not be a driver problem: TRex and Pktgen both see the issue while Wireshark and tcpdump are fine, and Wireshark and tcpdump use the Linux kernel driver whereas TRex and Pktgen use a DPDK PMD.

@surajsureshkumar
Author

@KeithWiles my line rate is set to 100%, but I am only transmitting 18 Mpps. I tweaked the configuration and got it from 13 Mpps to 18 Mpps. Is there any way I can bump it to 149 Mpps?

@KeithWiles
Collaborator

Most likely 18 Mpps is the limit of the single core used to transmit the packets. Without more details on hardware and configuration, I suspect you only have one core assigned to transmit packets. You can assign more cores to the TX side, and that should improve your performance.

But many factors can limit the performance of your system, e.g., PCIe bandwidth, limited cores, NIC performance, PMD performance, a limited number of RX/TX queues per port, core-to-PCIe mapping (NUMA-related performance) ...
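As a rough sketch (the core numbers here are just an example), a mapping that gives TX more cores could look like:

sudo ./usr/local/bin/pktgen -l 2-6 -n 4 --proc-type auto --file-prefix pg -a 0000:04:00.1 -- -v -T -P -m [3:4-6].0

Here core 3 handles RX and cores 4-6 handle TX for port 0; make sure the -l core list covers every core you map.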

@surajsureshkumar
Author

Hardware Configuration:

NIC: Mellanox ConnectX-5 with 100 Gbps capability.
CPU: Intel Xeon CPU E5-2640 v4 @ 2.40GHz.
Memory: Adequate RAM with NUMA awareness; hugepages allocated on the correct NUMA node.
PCIe Slot: The NIC is installed in a PCIe Gen3 x16 slot, verified to be operating at full capacity.

EAL Parameters:
sudo ./usr/local/bin/pktgen -l 2-9,22-29 -n 4 --proc-type auto --log-level 7 --file-prefix pg -a 0000:04:00.1,mprq_en=1,mprq_log_stride_num=8,rxq_pkt_pad_en=1 -- -v -T -P -m [3-9:22-29].0

Pktgen Settings in Console:
Pktgen> set 0 dst mac [Destination MAC Address]
Pktgen> set 0 src mac [Source MAC Address]
Pktgen> set all size 64
Pktgen> set all rate 100
Pktgen> set all burst 64
Pktgen> start all

Disabled hyper-threading and ensured only physical cores are used.
Set the CPU frequency scaling governor to performance mode.
Disabled C-states in the BIOS to prevent CPU frequency throttling.
Isolated packet-processing cores using kernel parameters (isolcpus=2-11).
Confirmed that the NIC and the allocated hugepages are on the same NUMA node.
Updated NIC firmware and drivers to the latest versions.
Disabled unnecessary offloading features using ethtool.
Configured the maximum number of TX queues and increased descriptors.
Disabled unnecessary services such as irqbalance to prevent interrupt reassignment.
Pinned IRQ affinities to keep interrupt handling off the packet-processing cores (a few of these steps are shown as commands below).
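A minimal sketch of some of those steps as commands, assuming the receive interface enp4s0f1 and the PCI address from above:

# set the scaling governor to performance on all CPUs
sudo cpupower frequency-set -g performance
# disable common offloads on the kernel-driven interface
sudo ethtool -K enp4s0f1 gro off lro off tso off gso off
# confirm which NUMA node the NIC is attached to
cat /sys/bus/pci/devices/0000:04:00.1/numa_node
# stop irqbalance so pinned IRQ affinities stay put
sudo systemctl stop irqbalance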

@KeithWiles
Collaborator

KeithWiles commented Oct 21, 2024

What is the CPU layout? You can use the DPDK script to dump the layout. What NUMA node is the NIC/PCI attached to?

0000:04:00.1 seems to suggest the NIC, and its PCI slot, are on NUMA node 0.
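For reference, the layout script ships in the DPDK source tree:

./usertools/cpu_layout.py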

@surajsureshkumar
Author

surajsureshkumar commented Oct 21, 2024

So the CPU layout is as follows:
Core and Socket Information (as reported by '/sys/devices/system/cpu')

cores = [0, 1, 2, 3, 4, 8, 9, 10, 11, 12]
sockets = [0, 1]

        Socket 0        Socket 1
        --------        --------
Core 0  [0, 20]         [10, 30]
Core 1  [1, 21]         [11, 31]
Core 2  [2, 22]         [12, 32]
Core 3  [3, 23]         [13, 33]
Core 4  [4, 24]         [14, 34]
Core 8  [5, 25]         [15, 35]
Core 9  [6, 26]         [16, 36]
Core 10 [7, 27]         [17, 37]
Core 11 [8, 28]         [18, 38]
Core 12 [9, 29]         [19, 39]

And the NUMA node is 0

@KeithWiles
Collaborator

KeithWiles commented Oct 21, 2024

You have 10 physical cores on each CPU. With DPDK's polling design you can't use hyperthreads 0 and 1 of the same core at the same time: each physical core has two lcores (hyperthread 0 and hyperthread 1), and a DPDK polling thread running on HT-0 will consume 80-90% of the physical core, leaving HT-1 on that core only 10-20% utilization.

Because of this, we should not place two lcores (hyperthreads) on the same physical core. For your configuration, that means you have 10 usable physical cores on CPU 0 (NUMA 0).

I would use -m [2-5:6-9].0 with a core list of -l 1-9; core 1 will be used for the Pktgen display and the rest for the port's RX/TX. You could add a couple more lcores to the TX side and reduce the number on the RX side if that better fits your needs.

Let me know if you get better performance.
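Combined with your earlier EAL options, the full command would be something like:

sudo ./usr/local/bin/pktgen -l 1-9 -n 4 --proc-type auto --log-level 7 --file-prefix pg -a 0000:04:00.1,mprq_en=1,mprq_log_stride_num=8,rxq_pkt_pad_en=1 -- -v -T -P -m [2-5:6-9].0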

@surajsureshkumar
Author

@KeithWiles I will try this out. Also, can I generate HTTP(S) or DNS traffic?

@KeithWiles
Collaborator

The only way to generate those types of packets is to use a PCAP file containing the type of traffic you want.
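The PCAP file is given per port on the command line; if I remember the option right, something like:

sudo ./usr/local/bin/pktgen -l 1-9 -n 4 -a 0000:04:00.1 -- -P -m [2-5:6-9].0 -s 0:https_capture.pcap

where https_capture.pcap is just a placeholder for your own capture of the traffic you want to replay on port 0.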

@surajsureshkumar
Author

@KeithWiles I tried -m [2-5:6-9].0 with a core list of -l 1-9, and a couple of other things too, but there was no improvement in performance. I am blank now as to what to try next to increase the rate.

@KeithWiles
Collaborator

Sorry, I have no suggestions as to why the performance is low here. We set up the cores, NUMA, and NICs correctly, I believe. It is possible the driver, the NIC, or something else has a problem. Sorry I am not able to debug it further.

@surajsureshkumar
Author

@KeithWiles Thank you for your support! I will try to look into the issue.

@surajsureshkumar
Author

@KeithWiles a couple of questions: can I send a PCAP file to generate HTTPS traffic on a single core or on multiple cores, and which works accurately? Can cross-traffic or noise be generated by running two instances of Pktgen?

@KeithWiles
Collaborator

Pktgen does not contain a TCP/IP stack. When using a PCAP file it is not possible to simulate a TCP connection except at a very basic level, such as replaying a SYN/ACK or other very simple packets.

You can run two instances of Pktgen on the same machine, but you need to make sure you split up memory, cores, and NICs between the instances. I sometimes use this method to test Pktgen on one machine. Look in the cfg directory for the two configurations pktgen-1.cfg and pktgen-2.cfg, but you will have to update the files for your system.
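As a rough sketch of how the split might look (the second PCI address below is just an assumption; adjust cores, memory, and NICs to your system):

# instance 1: its own cores, file prefix, and port
sudo pktgen -l 1-4 -n 4 --file-prefix pg1 -a 0000:04:00.0 -- -P -m [2:3-4].0
# instance 2: a disjoint set of cores and the other port
sudo pktgen -l 5-8 -n 4 --file-prefix pg2 -a 0000:04:00.1 -- -P -m [6:7-8].0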
