TCP bandwidth with iperf/qperf/sockperf #778
Comments
Even with the recommended sockperf, TCP bandwidth is still quite low when VMA_SPEC is not set before preloading the dynamic library. With UDP the bandwidth is much better, since no ACKs are required.
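For reference, a minimal sketch of the kind of run being described, assuming libvma.so is on the loader path; the server address, port, message size, and duration are placeholders, and VMA_SPEC=latency is just one of the documented profile values:

```sh
# Server side: sockperf under VMA, with a VMA_SPEC profile set before preload
LD_PRELOAD=libvma.so VMA_SPEC=latency sockperf server --tcp -p 11111

# Client side: TCP throughput test (address/port/size/duration are examples)
LD_PRELOAD=libvma.so VMA_SPEC=latency sockperf throughput --tcp -i 192.168.1.10 -p 11111 -m 1400 -t 10
```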
I searched the Internet and found a slide deck titled "Data Transfer Node Performance GÉANT & AENEAS"; it also discusses TCP bandwidth with iperf2.
Hi @zorrohahaha, VMA is optimized for TCP packets below the MTU/MSS size. We are aware of the disadvantage for high TCP bandwidth above the MTU/MSS size. Thanks,
Hi @zorrohahaha, please try https://github.com/Mellanox/libvma/releases/tag/8.9.2 and compile VMA with --enable-tso.
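A sketch of that build, assuming the standard autotools flow used by the libvma repository:

```sh
# Fetch the 8.9.2 tag and build VMA with TCP segmentation offload enabled
git clone --branch 8.9.2 https://github.com/Mellanox/libvma.git
cd libvma
./autogen.sh
./configure --enable-tso
make -j"$(nproc)"
sudo make install
```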
Hi, we also see this issue: the bandwidth of TCP bypass with VMA is much worse than going through the kernel.
@littleyanglovegithub |
I strongly suggest you state the limitation "VMA is optimized for TCP packets below the MTU/MSS size. We are aware of the disadvantage for high TCP bandwidth above the MTU/MSS size." on the official website.
Hi, is there any update on VMA throughput performance with packet sizes larger than the MTU?
Using VMA improves throughput relative to Linux for payload sizes below 16 KB. This is not always visible in simple benchmark scenarios such as iperf/sockperf. NGINX configured with multiple workers under VMA (built with --enable-tso and with the related VMA environment options) demonstrates better throughput results than Linux.
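As a rough illustration of that kind of setup (not an official recipe), preloading VMA into NGINX might look like the following; the worker count and buffer sizes are placeholder assumptions, not tuned recommendations:

```sh
# nginx.conf is assumed to contain something like: worker_processes 8;
# The VMA buffer values below are illustrative only
LD_PRELOAD=libvma.so VMA_TX_BUFS=2000000 VMA_RX_BUFS=2000000 \
    nginx -c /etc/nginx/nginx.conf
```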
I am trying to run libvma on the aarch64 platform and it works. When using iperf/qperf/sockperf for evaluation, I find the latency is better than through the kernel socket stack. UDP bandwidth via qperf is also better.
But TCP bandwidth is much worse than through the kernel: only a few hundred Mbps to 1 Gbps on a 25 Gb NIC.
After setting "VMA_TCP_NODELAY=1 VMA_TX_BUFS=4000000", the bandwidth gets a little better, up to about 10 Gbps, but that is still far from line rate; a rough sketch of the test setup follows.
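For context, a TCP bandwidth test of roughly this shape, shown here with iperf2 (the server address and stream count are examples); the two VMA variables are the ones mentioned above:

```sh
# Server
LD_PRELOAD=libvma.so iperf -s

# Client: 4 parallel TCP streams for 30 seconds
LD_PRELOAD=libvma.so VMA_TCP_NODELAY=1 VMA_TX_BUFS=4000000 \
    iperf -c 192.168.1.10 -P 4 -t 30
```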