router: poor performance over real NICs, despite decent performance over veth #4593
The ultimate performance would come from using some XDP-based approach. The DPDK framework seems like less work than doing it from scratch (https://doc.dpdk.org/guides/index.html); however, using it from Go code isn't trivial either, though apparently feasible (https://pkg.go.dev/github.com/yerden/go-dpdk, https://pkg.go.dev/github.com/millken/dpdk-go). This article makes a series of less involved suggestions that do not require XDP/DPDK: https://medium.com/@pavel.odintsov/capturing-packets-in-linux-at-a-speed-of-millions-of-packets-per-second-without-using-third-party-ef782fe8959d. Maybe that's a worthy first step. There is a significant drawback, though: the ring API is meant for traffic sniffing, so packets whose destination matches the local host are duplicated: one copy goes to the sniffer's ring and the other to the kernel's network stack. Preventing that requires playing games, possibly games that nullify the benefits, but maybe not. One way is to filter the traffic with an ingress qdisc (https://tldp.org/HOWTO/Adv-Routing-HOWTO/lartc.adv-qdisc.ingress.html; here's a tutorial: https://www.dasblinkenlichten.com/working-with-tc-on-linux-systems/). If my understanding is correct, the following sequence would suppress all traffic from eth0 to the kernel stack, leaving only the ring's copy:
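A sketch of such a sequence, using standard tc syntax (an ingress qdisc plus a drop-all filter); treat this as a reconstruction from context rather than the exact original commands:

```
# Attach an ingress qdisc to eth0 and drop every packet at the tc hook,
# before it reaches the kernel's network stack. The ring still sees a copy.
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: matchall action drop
```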
... most likely the packet still gets copied before being dropped. Too bad.
I did a little more research to clarify the (fast-changing) state of affairs with the Linux kernel and high-performance networking. There seem to be, at the moment, three main avenues:
The transition from AF_PACKET v4 to AF_XDP is originally described by its author (Björn Töpel) here: https://lwn.net/Articles/745934/ or, in a more polished version, here: https://www.kernel.org/doc/html/v4.18/networking/af_xdp.html The AF_XDP API seems reasonably accessible. It has several levels of acceleration available, depending on the driver's capabilities, and will make the best of whatever is there. It does seem to require loading an eBPF program; however, that program is a canned, well-known one that some libraries (including Go ones: https://pkg.go.dev/github.com/liheng562653799/xdp) can deal with for you (or maybe that has been automated away by now; it was on the to-do list). The DPDK framework will also use AF_XDP behind the scenes, unless it has a user-mode driver for the device being used, in which case it switches to a very different approach and practically maps the device into user space. Using the framework seems a lot more involved, so it is only worth it if we want the user-mode driver performance. There are Go libraries that expose the DPDK framework, though, as mentioned previously. I found that this doc provided some interesting insight into performance expectations: https://fast.dpdk.org/doc/perf/DPDK_23_03_Intel_NIC_performance_report.pdf The relative performance of the various approaches is the subject of contradictory reports, but roughly: plain UDP sockets come last, AF_PACKET rings and AF_XDP sit in between (AF_XDP pulling ahead where the driver accelerates it), and DPDK with a user-mode driver comes first.
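To make the AF_XDP avenue concrete, here is a minimal receive-loop sketch in Go. The method names follow the asavie/xdp-style API that the package linked above appears to mirror; every call below is an assumption to verify against the actual package docs.

```go
package main

import (
	"log"
	"net"

	"github.com/liheng562653799/xdp" // assumed to mirror github.com/asavie/xdp
)

func main() {
	link, err := net.InterfaceByName("eth0")
	if err != nil {
		log.Fatal(err)
	}
	const queueID = 0

	// Load and attach the canned eBPF program that redirects packets
	// from the chosen RX queue into our AF_XDP socket.
	prog, err := xdp.NewProgram(queueID + 1)
	if err != nil {
		log.Fatal(err)
	}
	if err := prog.Attach(link.Index); err != nil {
		log.Fatal(err)
	}
	defer prog.Detach(link.Index)

	xsk, err := xdp.NewSocket(link.Index, queueID, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer xsk.Close()
	if err := prog.Register(queueID, xsk.FD()); err != nil {
		log.Fatal(err)
	}

	for {
		// Hand free frames to the kernel's fill ring, then block until
		// at least one packet has been received.
		xsk.Fill(xsk.GetDescs(xsk.NumFreeFillSlots(), true))
		numRx, _, err := xsk.Poll(-1)
		if err != nil {
			log.Fatal(err)
		}
		for _, desc := range xsk.Receive(numRx) {
			frame := xsk.GetFrame(desc)
			_ = frame // raw Ethernet frame; parse/forward here
		}
	}
}
```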
In order to preserve portability to non-Linux platforms, we'll need to keep the plain IP/UDP socket code. Likewise, it is possible that the AF_XDP code isn't as portable as the AF_PACKET-with-rings code, so we probably want to keep both if AF_XDP turns out to be much better. Refer to the updated action plan in the description.
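One way to keep several underlay implementations side by side is to hide them behind a small batch-I/O interface and select the implementation per platform with Go build tags. A hypothetical sketch (the interface name and layout are illustrative, not the router's actual code):

```go
package underlay

import "golang.org/x/net/ipv4"

// BatchConn abstracts a link that reads and writes packets in batches.
// The plain UDP-socket backend would be the portable default; Linux-only
// backends (AF_PACKET rings, AF_XDP) would live in files guarded by a
// "//go:build linux" constraint and be selected when available.
type BatchConn interface {
	ReadBatch(ms []ipv4.Message) (int, error)
	WriteBatch(ms []ipv4.Message, flags int) (int, error)
	Close() error
}
```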
…4651) WriteTo needs to be given the destination address explicitly. WriteBatch, on the other hand, can either find it in each packet structure or rely on the connection's destination. WriteTo is only used to send BFD packets. It turns out that BFD packets can also very easily be sent via the regular forwarders that use WriteBatch. The motivation for doing that is to simplify the interface between the dataplane and the forwarders, in view of supporting multiple underlays with possibly very different interfaces. So the channels around the processor tasks seem like a good universal interface. In passing, also removed a duplicate field. Slightly off-topic, but still in the spirit of noise abatement, as this is to facilitate the necessary refactoring... Contributes to #4593
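For reference, the difference between the two send paths, sketched with golang.org/x/net/ipv4 (function names and the packet slices are illustrative):

```go
package forwarder

import (
	"net"

	"golang.org/x/net/ipv4"
)

// sendOneWriteTo sends a single packet; the destination must be passed
// explicitly on every call.
func sendOneWriteTo(c net.PacketConn, pkt []byte, dst net.Addr) error {
	_, err := c.WriteTo(pkt, dst)
	return err
}

// sendBatch sends many packets in one syscall (sendmmsg on Linux). Each
// ipv4.Message can carry its own destination in the Addr field, or leave
// it nil to use the connection's default destination.
func sendBatch(pc *ipv4.PacketConn, pkts [][]byte, dsts []net.Addr) (int, error) {
	msgs := make([]ipv4.Message, len(pkts))
	for i, p := range pkts {
		msgs[i] = ipv4.Message{Buffers: [][]byte{p}, Addr: dsts[i]}
	}
	return pc.WriteBatch(msgs, 0)
}
```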
In every case where the router modified packets, it would serialize the updated headers to a temporary buffer and then copy that to the packet buffer. To avoid this extra copy, replaced gopacket.SerializeBuffer with a custom implementation that writes to a given buffer; in this case, the packet's raw buffer. There is one remaining copy for some SCMP messages, because we have to move the existing packet to the payload. This too could be avoided, but that's for another PR; it would require leaving headroom in received packets. The performance impact is very small since, on the critical path, it only avoids copying one SCION header per packet, but it is a simplification. It also pays back the copy added by a previous simplification of the BFD code. As such... Contributes to #4593
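The gopacket.SerializeBuffer interface is small enough that an in-place variant fits in a few lines. A minimal sketch of the idea (fixedBuf and its headroom handling are illustrative, not the router's actual code):

```go
package serializebuf

import (
	"errors"

	"github.com/google/gopacket"
)

// fixedBuf implements gopacket.SerializeBuffer on top of a caller-owned
// slice, so layer serialization writes directly into the packet's raw
// buffer instead of a temporary one.
type fixedBuf struct {
	data     []byte
	headroom int // where the payload initially starts
	start    int // grows downward on PrependBytes
	end      int // grows upward on AppendBytes
	layers   []gopacket.LayerType
}

// Compile-time check that fixedBuf satisfies the interface.
var _ gopacket.SerializeBuffer = (*fixedBuf)(nil)

func newFixedBuf(data []byte, headroom int) *fixedBuf {
	return &fixedBuf{data: data, headroom: headroom, start: headroom, end: headroom}
}

func (b *fixedBuf) Bytes() []byte { return b.data[b.start:b.end] }

func (b *fixedBuf) PrependBytes(num int) ([]byte, error) {
	if num > b.start {
		return nil, errors.New("fixedBuf: out of headroom")
	}
	b.start -= num
	return b.data[b.start : b.start+num], nil
}

func (b *fixedBuf) AppendBytes(num int) ([]byte, error) {
	if b.end+num > len(b.data) {
		return nil, errors.New("fixedBuf: out of tailroom")
	}
	b.end += num
	return b.data[b.end-num : b.end], nil
}

func (b *fixedBuf) Clear() error {
	b.start, b.end = b.headroom, b.headroom
	b.layers = b.layers[:0]
	return nil
}

func (b *fixedBuf) Layers() []gopacket.LayerType   { return b.layers }
func (b *fixedBuf) PushLayer(l gopacket.LayerType) { b.layers = append(b.layers, l) }
```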
The router code itself can demonstrably forward 800K small packets per second, and 10 Gb/s of traffic with larger (2 KiB) packets, when benchmarked over veth. However, the observed performance is less than half of that when using real NICs, including 10GigE NICs.
Since the processing code isn't the bottleneck, it has to be either the effect of the real NICs' activity on the overall system (e.g., the interrupt-processing overhead) or the impact of the API used by the router (a regular UDP socket) on real I/O versus virtual I/O.
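One low-cost experiment on the socket-API side is to batch reads so that many packets are fetched per syscall (recvmmsg on Linux) instead of one. A sketch using golang.org/x/net/ipv4 (the port, batch size, and buffer size are arbitrary choices for illustration):

```go
package main

import (
	"log"
	"net"

	"golang.org/x/net/ipv4"
)

func main() {
	c, err := net.ListenPacket("udp4", ":40000") // example port
	if err != nil {
		log.Fatal(err)
	}
	pc := ipv4.NewPacketConn(c)

	// Pre-allocate a reusable batch of 64 messages with 2 KiB buffers.
	msgs := make([]ipv4.Message, 64)
	for i := range msgs {
		msgs[i] = ipv4.Message{Buffers: [][]byte{make([]byte, 2048)}}
	}
	for {
		// One ReadBatch call can return up to len(msgs) packets,
		// amortizing the per-packet syscall cost.
		n, err := pc.ReadBatch(msgs, 0)
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range msgs[:n] {
			_ = m.Buffers[0][:m.N] // process one packet
		}
	}
}
```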
Creating this work item to track investigation and resolution.
Current action plan: