Eclipse iceoryx™ in 1000 words
iceoryx has its origins in the automotive domain. Over the last decades, that domain has evolved from engine control systems to driver assistance and finally to automated driving. Along with this evolution, the data exchanged between different threads of execution within an Electronic Control Unit (ECU) has increased from KB/s to GB/s (see figure 1).
Figure 1: Evolution of ECU internal data exchange
As in other domains such as robotics or IoT, the common communication paradigm in automotive is publish/subscribe. A typical middleware for Inter-Process-Communication (IPC) copies messages when passing them to or from the middleware, and inside the middleware stack even more copies may be made, or the message payload may be serialized. Therefore, you usually end up with at least n+1 copies if n consumers are subscribed to a publisher (see figure 2). Once you reach the GB/s dimension, every copy made in the communication middleware hurts in terms of runtime and latency. The goal should be to spend the precious runtime on functional computations, not on shifting bytes around in memory.
Figure 2: A copy perspective of a typical IPC middleware
iceoryx is an IPC technology that is based on shared memory. This alone is not new; shared memory has been in use since the 1970s. What we do is combine it with a publish/subscribe architecture, service discovery, modern C++ and lock-free algorithms. By additionally using an Application Programming Interface (API) that avoids copying, we end up with what we call true zero-copy. This is an end-to-end approach from publishers to subscribers without a single copy. With the iceoryx API, a publisher writes the message directly into a chunk of memory that was previously requested from the middleware. On delivery, the subscribers get references to these memory chunks, and each subscriber has its own queue with a configurable capacity. Every subscriber can have its own view of the world with respect to which messages are still in process or can be discarded. iceoryx does the reference counting behind the scenes and finally releases a memory chunk as soon as there is no more reader (figure 3). The iceoryx API supports polling access as well as event-driven interaction with callbacks. This allows a wide range of applications, up to real-time systems. The shared memory can be divided into segments with different access rights and configurable memory pools.
Figure 3: True zero-copy communication
An important aspect is that publishers can write again while subscribers are still reading; there is no interference from subscribers back to the publisher. The publisher simply gets a new memory chunk if the last one is still in use. If a subscriber is operated in polling mode and chunks queue up until the subscriber checks the queue again, we can recycle older memory chunks with our lock-free queue, which we call "safely overflowing". This queue allows us to guarantee a memory-efficient contract with the subscriber regarding the maximum number of latest messages stored in the queue, no matter how much time passes between two polls of the subscriber. This is useful for common use cases like a high-frequency publisher with a subscriber that is only interested in the latest message. By just passing around smart pointers, iceoryx does a data transfer without really transferring the data. This gives us a constant time for a message transfer, independent of the message size. Note that the user has to write the data once to the shared memory, but this is the write that is needed whenever data is produced for sending.

There are some constraints on messages for true zero-copy communication to be possible. As the message payload is not serialized, a message must have the same memory layout for the publishers and subscribers. For Inter-Process-Communication on a specific processor, this can be ensured by using the same compiler with the same settings. The message must also not contain any pointers to memory within the process-internal virtual address space. This also excludes heap-based data structures. If these constraints cannot be fulfilled, iceoryx can still be used with a layer on top that handles serialization into and deserialization from the shared memory. iceoryx then handles the lower-layer transport, which itself makes no copies. iceoryx depends on the POSIX API.
We currently support Linux and QNX as underlying operating systems. As there are sometimes slight API differences, small adaptations might be necessary when porting iceoryx to another POSIX-based operating system.
iceoryx is a data-agnostic shared memory transport that provides a fairly low-level API. We assume that its API is not used directly by end users but rather that it is integrated into a larger framework that provides a higher-level API and maybe some tooling. Examples would be an AUTOSAR Adaptive platform or the Robot Operating System (ROS). In both cases, we ensured that the specification supports the zero-copy API. Integration of iceoryx is quite straightforward if the target framework is also based on a publish/subscribe architecture. There are already publicly available integrations of iceoryx for ROS 2 and eCAL. Additionally, we have identified potential synergies within the Eclipse family. By combining Eclipse Cyclone DDS and iceoryx, we end up with an open and powerful communication middleware for IPC and network communication.