2021 11 25 Eclipse iceoryx developer meetup
Date: 2021/11/25
Time: 17:00 CET
Link: https://eclipse.zoom.us/j/95918504483?pwd=RWM5Y1pkeStKVDZsU09EY1hnclREUT09
Attendees:
- Michael Pöhnl, Apex.AI
- Simon Hoinkis, Apex.AI
- Dietrich Krönke, Apex.AI
- Mathias "Bob" Kraus, Apex.AI
- Matthias Killat, Apex.AI
- Ulrich Eck, TUM, Chair for Computer Aided Medical Procedures
- Pablo Inigoblasco, IB Robotics
Agenda:
- General: Introduction of new participants, 10 mins
- General: Are there other agenda points?, 2 mins
- Configurable memory locality, 20 mins
- Memory allocation alternatives compared to MemPools - to be discussed next time
Minutes:
- Use-case: data-flow processing system at TUM (Computer Aided Medical Procedures)
  - zero-copy for the whole processing pipeline (see the zero-copy sketch below)
  - async inputs from cameras
  - performs image and geometry processing
  - every output port is a ringbuffer
  - CUDA is used heavily to perform computation
  - scheduling between threads is carried out in different ways
    - currently best-effort with dropping of frames
  - fusion of inputs happens by (approximately) matching timestamps
  - limitations due to iceoryx not being aware of CUDA memory on the GPU
  - with iceoryx this runs as multiple processes instead of one monolithic application
  - uses multiple servers over the network, each running iceoryx
    - data capturing and time synchronization
    - viewer
    - computation
  - RUDP for network transport
  - roughly 700 ms from capture to display (below 1 s)
  - constant chunk sizes are sufficient since image sizes and related data are constant
  - the number of cameras can change (more is usually better but requires more memory)
  - static memory - all memory is acquired at startup and does not grow
    - sized by worst-case assumptions, or the algorithm is constrained so its output does not exceed the reserved memory
  - conflicting use cases
    - real-time best-effort - regular publish-subscribe (drops data depending on queue size)
    - slowing down to process everything (could be done with a blocking publisher; see the publisher-policy sketch below)
  - uses iceoryx 1.0.1
  - one user launches multiple docker containers
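
The zero-copy pipeline maps directly onto iceoryx's loan/publish API. Below is a minimal sketch using the typed C++ publisher of iceoryx 1.0; the `CameraFrame` struct, the runtime name, and the service description are illustrative and not taken from the TUM system:

```cpp
#include <cstdint>

#include "iceoryx_posh/popo/publisher.hpp"
#include "iceoryx_posh/runtime/posh_runtime.hpp"

// Fixed-size payload: image sizes are constant in this use-case, which fits
// iceoryx's fixed-size chunk model well.
struct CameraFrame
{
    uint64_t timestampNs{0U};
    uint8_t pixels[1920U * 1080U * 3U];
};

int main()
{
    iox::runtime::PoshRuntime::initRuntime("camera-capture");

    iox::popo::Publisher<CameraFrame> publisher({"Camera0", "Frames", "Raw"});

    // loan() hands out a chunk of shared memory; writing into it and calling
    // publish() transfers ownership to the subscribers without copying the payload.
    publisher.loan().and_then([](auto& sample) {
        sample->timestampNs = 0U; // e.g. hardware timestamp of the frame
        // ... fill sample->pixels from the camera driver ...
        sample.publish();
    });

    return 0;
}
```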
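The two conflicting modes correspond to iceoryx's queue policies. A minimal sketch, assuming the policy names of current iceoryx releases (the enums were renamed over time, e.g. `BLOCK_PRODUCER` vs. the older `BLOCK_PUBLISHER`): best-effort drops the oldest samples, while a blocking publisher waits for slow subscribers:

```cpp
#include <cstdint>

#include "iceoryx_posh/popo/publisher.hpp"
#include "iceoryx_posh/popo/subscriber.hpp"
#include "iceoryx_posh/runtime/posh_runtime.hpp"

struct Frame
{
    uint8_t pixels[1920U * 1080U * 3U];
};

int main()
{
    iox::runtime::PoshRuntime::initRuntime("pipeline-stage");

    // Lossless variant: the subscriber asks the publisher to block when its
    // queue is full instead of overwriting the oldest sample ...
    iox::popo::SubscriberOptions subscriberOptions;
    subscriberOptions.queueCapacity = 10U;
    subscriberOptions.queueFullPolicy = iox::popo::QueueFullPolicy::BLOCK_PRODUCER;

    // ... and the publisher agrees to wait for slow consumers. The pipeline
    // then slows down rather than dropping frames.
    iox::popo::PublisherOptions publisherOptions;
    publisherOptions.subscriberTooSlowPolicy = iox::popo::ConsumerTooSlowPolicy::WAIT_FOR_CONSUMER;

    iox::popo::Publisher<Frame> publisher({"TUM", "Pipeline", "Frames"}, publisherOptions);
    iox::popo::Subscriber<Frame> subscriber({"TUM", "Pipeline", "Frames"}, subscriberOptions);

    // For the real-time best-effort mode, keep the defaults:
    // QueueFullPolicy::DISCARD_OLDEST_DATA on the subscriber and
    // ConsumerTooSlowPolicy::DISCARD_OLDEST_DATA on the publisher.

    return 0;
}
```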
- Potential iceoryx extensions for the TUM use-case
  - device memory on GPU/CUDA - already planned (but no time schedule yet)
  - dedicated memory segments for different purposes and potentially on different devices
    - configure a publisher to use a specific memory pool by name (e.g. user-defined names; see the hypothetical sketch below)
    - different memory pools could be associated with e.g. a NUMA node
  - full Windows support (only nice to have, due to a Windows-based front-end)
  - rmw_iceoryx as alternative in ROS 2
    - usable for ROS applications on the same machine
    - currently not under development
    - contributions welcome
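
For the per-publisher memory-pool selection discussed above, no API exists yet. The following is a purely hypothetical sketch of what such an extension could look like; the `memorySegmentName` field is invented for illustration and does not exist in iceoryx:

```cpp
#include "iceoryx_posh/popo/publisher.hpp"
#include "iceoryx_posh/runtime/posh_runtime.hpp"

struct PointCloud
{
    float points[100000][3];
};

int main()
{
    iox::runtime::PoshRuntime::initRuntime("gpu-producer");

    iox::popo::PublisherOptions options;
    // HYPOTHETICAL - this field does not exist in iceoryx today. It sketches
    // the discussed extension: selecting a dedicated, user-named memory pool
    // (e.g. a NUMA-local segment or CUDA device memory) per publisher.
    // options.memorySegmentName = "cuda-device-0";

    iox::popo::Publisher<PointCloud> publisher({"TUM", "Fusion", "PointCloud"}, options);

    return 0;
}
```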