add spawner for hardware #941
Conversation
Great work!
I have only cosmetic remarks.
I think you managed this well. I would keep the warning as it is.
This looks like a well-executed addition of functionality.
I am personally fine with using warnings to make users aware of the change while preserving the original behavior in such cases.
Thanks for the quick review. Should all be done now.
High-level comment: how does one spawn actual hardware? From the description I believe I understand what this does, but in reality the hardware is already there. "Spawning" is a rather overloaded word (perhaps in a ROS context), although in most cases it means to cause to exist, to produce, or to cause to be created. The term hardware (as opposed to software) has a well-defined meaning, and this utility/cmdlet does not appear capable of causing real hardware to appear. Would it be more accurate to say this initialises the components, forces them to (re)configure, and then activates them (i.e., manages their life-cycle)?
You are right. We took the nomenclature from the controller's side without much thought. We already use "initialize" and "(re)configure", which had a somewhat different meaning in my head. Should we call this "spawn_hardware_interface"?

The idea behind this functionality is to support a native "tool changing" scenario. Long-term idea: for example, you start your control stack without any hardware. (I know that doesn't make much sense, but the idea is that the stack "always" lives, like when a hardware control box is turned on, but nothing further.) Then you initialize your robot, move it to the tool-exchange station, mechanically connect a tool, load a plugin (driver) for it, and initialize it. You work like this for days. At some point you need another tool: you place the current one at the tool-exchange station, reload your robot description with the new tool (kinematics, plugin, etc.), disconnect the old tool, connect the new one, load it, and keep working. Does this make sense at all?

One of the issues we are also trying to solve here is using the robot description from the topic and making the control stack more resistant to hardware errors and more hardware independent. This means you could reload a whole hardware driver without killing the control stack, which should help multi-hardware systems avoid crashing everything when only one hardware driver has an issue. This is a bit different from ros_control, where the hardware and the controller manager had a 1:1 relationship.
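To make the life-cycle idea above concrete, here is a hedged sketch of how a hardware component's state could be driven through the controller_manager's `set_hardware_component_state` service, which is the kind of call a hardware spawner wraps. This is an illustration, not the PR's code: the component name `tool_gripper` and the standalone-node structure are assumptions made up for the example.

```python
# Sketch: configure, then activate, a hardware component via the
# controller_manager's SetHardwareComponentState service.
# "tool_gripper" is a hypothetical component name.
import rclpy
from rclpy.node import Node
from controller_manager_msgs.srv import SetHardwareComponentState
from lifecycle_msgs.msg import State


def set_component_state(node: Node, component: str, state_id: int, label: str) -> bool:
    client = node.create_client(
        SetHardwareComponentState, "/controller_manager/set_hardware_component_state"
    )
    if not client.wait_for_service(timeout_sec=10.0):
        node.get_logger().error("controller_manager service not available")
        return False
    request = SetHardwareComponentState.Request()
    request.name = component
    request.target_state = State(id=state_id, label=label)
    future = client.call_async(request)
    rclpy.spin_until_future_complete(node, future)
    return future.result() is not None and future.result().ok


def main():
    rclpy.init()
    node = Node("hardware_activator")
    # First configure, then activate the (hypothetical) component.
    set_component_state(node, "tool_gripper", State.PRIMARY_STATE_INACTIVE, "inactive")
    set_component_state(node, "tool_gripper", State.PRIMARY_STATE_ACTIVE, "active")
    node.destroy_node()
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```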
Merging this now; please feel free to open a new PR to rename.
@Mergifyio backport humble |
✅ Backports have been created
@Mergifyio backport iron |
✅ Backports have been created
* add spawner for hardware * apply suggestions from code review * move comment (cherry picked from commit 937817c) # Conflicts: # controller_manager/controller_manager/__init__.py # controller_manager/controller_manager/controller_manager_services.py
* add spawner for hardware * apply suggestions from code review * move comment (cherry picked from commit 937817c) # Conflicts: # controller_manager/src/controller_manager.cpp
---------
Co-authored-by: Manuel Muth <[email protected]>
Co-authored-by: Dr. Denis <[email protected]>
Co-authored-by: Christoph Froehlich <[email protected]>
Added a spawner for activation of the hardware, similar to the spawner for the controllers. This makes it possible to wait for the hardware to be activated/configured (needed for my thesis on distributed control).

What has been done:

* `hardware_spawner` for activation/configuration of hardware

Open questions:

* Behavior when neither the `configure_components_on_start` nor the `activate_components_on_start` flag is given.

Changes can be tested with this ros2_control_demo_repos
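For completeness, here is a hypothetical launch snippet showing how such a `hardware_spawner` could be started next to the usual controller spawners. The `--activate` argument and the component name `MyRobotHardware` are assumptions modelled on the controller spawner's conventions; check the merged script for its actual interface.

```python
# Hypothetical launch snippet: run the hardware_spawner to activate one
# hardware component. Argument names are assumptions, not the verified CLI.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    hardware_spawner = Node(
        package="controller_manager",
        executable="hardware_spawner",
        arguments=["--activate", "MyRobotHardware"],
        output="screen",
    )
    return LaunchDescription([hardware_spawner])
```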