Accessing the Gen3 color and depth streams #46
Vision module works fine on my end. I get the point cloud, the registered cloud, and the color image. One thing though: the xacro in kortex_description does not contain a fixed joint between the …
Hi @RaduCorcodel, I asked a couple of people about those dimensions, and it seems they are documented in the Gen3 arm user guide on page 145. We will be adding the color and depth frames to the kortex_description URDF for the Gen3, but for now you can use the values from the user guide for the fixed joint you created in your URDF. This should do it for now. Hope this helps,
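For anyone wanting to try this before the official fix lands, a fixed joint in the URDF might look like the sketch below. The frame and link names and the zero offsets are placeholders (not from this thread): substitute the actual translation/rotation values documented on page 145 of the Gen3 arm user guide, and use whichever parent link matches your URDF.

```xml
<!-- Hypothetical sketch: a fixed joint attaching a color-camera frame to the arm.
     All names and offsets below are placeholders; replace the origin values with
     the ones from page 145 of the Gen3 arm user guide. -->
<link name="camera_color_frame"/>
<joint name="camera_color_joint" type="fixed">
  <origin xyz="0 0 0" rpy="0 0 0"/>   <!-- replace with the user-guide values -->
  <parent link="end_effector_link"/>  <!-- assumed parent link; adjust to your URDF -->
  <child link="camera_color_frame"/>
</joint>
```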
Is there currently a way to access the color and depth streams in simulation?
Hi, any progress on this thread? The camera-related link frames are still missing. Also, how can I access the color and depth streams in simulation, as @marlow-fawn asked? Can I use the commonly used Kinect Gazebo plugin to retrieve the vision information? (I can't find ROS plugins for other vision sensors, such as the Realsense.)
Hi @huiwenzhang @marlow-fawn @RaduCorcodel, |
Hi @huiwenzhang, I opened a pull request where I added the frames needed for the vision module. You can find my repo here and switch to the 'vision_fix' branch. Clone my repo (don't forget to switch to the vision_fix branch), then clone the vision module in the same catkin workspace and build normally. To get the depth/color stream, launch the driver, then in a different terminal launch the vision module launcher. Radu,
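The clone-and-build steps described above might look roughly like this. The fork URL is a placeholder (use the repo linked in the comment), and `~/catkin_ws` is an assumed workspace path; `ros_kortex_vision` is the vision-module repo referred to in this thread.

```shell
# Sketch only: replace <fork> with the actual repo linked above.
cd ~/catkin_ws/src
git clone -b vision_fix https://github.com/<fork>/ros_kortex.git
git clone https://github.com/Kinovarobotics/ros_kortex_vision.git
cd ~/catkin_ws
catkin_make
source devel/setup.bash
```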
@RaduCorcodel Great, I saw your code and it really helps. For now, though, I don't have a real robot on hand, and the purchasing process may take a few months, so I am working with the simulation and trying to figure out a way to use the vision information. Alvin,
Hi @huiwenzhang, As Radu pointed out, he made a PR (#82) to add the vision frames to the kortex_description package. I will approve it as soon as I get my hands on a real robot, as we are currently working from home. We didn't implement the Intel Realsense simulation in our Gen3 simulation, but you can take a look at Intel Realsense's Gazebo repository; you could probably get simulated images that way. We don't plan on adding this to the Gen3 simulation in the short to mid term, but if this changes I'll make sure to update this issue. Best, Alex
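As an interim workaround along the lines of the Kinect plugin mentioned earlier, a generic depth camera can be attached to a link of the simulated arm with the standard `gazebo_ros_openni_kinect` plugin. The link name, topic names, and camera parameters below are all hypothetical (adapt them to your URDF), and this approximates a depth camera rather than simulating the Realsense specifically.

```xml
<!-- Hedged sketch: generic depth-camera simulation via the common
     gazebo_ros_openni_kinect plugin. All names/values are placeholders. -->
<gazebo reference="camera_link">
  <sensor type="depth" name="gen3_depth_sensor">
    <update_rate>30</update_rate>
    <camera>
      <horizontal_fov>1.2</horizontal_fov>
      <image>
        <width>640</width>
        <height>480</height>
        <format>R8G8B8</format>
      </image>
      <clip>
        <near>0.1</near>
        <far>10.0</far>
      </clip>
    </camera>
    <plugin name="camera_plugin" filename="libgazebo_ros_openni_kinect.so">
      <cameraName>camera</cameraName>
      <imageTopicName>color/image_raw</imageTopicName>
      <depthImageTopicName>depth/image_raw</depthImageTopicName>
      <pointCloudTopicName>depth/points</pointCloudTopicName>
      <frameName>camera_link</frameName>
    </plugin>
  </sensor>
</gazebo>
```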
Is there any simulation for the ros_kortex_vision package now? Thank you.
Hi @2000222, No, there is still no implementation of the Intel Realsense simulation. As Alex said, you could look at Intel Realsense's Gazebo repository to get simulated images. Best,
The ros_kortex_vision repository contains the code to access the color and depth streams.
It will eventually be brought into the ros_kortex repository, but in the meantime, to access the streams, clone the ros_kortex_vision repo. You will find installation instructions and examples there.
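A rough sketch of those installation steps, assuming a ROS 1 catkin workspace at `~/catkin_ws`. The launch package/file name and the robot IP are assumptions taken from memory of the repo's conventions, so check the ros_kortex_vision README for the exact names before running.

```shell
# Sketch only: verify package and launch-file names against the repo README.
cd ~/catkin_ws/src
git clone https://github.com/Kinovarobotics/ros_kortex_vision.git
cd ~/catkin_ws
rosdep install --from-paths src --ignore-src -y   # pull in GStreamer and other deps
catkin_make
source devel/setup.bash
# Start the color/depth streams (robot IP is an example value):
roslaunch kinova_vision kinova_vision.launch device:=192.168.1.10
```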