Different branch meanings and operating conditions #8
Hello, I am trying to understand how to make this package work.
I am trying to use it in a really bad environment: a 300 m tunnel with flat concrete walls, no windows, no columns, no signs, really nothing.
Now I am trying to investigate the darpa branch, because I think it is the best-fitting one.
Can you please suggest the best options for my case?
Thanks in advance. |
@vdsmax Are you able to find the yaml configuration files that we used for the finals? @Cristian-wp The specifics of this branch are mainly about the integration with our SubT competition system. Maxime might give you the yaml files that configure the ICP to the state that worked best for us (we were using 16-beam lidars in the Urban circuit, and then also some 128-beam lidars for the finals, but we subsampled those anyway). If you have a straight tunnel without any geometrical features, you may use the trick with two robots: one moves while the other stays static, and vice versa; the static robot may provide the necessary feature. Otherwise, the mapping will probably converge to whatever your prior says, along the tunnel direction at least. |
@kubelvla Thank you for your answer, but... Do you think that for my purpose I should work with this branch, or with another one? |
Regarding the branch, I think you can use the melodic one; there are actually some fixes in it that are missing in the darpa branch. Or you go wild like Norlab and switch to ROS2 :) I haven't switched yet and I'm looking with awe at the new code... |
Hi, thanks a lot for your reply.
|
My launch file for the mine is quite simple:
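The original file is not preserved in this transcription; a minimal sketch of such a launch file follows. The node, topic and parameter names are assumptions based on typical norlab_icp_mapper_ros usage, and my_robot_config is a hypothetical package; check the package README for the real names.
```xml
<launch>
  <!-- Sketch of a minimal mapper launch; node, topic and parameter names
       are assumptions, verify them against the norlab_icp_mapper_ros README. -->
  <node name="mapper" pkg="norlab_icp_mapper_ros" type="mapper_node" output="screen">
    <!-- point cloud input from the lidar driver -->
    <remap from="points_in" to="/ouster/points"/>
    <!-- the odometry prior is read from TF: odom -> base_link -->
    <param name="odom_frame" value="odom"/>
    <param name="robot_frame" value="base_link"/>
    <!-- the three yaml files discussed below (my_robot_config is hypothetical) -->
    <param name="icp_config" value="$(find my_robot_config)/params/icp_config.yaml"/>
    <param name="input_filters_config" value="$(find my_robot_config)/params/input_filters.yaml"/>
    <param name="map_post_filters_config" value="$(find my_robot_config)/params/map_post_filters.yaml"/>
  </node>
</launch>
```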
|
But the trick is in the ICP config; the default ones are too crude.
|
The input filters:
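The original yaml is likewise not preserved; a minimal sketch in the libpointmatcher format (a yaml list of filters; the box dimensions, sampling probability and knn below are placeholder values, not the competition settings):
```yaml
# Sketch of an input-filter chain; all numeric values are placeholders.
- BoundingBoxDataPointsFilter:    # cut out the points hitting the robot itself
    xMin: -1.0
    xMax: 1.0
    yMin: -1.0
    yMax: 1.0
    zMin: -1.0
    zMax: 1.0
    removeInside: 1
- RandomSamplingDataPointsFilter: # keep ~10% of the input points
    prob: 0.1
- SurfaceNormalDataPointsFilter:  # normals required by point-to-plane ICP
    knn: 15
```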
Note here that we use the bounding box filter to remove the robotic platform itself |
The post-processing filters:
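Again a sketch rather than the original file; the filter names are from libpointmatcher, but the descriptor name and threshold are assumptions about how the mapper marks dynamic points:
```yaml
# Sketch of map post-processing filters; values are placeholders.
- CutAtDescriptorThresholdDataPointsFilter: # drop points considered dynamic
    descName: probabilityDynamic
    useLargerThan: 1
    threshold: 0.8
- SurfaceNormalDataPointsFilter:            # recompute normals on the map cloud
    knn: 15
```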
The first one removes stuff considered dynamic, and the second filter recomputes the surface normals based on the final map pointcloud (otherwise, normals computed from the input pointcloud are kept and used for future matching) |
@kubelvla O_O Another question: are the yaml files you sent me already implemented inside some part of the code, or do I have to provide the yaml files myself? I have read in the libpointmatcher (ETH) documentation that it is possible to use it without yaml files; in fact, I was wondering if all the parameters in norlab_icp_mapper_ros are related to them, or have some other function. |
You can generally play with the map density (the minimum distance between points kept in the map: 0.1 == points 10 cm apart in the map), and with the amount of points left in the input pointcloud (the random-sampling probability in the input filters).
|
Save my yamls as .yaml files and provide the paths through the node parameters. |
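For illustration, the two knobs could look like this (the mapper parameter name is an assumption; check the node's parameter list):
```yaml
# Mapper parameter: minimum distance between points kept in the map.
# 0.1 == points 10 cm apart; a smaller value gives a denser map.
minDistNewPoint: 0.1
```
```yaml
# In the input-filters yaml: probability of keeping each input point.
- RandomSamplingDataPointsFilter:
    prob: 0.1   # keep ~10% of the input points
```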
Outdoors, a map density of around 15 cm seems sufficient; indoors we go towards higher densities. We tend to downsample the input pointclouds quite a lot; the Ouster 128-line lidar was actually downsampled to 10 percent or less, and still good enough for SLAM. |
Ok, now I'll try it and let you know the results :) |
I am not sure about a publication that would really describe what filters and minimizers to use, since it is really problem-dependent. You can get some insight from our (and others') underground experience here: https://arxiv.org/pdf/2208.01787.pdf |
Wait for the Canadians to wake up and see if Maxime can find the very yamls from the competition (but they won't be much different, maybe some extra input filtering) |
@Cristian-wp , you can have a look at "A Review of Point Cloud Registration Algorithms for Mobile Robotics" for a description of the pipeline and most of the filters: |
@pomerlef Thanks :) |
Hi @kubelvla , today I was able to create a map :) It is not perfect, because my odometry source gives ~250 m as the last position while icp_odom gives ~230 m, but it is still a good result for the moment. I will let you know about further improvements. For the moment, thanks a lot for your help! Can you leave this topic open? That way, in case of other problems and updates, I can write everything here. |
Nice @Cristian-wp ! You can also consider a special mode of the ICP minimization, which however requires very accurate attitude estimation in your odometry prior (in roll and pitch, and better than 0.1 degree). You also have to really make sure that the transformation between the frame that is linked to the odometry estimation (base_link?) and the lidar frame is spot-on. If you have these two things, you can suppress pitch and roll drift in the mapping (we were using this in SubT). Documentation is here: https://arxiv.org/abs/2203.13799 and the option is selected by adding a parameter to the minimizer settings in the icp config yaml:
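For illustration, this would sit in the error-minimizer section of the icp config (assuming the Norlab fork of libpointmatcher, where the point-to-plane minimizer exposes a force4DOF flag):
```yaml
errorMinimizer:
  PointToPlaneErrorMinimizer:
    force4DOF: 1   # optimize x, y, z and yaw only; roll and pitch come from the prior
```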
|
Thanks @kubelvla ! Now I am working on making my odometry source more stable. I am using the UKF from the ROS robot_localization package to fuse my data in order to have a reliable visual-inertial-lidar SLAM; after that I will try it for sure. For the attitude estimation I directly fuse a rugged, industrial-grade IMU, so the roll-pitch estimation should be good. During DARPA, did you also fuse the icp_odom output from the mapper into your estimator? I know it is off-topic, but can you maybe suggest a good way to improve my localization system? |
No, the icp odom output was the final output, with no feedback to the odometry system. One has to be very careful not to cause oscillations between the prior and the ICP output. I don't know the correct answer for how to do this safely, in a way that would converge to a better solution (even in the pose-graph approach, Francois remembers situations where the pose graph did some optimization but the ICP still wanted to go back to its own solution). In SubT, we took the IMU and wheel odometry as a prior, and used ICP to get the fine localization and mapping. We actually didn't use loop closures; the system was good enough like that. |
I cannot use wheel odometry because I am working on a UAS :')
:D Ok, no wheel odom. I hope you don't have a brand new tunnel with walls of a clean, uniform grey color ;) The IMU should help the VIO quite a lot in finding good matches; make sure your solution does that.
Not new, but they do have a uniform grey color :D Yes, I am using a VIO + LIO system + external sensors, but it is my first time with this type of environment. Now I am trying to figure out how to get the covariance matrix in order to improve the fusion.
Hi @Cristian-wp ! Here are the set of parameters we used for the DARPA challenge: |
@vdsmax Thank you a lot! Right now I am working on another part of the project. Next week I will try your parameters for my case and let you know the results :) |
Hi @kubelvla, my results with the mapper are getting better, especially after the suggestions from @vdsmax . |