Should calibration be re-computed for inference on new data? #253
Comments
Good question. First, batch_limits has no impact on the results. I actually usually set the batch limit manually to 1 to force single-point-cloud inference.
Thanks for your answer. Since the density depends strongly on the distance to the scanner, this is a situation we often find in reality. That leads me to another question (sorry, but it may actually be linked to the fixed neighbor_limit, if I understand correctly, since a fixed neighbor limit covers a smaller distance in dense areas than in sparse ones): Thanks again for your prompt reply.
This sounds logical.
This would totally make sense. I did not really test it, as I did not have datasets with different training and test densities, but I think this should work. If you can simulate the density value according to the distance to a chosen center point and use that as augmentation, that should work very well. You can imagine each point having a probability of being dropped, with the probability chosen to match the density value depending on distance.
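The density-matched dropping described above can be sketched as follows. This is a minimal illustration, not the augmentation shipped with KPConv: the function name `density_drop`, the linear keep-probability falloff, and the `far_keep` parameter are all assumptions chosen for clarity.

```python
import numpy as np

def density_drop(points, center, far_keep=0.2, max_dist=None, rng=None):
    """Randomly drop points so apparent density decays with distance
    to `center`, mimicking a scanner placed at that position.

    Illustrative sketch: the linear falloff from keep probability 1.0
    (at the center) down to `far_keep` (at `max_dist`) is an assumption;
    any monotone decay matching your target density profile would do.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = np.linalg.norm(points - center, axis=1)
    if max_dist is None:
        max_dist = d.max()
    # Per-point keep probability, decreasing with distance to the center.
    keep_prob = 1.0 - (1.0 - far_keep) * np.clip(d / max_dist, 0.0, 1.0)
    mask = rng.random(len(points)) < keep_prob
    return points[mask]
```

Applied as a training-time augmentation (with a randomly chosen center per cloud), this makes the network see the distance-dependent density pattern that a real scanner produces.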
Thanks again for your time and guidance; I'll give it a try and let you know whether it has an impact. By the way, I read your latest publication on KPConvX, but the link https://github.com/apple/ml-kpconvx is no longer valid. Is there a place where I could find this repo to have a look at that new version of KPConv?
Hi @floriandeboissieu,
Hi Thomas Hugues,
many thanks for this PyTorch version of KPConv.
Applying a trained model to new data with a different point density using test_model.py recomputes the calibration if batch_limits.pkl and neighbor_limits.pkl are not found in the data directory. I am not sure this is what is expected, especially for neighbor_limits. Would a different neighbor limit on the first layer influence the result? Or should I keep them (at least the neighbor limits, i.e. neighbor_limits.pkl) the same as for training? After reading the code and the issues carefully, I wasn't able to find a clear answer about that.
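Since test_model.py only recomputes calibration when the pickle files are missing, one way to keep the neighbor limits identical to training is to copy them into the new dataset's directory beforehand. A minimal sketch, assuming only the file names mentioned above; the helper name `copy_calibration` and the directory layout are illustrative:

```python
import shutil
from pathlib import Path

def copy_calibration(train_dir, test_dir,
                     names=("batch_limits.pkl", "neighbor_limits.pkl")):
    """Copy calibration pickles from the training data directory into the
    new dataset's directory, so test_model.py finds them and skips
    re-calibration. Returns the list of files actually copied."""
    train_dir, test_dir = Path(train_dir), Path(test_dir)
    test_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for name in names:
        src = train_dir / name
        if src.exists():
            shutil.copy(src, test_dir / name)
            copied.append(name)
    return copied
```

Whether this is the right thing to do for batch_limits is a separate question (per the answer above, it does not affect results), but for neighbor_limits it guarantees the receptive fields match those seen during training.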