
Descriptor loss parameters optimization #283

Open
idanshef opened this issue Dec 7, 2022 · 4 comments

idanshef commented Dec 7, 2022

Hi, your work on SuperPoint has really helped me. Thank you for this great work 🙏

I saw that you set lambda_loss and lambda_d to values different from those defined in the paper.
How did you find the parameters that gave you the best results? Was it a lot of experiments, or is there another way?
And one more related question: I saw that you changed the lambdas after you added L2 normalization to the descriptor dot product. Did you apply some transformation to those parameters, or was it trial and error until you reached the optimal values?

Thanks in advance!!


rpautrat commented Dec 8, 2022

Hi,

Unfortunately, there was no easier way than experimenting until I found the best parameters. I cannot guarantee that the current values are optimal, but they should work fairly well at least. Feel free to keep tuning them if you want to improve the results further.

@Vincentqyw

I have a similar question about tuning parameters, especially $\lambda$ and $\lambda_d$.
Are there any criteria for tuning these two parameters when the training set is hard?

@rpautrat (Owner)

I don't see any reason to change $\lambda$ and $\lambda_d$ when the training set is harder. I think the only reasons to change them would be the following (see the sketch after this list):

  • $\lambda$: if you care more about the quality of the keypoints or about the descriptors, you can change this balancing factor.
  • $\lambda_d$: if you expect to have many keypoints in your training set, or on the contrary very few keypoints, you might want to tune $\lambda_d$ a bit.
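
For reference, here is a minimal sketch of where the two weights enter the loss, following the hinge-loss formulation of the SuperPoint paper (plain NumPy; the margin values and variable names are illustrative and the defaults used in this repository may differ):

```python
import numpy as np

def descriptor_pair_loss(d, d_prime, s, lambda_d, m_pos=1.0, m_neg=0.2):
    """Hinge loss for one pair of descriptor cells, as in the SuperPoint paper.

    d and d_prime are assumed L2-normalized, so their dot product lies in [-1, 1].
    s = 1 if the two cells correspond under the homography, 0 otherwise.
    """
    dot = float(np.dot(d, d_prime))
    positive = lambda_d * s * max(0.0, m_pos - dot)  # pull corresponding cells together
    negative = (1 - s) * max(0.0, dot - m_neg)       # push non-corresponding cells apart
    return positive + negative

def total_loss(detector_loss, warped_detector_loss, descriptor_loss, lambda_loss):
    # lambda_loss balances the detector terms against the descriptor term.
    return detector_loss + warped_detector_loss + lambda_loss * descriptor_loss
```

So $\lambda_d$ rebalances the few positive cell pairs against the many negatives (hence its dependence on how many keypoints/correspondences your data produces), while $\lambda$ only trades off keypoint quality against descriptor quality.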

@Vincentqyw

Got it! Thanks for your advice.
