
strange descriptor loss #277

Open · XhqGlorry11 opened this issue Nov 3, 2022 · 5 comments

XhqGlorry11 commented Nov 3, 2022

@rpautrat Congratulations on your nice work! I'm trying to reproduce the superpoint-coco training but ran into a strange descriptor loss. I used the pretrained magicpoint-coco model to export the COCO labels and trained superpoint from scratch.
I set lambda_d = 800 and lambda_loss = 1, and kept the other settings the same as in superpoint_coco.yaml.
My positive_dist starts around 1.8 and my negative_dist starts around 1e-6, but in your training log, positive_dist starts around 1 and negative_dist starts above 0.2.
[Two screenshots: training curves of positive_dist and negative_dist]

Can you help me check what may be wrong? Thanks!
Did you train superpoint-coco from scratch or use any pretrained models?
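
For reference, here is my understanding of the loss, as a minimal NumPy sketch of the hinge descriptor loss from the SuperPoint paper (margins m_p = 1, m_n = 0.2). The names and shapes are assumptions for illustration, and the metric names positive_dist / negative_dist are assumed to match this repo's logging:

```python
# Minimal sketch of the SuperPoint hinge descriptor loss (paper margins
# m_p = 1, m_n = 0.2). Names and shapes here are assumptions for
# illustration, not this repo's exact code.
import numpy as np

def descriptor_loss(desc, warped_desc, s, lambda_d=800.0,
                    positive_margin=1.0, negative_margin=0.2):
    """desc, warped_desc: (N, D) descriptors for N cell pairs.
    s: (N,) binary mask, 1 where the two cells correspond."""
    dot = np.sum(desc * warped_desc, axis=1)                 # pairwise similarity
    positive_dist = np.maximum(0.0, positive_margin - dot)   # pulls matches together
    negative_dist = np.maximum(0.0, dot - negative_margin)   # pushes non-matches apart
    # lambda_d re-weights the rare positive pairs against the many negatives;
    # the full training loss is the detector terms + lambda_loss * this loss.
    loss = (lambda_d * s * positive_dist + (1.0 - s) * negative_dist).mean()
    return loss, positive_dist.mean(), negative_dist.mean()
```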

rpautrat (Owner) commented Nov 3, 2022

Hi,

I trained superpoint-coco from scratch, so the initial values of positive_dist and negative_dist are not really meaningful: they depend on the random initialization of the weights.

What matters are the final values, and I think yours are quite close to my training: I get on average positive_dist = 0.01 and negative_dist = 0.04.

XhqGlorry11 (Author) commented Nov 3, 2022

@rpautrat Thank you for your reply. It seems that you set lambda_d = 0.05 and lambda_loss = 10000 in superpoint_coco.yaml, which is quite different from the paper's setting. What is the main reason?

rpautrat (Owner) commented Nov 3, 2022

The values from the official paper did not work for me. I had to tune these parameters, and lambda_d = 800 with lambda_loss = 1 were the best values I found for our released SuperPoint model.

Since then, I added a normalization in the descriptor loss (95d1cfd) and adapted the values to lambda_d = 0.05 and lambda_loss = 10000, again by tuning. So I guess these values should be the right ones to use now. Feel free to tune them further, of course!
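
As a rough illustration of what such a normalization can look like (an assumed reading of commit 95d1cfd, not its verbatim code), the map of descriptor dot products can be L2-normalized along each spatial axis before the hinge margins are applied:

```python
# Rough sketch of one plausible descriptor-loss normalization (assumed
# reading of commit 95d1cfd, not its exact code): L2-normalize the
# (Hc*Wc x Hc*Wc) map of descriptor dot products along both spatial axes
# before applying the hinge margins.
import numpy as np

def normalized_similarity(desc, warped_desc, eps=1e-8):
    """desc, warped_desc: (Hc*Wc, D) descriptors of the two images."""
    dot = desc @ warped_desc.T          # (Hc*Wc, Hc*Wc) similarity map
    dot = np.maximum(dot, 0.0)          # keep non-negative similarities only
    dot /= np.linalg.norm(dot, axis=1, keepdims=True) + eps  # normalize rows
    dot /= np.linalg.norm(dot, axis=0, keepdims=True) + eps  # normalize columns
    return dot

# After this normalization the similarities are much smaller, which is
# consistent with the re-tuned weights (lambda_d: 800 -> 0.05,
# lambda_loss: 1 -> 10000).
```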

GuoBo98 commented Mar 11, 2024

Hello, may I ask why you added a normalization in the descriptor loss?

rpautrat (Owner) commented
There was a short explanation for this here: #95 (comment)
