About the MvP-Dense Attention module #15
Hi, you can modify the attention layer; the current code does not include that.
Thanks for your quick reply! Could you offer some guidance on how to modify the attention layer?
Yes, you can use nn.MultiheadAttention. It is similar to the usage of cross-attention in DETR, with the query as the joint query and the key and value as the multi-view image feature maps. The ray embedding should be kept.
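For reference, here is a minimal sketch of what such a dense cross-attention replacement could look like. The module name, dimensions, and the choice of adding the ray embedding to the keys as a positional term are illustrative assumptions, not taken from the MvP code:

```python
import torch
import torch.nn as nn


class DenseCrossAttention(nn.Module):
    """Dense (full) cross-attention from joint queries to multi-view features."""

    def __init__(self, embed_dim=256, num_heads=8, dropout=0.1):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, dropout=dropout)

    def forward(self, joint_query, mv_features, ray_embed):
        # joint_query:  (num_queries, batch, embed_dim)    -- the joint queries
        # mv_features:  (num_views*H*W, batch, embed_dim)  -- flattened multi-view feature maps
        # ray_embed:    (num_views*H*W, batch, embed_dim)  -- per-pixel ray embedding, kept as positional info
        # DETR-style convention (assumed here): add the positional (ray) embedding
        # to the keys and keep the values as the raw features.
        out, _ = self.cross_attn(
            query=joint_query,
            key=mv_features + ray_embed,
            value=mv_features,
        )
        return out


# Quick shape check with dummy tensors
q = torch.randn(150, 2, 256)               # e.g. 10 person queries x 15 joints
feats = torch.randn(5 * 32 * 32, 2, 256)   # 5 views, 32x32 feature maps
rays = torch.randn(5 * 32 * 32, 2, 256)
print(DenseCrossAttention()(q, feats, rays).shape)  # torch.Size([150, 2, 256])
```

Exactly how the ray embedding enters the attention (added to the keys only, or also to the queries) is a design choice; the sketch just keeps it as an additive positional term.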
My experiment setting is now as follows.
Do you think this setting is reasonable and able to reproduce the results you report in the paper?
I have tried the above setting and didn't get results close to those reported.
I think your modification is correct. What performance did you get?
Here is the result I got after 74 epochs:
260 is too low. The full-attention result was not obtained with the current setting. You could try tuning the learning rate, and also check the best results during the training process.
The above is the best result I got, and I tried tuning the lr to
I also have the add, norm, and FFN after the cross_attn:
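For completeness, a minimal sketch of such a post-attention add & norm + FFN block; the class name and dimensions are illustrative assumptions, not taken from the actual code:

```python
import torch
import torch.nn as nn


class PostAttnBlock(nn.Module):
    """Residual add & norm after cross-attention, followed by an FFN with its own add & norm."""

    def __init__(self, embed_dim=256, ffn_dim=1024, dropout=0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.norm1 = nn.LayerNorm(embed_dim)
        self.ffn = nn.Sequential(
            nn.Linear(embed_dim, ffn_dim),
            nn.ReLU(inplace=True),
            nn.Dropout(dropout),
            nn.Linear(ffn_dim, embed_dim),
        )
        self.norm2 = nn.LayerNorm(embed_dim)

    def forward(self, query, attn_out):
        # Add & norm around the cross-attention output
        x = self.norm1(query + self.dropout(attn_out))
        # FFN with a second residual add & norm
        x = self.norm2(x + self.ffn(x))
        return x
```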
Actually, I am confused about the function of
In your paper, you mention that you replaced the projective attention with a dense attention module; here are the results:
I wonder how you ran that experiment. How can I modify your code to run it? Which module should I modify?