
Selective Adam #432

Merged · 8 commits · Oct 2, 2024

Conversation

@rahul-goel (Contributor) commented on Sep 30, 2024:

This is the same as the sparse Adam described in the Taming 3DGS paper. Currently, only the Gaussians with non-zero radii (i.e., those visible in the rendered views) are optimized, hence the visible adam flag in the command-line arguments. Any other kind of mask can be passed to the same optimizer backend for different behaviour.
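To illustrate the idea, here is a minimal NumPy sketch of a masked Adam step (a hypothetical illustration, not the actual CUDA-backed `SelectiveAdam` implementation): an ordinary Adam update is applied only to the entries selected by a visibility mask, while the parameters and moment estimates of the masked-out Gaussians are left untouched.

```python
import numpy as np

def selective_adam_step(param, grad, m, v, mask, step,
                        lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step applied only where `mask` is True.

    Hypothetical sketch: entries where mask is False (e.g. Gaussians
    with zero radii, i.e. not visible) keep their parameter values and
    their first/second moment estimates unchanged.
    """
    # Update moment estimates only for the selected (visible) entries.
    m[mask] = beta1 * m[mask] + (1 - beta1) * grad[mask]
    v[mask] = beta2 * v[mask] + (1 - beta2) * grad[mask] ** 2
    # Bias-corrected moments for the selected entries.
    m_hat = m[mask] / (1 - beta1 ** step)
    v_hat = v[mask] / (1 - beta2 ** step)
    # In-place parameter update on the selected entries only.
    param[mask] -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Example: only the first two entries are "visible" and get updated.
param = np.ones(4)
grad = np.full(4, 0.5)
m, v = np.zeros(4), np.zeros(4)
mask = np.array([True, True, False, False])
param, m, v = selective_adam_step(param, grad, m, v, mask, step=1)
```

After this step, `param[2:]` and the corresponding optimizer state are exactly as before, which is what makes skipping invisible Gaussians cheap.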

This optimizer is also compatible with packed=True. It gives a speed-up, briefly evaluated in the table below: I ran the bicycle scene for 7,000 iterations on a 3080 Ti. It does, however, lead to a different number of Gaussians and slightly different evaluation metrics. The speed-up should be larger and more noticeable with more iterations (due to the larger number of Gaussians).

|       | Adam      | PyTorch Sparse Adam + Packed | Visible (Selective) Adam | Visible (Selective) Adam + Packed |
|-------|-----------|------------------------------|--------------------------|-----------------------------------|
| Time  | 3m 16s    | 3m 16s                       | 2m 28s                   | 2m 37s                            |
| Count | 2,642,634 | 2,234,936                    | 2,325,614                | 2,337,493                         |

PS: I've created a new optimizer directory, similar to the strategy directory. I expect more optimizers catering to 3DGS to show up as research progresses, so I think this is a good structure.

@rahul-goel (Contributor, Author) commented:

Help from the community in evaluating the performance and final metrics at 30K iterations, and in testing this on multi-GPU setups, would be very much appreciated. :)

@rahul-goel (Contributor, Author) commented:

I was able to test the garden scene on a bigger GPU. Here are the results with the default strategy:

|       | Adam      | Visible (Selective) Adam |
|-------|-----------|--------------------------|
| Time  | 1821 s    | 1452 s                   |
| Count | 5,771,257 | 4,869,387                |
| PSNR  | 27.343    | 27.259                   |
| SSIM  | 0.866     | 0.862                    |
| LPIPS | 0.076     | 0.082                    |

The comparison can be unfair here due to the different number of Gaussians, so I also tried the MCMC strategy with a 1M Gaussian cap:

|       | Adam   | Visible (Selective) Adam |
|-------|--------|--------------------------|
| Time  | 1272 s | 1188 s                   |
| Count | 1M     | 1M                       |
| PSNR  | 26.97  | 26.78                    |
| SSIM  | 0.848  | 0.839                    |
| LPIPS | 0.113  | 0.122                    |

Here, though, I think the densification computation is slow, which hides the speed-up from Selective Adam.

Finally, we can see a drop in metrics. I think the main reason is that this is a different optimizer and requires different learning rates, which I'm avoiding tuning due to the extra complexity. From my older experiments on the outdoor scenes of MN360, changing the learning rates led to better quality.

@liruilong940607 (Collaborator) commented:

This is cool! Do you want to add a docstring to `SelectiveAdam` and maybe cite the Taming 3DGS paper there?

@rahul-goel (Contributor, Author) commented:

Done!

@liruilong940607 (Collaborator) left a review:

LGTM now!

@liruilong940607 merged commit d4020bc into nerfstudio-project:main on Oct 2, 2024 · 2 checks passed