
Learnable LOD bias #13

Open
ZhouCX117 opened this issue Nov 29, 2024 · 4 comments
Comments

ZhouCX117 commented Nov 29, 2024

Thanks for your excellent work!
Could you please point me to where the learnable LOD bias is in the code? Is it the "extra_level" parameter? Why does it help supplement high-frequency regions?
Also, where is the code that smooths the rendering transition between different LOD levels without introducing visible artifacts?

ZhouCX117 (Author) commented

@AnyiRao @tongji-rkr Sorry to bother you again. I wonder why the rotation parameter is set to be non-learnable.

self._rotation = nn.Parameter(rots.requires_grad_(False))

tongji-rkr (Contributor) commented

First, we learn extra_level during the densification stage. The reasoning is roughly as follows: we found that removing this parameter causes LOD0 to fail to preserve high-frequency regions (for example, discontinuities appear at building textures and edges). Therefore, we add extra_level to anchors with larger gradients so that they are selected more easily.
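
To make the idea concrete, here is a minimal PyTorch-style sketch (not the repository's actual code; names such as bump_extra_level, grad_accum, and lod_visibility are hypothetical) of how an extra_level bias on high-gradient anchors could loosen the LOD visibility test for them:

```python
import torch

# Minimal sketch, not the repository's code: anchors whose accumulated view-space
# gradient is large (typically textured or edge regions) receive a positive
# extra_level, which makes them pass the LOD cut more easily.

def bump_extra_level(extra_level, grad_accum, grad_threshold=2e-4, step=1.0):
    """Raise extra_level for anchors with large accumulated gradients."""
    high_grad = grad_accum > grad_threshold
    return extra_level + step * high_grad.float()

def lod_visibility(anchor_levels, extra_level, pred_level):
    """An anchor is rendered when its octree level, discounted by its bias,
    does not exceed the level predicted from the camera distance."""
    return anchor_levels.float() <= pred_level + extra_level

# Toy usage: a fine-level anchor with a large gradient stays visible even when
# the camera only asks for level 1.
levels = torch.tensor([2.0, 2.0, 1.0])
bias = bump_extra_level(torch.zeros(3), torch.tensor([5e-4, 1e-4, 3e-4]))
print(lod_visibility(levels, bias, pred_level=1.0))  # tensor([ True, False,  True])
```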

tongji-rkr (Contributor) commented

Second, to smooth the transition, you need to set dist2level to progressive.
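
For intuition, here is a rough sketch of what a progressive distance-to-level mapping could look like, assuming the fractional part of a continuous level is used as a fade-in weight (the function names and the exact formula are illustrative, not the repository's implementation):

```python
import math
import torch

# Illustrative sketch only: map camera-to-anchor distance to a *continuous* LOD
# level and use its fractional part as a fade-in weight for anchors sitting at
# the boundary level, so they appear gradually instead of popping in.

def dist2level_progressive(dist, standard_dist, fork=2, max_level=6):
    """Closer anchors get a higher (finer) continuous level."""
    level = torch.log2(standard_dist / dist) / math.log2(fork)
    return level.clamp(0.0, float(max_level))

def transition_weights(anchor_levels, cont_level):
    """1.0 for anchors strictly below the integer level, the fractional part for
    anchors exactly at the boundary level, 0.0 otherwise; multiply into opacity."""
    int_level = torch.floor(cont_level)
    frac = cont_level - int_level
    w = torch.zeros_like(cont_level)
    w = torch.where(anchor_levels.float() < int_level, torch.ones_like(w), w)
    w = torch.where(anchor_levels.float() == int_level, frac, w)
    return w

# Toy usage: as the camera approaches, level-1 anchors fade in smoothly.
dists = torch.tensor([7.0, 4.0, 2.6])
cont = dist2level_progressive(dists, standard_dist=10.0)
print(transition_weights(torch.tensor([1, 1, 1]), cont))  # -> roughly [0.00, 0.32, 0.94]
```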

tongji-rkr (Contributor) commented

Third, this is a property of the anchor, kept only to satisfy the prefiltering process. The rotation of each Gaussian is obtained from an MLP.
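
A simplified sketch of that design, assuming a Scaffold-GS-style anchor that stores a fixed identity rotation purely for prefiltering, plus a covariance MLP (here called cov_mlp, a hypothetical name) that predicts the per-Gaussian rotations actually used for rendering:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Simplified sketch: the anchor keeps a fixed identity rotation only so the
# prefilter can treat it like a Gaussian; the per-Gaussian rotations used for
# rendering are predicted by an MLP from the anchor feature and view direction.

class AnchorRotationSketch(nn.Module):
    def __init__(self, num_anchors, feat_dim=32, n_offsets=10):
        super().__init__()
        rots = torch.zeros(num_anchors, 4)
        rots[:, 0] = 1.0                     # identity quaternion per anchor
        # Fixed, non-learnable anchor rotation (mirrors the intent of the quoted line).
        self._rotation = nn.Parameter(rots, requires_grad=False)
        self._anchor_feat = nn.Parameter(torch.zeros(num_anchors, feat_dim))
        # Hypothetical covariance MLP: per offset, 3 scaling values + 4 quaternion values.
        self.cov_mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, feat_dim), nn.ReLU(True),
            nn.Linear(feat_dim, n_offsets * 7),
        )

    def neural_rotations(self, view_dirs):
        """The rotations used for rendering come from the MLP, not from _rotation."""
        out = self.cov_mlp(torch.cat([self._anchor_feat, view_dirs], dim=-1))
        out = out.view(self._anchor_feat.shape[0], -1, 7)
        return F.normalize(out[..., 3:], dim=-1)   # unit quaternions, one per Gaussian

# Toy usage
model = AnchorRotationSketch(num_anchors=4)
dirs = F.normalize(torch.randn(4, 3), dim=-1)
print(model.neural_rotations(dirs).shape)          # torch.Size([4, 10, 4])
```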
