Hi, thank you very much for your amazing work!
I have run into some problems applying nerfacc to my own NeRF implementation. My original implementation uses two MLPs and trains in under 5 hours (250,000 iterations), but after switching to nerfacc with a single MLP for queries, training time increases to 6 to 8 hours.
The problem is that `estimator.binaries.sum()` (checked in the training loop) decreases very slowly: it starts at about 1,000,000 and stays around 900,000 for a very long time while the density is still updating, which in turn results in the long training time.
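For context on what this counter measures, here is a minimal NumPy sketch of how an occupancy grid of this kind typically behaves; the threshold value, grid resolution, and random densities below are illustrative assumptions, not nerfacc's actual internals:

```python
import numpy as np

# Illustrative stand-in for an occupancy grid: densities are evaluated
# at grid-cell centers and thresholded into a boolean "binaries" grid.
# (occ_thre = 0.01 and the 128^3 resolution are assumed for illustration.)
rng = np.random.default_rng(0)
densities = rng.random((128, 128, 128), dtype=np.float32)
occ_thre = 0.01
binaries = densities > occ_thre

# The sum counts occupied cells; ray samples are only placed inside
# these cells, so a large sum means many samples per ray and slow steps.
print(int(binaries.sum()))

# If the density field diverges to NaN, every comparison with the
# threshold evaluates to False, so the occupied count collapses to zero
# and the sampler returns no samples at all.
nan_densities = np.full_like(densities, np.nan)
print(int((nan_densities > occ_thre).sum()))  # 0
```

This is consistent with both symptoms: a grid that stays nearly full keeps training slow, while a density field that blows up to NaN empties the grid almost instantly.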
I find that any of the following reproduces this situation: 1. a small learning rate such as 5e-4 to 2e-3; 2. a small `render_step_size` such as 0.003 (which is suitable for the current scene); or 3. a larger `far_plane` (>= 10).
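One reason the second and third settings slow things down: with uniform ray marching, the maximum number of samples per ray scales roughly as (far_plane - near_plane) / render_step_size, so a tiny step size combined with a distant far plane multiplies the per-ray work. A quick back-of-the-envelope check (the `near_plane` value of 0.2 is an assumed example):

```python
# Rough upper bound on samples per ray for uniform ray marching:
#   n_max ~= (far_plane - near_plane) / render_step_size
# near_plane = 0.2 is an assumed example value, not from the issue.
near_plane = 0.2
for far_plane in (2.0, 10.0):
    for render_step_size in (0.003, 0.03):
        n_max = int((far_plane - near_plane) / render_step_size)
        print(f"far_plane={far_plane}, step={render_step_size}: "
              f"up to {n_max} samples per ray")
```

With `far_plane=10` and `render_step_size=0.003` the bound is roughly 10x larger than with `far_plane=2` or `render_step_size=0.03`, which matches the observed speed difference between the two regimes.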
Conversely, 1. a larger learning rate such as 5e-3 to 1e-2, 2. a larger `render_step_size` such as 0.03 or more, or 3. a smaller `far_plane` such as 2 all make `estimator.binaries.sum()` drop to zero very quickly (the density collapses to NaN first), and training then errors out.
So I'm quite confused about what is going wrong here. Could you give me some suggestions? Thanks a lot!