I am trying to use the appearance embedding enabled through the `n_extra_learnable_dims` variable. While I can train the network with the extra parameters, I'm having trouble applying them at render time.
From what I can tell, the per-image appearance embedding is not applied at inference time, i.e., when saving screenshots through `run.py`. It appears that the appearance embedding of the first image is applied to every view, leading to incorrect lighting on all images except the first.
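For reference, here is a rough sketch of what I mean by per-image conditioning. The names below (`appearance_embeddings`, `render`, the `extra_dims` argument) are illustrative placeholders and not instant-ngp's actual API; the point is simply that rendering view `i` should use embedding `i` rather than always falling back to embedding 0:

```python
# Hypothetical sketch (not instant-ngp's actual API): each training image
# owns its own learnable appearance vector, and rendering image i should
# condition on embedding[i] instead of always using embedding[0].
import torch

n_images = 100
n_extra_dims = 16  # conceptually, n_extra_learnable_dims

# one learnable appearance vector per training image
appearance_embeddings = torch.nn.Embedding(n_images, n_extra_dims)

def render(pose, image_index, model):
    """Render a view conditioned on the appearance code of `image_index`."""
    app_code = appearance_embeddings(torch.tensor(image_index))
    # the appearance code is appended to the color network input
    return model(pose, extra_dims=app_code)  # hypothetical model signature
```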
Is there a way to apply each image's own appearance embedding at inference time? Similarly, is it possible to independently optimize the appearance embedding on a portion of a test image, as is done in the NeRF-W paper?
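By the second question I mean something along the lines of the sketch below: with the scene model frozen, fit a fresh appearance code against one half of a held-out image and then reuse it to render the full view. `render_rays` and the tensors are placeholders for whatever rendering entry point would expose the extra dims; this is only meant to describe the NeRF-W-style procedure, not an existing instant-ngp feature:

```python
# Hypothetical sketch of NeRF-W-style test-time appearance optimization:
# freeze the scene model, optimize an appearance code against the left
# half of a test image, then evaluate/render with that code.
import torch

def optimize_appearance_code(render_rays, rays_left, rgb_left,
                             n_extra_dims=16, steps=200, lr=1e-2):
    """Fit an appearance code to the left half of a test image.

    render_rays(rays, app_code) -> predicted RGB, with model weights frozen.
    """
    app_code = torch.zeros(n_extra_dims, requires_grad=True)
    opt = torch.optim.Adam([app_code], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = render_rays(rays_left, app_code)
        loss = torch.nn.functional.mse_loss(pred, rgb_left)
        loss.backward()
        opt.step()
    return app_code.detach()  # reuse this code to render the whole image
```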