This repository has been archived by the owner on Oct 7, 2024. It is now read-only.
I'm trying to train the model myself. In the paper "Learning the Depths of Moving People by Watching Frozen People", the loss is defined as an L1 loss plus a gradient loss, but I found that they compute the gradient loss in log space. Do you use log depth for the prediction and the ground truth as well? Also, for the L1 term, |(Dxs − Dgt)|, are the output and ground truth both in the range [0, 255]?
Thanks.
Hi,
About the log space: no, I did not use log space, nor the mask (thank you for pointing this out; I'm going to make it clear). For the second question: yes, both are in [0, 255], but it should work even with a different range.
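To make the discussion concrete, here is a minimal sketch of such a combined loss in NumPy. This is not the repo's actual code: the function names, the finite-difference gradients, and the `w_grad` weight are illustrative assumptions; per the reply above, depths are used directly (no log transform, no mask).

```python
import numpy as np

def l1_loss(pred, gt):
    # Plain L1 data term |D - D_gt|. Per the maintainer's reply, depths are
    # compared directly (no log transform); the range just has to be consistent.
    return np.mean(np.abs(pred - gt))

def gradient_loss(pred, gt):
    # Gradient-matching term: L1 difference of horizontal and vertical finite
    # differences. The paper computes this term on log depth; the reply above
    # says this repo applies it to raw depth instead.
    dx = np.abs(np.diff(pred, axis=1) - np.diff(gt, axis=1))
    dy = np.abs(np.diff(pred, axis=0) - np.diff(gt, axis=0))
    return np.mean(dx) + np.mean(dy)

def total_loss(pred, gt, w_grad=0.5):
    # w_grad is a hypothetical weight; the real value comes from the paper/repo.
    return l1_loss(pred, gt) + w_grad * gradient_loss(pred, gt)
```

Because both terms are averages of absolute differences, rescaling `pred` and `gt` by the same factor only rescales the loss, which is why a range other than [0, 255] should also work.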