About the reconstruction error of the combined model #3
Comments
There is no fundamental reason. I just wanted to compare the variance with the reconstruction error, and I thought subtracting the ground-truth images from the single output with no dropout was more reasonable.
As for the output of the model, in epistemic-uncertainty/model.py, it seems that when testing, you compute the variance of the sampled outputs (with dropout kept on).
In principle, disabling dropout at test time should act like averaging models, since dropout can be interpreted as a model-averaging technique: you train different sampled sub-networks with dropout during training and use the full network at test, which is similar to taking the expectation under the Bernoulli masks (in fact, most implementations divide the activations by the keep probability during training).
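To make that model-averaging reading concrete, here is a minimal NumPy sketch (not the repo's code; the shapes, single linear layer, and keep probability are made up) of "inverted dropout": during training each Bernoulli mask is rescaled by the keep probability, so running the full network without dropout at test time matches the expectation over masks, at least for a linear layer.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 128))       # hypothetical activations
W = rng.normal(size=(128, 64))      # hypothetical weights
keep_prob = 0.5

# Training-time "inverted dropout": sample a Bernoulli mask and divide by keep_prob,
# so each sampled sub-network is an unbiased estimate of the full layer.
mask = rng.binomial(1, keep_prob, size=x.shape)
train_out = (x * mask / keep_prob) @ W

# Test time with dropout disabled: the full activations, no mask, no rescaling.
test_out = x @ W

# Averaging many sampled sub-networks approaches the no-dropout output,
# because E[mask / keep_prob] = 1 for this linear layer.
mc_avg = np.mean(
    [(x * rng.binomial(1, keep_prob, size=x.shape) / keep_prob) @ W
     for _ in range(10000)],
    axis=0,
)
print(np.abs(mc_avg - test_out).max())   # shrinks as the number of samples grows
```

For nonlinear networks this equivalence holds only approximately, which is part of what the rest of this thread is about.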
I'm not so sure about this. Averaging the outputs in the loss function would act as if you disabled dropout during training.
Sorry, I'm confused. In "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?", it says that
And according to Monte Carlo estimation, \int p(y^* | x^*, w) p(w | data) dw \approx \frac{1}{N} \sum_{n=1}^{N} p(y^* | x^*, w_n), where the w_n are samples from p(w | data), so why do you take the full network at test? I'm really confused...
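For comparison, here is a small sketch of the Monte Carlo view being described (purely illustrative; the forward pass, shapes, and sample count are invented and are not the repo's model): keep dropout on at test time, treat each stochastic pass as a sample w_n, and use the sample mean as the predictive mean and the sample variance as the epistemic-uncertainty estimate. For a nonlinear network this mean generally differs from a single no-dropout pass.

```python
import numpy as np

def stochastic_forward(x, rng, keep_prob=0.5):
    """Hypothetical forward pass with dropout kept ON at test time."""
    mask = rng.binomial(1, keep_prob, size=x.shape) / keep_prob
    return np.tanh(x * mask)            # stand-in for the real decoder

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 64))            # one test input (shapes are made up)
N = 50                                  # number of Monte Carlo samples w_1..w_N

samples = np.stack([stochastic_forward(x, rng) for _ in range(N)])

# Monte Carlo approximation of the predictive distribution:
mc_mean = samples.mean(axis=0)          # ~ predictive mean under the dropout posterior
mc_var  = samples.var(axis=0)           # epistemic-uncertainty estimate

# The alternative discussed above: one deterministic pass with dropout off.
no_dropout = np.tanh(x)
print(np.abs(mc_mean - no_dropout).max())   # close, but not identical for nonlinear nets
```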
Yes, this is quite convincing.
Yeah, sure!
Hi, in your combined model, when testing, the reconstruction error is computed as tf.square(self.rec_images2 - self.images) (line 93). I wonder why you don't use the output of the epistemic-uncertainty branch (i.e. self.rec_images) to compute the reconstruction error when testing? Thanks!
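For anyone following along, here is a rough NumPy sketch of the two choices being discussed (the shapes and noise are invented, and the arrays only mimic self.rec_images2 / self.rec_images): the squared error of the single no-dropout reconstruction, as on line 93, versus the squared error of the mean over dropout samples, alongside the per-pixel variance used as the epistemic-uncertainty term.

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.random(size=(4, 28, 28, 1))         # ground-truth batch (shapes are made up)

# Stand-ins for the two outputs discussed in this issue:
# rec_images2: a single reconstruction with dropout disabled (what line 93 uses)
# rec_images:  T reconstructions sampled with dropout kept on (the uncertainty branch)
rec_images2 = images + 0.05 * rng.normal(size=images.shape)
rec_images  = images[None] + 0.05 * rng.normal(size=(20,) + images.shape)

# Option used in the repo: squared error of the deterministic output.
rec_err_no_dropout = np.square(rec_images2 - images)

# Alternative raised in the question: squared error of the MC-averaged output.
rec_err_mc_mean = np.square(rec_images.mean(axis=0) - images)

# Epistemic uncertainty from the same MC samples, for the comparison
# between variance and reconstruction error mentioned above.
epistemic_var = rec_images.var(axis=0)

print(rec_err_no_dropout.mean(), rec_err_mc_mean.mean(), epistemic_var.mean())
```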