The default config for the VAE has an intermediate layer size of 32 and a latent layer size of 100. The example data are 3*13, so the input dimension is 39. Is there some reason that, unlike most autoencoders, the "bottleneck" middle layer is larger than the input? There are autoencoder architectures that do this, but they usually require weight regularization somewhere in the layer sequence.
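To make the concern concrete, here is a minimal sketch of the encoder dimensions the default config implies (the layer names and PyTorch framing are my own illustration, not this repo's code): the latent layer ends up wider than the input, i.e. an overcomplete representation.

```python
import torch
import torch.nn as nn

# Dimensions from the default config as described above.
input_dim = 3 * 13      # example data: 3*13 features -> 39
intermediate_dim = 32   # default intermediate layer size
latent_dim = 100        # default latent layer size -- wider than the input

class Encoder(nn.Module):
    """Hypothetical VAE encoder mirroring the configured sizes."""
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(input_dim, intermediate_dim)   # 39 -> 32
        self.mu = nn.Linear(intermediate_dim, latent_dim)      # 32 -> 100
        self.log_var = nn.Linear(intermediate_dim, latent_dim) # 32 -> 100

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        # The "bottleneck" (100 dims) is larger than the input (39 dims),
        # so nothing forces the latent code to compress the data.
        return self.mu(h), self.log_var(h)

x = torch.randn(1, input_dim)
mu, log_var = Encoder()(x)
print(mu.shape)  # torch.Size([1, 100])
```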
Hi,