Variational Auto-Encoder (VAE)

The goal of VAEs is to train a generative model of the form $p(x, z) = p(z)\,p(x|z)$, where $p(z)$ is a prior distribution over the latent variables $z$ and $p(x|z)$ is the likelihood function, or decoder, that generates data $x$ given latent variables $z$.

Since the true posterior $p(z|x)$ is in general intractable, the generative model is trained with the aid of an approximate posterior distribution, or encoder, $q(z|x)$.
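Concretely, training maximizes the evidence lower bound (ELBO), which ties the decoder and the encoder together:

$$\log p(x) \;\ge\; \mathbb{E}_{q(z|x)}\big[\log p(x|z)\big] \;-\; \mathrm{KL}\big(q(z|x)\,\|\,p(z)\big)$$

The first term rewards reconstruction; the second keeps the encoder close to the prior. The slack of the bound is exactly $\mathrm{KL}\big(q(z|x)\,\|\,p(z|x)\big)$, which is why so much of the work below targets the quality of the approximate posterior.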

Most research efforts on improving VAEs are dedicated to statistical challenges, such as:

  • reducing the gap between the approximate and true posterior distributions
  • formulating tighter bounds
  • reducing the gradient noise
  • extending VAEs to discrete variables
  • tackling posterior collapse
  • designing special network architectures
    • previous work mostly borrows architectures from classification tasks
Introduction

The encoder outputs a mean and a log_var.

When the target distribution is $\mathcal{N}(0,1^2)$: mean = 0, log_var = 0.

When the target distribution is $\mathcal{N}(0,0.01^2)$: mean = 0, log_var ≈ -9.

When mean and log_var are themselves distributions, e.g. both initialized as $\mathcal{N}(0, 1^2)$, then (for the $\mathcal{N}(0,0.01^2)$ target) they should converge to $\mathcal{N}(0, 0.01^2)$ and $\mathcal{N}(-9, 0.01^2)$ respectively; less fluctuation is better.
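A minimal sketch of the two quantities above in code, assuming PyTorch (the helper names `reparameterize` and `kl_to_standard_normal` are illustrative, not from this repo):

```python
import math
import torch

# Minimal sketch (assumes PyTorch; names are illustrative, not from the text).
# The encoder predicts `mean` and `log_var` of a diagonal Gaussian q(z|x).

def reparameterize(mean, log_var):
    # z = mean + sigma * eps with eps ~ N(0, I); keeps sampling differentiable
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)
    return mean + std * eps

def kl_to_standard_normal(mean, log_var):
    # Closed-form KL( N(mean, diag(exp(log_var))) || N(0, I) ),
    # summed over the latent dimensions
    return -0.5 * torch.sum(1 + log_var - mean.pow(2) - log_var.exp(), dim=-1)

# Sanity check of the numbers above: a target of N(0, 0.01^2) corresponds to
# log_var = log(0.01^2) = 2 * ln(0.01) ≈ -9.21 (the "-9" quoted above)
mean = torch.zeros(1, 4)
log_var = torch.full((1, 4), 2 * math.log(0.01))
z = reparameterize(mean, log_var)            # samples concentrated near 0
print(kl_to_standard_normal(mean, log_var))  # nonzero: q is far from N(0, I)

# And for a target of N(0, 1^2): mean = 0, log_var = 0 gives KL = 0
print(kl_to_standard_normal(torch.zeros(1, 4), torch.zeros(1, 4)))  # tensor([0.])
```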

VAEs maximize the mutual information between the input and the latent variables, which requires the network to retain as much of the input's information content as possible.

Information maximization in noisy channels: A variational approach [NIPS 2003]

Deep variational information bottleneck [ICLR 2017]
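Both references rely on variational lower bounds on mutual information. As a reminder (a standard result, the Barber–Agakov bound, not quoted from this repo):

$$I(X;Z) \;=\; H(X) - H(X\mid Z) \;\ge\; H(X) + \mathbb{E}_{p(x,z)}\big[\log q(x\mid z)\big]$$

for any variational decoder $q(x\mid z)$, since the gap is a KL divergence and hence non-negative. Maximizing a reconstruction term therefore maximizes a lower bound on $I(X;Z)$, which is the sense in which VAEs retain the information content of the input.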

References:

https://www.jeremyjordan.me/variational-autoencoders/

https://www.jeremyjordan.me/autoencoders/

https://jaan.io/what-is-variational-autoencoder-vae-tutorial/

Literature