An Empirical Study on the Impact of Controlling the Training Objective on Disentanglement Learning
Many unsupervised or weakly-supervised generative approaches share a common technique: they manipulate hyper-parameters that modulate the learning constraints imposed on the training objective. For example, in modified variants of the variational autoencoder (VAE, a probabilistic graphical model implemented as an artificial neural network),
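As a concrete, well-known instance of such a hyper-parameter, the β-VAE family scales the KL term of the standard VAE objective. The sketch below is illustrative only (plain NumPy, squared-error reconstruction, diagonal-Gaussian posterior); it is not necessarily the exact model variant studied here.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Illustrative beta-VAE objective: reconstruction error plus a
    beta-weighted KL divergence between the approximate posterior
    q(z|x) = N(mu, exp(logvar)) and the standard normal prior.
    beta = 1 recovers the vanilla VAE; beta > 1 strengthens the
    constraint (encouraging disentanglement) at the cost of
    reconstruction quality."""
    # Reconstruction term: sum of squared errors
    # (a Gaussian likelihood up to constants).
    recon = np.sum((x - x_recon) ** 2)
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over latents.
    kl = -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar))
    return recon + beta * kl
```

Raising `beta` reweights the same two terms rather than changing the model, which is why a single scalar suffices to shift the balance between generation quality and disentanglement pressure.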
Assume that, in an unsupervised setting, the learning process of a generative model can be described as two phases: learning the domain and learning the disentangled representation. Further assume these two phases can be switched alternately by a tunable hyperparameter. In such a scenario, can we minimize the trade-off between generation quality and disentanglement, as evaluated by reconstruction loss and disentanglement metrics, respectively? Answering this question is the objective of this empirical study.
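One simple way to realize such phase switching is a piecewise schedule on the constraint weight: a low-weight phase that favors reconstruction (learning the domain) alternating with a high-weight phase that favors the constrained, disentangled representation. The sketch below is a minimal assumption; the period and weight values are hypothetical placeholders, not values from this study.

```python
def beta_schedule(step, period=1000, beta_low=1.0, beta_high=8.0):
    """Alternate between a reconstruction-focused phase (low beta)
    and a disentanglement-focused phase (high beta), switching
    every `period` training steps. All numeric values here are
    illustrative placeholders."""
    phase = (step // period) % 2  # 0 = domain phase, 1 = disentanglement phase
    return beta_low if phase == 0 else beta_high
```

A schedule like this makes the "two alternating phases" assumption operational: the training loop simply queries the schedule at each step and plugs the returned weight into the objective.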
We employ
The following tables summarize the experiments for each dataset, reporting the mean for each experiment case. For the control groups, we gradually increase
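The per-case means reported in the tables can be computed by grouping the raw metric readings by experiment case and averaging; the helper below is a stdlib-only sketch (the case names and values are illustrative, not the study's actual data).

```python
from collections import defaultdict
from statistics import mean

def summarize(results):
    """Group raw (case_name, metric_value) readings by experiment
    case and report the mean per case, mirroring how the summary
    tables are produced."""
    by_case = defaultdict(list)
    for case, value in results:
        by_case[case].append(value)
    return {case: mean(values) for case, values in by_case.items()}
```

For example, `summarize([("a", 1), ("a", 3), ("b", 4)])` yields `{"a": 2, "b": 4}`.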
For the first domain, the models with training from
During the experimental process, we observed that if we tune the
If you have any questions, please contact me by email at [email protected]