
Variational Autoencoder (VAE) Implementation from Scratch

Welcome to the Variational Autoencoder (VAE) implementation repository!

This repository contains an implementation of a Variational Autoencoder (VAE) built from scratch and trained on the MNIST and CIFAR-10 datasets. A VAE is a generative model that learns to encode data into a latent space and then decode it back to the original data space. This implementation focuses on understanding the core concepts and building blocks of VAEs without relying on prebuilt VAE implementations.

Table of Contents

  • Introduction
  • Prerequisites
  • Project Structure
  • References
  • Author

Introduction

Variational Autoencoders (VAEs) are generative models that learn to encode data into a latent space and decode it back to the original space. They are widely used in applications such as image generation, anomaly detection, and data compression. This repository provides a step-by-step implementation of a VAE in Python with TensorFlow.
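The core of the model is an encoder that outputs the parameters of a latent Gaussian, a sampling step made differentiable via the reparameterization trick, and a decoder that reconstructs the input. The sketch below illustrates this structure for a 28x28 grayscale input (MNIST-like); the latent size, layer widths, and architecture are illustrative assumptions, not the exact setup used in the notebooks.

```python
import tensorflow as tf

latent_dim = 2  # assumed latent size, chosen small for illustration

# Encoder: maps an image to the mean and log-variance of q(z|x).
encoder_inputs = tf.keras.Input(shape=(28, 28, 1))
h = tf.keras.layers.Flatten()(encoder_inputs)
h = tf.keras.layers.Dense(256, activation="relu")(h)
z_mean = tf.keras.layers.Dense(latent_dim)(h)
z_log_var = tf.keras.layers.Dense(latent_dim)(h)
encoder = tf.keras.Model(encoder_inputs, [z_mean, z_log_var], name="encoder")

# Decoder: maps a latent sample back to pixel space.
decoder_inputs = tf.keras.Input(shape=(latent_dim,))
h = tf.keras.layers.Dense(256, activation="relu")(decoder_inputs)
h = tf.keras.layers.Dense(28 * 28, activation="sigmoid")(h)
decoder_outputs = tf.keras.layers.Reshape((28, 28, 1))(h)
decoder = tf.keras.Model(decoder_inputs, decoder_outputs, name="decoder")

def reparameterize(z_mean, z_log_var):
    """Sample z = mean + sigma * eps with eps ~ N(0, I), keeping the
    sampling step differentiable (the reparameterization trick)."""
    eps = tf.random.normal(shape=tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * eps

def vae_loss(x, x_recon, z_mean, z_log_var):
    """Negative ELBO: reconstruction term plus KL(q(z|x) || N(0, I))."""
    recon = tf.reduce_mean(
        tf.reduce_sum(
            tf.keras.losses.binary_crossentropy(x, x_recon), axis=(1, 2)
        )
    )
    kl = -0.5 * tf.reduce_mean(
        tf.reduce_sum(
            1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1
        )
    )
    return recon + kl
```

Minimizing this loss balances reconstruction quality against keeping q(z|x) close to the standard normal prior, which is what makes it possible to generate new images by decoding samples drawn from N(0, I).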

For further study, see the papers listed in the References section below.

Prerequisites

Before you begin, ensure you have the following installed (a sample install command follows the list):

  • Python 3.9 or later
  • NumPy
  • TensorFlow 2.x
  • Matplotlib
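One way to install these is via pip (these are the standard PyPI package names; pin versions as needed for your environment):

```bash
pip install numpy "tensorflow>=2.0" matplotlib
```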

Project Structure

  • vae_mnist.ipynb: Jupyter notebook for training the VAE model on the MNIST dataset.
  • vae_cifar10.ipynb: Jupyter notebook for training the VAE model on the CIFAR-10 dataset.
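Once the prerequisites (plus Jupyter) are installed, either notebook can be launched locally, for example:

```bash
pip install notebook
jupyter notebook vae_mnist.ipynb
```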

References

  • Kingma, D. P., & Welling, M. (2013). Auto-Encoding Variational Bayes. arXiv preprint arXiv:1312.6114.
  • Doersch, C. (2016). Tutorial on Variational Autoencoders. arXiv preprint arXiv:1606.05908.

Author

This project is implemented by Faezeh. For more information and updates, visit Curious Seekers Hub.
