This repository contains all the code for the Variational Autoencoders for Jet Simulation paper. The paper demonstrates the use of variational autoencoders (VAE) as powerful tools for jet simulation. With the growing popularity and success of generative adversarial networks (GAN) in high energy physics, we explore variational autoencoders for calorimeter simulation and compare them to the Location-Aware Generative Adversarial Network (LAGAN), a preexisting generative adversarial network used for jet simulation.
The model we propose utilizes a feature perceptual loss in order to reproduce the sparse features of jet images. The feature perceptual loss is the mean squared error between the hidden-layer activations of two images; in this case, we take it between the input and output images of the variational autoencoder. These hidden features are computed with a pre-trained convolutional neural network (CNN) classifier. The CNN model is shown in this notebook.
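As a rough illustration of the idea (not the repository's exact implementation), the sketch below computes a feature perceptual loss with Keras. The layer names and the `cnn` classifier are assumptions; substitute the layers of the actual pre-trained CNN.

```python
# Minimal sketch of a feature perceptual loss, assuming a trained Keras CNN
# classifier `cnn`; the layer names passed in are hypothetical.
import tensorflow as tf

def make_feature_extractor(cnn, layer_names):
    """Build a model that returns the activations of selected hidden layers."""
    outputs = [cnn.get_layer(name).output for name in layer_names]
    return tf.keras.Model(inputs=cnn.input, outputs=outputs)

def feature_perceptual_loss(extractor, x_true, x_recon):
    """Mean squared error between hidden-layer activations of two image batches."""
    feats_true = extractor(x_true)
    feats_recon = extractor(x_recon)
    loss = 0.0
    for f_t, f_r in zip(feats_true, feats_recon):
        loss += tf.reduce_mean(tf.square(f_t - f_r))
    return loss
```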
This notebook contains the code for the actual VAE model. We load the pre-trained CNN model and use it to calculate the feature perceptual loss, which is then combined with the Bernoulli reconstruction loss and the KL divergence loss. More detail on the model can be found in the notebook.
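A minimal sketch of how such a combined objective could look is given below, assuming the encoder outputs Gaussian parameters `(z_mean, z_log_var)`; the term weights are hypothetical and the feature extractor comes from the sketch above.

```python
# Sketch of a combined VAE objective: Bernoulli (binary cross-entropy)
# reconstruction + KL divergence + feature perceptual loss.
# The weights w_* are illustrative, not values from the paper.
import tensorflow as tf

def vae_loss(x_true, x_recon, z_mean, z_log_var, extractor,
             w_bernoulli=1.0, w_kl=1.0, w_feature=1.0):
    # Bernoulli reconstruction term on the pixel intensities
    bce = tf.reduce_mean(
        tf.reduce_sum(
            tf.keras.losses.binary_crossentropy(x_true, x_recon), axis=[1, 2]))
    # KL divergence between the approximate posterior and a unit Gaussian prior
    kl = -0.5 * tf.reduce_mean(
        tf.reduce_sum(
            1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1))
    # Feature perceptual term from the pre-trained CNN
    fpl = feature_perceptual_loss(extractor, x_true, x_recon)
    return w_bernoulli * bce + w_kl * kl + w_feature * fpl
```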
This is where all the data analysis is found. The results include visual and quantitative assessments of the jets produced by the variational autoencoder, and a large portion of the analysis explores the latent space. We also analyze the LAGAN output in order to compare it to our model.
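One common way to probe the latent space, sketched below under the assumption that the repo exposes separate `encoder` and `decoder` Keras models, is to interpolate between the latent means of two jets and decode the intermediate points; this is only an illustrative example, not the notebook's exact code.

```python
# Hypothetical latent-space walk between two encoded jet images.
# Assumes `encoder` returns (z_mean, z_log_var) and `decoder` maps z to an image.
import numpy as np

def interpolate_jets(encoder, decoder, jet_a, jet_b, n_steps=8):
    """Linearly interpolate between the latent means of two jets and decode each point."""
    z_a, _ = encoder.predict(jet_a[np.newaxis])
    z_b, _ = encoder.predict(jet_b[np.newaxis])
    alphas = np.linspace(0.0, 1.0, n_steps)
    z_path = np.stack([(1 - a) * z_a[0] + a * z_b[0] for a in alphas])
    return decoder.predict(z_path)
```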