Repository containing code and experiments for the paper "Bayesian Neural Network Versus Ex-Post Calibration For Capturing Prediction Uncertainty".
The core files are as follows:
- `run_experiments`: Entry program; starts the experiments.
- `config.yaml`: Contains program configurations and training hyperparameters.
- `trainer.py`: VB trainer.
- `net.py`: Standard neural network builder.
- `models.py`: Neural network architecture.
- `dataset.py`: TF dataset builder.
- `utils.py`: General utility functions for plotting, exporting arrays, etc.
- `vbutils`: Utility functions related to VB.
- `betacal`: Calibration methods (Beta, Logistic and Isotonic); code adapted from [1]. A brief illustrative sketch follows this list.
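For context, this style of ex-post calibration can be sketched with scikit-learn (a listed dependency). The snippet below is a minimal illustration, not the repository's code: the Beta method lives in the `betacal` module here and is not reproduced, while logistic (Platt-style) and isotonic calibration are shown with standard scikit-learn estimators; all variable names are made up for the example.

```python
# Minimal sketch of ex-post calibration on held-out classifier scores.
# Illustrative only; the repository's betacal module implements the Beta
# method, which is not reproduced here.
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
scores = rng.uniform(size=1000)                              # uncalibrated probabilities
labels = (rng.uniform(size=1000) < scores ** 2).astype(int)  # toy binary labels

# Logistic (Platt-style) calibration: fit a sigmoid to the raw scores.
platt = LogisticRegression().fit(scores.reshape(-1, 1), labels)
calibrated_logistic = platt.predict_proba(scores.reshape(-1, 1))[:, 1]

# Isotonic calibration: fit a monotone, piecewise-constant mapping.
iso = IsotonicRegression(out_of_bounds="clip").fit(scores, labels)
calibrated_isotonic = iso.predict(scores)
```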
All results and logs can be accessed under the `results/` and `logs/` folders (both are created automatically). Datasets (found under `data/`) were downloaded and preprocessed using the code from previous work on calibration methods [1].
Note: Python version == 3.6
Main dependencies:
- TensorFlow 2.0
- Scikit-learn
- NumPy
Please refer to the `requirements.txt` file for the full list of dependencies. To set up the program on your local machine:
- Clone the repo:

```sh
git clone https://github.com/sodalabsio/vb_calibration_nn.git
cd vb_calibration_nn/
```
- Install libraries (an optional environment check is sketched after these steps):

```sh
virtualenv -p python3 venv
source venv/bin/activate
pip install -r requirements.txt
```
- Execute `run_experiments.sh`, optionally passing the path of the virtualenv activate script as an argument (not required if the virtualenv is already activated):

```sh
./run_experiments.sh
```

or

```sh
./run_experiments.sh venv/bin/activate
```
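As an optional sanity check after installation, the following minimal sketch prints the installed versions for comparison against those noted above (Python 3.6, TensorFlow 2.0); the versions pinned in `requirements.txt` take precedence:

```python
# Quick environment check; illustrative only.
import sys
import numpy as np
import sklearn
import tensorflow as tf

print("Python      :", sys.version.split()[0])  # expect 3.6.x
print("TensorFlow  :", tf.__version__)          # expect 2.0.x
print("scikit-learn:", sklearn.__version__)
print("NumPy       :", np.__version__)
```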
License: GNU General Public License v3.0