This report presents a study of uncertainty quantification for deep neural networks on classification tasks. The main contribution is an empirical Bayesian method for computing credible intervals for a deep neural network. We tested the method on a binary classification task and compared it with an ensemble learning approach to uncertainty quantification, showing that the Empirical Bayesian Neural Network (EBNN) performed comparably to a bootstrapping method while requiring significantly less running time. An attractive feature of the EBNN is its scalability as the number of training points grows, which is precisely the main weakness of ensemble learning methods. The motivation for developing the EBNN was to bring the Bayesian framework for uncertainty quantification to deep neural networks. Unlike in fully Bayesian networks, only the last layer of the network is built with Bayesian weights, while the remaining weights are regular point estimates. This largely sidesteps the intractable computation of the true posterior distribution over the whole network while still providing credible intervals backed by theory.
Author: Maxime Casara (Leiden University).
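To make the last-layer idea concrete, here is a minimal numpy sketch of one common realization of such an architecture: a fixed, point-estimate feature extractor followed by a Bayesian logistic-regression output layer fitted with a Laplace approximation, from which credible intervals on the predicted probabilities are obtained by sampling. This is an illustrative sketch under stated assumptions, not the report's exact algorithm — the random hidden layer, the prior precision `alpha`, and the choice of a Laplace approximation are all assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs.
n = 200
X = np.vstack([rng.normal(-1, 1, (n // 2, 2)), rng.normal(1, 1, (n // 2, 2))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

# Point-estimate feature extractor: a fixed random tanh layer stands in for
# the trained deterministic part of the network (an assumption for brevity).
W_h = rng.normal(0, 1, (2, 16))
b_h = rng.normal(0, 1, 16)
def features(X):
    return np.tanh(X @ W_h + b_h)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

Phi = features(X)

# Bayesian last layer: logistic regression with a Gaussian prior, fitted by a
# Laplace approximation (MAP estimate plus a Gaussian centered on it).
alpha = 1.0                     # assumed prior precision
w = np.zeros(Phi.shape[1])
for _ in range(2000):           # gradient descent to the MAP estimate
    p = sigmoid(Phi @ w)
    grad = (Phi.T @ (p - y) + alpha * w) / n
    w -= 0.1 * grad

# Hessian of the negative log posterior at the MAP gives the approximate
# posterior covariance of the last-layer weights.
p = sigmoid(Phi @ w)
H = (Phi * (p * (1 - p))[:, None]).T @ Phi + alpha * np.eye(len(w))
cov = np.linalg.inv(H)

# Credible intervals: sample last-layer weights from the Gaussian posterior
# and take empirical quantiles of the predicted probabilities.
X_test = np.array([[-1.0, -1.0], [1.0, 1.0]])
Phi_test = features(X_test)
samples = rng.multivariate_normal(w, cov, size=2000)
probs = sigmoid(Phi_test @ samples.T)          # shape (n_test, n_samples)
mean = probs.mean(axis=1)
lo, hi = np.quantile(probs, [0.025, 0.975], axis=1)
print(mean, lo, hi)
```

Because only the 16 last-layer weights are Bayesian, the Laplace step inverts a small 16x16 matrix regardless of how deep the feature extractor is, which is the scalability advantage the abstract contrasts with retraining an entire ensemble.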