Uncertainty Quantification for Deep Learning

This WP aims to analyse the statistical approximation and convergence properties of DL strategies, e.g. recovery guarantees, bounds on the number of training samples needed to reach a prescribed accuracy, and uncertainty estimates in the form of statistical confidence statements. Modern methodology can be used to construct computationally efficient Bayesian algorithms for complicated infinite-dimensional nonlinear inverse problems, and very recent work has provided rigorous mathematical guarantees for such algorithms for prototypical PDE models arising in key applications. This methodology is attractive to scientists because the Bayesian posterior automatically delivers credible sets that allow one to accept or reject scientific hypotheses about the unknown parameters, which can be used to validate model choices.

Such guarantees are crucial both for providing a scientific foundation for inferences drawn from data via inverse problems ('inverse UQ') and for industrial applications where numerical outputs feed into further algorithmic tasks whose sensitivity to noisy input must be quantified ('forward UQ'). At present, there is no established way to attach such statistical significance statements (e.g. 'error bars') to Deep Learning outputs. Without such guarantees, the usefulness of DL in real-world applications beyond basic 'forward prediction' tasks will be severely limited; the goal is thus to establish a Bayesian foundation for UQ in DL.
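To illustrate the kind of statement a Bayesian posterior delivers, the following is a minimal sketch (not the project's methodology) of a conjugate Gaussian model, where the posterior for an unknown parameter is available in closed form and yields a credible interval, an 'error bar' that can be used to accept or reject a hypothesis about the parameter. All model choices here (variances, sample size, true value) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: noisy observations y_i = theta + eps_i with
# eps_i ~ N(0, sigma^2) and Gaussian prior theta ~ N(0, tau^2).
# The posterior for theta is Gaussian, so a credible set is explicit.
rng = np.random.default_rng(0)

sigma2, tau2 = 0.5**2, 1.0**2   # assumed noise and prior variances
theta_true = 1.3                 # hypothetical ground truth
n = 100
y = theta_true + np.sqrt(sigma2) * rng.standard_normal(n)

# Conjugate Gaussian posterior given n i.i.d. observations:
post_var = 1.0 / (n / sigma2 + 1.0 / tau2)
post_mean = post_var * (y.sum() / sigma2)

# 95% credible interval: posterior mean +/- 1.96 posterior std
half_width = 1.96 * np.sqrt(post_var)
lo, hi = post_mean - half_width, post_mean + half_width
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")

# A credible set supports hypothesis testing: reject theta = 0
# if 0 lies outside the interval.
reject_zero = not (lo <= 0.0 <= hi)
```

For deep networks no such closed-form posterior exists, which is precisely why rigorous UQ guarantees for DL outputs remain open.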

Get in Touch!

To subscribe to the mailing list, send an email request.