Neural networks for deep learning come in a bewildering variety of types and architectures, and it is often unclear what they are doing and why. To understand this, the processes of deep learning must be put on firm theoretical foundations. That is the purpose of this thematic area.

In studying the theory of deep learning with neural networks, we address two key questions: how well we can design and train a stable neural network to represent the behaviour of a system (expressivity), and how we can be highly confident that the trained network is doing what we expect it to do (transparency).

The theory area will consist of four Work Packages, focusing on i) the interpretation and study of Neural Networks (NNs) as PDEs, including physics-informed neural networks (PINNs) and neural differential equations, ii) leveraging statistical methods to inform the confidence and generalisation of NNs, and iii) the development of networks for non-standard data structures.
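To give a flavour of the first theme, a residual network update is exactly the forward-Euler discretisation of an ODE, which is the starting point of the neural-differential-equation viewpoint. The sketch below is an illustrative assumption of ours, not code from the programme: the vector field `f`, the weight matrix `W`, and the dimensions are all invented for demonstration.

```python
import numpy as np

# Illustrative sketch (assumed example): a residual block
#   x_{k+1} = x_k + h * f(x_k)
# is the forward-Euler discretisation of the ODE dx/dt = f(x).
# Deeper networks with smaller steps approximate the same trajectory.

rng = np.random.default_rng(0)
d = 4                                   # state dimension (arbitrary)
W = 0.1 * rng.standard_normal((d, d))   # shared layer weights (arbitrary)

def f(x):
    # One "layer": a smooth vector field parameterised by W.
    return np.tanh(W @ x)

def resnet_forward(x0, n_layers, h):
    # n_layers residual blocks == n_layers Euler steps of size h.
    x = x0.copy()
    for _ in range(n_layers):
        x = x + h * f(x)
    return x

x0 = rng.standard_normal(d)
# Doubling the depth while shrinking the step size approximates the same
# ODE solution at time T = n_layers * h = 1:
out_coarse = resnet_forward(x0, n_layers=16, h=1.0 / 16)
out_fine = resnet_forward(x0, n_layers=128, h=1.0 / 128)
print(np.linalg.norm(out_coarse - out_fine))  # small: both approximate x(1)
```

Under this reading, questions about network stability and expressivity become questions about the underlying differential equation, which is what makes PDE and numerical-analysis tools applicable.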

Get in Touch!

To subscribe to the mailing list, send an email request.