Algorithms for implementing deep learning methods frequently have to work with large, high-dimensional, highly non-standard data sets and complicated architectures. These methods have to be carefully trained in order to produce reliable results. In this thematic area we will investigate the best way to do this in an efficient, robust, reliable, and stable manner. Combined with the work in the theory area, this will deliver stable algorithms for deep learning with provable reliability.
The algorithms theme will consist of five Work Packages, with a focus on i) efficient training using the continuous formulation and metric-based training algorithms, ii) large-scale nonconvex optimisation including meshless methods, and iii) learning non-Gaussian systematic model errors.