Topics covered

  • Inverse problems (WP3.3)
  • PINNs/discretisation with NN (WP1.2/2.4, C2)
  • Optimal transport (WP1.4/2.2, C4)
  • Continuum interpretation of DNNs (WP1.1/2.1, C1)
  • Reconstructions/outputs with error bars (WP1.3, C3)
  • Guarantees/stability estimates (WP1.1/1.3/3.2, C1)
  • Learned physics correction/approximation (WP2.5/3.1/3.2, C3/5)
  • Data on manifolds, e.g. point-clouds/PDE constraints (WP1.4/2.2, C2/4)
  • Saddle-point formulations for training (WP2.1/2.3/2.4, C4)
  • Multi-physics/-modalities (WP3.3, C5)

Papers previously discussed

Veit Wild, Motonobu Kanagawa, Dino Sejdinovic
Connections and Equivalences between the Nyström Method and Sparse Variational Gaussian Processes
https://arxiv.org/pdf/2106.01121

Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein
Deep Neural Networks as Gaussian Processes
https://arxiv.org/pdf/1711.00165

Michalis K. Titsias
Variational Learning of Inducing Variables in Sparse Gaussian Processes
https://proceedings.mlr.press/v5/titsias09a/titsias09a.pdf

Dongdong Chen, Julián Tachella, Mike E. Davies
Equivariant Imaging: Learning Beyond the Range Space
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 4379-4388

Eldad Haber and Lars Ruthotto
Stable Architectures for Deep Neural Networks
https://arxiv.org/pdf/1705.03341.pdf

Kevin Patrick Murphy
Probabilistic Machine Learning: Advanced Topics
https://probml.github.io/pml-book/book2.html

Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Neil Houlsby, Shaobo Hou, Ghassen Jerfel, Alan Karthikesalingam, Mario Lucic, Yian Ma, Cory McLean, Diana Mincu, Akinori Mitani, Andrea Montanari, Zachary Nado, Vivek Natarajan, Christopher Nielson, Thomas F. Osborne, Rajiv Raman, Kim Ramasamy, Rory Sayres, Jessica Schrouff, Martin Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch, Max Vladymyrov, Xuezhi Wang, Kellie Webster, Steve Yadlowsky, Taedong Yun, Xiaohua Zhai, D. Sculley
Underspecification Presents Challenges for Credibility in Modern Machine Learning
https://arxiv.org/abs/2011.03395

Elena Celledoni, Matthias J. Ehrhardt, Christian Etmann, Robert I. McLachlan, Brynjulf Owren, Carola-Bibiane Schönlieb, Ferdia Sherry
Structure-preserving deep learning
https://arxiv.org/abs/2006.03364

Tom R. Andersson, J. Scott Hosking, María Pérez-Ortiz, Brooks Paige, Andrew Elliott, Chris Russell, Stephen Law, Daniel C. Jones, Jeremy Wilkinson, Tony Phillips, James Byrne, Steffen Tietsche, Beena Balan Sarojini, Eduardo Blanchard-Wrigglesworth, Yevgeny Aksenov, Rod Downie and Emily Shuckburgh
Seasonal Arctic sea ice forecasting with probabilistic deep learning
https://www.nature.com/articles/s41467-021-25257-4

Daniel Obmann and Markus Haltmeier
Convergence analysis of equilibrium methods for inverse problems
https://arxiv.org/pdf/2306.01421.pdf

Zihao Zou, Jiaming Liu, Brendt Wohlberg, Ulugbek S. Kamilov
Deep Equilibrium Learning of Explicit Regularizers for Imaging Inverse Problems
https://arxiv.org/abs/2303.05386

Subhadip Mukherjee, Andreas Hauptmann, Ozan Öktem, Marcelo Pereyra, Carola-Bibiane Schönlieb
Learned reconstruction methods with convergence guarantees
https://arxiv.org/abs/2206.05431

Pulkit Gopalani, Anirbit Mukherjee
Global Convergence of SGD On Two Layer Neural Nets
https://arxiv.org/abs/2210.11452

Dieuwertje Alblas, Christoph Brune, Kak Khee Yeung and Jelmer M. Wolterink
Going Off-Grid: Continuous Implicit Neural Representations for 3D Vascular Modeling
https://arxiv.org/pdf/2207.14663.pdf

Taco S. Cohen, Max Welling
Steerable CNNs
https://arxiv.org/pdf/1612.08498.pdf

Alexis Goujon, Sebastian Neumayer, Pakshal Bohra, Stanislas Ducotterd and Michael Unser
A Neural-Network-Based Convex Regularizer for Image Reconstruction
https://arxiv.org/pdf/2211.12461.pdf

Junqi Tang, Subhadip Mukherjee and Carola-Bibiane Schönlieb
Accelerating Deep Unrolling Networks via Dimensionality Reduction
https://arxiv.org/pdf/2208.14784.pdf

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
Attention Is All You Need
https://arxiv.org/pdf/1706.03762.pdf

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
Understanding deep learning requires rethinking generalization
https://arxiv.org/pdf/1611.03530.pdf

Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole
Score-Based Generative Modeling through Stochastic Differential Equations
https://arxiv.org/pdf/2011.13456.pdf

Sam Greydanus, Misko Dzamba, Jason Yosinski
Hamiltonian Neural Networks
https://arxiv.org/pdf/1906.01563.pdf

Alexander Immer, Maciej Korzepa and Matthias Bauer
Improving predictions of Bayesian neural nets via local linearization
https://arxiv.org/abs/2008.08400

Babak Maboudi Afkham, Julianne Chung and Matthias Chung
Learning Regularization Parameters of Inverse Problems via Deep Neural Networks
https://arxiv.org/abs/2104.06594

Vishal Monga, Yuelong Li and Yonina C. Eldar
Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing
https://ieeexplore.ieee.org/document/9363511

Lu Lu, Xuhui Meng, Zhiping Mao and George E. Karniadakis
DeepXDE: A deep learning library for solving differential equations
https://arxiv.org/abs/1907.04502

Ehsan Kharazmi, Zhongqiang Zhang, George Em Karniadakis
hp-VPINNs: Variational Physics-Informed Neural Networks With Domain Decomposition
https://arxiv.org/abs/2003.05385

Zakaria Mhammedi
Risk-Monotonicity in Statistical Learning
https://arxiv.org/abs/2011.14126

Adityanarayanan Radhakrishnan, Mikhail Belkin and Caroline Uhler
Overparameterized neural networks implement associative memory
https://www.pnas.org/content/117/44/27162

Krishnapriyan et al.
Characterizing possible failure modes in physics-informed neural networks
https://arxiv.org/abs/2109.01050

Martin Genzel, Jan Macdonald and Maximilian März
Solving Inverse Problems with Deep Neural Networks – Robustness Included?
https://arxiv.org/pdf/2011.04268.pdf

Dyego Araújo, Roberto I. Oliveira and Daniel Yukimura
A mean-field limit for certain deep neural networks
https://arxiv.org/pdf/1906.00193.pdf

Avi Schwarzschild, Eitan Borgnia, Arjun Gupta, Furong Huang, Uzi Vishkin, Micah Goldblum and Tom Goldstein
Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks
https://arxiv.org/pdf/2106.04537.pdf
Get in Touch!

To subscribe to the mailing list, send an email request.