Topics covered

  • Inverse problems (WP3.3)
  • PINNs/discretisation with NNs (WP1.2/2.4, C2)
  • Optimal transport (WP1.4/2.2, C4)
  • Continuum interpretation of DNNs (WP1.1/2.1, C1)
  • Reconstructions/outputs with error bars (WP1.3, C3)
  • Guarantees/stability estimates (WP1.1/1.3/3.2, C1)
  • Learned physics correction/approximation (WP2.5/3.1/3.2, C3/5)
  • Data on manifolds, e.g. point-clouds/PDE constraints (WP1.4/2.2, C2/4)
  • Saddle-point formulations for training (WP2.1/2.3/2.4, C4)
  • Multi-physics and multi-modality (WP3.3, C5)

Papers previously discussed

  • Daniel Obmann and Markus Haltmeier, "Convergence analysis of equilibrium methods for inverse problems"
  • Zihao Zou, Jiaming Liu, Brendt Wohlberg, and Ulugbek S. Kamilov, "Deep Equilibrium Learning of Explicit Regularizers for Imaging Inverse Problems"
  • Subhadip Mukherjee, Andreas Hauptmann, Ozan Öktem, Marcelo Pereyra, and Carola-Bibiane Schönlieb, "Learned reconstruction methods with convergence guarantees"
  • Pulkit Gopalani and Anirbit Mukherjee, "Global Convergence of SGD On Two Layer Neural Nets"
  • Dieuwertje Alblas, Christoph Brune, Kak Khee Yeung, and Jelmer M. Wolterink, "Going Off-Grid: Continuous Implicit Neural Representations for 3D Vascular Modeling"
  • Taco S. Cohen and Max Welling, "Steerable CNNs"
  • Alexis Goujon, Sebastian Neumayer, Pakshal Bohra, Stanislas Ducotterd, and Michael Unser, "A Neural-Network-Based Convex Regularizer for Image Reconstruction"
  • Junqi Tang, Subhadip Mukherjee, and Carola-Bibiane Schönlieb, "Accelerating Deep Unrolling Networks via Dimensionality Reduction"
  • Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin, "Attention Is All You Need"
  • Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals, "Understanding deep learning requires rethinking generalization"
  • Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole, "Score-Based Generative Modeling through Stochastic Differential Equations"
  • Sam Greydanus, Misko Dzamba, and Jason Yosinski, "Hamiltonian Neural Networks"
  • Alexander Immer, Maciej Korzepa, and Matthias Bauer, "Improving predictions of Bayesian neural nets via local linearization"
  • Babak Maboudi Afkham, Julianne Chung, and Matthias Chung, "Learning Regularization Parameters of Inverse Problems via Deep Neural Networks"
  • Vishal Monga, Yuelong Li, and Yonina C. Eldar, "Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing"
  • Lu Lu, Xuhui Meng, Zhiping Mao, and George E. Karniadakis, "DeepXDE: A deep learning library for solving differential equations"
  • Ehsan Kharazmi, Zhongqiang Zhang, and George Em Karniadakis, "hp-VPINNs: Variational Physics-Informed Neural Networks With Domain Decomposition"
  • Zakaria Mhammedi, "Risk-Monotonicity in Statistical Learning"
  • Adityanarayanan Radhakrishnan, Mikhail Belkin, and Caroline Uhler, "Overparameterized neural networks implement associative memory"
  • Krishnapriyan et al., "Characterizing possible failure modes in physics-informed neural networks"
  • Martin Genzel, Jan Macdonald, and Maximilian März, "Solving Inverse Problems with Deep Neural Networks – Robustness Included?"
  • Dyego Araújo, Roberto I. Oliveira, and Daniel Yukimura, "A mean-field limit for certain deep neural networks"
  • Avi Schwarzschild, Eitan Borgnia, Arjun Gupta, Furong Huang, Uzi Vishkin, Micah Goldblum, and Tom Goldstein, "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks"

Get in Touch!

To subscribe to the mailing list, please send an email request.