Topics covered

  • Inverse problems (WP3.3)
  • PINNs/discretisation with NNs (WP1.2/2.4, C2)
  • Optimal transport (WP1.4/2.2, C4)
  • Continuum interpretation of DNNs (WP1.1/2.1, C1)
  • Reconstructions/outputs with error bars (WP1.3, C3)
  • Guarantees/stability estimates (WP1.1/1.3/3.2, C1)
  • Learned physics correction/approximation (WP2.5/3.1/3.2, C3/5)
  • Data on manifolds, e.g. point clouds/PDE constraints (WP1.4/2.2, C2/4)
  • Saddle-point formulations for training (WP2.1/2.3/2.4, C4)
  • Multi-physics/multi-modality (WP3.3, C5)

Papers previously discussed

Daniel Obmann and Markus Haltmeier
Convergence analysis of equilibrium methods for inverse problems
https://arxiv.org/pdf/2306.01421.pdf

Zihao Zou, Jiaming Liu, Brendt Wohlberg, Ulugbek S. Kamilov
Deep Equilibrium Learning of Explicit Regularizers for Imaging Inverse Problems
https://arxiv.org/abs/2303.05386

Subhadip Mukherjee, Andreas Hauptmann, Ozan Öktem, Marcelo Pereyra, Carola-Bibiane Schönlieb
Learned reconstruction methods with convergence guarantees
https://arxiv.org/abs/2206.05431

Pulkit Gopalani, Anirbit Mukherjee
Global Convergence of SGD On Two Layer Neural Nets
https://arxiv.org/abs/2210.11452

Dieuwertje Alblas, Christoph Brune, Kak Khee Yeung, and Jelmer M. Wolterink
Going Off-Grid: Continuous Implicit Neural Representations for 3D Vascular Modeling
https://arxiv.org/pdf/2207.14663.pdf

Taco S. Cohen, Max Welling
Steerable CNNs
https://arxiv.org/pdf/1612.08498.pdf

Alexis Goujon, Sebastian Neumayer, Pakshal Bohra, Stanislas Ducotterd, and Michael Unser
A Neural-Network-Based Convex Regularizer for Image Reconstruction
https://arxiv.org/pdf/2211.12461.pdf

Junqi Tang, Subhadip Mukherjee, and Carola-Bibiane Schönlieb
Accelerating Deep Unrolling Networks via Dimensionality Reduction
https://arxiv.org/pdf/2208.14784.pdf

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
Attention Is All You Need
https://arxiv.org/pdf/1706.03762.pdf

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
Understanding deep learning requires rethinking generalization
https://arxiv.org/pdf/1611.03530.pdf

Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole
Score-Based Generative Modeling through Stochastic Differential Equations
https://arxiv.org/pdf/2011.13456.pdf

Sam Greydanus, Misko Dzamba, Jason Yosinski
Hamiltonian Neural Networks
https://arxiv.org/pdf/1906.01563.pdf

Alexander Immer, Maciej Korzepa, and Matthias Bauer
Improving predictions of Bayesian neural nets via local linearization
https://arxiv.org/abs/2008.08400

Babak Maboudi Afkham, Julianne Chung, and Matthias Chung
Learning Regularization Parameters of Inverse Problems via Deep Neural Networks
https://arxiv.org/abs/2104.06594

Vishal Monga, Yuelong Li, and Yonina C. Eldar
Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing
https://ieeexplore.ieee.org/document/9363511

Lu Lu, Xuhui Meng, Zhiping Mao, and George E. Karniadakis
DeepXDE: A deep learning library for solving differential equations
https://arxiv.org/abs/1907.04502

Ehsan Kharazmi, Zhongqiang Zhang, George Em Karniadakis
hp-VPINNs: Variational Physics-Informed Neural Networks With Domain Decomposition
https://arxiv.org/abs/2003.05385

Zakaria Mhammedi
Risk-Monotonicity in Statistical Learning
https://arxiv.org/abs/2011.14126

Adityanarayanan Radhakrishnan, Mikhail Belkin, and Caroline Uhler
Overparameterized neural networks implement associative memory
https://www.pnas.org/content/117/44/27162

Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael Mahoney
Characterizing possible failure modes in physics-informed neural networks
https://arxiv.org/abs/2109.01050

Martin Genzel, Jan Macdonald, and Maximilian März
Solving Inverse Problems with Deep Neural Networks – Robustness Included?
https://arxiv.org/pdf/2011.04268.pdf

Dyego Araújo, Roberto I. Oliveira, and Daniel Yukimura
A mean-field limit for certain deep neural networks
https://arxiv.org/pdf/1906.00193.pdf

Avi Schwarzschild, Eitan Borgnia, Arjun Gupta, Furong Huang, Uzi Vishkin, Micah Goldblum, and Tom Goldstein
Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks
https://arxiv.org/pdf/2106.04537.pdf
Get in Touch!

To subscribe to the mailing list, send an email request.