Neural networks provide an unprecedented new option for function approximation. In physical applications in particular, it is always necessary to represent the “real world” with a function, whether that is a grey-scale image (u: ℝ² → ℝ) or a complex fluid flow (u: ℝ³ → ℝ³). Most classical approaches have focused on explicit local discretisations (such as finite differences or finite elements), whereas neural networks implicitly encode non-local features in very complex ways.
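To make the idea concrete, here is a minimal sketch (in PyTorch; the width, depth, tanh activation, and random placeholder data are purely illustrative assumptions) of a neural implicit representation of a grey-scale image, i.e. a small network mapping pixel coordinates to intensities:

```python
import torch
import torch.nn as nn

# A minimal coordinate network u_theta: R^2 -> R representing a grey-scale image.
# Width, depth, and the tanh activation are illustrative, not recommendations.
model = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

# Fit intensities at sampled pixel coordinates (coords: N x 2, values: N x 1).
coords = torch.rand(1024, 2)   # placeholder pixel locations in [0, 1]^2
values = torch.rand(1024, 1)   # placeholder grey-scale intensities
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(1000):
    optimiser.zero_grad()
    loss = ((model(coords) - values) ** 2).mean()
    loss.backward()
    optimiser.step()
```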
There are many names for this new approach. In application to PDEs the most common name is Physics-Informed Neural Networks (PINNs), but in applications such as MRI and CT imaging they are more generally referred to as Neural Implicit Representations (NIRs).
There are many recorded successes for NIRs when compared to classical methods, especially in low-precision and high-dimensional regimes. However, there are also well-documented concerns about the unreliability of NIRs and their inability to provide high-precision approximations. There remains a large gap between analytical guarantees and the performance observed in real-world situations.
Since March 2022 we have held regular meetings bringing together a community of researchers in the field to discuss the problems, the solutions, the victories, and the future of NIRs.
If you are interested in joining this community, please feel free to email maths4dl@bath.ac.uk to join the mailing list.
At our first meeting, 16 researchers from across the three partner universities met to discuss the current properties and future possibilities of PINNs and NIRs in modern research. We discussed a range of aspects, from theoretical asymptotic performance to optimal neural network architecture.
One topic covered on the theoretical side was how best to use neural networks to represent PDEs or their solutions. One established method in this area is Neural ODEs. We felt that there was much promise in new methods which take advantage of classical PDE techniques, such as learning a Green’s function, or extending this to a learned Born series.
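As a loose illustration of the Green’s function idea (the network, quadrature grid, and domain [0, 1] below are assumptions made purely for the sketch, not a method presented at the meeting), one could represent the solution operator of a linear problem Lu = f as u(x) ≈ Σⱼ wⱼ G_θ(x, yⱼ) f(yⱼ) and learn the kernel G_θ from known (f, u) pairs:

```python
import torch
import torch.nn as nn

# Sketch: learn a kernel G_theta(x, y) so that
#   u(x) ≈ sum_j w_j * G_theta(x, y_j) * f(y_j)
# approximates the solution of a linear problem Lu = f on [0, 1].
G = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

y = torch.linspace(0.0, 1.0, 101).unsqueeze(1)   # quadrature nodes y_j, shape (101, 1)
w = torch.full_like(y, 1.0 / 100)                # trapezoidal weights: h inside, h/2 at the ends
w[0], w[-1] = w[0] / 2, w[-1] / 2

def apply_greens(f_vals, x):
    """Approximate u(x) = ∫ G_theta(x, y) f(y) dy by quadrature at the points x."""
    X = x.repeat_interleave(y.shape[0], dim=0)   # all (x_i, y_j) pairs
    Y = y.repeat(x.shape[0], 1)
    kernel = G(torch.cat([X, Y], dim=1)).view(x.shape[0], y.shape[0])
    return kernel @ (w * f_vals)                 # quadrature sum over y_j

# Training would minimise ||apply_greens(f(y), x) - u(x)||^2 over known (f, u) pairs.
```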
On the numerical side we were all interested in recent work showing the impact of choosing the correct activation function for your neural network. While most theoretical results depend only very weakly on the choice of activation function, numerical results with the sine activation (e.g. NeRF and SIREN), and later with a Gaussian activation, both show huge improvements over more standard choices such as tanh. Understanding this effect will be of great practical importance in helping NIRs become more reliable and performant in the future.
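As a minimal sketch of what this looks like in practice (the width and the frequency ω₀ = 30 follow common SIREN defaults but are assumptions for illustration, and the SIREN-specific weight initialisation is omitted), swapping tanh for a scaled sine activation changes only a few lines:

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """A SIREN-style layer: x -> sin(omega_0 * (W x + b))."""
    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# A standard tanh network versus a sine-activated one of the same shape.
tanh_net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                         nn.Linear(64, 64), nn.Tanh(),
                         nn.Linear(64, 1))
sine_net = nn.Sequential(SineLayer(2, 64),
                         SineLayer(64, 64),
                         nn.Linear(64, 1))
```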
An ongoing discussion which is particularly relevant to PINNs for solving PDEs is where to place collocation points. This is analogous to the question of how to perform adaptive meshing for finite elements, where it is well known that a customised discretisation of the domain can lead to huge improvements in the final solution quality. For PINNs, there are many interesting new avenues to explore. For example, the placement of collocation points affects the non-convexity of the optimisation problem, and neural networks with ReLU activation functions naturally form continuous piecewise linear functions on an implicit mesh.
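To make the role of collocation points concrete, here is a minimal PINN sketch for the 1D Poisson problem u″(x) = f(x) on [0, 1] with u(0) = u(1) = 0 (the problem, network size, and uniform collocation grid are illustrative assumptions). The PDE residual is only enforced at the chosen points, which is exactly why their placement matters:

```python
import torch
import torch.nn as nn

# Minimal PINN for u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0.
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
f = lambda x: -(torch.pi ** 2) * torch.sin(torch.pi * x)   # exact solution is sin(pi x)

def pinn_loss(collocation_points):
    x = collocation_points.clone().requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = ((d2u - f(x)) ** 2).mean()                  # PDE enforced only at the chosen x
    boundary = (net(torch.tensor([[0.0], [1.0]])) ** 2).mean()
    return residual + boundary

# Where these points sit controls where the PDE is actually enforced.
x_colloc = torch.linspace(0.0, 1.0, 100).unsqueeze(1)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = pinn_loss(x_colloc)
    loss.backward()
    opt.step()
```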
Finally, it was agreed that the community needs a better collection of benchmark problems to understand the performance properties of PINNs and NIRs. This set should include a range of problems, not exclusively PDEs, and should highlight areas where either NIRs or classical methods are expected to struggle. For example, neural networks typically struggle to fit data with features at multiple scales, and finite elements are known to struggle with non-smooth problems such as the shallow water equations. Maths4DL has made it one of its aims to implement such a benchmark set.
A few new faces joined us for our second meeting, and everyone gave an update (or introduction) on their work since the last meeting. The meeting was really thought-provoking, and it was motivating to be part of such an engaged discussion.
Work is still ongoing on the relationship between a reconstructed PINN and its implicit mesh. Current attention is on how optimal transport can provide a mechanism for adapting the set of collocation points on the fly. This work also raised the interesting question of whether or not the collocation points coincide with the nodes of the implicit mesh formed by a ReLU neural network.
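As a small one-dimensional illustration of the implicit mesh (a single hidden layer is assumed purely to keep the algebra explicit), the kinks of a ReLU network sit where individual hidden units switch on, at x = -bᵢ/wᵢ, so the mesh nodes can be read off directly and compared against any chosen set of collocation points:

```python
import torch
import torch.nn as nn

# A one-hidden-layer ReLU network u(x) = c^T relu(w x + b) + d is piecewise linear in x.
# Each hidden unit i contributes a breakpoint ("mesh node") where w_i * x + b_i = 0.
net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

with torch.no_grad():
    w = net[0].weight.squeeze(1)   # shape (16,)
    b = net[0].bias                # shape (16,)
    active = w.abs() > 1e-8        # units with w_i = 0 never switch and add no breakpoint
    breakpoints = (-b[active] / w[active]).sort().values

print(breakpoints)  # nodes of the implicit mesh; compare with the collocation points
```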
More generally, a lot of work is going into different neural network architectures. These include graph neural networks for multi-scale solutions, alternative activation functions to avoid spectral bias, and adaptations of PINNs to solve PDEs on manifolds.