Dynamics, Data and Deep Learning Workshop 25-26 March 2024

Introduction

The Dynamics, Data and Deep Learning Workshop will bring together academic experts and industrial practitioners to think about new ways to discover, identify and augment mathematical models of dynamic processes using data in a rigorous and explainable fashion, and will focus on recent advances at the interface of deep learning and dynamical systems. The topics covered will include mathematical concepts such as neural differential equations, Koopman and transfer operator spectral theory, rough path methodologies, (variational) autoencoders, invariant foliations and dynamic mode decomposition, supplemented by various function approximators such as neural networks, compressed tensors and compressed sensing. Parameter identification methods, such as online and/or stochastic optimisation and sparse regression techniques, could also be discussed to improve model accuracy. The workshop will also encourage discussion of application-specific issues and the tricks of the trade involved in various computational implementations.

Neural differential equations (NDEs) have emerged as one of the central modelling frameworks in machine learning. In scientific applications, NDEs have shown great promise due to their ability to harness both the powerful approximation capabilities of neural networks and the continuous-time modelling of differential equations. This workshop aims to facilitate discussions between NDE researchers and leading experts at the interface of scientific modelling and data-driven machine learning.
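
To make the idea concrete, here is a minimal sketch of a neural ODE in Python; the use of PyTorch, the small MLP vector field and the fixed-step Euler solver are all illustrative assumptions (in practice one would usually reach for an adaptive solver and adjoint-based gradients via a dedicated library).

    # Minimal neural ODE sketch: dx/dt = f_theta(x), integrated with explicit Euler.
    # Library choice, architecture and hyperparameters are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ODEFunc(nn.Module):
        """Small MLP parameterising the vector field f_theta(x)."""
        def __init__(self, dim: int, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim)
            )

        def forward(self, x):
            return self.net(x)

    def euler_integrate(f, x0, t0=0.0, t1=1.0, steps=100):
        """Fixed-step explicit Euler; gradients flow back through the solver steps."""
        x, dt = x0, (t1 - t0) / steps
        for _ in range(steps):
            x = x + dt * f(x)
        return x

    if __name__ == "__main__":
        f = ODEFunc(dim=2)
        x0 = torch.randn(32, 2)           # batch of initial conditions
        x_target = torch.randn(32, 2)     # toy regression targets
        opt = torch.optim.Adam(f.parameters(), lr=1e-3)
        for step in range(200):
            opt.zero_grad()
            loss = ((euler_integrate(f, x0) - x_target) ** 2).mean()
            loss.backward()
            opt.step()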

Koopman and transfer operator spectral theory represents a finite-dimensional nonlinear dynamical system using an infinite-dimensional linear operator. This representation potentially enables easier prediction, estimation and control of nonlinear systems. There have been numerous theoretical and algorithmic developments over the past decade, with many real-world applications, but significant challenges remain. A goal of the workshop is to discuss ideas (and cross-fertilise communities) to drive future progress in this field.
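
As an illustration of the operator viewpoint (a toy example, not taken from any of the talks), the sketch below applies extended dynamic mode decomposition (EDMD): snapshots of a nonlinear map are lifted by a dictionary of observables, and a finite matrix K is fitted by least squares so that Psi(x_{k+1}) ≈ K Psi(x_k); the eigenvalues of K then approximate part of the Koopman spectrum. The polynomial dictionary and logistic-map test system are assumptions chosen for brevity.

    # EDMD sketch: approximate the Koopman operator of a nonlinear map on a
    # polynomial dictionary of observables. Test system and dictionary are
    # illustrative choices.
    import numpy as np

    def dictionary(x):
        """Lift scalar states to the observables [1, x, x^2, x^3]."""
        return np.stack([np.ones_like(x), x, x**2, x**3], axis=0)

    # Snapshot data from a simple nonlinear map x_{k+1} = mu * x_k * (1 - x_k).
    rng = np.random.default_rng(0)
    mu = 2.5
    x = rng.uniform(0.0, 1.0, size=2000)
    y = mu * x * (1.0 - x)

    Psi_x, Psi_y = dictionary(x), dictionary(y)        # shape (n_obs, n_samples)
    # Least-squares Koopman matrix K with Psi_y ≈ K @ Psi_x.
    K = Psi_y @ np.linalg.pinv(Psi_x)

    eigvals = np.linalg.eigvals(K)
    print("Approximate Koopman eigenvalues:", np.sort_complex(eigvals))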

Rough path theory provides mathematical and computational tools for modelling the influence of continuous-time signals on dynamical systems. In recent years, it has started to play a key role in the design of state-of-the-art machine learning algorithms for processing noisy high-dimensional data streams in a wide range of contexts including finance, data assimilation, cybersecurity and medicine. Whilst rough paths have some known interactions with NDEs, the workshop intends to bring together researchers and broaden these connections – “sowing the seeds” for future interdisciplinary research.
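
For readers new to the area, the short sketch below (illustrative only) computes the first two levels of the signature of a piecewise-linear path directly from its increments: level one is the total increment, and level two collects the second iterated integrals, whose antisymmetric part is the Lévy area. Dedicated packages such as esig or iisignature compute higher levels far more efficiently; this is just the bare recursion.

    # Sketch: levels 1 and 2 of the path signature of a piecewise-linear path,
    # computed from increments (Chen's relation applied segment by segment).
    import numpy as np

    def signature_level_1_2(path):
        """path: array of shape (n_points, d). Returns (S1, S2) where
        S1[i]   = total increment in coordinate i
        S2[i,j] = second iterated integral over i then j."""
        increments = np.diff(path, axis=0)          # shape (n_steps, d)
        d = path.shape[1]
        S1 = np.zeros(d)
        S2 = np.zeros((d, d))
        for dx in increments:
            # Chen: new level-2 term = running S1 (x) dx + within-segment term dx (x) dx / 2
            S2 += np.outer(S1, dx) + 0.5 * np.outer(dx, dx)
            S1 += dx
        return S1, S2

    if __name__ == "__main__":
        t = np.linspace(0.0, 1.0, 200)
        path = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)
        S1, S2 = signature_level_1_2(path)
        levy_area = 0.5 * (S2[0, 1] - S2[1, 0])     # approx. signed area enclosed
        print("Level 1:", S1)
        print("Levy area:", levy_area)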

The workshop is being organised by:

  • Maths4DL team members (Prof Chris Budd, Dr Kweku Abraham, Dr James Foster and Helena Lake)
  • Dr Robert Szalai (University of Bristol)
  • Prof Mark Sandler (Queen Mary University of London)
  • Dr Matt Colbrook (University of Cambridge)


Programme

Provisional schedule:


Monday 25 March 2024


Time            Talk / Activity
10.00 – 10.30   Arrival, tea & coffee and registration
10.30 – 10.55   The Mathematics of Complex Streamed Data – Professor Terry Lyons
10.55 – 11.20   A high-order numerical method for computing signature kernels – Dr Maud Lemercier
11.20 – 11.45   Scaling limits of random recurrent-residual neural networks – Dr Cris Salvi
11.45 – 12.15   Panel discussion
12.15 – 13.15   Lunch
13.15 – 13.40   Symbolic Regression via Neural Networks – Professor Jeff Moehlis
13.40 – 14.05   Supervised machine learning with tensor network kernel machines – Professor Kim Batselier
14.05 – 14.30   Nonlinear dynamics of recurrent neural network function and malfunction – Professor Peter Ashwin
14.30 – 15.00   Panel discussion
15.00 – 15.20   Coffee
15.20 – 15.45   Dynamic Models from Data – Professor Nathan Kutz
15.45 – 16.10   Dr Alex Lobbe
16.10 – 16.40   Panel discussion
16.45 – 18.00   Poster reception
19.00           Dinner at Côte Brasserie, Clifton
Tuesday 26 March 2024


Time            Talk / Activity
9.00 – 9.30     Arrival, tea & coffee
9.30 – 9.55     Dynamic mode decomposition for analytic interval maps – Dr Oscar Bandtlow
9.55 – 10.20    EDMD for expanding circle maps: spectral approximation results – Dr Julia Slipantschuk
10.20 – 10.45   Operator learning without the adjoint – Dr Nicolas Boullé
10.45 – 11.30   Analyzing Climate Scenarios Using Dynamic Mode Decomposition With Control – Professor Gustau Camps-Valls
11.30 – 12.00   Panel discussion
12.00 – 13.15   Lunch
13.15 – 13.40   Professor Tim Dodwell
13.40 – 14.05   Learning methodologies for music and audio data – Dr Emmanouil Benetos
14.05 – 14.30   Rigged DMD: Data-Driven Koopman Decompositions via Generalized Eigenfunctions – Dr Catherine Drysdale
14.30 – 14.55   Dr Gonçalo dos Reis
14.55 – 15.25   Panel discussion
15.25 – 16.00   Coffee and finish

Speakers

Dr Catherine Drysdale

University of Birmingham

Rigged DMD: Data-Driven Koopman Decompositions via Generalized Eigenfunctions

Koopman operators globally linearize nonlinear dynamical systems, and their spectral information serves as a powerful tool for analysing and decomposing nonlinear dynamics. However, Koopman operators are inherently infinite-dimensional, posing a significant challenge in computing their spectral information, especially in the presence of continuous spectra. We can often access this spectral information by considering a rigged Hilbert space structure that allows us to diagonalise the operator. In the rigged Hilbert space setting, there is a smaller topological space carrying structure adapted to the operator, together with its dual, in which the generalized eigenvectors live. We have developed an algorithm capable of computing these generalized eigenfunctions for Koopman operators associated with general measure-preserving systems. This algorithm leverages the resolvent to compute smoothed approximations of generalized eigenfunctions. We prove explicit high-order convergence theorems for our algorithm, termed RiggedDMD. These generalized eigenfunctions enable a rigorous approach to Koopman mode decomposition. We demonstrate the algorithm on several examples, including systems with Lebesgue spectrum, integrable systems, the Lorenz system and a turbulent boundary layer flow with Reynolds number $6\times 10^{4}$ and state-space dimension $>10^5$.
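
RiggedDMD itself relies on resolvent-based smoothing that does not fit in a short snippet, but since it builds on DMD-type decompositions, the following minimal sketch of standard exact DMD (a generic baseline, not the speaker's algorithm) may help fix ideas: snapshot pairs are projected onto a truncated SVD basis, and the eigenvalues and modes of the reduced linear operator are extracted. The toy data set and truncation rank are illustrative assumptions.

    # Standard exact DMD sketch (not RiggedDMD): fit a linear operator A with
    # Y ≈ A X from snapshot pairs, via a rank-r truncated SVD.
    import numpy as np

    def exact_dmd(X, Y, r):
        """X, Y: (n_states, n_snapshots), with Y the one-step-advanced snapshots."""
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vh[:r].conj().T
        A_tilde = Ur.conj().T @ Y @ Vr @ np.linalg.inv(Sr)   # reduced operator
        eigvals, W = np.linalg.eig(A_tilde)
        modes = Y @ Vr @ np.linalg.inv(Sr) @ W               # exact DMD modes
        return eigvals, modes

    if __name__ == "__main__":
        # Toy data: two decaying oscillations observed through a 10-dimensional state.
        rng = np.random.default_rng(1)
        t = np.linspace(0, 10, 201)
        latent = np.stack([np.exp(-0.10 * t) * np.sin(2 * t),
                           np.exp(-0.05 * t) * np.cos(t)])
        C = rng.standard_normal((10, 2))
        data = C @ latent
        eigvals, modes = exact_dmd(data[:, :-1], data[:, 1:], r=2)
        print("DMD eigenvalues:", eigvals)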

Professor Nathan Kutz

University of Washington

Dynamic Models from Data

A major challenge in the study of dynamical systems is that of model discovery: turning data into reduced order models that are not just predictive, but provide insight into the nature of the underlying dynamical system that generated the data. We introduce a number of data-driven strategies for discovering nonlinear multiscale dynamical systems and their embeddings from data. We consider two canonical cases: (i) systems for which we have full measurements of the governing variables, and (ii) systems for which we have incomplete measurements. For systems with full state measurements, we show that the recent sparse identification of nonlinear dynamical systems (SINDy) method can discover governing equations with relatively little data and introduce a sampling method that allows SINDy to scale efficiently to problems with multiple time scales, noise and parametric dependencies. For systems with incomplete observations, we show that the Hankel alternative view of Koopman (HAVOK) method, based on time-delay embedding coordinates and the dynamic mode decomposition, can be used to obtain linear models and Koopman-invariant measurement systems that nearly perfectly capture the dynamics of nonlinear quasiperiodic systems. Neural networks are used in targeted ways to aid in the model reduction process. Together, these approaches provide a suite of mathematical strategies for reducing the data required to discover and model nonlinear multiscale systems.
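
A minimal illustration of the SINDy idea follows (a sketch of sequentially thresholded least squares, not the published implementation): build a library of candidate terms, estimate derivatives from data, and repeatedly threshold small regression coefficients to obtain a sparse model. The library, threshold and test system below are illustrative assumptions.

    # SINDy-style sketch: sparse regression of dx/dt onto a polynomial library
    # via sequentially thresholded least squares. All choices are illustrative.
    import numpy as np

    def library(X):
        """Candidate terms [1, x, y, x^2, x*y, y^2] for a 2-state system."""
        x, y = X[:, 0], X[:, 1]
        return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

    def stls(Theta, dXdt, threshold=0.05, iters=10):
        """Sequentially thresholded least squares on Theta @ Xi ≈ dXdt."""
        Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
        for _ in range(iters):
            small = np.abs(Xi) < threshold
            Xi[small] = 0.0
            for k in range(dXdt.shape[1]):               # refit the surviving terms
                big = ~small[:, k]
                if big.any():
                    Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k],
                                                 rcond=None)[0]
        return Xi

    if __name__ == "__main__":
        # Simulate a damped linear oscillator and recover its equations from data.
        dt, n = 0.01, 5000
        X = np.zeros((n, 2))
        X[0] = [2.0, 0.0]
        A = np.array([[-0.1, 2.0], [-2.0, -0.1]])
        for k in range(n - 1):                            # simple Euler simulation
            X[k + 1] = X[k] + dt * (A @ X[k])
        dXdt = np.gradient(X, dt, axis=0)                 # finite-difference derivatives
        Xi = stls(library(X), dXdt)
        print("Sparse coefficients (rows: 1, x, y, x^2, xy, y^2):\n", Xi)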

Professor Terry Lyons

University of Oxford

The Mathematics of Complex Streamed Data

Complex streams of evolving data are better understood by their effects on nonlinear systems than by their values at individual times. The question of which nonlinear systems to use would seem to be context dependent, but it is not. Core to rough path theory is a simple universal nonlinear system that captures all the information needed to predict any response to any nonlinear system. This idealised mathematical feature set is known as the signature of the stream. Its abstract simplicity opens the possibility of understanding and working with streams in the same context-free way that calculators work with numbers. Signature-based techniques offer simple-to-apply, universal numerical methods that are robust to irregular data and efficient at representing the order of events and complex oscillatory data. Specific software can be developed and then applied across many contexts. Signatures underpin prize-winning contributions in recognizing Chinese handwriting, in detecting sepsis, in generating financial data and, most recently, in the ability to score streams as outliers against a corpus of normal streams. This principled outlier technology has emerged as a powerful unifying technique; it identifies radio frequency interference in astronomical data and brain injury from MEG data. The underpinning theoretical contributions span a range from abstract algebra and non-commutative analysis to the organisation of efficient numerical calculation. See www.datasig.ac.uk/. New hyperbolic partial differential equations have been developed that compute the “signature kernel” trick without ever having to introduce signatures. Neural controlled differential equations can directly harness approaches such as the log-ODE method and consume the control as a rough path. The current step is the rough transformer; for this, RoughPy needs to run on the GPU.
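
The “signature kernel” trick mentioned above can be illustrated with a short, assumption-laden toy (not the speaker's code): for piecewise-linear paths x and y, the signature kernel k(s, t) solves the Goursat-type PDE ∂²k/∂s∂t = ⟨ẋ_s, ẏ_t⟩ k with boundary values k(0, ·) = k(·, 0) = 1, and a first-order finite-difference scheme on the data grid already gives a usable approximation; higher-order schemes (the subject of one of the talks) and dedicated packages refine this considerably.

    # Toy signature-kernel sketch: solve the Goursat PDE d^2k/(ds dt) = <dx, dy> k
    # with a first-order finite-difference scheme over the increment grid.
    import numpy as np

    def signature_kernel(x, y):
        """x: (m, d) and y: (n, d) piecewise-linear paths; returns k at the endpoints."""
        dx, dy = np.diff(x, axis=0), np.diff(y, axis=0)
        inner = dx @ dy.T                                  # <dx_i, dy_j> on the grid
        m, n = inner.shape
        k = np.ones((m + 1, n + 1))                        # boundary condition k = 1
        for i in range(m):
            for j in range(n):
                k[i + 1, j + 1] = (k[i + 1, j] + k[i, j + 1]
                                   - k[i, j] + inner[i, j] * k[i, j])
        return k[m, n]

    if __name__ == "__main__":
        t = np.linspace(0.0, 1.0, 100)
        x = np.stack([t, np.sin(2 * np.pi * t)], axis=1)
        y = np.stack([t, np.cos(2 * np.pi * t)], axis=1)
        print("k(x, x):", signature_kernel(x, x))
        print("k(x, y):", signature_kernel(x, y))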

Registration

Registration is now closed.


Information for delegates

Dinner

The workshop dinner will take place from 7pm on Monday 25 March at Côte Brasserie, Clifton, Bristol. If you selected ‘Yes’ or ‘Maybe’ for the dinner when you registered, you will be contacted nearer the time about your menu choices.

Travel

The workshop is taking place at Engineers’ House in Bristol.

Engineers’ House
The Promenade
Clifton Down
Avon
Bristol
BS8 3NB

For information on how to get there, please visit their website.

Accommodation

Bristol has a wide variety of accommodation to suit all tastes and budgets. There are plenty of options in and around Clifton, where the workshop is being held.

Below are a few hotels located nearby:

The Berkeley Square Hotel

The Clifton Hotel

The Rodney Hotel

The Washington Guesthouse

These hotels are located more centrally:

Premier Inn

Holiday Inn

Travelodge

Delegates are required to book their own accommodation, and we encourage booking as early as possible.

Get in Touch!

To subscribe to the mailing list, please send an email request.