Geometric Deep Learning workshop, University of Cambridge 10-12 June 2024


In recent years, geometric ideas have entered deep learning models in various guises. These include graph neural networks, deep graphical models and structure-preserving deep learning. Such models can represent more general data types beyond Euclidean space, and can help characterise structural properties of the solution, such as equivariance under certain group actions. Applications range from diffusion tensor imaging and the processing of protein structures in molecular biology all the way to weather forecasting.

This meeting brought together international experts in geometric deep learning to present this exciting field and its recent advances. Alongside the invited talks, we also encouraged engagement from early career researchers through poster presentations.

The meeting took place in the West Hub building at the University of Cambridge.


Workshop organisers:


Angelica I Aviles-Rivero, University of Cambridge

Moshe Eliasof, University of Cambridge

Helena Lake, University of Bath

Chaoyu Liu, University of Cambridge

Carola-Bibiane Schönlieb, University of Cambridge



Registration is now closed.



Posters and lightning talks

If you would like to bring a poster and give a short lightning talk (Monday 10 June 16.15), please complete the online form. If you require confirmation of acceptance of your poster before booking using the online store, please contact us at

The deadline for poster submissions is 17 May 2024.

The posters and lightning talks that were exhibited and presented at the workshop will be uploaded and linked below: the slides are linked from the presenter's name, the poster PDF from the poster title.

Name | Institution | Poster title
Friso de Kruiff | TU Delft and KTH Stockholm | Learning Riemannian Metric Preserving Diffeomorphisms in Protein Dynamics
Willem Diepeveen | University of Cambridge | Learning symmetric Riemannian geometry for data analysis
Ines Garcia-Redondo | Imperial College London | On the Limitations of Fractal Dimension as a Measure of Generalization
Dobrik Georgiev | University of Cambridge |
Mohammad Golbabaee | University of Bristol | MRI2Qmap: Quantitative MRI reconstruction via plug-and-play deep image denoising models pretrained on large weighted-MRI datasets
Jan Kociniak | University of Amsterdam | Unsupervised learning of Riemannian geometry via geodesics (a novel method for manifold learning based on a separation framework)
Paul Lezeau | LSGNT CDT (Imperial College, UCL, KCL) | Tropical Expressivity of Neural Networks
Perla Mayo | University of Bristol | Enhancing Magnetic Resonance Fingerprinting with Deep Image Priors (the technique is optimised in k-space and constrained by the Bloch equations)
Matt Price | University College London | Scalable and equivariant spherical CNNs by discrete-continuous (DISCO) convolutions
Victor Sechaud | CNRS, ENS de Lyon | Equivariance-based self-supervised learning for audio signal recovery from clipped measurements
Shubhr Singh | Queen Mary University of London | LHGNN: Local-Higher Order GNNs for Audio Classification and Tagging
Sara Veneziale | Imperial College London | Machine learning detects terminal singularities (applying ML to algebraic-geometry objects with symmetry to detect an important property)
Qiquan Wang | Imperial College London | A Topological Gaussian Mixture Model for Bone Marrow Morphology in Leukaemia
Andrew Wang | University of Edinburgh | Perspective-Equivariant Imaging: an Unsupervised Framework for Multispectral Pansharpening
Paolo Zuzolo | Università di Bologna | G-GNN-based spectral shape descriptors

Poster prizes


The winners of the top three posters were Sara Veneziale, Perla Mayo and Friso de Kruiff. All three won a book, kindly donated by World Scientific Publishing. Congratulations to you all!




Time | Monday 10 June | Tuesday 11 June | Wednesday 12 June
9.30 – 9.55 | Arrival and registration | Arrival and registration | Arrival and registration
9.55 – 10.00 | Welcome and introduction – Prof Carola-Bibiane Schönlieb, University of Cambridge | |
10.00 – 10.45 | Prof Michael Bronstein, University of Oxford | Prof Dr Sina Ober-Blöbaum, Paderborn University | Dr Haggai Maron, NVIDIA and Technion Israel Institute of Technology
10.45 – 11.15 | Prof David Saad, Aston University | Prof Eldad Haber, University of British Columbia | Prof Bin Dong, Beijing International Center for Mathematical Research
11.15 – 11.45 | Coffee | Coffee | Coffee
11.45 – 12.15 | Prof Davide Bacciu, Università di Pisa and Aptus.AI | Dr Johannes Müller, RWTH Aachen University | Dr Andrew Dudzik, Google DeepMind
12.15 – 12.45 | Dr Erik Bekkers, University of Amsterdam | Dr Matt Thorpe, University of Warwick | tbc
12.45 – 14.00 | Lunch | Lunch | Lunch and finish
14.00 – 14.45 | tbc | Prof Dejan Slepčev, Carnegie Mellon University |
14.45 – 15.15 | Dr Remco Duits, Eindhoven University of Technology | Prof Mike Davies, The University of Edinburgh |
15.15 – 15.45 | Dr Zorah Lähner, University of Bonn | Dr Emma Robinson, King's College London |
15.45 – 16.15 | Coffee | Dr Bruno Ribeiro, Purdue University |
16.15 – 16.45 | Lightning talks for poster holders | Coffee and networking |
16.45 – 18.00 | Poster session and drinks reception | |
18.30 – late | | Conference dinner at Jesus College, Cambridge |


Invited speakers

Dr Erik Bekkers

University of Amsterdam

Fast, Expressive SE(n) Equivariant Networks through Weight-Sharing in Homogeneous Spaces

Based on the theory of homogeneous spaces, we derive geometrically optimal edge attributes to be used within the flexible message-passing framework. We formalize the notion of weight sharing in convolutional networks as the sharing of message functions over point pairs that should be treated equally. We define equivalence classes of point pairs that are identical up to a transformation in the group and derive attributes that uniquely identify these classes. Weight sharing is then obtained by conditioning message functions on these attributes. As an application of the theory, we develop an efficient equivariant group convolutional network for processing 3D point clouds. The theory of homogeneous spaces tells us how to do group convolutions with feature maps over the homogeneous space of positions $\mathbb{R}^n$, position-orientation space $\mathbb{R}^n \times S^{n-1}$, and the group $SE(n)$ itself. Among these, $\mathbb{R}^n \times S^{n-1}$ is an optimal choice: it can represent directional information, which $\mathbb{R}^n$ methods cannot, while being significantly more computationally efficient than indexing features on the full $SE(n)$ group. We support this claim with state-of-the-art results, in both accuracy and speed, on five different benchmarks in 2D and 3D, including interatomic potential energy prediction, trajectory forecasting in N-body systems, and generating molecules via equivariant diffusion models.
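The equivalence-class idea can be made concrete for the $\mathbb{R}^3 \times S^2$ case. A minimal NumPy sketch (the helper `pair_attributes` is illustrative, not code from the talk): a point pair $((x_i, n_i), (x_j, n_j))$ is reduced to three scalars, the displacement along $n_i$, the displacement perpendicular to $n_i$, and the relative orientation $n_i \cdot n_j$, and these attributes are unchanged by any global rotation and translation, so message functions conditioned on them are shared across all equivalent pairs.

```python
import numpy as np

def pair_attributes(x_i, n_i, x_j, n_j):
    """Invariant attributes of a point pair in R^3 x S^2 under SE(3).

    Hypothetical helper for illustration: the relative position x_j - x_i
    is decomposed in the local frame defined by n_i, yielding scalars that
    any global rotation + translation leaves unchanged.
    """
    d = x_j - x_i
    parallel = np.dot(n_i, d)                        # displacement along n_i
    orthogonal = np.linalg.norm(d - parallel * n_i)  # displacement perpendicular to n_i
    alignment = np.dot(n_i, n_j)                     # relative orientation
    return np.array([parallel, orthogonal, alignment])

# Check invariance under a random rotation R and translation t.
rng = np.random.default_rng(0)
x_i, x_j, t = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
n_i = rng.normal(size=3); n_i /= np.linalg.norm(n_i)
n_j = rng.normal(size=3); n_j /= np.linalg.norm(n_j)
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
if np.linalg.det(R) < 0:                      # ensure a proper rotation (det = +1)
    R[:, 0] *= -1

a = pair_attributes(x_i, n_i, x_j, n_j)
b = pair_attributes(R @ x_i + t, R @ n_i, R @ x_j + t, R @ n_j)
print(np.allclose(a, b))  # True
```

In an equivariant message-passing layer, these three scalars would replace raw coordinates as the conditioning input of the message function, which is what makes the resulting network equivariant by construction.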

Prof David Saad

Aston University

The Space of Boolean Functions Computed by Deep-learning Networks

Recent engineering achievements of deep-learning machines have both impressed and intrigued the scientific community due to our limited theoretical understanding of the underlying reasons for their success. I will briefly review some of the challenges to be addressed and then focus on properties of the function space of different types of deep-learning machines, based on generating functional analysis. This approach facilitates studying the number of solution networks of a given error around a reference multi-layer network. Exploring the function landscape of densely-connected networks, we uncover a general layer-by-layer learning behaviour, while the study of sparsely-connected networks indicates the advantage of having more layers for increasing generalisation ability in such models. This framework accommodates other network architectures and computing elements, including networks with correlated weights, convolutional networks and discretised variables. A similar approach also facilitates studying the distribution of Boolean functions computed by recurrent and layer-dependent architectures, which we find to be the same. Depending on the initial conditions and computing elements used, we characterise the space of functions computed in the large-depth limit and show that the macroscopic entropy of Boolean functions is either monotonically increasing or decreasing with growing depth, depending on the activation functions or Boolean gates used.

  1. B. Li and D. Saad, Phys. Rev. Lett. 120, 248301 (2018).
  2. B. Li and D. Saad, J. Phys. A 53, 104002 (2020).
  3. A. Mozeika, B. Li and D. Saad, Phys. Rev. Lett. 125, 168301 (2020).

Delegate information


The conference dinner will take place on Tuesday 11 June. The venue is Elena Hall, Jesus College, Cambridge.  You need to sign up and pay to attend the dinner when you register. You will be contacted nearer the time regarding your menu choices.


The West Hub is part of the University of Cambridge and is located on JJ Thomson Avenue, to the west of the city centre.

It is straightforward to reach by public transport, by bike or on foot. From the city centre the West Hub is a 33-minute walk via the Coton footpath, a 10-minute cycle, or a short ride on bus routes 4, 8, U and X3 or the Park and Ride.

The conference venue, relative to the city centre is indicated on this map.
Further information about the West Hub can be found here.


Cambridge has a wide variety of accommodation to suit all tastes and budgets. There are lots of options in and around the centre with good bus links to the West Hub, plus some closer to the workshop venue.

Below are a few hotel suggestions:

Premier Inn Cambridge North (Girton) 

Arundel House Hotel

Ibis central Cambridge

Travelodge Central

Hyatt Centric Cambridge

Please note:

Accommodation is not included in the registration fee; delegates are required to book their own accommodation. We encourage delegates to book as early as possible.


Photos of the workshop

Get in Touch!

To subscribe to the mailing list, send an email request.