Imaging inverse problems and generative models: sparsity and robustness versus expressivity


 08 - 10 Apr 2024

ICMS, Bayes Centre, Edinburgh

Invitations will be sent directly to invited participants.

Scientific organisers

  • Julie Delon, Université Paris Cité
  • Audrey Repetti, Heriot-Watt University
  • Carola-Bibiane Schönlieb, University of Cambridge
  • Gabriele Steidl, TU Berlin

About:

 
In this workshop, we aim to gather researchers from the intersecting fields of inverse problems, uncertainty quantification, generative models, and related areas to discuss novel theory, new methods, and recent challenges. Bringing a focused, multifaceted research group together in one location is an excellent opportunity that may significantly advance this field and open new directions.

In recent years, the worlds of generative models and inverse problems in imaging have been converging on a number of common ideas and paradigms.

The underlying questions in the field of inverse problems can be stated, in simplified form, as follows: the aim is to estimate an unknown image x from noisy and incomplete data y, with applications ranging from healthcare and security to astronomy and beyond. The estimate x* of x can be defined as the minimiser of an objective function g, given by the sum of a data-fidelity term f and a regularisation/prior term r that promotes a given prior model of the original object to compensate for the ill-posedness of the problem. The choice of f depends on the noise model, while the reconstruction accuracy depends strongly on the prior model r. Traditionally, research on inverse problems has focused on explicit image priors, often promoting sparsity either directly in the image space or in a transformed space. From a Bayesian perspective, g can be seen as the negative logarithm of a posterior distribution combining the likelihood (data fidelity) and a prior model (regularisation), with x* corresponding to a Maximum A Posteriori (MAP) estimate. The resulting minimisation problem can be solved efficiently using iterative optimisation methods, and certain algorithms come with guarantees of convergence to a minimiser of the objective. Methods backed by strong theoretical results are crucial for accurate decision-making, so many applications rely on the robustness of the methods developed for data interpretation.
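To make this concrete, here is a minimal sketch (illustrative, not taken from the workshop materials) of MAP estimation with g = f + r, where f(x) = 0.5·||Ax − y||² is the data-fidelity term and r(x) = lam·||x||₁ is a sparsity-promoting prior, minimised by proximal gradient descent (ISTA). The operator A, the weight lam, and the toy data are assumptions chosen for demonstration.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1; shrinks small entries to zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative soft-thresholding: minimises 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the data-fidelity term f
        x = soft_threshold(x - step * grad, step * lam)  # prox of the prior term r
    return x

# Toy usage: recover a sparse signal from noisy, incomplete linear data.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))           # under-determined forward operator
x_true = np.zeros(100)
x_true[[3, 42, 77]] = [1.5, -2.0, 1.0]       # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_star = ista(A, y, lam=0.1)
```

Under standard assumptions (convex f with Lipschitz gradient, step size at most 1/L), this iteration provably converges to a minimiser of g, which is the kind of guarantee referred to above.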

Recently, an important trend has emerged towards data-driven image models, in particular models encoded by neural networks. These data-driven models can be represented by generative or discriminative networks, e.g. GANs, VAEs or normalizing flows. In the generative case, the goal of the network is to generate samples from a high-dimensional data distribution, learned for instance from a large database of examples. Once trained, these networks can be used in optimisation or sampling schemes for solving inverse problems. In the discriminative case, the network is trained to distinguish between desirable images (which should have high probability under the prior distribution) and undesirable images or noise (which should have low probability under the prior distribution), and can, after training, be used as a data-driven regulariser when solving inverse problems.
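As one illustration of using a trained network inside an optimisation scheme, here is a hedged plug-and-play (PnP) sketch: a denoising network stands in for the proximal operator of the prior inside a proximal gradient loop. The function names and the toy denoiser are assumptions for illustration; in practice the denoiser would be a network trained on example images.

```python
import numpy as np

def pnp_restore(A, y, denoiser, n_iter=100):
    """Proximal-gradient iteration with a learned denoiser as the prior step."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # safe step for the data-fidelity gradient
    x = A.T @ y                               # simple back-projection initialisation
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)              # gradient of the data-fidelity term
        x = denoiser(x - step * grad)         # learned "prox": pull towards plausible images
    return x

# Toy stand-in for a trained network: mild smoothing (illustration only).
def toy_denoiser(v, w=0.8):
    smooth = np.convolve(v, np.ones(3) / 3.0, mode="same")
    return w * smooth + (1.0 - w) * v
```

Convergence of such schemes is not automatic: it depends on properties of the denoiser (e.g. nonexpansiveness), which is precisely the robustness question raised below.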

All of these methods have shown remarkable versatility and efficiency in solving inverse problems, notably in a Bayesian framework. They open the way to restoration algorithms that exploit more powerful and accurate image models. Nevertheless, they also raise important mathematical challenges regarding the robustness of the delivered solutions, e.g. in terms of convergence and uncertainty quantification.
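On the Bayesian side, uncertainty can be quantified by sampling the posterior rather than computing only the MAP estimate. Below is a minimal sketch of the unadjusted Langevin algorithm (ULA), assuming a Gaussian likelihood with variance sigma2 and a hypothetical score_prior function returning the gradient of the log prior (e.g. supplied by a learned generative model); the step size, iteration counts and toy Gaussian score are illustrative assumptions.

```python
import numpy as np

def ula_samples(A, y, score_prior, sigma2=0.01, step=1e-4, n_iter=2000, burn=500):
    """Unadjusted Langevin algorithm: approximate samples from the posterior."""
    rng = np.random.default_rng(0)
    x = A.T @ y                               # initialise from a back-projection
    samples = []
    for k in range(n_iter):
        # grad log posterior = grad log likelihood + grad log prior
        grad_log_post = -A.T @ (A @ x - y) / sigma2 + score_prior(x)
        x = x + step * grad_log_post + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        if k >= burn:
            samples.append(x.copy())
    return np.stack(samples)

# Toy stand-in: score of a standard Gaussian prior, grad log p(x) = -x.
def gaussian_score(x):
    return -x
```

The spread of the samples (e.g. the per-pixel standard deviation) provides a simple uncertainty map, while the bias of ULA and the accuracy of the learned score are among the robustness questions discussed at the workshop.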

This workshop aims to discuss important questions arising in this context, including (but not restricted to): the interplay between expressivity and robustness of neural networks for solving inverse problems; learning from small datasets; and special network architectures, e.g. with diverse sparsity assumptions, that are robust with respect to invariances and of particular interest in applications, notably when tackling the curse of dimensionality.

 

Programme:

MONDAY 08 APRIL 2024
Registration and Refreshments
Welcome from ICMS
Agnès Desolneux (CNRS / ENS Paris-Saclay): Can Push-forward Generative Models Fit Multimodal Distributions?
Martin Zach (Graz University of Technology): Product of Gaussian Mixture Diffusion Models
Tea & Coffee
Matteo Santacesaria (MaLGa, University of Genoa): Continuous generative models for nonlinear inverse problems
Johannes Hertrich (TU Berlin): Generative Modeling via Maximum Mean Discrepancy Flows
Lunch
Raymond Chan (City University of Hong Kong): LocNet: Deep learning-based 3D point source localization with rotating point spread function
Andrew Curtis (University of Edinburgh): Bayesian Variational Full Waveform Inversion to Image the Earth's Subsurface
Tea & Coffee
Coloma Ballester (UPF): Contrastive self-supervised learning and some applications to visual data understanding
Yves Wiaux (Heriot-Watt University): R2D2: a deep neural network series for ultra-fast high-dynamic range imaging in radio astronomy (but not only)
Discussions and Collaborations
Workshop dinner
TUESDAY 09 APRIL 2024
Tea & Coffee
Jean-Christophe Pesquet (CentraleSupélec): ABBA Neural Networks
Leon Bungert (University of Würzburg): Sparsity and robustness in machine learning: insights from inverse problems
Tea & Coffee
Tatiana Bubba (University of Bath): Regularisation with optimal space-time priors
Martin Benning (UCL): Inverting neural networks in imaging and generative modelling
Lunch
Simon Arridge (UCL): Learned PDEs with time reversal and image invariants
Julian Tachella (CNRS, ENS Lyon): Learning to reconstruct signals from binary measurements
Tea, Coffee and Poster Session
Discussions and Collaborations
Public Lecture: Marcelo Pereyra (Heriot-Watt University), title TBC
Drinks and Nibbles
WEDNESDAY 10 APRIL 2024
Tea & Coffee
Nelly Pustelnik (CNRS, ENS Lyon): PNN: From proximal algorithms to robust unfolded image restoration
Ulugbek Kamilov (Washington University in Saint Louis): A Restoration Network as an Implicit Prior
Tea & Coffee
Sebastian Neumayer (EPFL): Sparsity-Inspired Regularization for Image Reconstruction
Thomas Moreau (INRIA Paris-Saclay): A Journey through Algorithm Unrolling for Inverse Problems
Lunch
Georgios Batzolis (University of Cambridge): Variational Auto-encoder with Diffusion Decoder
Aretha Teckentrup (University of Edinburgh): Some theory for deep Gaussian processes
Tea & Coffee
Kostas Zygalakis (University of Edinburgh): Score-based denoising diffusion models for photon-starved image restoration problems
Pierre Chainais (University of Lille): Plug-and-play split Gibbs sampler: embedding deep generative priors in Bayesian inference
End of the workshop (return security passes before 5pm)