Imaging inverse problems and generative models: sparsity and robustness versus expressivity

08 - 10 Apr 2024

ICMS, Bayes Centre, Edinburgh

Information on participation to follow.

Scientific organisers

  • Julie Delon, Université Paris Cité
  • Audrey Repetti, Heriot-Watt University
  • Carola-Bibiane Schönlieb, University of Cambridge
  • Gabriele Steidl, TU Berlin

About:

In this workshop, we aim to gather researchers from the intersecting fields of inverse problems, uncertainty quantification, generative models, and related areas to discuss novel theory, new methods, and recent challenges. Bringing a focused, multifaceted research group together in one location is an excellent opportunity to significantly advance the field and open new directions.

In recent years, the worlds of generative models and inverse problems in imaging have been converging on a number of common ideas and paradigms.

The underlying questions within the field of inverse problems can be stated, in simplified form, as follows: the aim is to estimate an unknown image x from noisy and incomplete data y. Applications range from healthcare and security to astronomy and beyond. The estimate x* of x can be defined as the minimiser of an objective function g, given by the sum of a data-fidelity term f and a regularisation/prior term r that promotes a given prior model of the original object to compensate for the ill-posedness of the problem. The choice of f depends on the noise model, while the reconstruction quality strongly depends on the prior model r. Traditionally, research on inverse problems has focused on explicit image priors, often promoting sparsity either directly in the image space or in a transformed space.

From a Bayesian perspective, g can be seen as the negative logarithm of a posterior distribution combining the likelihood distribution (data fidelity) and a prior model (regularisation), with x* corresponding to a Maximum A Posteriori (MAP) estimate. The resulting minimisation problem can be solved efficiently using iterative optimisation methods, some of which come with convergence guarantees towards a minimum of the objective. Methods backed by such strong theoretical results are crucial for reliable decision-making, and many applications depend on this robustness for data interpretation.
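In symbols, the formulation above reads as follows. This is a minimal sketch: the trade-off weight λ is not stated on this page and is introduced here only to make the balance between the two terms explicit (it is often absorbed into r).

```latex
% Variational formulation: x* minimises the sum of a data-fidelity term f
% and a regularisation/prior term r (lambda is an assumed trade-off weight).
x^\star \in \operatorname*{arg\,min}_{x} \, g(x),
\qquad
g(x) = f(x; y) + \lambda\, r(x).
% Bayesian reading: with f(x; y) = -\log p(y \mid x) and
% \lambda\, r(x) = -\log p(x), one has g(x) = -\log p(x \mid y) + \mathrm{const},
% so x* is a Maximum A Posteriori (MAP) estimate.
```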

Recently, an important trend has emerged towards data-driven image models, in particular models encoded by neural networks. These data-driven models can be represented by generative or discriminative networks, e.g. GANs, VAEs, or normalizing flows. In the generative case, the goal of the network is to generate samples from a high-dimensional data distribution, learned for instance from a large database of examples. Once trained, these networks can be used in optimisation or sampling schemes for solving inverse problems. In the discriminative case, the network is trained to distinguish between desirable images (which should have high probability under the prior distribution) and undesirable images or noise (which should have low probability under the prior), and can be used post-training as a data-driven regulariser when solving inverse problems.
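As a concrete illustration of how a trained network can be plugged into an optimisation scheme, the sketch below runs a plug-and-play forward-backward (proximal gradient) iteration in Python/NumPy. Everything here is an assumption made for illustration: the measurement operator A is a random matrix, and the `denoise` step is plain soft-thresholding (a sparsity prior in the image domain); in a data-driven scheme that step would be replaced by a trained denoising network.

```python
import numpy as np

def denoise(v, strength):
    """Placeholder prior step: soft-thresholding, i.e. a sparsity prior.
    In plug-and-play schemes this proximal step is replaced by a trained
    denoising network."""
    return np.sign(v) * np.maximum(np.abs(v) - strength, 0.0)

def pnp_forward_backward(y, A, step, strength, n_iter=100):
    """Plug-and-play forward-backward splitting for y = A x + noise.

    Alternates a gradient step on the data-fidelity term
    f(x) = 0.5 * ||A x - y||^2 with a denoising (prior) step.
    The step size should satisfy step <= 1 / ||A||^2 for convergence.
    """
    x = A.T @ y  # crude initialisation (back-projection of the data)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                       # gradient of data fidelity
        x = denoise(x - step * grad, step * strength)  # prior/regularisation step
    return x

# Toy usage: recover a sparse signal from noisy, incomplete measurements.
rng = np.random.default_rng(0)
n, m = 200, 80
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement operator
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = pnp_forward_backward(y, A, step=0.1, strength=0.05, n_iter=300)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Swapping the soft-thresholding for a learned denoiser gives the plug-and-play variants alluded to above; whether the iteration still converges then depends on properties of the network, which is precisely the robustness question this workshop addresses.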

All of these methods have shown remarkable versatility and efficiency in solving inverse problems, notably in a Bayesian framework. They open the way to restoration algorithms that exploit more powerful and accurate image models. Nevertheless, they also raise important mathematical challenges regarding the robustness of the delivered solution, e.g. in terms of convergence and uncertainty quantification.

This workshop aims to discuss important questions arising in this context, including (but not restricted to): the interplay between expressivity and robustness of neural networks for solving inverse problems; learning from small datasets; and special network architectures, e.g. with diverse sparsity assumptions, that are robust against invariances and are of particular interest in applications, notably when tackling the curse of dimensionality.