# European Summer School in Financial Mathematics 14th edition

*Aug 30 - Sep 03, 2021*

*Online*

The European Summer School in Financial Mathematics, for its 14th edition, will be hosted by the International Centre for Mathematical Sciences (ICMS).

This summer school will take place online using Zoom.

The Summer School brings together talented young researchers in mathematical finance.

The summer school will focus on two advanced courses: **1) Optimal transport methods for economic models and machine learning; 2) The signature method in machine learning and its application to mathematical finance.**

There will also be student seminars and discussion sessions which allow the participants to engage with each other and discuss their current research.

One of the aims of the Summer School is to encourage active cooperation and collaboration in mathematical finance among European institutions. We very much thank the members of the scientific committee for their support in achieving this aim.

This school belongs to the series of the European Mathematical Society applied mathematics schools. We gratefully acknowledge the support of the International Centre for Mathematical Sciences (ICMS), CMAP, Ecole Polytechnique (Paris, France), the Adam Smith Business School (University of Glasgow), the Glasgow Mathematical Journal Learning and Research Support Fund, and the ANR programme *Investissements d’Avenir*.

**The Organising Committee**

Ankush Agarwal

Gonçalo Dos Reis

Stefano De Marco

Thibaut Mastrolia

**The Scientific Committee**

The Scientific Committee consists of European leaders and representatives of financial mathematics. We warmly thank them for their encouragement and for accepting to be part of this committee.

Peter Bank, Peter Imkeller, Wolfgang Runggaldier, Mete Soner, Youri Kabanov, Walter Schachermayer, Josef Teichmann, Santiago Carillo, Ralf Korn, Martin Schweizer, Albert Shiryaev, Nicole El Karoui, Gilles Pagès, Huyen Pham, Marco Frittelli, Damien Lamberton, Bernard Lapeyre, Lukas Stettner, David Hobson, Bernt Øksendal, Denis Talay, Chris Rogers

### Arrangements

PhD students and early career researchers are invited to register and participate in the summer school. Please register by filling out the online form.

All are welcome to attend our event online, but registration is still required.

For any enquiries, please contact Ollie Quinn at ICMS.

### Details of the Mini Courses and Speakers

**Optimal Transport Methods in Machine Learning: from the Sinkhorn Algorithm to Generative Adversarial Networks**

by Beatrice Acciaio (ETH Zurich, Switzerland)

We start by recalling tools from classical optimal transport (OT) theory, and then introduce new developments in OT, specifically what is now called causal optimal transport (COT). We illustrate how the concept of causality in OT is the appropriate one for tackling dynamic problems, where time plays a crucial role, especially in a financial context. We then consider regularized optimal transport problems and the Sinkhorn algorithm used for computing entropic OT. Further, we review recent developments in generative adversarial networks (GANs) which employ tools from OT theory. We then combine all the above concepts to train a network to generate or predict (financial) time series. Finally, we discuss the results and the numerical challenges.
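As a minimal illustration of the Sinkhorn algorithm for entropic OT mentioned above, here is a NumPy sketch; the cost matrix, marginals and regularisation strength are illustrative choices, not part of the course material:

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, n_iter=1000):
    """Entropic OT: coupling P minimising <P, C> + eps * KL(P | mu x nu)."""
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)            # scale to match column marginals
        u = mu / (K @ v)              # scale to match row marginals
    return u[:, None] * K * v[None, :]

# Two uniform distributions on 4 points of the unit interval, quadratic cost
x = np.linspace(0.0, 1.0, 4)
C = (x[:, None] - x[None, :]) ** 2
mu = nu = np.full(4, 0.25)
P = sinkhorn(mu, nu, C)
print(P.sum(axis=1))   # row marginals recover mu
```

Each iteration alternately rescales the rows and columns of the Gibbs kernel; smaller `eps` gives couplings closer to unregularised OT but slows convergence.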

**Optimal Transport Methods for Economic Models**

by Alfred Galichon (New York University, USA)

This course is focused on the computation of competitive equilibrium, which is at the core of surge pricing engines and allocation mechanisms. It will investigate diverse applications such as network congestion, surge pricing, and matching platforms. It provides a bridge between theory, empirics and computation and will introduce tools from economics, mathematics and computer science. Mathematical concepts (such as lattice programming, supermodularity, discrete convexity, Galois connections, etc.) will be taught while studying various economic models. The same is true of computational methods (such as ‘tatonnement’ algorithms, asynchronous parallel computation, mathematical programming under equilibrium constraints, etc.).

**A Primer on the Signature Method in Machine Learning**

by Ilya Chevyrev (University of Edinburgh, UK)

The signature of a path has been recognised in the last few years as a powerful method to store information about a path. At its basic level, the signature is the collection of iterated integrals of a path. This simple definition leads to surprisingly deep properties, which all indicate that the signature is a natural analogue of polynomials on paths. In this minicourse, I will present the definition of the signature and how it arises in several contexts, including control theory and stochastic differential equations. I will demonstrate some of its important properties: these include the shuffle identity, which is responsible for the polynomial-like behaviour on paths, and the Chen identity, which is important for computations. In the last part of the course, I will discuss some recent applications to machine learning, focusing on kernel learning and classification tasks.
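The iterated integrals and the shuffle identity mentioned above can be illustrated for a piecewise-linear path, where the depth-2 signature has a closed form; this is a minimal sketch, with the example path chosen arbitrarily:

```python
import numpy as np

def signature_level2(path):
    """Depth-2 signature of a piecewise-linear path given as an (n, d) array."""
    dx = np.diff(path, axis=0)        # increment of each linear piece
    s1 = dx.sum(axis=0)               # level 1: total increment of the path
    d = path.shape[1]
    s2 = np.zeros((d, d))
    running = np.zeros(d)             # x(t_k) - x(t_0) before the current piece
    for step in dx:
        # exact level-2 contribution of one linear piece (Chen's identity)
        s2 += np.outer(running, step) + 0.5 * np.outer(step, step)
        running += step
    return s1, s2

# An arbitrary path in R^2
path = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 2.0], [2.0, 1.0]])
s1, s2 = signature_level2(path)

# Shuffle identity at level 2: S^(i,j) + S^(j,i) = S^(i) * S^(j)
print(np.allclose(s2 + s2.T, np.outer(s1, s1)))   # True
```

The level-1 terms are just the total increment, while the antisymmetric part of the level-2 matrix is the Lévy area; the symmetric part is determined by level 1, which is exactly what the shuffle identity says.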

**Harnessing quantitative finance by deep learning**

by Blanka Horvath (King's College London, UK) and Mikko Pakkanen (Imperial College London, UK)

Deep learning is currently making headway in the realm of quantitative finance, whilst the financial industry is increasingly embracing data-driven workflows powered by machine learning and data science. In this minicourse, we shall present some of the recent advances of deep learning applied to quantitative finance. After a brief introduction to the basic principles of deep learning, we will explain how it can be applied to derivatives pricing, hedging and market data simulation in a novel way. We will demonstrate these methods by extensive numerical examples.

**Differential Machine Learning**

by Antoine Savine (Danske Bank and Copenhagen University, Denmark)

Differential machine learning (ML) extends supervised learning, with models trained on examples of not only inputs and labels, but also differentials of labels with respect to inputs. Differential ML is applicable in all situations where high-quality first-order derivatives with respect to the training inputs are available. In the context of financial derivatives risk management, pathwise differentials are efficiently computed with automatic adjoint differentiation (AAD). Differential ML, combined with AAD, provides extremely effective pricing and risk approximations. We can produce fast pricing analytics in models too complex for closed-form solutions, extract the risk factors of complex transactions and trading books, and effectively compute risk management metrics, such as risk reports across a large number of scenarios, backtesting and simulation of hedge strategies, or regulatory capital computations.
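The idea of fitting values and derivatives jointly can be sketched in a simple regression setting; the quadratic target, noise level and penalty weight below are made-up illustrative choices, and a real application would use a neural network with AAD-computed pathwise derivatives rather than this toy closed-form fit:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 20)
y = x**2 + 0.05 * rng.standard_normal(20)   # noisy value labels
dy = 2 * x                                  # derivative labels (e.g. from AAD)

# Quadratic value model and the exact derivatives of its features
Phi = np.column_stack([np.ones_like(x), x, x**2])
dPhi = np.column_stack([np.zeros_like(x), np.ones_like(x), 2 * x])

# Stack value and derivative equations into one least-squares problem
lam = 1.0                                   # weight on the derivative penalty
A = np.vstack([Phi, lam * dPhi])
b = np.concatenate([y, lam * dy])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(coef, 2))   # close to [0, 0, 1], the true coefficients
```

The derivative rows pin down the shape of the function, so the fit recovers the true coefficients despite noisy value labels; with neural networks the same effect is obtained by adding a gradient-matching term to the training loss.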

The course focuses on differential deep learning (DL), arguably its strongest application. We will show how standard DL trains neural networks (NN) on pointwise examples, whereas differential DL also teaches them the shape of the target function, resulting in vastly improved performance, and illustrate this with a number of numerical examples, both idealised and real-world. We will also discuss how to apply differential learning to other ML models, such as classic regression or principal component analysis (PCA).

**Neural Stochastic Differential Equations for Time Series Modelling**

by James Foster (Oxford University, UK)

Stochastic differential equations (SDEs) are a popular model for describing continuous-time phenomena and have seen particular success in the pricing and hedging of financial derivatives. However, given the current data science revolution and following the seminal paper “Neural Ordinary Differential Equations”, it is natural to investigate how SDE methodologies could be improved using tools from machine learning. This has led to several recent works on so-called “Neural SDEs”, which seek to combine the modelling capabilities of SDEs with the flexibility and efficient training of neural networks. In this talk, I will give an overview of these developments and show how SDEs can be viewed as time series models that fit nicely with well-known ideas from data science, such as generative adversarial networks (GANs).
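The discretisation underlying most neural SDE implementations is the Euler-Maruyama scheme; here is a minimal sketch with a known drift and diffusion (geometric Brownian motion, parameters chosen for illustration) standing in for the neural networks a neural SDE would train:

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, T=1.0, n=500, n_paths=2000, seed=0):
    """Simulate dX = drift(X) dt + diffusion(X) dW with the Euler-Maruyama scheme.
    In a neural SDE, drift and diffusion would be neural networks."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.full(n_paths, x0)
    for _ in range(n):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        x = x + drift(x) * dt + diffusion(x) * dw
    return x

# Geometric Brownian motion dX = 0.05 X dt + 0.2 X dW with X_0 = 1
xT = euler_maruyama(1.0, lambda x: 0.05 * x, lambda x: 0.2 * x)
print(round(xT.mean(), 2))   # theoretical mean is exp(0.05), about 1.05
```

Because the scheme is just a differentiable composition of the drift and diffusion maps, its parameters can be trained by backpropagating through the simulated paths, which is how neural SDEs are fitted in practice.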