*Apr 16 2021*

15:00 - 16:00

This diverse seminar series will highlight novel advances in methodology and application in statistics and data science, and will take the place of the University of Glasgow Statistics Group seminar during this period of remote working. We welcome all interested attendees at Glasgow and further afield. For more information, please see the University of Glasgow webpage.

Call details will be sent out 30 minutes before the start of each seminar.

These seminars are recorded. All recordings can be found here.

### Future Seminars:

**Theodore Papamarkou** - *Challenges in Markov chain Monte Carlo for Bayesian neural networks*

- Markov chain Monte Carlo (MCMC) methods have not been broadly adopted in Bayesian neural networks (BNNs). This paper initially reviews the main challenges in sampling from the parameter posterior of a neural network via MCMC. Such challenges culminate in a lack of convergence to the parameter posterior. Nevertheless, this paper shows that a non-converged Markov chain, generated via MCMC sampling from the parameter space of a neural network, can yield via Bayesian marginalization a valuable predictive posterior of the output of the neural network. Classification examples based on multilayer perceptrons showcase highly accurate predictive posteriors. The postulate of limited scope for MCMC developments in BNNs is partially valid; an asymptotically exact parameter posterior seems less plausible, yet an accurate predictive posterior is a tenable research avenue. This is joint work with Jacob Hinkle, M. Todd Young and David Womble.
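The marginalization step the abstract describes can be sketched numerically: given draws from the parameter posterior of a network, the predictive posterior is the average of the per-draw predictive distributions. In this minimal sketch the "posterior samples" are just random draws standing in for MCMC output, and the tiny multilayer perceptron is an illustrative assumption, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_predict(x, W1, b1, W2, b2):
    """Forward pass of a tiny one-hidden-layer MLP with softmax output."""
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Stand-in for MCMC draws from the parameter posterior: S sampled
# parameter sets (random draws here, purely for illustration).
S, d_in, d_h, d_out = 100, 2, 8, 3
samples = [(rng.normal(size=(d_in, d_h)), rng.normal(size=d_h),
            rng.normal(size=(d_h, d_out)), rng.normal(size=d_out))
           for _ in range(S)]

x = np.array([[0.5, -1.0]])  # a single test input

# Bayesian marginalization: average the per-sample predictive
# distributions to approximate the predictive posterior p(y | x, data).
pred = np.mean([mlp_predict(x, *theta) for theta in samples], axis=0)
print(pred)  # predictive class probabilities, sums to 1
```

The point of the abstract is that this average can be useful even when the underlying chain has not converged to the parameter posterior.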

### Past Seminars:

23 April 2020

**Neil Chada**, National University of Singapore - *Advancements of non-Gaussian random fields for statistical inversion*

- Developing informative priors for Bayesian inverse problems is an important direction, which can help quantify information on the posterior. In this talk we introduce a new class of priors for inversion based on alpha-stable sheets, which incorporate multiple known processes such as the Gaussian and Cauchy processes. We analyze various convergence properties, which are achieved through the different representations these sheets can take. Other aspects we wish to address are well-posedness of the inverse problem and finite-dimensional approximations. To complement the analysis we provide some connections with machine learning, which will allow us to use sampling-based MCMC schemes. We will conclude the talk with some numerical experiments, highlighting the robustness of the established connection, on various inverse problems arising in regression and PDEs.

14 May 2020

**Roberta Pappadà**, University of Trieste - *Consensus clustering based on pivotal methods*

- Despite its widespread use, one major limitation of the K-means clustering algorithm is its sensitivity to the initial seeding used to produce the final partition. We propose a modified version of the classical approach, which exploits the information contained in a co-association matrix obtained from clustering ensembles. Our proposal is based on the identification of a set of data points (pivotal units) that are representative of the group they belong to. The presented approach can thus be viewed as a possible strategy to perform consensus clustering. The selection of pivotal units was originally employed for solving the so-called label-switching problem in Bayesian estimation of finite mixture models. Different criteria for identifying the pivots are discussed and compared. We investigate the performance of the proposed algorithm via simulation experiments and a comparison with other consensus methods available in the literature.
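The co-association idea can be sketched as follows: run K-means several times under different seedings, record how often each pair of points co-clusters, and select one well-connected pivotal unit per group. This is an illustrative sketch with a bare-bones K-means and one simple pivot criterion, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, n_iter=20, seed=0):
    """Bare-bones Lloyd's K-means; sensitive to the initial seeding."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy data: two well-separated groups.
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])

# Clustering ensemble: H runs with different random seedings.
H, k = 25, 2
runs = np.array([kmeans(X, k, seed=s) for s in range(H)])

# Co-association matrix: fraction of runs in which points i and j
# are placed in the same cluster (invariant to label switching).
n = len(X)
C = np.mean(runs[:, :, None] == runs[:, None, :], axis=0)

# Pivots: per reference group, the unit with maximal total
# co-association (one simple criterion among those discussed).
ref = runs[0]
pivots = [int(np.argmax(np.where(ref == g, C.sum(1), -np.inf)))
          for g in range(k)]
print(pivots)
```

The pivots can then seed a final K-means run, making the consensus partition far less dependent on any single initialization.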

21 May 2020

**Ana Basiri**, UCL - *Who Are the "Crowd"? Learning from Large but Patchy Samples*

- This talk will look at the challenges of crowdsourced/self-reporting data, such as missingness and biases in 'new forms of data', and consider them as a useful source of data in their own right. A few applications and examples of these will be discussed, including extracting the 3D map of cities using the patterns of blockage, reflection, and attenuation of the GPS signals (or other similar signals) contributed by the volunteers/crowd. In the era of big data, open data, social media and crowdsourced data, when "we are drowning in data", the gaps, unavailability, representativeness and bias issues associated with these data may point to hidden problems, allowing us to understand the data, society and cities better.

4 June 2020

**Colin Gillespie**, University of Newcastle - *Getting the most out of other people's R sessions*

- Have you ever wondered how you could hack other people's R sessions? Well, I did, and discovered that it wasn't that hard! In this talk, I discuss a few ways I got people to run arbitrary, and hence very dangerous, R scripts. This is certainly worrying now that we have all moved to working from home.

18 June 2020

**Jo Eidsvik**, NTNU - *Autonomous Oceanographic Sampling Designs Using Excursion Sets for Multivariate Gaussian random fields*

- Improving and optimizing oceanographic sampling is a crucial task for marine science and maritime management. Faced with limited resources to understand processes in the water-column, the combination of statistics and autonomous robotics provides new opportunities for experimental designs. In this work we develop methods for efficient spatial sampling applied to the mapping of coastal processes by providing informative descriptions of spatial characteristics of ocean phenomena. Specifically, we define a design criterion based on improved characterization of the uncertainty in the excursions of vector-valued Gaussian random fields, and derive tractable expressions for the expected Bernoulli variance reduction in such a framework. We demonstrate how this criterion can be used to prioritize sampling efforts at locations that are ambiguous, making exploration more effective. We use simulations to study the properties of methods and to compare them with state-of-the-art approaches, followed by results from field deployments with an autonomous underwater vehicle as part of a case study mapping the boundary of a river plume. The results demonstrate the potential of combining statistical methods and robotic platforms to effectively inform and execute data-driven environmental sampling.

9 July 2020

**Vianey Leos-Barajas**, NCSU - *Spatially-coupled hidden Markov models for short-term wind speed forecasting*

- Hidden Markov models (HMMs) provide a flexible framework to model time series data where the observation process, Yt, is taken to be driven by an underlying latent state process, Zt. In this talk, we will focus on discrete-time, finite-state HMMs as they provide a flexible framework that facilitates extending the basic structure in many interesting ways. HMMs can accommodate multivariate processes by (i) assuming that a single state governs the M observations at time t, (ii) assuming that each observation process is governed by its own HMM, irrespective of what occurs elsewhere, or (iii) striking a balance between the two, as in the coupled HMM framework. Coupled HMMs assume that a collection of M observation processes is governed by its respective M state processes. However, the mth state process at time t, Zm,t, depends not only on Zm,t−1 but also on the collection of state processes Z−m,t−1. We introduce spatially-coupled hidden Markov models whereby the state processes interact according to an imposed spatial structure and the observations are collected at S spatial locations. We outline an application (in progress) to short-term forecasting of wind speed using data collected across multiple wind turbines at a wind farm.
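One way to picture the spatial coupling is to simulate state processes whose transition probabilities mix each chain's own dynamics with its neighbours' previous states. The coupling scheme, the two-state regimes, and the emission means below are illustrative assumptions, not the model from the talk:

```python
import numpy as np

rng = np.random.default_rng(2)

# M chains on a line, each with K states; chain m's transition at time
# t depends on its own previous state and on its spatial neighbours'
# previous states (a simple illustrative coupling scheme).
M, K, T = 4, 2, 200
alpha = 0.8  # weight on a chain's own previous state
Z = np.zeros((M, T), dtype=int)
Z[:, 0] = rng.integers(K, size=M)

base = np.array([[0.9, 0.1],   # own-state transition matrix
                 [0.2, 0.8]])

for t in range(1, T):
    for m in range(M):
        nbrs = [j for j in (m - 1, m + 1) if 0 <= j < M]
        # Mix the chain's own dynamics with the neighbours' empirical
        # state distribution at t-1 (the spatial coupling).
        nbr_dist = np.bincount(Z[nbrs, t - 1], minlength=K) / len(nbrs)
        p = alpha * base[Z[m, t - 1]] + (1 - alpha) * nbr_dist
        Z[m, t] = rng.choice(K, p=p)

# Emissions: state-dependent Gaussian "wind speeds" at each location
# (hypothetical low/high wind regimes).
means = np.array([4.0, 10.0])
Y = rng.normal(means[Z], 1.0)
print(Y.shape)  # (M, T): one observed series per spatial location
```

In this toy scheme, a location entering the high-wind regime raises the probability that its neighbours follow, which is the intuition behind borrowing strength across turbines for short-term forecasting.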

6 August 2020

**Helen Ogden**, University of Southampton - *Towards More Flexible Models for Count Data*

- Count data are widely encountered across a range of application areas, including medicine, engineering and ecology. Many of the models used for the statistical analysis of count data are quite simple and make strong assumptions about the data generating process, and it is common to encounter situations in which these models fail to fit data well. I will review various existing models for count data, and describe some simple scenarios where each of these models fail. I will describe current work on an extension to existing Poisson mixture models, and demonstrate the performance of this new class of models in some simple examples.
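A classical instance of the failure mode described above is overdispersion: a Poisson model forces the variance to equal the mean, while a Poisson mixture relaxes that constraint. This sketch uses a gamma-Poisson mixture (yielding the negative binomial) as one standard example, not the new model class from the talk:

```python
import numpy as np

rng = np.random.default_rng(4)

n, mu = 100_000, 5.0

# Plain Poisson: variance is tied to the mean.
y_pois = rng.poisson(mu, size=n)

# Gamma-Poisson mixture: each count gets its own gamma-distributed
# rate with E[rate] = mu, inflating the variance to mu + mu**2/shape.
shape = 2.0  # smaller shape => more overdispersion
rates = rng.gamma(shape, mu / shape, size=n)
y_mix = rng.poisson(rates)

print(y_pois.mean(), y_pois.var())  # both close to mu
print(y_mix.mean(), y_mix.var())    # mean near mu, variance well above it
```

Data whose sample variance far exceeds the sample mean are a quick diagnostic that the plain Poisson model will fit poorly.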

17 September 2020

**Andrew Zammit Mangion**, University of Wollongong - *Statistical Machine Learning for Spatio-Temporal Forecasting*

- Conventional spatio-temporal statistical models are well-suited for modelling and forecasting using data collected over short time horizons. However, they are generally time-consuming to fit, and often do not realistically encapsulate temporally-varying dynamics. Here, we tackle these two issues by using a deep convolutional neural network (CNN) in a hierarchical statistical framework, where the CNN is designed to extract process dynamics from the process' most recent behaviour. Once the CNN is fitted, probabilistic forecasting can be done extremely quickly online using an ensemble Kalman filter with no requirement for repeated parameter estimation. We conduct an experiment where we train the model using 13 years of daily sea-surface temperature data in the North Atlantic Ocean. Forecasts are seen to be accurate and calibrated. A key advantage of the approach is that the CNN provides a global prior model for the dynamics that is realistic, interpretable, and computationally efficient to forecast with. We show the versatility of the approach by successfully producing 10-minute nowcasts of weather radar reflectivities in Sydney using the same model that was trained on daily sea-surface temperature data in the North Atlantic Ocean. This is joint work with Christopher Wikle, University of Missouri.

25 September 2020

**Ed Hill**, University of Warwick - *Predictions of COVID-19 dynamics in the UK: short-term forecasting, analysis of potential exit strategies and impact of contact networks*

- Regarding the future course of the COVID-19 outbreak in the UK, mathematical models have provided, and continue to provide, short and long term forecasts to support evidence-based policymaking. We present a deterministic, age-structured transmission model for SARS-CoV-2 that uses real-time data on confirmed cases requiring hospital care and mortality to provide predictions on epidemic spread in ten regions of the UK. The model captures a range of age-dependent heterogeneities, reduced transmission from asymptomatic infections and is fit to the key epidemic features over time. We illustrate how the model has been used to generate short-term predictions and assess potential lockdown exit strategies. As steps are taken to relax social distancing measures, questions also surround the ramifications on community disease spread of workers returning to the workplace and students returning to university. To study these aspects, we present a network model to capture the transmission of SARS-CoV-2 over overlapping sets of networks in household, social and work/study settings.

2 October 2020

**Eleni Matechou**, University of Kent - *Environmental DNA as a monitoring tool at a single and multi-species level*

- Environmental DNA (eDNA) is a survey tool with rapidly expanding applications for assessing presence of a wildlife species at surveyed sites. eDNA surveys consist of two stages: stage 1, when a sample is collected from a site, and stage 2, when the sample is analysed in the lab for presence of species' DNA. The methods were originally developed to target particular species (single-species), but can now be used to obtain a list of species at each surveyed site (multi-species/metabarcoding). In this talk, I will present a novel Bayesian model for analysing single-species eDNA data, while accounting for false positive and false negative errors, which are known to be non-negligible, in both stages of eDNA surveys. All model parameters can be modelled as functions of covariates and the proposed methodology allows us to perform efficient Bayesian variable selection that does not require the use of trans-dimensional algorithms. I will also discuss joint species distribution models as the starting point for modelling multi-species eDNA data and will outline the next steps required to obtain a unifying modelling framework for eDNA surveys.

9 October 2020

**Daniela Castro Camilo**, University of Glasgow - *Bayesian space-time gap filling for inference on extreme hot-spots: an application to Red Sea surface temperatures*

- We develop a method for probabilistic prediction of extreme value hot-spots in a spatio-temporal framework, tailored to big datasets containing important gaps. In this setting, direct calculation of summaries from data, such as the minimum over a space-time domain, is not possible. To obtain predictive distributions for such cluster summaries, we propose a two-step approach. We first model marginal distributions with a focus on accurate modeling of the right tail and then, after transforming the data to a standard Gaussian scale, we estimate a Gaussian space-time dependence model defined locally in the time domain for the space-time subregions where we want to predict. In the first step, we detrend the mean and standard deviation of the data and fit a spatially resolved generalized Pareto distribution to apply a correction of the upper tail. To ensure spatial smoothness of the estimated trends, we either pool data using nearest-neighbor techniques, or apply generalized additive regression modeling. To cope with the high space-time resolution of the data, the local Gaussian models use a Markov representation of the Matérn correlation function based on the stochastic partial differential equations (SPDE) approach. In the second step, they are fitted in a Bayesian framework through the integrated nested Laplace approximation implemented in R-INLA. Finally, posterior samples are generated to provide statistical inferences through Monte Carlo estimation. Motivated by the 2019 Extreme Value Analysis data challenge, we illustrate our approach to predict the distribution of local space-time minima in anomalies of Red Sea surface temperatures, using a gridded dataset (11,315 days, 16,703 pixels) with artificially generated gaps. In particular, we show the improved performance of our two-step approach over a purely Gaussian model without tail transformations.

16 October 2020

**Daniel Lawson**, University of Bristol - *CLARITY - Comparing heterogeneous data using dissimiLARITY*

- Integrating datasets from different disciplines is hard because the data are often qualitatively different in meaning, scale, and reliability. When two datasets describe the same entities, many scientific questions can be phrased around whether the similarities between entities are conserved. Our method, CLARITY, quantifies consistency across datasets, identifies where inconsistencies arise, and aids in their interpretation. We explore three diverse comparisons: Gene Methylation vs Gene Expression, evolution of language sounds vs word use, and country-level economic metrics vs cultural beliefs. The non-parametric approach is robust to noise and differences in scaling, and makes only weak assumptions about how the data were generated. It operates by decomposing similarities into two components: the 'structural' component analogous to a clustering, and an underlying 'relationship' between those structures. This allows a 'structural comparison' between two similarity matrices using their predictability from 'structure'. This presentation describes work presented in arXiv:2006.00077; the accompanying software can be found here.

22 October 2020

**Charlotte Jones-Todd**, University of Auckland - *Modelling systematic effects and latent phenomena in point referenced data*

- The spatial location and time of events (or objects) is the currency that point process statistics invests to estimate the drivers of the intensity, or rate of occurrence, of those events. Yet, the assumed processes giving rise to the observed data typically fail to represent the full complexity of the driving mechanisms. Ignoring spatial or temporal dependence between events leads to incorrect inference, as does assuming the wrong dependency structure. Latent Gaussian models are a flexible class of model that accounts for dependency structures in a 'hide all ills' fashion; the stochastic structures in these models absorb and amalgamate the underlying, unaccounted-for mechanisms leading to the observed data. In this talk I will introduce this class of model and discuss recent work using latent Gaussian fields to model the fluctuations in the data that cannot otherwise be accounted for.

30 October 2020

**Theresa Smith**, University of Bath - *A collaborative project to monitor and improve engagement in talking therapies*

- Over the past two years, I have been working on a joint research project between the University of Bath and the software company Mayden to develop and test models to predict whether a patient in NHS England’s Improving Access to Psychological Therapy (IAPT) programme will attend their therapy appointments, allowing targeted interventions to be developed. Currently only a third of patients referred to IAPT complete their treatment and of these only half reach recovery. Given that nearly two thirds of people in the UK report having experienced mental health problems, these rates are concerning. In this talk, I’ll give an overview of the collaboration and present the co-development and findings of two recent papers investigating factors associated with engagement in IAPT services:

Predicting patient engagement in IAPT services: a statistical analysis of electronic health records (doi: 10.1136/ebmental-2019-300133)

Impact of COVID-19 on Primary Care Mental Health Services: A Descriptive, Cross-Sectional Timeseries of Electronic Healthcare Records (doi: 10.1101/2020.08.15.20175562)

6 November 2020

**Manuele Leonelli**, IE University - *Diagonal distributions*

- Diagonal distributions are an extension of marginal distributions, which can be used to summarize and efficiently visualize the main features of multivariate distributions in arbitrary dimensions. The main diagonal is studied in detail, which consists of a mean-constrained univariate distribution function on [0, 1], whose variance connects with Spearman's rho, and whose mass at the endpoints 0 and 1 offers insights on the strength of tail dependence. To learn about the main diagonal density from data, histogram and kernel-based methods are introduced that take advantage of auxiliary information in the form of a moment constraint which diagonal distributions must obey. An application is given illustrating how diagonal densities can be used to contrast the diversification of a portfolio based on FAANG stocks against one based on crypto-assets.
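The main diagonal can be estimated directly from data: after a probability integral transform of each margin, D(u) is the proportion of observations whose componentwise maximum is at most u. A minimal empirical sketch on simulated bivariate data (not the histogram or kernel-based estimators from the talk):

```python
import numpy as np

rng = np.random.default_rng(3)

# Bivariate sample with positive dependence via a shared latent factor.
n = 5000
z = rng.normal(size=n)
X = np.column_stack([z + rng.normal(size=n), z + rng.normal(size=n)])

# Probability integral transform via empirical marginal CDFs (ranks).
U = np.argsort(np.argsort(X, axis=0), axis=0) / (n + 1)

# Main diagonal D(u) = P(U1 <= u, ..., Ud <= u): empirically, the
# proportion of observations whose componentwise maximum is <= u.
W = U.max(axis=1)
u = np.linspace(0, 1, 101)
D = np.mean(W[:, None] <= u[None, :], axis=0)

print(D[25], D[50], D[75])  # D evaluated at u = 0.25, 0.5, 0.75
```

Under independence D(u) would be close to u squared (in two dimensions), so the gap between the estimated curve and that baseline visualizes the strength of dependence.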

20 November 2020

**Nicole Augustin**, University of Edinburgh - *Introduction of standardised tobacco packaging and minimum excise tax in the UK: a prospective study*

- Standardised packaging for factory made and roll your own tobacco was implemented in the UK in May, 2017, alongside a minimum excise tax for factory made products. As other jurisdictions attempt to implement standardised packaging, the tobacco industry continues to suggest that it would be counterproductive, in part by leading to falls in price due to commoditisation. Here, we assess the impact of the introduction of these policies on the UK tobacco market. We carried out a prospective study of UK commercial electronic point-of-sale data from 11 constituent geographic areas. Data were available for each tobacco product (or Stock Keeping Unit (SKU)): the tobacco brand, brand family, brand variant, and specific features of the pack. For each SKU, three years (May 2015 to April 2018) of monthly data on volume of sales, sales prices, and extent of distribution of sales within the 11 UK geographical areas were available. The main outcomes were changes in sales volumes, volume-weighted real prices, and tobacco industry revenue. To estimate temporal trends of monthly price per stick, revenue and volume sold we used additive mixed models. In the talk we will cover some of the statistically interesting details on data preparation, model choice, trend estimation and presentation of model results. We will also present the main results and talk about limitations.

27 November 2020

**Mark Brewer**, BIOSS, & **Marcel van Oijen**, CEH - *Drought risk analysis for forested landscapes: project prafor*

- This NERC-funded project aims to extend theory for probabilistic risk analysis of continuous systems, test its use against forest data, use process models to predict future risks, and develop decision-support tools. Risk is commonly defined as the expectation value for loss. Most risk theory is developed for discrete hazards such as accidents, disasters and other forms of sudden system failure, and not for systems where the hazard variable is always present and continuously varying, with matching continuous system response. Risks from such continuous hazards (levels of water, pollutants) are not associated with sudden discrete events, but with extended periods of time during which the hazard variable exceeds a threshold. To manage such risks, we need to know whether we should aim to reduce the probability of hazard threshold exceedance or the vulnerability of the system. In earlier work, we showed that there is only one possible definition of vulnerability that allows formal decomposition of risk as the product of hazard probability and system vulnerability. We have used this approach to analyse risks from summer droughts to the productivity of vegetation across Europe under current and future climatic conditions; this showed that climate change will likely lead to greatest drought risks in southern Europe, primarily because of increased hazard probability rather than significant changes in vulnerability. We plan to improve on this earlier work by: adding exposure to hazard; quantifying uncertainties in our risk estimates; relaxing assumptions via Bayesian hierarchical modelling; testing our approach both on observational data from forests in the U.K., Spain and Finland and on simulated data from process-based modelling of forest response to climate change; embedding the approach in Bayesian decision theory; and developing an interactive web application as a tool for preliminary exploration of risk and its components to support decision-making.

4 December 2020

**Ruth King**, University of Edinburgh - *To integrated models ... and beyond …*

- Capture-recapture studies are often conducted on wildlife populations in order to improve our understanding of the given species and/or for conservation and management purposes.

**Agnieszka Borowska**, University of Glasgow

- Chemotaxis is a type of cell movement in response to a chemical stimulus which plays a key role in multiple biophysical processes, such as embryogenesis and wound healing, and which is crucial for understanding metastasis in cancer research. In the literature, chemotaxis has been modelled using biophysical models based on systems of nonlinear stochastic partial differential equations (NSPDEs), which are known to be challenging for statistical inference due to the intractability of the associated likelihood and the high computational costs of their numerical integration. Therefore, data analysis in this context has been limited to comparing predictions from NSPDE models to laboratory data using simple descriptive statistics. In my talk, I will present a statistically rigorous framework for parameter estimation in complex biophysical systems described by NSPDEs such as the one of chemotaxis. I will adopt a likelihood-free approach based on approximate Bayesian computations with sequential Monte Carlo (ABC-SMC) which allows for circumventing the intractability of the likelihood. To find informative summary statistics, crucial for the performance of ABC, I will discuss a Gaussian process (GP) regression model. The interpolation provided by the GP regression turns out useful on its own merits: it relatively accurately estimates the parameters of the NSPDE model and allows for uncertainty quantification, at a very low computational cost. The proposed methodology allows for a considerable part of computations to be completed before having observed any data, providing a practical toolbox to experimental scientists whose modes of operation frequently involve experiments and inference taking place at distinct points in time. In an application to externally provided synthetic data I will demonstrate that the correction provided by ABC-SMC is essential for accurate estimation of some of the NSPDE model parameters and for more flexible uncertainty quantification.

5 February 2021

**Mihaela Paun**, University of Glasgow

- There have recently been impressive advancements in the mathematical modelling of complex cardio-vascular systems. However, parameter estimation and uncertainty quantification are still challenging problems in this field. In my talk, I will describe a study that uses Bayesian inference to quantify the uncertainty of model parameters and haemodynamic predictions in a o-d

**Ernst C. Wit**, Università della Svizzera italiana - *Causal regularization*

- Causality is the holy grail of science, but for millennia humankind has struggled to operationalize it efficiently. In recent decades, a number of more successful ways of dealing with causality in practice, such as propensity score matching, the PC algorithm and invariant causal prediction, have been introduced. However, approaches that use a graphical model formulation tend to struggle with the computational complexity whenever the system gets large. Finding the causal structure typically becomes a combinatorially hard problem. In our causal inference approach, we build on ideas present in invariant causal prediction and the causal Dantzig, replacing the combinatorial optimization with a continuous optimization using a form of causal regularization. This makes our method applicable to large systems. Furthermore, our method allows a precise formulation of the trade-off between in-sample and out-of-sample prediction error. This is joint work with Lucas Kania.

**Guido Sanguinetti**, SISSA - *Robustness and interpretability of Bayesian neural networks*

- Deep neural networks have surprised the world in the last decade with their successes in a number of difficult machine learning tasks. However, while their successes are now part of everyday life, DNNs also exhibit some profound weaknesses: chief amongst them, in my opinion, their black box nature and brittleness under adversarial attacks. In this talk, I will discuss a geometric perspective which sheds light on the origins of their vulnerability under adversarial attack, and has also considerable implications for their interpretability. I will also show how a Bayesian treatment of DNNs provably avoids adversarial weaknesses, and improves interpretability (in a saliency context).

- Refs: Carbone et al., NeurIPS 2020, https://arxiv.org/abs/2002.04359; Carbone et al., under review, https://arxiv.org/abs/2102.11010