15:00 - 16:00
Research and Teaching in Statistical and Data Sciences - Virtual Seminar Series
This diverse seminar series will highlight novel advances in methodology and application in statistics and data science, and will take the place of the University of Glasgow Statistics Group seminar during this period of remote working. We welcome all interested attendees at Glasgow and further afield. For more information please see the University of Glasgow webpage
To sign up for this seminar series, please register here
Call details will be sent out 30 minutes before the start of each seminar
These seminars are recorded. All recordings can be found here.
22 January 2021
Luca Del Core (Groningen) - Stochastic modelling of COVID-19 spread in Italy
- Italy was particularly hard hit during the COVID-19 pandemic, and the aim of this work is to describe the dynamics of infections within each region and the geographical spread of the virus over time. To this end we extend the standard SIRD framework by introducing pairwise interaction terms between the region-specific infected compartments. That is, we check how the increments in the number of infected people in one region depend on those in another region. This information can be used as a proxy for how the epidemic spreads geographically. Furthermore, we make all the transition parameters time-dependent so as to capture potential changes in the epidemic spread due to external factors such as containment measures. We also include the number of tests performed as a predictor for the infection parameters. We then use the master equation as a probabilistic model to describe the evolution of the temporal process of the states, and we focus on its first two moments by considering a local linear approximation. Finally, we infer the corresponding parameters using a step-wise optimisation procedure which leads to the maximum likelihood estimate.
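A minimal sketch of the extended SIRD idea described above: two regions, with a pairwise interaction term coupling the region-specific infected compartments. All parameter values are illustrative, and the talk's time-varying rates and master-equation formulation are omitted.

```python
import numpy as np

def sird_two_region(beta, gamma, mu, coupling, S0, I0, T=100.0, dt=0.1):
    """Euler integration of a two-region SIRD model with cross-region coupling."""
    S, I = np.array(S0, float), np.array(I0, float)
    R, D = np.zeros(2), np.zeros(2)
    N = S + I + R + D  # region populations (constant; deaths are included)
    for _ in range(int(round(T / dt))):
        # force of infection: within-region term plus cross-region interaction
        lam = beta * I / N + coupling @ (I / N)
        dS = -lam * S
        dI = lam * S - (gamma + mu) * I
        dR = gamma * I
        dD = mu * I
        S, I, R, D = S + dt * dS, I + dt * dI, R + dt * dR, D + dt * dD
    return S, I, R, D

coupling = np.array([[0.0, 0.05],   # how region 2's infections feed region 1
                     [0.05, 0.0]])  # and vice versa (hypothetical values)
S, I, R, D = sird_two_region(beta=0.3, gamma=0.1, mu=0.01, coupling=coupling,
                             S0=[9990.0, 10000.0], I0=[10.0, 0.0])
```

Even with region 2 starting infection-free, the coupling term seeds its epidemic from region 1.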
29 January 2021
Agnieszka Borowska (Glasgow): Gaussian process enhanced semi-automatic approximate Bayesian computation: parameter inference in a stochastic differential equation system for chemotaxis
- Chemotaxis is a type of cell movement in response to a chemical stimulus which plays a key role in multiple biophysical processes, such as embryogenesis and wound healing, and which is crucial for understanding metastasis in cancer research. In the literature, chemotaxis has been modelled using biophysical models based on systems of nonlinear stochastic partial differential equations (NSPDEs), which are known to be challenging for statistical inference due to the intractability of the associated likelihood and the high computational costs of their numerical integration. Therefore, data analysis in this context has been limited to comparing predictions from NSPDE models to laboratory data using simple descriptive statistics. In my talk, I will present a statistically rigorous framework for parameter estimation in complex biophysical systems described by NSPDEs such as that of chemotaxis. I will adopt a likelihood-free approach based on approximate Bayesian computation with sequential Monte Carlo (ABC-SMC), which circumvents the intractability of the likelihood. To find informative summary statistics, crucial for the performance of ABC, I will discuss a Gaussian process (GP) regression model. The interpolation provided by the GP regression turns out to be useful in its own right: it estimates the parameters of the NSPDE model relatively accurately and allows for uncertainty quantification, at a very low computational cost. The proposed methodology allows a considerable part of the computations to be completed before any data have been observed, providing a practical toolbox to experimental scientists whose modes of operation frequently involve experiments and inference taking place at distinct points in time. In an application to externally provided synthetic data I will demonstrate that the correction provided by ABC-SMC is essential for accurate estimation of some of the NSPDE model parameters and for more flexible uncertainty quantification.
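The likelihood-free idea can be illustrated with the simplest member of the ABC family: rejection ABC for the mean of a Gaussian, with the sample mean as summary statistic. The talk uses ABC-SMC with GP-learned summaries for an NSPDE model; this toy model and its tolerance are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    """Toy simulator: n draws from N(theta, 1)."""
    return rng.normal(theta, 1.0, size=n)

observed = simulate(2.0)     # pretend data with true theta = 2
s_obs = observed.mean()      # summary statistic of the observed data

accepted = []
for _ in range(20000):
    theta = rng.uniform(-5, 5)        # draw from a flat prior
    s_sim = simulate(theta).mean()    # simulate and summarise
    if abs(s_sim - s_obs) < 0.1:      # keep theta if summaries are close
        accepted.append(theta)

posterior_mean = np.mean(accepted)    # approximate posterior mean
```

ABC-SMC replaces the fixed tolerance with a decreasing schedule and reweights particles between rounds, which is far more efficient than this plain rejection loop.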
5 February 2021
Mihaela Paun (Glasgow): Assessing model mismatch and model selection in a Bayesian uncertainty quantification analysis of a fluid-dynamics model of pulmonary blood circulation
- There have recently been impressive advancements in the mathematical modelling of complex cardio-vascular systems. However, parameter estimation and uncertainty quantification are still challenging problems in this field. In my talk, I will describe a study that uses Bayesian inference to quantify the uncertainty of model parameters and haemodynamic predictions in a one-dimensional pulmonary circulation model based on an integration of mouse haemodynamic and micro-computed tomography imaging data. I will discuss an often neglected, though important source of uncertainty: in the mathematical model form due to the discrepancy between the model and the reality, and in the measurements due to the wrong noise model (jointly called 'model mismatch'). I will demonstrate that minimising the mean squared error between the measured and the predicted data (the conventional method) in the presence of model mismatch leads to biased and overly confident parameter estimates and haemodynamic predictions. I will show that the proposed method allowing for model mismatch, which is represented with Gaussian Processes, corrects the bias. Additionally, I will compare a linear and a non-linear wall model, as well as models with different vessel stiffness relations. I'll use formal model selection analysis based on the Watanabe Akaike Information Criterion to select the model that best predicts the pulmonary haemodynamics. Results show that the non-linear pressure-area relationship with stiffness dependent on the unstressed radius best predicts the data measured in a control mouse.
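The bias from ignoring model mismatch is easy to exhibit in miniature: below, the true response contains a smooth discrepancy term the fitted model omits, and plain least squares absorbs it into the parameter estimate. The talk models the discrepancy with a Gaussian process; this sketch (with made-up numbers) only shows why a correction is needed.

```python
import numpy as np

rng = np.random.default_rng(8)

x = np.linspace(0.0, 1.0, 200)
theta_true = 2.0
discrepancy = 0.5 * np.sin(2 * np.pi * x)           # unmodelled structure
y = theta_true * x + discrepancy + rng.normal(0, 0.05, x.size)

# fitted model: y = theta * x, with no discrepancy term
theta_hat = (x @ y) / (x @ x)   # ordinary least squares through the origin
```

Here `theta_hat` lands well below 2.0 because the least-squares fit trades parameter accuracy for absorbing the sine-shaped discrepancy; a GP discrepancy term would soak up that structure instead.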
12 February 2021
Bernal Arzola (Groningen): Improved network reconstruction with shrinkage-based Gaussian graphical models
- Gaussian graphical models (GGMs) are undirected network models where the nodes represent the random variables and the edges their partial correlations. GGMs are straightforward to implement, easy to interpret, and have an acceptable computational cost. These advantages have made GGMs popular for reconstructing gene regulatory networks from high-throughput data. In this talk, I will discuss the reconstruction of GGMs in the high-dimensional case (n << p). In this scenario, the inference becomes challenging and requires regularization/shrinkage. I will present a novel approach to infer GGM structures based on the Ledoit-Wolf (LW) shrinkage, which accounts for the sample size, the number of variables, and the shrinkage value.
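A sketch of why shrinkage is needed in the n << p regime: the sample covariance is singular, so it is shrunk toward a scaled identity (the Ledoit-Wolf target; here the shrinkage weight is fixed by hand rather than estimated with the LW formula), inverted, and the partial correlations read off as edge weights.

```python
import numpy as np

rng = np.random.default_rng(1)

n, p = 20, 50                        # fewer samples than variables
X = rng.normal(size=(n, p))
X -= X.mean(axis=0)

S = X.T @ X / n                      # sample covariance: singular, rank < p
alpha = 0.3                          # hypothetical shrinkage intensity
target = np.trace(S) / p * np.eye(p)
S_shrunk = (1 - alpha) * S + alpha * target   # full rank, invertible

Theta = np.linalg.inv(S_shrunk)      # precision matrix
d = np.sqrt(np.diag(Theta))
partial_corr = -Theta / np.outer(d, d)        # off-diagonal partial correlations
np.fill_diagonal(partial_corr, 1.0)

# threshold small partial correlations to obtain the network's edge set
edges = np.abs(partial_corr) > 0.2
```

Without the shrinkage step, `np.linalg.inv(S)` would fail (or be numerically meaningless), which is exactly the obstacle the LW-based approach addresses.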
19 February 2021
Mu Niu (Glasgow): Intrinsic Gaussian processes on nonlinear manifolds and point clouds
- We propose a class of intrinsic Gaussian processes (GPs) for interpolation and regression on manifolds, with a primary focus on complex constrained domains or irregularly shaped spaces arising as subsets or submanifolds of R, R2, R3 and beyond. For example, intrinsic GPs can accommodate spatial domains arising as complex subsets of Euclidean space. Intrinsic GPs respect the potentially complex boundary or interior conditions as well as the intrinsic geometry of the spaces. The key novelty of the proposed approach is to utilize the relationship between heat kernels and the transition density of Brownian motion on manifolds to construct and approximate valid and computationally feasible covariance kernels.
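The heat-kernel/Brownian-motion relationship mentioned above can be illustrated in the simplest constrained domain: the unit interval with reflecting boundaries. There the heat kernel equals the transition density of reflected Brownian motion, so it can be approximated by simulating paths and histogramming their endpoints (a Monte Carlo stand-in for the talk's kernel constructions; all settings are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)

def reflect(x):
    """Fold a real number into [0, 1] (reflecting boundary conditions)."""
    x = np.mod(x, 2.0)
    return np.where(x > 1.0, 2.0 - x, x)

def heat_kernel_mc(x0, t, n_paths=200_000, n_steps=100, n_bins=20):
    """Estimate the heat kernel p_t(x0, .) on [0, 1] from reflected BM paths."""
    dt = t / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        x = reflect(x + np.sqrt(dt) * rng.normal(size=n_paths))
    hist, _ = np.histogram(x, bins=n_bins, range=(0.0, 1.0), density=True)
    return hist

density = heat_kernel_mc(x0=0.275, t=0.01)
```

Evaluating this at pairs of points yields a covariance kernel that automatically respects the boundary, which is the idea the talk develops for general manifolds and point clouds.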
26 February 2021
Sara Wade & Karla Monterrubio-Gomez (Edinburgh): On MCMC for variationally sparse Gaussian processes: A pseudo-marginal approach
- Machine learning is changing the world we live in at a breakneck pace. From image recognition and generation, to the deployment of recommender systems, it seems to be breaking new ground constantly and influencing almost every aspect of our lives. In this seminar series we ask distinguished speakers to comment on what role Bayesian statistics and Bayesian machine learning have in this rapidly changing landscape. Do we need to optimally process information or borrow strength in the big data era? Are philosophical concepts such as coherence and the likelihood principle relevant when you are running a large scale recommender system? Are variational approximations, MCMC or EP appropriate in a production environment? Can I use the propensity score and call myself a Bayesian? How can I elicit a prior over a massive dataset? Is Bayes a reasonable theory of how to be perfect but a hopeless theory of how to be good? Do we need Bayes when we can just A/B test? What combinations of pragmatism and idealism can be used to deploy Bayesian machine learning in a large scale live system? We ask Bayesian believers, Bayesian pragmatists and Bayesian sceptics to comment on all of these subjects and more.
5 March 2021
Robert Gramacy (Virginia Tech Department of Statistics): Replication or Exploration? Sequential Design for Stochastic Simulation Experiments
- We investigate the merits of replication, and provide methods that search for optimal designs (including replicates), in the context of noisy computer simulation experiments. We first show that replication offers the potential to be beneficial from both design and computational perspectives, in the context of Gaussian process surrogate modeling. We then develop a lookahead based sequential design scheme that can determine if a new run should be at an existing input location (i.e. replicate) or at a new one (explore). When paired with a newly developed heteroskedastic Gaussian process model, our dynamic design scheme facilitates learning of signal and noise relationships which can vary throughout the input space. We show that it does so efficiently, on both computational and statistical grounds. In addition to illustrative synthetic examples, we demonstrate performance on two challenging real-data simulation experiments, from inventory management and epidemiology.
12 March 2021
Chris Williams (Edinburgh): Multi-task Dynamical Systems
- Time series datasets are often composed of a variety of sequences from the same domain, but from different entities, such as individuals, products, or organizations. We are interested in how time series models can be specialized to individual sequences (capturing the specific characteristics) while still retaining statistical power by sharing commonalities across the sequences. We describe the multi-task dynamical system (MTDS)---a general methodology for extending multi-task learning (MTL) to time series models. Our approach endows dynamical systems with a set of hierarchical latent variables which can modulate all model parameters. To our knowledge, this is a novel development of MTL, and applies to time series both with and without control inputs. We apply the MTDS to motion-capture data of people walking in various styles using a multi-task recurrent neural network (RNN), and to patient drug-response data using a multi-task pharmacodynamic model. Joint work with Alex Bird and Chris Hawthorne
19 March 2021
Ernst C. Wit (Università della Svizzera italiana): Causal regularization
- Causality is the holy grail of science, but for millennia humankind has struggled to operationalize it efficiently. In recent decades, a number of more successful ways of dealing with causality in practice, such as propensity score matching, the PC algorithm and invariant causal prediction, have been introduced. However, approaches that use a graphical model formulation tend to struggle with the computational complexity whenever the system gets large. Finding the causal structure typically becomes a combinatorially hard problem. In our causal inference approach, we build on ideas from invariant causal prediction and the causal Dantzig, replacing the combinatorial optimization with a continuous optimization using a form of causal regularization. This makes our method applicable to large systems. Furthermore, our method allows a precise formulation of the trade-off between in-sample and out-of-sample prediction error. This is joint work with Lucas Kania.
26 March 2021
Guido Sanguinetti (SISSA, Trieste, Italy): Robustness in Bayesian neural networks
- Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications. Despite significant efforts, both practical and theoretical, the problem remains open. In this paper, we analyse the geometry of adversarial attacks in the large-data, overparametrized limit for Bayesian Neural Networks (BNNs). We show that, in the limit, vulnerability to gradient-based attacks arises as a result of degeneracy in the data distribution, i.e., when the data lies on a lower-dimensional submanifold of the ambient space. As a direct consequence, we demonstrate that in the limit BNN posteriors are robust to gradient-based adversarial attacks. Experimental results on the MNIST and Fashion MNIST datasets with BNNs trained with Hamiltonian Monte Carlo and Variational Inference support this line of argument, showing that BNNs can display both high accuracy and robustness to gradient based adversarial attacks.
23 April 2020
Neil Chada (National University of Singapore) - Advancements of non-Gaussian random fields for statistical inversion
- Developing informative priors for Bayesian inverse problems is an important direction, which can help quantify information on the posterior. In this talk we introduce a new class of priors for inversion based on alpha-stable sheets, which incorporates multiple known processes such as the Gaussian and Cauchy processes. We analyze various convergence properties, which are achieved through the different representations these sheets can take. Other aspects we address are well-posedness of the inverse problem and finite-dimensional approximations. To complement the analysis we provide some connections with machine learning, which allow us to use sampling-based MCMC schemes. We conclude the talk with some numerical experiments highlighting the robustness of the established connection, on various inverse problems arising in regression and PDEs.
14 May 2020
Roberta Pappadà (University of Trieste) - Consensus clustering based on pivotal methods
- Despite its wide use, one major limitation of the K-means clustering algorithm is its sensitivity to the initial seeding used to produce the final partition. We propose a modified version of the classical approach, which exploits the information contained in a co-association matrix obtained from clustering ensembles. Our proposal is based on the identification of a set of data points (pivotal units) that are representative of the group they belong to. The presented approach can thus be viewed as a possible strategy to perform consensus clustering. The selection of pivotal units was originally employed for solving the so-called label-switching problem in Bayesian estimation of finite mixture models. Different criteria for identifying the pivots are discussed and compared. We investigate the performance of the proposed algorithm via simulation experiments and comparison with other consensus methods available in the literature.
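The co-association matrix at the heart of this approach can be sketched as follows: run K-means from many random seedings and record how often each pair of points lands in the same cluster. The talk's pivot-selection criteria are not reproduced here; the data, K, and number of runs are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def kmeans(X, k, n_iter=20):
    """Plain Lloyd's algorithm from a random seeding of k data points."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# two well-separated Gaussian blobs as toy data
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])

n_runs = 50
coassoc = np.zeros((len(X), len(X)))
for _ in range(n_runs):
    labels = kmeans(X, k=2)
    coassoc += labels[:, None] == labels[None, :]
coassoc /= n_runs    # fraction of runs in which each pair co-clusters
```

Points whose rows of `coassoc` are stable and close to their group's pattern are natural candidates for pivotal units, and the matrix itself can be fed to any consensus-clustering step.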
21 May 2020
Ana Basiri (UCL) - Who Are the "Crowd"? Learning from Large but Patchy Samples
- This talk will look at the challenges of crowdsourced/self-reporting data, such as missingness and biases in ‘new forms of data’ and consider them as a useful source of data itself. A few applications and examples of these will be discussed, including extracting the 3D map of cities using the patterns of blockage, reflection, and attenuation of the GPS signals (or other similar signals), that are contributed by the volunteers/crowd. In the era of big data, open data, social media and crowdsourced data when “we are drowning in data”, gaps and unavailability, representativeness and bias issues associated with them may indicate some hidden problems or reasons allowing us to understand the data, society and cities better.
4 June 2020
Colin Gillespie (University of Newcastle) - Getting the most out of other people's R sessions
- Have you ever wondered how you could hack other people's R sessions? Well, I did, and discovered that it wasn't that hard! In this talk, I discuss a few ways I got people to run arbitrary, and hence very dangerous, R scripts. This is certainly worrying now that we have all moved to working from home.
18 June 2020
Jo Eidsvik (NTNU) - Autonomous Oceanographic Sampling Designs Using Excursion Sets for Multivariate Gaussian random fields
- Improving and optimizing oceanographic sampling is a crucial task for marine science and maritime management. Faced with limited resources to understand processes in the water-column, the combination of statistics and autonomous robotics provides new opportunities for experimental designs. In this work we develop methods for efficient spatial sampling applied to the mapping of coastal processes by providing informative descriptions of spatial characteristics of ocean phenomena. Specifically, we define a design criterion based on improved characterization of the uncertainty in the excursions of vector-valued Gaussian random fields, and derive tractable expressions for the expected Bernoulli variance reduction in such a framework. We demonstrate how this criterion can be used to prioritize sampling efforts at locations that are ambiguous, making exploration more effective. We use simulations to study the properties of methods and to compare them with state-of-the-art approaches, followed by results from field deployments with an autonomous underwater vehicle as part of a case study mapping the boundary of a river plume. The results demonstrate the potential of combining statistical methods and robotic platforms to effectively inform and execute data-driven environmental sampling.
9 July 2020
Vianey Leos-Barajas (NCSU) - Spatially-coupled hidden Markov models for short-term wind speed forecasting
- Hidden Markov models (HMMs) provide a flexible framework to model time series data where the observation process, Yt, is taken to be driven by an underlying latent state process, Zt. In this talk, we will focus on discrete-time, finite-state HMMs as they provide a flexible framework that facilitates extending the basic structure in many interesting ways. HMMs can accommodate multivariate processes by (i) assuming that a single state governs the M observations at time t, (ii) assuming that each observation process is governed by its own HMM, irrespective of what occurs elsewhere, or (iii) a balance between the two, as in the coupled HMM framework. Coupled HMMs assume that a collection of M observation processes is governed by its respective M state processes. However, the mth state process at time t, Zm,t, not only depends on Zm,t−1 but also on the collection of state processes Z−m,t−1. We introduce spatially-coupled hidden Markov models whereby the state processes interact according to an imposed spatial structure and the observations are collected at S spatial locations. We outline an application (in progress) to short-term forecasting of wind speed using data collected across multiple wind turbines at a wind farm.
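The single-chain machinery that the spatially-coupled extension builds on can be sketched directly: a discrete-time, finite-state HMM with Gaussian emissions and the scaled forward algorithm for the log-likelihood. The interacting state processes of the talk are not reproduced; all parameter values are illustrative.

```python
import numpy as np

A = np.array([[0.9, 0.1],      # "sticky" state transition matrix
              [0.2, 0.8]])
means = np.array([0.0, 5.0])   # state-dependent emission means (sd = 1)
pi = np.array([0.5, 0.5])      # initial state distribution

def normal_pdf(y, mu, sd=1.0):
    return np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def forward_loglik(y):
    """Log-likelihood of an observation sequence via the scaled forward pass."""
    alpha = pi * normal_pdf(y[0], means)
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for obs in y[1:]:
        alpha = (alpha @ A) * normal_pdf(obs, means)   # predict, then update
        c = alpha.sum()
        loglik += np.log(c)                            # accumulate scaling factors
        alpha = alpha / c
    return loglik

y_runs = np.array([0.1, -0.2, 0.3, 5.1, 4.8, 5.2, 0.0])   # long state runs
y_flip = np.array([0.1, 5.1, -0.2, 4.8, 0.3, 5.2, 0.0])   # constant switching
```

Because `A` is sticky, the sequence with long runs is assigned a much higher likelihood than the rapidly switching one, even though both contain the same observation values.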
6 August 2020
Helen Ogden (University of Southampton) - Towards More Flexible Models for Count Data
- Count data are widely encountered across a range of application areas, including medicine, engineering and ecology. Many of the models used for the statistical analysis of count data are quite simple and make strong assumptions about the data generating process, and it is common to encounter situations in which these models fail to fit data well. I will review various existing models for count data, and describe some simple scenarios where each of these models fail. I will describe current work on an extension to existing Poisson mixture models, and demonstrate the performance of this new class of models in some simple examples.
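One of the simple baselines the talk sets out to extend can be sketched concretely: a two-component Poisson mixture fitted by EM. The component count, starting values and simulated data are all illustrative.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(4)

def poisson_logpmf(y, lam):
    """Log pmf of Poisson(lam) at integer counts y."""
    return y * np.log(lam) - lam - np.array([lgamma(v + 1.0) for v in y])

# toy counts: a mixture of Poisson(2) and Poisson(10)
y = np.concatenate([rng.poisson(2.0, 300), rng.poisson(10.0, 200)])

w = np.array([0.5, 0.5])     # mixture weights (starting values)
lam = np.array([1.0, 5.0])   # component rates (starting values)
for _ in range(200):
    # E-step: responsibilities r[k, i] = P(component k | y_i)
    logp = np.stack([np.log(w[k]) + poisson_logpmf(y, lam[k]) for k in range(2)])
    logp -= logp.max(axis=0)            # stabilise before exponentiating
    r = np.exp(logp)
    r /= r.sum(axis=0)
    # M-step: re-estimate weights and rates from the responsibilities
    w = r.mean(axis=1)
    lam = (r * y).sum(axis=1) / r.sum(axis=1)
```

The mixture captures the overdispersion a single Poisson cannot; the talk's point is that even this class fails in some scenarios, motivating a richer extension.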
17 September 2020
Andrew Zammit Mangion (University of Wollongong) - Statistical Machine Learning for Spatio-Temporal Forecasting
- Conventional spatio-temporal statistical models are well-suited for modelling and forecasting using data collected over short time horizons. However, they are generally time-consuming to fit, and often do not realistically encapsulate temporally-varying dynamics. Here, we tackle these two issues by using a deep convolutional neural network (CNN) in a hierarchical statistical framework, where the CNN is designed to extract process dynamics from the process' most recent behaviour. Once the CNN is fitted, probabilistic forecasting can be done extremely quickly online using an ensemble Kalman filter with no requirement for repeated parameter estimation. We conduct an experiment where we train the model using 13 years of daily sea-surface temperature data in the North Atlantic Ocean. Forecasts are seen to be accurate and calibrated. A key advantage of the approach is that the CNN provides a global prior model for the dynamics that is realistic, interpretable, and computationally efficient to forecast with. We show the versatility of the approach by successfully producing 10-minute nowcasts of weather radar reflectivities in Sydney using the same model that was trained on daily sea-surface temperature data in the North Atlantic Ocean. This is joint work with Christopher Wikle, University of Missouri.
25 September 2020
Ed Hill (University of Warwick) - Predictions of COVID-19 dynamics in the UK: short-term forecasting, analysis of potential exit strategies and impact of contact networks
- Regarding the future course of the COVID-19 outbreak in the UK, mathematical models have provided, and continue to provide, short and long term forecasts to support evidence-based policymaking. We present a deterministic, age-structured transmission model for SARS-CoV-2 that uses real-time data on confirmed cases requiring hospital care and mortality to provide predictions on epidemic spread in ten regions of the UK. The model captures a range of age-dependent heterogeneities, reduced transmission from asymptomatic infections and is fit to the key epidemic features over time. We illustrate how the model has been used to generate short-term predictions and assess potential lockdown exit strategies. As steps are taken to relax social distancing measures, questions also surround the ramifications on community disease spread of workers returning to the workplace and students returning to university. To study these aspects, we present a network model to capture the transmission of SARS-CoV-2 over overlapping sets of networks in household, social and work/study settings.
2 October 2020
Eleni Matechou (University of Kent) - Environmental DNA as a monitoring tool at a single and multi-species level
- Environmental DNA (eDNA) is a survey tool with rapidly expanding applications for assessing presence of a wildlife species at surveyed sites. eDNA surveys consist of two stages: stage 1, when a sample is collected from a site, and stage 2, when the sample is analysed in the lab for presence of species' DNA. The methods were originally developed to target particular species (single-species), but can now be used to obtain a list of species at each surveyed site (multi-species/metabarcoding). In this talk, I will present a novel Bayesian model for analysing single-species eDNA data, while accounting for false positive and false negative errors, which are known to be non-negligible, in both stages of eDNA surveys. All model parameters can be modelled as functions of covariates and the proposed methodology allows us to perform efficient Bayesian variable selection that does not require the use of trans-dimensional algorithms. I will also discuss joint species distribution models as the starting point for modelling multi-species eDNA data and will outline the next steps required to obtain a unifying modelling framework for eDNA surveys.
9 October 2020
Daniela Castro Camilo (University of Glasgow) - Bayesian space-time gap filling for inference on extreme hot-spots: an application to Red Sea surface temperatures
- We develop a method for probabilistic prediction of extreme value hot-spots in a spatio-temporal framework, tailored to big datasets containing important gaps. In this setting, direct calculation of summaries from data, such as the minimum over a space-time domain, is not possible. To obtain predictive distributions for such cluster summaries, we propose a two-step approach. We first model marginal distributions with a focus on accurate modeling of the right tail and then, after transforming the data to a standard Gaussian scale, we estimate a Gaussian space-time dependence model defined locally in the time domain for the space-time subregions where we want to predict. In the first step, we detrend the mean and standard deviation of the data and fit a spatially resolved generalized Pareto distribution to apply a correction of the upper tail. To ensure spatial smoothness of the estimated trends, we either pool data using nearest-neighbor techniques, or apply generalized additive regression modeling. To cope with high space-time resolution of data, the local Gaussian models use a Markov representation of the Matérn correlation function based on the stochastic partial differential equations (SPDE) approach. In the second step, they are fitted in a Bayesian framework through the integrated nested Laplace approximation implemented in R-INLA. Finally, posterior samples are generated to provide statistical inferences through Monte-Carlo estimation. Motivated by the 2019 Extreme Value Analysis data challenge, we illustrate our approach to predict the distribution of local space-time minima in anomalies of Red Sea surface temperatures, using a gridded dataset (11,315 days, 16,703 pixels) with artificially generated gaps. In particular, we show the improved performance of our two-step approach over a purely Gaussian model without tail transformations.
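The marginal tail step of the two-step approach can be sketched for a single site: fit a generalized Pareto distribution to exceedances over a high threshold and use it for upper-tail probabilities. The spatially resolved GPD, the Gaussian-scale transformation and the SPDE space-time model are omitted, and the data below are a toy stand-in for detrended anomalies.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)

x = rng.standard_exponential(10_000)     # toy "detrended" series
u = np.quantile(x, 0.95)                 # high threshold (95th percentile)
exceedances = x[x > u] - u               # peaks over the threshold

# maximum-likelihood GPD fit to the exceedances (location fixed at 0)
shape, _, scale = genpareto.fit(exceedances, floc=0)

def tail_prob(z):
    """P(X > z) for z above the threshold, via the fitted GPD tail."""
    return 0.05 * genpareto.sf(z - u, shape, loc=0, scale=scale)
```

For exponential data the fitted shape is near zero and the scale near one, so the fitted tail closely tracks the true exceedance probabilities; in the Red Sea application this correction is what keeps the subsequent Gaussian model honest in the upper tail.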
16 October 2020
Daniel Lawson (University of Bristol) - CLARITY - Comparing heterogeneous data using dissimiLARITY
- Integrating datasets from different disciplines is hard because the data are often qualitatively different in meaning, scale, and reliability. When two datasets describe the same entities, many scientific questions can be phrased around whether the similarities between entities are conserved. Our method, CLARITY, quantifies consistency across datasets, identifies where inconsistencies arise, and aids in their interpretation. We explore three diverse comparisons: Gene Methylation vs Gene Expression, evolution of language sounds vs word use, and country-level economic metrics vs cultural beliefs. The non-parametric approach is robust to noise and differences in scaling, and makes only weak assumptions about how the data were generated. It operates by decomposing similarities into two components: the `structural' component analogous to a clustering, and an underlying `relationship' between those structures. This allows a `structural comparison' between two similarity matrices using their predictability from `structure'. This presentation describes work presented in arXiv:2006.00077 with software; the work can be found here.
22 October 2020
Charlotte Jones-Todd (University of Auckland) - Modelling systematic effects and latent phenomena in point referenced data
- The spatial location and time of events (or objects) is the currency that point process statistics invests to estimate the drivers of the intensity, or rate of occurrence, of those events. Yet, the assumed processes giving rise to the observed data typically fail to represent the full complexity of the driving mechanisms. Ignoring spatial or temporal dependence between events leads to incorrect inference, as does assuming the wrong dependency structure. Latent Gaussian models are a flexible class of model that accounts for dependency structures in a 'hide all ills' fashion; the stochastic structures in these models absorb and amalgamate the underlying, unaccounted-for mechanisms leading to the observed data. In this talk I will introduce this class of model and discuss recent work using latent Gaussian fields to model the fluctuations in the data that cannot otherwise be accounted for.
30 October 2020
Theresa Smith (University of Bath) - A collaborative project to monitor and improve engagement in talking therapies
- Over the past two years, I have been working on a joint research project between the University of Bath and the software company Mayden to develop and test models to predict whether a patient in NHS England's Improving Access to Psychological Therapy (IAPT) programme will attend their therapy appointments, allowing for targeted interventions to be developed. Currently only a third of patients referred to IAPT complete their treatment and of these only half reach recovery. Given that nearly two thirds of people in the UK report having experienced mental health problems, these rates are concerning. In this talk, I'll give an overview of the collaboration and present the co-development and findings of two recent papers investigating factors associated with engagement in IAPT services:
Predicting patient engagement in IAPT services: a statistical analysis of electronic health records (doi: 10.1136/ebmental-2019-300133)
Impact of COVID-19 on Primary Care Mental Health Services: A Descriptive, Cross-Sectional Timeseries of Electronic Healthcare Records (doi: 10.1101/2020.08.15.20175562)
6 November 2020
Manuele Leonelli (IE University) - Diagonal distributions
- Diagonal distributions are an extension of marginal distributions, which can be used to summarize and efficiently visualize the main features of multivariate distributions in arbitrary dimensions. The main diagonal is studied in detail, which consists of a mean-constrained univariate distribution function on [0, 1], whose variance connects with Spearman's rho, and whose mass at the endpoints 0 and 1 offers insights on the strength of tail dependence. To learn about the main diagonal density from data, histogram and kernel-based methods are introduced that take advantage of auxiliary information in the form of a moment constraint which diagonal distributions must obey. An application is given illustrating how diagonal densities can be used to contrast the diversification of a portfolio based on FAANG stocks against one based on crypto-assets.
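An empirical sketch, under the assumption that the main diagonal is the copula diagonal section delta(u) = C(u, ..., u), i.e. the distribution of the componentwise maximum after transforming each margin to ranks. The talk's mean-constrained histogram and kernel estimators are not reproduced; this only illustrates the object itself on toy data.

```python
import numpy as np

rng = np.random.default_rng(6)

n, d = 5000, 3
corr = np.array([[1.0, 0.5, 0.5],
                 [0.5, 1.0, 0.5],
                 [0.5, 0.5, 1.0]])
X = rng.normal(size=(n, d)) @ np.linalg.cholesky(corr).T   # toy dependent data

# probability integral transform via ranks (empirical margins)
U = np.argsort(np.argsort(X, axis=0), axis=0) / (n - 1)
M = U.max(axis=1)        # sample from the main diagonal distribution

grid = np.linspace(0.0, 1.0, 11)
diag_cdf = np.array([(M <= u).mean() for u in grid])   # empirical delta(u)
```

For positively dependent data the mean of `M` sits between the comonotone value 1/2 and the independence value d/(d+1), which hints at how moments of the diagonal encode dependence strength.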
20 November 2020
Nicole Augustin (University of Edinburgh) - Introduction of standardised tobacco packaging and minimum excise tax in the UK: a prospective study
- Standardised packaging for factory made and roll your own tobacco was implemented in the UK in May, 2017, alongside a minimum excise tax for factory made products. As other jurisdictions attempt to implement standardised packaging, the tobacco industry continues to suggest that it would be counterproductive, in part by leading to falls in price due to commoditisation. Here, we assess the impact of the introduction of these policies on the UK tobacco market. We carried out a prospective study of UK commercial electronic point-of-sale data from 11 constituent geographic areas. Data were available for each tobacco product (or Stock Keeping Unit (SKU)): the tobacco brand, brand family, brand variant, and specific features of the pack. For each SKU, three years (May 2015 to April 2018) of monthly data on volume of sales, sales prices, and extent of distribution of sales within the 11 UK geographical areas were available. The main outcomes were changes in sales volumes, volume-weighted real prices, and tobacco industry revenue. To estimate temporal trends of monthly price per stick, revenue and volume sold we used additive mixed models. In the talk we will cover some of the statistically interesting details on data preparation, model choice, trend estimation and presentation of model results. We will also present the main results and talk about limitations. This is joint work with Rosemary Hiscock, Rob Branston and Anna Gilmore. The project was funded by Cancer Research UK.
Friday 27 November 15:00-16:00
Mark Brewer (BioSS) & Marcel van Oijen (CEH) - Drought risk analysis for forested landscapes: project PRAFOR
- This NERC-funded project aims to extend theory for probabilistic risk analysis of continuous systems, test its use against forest data, use process models to predict future risks, and develop decision-support tools. Risk is commonly defined as the expectation value for loss. Most risk theory is developed for discrete hazards such as accidents, disasters and other forms of sudden system failure, and not for systems where the hazard variable is always present and continuously varying, with matching continuous system response. Risks from such continuous hazards (levels of water, pollutants) are not associated with sudden discrete events, but with extended periods of time during which the hazard variable exceeds a threshold. To manage such risks, we need to know whether we should aim to reduce the probability of hazard threshold exceedance or the vulnerability of the system. In earlier work, we showed that there is only one possible definition of vulnerability that allows formal decomposition of risk as the product of hazard probability and system vulnerability. We have used this approach to analyse risks from summer droughts to the productivity of vegetation across Europe under current and future climatic conditions; this showed that climate change will likely lead to greatest drought risks in southern Europe, primarily because of increased hazard probability rather than significant changes in vulnerability. We plan to improve on this earlier work by: adding exposure to hazard; quantifying uncertainties in our risk estimates; relaxing assumptions via Bayesian hierarchical modelling; testing our approach both on observational data from forests in the UK, Spain and Finland and on simulated data from process-based modelling of forest response to climate change; embedding the approach in Bayesian decision theory; and developing an interactive web application as a tool for preliminary exploration of risk and its components to support decision-making.
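The risk decomposition described above can be verified numerically: for a continuous hazard, risk = E[loss] factorises as P(hazard exceeds the threshold) times E[loss | exceedance], i.e. hazard probability times vulnerability under that specific definition of vulnerability. All numbers below are illustrative, not project results.

```python
import numpy as np

rng = np.random.default_rng(7)

hazard = rng.gamma(2.0, 1.0, size=100_000)     # e.g. a drought index
threshold = 3.0
# loss accrues only while the hazard exceeds the threshold
loss = np.where(hazard > threshold, 0.1 * (hazard - threshold), 0.0)

risk = loss.mean()                              # expected loss
p_exceed = (hazard > threshold).mean()          # hazard probability
vulnerability = loss[hazard > threshold].mean() # mean loss given exceedance
```

The product `p_exceed * vulnerability` reproduces `risk` exactly, which is why this is the unique definition of vulnerability that makes the decomposition formal rather than approximate.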
Friday 4 December 15:00-16:00
Ruth King (University of Edinburgh): To integrated models ... and beyond …
- Capture-recapture studies are often conducted on wildlife populations in order to improve our understanding of the given species and/or for conservation and management purposes. Such studies involve the repeated observation of individuals over a period of time, and in many cases over years and even decades. At initial capture each individual is uniquely marked (e.g. using a tag, ring, or natural markings) and released. The data then correspond to the set of observed individual capture histories, detailing at which capture occasions each observed individual is recorded. For standard capture-recapture models, it is well known that the survival probabilities correspond to apparent survival, with permanent migration and death confounded. We describe an integrated modelling approach where, in addition to the live resighting of individuals, we are also able to recover dead individuals, typically referred to as capture-recapture-recovery data. We show how such data permit the estimation of the dispersal (or migration) parameter and hence true survival probabilities. We describe the general modelling approach and, using sufficient statistics, calculate a corresponding efficient likelihood expression, which makes use of hidden Markov model-type ideas.
This seminar series is supported as part of the ICMS Online Mathematical Sciences Seminars.