Past Seminar Series - McGill Statistics Seminars
  • Epidemic Forecasting using Delayed Time Embedding

    Date: 2023-02-17

    Time: 15:30-16:30 (Montreal time)

    https://mcgill.zoom.us/j/83436686293?pwd=b0RmWmlXRXE3OWR6NlNIcWF5d0dJQT09

    Meeting ID: 834 3668 6293

    Passcode: 12345

    Abstract:

    Forecasting the future trajectory of an outbreak plays a crucial role in managing emerging infectious disease epidemics. Compartmental models, such as the Susceptible-Exposed-Infectious-Recovered (SEIR) model, are the most popular tools for this task. They have been used extensively to combat many infectious disease outbreaks, including the current COVID-19 pandemic. One downside of these models is that they assume the dynamics of an epidemic follow a pre-defined dynamical system, which may not capture the true trajectories of an outbreak. Consequently, users need to make several modifications throughout an epidemic to ensure their models fit the data well. However, there is no guarantee that these modifications also increase forecasting precision. In this talk, I will introduce a new method for predicting epidemics that makes no assumptions about the underlying dynamical system. Our method combines sparse random feature expansion and delay embedding to learn the trajectory of an epidemic.
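    The abstract's delay-embedding step can be sketched in a few lines. This is an illustration only, not the speaker's method: the function name, dimension, and lag below are made up, and the sparse random feature expansion that the talk pairs with the embedding is omitted.

```python
def delay_embed(series, dim, lag):
    """Map a scalar time series to vectors of `dim` lagged observations.

    Each row [x_t, x_{t-lag}, ..., x_{t-(dim-1)*lag}] is a point in a
    reconstructed state space; a regression model fit on these rows can
    predict the next observation without assuming a pre-specified
    SEIR-type dynamical system.
    """
    start = (dim - 1) * lag
    return [
        [series[t - k * lag] for k in range(dim)]
        for t in range(start, len(series))
    ]

# Example: toy weekly case counts embedded with dimension 3 and lag 1.
cases = [1, 2, 4, 8, 16, 32]
print(delay_embed(cases, dim=3, lag=1))
# Each row pairs a count with its two previous values.
```

    A forecaster would then regress the next case count on these embedded vectors, which is where the talk's random feature expansion comes in.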

  • Efficient Label Shift Adaptation through the Lens of Semiparametric Models

    Date: 2023-02-10

    Time: 15:00-16:00 (Montreal time)

    Hybrid: In person / Zoom

    Location: Burnside Hall 1205

    https://mcgill.zoom.us/j/83436686293?pwd=b0RmWmlXRXE3OWR6NlNIcWF5d0dJQT09

    Meeting ID: 834 3668 6293

    Passcode: 12345

    Abstract:

    We study the domain adaptation problem with label shift in this work. Under label shift, the marginal distribution of the label varies across the training and testing datasets, while the conditional distribution of features given the label is the same. Traditional label shift adaptation methods either suffer from large estimation errors or require cumbersome post-prediction calibrations. To address these issues, we first propose a moment-matching framework for adapting to label shift based on the geometry of the influence function. Under this framework, we propose a novel method named efficient label shift adaptation (ELSA), in which the adaptation weights can be estimated by solving linear systems. Theoretically, the ELSA estimator is root-n consistent (n is the sample size of the source data) and asymptotically normal. Empirically, we show that ELSA achieves state-of-the-art estimation performance without post-prediction calibrations, thus gaining computational efficiency.
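    To make the "weights from a linear system" idea concrete, here is a classic (pre-ELSA) instance of it for two classes: confusion-matrix matching, where the weight w_y = P_target(y) / P_source(y) solves C w = mu, with C the source confusion matrix and mu the prediction rates on unlabeled target data. The numbers are made up, and this is not the ELSA estimator itself, only the shared linear-system structure.

```python
def solve_2x2(C, mu):
    """Solve the 2x2 linear system C w = mu by Cramer's rule."""
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    w0 = (mu[0] * C[1][1] - C[0][1] * mu[1]) / det
    w1 = (C[0][0] * mu[1] - mu[0] * C[1][0]) / det
    return [w0, w1]

# Source data: balanced labels, classifier 90% accurate on each class.
# C[i][j] = P_source(prediction = i and label = j).
C = [[0.45, 0.05],
     [0.05, 0.45]]
# Target data: 70% of predictions are class 1, so class 1 is over-represented.
mu = [0.3, 0.7]
weights = solve_2x2(C, mu)   # w_y = P_target(y) / P_source(y)
```

    Here the solution is w = [0.5, 1.5]: class-1 examples should be up-weighted threefold relative to class 0 when retraining or correcting the classifier.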

  • Learning from a Biased Sample

    Date: 2023-02-03

    Time: 15:30-16:30 (Montreal time)

    https://mcgill.zoom.us/j/83436686293?pwd=b0RmWmlXRXE3OWR6NlNIcWF5d0dJQT09

    Meeting ID: 834 3668 6293

    Passcode: 12345

    Abstract:

    The empirical risk minimization approach to data-driven decision making assumes that we can learn a decision rule from training data drawn under the same conditions as those under which we want to deploy it. However, in a number of settings, we may be concerned that our training sample is biased and that some groups (characterized by observable or unobservable attributes) are under- or over-represented relative to the general population; in this setting, empirical risk minimization over the training set may fail to yield rules that perform well at deployment. Building on concepts from distributionally robust optimization and sensitivity analysis, we propose a method for learning a decision rule that minimizes the worst-case risk incurred under a family of test distributions whose conditional distributions of outcomes given covariates differ from the conditional training distribution by at most a constant factor, and whose covariate distributions are absolutely continuous with respect to the covariate distribution of the training data. We apply a result of Rockafellar and Uryasev to show that this problem is equivalent to an augmented convex risk minimization problem. We give statistical guarantees for learning a robust model using the method of sieves and propose a deep learning algorithm whose loss function captures our robustness target. We empirically validate the proposed method in simulations and in a case study with the MIMIC-III dataset.
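    The Rockafellar-Uryasev device the abstract invokes has a simple scalar prototype: the worst-case average loss over bounded reweightings of the data is a conditional value-at-risk (CVaR), and CVaR has an equivalent "augmented" convex form, min over eta of eta + E[(loss - eta)_+] / alpha. The sketch below checks the two forms agree on toy losses; the names and numbers are illustrative, not the paper's estimator.

```python
def cvar_direct(losses, alpha):
    """CVaR at level alpha: average of the worst alpha-fraction of losses."""
    k = max(1, round(alpha * len(losses)))
    worst = sorted(losses, reverse=True)[:k]
    return sum(worst) / k

def cvar_ru(losses, alpha):
    """Same quantity via the Rockafellar-Uryasev minimization over eta.

    A minimizer always lies at one of the observed losses, so it suffices
    to search over them.
    """
    n = len(losses)
    def objective(eta):
        return eta + sum(max(l - eta, 0.0) for l in losses) / (alpha * n)
    return min(objective(eta) for eta in losses)

losses = [0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 0.3, 0.5]
# Both forms give the mean of the worst 25% of losses: (3.2 + 1.6) / 2 = 2.4.
```

    The augmented form matters because, unlike the sort-based definition, it is a convex objective in (model, eta) jointly and can be minimized with stochastic gradients, which is what makes the paper's deep learning algorithm possible.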

  • What is TWAS and how do we use it in integrating gene expression data

    Date: 2023-01-20

    Time: 15:30-16:30 (Montreal time)

    https://mcgill.zoom.us/j/83436686293?pwd=b0RmWmlXRXE3OWR6NlNIcWF5d0dJQT09

    Meeting ID: 834 3668 6293

    Passcode: 12345

    Abstract:

    Transcriptome-wide association studies (TWAS) are a pioneering approach that utilizes gene expression data to identify the genetic basis of complex diseases. The core component of TWAS is the “genetically regulated expression (GReX)”. GReX links gene expression information with the phenotype by serving as both the outcome of genotype-based expression models and the predictor for downstream association testing. Although TWAS is popular and has been used in many high-profile projects, its mathematical nature and interpretation have not been rigorously verified. We therefore first conducted power analysis using NCP-based closed forms (Cao et al., PLoS Genet 2021), which revealed that the common interpretation of TWAS, while biologically sensible, is mathematically questionable. Following this insight, through real data analysis and simulations, we demonstrated that current linear models of GReX inadvertently combine two separable steps of machine learning - feature selection and aggregation - which can be independently replaced to improve overall power (Cao et al., Genetics 2021). Based on this new interpretation, we developed novel protocols that disentangle feature selection and aggregation, leading to improved power and novel biological discoveries (Cao et al., BiB 2021; Genetics 2021). To promote this new understanding, we went on to develop two statistical tools that utilize gene expression to identify the genetic basis of gene-gene interactions (Kossinna et al., in preparation) and low-effect genetic variants (Li et al., in review), respectively. Looking forward, our mathematical characterization of TWAS opens the door to a new way of integrating gene expression into genetic studies towards the realization of precision medicine.
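    The two-stage GReX logic can be sketched schematically. Everything below is a toy: real TWAS pipelines use regularized multi-SNP models trained on reference transcriptome panels, not the single-SNP least squares used here, and all data are invented.

```python
def fit_univariate(xs, ys):
    """Least-squares slope through the origin (placeholder per-SNP model)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Stage 1: reference panel with genotype dosages and measured expression.
# Regress expression on genotype to learn the GReX weight.
geno_ref = [0, 1, 2, 0, 1, 2, 1, 0]
expr_ref = [0.1, 0.9, 2.1, -0.1, 1.1, 1.9, 1.0, 0.0]
w = fit_univariate(geno_ref, expr_ref)

# Stage 2: GWAS cohort (no measured expression). Impute GReX = w * genotype,
# then test the imputed expression against the phenotype.
geno_gwas = [2, 2, 1, 1, 0, 0]
pheno = [4.2, 3.8, 2.1, 1.9, 0.1, -0.1]
grex = [w * g for g in geno_gwas]
effect = fit_univariate(grex, pheno)
```

    The abstract's point is that stage 1 silently bundles feature selection (which SNPs enter the model) with aggregation (how their effects are combined), and the two can be decoupled and improved separately.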

  • To split or not to split that is the question: From cross validation to debiased machine learning

    Date: 2023-01-13

    Time: 15:30-16:30 (Montreal time)

    https://mcgill.zoom.us/j/83436686293?pwd=b0RmWmlXRXE3OWR6NlNIcWF5d0dJQT09

    Meeting ID: 834 3668 6293

    Passcode: 12345

    Abstract:

    Data splitting is a ubiquitous technique in statistics, with examples ranging from cross-validation to cross-fitting. However, despite its prevalence, theoretical guidance regarding its use is still lacking. In this talk we will explore two examples and establish an asymptotic theory for each. In the first part of the talk, we study cross-validation, a widely used method for risk estimation, and establish its asymptotic properties for a large class of models and an arbitrary number of folds. Under stability conditions, we establish a central limit theorem and Berry-Esseen bounds for the cross-validated risk, which enable us to compute asymptotically accurate confidence intervals. Using these results, we study the statistical speed-up offered by cross-validation compared to a train-test split procedure, reveal some surprising behavior of the cross-validated risk, and establish the statistically optimal choice for the number of folds. In the second part of the talk, we study the role of cross-fitting in the generalized method of moments with moments that also depend on auxiliary functions. Recent lines of work show how one can use generic machine learning estimators for these auxiliary problems while maintaining asymptotic normality and root-n consistency of the target parameter of interest. The literature typically requires that these auxiliary problems be fitted on a separate sample or in a cross-fitting manner. We show that when the auxiliary estimation algorithms satisfy natural leave-one-out stability properties, sample splitting is not required. This allows for sample re-use, which can be beneficial in moderately sized sample regimes.
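    A minimal sketch of the object studied in the first part: the k-fold cross-validated risk with a naive normal confidence interval pooled from the held-out losses. The talk's CLT and Berry-Esseen bounds justify intervals of this general shape; the sample-mean predictor and the crude variance estimate below are placeholders, not the paper's construction.

```python
import random
import statistics

def cv_risk(data, n_folds):
    """Squared-error risk of the sample-mean predictor, estimated by k-fold CV."""
    losses = []
    for f in range(n_folds):
        test = data[f::n_folds]                       # every n_folds-th point
        train = [x for i, x in enumerate(data) if i % n_folds != f]
        pred = sum(train) / len(train)                # "fit": predict the mean
        losses += [(x - pred) ** 2 for x in test]     # held-out losses
    risk = sum(losses) / len(losses)
    se = statistics.stdev(losses) / len(losses) ** 0.5
    return risk, (risk - 1.96 * se, risk + 1.96 * se)

random.seed(0)
data = [random.gauss(0, 1) for _ in range(500)]
risk, ci = cv_risk(data, n_folds=5)
# risk should be close to the true variance 1, and ci should bracket it.
```

    The question the talk answers is how wide such intervals must be, and how the choice of n_folds trades statistical accuracy against computation compared to a single train-test split.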

  • Optimal One-pass Nonparametric Estimation Under Memory Constraint

    Date: 2022-11-18

    Time: 15:30-16:30 (Montreal time)

    https://mcgill.zoom.us/j/83436686293?pwd=b0RmWmlXRXE3OWR6NlNIcWF5d0dJQT09

    Meeting ID: 834 3668 6293

    Passcode: 12345

    Abstract:

    For nonparametric regression in the streaming setting, where data constantly flow in and require real-time analysis, a main challenge is that data are cleared from the computer system once processed, due to limited memory and storage. We tackle this challenge by proposing a novel one-pass estimator based on penalized orthogonal basis expansions and by developing a general framework to study the interplay between statistical efficiency and memory consumption of estimators. We show that the proposed estimator is statistically optimal under the memory constraint and has asymptotically minimal memory footprint among all one-pass estimators of the same estimation quality. Numerical studies demonstrate that the proposed one-pass estimator is nearly as efficient as its non-streaming counterpart that has access to all historical data.
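    The one-pass idea can be illustrated with an unpenalized cosine-basis toy (not the paper's estimator): for x uniform on [0, 1], the regression function's coefficients b_j = E[y * phi_j(x)] can each be tracked by a running mean, so memory scales with the number of basis functions rather than the number of observations seen. The class name and parameters below are invented.

```python
import math
import random

def phi(j, x):
    """Orthonormal cosine basis on [0, 1]."""
    return 1.0 if j == 0 else math.sqrt(2) * math.cos(math.pi * j * x)

class OnePassRegression:
    def __init__(self, n_basis):
        self.coef = [0.0] * n_basis   # memory: O(n_basis), not O(sample size)
        self.n = 0

    def update(self, x, y):
        """Process one (x, y) pair, then discard it: a streaming running mean."""
        self.n += 1
        for j in range(len(self.coef)):
            self.coef[j] += (y * phi(j, x) - self.coef[j]) / self.n

    def predict(self, x):
        return sum(b * phi(j, x) for j, b in enumerate(self.coef))

random.seed(1)
model = OnePassRegression(n_basis=8)
for _ in range(20000):
    x = random.random()
    model.update(x, phi(2, x) + random.gauss(0, 0.1))  # true function: phi_2
# model.coef[2] converges to 1 and the other coefficients to 0.
```

    The paper's contribution is in choosing the number of basis functions and the penalization so that this memory/accuracy trade-off is provably optimal.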

  • Automated Inference on Sharp Bounds

    Date: 2022-11-11

    Time: 15:30-16:30 (Montreal time)

    https://mcgill.zoom.us/j/83436686293?pwd=b0RmWmlXRXE3OWR6NlNIcWF5d0dJQT09

    Meeting ID: 834 3668 6293

    Passcode: 12345

    Abstract:

    Many causal parameters involving the joint distribution of potential outcomes in the treated and control states cannot be point-identified; they can only be bounded from above and below. The bounds can be further tightened by conditioning on pre-treatment covariates, and the sharp version of the bounds corresponds to using the full covariate vector. This paper gives a method for estimation and inference on sharp bounds determined by a linear system of under-identified equalities (e.g., as in Heckman et al., ReStud 1997). In the sharp-bounds case, the right-hand side of this system involves a nuisance function of (many) covariates (e.g., the conditional probability of employment in the treated or control state). Combining Neyman orthogonality and sample splitting, I provide an asymptotically Gaussian estimator of the sharp bounds that does not require solving the linear system in closed form. I demonstrate the method in an empirical application to Connecticut’s Jobs First welfare reform experiment.
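    As a numerical illustration of why conditioning tightens bounds (illustrative only, with made-up probabilities, and much simpler than the paper's linear-system setting): the classic Fréchet-Hoeffding bounds on a joint probability from its marginals become strictly tighter when computed within covariate cells and then averaged.

```python
def frechet_bounds(p, q):
    """Bounds on P(A and B) given only the marginals P(A) = p and P(B) = q."""
    return max(0.0, p + q - 1.0), min(p, q)

# Unconditional marginals: both events have probability 0.5.
lo, hi = frechet_bounds(0.5, 0.5)            # (0.0, 0.5)

# Marginals within two equally likely covariate cells; the averages of the
# cell marginals reproduce the unconditional 0.5 each.
cells = [(0.9, 0.8), (0.1, 0.2)]
lo_c = sum(frechet_bounds(p, q)[0] for p, q in cells) / 2
hi_c = sum(frechet_bounds(p, q)[1] for p, q in cells) / 2
# Conditional bounds: roughly [0.35, 0.45], strictly inside [0.0, 0.5].
```

    Sharp bounds use the full covariate vector, which is exactly what forces a nuisance function of many covariates into the problem and motivates the paper's orthogonalized, sample-split estimator.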

  • Max-linear Graphical Models for Extreme Risk Modelling

    Date: 2022-11-04

    Time: 15:30-16:30 (Montreal time)

    https://mcgill.zoom.us/j/83436686293?pwd=b0RmWmlXRXE3OWR6NlNIcWF5d0dJQT09

    Meeting ID: 834 3668 6293

    Passcode: 12345

    Abstract:

    Graphical models can represent multivariate distributions in an intuitive way and, hence, facilitate statistical analysis of high-dimensional data. Such models are usually modular, so that high-dimensional distributions can be described and handled through careful combination of lower-dimensional factors. Furthermore, graphs are natural data structures for algorithmic treatment. Graphical models can also allow for causal interpretation, often provided through a recursive system on a directed acyclic graph (DAG), and the max-linear Bayesian network we introduced in [1] is a specific example. This talk contributes to the recently emerged topic of graphical models for extremes, in particular to max-linear Bayesian networks, which are max-linear graphical models on DAGs. Generalized MLEs are derived in [2]. In this context, the Latent River Problem has emerged as a flagship problem for causal discovery in extreme value statistics. In [3] we provide a simple and efficient algorithm, QTree, to solve the Latent River Problem. QTree returns a directed graph and achieves almost perfect recovery on the Upper Danube, the existing benchmark dataset, as well as on new data from the Lower Colorado River in Texas. It can handle missing data and has an automated parameter tuning procedure. In our paper, we also show that, under a max-linear Bayesian network model for extreme values with propagating noise, the QTree algorithm asymptotically returns the correct tree almost surely. Here we use the fact that the non-noisy model has a left-sided atom in every bivariate marginal distribution when there is a directed edge between the nodes.
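    A minimal illustration of the max-linear recursion the talk studies (the graph, weights, and innovations below are made up): on a DAG, each node is the maximum of its weighted parents and an independent innovation, X_i = max(max over parents j of c_ij * X_j, Z_i), so a single extreme shock propagates down every directed path — the mechanism behind the river-network application.

```python
def max_linear(parents, weights, Z):
    """Evaluate a max-linear Bayesian network; Z lists nodes in topological order."""
    X = {}
    for node, innov in Z.items():
        best = innov
        for p in parents.get(node, []):
            best = max(best, weights[(p, node)] * X[p])
        X[node] = best
    return X

# Toy river chain 0 -> 1 -> 2 with a large shock (flood) at the source node.
parents = {1: [0], 2: [1]}
weights = {(0, 1): 0.8, (1, 2): 0.5}
Z = {0: 10.0, 1: 1.0, 2: 1.0}
print(max_linear(parents, weights, Z))
# The source shock of 10 dominates downstream: X_1 = 8.0, X_2 = 4.0.
```

    Causal discovery here runs in reverse: given only joint observations of X at the nodes, algorithms like QTree try to recover the directed tree along which such extremes propagate.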

  • A Conformal-Based Two-Sample Conditional Distribution Test

    Date: 2022-10-21

    Time: 15:30-16:30 (Montreal time)

    https://mcgill.zoom.us/j/83436686293?pwd=b0RmWmlXRXE3OWR6NlNIcWF5d0dJQT09

    Meeting ID: 834 3668 6293

    Passcode: 12345

    Abstract:

    We consider the problem of testing the equality of the conditional distribution of a response variable given a set of covariates between two populations. Such a testing problem is related to transfer learning and causal inference. We develop a nonparametric procedure by combining recent advances in conformal prediction with new ingredients such as a novel choice of conformity score and data-driven choices of the weight and score functions. To our knowledge, this is the first successful attempt at using conformal prediction to test statistical hypotheses beyond exchangeability. The final test statistic reveals a natural connection between conformal inference and the classical rank-sum test. Our method is suitable for modern machine learning scenarios where the data are high-dimensional and the sample size is large, and it can be effectively combined with existing classification algorithms to find good weight and score functions. The performance of the proposed method is demonstrated on synthetic and real data examples.
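    The rank-sum connection mentioned above can be sketched in its classical form: score each observation, then compare the two samples' scores with a Wilcoxon rank-sum statistic. This is a hedged illustration with placeholder scores and simulated data; the actual test relies on conformal calibration and carefully learned weight and score functions.

```python
import random

def rank_sum_z(scores_a, scores_b):
    """Normal approximation to the Wilcoxon rank-sum statistic (no ties)."""
    combined = sorted((s, grp) for grp, scores in ((0, scores_a), (1, scores_b))
                      for s in scores)
    ranks_a = [r + 1 for r, (s, grp) in enumerate(combined) if grp == 0]
    n, m = len(scores_a), len(scores_b)
    W = sum(ranks_a)
    mean = n * (n + m + 1) / 2
    var = n * m * (n + m + 1) / 12
    return (W - mean) / var ** 0.5

random.seed(2)
# Placeholder "conformity scores": identical distributions vs a shifted one.
same = rank_sum_z([random.gauss(0, 1) for _ in range(200)],
                  [random.gauss(0, 1) for _ in range(200)])
shifted = rank_sum_z([random.gauss(0, 1) for _ in range(200)],
                     [random.gauss(1, 1) for _ in range(200)])
# |same| is typically small, while |shifted| is large, flagging a change.
```

    In the paper's setting the scores are conformity scores built from a fitted model of the conditional distribution, which is what turns this marginal two-sample comparison into a test about conditionals.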

  • Some steps towards causal representation learning

    Date: 2022-10-07

    Time: 15:30-16:30 (Montreal time)

    https://mcgill.zoom.us/j/83436686293?pwd=b0RmWmlXRXE3OWR6NlNIcWF5d0dJQT09

    Meeting ID: 834 3668 6293

    Passcode: 12345

    Abstract:

    High-dimensional unstructured data, such as images or sensor data, can often be collected cheaply in experiments, but they are challenging to use in a causal inference pipeline without extensive engineering and domain knowledge to extract the underlying latent factors. The long-term goal of causal representation learning is to find appropriate assumptions and methods to disentangle latent variables and learn the causal mechanisms that explain a system’s behaviour. In this talk, I’ll present results from a series of recent papers that describe how we can leverage assumptions about a system’s causal mechanisms to provably disentangle latent factors. I will also discuss the limitations of a commonly used injectivity assumption and a hierarchy of settings that relax this assumption.