Past Seminar Series - McGill Statistics Seminars
  • Machine Learning and Neural Networks: Foundations and Some Fundamental Questions

    Date: 2020-10-09

    Time: 15:30-16:30

    Zoom Link

    Meeting ID: 924 5390 4989

    Passcode: 690084

    Abstract:

    Statistical learning theory is by now a mature branch of data science that hosts a vast variety of practical techniques for tackling data-related problems. In this talk we present some fundamental concepts upon which statistical learning theory has been based. Different approaches to statistical inference will be discussed, and the main problem of learning, from Vapnik’s point of view, will be explained. Further, we discuss the topic of function estimation as the heart of Vapnik-Chervonenkis theory. There exist several state-of-the-art methods for estimating functional dependencies, such as maximum margin estimators and artificial neural networks. While a profound theory has already been developed for some of these methods, e.g., support vector machines, others require more investigation. Accordingly, we pay closer attention to the so-called mapping neural networks and try to shed some light on certain theoretical aspects of them. We highlight some of the fundamental challenges that have attracted the attention of researchers and are yet to be fully resolved. One of these challenges, which will be discussed in detail, is the estimation of the intrinsic dimension of the data. Another challenge is inferring the causal direction when the training data set is not representative of the target population.
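
    The intrinsic-dimension estimation problem mentioned above admits several estimators. As an illustration only (not necessarily the approach taken in the talk), the sketch below implements the nearest-neighbor maximum-likelihood estimator of Levina and Bickel in Python, assuming NumPy and scikit-learn; the simulated data, the neighborhood size k, and the function name are illustrative choices.

```python
# Illustrative sketch (not the talk's method): Levina-Bickel MLE of intrinsic dimension.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mle_intrinsic_dimension(X, k=10):
    """Average the per-point maximum-likelihood dimension estimate over the sample."""
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    dist = dist[:, 1:]                                  # drop the zero self-distance
    log_ratios = np.log(dist[:, -1][:, None] / dist[:, :-1])
    return np.mean((k - 1) / log_ratios.sum(axis=1))    # per-point MLE, then average

rng = np.random.default_rng(0)
t = rng.normal(size=(1000, 2))                          # 2-dimensional latent parameter
X = np.hstack([t, np.sin(t), np.cos(t), t ** 2])        # nonlinear embedding in 8 dimensions
print(round(mle_intrinsic_dimension(X), 2))             # close to 2
```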

  • Data Science, Classification, Clustering and Three-Way Data

    Date: 2020-10-02

    Time: 15:30-16:30

    Zoom Link

    Meeting ID: 939 8331 3215

    Passcode: 096952

    Abstract:

    Data science is discussed along with some historical perspective. Selected problems in classification are considered, either via specific datasets or general problem types. In each case, the problem is introduced before one or more potential solutions are discussed and applied. The problems discussed include data with outliers, longitudinal data, and three-way data. The proposed approaches are generally mixture model-based.
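
    As a toy illustration of the mixture-model-based viewpoint, the sketch below fits a Gaussian mixture to two-dimensional data and flags low-density points as potential outliers. The talk's approaches for outliers, longitudinal data, and three-way data are considerably more specialized; the simulated data, the 2% density cutoff, and the use of scikit-learn are purely illustrative assumptions.

```python
# Illustrative sketch: Gaussian mixture clustering with density-based outlier flagging.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(200, 2)),   # cluster 1
    rng.normal(loc=6.0, scale=1.0, size=(200, 2)),   # cluster 2
    rng.uniform(low=-10, high=15, size=(10, 2)),     # scattered outliers
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)                              # hard cluster assignments
log_dens = gmm.score_samples(X)                      # log-density under the fitted mixture
outliers = log_dens < np.quantile(log_dens, 0.02)    # flag the lowest-density 2% of points
print("cluster sizes:", np.bincount(labels), "flagged outliers:", int(outliers.sum()))
```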

  • Large-scale Network Inference

    Date: 2020-09-25

    Time: 14:00-15:00

    Zoom Link

    Meeting ID: 939 4707 7997

    Passcode: none

    Abstract:

    Network data is prevalent in many contemporary big data applications in which a common interest is to unveil important latent links between different pairs of nodes. Yet a simple fundamental question of how to precisely quantify the statistical uncertainty associated with the identification of latent links remains largely unexplored. In this paper, we propose the method of statistical inference on membership profiles in large networks (SIMPLE) in the setting of degree-corrected mixed membership model, where the null hypothesis assumes that the pair of nodes share the same profile of community memberships. In the simpler case of no degree heterogeneity, the model reduces to the mixed membership model, for which an alternative, more robust test is also proposed. Both tests are Hotelling-type statistics based on the rows of empirical eigenvectors or their ratios, whose asymptotic covariance matrices are very challenging to derive and estimate. Nevertheless, their analytical expressions are unveiled and the unknown covariance matrices are consistently estimated. Under some mild regularity conditions, we establish the exact limiting distributions of the two forms of SIMPLE test statistics under the null hypothesis and contiguous alternative hypothesis. They are the chi-square distributions and the noncentral chi-square distributions, respectively, with degrees of freedom depending on whether the degrees are corrected or not. We also address the important issue of estimating the unknown number of communities and establish the asymptotic properties of the associated test statistics. The advantages and practical utility of our new procedures in terms of both size and power are demonstrated through several simulation examples and real network applications.
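
    A heavily simplified sketch of the ingredients of such a test is given below: nodes are embedded via the top-K eigenvectors of the adjacency matrix and the rows of two nodes are compared with a Hotelling-type quadratic form referenced to a chi-square distribution. The simulated stochastic block model and, in particular, the crude placeholder covariance estimate are illustrative assumptions; deriving and consistently estimating the asymptotic covariance is precisely the difficult step addressed in the paper.

```python
# Illustrative sketch of a SIMPLE-style comparison; the covariance estimate is a crude
# placeholder, whereas deriving and estimating it consistently is the core of the paper.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
n, K = 300, 2
z = rng.integers(0, K, size=n)                      # latent community labels
B = np.array([[0.30, 0.05], [0.05, 0.30]])          # block connection probabilities
P = B[z][:, z]
A = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                         # symmetric adjacency, no self-loops

vals, vecs = np.linalg.eigh(A)
V = vecs[:, np.argsort(-np.abs(vals))[:K]]          # rows: spectral embedding of the nodes

i, j = np.flatnonzero(z == 0)[:2]                   # two nodes sharing a membership profile
diff = V[i] - V[j]
Sigma_hat = 2.0 * np.cov(V[z == z[i]].T)            # placeholder covariance of the difference
T = diff @ np.linalg.solve(Sigma_hat, diff)         # Hotelling-type quadratic form
print("p-value (chi-square with K df):", round(1 - chi2.cdf(T, df=K), 3))
```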

  • BdryGP: a boundary-integrated Gaussian process model for computer code emulation

    Date: 2020-09-18

    Time: 15:30-16:30

    Zoom Link

    Meeting ID: 924 5390 4989

    Passcode: 690084

    Abstract:

    With advances in mathematical modeling and computational methods, complex phenomena (e.g., universe formation, rocket propulsion) can now be reliably simulated via computer code. This code solves a complicated system of equations representing the underlying science of the problem. Such simulations can be very time-intensive, requiring months of computation for a single run. Gaussian processes (GPs) are widely used as predictive models for “emulating” this expensive computer code. Yet with limited training data on a high-dimensional parameter space, such models can suffer from poor predictive performance and physical interpretability. Fortunately, in many physical applications, there is additional boundary information on the code beforehand, either from governing physics or scientific knowledge. We propose a new BdryGP model which incorporates such boundary information for prediction. We show that BdryGP not only enjoys improved convergence rates over standard GP models which do not incorporate boundaries, but is also more resistant to the “curse of dimensionality” in nonparametric regression. We then demonstrate the improved predictive performance and posterior contraction of the BdryGP model on several test problems in the literature.
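
    As a point of reference, the sketch below fits a standard GP emulator (the baseline that BdryGP improves upon) to a cheap stand-in for an expensive simulator, using scikit-learn; the toy simulator, the Matérn kernel, and the design size are illustrative assumptions, and the boundary-integrated covariance of BdryGP itself is not reproduced here.

```python
# Illustrative sketch: a plain GP emulator baseline (no boundary information).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def simulator(x):
    """Cheap stand-in for an expensive computer code."""
    return np.sin(2 * np.pi * x[:, 0]) * np.exp(-x[:, 1])

rng = np.random.default_rng(3)
X_train = rng.uniform(size=(30, 2))                  # small design on the input space
y_train = simulator(X_train)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X_train, y_train)
X_test = rng.uniform(size=(5, 2))
mean, sd = gp.predict(X_test, return_std=True)       # emulator prediction with uncertainty
print(np.round(mean, 3), np.round(sd, 3))
```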

  • Machine Learning for Causal Inference

    Date: 2020-09-11

    Time: 16:00-17:00

    Zoom Link

    Meeting ID: 965 2536 7383

    Passcode: 421254

    Abstract:

    Given advances in machine learning over the past decades, it is now possible to accurately solve difficult non-parametric prediction problems in a way that is routine and reproducible. In this talk, I’ll discuss how machine learning tools can be rigorously integrated into observational study analyses, and how they interact with classical statistical ideas around randomization, semiparametric modeling, double robustness, etc. I’ll also survey some recent advances in methods for treatment heterogeneity. When deployed carefully, machine learning enables us to develop causal estimators that reflect an observational study design more closely than basic linear regression based methods.
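
    A minimal sketch of one such estimator, the augmented inverse-probability-weighted (doubly robust) estimator of an average treatment effect with random forests as nuisance models, is given below; the simulated data are hypothetical, and refinements such as cross-fitting and the treatment-heterogeneity methods surveyed in the talk are omitted.

```python
# Illustrative sketch: AIPW (doubly robust) estimation of an average treatment effect
# with random-forest nuisance models; cross-fitting is omitted for brevity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(4)
n = 2000
X = rng.normal(size=(n, 5))
W = rng.uniform(size=n) < 1 / (1 + np.exp(-X[:, 0]))  # confounded treatment assignment
Y = 2.0 * W + X[:, 0] + rng.normal(size=n)            # true average effect = 2

e_hat = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, W).predict_proba(X)[:, 1]
mu1 = RandomForestRegressor(random_state=0).fit(X[W], Y[W]).predict(X)
mu0 = RandomForestRegressor(random_state=0).fit(X[~W], Y[~W]).predict(X)

# AIPW score: outcome-model contrast plus inverse-propensity-weighted residuals
psi = (mu1 - mu0
       + W * (Y - mu1) / np.clip(e_hat, 0.01, 0.99)
       - (~W) * (Y - mu0) / np.clip(1 - e_hat, 0.01, 0.99))
print("ATE estimate:", round(psi.mean(), 2))
```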

  • A gentle introduction to generalized structured component analysis and its recent developments

    Date: 2020-03-27

    Time: 15:30-16:30

    Location: BURNSIDE 1205

    Abstract:

    Generalized structured component analysis (GSCA) was developed as a component-based approach to structural equation modeling, where constructs are represented by components or weighted composites of observed variables, rather than (common) factors. Unlike another long-standing component-based approach, partial least squares path modeling, GSCA is a full-information method that optimizes a single criterion to estimate model parameters simultaneously, utilizing all information available in the entire system of equations. Over the past decade, this approach has been refined and extended in various ways to enhance its data-analytic capability. I will briefly discuss the theoretical underpinnings of GSCA and demonstrate the use of an R package for GSCA, gesca. Moreover, I will outline some recent developments in GSCA, which include GSCA_M for estimating models with factors and integrated GSCA (IGSCA) for estimating models with both factors and components.
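
    For orientation, the sketch below shows only the basic building block of component-based approaches such as GSCA: a construct represented as a weighted composite of its observed indicators, scaled to unit variance. GSCA itself estimates the weights and path coefficients simultaneously by minimizing a single least-squares criterion, which is not reproduced here; the data and the fixed weights are illustrative.

```python
# Illustrative sketch: a construct as a weighted composite of its indicators (fixed weights).
import numpy as np

rng = np.random.default_rng(5)
Z = rng.normal(size=(100, 3))                 # three standardized indicators of one construct
w = np.array([0.5, 0.3, 0.2])                 # illustrative component weights
gamma = Z @ w
gamma /= gamma.std(ddof=1)                    # component scores scaled to unit variance
print(gamma[:5].round(2))
```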

  • Informative Prior Elicitation from Historical Individual Patient Data

    Date: 2020-03-20

    Time: 15:30-16:30

    Location: BURNSIDE 1205

    Abstract:

    Historical data from previous studies may be utilized to strengthen statistical inference. Under the Bayesian framework, the incorporation of information obtained from any source other than the current data is facilitated through the construction of an informative prior. The existing methodology for defining an informative prior based on historical data relies on measuring similarity to the current data at the study level, which can result in discarding useful individual patient data (IPD). In this talk, I present a family of priors that utilize IPD to strengthen statistical inference. IPD-based priors can be obtained as a weighted likelihood of the historical data, where each individual’s weight is a function of their distance to the current study population. It is demonstrated that the proposed prior construction approach can considerably improve estimation accuracy and precision compared with existing methods.
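
    A minimal sketch of the weighted-likelihood idea is given below, under strong illustrative assumptions: a one-parameter Gaussian model, a Gaussian-kernel distance of each historical patient to the current sample mean, and a fixed bandwidth. The speaker's specific weight construction and model are not reproduced.

```python
# Illustrative sketch: a weighted-likelihood prior built from historical IPD, with weights
# decaying in each patient's distance to the current study population. Model, distance,
# and bandwidth are simplifying assumptions, not the speaker's construction.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
x_hist = rng.normal(loc=1.5, scale=1.0, size=200)     # historical IPD (partly off-target)
x_curr = rng.normal(loc=0.5, scale=1.0, size=50)      # current study data

w = np.exp(-0.5 * (x_hist - x_curr.mean()) ** 2)      # Gaussian-kernel weight per patient

def neg_log_posterior(mu):
    log_prior = np.sum(w * (-0.5 * (x_hist - mu) ** 2))   # weighted historical likelihood
    log_lik = np.sum(-0.5 * (x_curr - mu) ** 2)           # current-data likelihood
    return -(log_prior + log_lik)

print("posterior mode of mu:", round(minimize_scalar(neg_log_posterior).x, 2))
```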

  • Geometry-based Data Exploration

    Date: 2020-03-13

    Time: 15:30-16:30

    Location: BURNSIDE 1205

    Abstract:

    High-throughput data collection technologies are becoming increasingly common in many fields, especially in biomedical applications involving single cell data (e.g., scRNA-seq and CyTOF). These introduce a rising need for exploratory analysis to reveal and understand hidden structure in the collected (high-dimensional) Big Data. A crucial aspect in such analysis is the separation of intrinsic data geometry from data distribution, as (a) the latter is typically biased by collection artifacts and data availability, and (b) rare subpopulations and sparse transitions between meta-stable states are often of great interest in biomedical data analysis. In this talk, I will show several tools that leverage manifold learning, graph signal processing, and harmonic analysis for biomedical (in particular, genomic/proteomic) data exploration, with emphasis on visualization, data generation/augmentation, and nonlinear feature extraction. A common thread in the presented tools is the construction of a data-driven diffusion geometry that both captures intrinsic structure in data and provides a generalization of Fourier harmonics on it. These, in turn, are used to process data features along the data geometry for denoising and generative purposes. Finally, I will relate this approach to the recently-proposed geometric scattering transform that generalizes Mallat’s scattering to non-Euclidean domains, and provides a mathematical framework for theoretical understanding of the emerging field of geometric deep learning.
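
    The sketch below constructs a bare-bones data-driven diffusion geometry of the kind underlying these tools (in the spirit of diffusion maps): a Gaussian affinity kernel is row-normalized into a Markov operator whose leading non-trivial eigenvectors give coordinates reflecting intrinsic geometry. The noisy-circle data, the median-distance bandwidth heuristic, and the number of retained coordinates are illustrative assumptions.

```python
# Illustrative sketch: a data-driven diffusion geometry (diffusion-map style coordinates).
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(7)
theta = rng.uniform(0, 2 * np.pi, size=500)
X = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.05 * rng.normal(size=(500, 2))

D = squareform(pdist(X))                       # pairwise Euclidean distances
eps = np.median(D) ** 2                        # kernel bandwidth (a common heuristic)
K = np.exp(-D ** 2 / eps)                      # Gaussian affinities
P = K / K.sum(axis=1, keepdims=True)           # row-stochastic diffusion operator

vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
coords = vecs[:, order[1:3]].real * vals.real[order[1:3]]   # skip the trivial eigenvector
print(coords[:3].round(3))                     # diffusion coordinates of the first few points
```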

  • Neyman-Pearson classification: parametrics and sample size requirement

    Date: 2020-02-28

    Time: 15:30-16:30

    Location: BURNSIDE 1104

    Abstract:

    The Neyman-Pearson (NP) paradigm in binary classification seeks classifiers that achieve a minimal type II error while keeping the prioritized type I error controlled under some user-specified level alpha. This paradigm serves naturally in applications such as severe disease diagnosis and spam detection, where people have clear priorities between the two error types. Recently, Tong, Feng and Li (2018) proposed a nonparametric umbrella algorithm that adapts all scoring-type classification methods (e.g., logistic regression, support vector machines, random forest) to respect the given type I error (i.e., the conditional probability of classifying a class 0 observation as class 1 under the 0-1 coding) upper bound alpha with high probability, without specific distributional assumptions on the features and the responses. Universal as the umbrella algorithm is, it demands an explicit minimum sample size requirement on class 0, which is often the scarcer class, as in rare disease diagnosis applications. In this work, we employ the parametric linear discriminant analysis (LDA) model and propose a new parametric thresholding algorithm, which does not require a minimum sample size on class 0 observations and is thus suitable for small-sample applications such as rare disease diagnosis. Leveraging both the existing nonparametric and the newly proposed parametric thresholding rules, we propose four LDA-based NP classifiers, for both low- and high-dimensional settings. On the theoretical front, we prove NP oracle inequalities for one proposed classifier, where the rate for the excess type II error benefits from the explicit parametric model assumption. Furthermore, as NP classifiers involve a sample-splitting step on class 0 observations, we construct a new adaptive sample-splitting scheme that can be applied universally to NP classifiers, and this adaptive strategy reduces the type II error of these classifiers. The proposed NP classifiers are implemented in the R package nproc.
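
    The sketch below illustrates the thresholding step of the nonparametric umbrella algorithm as summarized above: a scoring classifier is trained on part of the data, and the threshold is taken as an order statistic of scores on held-out class 0 observations so that the type I error stays below alpha with probability at least 1 - delta. The simulated data, the logistic-regression scorer, and the even class 0 split are illustrative assumptions; the R package nproc mentioned in the abstract implements the full methodology.

```python
# Illustrative sketch: the class-0 thresholding step of the NP umbrella algorithm.
import numpy as np
from scipy.stats import binom
from sklearn.linear_model import LogisticRegression

def np_threshold(scores0, alpha=0.05, delta=0.05):
    """Smallest order statistic of held-out class-0 scores meeting the violation bound."""
    s, n = np.sort(scores0), len(scores0)
    for k in range(1, n + 1):
        # P(type I error > alpha) = P(Binomial(n, 1 - alpha) >= k) must be at most delta
        if binom.sf(k - 1, n, 1 - alpha) <= delta:
            return s[k - 1]
    raise ValueError("class-0 sample too small for the requested alpha and delta")

rng = np.random.default_rng(8)
X0, X1 = rng.normal(0, 1, (600, 2)), rng.normal(1.5, 1, (300, 2))

# Train the scorer on half of class 0 plus class 1; hold out the rest of class 0 for thresholding
clf = LogisticRegression().fit(np.vstack([X0[:300], X1]), np.r_[np.zeros(300), np.ones(300)])
t = np_threshold(clf.predict_proba(X0[300:])[:, 1], alpha=0.05, delta=0.05)
print("threshold:", round(t, 3))
print("empirical type I error:", np.mean(clf.predict_proba(X0)[:, 1] > t))
```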

  • Non-central squared copulas: properties and applications

    Date: 2020-02-21

    Time: 15:30-16:30

    Location: BURNSIDE 1205

    Abstract:

    The goal of this presentation is to introduce new families of multivariate copulas, extending the chi-square copulas, the Fisher copula, and squared copulas. The new families are constructed from existing copulas by first transforming their margins to standard Gaussian distributions, then transforming these variables into non-central chi-square variables with one degree of freedom, and finally by considering the copula associated with these new variables. It is shown that by varying the non-centrality parameters, one can model non-monotonic dependence, and that when one or many non-centrality parameters are outside a given hyper-rectangle, the copula is almost the same as the one obtained when these parameters are infinite. For these new families, the tail behavior and the monotonicity of dependence measures such as Kendall’s tau and Spearman’s rho are investigated, and estimation is discussed. Some examples will illustrate the usefulness of these new copula families.
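
    The construction can be illustrated by simulation. The sketch below starts from a bivariate Gaussian copula (an illustrative choice of existing copula), transforms the margins to standard normals, shifts each by a non-centrality parameter and squares it (yielding non-central chi-square variables with one degree of freedom), and then maps back to uniforms with the corresponding marginal CDFs, so that the joint law of the uniforms is the induced copula; the correlation and the non-centrality values are illustrative.

```python
# Illustrative sketch: sampling from a non-central squared copula built on a Gaussian copula.
import numpy as np
from scipy.stats import ncx2

rng = np.random.default_rng(9)
rho, a = 0.7, np.array([1.0, 2.5])             # base correlation and non-centrality shifts

# Step 1: draw from the base copula via correlated standard Gaussian margins
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
Z = rng.normal(size=(5000, 2)) @ L.T

# Step 2: non-central chi-square(1) transformation of each margin
X = (Z + a) ** 2

# Step 3: map back to uniforms with the chi-square(1, a_j^2) marginal CDFs;
# the joint law of U is the non-central squared copula
U = ncx2.cdf(X, 1, a ** 2)
print(round(np.corrcoef(U.T)[0, 1], 3))        # dependence induced by the new copula
```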