2019 Winter - McGill Statistics Seminars
  • Network models, sampling, and symmetry properties

    Date: 2019-02-01 Time: 15:30-16:30 Location: BURN 1205 Abstract: A recent body of work, by myself and many others, aims to develop a statistical theory of network data for problems in which a single network is observed. Of the models studied in this area, graphon models are probably the most widely known in statistics. I will explain the relationship between three aspects of this work: (1) Specific models, such as graphon models, graphex models, and edge-exchangeable graphs.
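    As a rough illustration of the graphon models named in the abstract (not the speaker's own construction), here is a minimal sampling sketch: each node i receives a latent uniform variable U_i, and edge {i, j} is included independently with probability W(U_i, U_j). The particular graphon W below is an arbitrary choice made for the example.

```python
import numpy as np

def sample_graphon_graph(n, W, seed=0):
    """Sample an n-node undirected graph: U_i ~ Uniform(0,1) i.i.d.,
    then include edge {i, j} independently with probability W(U_i, U_j)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)                      # latent node positions
    probs = W(u[:, None], u[None, :])            # edge probability matrix
    upper = rng.uniform(size=(n, n)) < probs     # Bernoulli draws
    adj = np.triu(upper, k=1)                    # strict upper triangle: no self-loops
    return (adj | adj.T).astype(int)             # symmetrize

# illustrative graphon (arbitrary choice): W(u, v) = exp(-(u + v))
A = sample_graphon_graph(200, lambda u, v: np.exp(-(u + v)))
print("edge density:", A.mean())
```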
  • Modern Non-Problems in Optimization: Applications to Statistics and Machine Learning

    Date: 2019-01-25 Time: 16:00-17:00 Location: BURN 920 Abstract: We have witnessed many exciting developments in data science in recent years. From the perspective of optimization, many modern data-science problems involve basic "non"-properties that current approaches, for the sake of computational convenience, do not treat systematically. These non-properties include the coupling of non-convexity, non-differentiability, and non-determinism. In this talk, we present rigorous computational methods for solving two typical non-problems: piecewise linear regression and the feed-forward deep neural network.
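    To make the "non"-properties concrete, here is a minimal sketch (not the speaker's method) of a toy piecewise linear regression with a single unknown knot: the fit is non-convex and non-differentiable in the knot location, so the illustrative strategy below simply grid-searches the knot and solves a least-squares problem for the remaining coefficients. The data and parameter values are made up for the example.

```python
import numpy as np

# Toy piecewise linear regression with one unknown knot t:
#   y = b0 + b1 * x + b2 * max(x - t, 0) + noise.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, size=200))
y = 1.0 + 0.5 * x + 2.0 * np.maximum(x - 6.0, 0.0) + rng.normal(scale=0.3, size=x.size)

def fit_given_knot(t):
    """Least-squares fit of (b0, b1, b2) with the knot t held fixed."""
    X = np.column_stack([np.ones_like(x), x, np.maximum(x - t, 0.0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return rss, beta

# grid search over the knot: the objective is piecewise smooth in t
knots = np.linspace(1, 9, 81)
best_t = min(knots, key=lambda t: fit_given_knot(t)[0])
print("estimated knot:", best_t, "coefficients:", fit_given_knot(best_t)[1])
```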
  • Singularities of the information matrix and longitudinal data with change points

    Date: 2019-01-18 Time: 15:30-16:30 Location: BURN 1205 Abstract: Non-singularity of the information matrix plays a key role in model identification and the asymptotic theory of statistics. For many statistical models, however, this condition seems virtually impossible to verify. An example of such models is a class of mixture models associated with multi-path change-point problems (MCP), which can model longitudinal data with change points. The MCP models are similar in nature to mixture-of-experts models in machine learning.
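    As a hedged illustration of the singularity issue (using a generic two-component Gaussian mixture rather than the MCP model from the talk), the sketch below computes the observed information as the negative numerical Hessian of the log-likelihood at a degenerate parameter point and inspects its eigenvalues; a (near-)zero eigenvalue flags the failure of non-singularity.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=500)  # data generated with no mixture effect at all

def loglik(theta):
    """Log-likelihood of a 2-component Gaussian mixture (unit variances)."""
    p, mu1, mu2 = theta
    dens = (p * np.exp(-0.5 * (x - mu1) ** 2) +
            (1 - p) * np.exp(-0.5 * (x - mu2) ** 2)) / np.sqrt(2 * np.pi)
    return np.sum(np.log(dens))

def numerical_hessian(f, theta, h=1e-4):
    """Central-difference Hessian of f at theta."""
    k = len(theta)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            e_i, e_j = np.eye(k)[i] * h, np.eye(k)[j] * h
            H[i, j] = (f(theta + e_i + e_j) - f(theta + e_i - e_j)
                       - f(theta - e_i + e_j) + f(theta - e_i - e_j)) / (4 * h * h)
    return H

theta0 = np.array([0.5, 0.0, 0.0])       # equal means: a degenerate point
info = -numerical_hessian(loglik, theta0)  # observed information
print("eigenvalues of observed information:", np.linalg.eigvalsh(info))
```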
  • Magic Cross-Validation Theory for Large-Margin Classification

    Date: 2019-01-11 Time: 15:30-16:30 Location: BURN 1205 Abstract: Cross-validation (CV) is perhaps the most widely used tool for tuning supervised machine learning algorithms to achieve a better generalization error rate. In this paper, we focus on leave-one-out cross-validation (LOOCV) for the support vector machine (SVM) and related algorithms. We first address two widespread misconceptions about LOOCV. We show that LOOCV, ten-fold, and five-fold CV are actually well-matched in estimating the generalization error, and the computation speed of LOOCV is not necessarily slower than that of ten-fold and five-fold CV.
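    For reference, a plain brute-force comparison of LOOCV, ten-fold, and five-fold CV error estimates for a linear SVM can be set up with scikit-learn as below; this does not implement the paper's leave-one-out machinery, only the baseline comparison the abstract alludes to, and the dataset and settings are arbitrary choices for illustration.

```python
# Brute-force LOOCV vs. k-fold CV error estimates for a linear SVM.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, LeaveOneOut, KFold
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
clf = LinearSVC(C=1.0, max_iter=10000)

for name, cv in [("LOOCV", LeaveOneOut()),
                 ("10-fold", KFold(n_splits=10, shuffle=True, random_state=0)),
                 ("5-fold", KFold(n_splits=5, shuffle=True, random_state=0))]:
    err = 1.0 - cross_val_score(clf, X, y, cv=cv).mean()  # misclassification estimate
    print(f"{name}: estimated error = {err:.3f}")
```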