McGill Statistics Seminar - McGill Statistics Seminars
  • Bayesian regression with B-splines under combinations of shape constraints and smoothness properties

    Date: 2014-11-07

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    We approach the problem of shape-constrained regression from a Bayesian perspective. A B-spline basis is used to model the regression function. The smoothness of the regression function is controlled by the order of the B-splines, and the shape is controlled by the shape of an associated control polygon. Controlling the shape of the control polygon reduces to imposing inequality constraints on the spline coefficients. Our approach enables us to take combinations of shape constraints into account and to localize each shape constraint on a given interval. The performance of our method is investigated through a simulation study. Applications to real data sets from the food industry and global warming are provided.
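
    A minimal sketch of the key device mentioned above, assuming a clamped cubic B-spline basis and a monotonicity constraint (the knot layout, coefficients, and tolerance are illustrative, not the speakers' implementation): with non-decreasing coefficients the control polygon is non-decreasing, and so is the fitted curve, which is exactly why the shape constraint reduces to inequality constraints on the coefficients.

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    # Clamped cubic B-spline basis on [0, 1].
    k = 3
    interior = np.linspace(0.1, 0.9, 7)
    t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]
    n_coef = len(t) - k - 1

    # Non-decreasing coefficients = non-decreasing control polygon.
    theta = np.sort(np.random.default_rng(1).normal(size=n_coef))
    f = BSpline(t, theta, k)

    x = np.linspace(0.0, 1.0, 500)
    assert np.all(np.diff(f(x)) >= -1e-10)  # the spline inherits the monotone shape
    ```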

  • A copula-based model for risk aggregation

    Date: 2014-10-31

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    A flexible approach is proposed for risk aggregation. The model consists of a tree structure, bivariate copulas, and marginal distributions. The construction relies on a conditional independence assumption whose implications are studied. Selection of the tree structure, estimation, and model validation are illustrated using data from a Canadian property and casualty insurance company.
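
    The abstract's ingredients (bivariate copulas plus marginal distributions) can be illustrated with a minimal two-risk aggregation sketch; the Gaussian copula, the lognormal and gamma marginals, and the parameter values below are illustrative assumptions, not the model or data of the talk.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n, rho = 100_000, 0.6

    # Bivariate Gaussian copula: correlated normals mapped to uniforms.
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    u = stats.norm.cdf(z)

    # Plug the uniforms into illustrative marginal loss distributions.
    x1 = stats.lognorm(s=1.0, scale=np.exp(8.0)).ppf(u[:, 0])
    x2 = stats.gamma(a=2.0, scale=2000.0).ppf(u[:, 1])

    total = x1 + x2                                   # aggregated risk
    print("99% quantile of aggregate loss:", np.quantile(total, 0.99))
    ```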

    Speaker

    Marie-Pier Côté is a PhD student in the Department of Mathematics and Statistics at McGill University.

  • PREMIER: Probabilistic error-correction using Markov inference in error reads

    Date: 2014-10-24

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Next generation sequencing (NGS) is a technology revolutionizing genetics and biology. Compared with the old Sanger sequencing method, the throughput is astounding and has fostered a slew of innovative sequencing applications. Unfortunately, the error rates are also higher, complicating many downstream analyses. For example, de novo assembly of genomes is less accurate and slower when reads include many errors. We develop a probabilistic model for NGS reads that can detect and correct errors without a reference genome, while flexibly modeling and estimating the error properties of the sequencing machine. It uses a penalized likelihood to enforce our prior belief that the k-mer spectrum (the collection of k-length strings observed in the reads) generated from a genome is sparse when k is sufficiently large. The model formalizes core ideas that are used in many ad hoc algorithmic approaches to error correction. We show that our method can detect and remove more errors from sequencing reads than existing methods. Though our method carries a higher computational burden than the best algorithmic approaches, the probabilistic approach is extensible, flexible, and well-positioned to support downstream statistical analysis of the increasing volume of sequence data.
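
    The k-mer spectrum on which the sparsity penalty acts is easy to picture; a toy sketch (not the PREMIER model itself, and the reads below are made up):

    ```python
    from collections import Counter

    def kmer_spectrum(reads, k):
        """Count every k-length substring observed across the reads."""
        counts = Counter()
        for read in reads:
            for i in range(len(read) - k + 1):
                counts[read[i:i + k]] += 1
        return counts

    reads = ["ACGTACGTGA", "CGTACGTGAT", "ACGTACGTGA"]
    spectrum = kmer_spectrum(reads, k=5)
    # With high coverage, k-mers seen only once are the usual error suspects.
    singletons = [km for km, c in spectrum.items() if c == 1]
    print(len(spectrum), "distinct 5-mers,", len(singletons), "singletons")
    ```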

  • Patient privacy, big data, and specimen pooling: Using an old tool for new challenges

    Date: 2014-10-17

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    In the recent past, electronic health records and distributed data networks have emerged as viable resources for medical and scientific research. As the use of confidential patient information from such sources becomes more common, maintaining the privacy of patients is of utmost importance. For a binary disease outcome of interest, we show that the technique of specimen pooling can be applied to the analysis of large and/or distributed data while respecting patient privacy. I will review pooled analysis for a binary outcome and then show how it can be used for distributed data. Aggregate-level data are passed from the nodes of the network to the analysis center and can be used very easily with logistic regression to estimate the disease odds ratio associated with a set of categorical or continuous covariates. The pooling approach allows for consistent estimation of the parameters of a logistic regression that can include confounders. Additionally, since the individual covariate values can be accessed within a network, effect modifiers can be accommodated and consistently estimated. Since pooling effectively reduces the size of the dataset by creating pools or sets of individuals, the resulting dataset can be analyzed much more quickly than an original dataset that is too big for the computing environment.
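
    The data-reduction step described above can be sketched as follows; the pool size, the toy covariate, and the simulated outcome are illustrative assumptions, and the pooled logistic-likelihood estimation itself is left to the talk.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, pool_size = 2000, 5
    age = rng.normal(60, 10, size=n)          # toy covariate
    disease = rng.binomial(1, 0.3, size=n)    # toy binary outcome

    # Group individuals that share the same outcome into pools and keep only
    # pool-level covariate sums: the aggregate data that leave each node.
    pooled = []
    for outcome in (0, 1):
        idx = rng.permutation(np.flatnonzero(disease == outcome))
        for start in range(0, len(idx) - pool_size + 1, pool_size):
            members = idx[start:start + pool_size]
            pooled.append((outcome, age[members].sum()))

    print(f"{n} individual records reduced to {len(pooled)} pool-level records")
    ```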

  • A margin-free clustering algorithm appropriate for dependent maxima in the domain of attraction of an extreme-value copula

    Date: 2014-10-10

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Extracting relevant information in complex spatio-temporal data sets is of paramount importance in statistical climatology. This is especially true when identifying spatial dependencies between quantitative extremes like heavy rainfall. The paper of Bernard et al. (2013) develops a fast and simple clustering algorithm for finding spatial patterns appropriate for extremes. They develop their algorithm by adapting multivariate extreme-value theory to the context of spatial clustering. This is done by relating the variogram, a well-known distance used in geostatistics, to the extremal coefficient of a pair of joint maxima. This gives rise to a straightforward nonparametric estimator of this distance using the empirical distribution function. Their clustering approach is used to analyze weekly maxima of hourly precipitation recorded in France, and a spatial pattern consistent with existing weather models arises. This applied talk is devoted to the validation and extension of this clustering approach. A simulation study using the multivariate logistic distribution as well as max-stable random fields shows that this approach provides accurate clustering when the maxima belong to an extreme-value distribution. Furthermore, this clustering distance can be viewed as an average absolute rank difference, implying that it is appropriate for margin-free clustering of dependent variables. In particular, it is appropriate for dependent maxima in the domain of attraction of an extreme-value copula.
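
    A minimal sketch of the margin-free distance described above, the average absolute rank difference between two series of maxima, followed by an off-the-shelf clustering step; hierarchical clustering with average linkage stands in for the algorithm of Bernard et al. (2013), and the simulated Gumbel maxima and the number of clusters are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform
    from scipy.stats import rankdata

    rng = np.random.default_rng(3)
    maxima = rng.gumbel(size=(200, 12))       # 200 weekly maxima at 12 stations (toy)

    # Rank-transform each station, then use the average absolute rank
    # difference between station pairs as a margin-free distance.
    T, p = maxima.shape
    U = np.apply_along_axis(rankdata, 0, maxima) / (T + 1)
    D = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1, p):
            D[i, j] = D[j, i] = np.mean(np.abs(U[:, i] - U[:, j]))

    labels = fcluster(linkage(squareform(D), method="average"), t=4, criterion="maxclust")
    print(labels)
    ```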

  • Statistical exploratory data analysis in the modern era

    Date: 2014-10-03

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Major challenges arising from today’s “data deluge” include how to handle the commonly occurring situation of different types of variables (say, continuous and categorical) being simultaneously measured, as well as how to assess the accompanying flood of questions. Based on information theory, a bias-corrected mutual information (BCMI) measure of association that is valid and estimable between all basic types of variables has been proposed. It has the advantage of being able to identify non-linear as well as linear relationships. Based on the BCMI measure, a novel exploratory approach to finding associations in data sets having a large number of variables of different types has been developed. These associations can be used as a basis for downstream analyses such as finding clusters and networks. The application of this exploratory approach is very general. Comparisons will also be made with other measures. Illustrative examples include exploring relationships (i) in clinical and genomic (say, gene expression and genotypic) data, and (ii) between social, economic, health and political indicators from the World Health Organisation.
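
    The abstract does not spell out the BCMI estimator, so the sketch below uses a generic Miller-Madow-style first-order bias correction to the plug-in mutual information between two categorical variables, purely as a stand-in; the simulated variables are made up.

    ```python
    import numpy as np

    def bias_corrected_mi(x, y):
        """Plug-in mutual information (in nats) minus a first-order bias term."""
        xs, xi = np.unique(x, return_inverse=True)
        ys, yi = np.unique(y, return_inverse=True)
        table = np.zeros((len(xs), len(ys)))
        np.add.at(table, (xi, yi), 1)
        n = table.sum()
        pxy = table / n
        px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
        return mi - (len(xs) - 1) * (len(ys) - 1) / (2.0 * n)

    rng = np.random.default_rng(7)
    g = rng.integers(0, 3, size=500)                   # categorical variable
    z = (g + rng.normal(scale=0.8, size=500) > 1.0)    # related binary variable
    print(bias_corrected_mi(g, z))
    ```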

  • Analysis of palliative care studies with joint models for quality-of-life measures and survival

    Date: 2014-09-26

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    In palliative care studies, the primary outcomes are often health-related quality of life (HRQL) measures. Randomized trials and prospective cohorts typically recruit patients with an advanced stage of disease and follow them until death or the end of the study. An important feature of such studies is that, by design, some patients, but not all, are likely to die during the course of the study. This affects the interpretation of the conventional analysis of palliative care trials and suggests the need for specialized methods of analysis. We have developed a “terminal decline model” for palliative care trials that, by jointly modeling the time until death and the HRQL measures, leads to flexible interpretation and efficient analysis of the trial data (Li, Tosteson, and Bakitas, Statistics in Medicine, 2012).
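
    One generic way to write down a terminal-decline joint model, as a sketch only and not necessarily the exact specification of Li, Tosteson, and Bakitas (2012): the HRQL trajectory is indexed by the time remaining before death and tied to a survival submodel for the death time, and the two parts are fitted jointly rather than treating death as ordinary dropout.

    ```latex
    Y_i(t) = \beta_0 + \beta_1 \,(T_i - t) + b_i + \varepsilon_i(t),
    \qquad b_i \sim N(0,\sigma_b^2), \qquad
    T_i \sim \text{a parametric survival model (e.g. Weibull)}.
    ```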

  • Covariates missing by design

    Date: 2014-09-19

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Incomplete data can arise in many different situations for many different reasons. Sometimes the data may be incomplete for reasons beyond the control of the experimenter. However, it is also possible that this missingness is part of the study design. By using a two-phase sampling approach where only a small sub-sample gives complete information, it is possible to greatly reduce the cost of a study and still obtain precise estimates. This talk will introduce the concepts of incomplete data and two-phase sampling designs and will discuss adaptive two-phase designs which exploit information from an internal pilot study to approximate the optimal sampling scheme for an analysis based on mean score estimating equations.
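
    In the notation of a generic two-phase design, with inexpensive data (Y_i, Z_i) observed on everyone and the expensive covariate X_i observed only on the phase-two subsample V, the mean score estimating equation replaces each missing score contribution by its conditional expectation given the phase-one variables:

    ```latex
    \sum_{i \in V} S(\beta;\, Y_i, X_i, Z_i)
      \;+\; \sum_{i \notin V} \mathrm{E}\!\left[ S(\beta;\, Y_i, X_i, Z_i) \mid Y_i, Z_i \right] \;=\; 0 .
    ```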

  • Hydrological applications with the functional data analysis framework

    Date: 2014-09-12

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    River flow records are an essential data source for a variety of hydrological applications, including the prevention of flood risks as well as the planning and management of water resources. A hydrograph is a graphical representation of the temporal variation of flow over a period of time (continuously measured, usually over a year). A flood hydrograph is commonly characterized by a number of features, mainly its peak, volume and duration. Classical and recent multivariate approaches considered in hydrological applications treat these features jointly in order to take into account their dependence structure or their relationship. However, all these approaches are based on the analysis of a limited number of characteristics and do not make use of the full information provided by the hydrograph. Even though these approaches provide good results, they present some drawbacks and limitations. The objective of the present talk is to introduce a new framework for hydrological applications where data, such as hydrographs, are employed as continuous curves: functional data. In this context, the whole hydrograph is considered as one infinite-dimensional observation. This framework also helps address the problem of the lack of data commonly encountered in hydrology. A number of functional data analysis tools and methods are presented and adapted.
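
    A minimal sketch of the functional-data viewpoint: one year of daily flows is treated as a single curve by projecting the record onto a small Fourier basis. The simulated hydrograph and the basis size are illustrative assumptions, not the data or smoother discussed in the talk.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    t = np.arange(365) / 365.0                        # one year, rescaled to [0, 1)
    flow = 80 + 60 * np.exp(-((t - 0.35) / 0.08) ** 2) + rng.normal(scale=5, size=t.size)

    # Represent the whole hydrograph as one functional observation:
    # least-squares projection onto a Fourier basis with K harmonics.
    K = 7
    basis = np.column_stack(
        [np.ones_like(t)]
        + [f(2 * np.pi * k * t) for k in range(1, K + 1) for f in (np.sin, np.cos)]
    )
    coef, *_ = np.linalg.lstsq(basis, flow, rcond=None)
    smooth_flow = basis @ coef                        # the curve used in downstream FDA
    print(coef[:3])
    ```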

  • Some aspects of data analysis under confidentiality protection

    Date: 2014-04-04

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Statisticians working in most federal agencies are often faced with two conflicting objectives: (1) collect and publish useful datasets for designing public policies and building scientific theories, and (2) protect the confidentiality of data respondents, which is essential to uphold public trust and leads to better response rates and data accuracy. In this talk I will provide a survey of two statistical methods currently used at the U.S. Census Bureau: synthetic data and noise-perturbed data.
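
    Two toy illustrations of the ideas named above, noise perturbation and fully synthetic data, on a made-up confidential variable; neither is the actual mechanism used at the U.S. Census Bureau.

    ```python
    import numpy as np

    rng = np.random.default_rng(2024)
    income = rng.lognormal(mean=10.5, sigma=0.6, size=1000)    # confidential values (toy)

    # Noise perturbation: release a multiplicatively perturbed copy.
    released_noise = income * rng.normal(loc=1.0, scale=0.1, size=income.size)

    # Synthetic data: release draws from a model fitted to the confidential data.
    mu, sigma = np.log(income).mean(), np.log(income).std()
    released_synth = rng.lognormal(mean=mu, sigma=sigma, size=income.size)

    print(np.median(income), np.median(released_noise), np.median(released_synth))
    ```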