McGill Statistics Seminars
  • A margin-free clustering algorithm appropriate for dependent maxima in the domain of attraction of an extreme-value copula

    Date: 2014-10-10

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Extracting relevant information in complex spatio-temporal data sets is of paramount importance in statistical climatology. This is especially true when identifying spatial dependencies between quantitative extremes such as heavy rainfall. The paper of Bernard et al. (2013) develops a fast and simple clustering algorithm for finding spatial patterns appropriate for extremes. They develop their algorithm by adapting multivariate extreme-value theory to the context of spatial clustering, relating the variogram, a well-known distance used in geostatistics, to the extremal coefficient of a pair of joint maxima. This gives rise to a straightforward nonparametric estimator of this distance based on the empirical distribution function. Their clustering approach is used to analyze weekly maxima of hourly precipitation recorded in France, and a spatial pattern consistent with existing weather models emerges. This applied talk is devoted to the validation and extension of this clustering approach. A simulation study using the multivariate logistic distribution as well as max-stable random fields shows that this approach provides accurate clustering when the maxima belong to an extreme-value distribution. Furthermore, this clustering distance can be viewed as an average absolute rank difference, implying that it is appropriate for margin-free clustering of dependent variables. In particular, it is appropriate for dependent maxima in the domain of attraction of an extreme-value copula.
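
    Since the clustering distance reduces to an average absolute rank difference, it can be estimated directly from ranks. Below is a minimal sketch, using simulated Gumbel maxima and hierarchical clustering as a stand-in for the partitioning-around-medoids algorithm of Bernard et al. (2013); the data and tuning choices are illustrative only.

    ```python
    import numpy as np
    from scipy.stats import rankdata
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def fmadogram_matrix(maxima):
        """Pairwise F-madogram distances, d(i,j) = 0.5*E|F_i(M_i) - F_j(M_j)|,
        estimated with empirical CDF values (rescaled ranks)."""
        n, d = maxima.shape
        u = np.apply_along_axis(rankdata, 0, maxima) / n   # margin-free: ranks only
        dist = np.zeros((d, d))
        for i in range(d):
            for j in range(i + 1, d):
                dist[i, j] = dist[j, i] = 0.5 * np.mean(np.abs(u[:, i] - u[:, j]))
        return dist

    rng = np.random.default_rng(0)
    maxima = rng.gumbel(size=(200, 10))   # 200 weekly maxima at 10 toy "stations"
    D = fmadogram_matrix(maxima)
    labels = fcluster(linkage(squareform(D), method="average"),
                      t=3, criterion="maxclust")
    print(labels)                         # cluster membership of each station
    ```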

  • Statistical exploratory data analysis in the modern era

    Date: 2014-10-03

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Major challenges arising from today’s “data deluge” include how to handle the commonly occurring situation of different types of variables (say, continuous and categorical) being measured simultaneously, as well as how to assess the accompanying flood of questions. Based on information theory, a bias-corrected mutual information (BCMI) measure of association that is valid and estimable between all basic types of variables has been proposed. It has the advantage of being able to identify non-linear as well as linear relationships. Based on the BCMI measure, a novel exploratory approach has been developed for finding associations in data sets having a large number of variables of different types. These associations can be used as a basis for downstream analyses such as finding clusters and networks. The application of this exploratory approach is very general. Comparisons will also be made with other measures. Illustrative examples include exploring relationships (i) in clinical and genomic (say, gene expression and genotypic) data, and (ii) between social, economic, health and political indicators from the World Health Organisation.
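
    As a rough illustration of why a bias correction is needed, the sketch below computes plug-in mutual information for two categorical variables and subtracts a Miller-Madow-style first-order bias term. The actual BCMI measure of the talk may differ in its correction and in how mixed variable types are handled; this only shows how a correction term enters.

    ```python
    import numpy as np

    def bias_corrected_mi(x, y):
        """Plug-in MI (nats) minus the first-order bias (Kx-1)(Ky-1)/(2n).
        Continuous variables would first be discretized."""
        n = len(x)
        xs, xi = np.unique(x, return_inverse=True)
        ys, yi = np.unique(y, return_inverse=True)
        counts = np.zeros((len(xs), len(ys)))
        np.add.at(counts, (xi, yi), 1)            # contingency table
        p = counts / n
        px, py = p.sum(axis=1), p.sum(axis=0)
        nz = p > 0
        mi = np.sum(p[nz] * np.log(p[nz] / np.outer(px, py)[nz]))
        return mi - (len(xs) - 1) * (len(ys) - 1) / (2 * n)

    rng = np.random.default_rng(1)
    x = rng.integers(0, 3, 1000)
    y = (x + rng.integers(0, 2, 1000)) % 3          # association plus noise
    print(bias_corrected_mi(x, y))                  # clearly above zero
    print(bias_corrected_mi(x, rng.permutation(y))) # near zero after shuffling
    ```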

  • Analysis of palliative care studies with joint models for quality-of-life measures and survival

    Date: 2014-09-26

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    In palliative care studies, the primary outcomes are often health-related quality-of-life (HRQL) measures. Randomized trials and prospective cohorts typically recruit patients at an advanced stage of disease and follow them until death or the end of the study. An important feature of such studies is that, by design, some patients, but not all, are likely to die during the course of the study. This affects the interpretation of the conventional analysis of palliative care trials and suggests the need for specialized methods of analysis. We have developed a “terminal decline model” for palliative care trials that, by jointly modeling the time until death and the HRQL measures, leads to flexible interpretation and efficient analysis of the trial data (Li, Tosteson, and Bakitas, Statistics in Medicine, 2012).
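
    A toy illustration of the terminal decline idea, ignoring the censoring of death times that the joint survival model is designed to handle, is to regress HRQL on time before death with a patient-level random intercept. This is a minimal sketch, not the authors' model; all names and parameters below are invented.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    rows = []
    for pid in range(100):
        death = rng.exponential(12.0)            # months from enrolment to death
        u = rng.normal(0, 5)                     # patient-level random intercept
        for t in np.arange(0.0, min(death, 24.0), 3.0):   # quarterly assessments
            ttd = death - t                      # time remaining until death
            hrql = 40 + 1.5 * ttd + u + rng.normal(0, 4)  # decline near death
            rows.append({"id": pid, "ttd": ttd, "hrql": hrql})

    df = pd.DataFrame(rows)
    # Random-intercept model on the retrospective (time-before-death) scale.
    fit = smf.mixedlm("hrql ~ ttd", df, groups="id").fit()
    print(fit.params)   # slope near 1.5: HRQL falls as death approaches
    ```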

  • Covariates missing by design

    Date: 2014-09-19

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Incomplete data can arise in many different situations for many different reasons. Sometimes the data may be incomplete for reasons beyond the control of the experimenter; however, the missingness may also be part of the study design. By using a two-phase sampling approach in which only a small sub-sample gives complete information, it is possible to greatly reduce the cost of a study and still obtain precise estimates. This talk will introduce the concepts of incomplete data and two-phase sampling designs, and will discuss adaptive two-phase designs that exploit information from an internal pilot study to approximate the optimal sampling scheme for an analysis based on mean score estimating equations.
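
    A minimal sketch of the two-phase setup follows, using inverse-probability weighting as a simpler stand-in for mean score estimating equations (which would additionally exploit the cheap phase-1 variable for efficiency); the model and sampling rates are invented.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 5000
    x = rng.normal(size=n)                              # expensive covariate
    z = (x + rng.normal(size=n) > 0).astype(int)        # cheap phase-1 surrogate
    y = rng.binomial(1, 1 / (1 + np.exp(1 - 1.2 * x)))  # logit: -1 + 1.2*x

    # Phase 2: measure x only on a subsample, oversampling informative strata.
    p2 = 0.1 + 0.6 * y + 0.2 * z                        # known sampling probabilities
    take = rng.random(n) < p2

    # Weighted (Horvitz-Thompson) logistic regression on the phase-2 subsample.
    fit = sm.GLM(y[take], sm.add_constant(x[take]),
                 family=sm.families.Binomial(),
                 freq_weights=1 / p2[take]).fit()
    print(fit.params)                                   # roughly (-1, 1.2)
    ```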

  • Hydrological applications with the functional data analysis framework

    Date: 2014-09-12

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    River flow records are an essential data source for a variety of hydrological applications, including flood risk prevention as well as the planning and management of water resources. A hydrograph is a graphical representation of the temporal variation of flow over a period of time (continuously measured, usually over a year). A flood hydrograph is commonly characterized by a number of features, mainly its peak, volume and duration. Classical and recent multivariate approaches in hydrological applications treat these features jointly in order to account for their dependence structure. However, all of these approaches are based on a limited number of characteristics and do not exploit the full information contained in the hydrograph; even though they have produced good results, they suffer from drawbacks and limitations. The objective of the present talk is to introduce a new framework for hydrological applications in which data, such as hydrographs, are employed as continuous curves: functional data. In this context, the whole hydrograph is considered as one infinite-dimensional observation. This framework also helps address the lack of data commonly encountered in hydrology. A number of functional data analysis tools and methods are presented and adapted.
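
    As an illustration of the functional view, the sketch below treats each annual hydrograph as one curve and extracts functional principal components via an SVD of the centred curve matrix; the simulated flood-season bumps stand in for real discharge records, and the FPCA-by-SVD route is one common choice among the tools the talk covers.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    days = np.arange(365)
    n_years = 30
    # Simulated annual hydrographs: a spring-flood bump varying year to year.
    peak = rng.normal(120, 15, n_years)[:, None]    # day of the flood peak
    amp = rng.lognormal(0, 0.3, n_years)[:, None]   # flood magnitude
    curves = amp * np.exp(-0.5 * ((days - peak) / 25.0) ** 2) + 0.05

    centred = curves - curves.mean(axis=0)
    # Functional PCA via SVD: rows of Vt are discretized eigenfunctions.
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    scores = centred @ Vt[:2].T     # each year summarized by two FPCA scores
    print("variance explained by first two components:", explained[:2].round(3))
    ```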

  • Some aspects of data analysis under confidentiality protection

    Date: 2014-04-04

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Statisticians working in most federal agencies are often faced with two conflicting objectives: (1) collect and publish useful datasets for designing public policies and building scientific theories, and (2) protect the confidentiality of data respondents, which is essential to uphold public trust and leads to better response rates and data accuracy. In this talk I will provide a survey of two statistical methods currently used at the U.S. Census Bureau: synthetic data and noise-perturbed data.
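
    A generic sketch of the two ideas, not the Census Bureau's production methods: noise perturbation adds random noise to sensitive values, while synthetic data releases draws from a model fitted to the confidential records. The lognormal income column and noise scale are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    income = rng.lognormal(mean=10.5, sigma=0.6, size=1000)  # confidential column

    # Noise perturbation: multiplicative noise keeps incomes positive.
    perturbed = income * rng.normal(1.0, 0.05, size=income.size)

    # Synthetic data: fit a simple lognormal model, release fresh draws from it.
    mu, sigma = np.log(income).mean(), np.log(income).std()
    synthetic = rng.lognormal(mean=mu, sigma=sigma, size=income.size)

    for name, v in [("original", income), ("perturbed", perturbed),
                    ("synthetic", synthetic)]:
        print(f"{name:>9}: median = {np.median(v):,.0f}")
    ```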

  • How much does the dependence structure matter?

    Date: 2014-03-28

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    In this talk, we will look at some classical problems from an anti-traditional perspective. We will consider two problems regarding a sequence of random variables with a given common marginal distribution. First, we will introduce the notion of extreme negative dependence (END), a new benchmark for negative dependence that is comparable to comonotonicity and independence. Second, we will study the compatibility of the marginal distribution and the limiting distribution when the dependence structure in the sequence is allowed to vary among all possibilities. The results are somewhat simple, yet surprising. We will provide some interpretation and applications of the theoretical results in financial risk management, with the hope of delivering the following message: with the common marginal distribution known and the dependence structure unknown, we know essentially nothing about the asymptotic shape of the sum of the random variables.
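
    The message can be seen in a few lines: with identical standard-normal marginals, the dependence structure alone decides how the average behaves. The negatively dependent construction below (pairing each variable with its negation) is a crude stand-in for the END structures of the talk.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n, reps = 100, 10_000
    z = rng.normal(size=(reps, n))

    indep = z.mean(axis=1)                        # independent copies
    comono = rng.normal(size=reps)                # comonotonic: X_1 = ... = X_n
    # Negatively dependent: pair each X with -X (marginals stay standard normal).
    negdep = np.concatenate([z[:, :n // 2], -z[:, :n // 2]], axis=1).mean(axis=1)

    for name, s in [("independent", indep), ("comonotonic", comono),
                    ("negatively dep.", negdep)]:
        print(f"{name:>15}: sd of the mean = {s.std():.4f}")
    # independent ~ 0.10, comonotonic ~ 1.00, negatively dependent = 0 exactly
    ```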

  • Mixed effects trees and forests for clustered data

    Date: 2014-03-14

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    In this talk, I will present extensions of tree-based and random forest methods for the case of clustered data. The proposed methods can handle unbalanced clusters, allow observations within clusters to be split, and can incorporate random effects and observation-level covariates. The basic tree-building algorithm for a continuous outcome is implemented using standard algorithms within the framework of the EM algorithm. The extension to other types of outcomes (e.g., binary, count) uses the penalized quasi-likelihood (PQL) method for the estimation and the EM algorithm for the computation. Simulation results show that the proposed methods provide substantial improvements over standard trees and forests when the random effects are non-negligible. The use of the methods will be illustrated with real data sets.
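
    A simplified sketch in the spirit of mixed-effects trees: alternate between fitting a tree to the outcome minus the current random effects and re-estimating shrunken cluster intercepts from the residuals. The EM and PQL details of the actual methods are omitted, and the shrinkage and hyperparameters are illustrative.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(7)
    n_clusters, m = 20, 30
    cluster = np.repeat(np.arange(n_clusters), m)
    b_true = rng.normal(0, 2, n_clusters)             # true random intercepts
    X = rng.normal(size=(n_clusters * m, 3))
    y = np.where(X[:, 0] > 0, 3.0, -1.0) + b_true[cluster] \
        + rng.normal(0, 1, n_clusters * m)

    b = np.zeros(n_clusters)
    tree = DecisionTreeRegressor(max_depth=3, random_state=0)
    for _ in range(10):
        tree.fit(X, y - b[cluster])                   # fixed part, given b
        resid = y - tree.predict(X)
        for g in range(n_clusters):                   # random part, given tree
            r = resid[cluster == g]
            b[g] = r.sum() / (len(r) + 1.0)           # ridge-style shrunken mean
    print(np.corrcoef(b, b_true)[0, 1])               # should be close to 1
    ```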

  • On the multivariate analysis of neural spike trains: Skellam process with resetting and its applications

    Date: 2014-02-21

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Nerve cells (a.k.a. neurons) communicate via electrochemical waves (action potentials), which are usually called spikes as they are very localized in time. A sequence of consecutive spikes from one neuron is called a spike train. The exact mechanism of information coding in spike trains is still an open problem; however, one popular approach is to model spikes as realizations of an inhomogeneous Poisson process. In this talk, the limitations of the Poisson model are highlighted, and the Skellam Process with Resetting (SPR) is introduced as an alternative model for the analysis of neural spike trains. SPR is biologically justified, and the parameter estimation algorithm developed for it is computationally efficient. To allow for the modelling of neural ensembles, this process is generalized to the multivariate case, where the Multivariate Skellam Process with Resetting (MSPR), as well as the multivariate Skellam distribution, are introduced. Simulation and real data studies confirm the promising performance of the Skellam model in the statistical analysis of neural spike trains.
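
    A minimal simulation sketch of the resetting mechanism: the membrane potential evolves as a Skellam process (the difference of two Poisson streams), emits a spike upon reaching a threshold, and resets to zero. The rates, threshold, and time discretization below are invented for illustration and are not taken from the talk.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    dt, T = 0.001, 10.0                     # time step and duration (seconds)
    lam_up, lam_down, threshold = 120.0, 80.0, 15
    v, spikes = 0, []

    for k in range(int(T / dt)):
        # Skellam increment: difference of two independent Poisson counts.
        v += rng.poisson(lam_up * dt) - rng.poisson(lam_down * dt)
        if v >= threshold:
            spikes.append(k * dt)           # record a spike...
            v = 0                           # ...and reset the potential

    print(f"{len(spikes)} spikes, mean rate {len(spikes) / T:.1f} Hz")
    ```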

  • Divergence based inference for general estimating equations

    Date: 2014-02-14

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Hellinger distance and its variants have long been used in the theory of robust statistics to develop inferential tools that are more robust than the maximum likelihood estimator but as efficient as the MLE when the posited model holds. A key aspect of this alternative approach requires specification of a parametric family, which is usually not feasible in the context of problems involving complex data structures, wherein estimating equations are typically used for inference. In this presentation, we describe how to extend the scope of divergence theory to inferential problems involving estimating equations and describe useful algorithms for their computation. Additionally, we theoretically study the robustness properties of the methods and establish the semiparametric efficiency of the new divergence-based estimators under suitable technical conditions. Finally, we use the proposed methods to develop robust sure screening methods for ultrahigh-dimensional problems. The theory of large deviations, convexity theory, and concentration inequalities play an essential role in the theoretical analysis and numerical development. Applications from equine parasitology, stochastic optimization, and antimicrobial resistance will be used to describe various aspects of the proposed methods.
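
    For context, the classical minimum-divergence idea in the fully parametric case (Beran, 1977) can be sketched in a few lines: minimize the Hellinger distance between a kernel density estimate and the model density. The talk's extension to general estimating equations is not reproduced here, and the contaminated-normal example is invented.

    ```python
    import numpy as np
    from scipy.stats import norm, gaussian_kde
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(9)
    data = np.concatenate([rng.normal(2.0, 1.0, 190),
                           rng.normal(15.0, 1.0, 10)])   # 5% gross outliers

    grid = np.linspace(-5, 25, 2000)
    step = grid[1] - grid[0]
    f_hat = gaussian_kde(data)(grid)                     # nonparametric density

    def hellinger_sq(theta):
        """Squared Hellinger distance between the KDE and N(theta, 1)."""
        g = norm.pdf(grid, loc=theta, scale=1.0)
        return np.sum((np.sqrt(f_hat) - np.sqrt(g)) ** 2) * step

    mhd = minimize_scalar(hellinger_sq, bounds=(-5, 25), method="bounded").x
    print(f"MHD estimate {mhd:.2f} vs sample mean {data.mean():.2f}")
    # The MHD estimate stays near 2; the sample mean is dragged by the outliers.
    ```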