Past Seminar Series - McGill Statistics Seminars
  • Bayesian analysis of non-identifiable models, with an example from epidemiology and biostatistics

    Date: 2015-11-06

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Most regression models in biostatistics assume identifiability, which means that each point in the parameter space corresponds to a unique likelihood function for the observable data. Recently there has been interest in Bayesian inference for non-identifiable models, which can better represent uncertainty in some contexts. One example is in the field of epidemiology, where the investigator is concerned with bias due to unmeasured confounders (omitted variables). In this talk, I will illustrate Bayesian analysis of a non-identifiable model from epidemiology using government administrative data from British Columbia. I will show how to use Stan, new software developed by Andrew Gelman and others, which allows the careful study of posterior distributions in a vast collection of Bayesian models, including non-identifiable models for bias in epidemiology, which are poorly suited to conventional Gibbs sampling.
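
    The sketch below is a toy illustration of this setup, not the model from the talk: a logistic regression in which the exposure effect beta and a bias parameter gamma enter the likelihood only through their sum, so the data alone cannot separate them and the informative prior on gamma carries all of the identifying information. It assumes the rstan package; the embedded Stan program uses the pre-2.26 array declaration syntax.

      # Toy non-identifiable model fitted with rstan (illustration only)
      library(rstan)

      model_code <- "
      data {
        int<lower=0> N;
        int<lower=0, upper=1> y[N];   // binary outcome
        int<lower=0, upper=1> x[N];   // binary exposure
      }
      parameters {
        real alpha;
        real beta;    // exposure effect of interest
        real gamma;   // bias from an unmeasured confounder, not identified by the data
      }
      model {
        alpha ~ normal(0, 10);
        beta  ~ normal(0, 10);
        gamma ~ normal(0, 0.5);       // informative prior: the only source of identification
        for (n in 1:N)
          y[n] ~ bernoulli_logit(alpha + (beta + gamma) * x[n]);
      }
      "

      set.seed(1)
      N <- 500
      x <- rbinom(N, 1, 0.4)
      y <- rbinom(N, 1, plogis(-1 + 0.8 * x))

      fit <- stan(model_code = model_code, data = list(N = N, y = y, x = x),
                  chains = 2, iter = 2000)
      print(fit, pars = c("beta", "gamma"))   # posterior for beta reflects the prior on gamma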

  • A knockoff filter for controlling the false discovery rate

    Date: 2015-10-30

    Time: 16:00-17:00

    Location: Salle 1360, Pavillon André-Aisenstadt, Université de Montréal

    Abstract:

    The big data era has created a new scientific paradigm: collect data first, ask questions later. Imagine that we observe a response variable together with a large number of potential explanatory variables, and would like to be able to discover which variables are truly associated with the response. At the same time, we need to know that the false discovery rate (FDR) - the expected fraction of false discoveries among all discoveries - is not too high, in order to assure the scientist that most of the discoveries are indeed true and replicable. We introduce the knockoff filter, a new variable selection procedure controlling the FDR in the statistical linear model whenever there are at least as many observations as variables. This method works by constructing fake variables, knockoffs, which can then be used as controls for the true variables; the method achieves exact FDR control in finite-sample settings regardless of the design or covariates, the number of variables in the model, or the amplitudes of the unknown regression coefficients, and it does not require any knowledge of the noise level. This is joint work with Rina Foygel Barber.
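
    As a minimal sketch (the abstract itself does not mention software): the knockoff package on CRAN implements this procedure, and knockoff.filter() builds the knockoff copies of the columns of X and returns the variables selected at the requested FDR level. The simulated data and the FDR target below are placeholders.

      # Knockoff filter on simulated data (illustration only)
      library(knockoff)

      set.seed(1)
      n <- 300; p <- 100                        # more observations than variables
      X <- matrix(rnorm(n * p), n, p)
      beta <- c(rep(2, 10), rep(0, p - 10))     # 10 truly relevant variables
      y <- X %*% beta + rnorm(n)

      result <- knockoff.filter(X, y, knockoffs = create.fixed, fdr = 0.2)
      result$selected                           # indices of the selected variables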

  • Robust mixture regression and outlier detection via penalized likelihood

    Date: 2015-10-23

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Finite mixture regression models have been widely used for modeling mixed regression relationships arising from a clustered and thus heterogeneous population. The classical normal mixture model, despite its simplicity and wide applicability, may fail dramatically in the presence of severe outliers. We propose a robust mixture regression approach based on a sparse, case-specific, and scale-dependent mean-shift parameterization, for simultaneously conducting outlier detection and robust parameter estimation. A penalized likelihood approach is adopted to induce sparsity among the mean-shift parameters so that the outliers are distinguished from the good observations, and a thresholding-embedded Expectation-Maximization (EM) algorithm is developed to enable stable and efficient computation. The proposed penalized estimation approach is shown to have strong connections with other robust methods, including the trimmed likelihood and M-estimation methods. Compared with several existing methods, the proposed method shows outstanding performance in numerical studies.
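
    A much-simplified sketch of the mean-shift idea, for a single regression component rather than a mixture (all names and the tuning constant are illustrative): each observation gets its own shift parameter, and soft thresholding inside the iterations keeps most shifts at zero, so the observations with nonzero shifts are flagged as outliers.

      # Mean-shift outlier detection for ordinary least squares (illustration only)
      soft <- function(z, lambda) sign(z) * pmax(abs(z) - lambda, 0)

      robust_meanshift_lm <- function(X, y, lambda, n_iter = 100) {
        gamma <- rep(0, length(y))
        for (it in 1:n_iter) {
          fit   <- lm.fit(X, y - gamma)                 # update coefficients given the shifts
          res   <- drop(y - X %*% fit$coefficients)
          gamma <- soft(res, lambda)                    # threshold residuals into shifts
        }
        list(coef = fit$coefficients, outliers = which(gamma != 0))
      }

      set.seed(1)
      n <- 100
      X <- cbind(1, rnorm(n))
      y <- drop(X %*% c(1, 2)) + rnorm(n)
      y[1:5] <- y[1:5] + 10                             # five gross outliers
      robust_meanshift_lm(X, y, lambda = 3)$outliers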

  • Estimating high-dimensional multi-layered networks through penalized maximum likelihood

    Date: 2015-10-16

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Gaussian graphical models are a good tool for capturing interactions between nodes that represent the underlying random variables. However, in many applications in biology one is interested in modeling associations both between and within molecular compartments (e.g., interactions between genes and proteins/metabolites). To this end, inferring multi-layered network structures from high-dimensional data provides insight into the conditional relationships among nodes within layers, after adjusting for and quantifying the effects of nodes from other layers. We propose an integrated algorithmic approach for estimating multi-layered networks that incorporates a screening step for significant variables, an optimization algorithm for estimating the key model parameters, and a stability selection step for selecting the most stable effects. The proposed methodology offers an efficient way of estimating the edges within and across layers iteratively, by solving an optimization problem constructed based on penalized maximum likelihood (under a Gaussianity assumption). The optimization is solved on a reduced parameter space that is identified through screening, which remedies the instability in high dimensions. Theoretical properties are considered to ensure identifiability and consistent estimation of the parameters and convergence of the optimization algorithm, despite the lack of global convexity. The performance of the methodology is illustrated on synthetic data sets and on an application to gene and metabolic expression data for patients with renal disease.
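
    The rough two-step stand-in below (not the integrated algorithm of the talk, which adds screening and stability selection) conveys the structure: lasso regressions give the directed edges from layer 1 (say, genes) to layer 2 (say, metabolites), and a graphical lasso on the layer-2 residuals gives the undirected edges within layer 2. It assumes the glmnet and glasso packages; the regularization levels and simulated data are placeholders.

      # Two-layer network estimation via lasso + graphical lasso (illustration only)
      library(glmnet)
      library(glasso)

      estimate_two_layer <- function(X1, X2, rho = 0.1) {
        p2 <- ncol(X2)
        B  <- matrix(0, ncol(X1), p2)                   # between-layer coefficients
        for (j in 1:p2) {
          cvfit  <- cv.glmnet(X1, X2[, j])
          B[, j] <- as.numeric(as.matrix(coef(cvfit, s = "lambda.min")))[-1]   # drop intercept
        }
        R     <- X2 - X1 %*% B                          # layer-2 residuals
        Omega <- glasso(cov(R), rho = rho)$wi           # within-layer precision matrix
        list(between = B, within = Omega)
      }

      set.seed(1)
      X1  <- matrix(rnorm(200 * 30), 200, 30)           # layer-1 data
      X2  <- X1[, 1:10] %*% matrix(rnorm(100), 10, 10) + matrix(rnorm(2000), 200, 10)
      fit <- estimate_two_layer(X1, X2)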

  • Parameter estimation of partial differential equations over irregular domains

    Date: 2015-10-09

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Spatio-temporal data are abundant in many scientific fields; examples include daily satellite images of the earth, hourly temperature readings from multiple weather stations, and the spread of an infectious disease over a particular region. In many instances the spatio-temporal data are accompanied by mathematical models expressed in terms of partial differential equations (PDEs). These PDEs determine the theoretical aspects of the behavior of the physical, chemical or biological phenomena considered. Azzimonti (2013) showed that including the associated PDE as a regularization term, as opposed to the conventional two-dimensional Laplacian, provides a considerable improvement in estimation accuracy. The PDE parameters often have interesting interpretations, although they are typically unknown and must be inferred from expert knowledge of the phenomena considered. In this talk I will discuss extending the profiling (parameter cascading) procedure outlined in Ramsay et al. (2007) to incorporate PDE parameter estimation. I will also show how, following Sangalli et al. (2013), the estimation procedure can be extended to include finite-element methods (FEMs). This allows the proposed method to account for attributes of the geometry of the physical problem such as irregularly shaped domains, external and internal boundary features, and strong concavities. This talk will thus introduce a methodology for data-driven estimation of the parameters of PDEs defined over irregular domains.
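
    The deliberately crude sketch below (finite differences on a regular grid, not the finite-element, parameter-cascading machinery of the talk) illustrates the basic estimation problem: recover the diffusion coefficient b in the one-dimensional heat equation u_t = b u_xx from noisy observations of u on a space-time grid.

      # Least-squares estimation of a PDE parameter from gridded data (illustration only)
      set.seed(1)
      nx <- 50; nt <- 40
      dx <- 1 / (nx - 1); dt <- 1e-4
      x  <- seq(0, 1, length.out = nx)

      # Simulate the heat equation with true b = 0.5 by explicit time stepping
      b_true <- 0.5
      U <- matrix(0, nt, nx)
      U[1, ] <- sin(pi * x)
      for (k in 1:(nt - 1)) {
        uxx <- c(0, diff(U[k, ], differences = 2), 0) / dx^2
        U[k + 1, ] <- U[k, ] + dt * b_true * uxx
        U[k + 1, c(1, nx)] <- 0                          # boundary conditions
      }
      U_obs <- U + matrix(rnorm(nt * nx, sd = 1e-4), nt, nx)   # noisy observations

      # Finite-difference approximations of u_t and u_xx at interior grid points
      Ut  <- (U_obs[2:nt, ] - U_obs[1:(nt - 1), ]) / dt
      Uxx <- (U_obs[, 3:nx] - 2 * U_obs[, 2:(nx - 1)] + U_obs[, 1:(nx - 2)]) / dx^2
      ut  <- as.vector(Ut[, 2:(nx - 1)])
      uxx <- as.vector(Uxx[1:(nt - 1), ])

      b_hat <- sum(ut * uxx) / sum(uxx^2)                # regress u_t on u_xx, no intercept
      b_hat                                              # close to the true value 0.5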

  • Estimating covariance matrices of intermediate size

    Date: 2015-10-02

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    In finance, the covariance matrix of many assets is a key component of financial portfolio optimization and is usually estimated from historical data. Much research in the past decade has focused on improving estimation by studying the asymptotics of large covariance matrices in the so-called high-dimensional regime, where the dimension p grows at the same pace as the sample size n, and this approach has been very successful. This choice of growth makes sense in part because, based on results for eigenvalues, it appears that there are only two limits: the high-dimensional one when p grows like n, and the classical one, when p grows more slowly than n. In this talk, I will present evidence that this binary view is false, and that there could be hidden intermediate regimes lying in between. In turn, this allows for corrections to the sample covariance matrix that are more appropriate when the dimension is large but moderate with respect to the sample size, as is often the case; this can also lead to better optimization for portfolio volatility in many situations of interest.
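
    As a generic illustration of correcting the sample covariance matrix when p is sizeable relative to n (this is not the correction developed in the talk): linear shrinkage toward a scaled identity, with the shrinkage weight treated as a tuning parameter.

      # Linear shrinkage of the sample covariance matrix (illustration only)
      shrink_cov <- function(X, alpha) {
        S  <- cov(X)
        mu <- mean(diag(S))                        # average variance sets the target scale
        (1 - alpha) * S + alpha * mu * diag(ncol(X))
      }

      set.seed(1)
      X <- matrix(rnorm(200 * 50), 200, 50)        # n = 200 observations of p = 50 assets
      S_shrunk <- shrink_cov(X, alpha = 0.3)       # alpha would be chosen by cross-validation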

  • Topics in statistical inference for the semiparametric elliptical copula model

    Date: 2015-09-25

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    This talk addresses aspects of the statistical inference problem for the semiparametric elliptical copula model. The semiparametric elliptical copula model is the family of distributions whose dependence structures are specified by parametric elliptical copulas but whose marginal distributions are left unspecified. An elliptical copula is uniquely characterized by a characteristic generator and a copula correlation matrix Sigma. In the first part of this talk, I will consider the estimation of Sigma. A natural estimate for Sigma is the plug-in estimator Sigmahat based on Kendall's tau statistic. I will first exhibit a sharp bound on the operator norm of Sigmahat - Sigma. I will then consider a factor model for Sigma, for which I will propose a refined estimator Sigmatilde by fitting a low-rank matrix plus a diagonal matrix to Sigmahat using least squares with a nuclear norm penalty on the low-rank matrix. The bound on the operator norm of Sigmahat - Sigma serves to scale the penalty term, and we obtain finite-sample oracle inequalities for Sigmatilde, which I will present. In the second part of this talk, we will look at the classification of two distributions that have the same Gaussian copula but that are otherwise arbitrary in high dimensions. Under this semiparametric Gaussian copula setting, I will give an accurate semiparametric estimator of the log-density ratio, which leads to an empirical decision rule and a bound on its associated excess risk. Our estimation procedure takes advantage of the potential sparsity as well as the low noise condition in the problem, which allows us to achieve a faster convergence rate for the excess risk than is possible in the existing literature on semiparametric Gaussian copula classification. I will demonstrate the efficiency of our semiparametric empirical decision rule by showing that the bound on the excess risk nearly achieves a convergence rate of 1 over square-root-n in the simple setting of Gaussian distribution classification.
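
    A minimal sketch of the plug-in step, using the standard fact that for elliptical copulas the copula correlation and Kendall's tau are related by sigma_jk = sin(pi * tau_jk / 2); the low-rank-plus-diagonal refinement and the classification results are not shown, and the simulated data below are placeholders.

      # Plug-in estimator of the copula correlation matrix via Kendall's tau
      estimate_sigma_kendall <- function(X) {
        tau <- cor(X, method = "kendall")          # rank-based, so the marginals do not matter
        sin(pi * tau / 2)
      }

      set.seed(1)
      Z <- MASS::mvrnorm(500, mu = rep(0, 4), Sigma = 0.5 * diag(4) + 0.5)
      X <- exp(Z)                                  # unknown monotone marginal transformations
      Sigma_hat <- estimate_sigma_kendall(X)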

  • A unified algorithm for fitting penalized models with high-dimensional data

    Date: 2015-09-18

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    In light of high-dimensional problems, research on penalized models has received much interest. Correspondingly, several algorithms have been developed for solving penalized high-dimensional models. I will describe fast and efficient unified algorithms for computing the solution paths for a collection of penalized models. In particular, we will look at an algorithm for solving L1-penalized learning problems and an algorithm for solving group-lasso learning problems. These algorithms take advantage of a majorization-minimization trick to make each update simple and efficient, and they enjoy proven convergence properties. To demonstrate the generality of these algorithms, I extend them to a class of elastic net penalized large margin classification methods and to elastic net penalized Cox proportional hazards models. These algorithms have been implemented in three R packages, gglasso, gcdnet and fastcox, which are publicly available from the Comprehensive R Archive Network (CRAN) at http://cran.r-project.org/web/packages. On simulated and real data, our algorithms consistently outperform existing software in speed for computing penalized models and often deliver better-quality solutions.
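
    Minimal usage sketches for the three packages named above; the simulated data and tuning choices are placeholders, and the calls follow my reading of each package's CRAN documentation, so they should be checked against the current versions.

      # gglasso, gcdnet and fastcox on simulated data (illustration only)
      library(gglasso)   # group-lasso problems
      library(gcdnet)    # elastic net penalized large margin classifiers
      library(fastcox)   # elastic net penalized Cox models

      set.seed(1)
      n <- 100; p <- 20
      X <- matrix(rnorm(n * p), n, p)

      # Group lasso for least squares: 5 groups of 4 variables each
      y_ls <- drop(X %*% rnorm(p)) + rnorm(n)
      fit1 <- gglasso(X, y_ls, group = rep(1:5, each = 4), loss = "ls")

      # Elastic net penalized Huberized SVM for a -1/1 outcome
      y_cls <- ifelse(X[, 1] + rnorm(n) > 0, 1, -1)
      fit2 <- gcdnet(X, y_cls, lambda2 = 0.1, method = "hhsvm")

      # Elastic net penalized Cox model: survival times plus a censoring indicator
      time   <- rexp(n, rate = exp(0.5 * X[, 1]))
      status <- rbinom(n, 1, 0.7)
      fit3   <- cocktail(X, time, status, alpha = 0.5)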

  • Bias correction in multivariate extremes

    Date: 2015-09-11

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    The estimation of the extremal dependence structure of a multivariate extreme-value distribution is hampered by bias, which increases with the number of observations used for the estimation. Bias correction procedures, already known in the univariate setting, are studied in this talk in the multivariate framework. New families of estimators of the stable tail dependence function are obtained. They are asymptotically unbiased versions of the empirical estimator introduced by Huang (1992). Because the new estimators behave regularly with respect to the number of observations, it is possible to derive aggregated versions, so that the choice of threshold is substantially simplified. An extensive simulation study is provided, as well as an application to real data.
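
    A sketch of the empirical (Huang-type) estimator of the bivariate stable tail dependence function l(x, y) is given below; conventions for the rank threshold vary slightly across papers, and the bias-corrected and aggregated estimators discussed in the talk are not shown.

      # Empirical stable tail dependence function at (x, y) using the top k order statistics
      stdf_hat <- function(X, Y, x, y, k) {
        n  <- length(X)
        rX <- rank(X); rY <- rank(Y)
        mean(rX > n + 0.5 - k * x | rY > n + 0.5 - k * y) * n / k
      }

      set.seed(1)
      X <- rnorm(1000); Y <- rnorm(1000)
      stdf_hat(X, Y, x = 1, y = 1, k = 50)         # near 2 under asymptotic independence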

  • Some new classes of bivariate distributions based on conditional specification

    Date: 2015-05-14

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    A bivariate distribution can sometimes be characterized completely by properties of its conditional distributions. In this talk, we will discuss models of bivariate distributions whose conditionals are members of prescribed parametric families of distributions. Some relevant models with specified conditionals will be discussed, including the normal and lognormal cases, the skew-normal and other families of distributions. Finally, some conditionally specified densities will be shown to provide convenient flexible conjugate prior families in certain multiparameter Bayesian settings.
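
    One classical example of conditional specification, sketched below (a generic example, not necessarily a model from the talk): the density f(x, y) proportional to exp(-(a*x^2 + b*y^2 + c*x^2*y^2) / 2) has normal conditionals X | Y = y ~ N(0, 1 / (a + c*y^2)) and Y | X = x ~ N(0, 1 / (b + c*x^2)), yet the joint distribution is not bivariate normal when c > 0; a Gibbs sampler uses exactly these conditionals.

      # Gibbs sampling from a centered normal-conditionals distribution (illustration only)
      rnormal_conditionals <- function(n, a = 1, b = 1, c = 2, burn = 500) {
        x <- 0; y <- 0
        out <- matrix(NA_real_, n, 2)
        for (i in 1:(n + burn)) {
          x <- rnorm(1, 0, sqrt(1 / (a + c * y^2)))   # draw from X | Y = y
          y <- rnorm(1, 0, sqrt(1 / (b + c * x^2)))   # draw from Y | X = x
          if (i > burn) out[i - burn, ] <- c(x, y)
        }
        out
      }

      samples <- rnormal_conditionals(5000)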