Past Seminar Series - McGill Statistics Seminars
  • Construction of bivariate distributions via principal components

    Date: 2011-11-18

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    The diagonal expansion of a bivariate distribution (Lancaster, 1958) has been used as a tool to construct bivariate distributions; this method has been generalized using principal dimensions of random variables (Cuadras, 2002). Necessary and sufficient conditions are given for uniform, exponential, logistic and Pareto marginals in the one- and two-dimensional cases. The corresponding copulas are obtained.

    Speaker

    Amparo Casanova is an Assistant Professor at the Dalla Lana School of Public Health, Division of Biostatistics, University of Toronto.
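
    The diagonal-expansion construction above can be illustrated with its simplest one-term instance, the Farlie-Gumbel-Morgenstern (FGM) family (our example, not necessarily the one used in the talk): C(u, v) = uv + θ·u(1-u)·v(1-v) for |θ| ≤ 1. A minimal sketch:

```python
# FGM copula: the simplest one-term diagonal expansion with uniform
# marginals, C(u, v) = uv + theta * u(1-u) * v(1-v), valid for |theta| <= 1.
def fgm_copula(u, v, theta):
    return u * v + theta * u * (1 - u) * v * (1 - v)

def fgm_density(u, v, theta):
    # c(u, v) = d^2 C / (du dv)
    return 1 + theta * (1 - 2 * u) * (1 - 2 * v)

# Copula requirements: C(u, 0) = 0 and C(u, 1) = u (uniform marginals),
# plus a nonnegative density on the unit square.
uniform_margin = fgm_copula(0.3, 1.0, 0.5)
```

    The constraint |θ| ≤ 1 is exactly what keeps the density 1 + θ(1-2u)(1-2v) nonnegative, which is the validity condition for this family.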

  • Guérin: An ergodic variant of the telegraph process for a toy model of bacterial chemotaxis | Staicu: Skewed functional processes and their applications

    Date: 2011-11-11

    Time: 14:00-16:30

    Location: UdeM

    Abstract:

    Guérin: I will study the long time behavior of a variant of the classic telegraph process, with non-constant jump rates that induce a drift towards the origin. This process can be seen as a toy model for velocity-jump processes recently proposed as mathematical models of bacterial chemotaxis. I will give its invariant law and construct an explicit coupling for velocity and position, providing exponential ergodicity together with quantitative control of the total variation distance to equilibrium at each time instant. This is joint work with Joaquin Fontbona (Universidad de Santiago, Chile) and Florent Malrieu (Université Rennes 1, France).
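
    A minimal simulation sketch of such a process (our toy version, not Guérin's exact model): velocity v ∈ {-1, +1}, with flip rate b when the particle moves away from the origin and a smaller rate a when it moves toward it, which produces the drift back to 0.

```python
import random

# Telegraph-type process with position-dependent flip rates. The rate is
# piecewise constant along each flight, so the next flip time can be sampled
# exactly: while heading toward the origin the clock runs at rate a; if the
# particle reaches 0 before flipping, a fresh rate-b clock starts
# (memorylessness of the exponential).
def simulate_telegraph(T=2000.0, a=0.5, b=2.0, seed=1):
    rng = random.Random(seed)
    x, v, t = 0.0, 1.0, 0.0
    positions = []                        # position recorded at each flip
    while t < T:
        if x * v < 0:
            # moving toward the origin: flip rate a until we reach 0
            tau = rng.expovariate(a)
            if tau >= abs(x):             # reach the origin without flipping
                t += abs(x)
                x = 0.0
                continue                  # now moving away; rate b applies
        else:
            tau = rng.expovariate(b)      # moving away from 0 (or at 0)
        x += v * tau                      # free flight at unit speed
        t += tau
        v = -v
        positions.append(x)
    return positions

pos = simulate_telegraph()
mean_pos = sum(pos) / len(pos)            # should be near 0 at stationarity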

  • A Bayesian method of parametric inference for diffusion processes

    Date: 2011-11-04

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Diffusion processes have been used to model a multitude of continuous-time phenomena in engineering and the natural sciences, including, as in this case, the volatility of financial assets. However, parametric inference has long been complicated by an intractable likelihood function. For many models the most effective solution involves a large amount of missing data, for which the typical Gibbs sampler can be arbitrarily slow. On the other hand, joint parameter and missing-data proposals can lead to a radical improvement, but their acceptance rate tends to decay exponentially with the number of observations.
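
    A minimal sketch of the tractability issue (our illustration on an Ornstein-Uhlenbeck toy model dX = -θX dt + dW, not the speaker's method): the Euler scheme replaces the intractable transition density with a Gaussian approximation whose likelihood is easy to evaluate, and that approximate likelihood already discriminates between parameter values.

```python
import math, random

# Simulate an OU path with the Euler scheme.
def simulate_ou(theta, dt, n, seed=42):
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n):
        x += -theta * x * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Euler pseudo-likelihood: X_{t+dt} | X_t ~ N(X_t - theta*X_t*dt, dt).
def euler_loglik(theta, path, dt):
    ll = 0.0
    for x0, x1 in zip(path, path[1:]):
        mean = x0 - theta * x0 * dt
        ll += -0.5 * math.log(2 * math.pi * dt) - (x1 - mean) ** 2 / (2 * dt)
    return ll

path = simulate_ou(theta=1.0, dt=0.05, n=2000)
```

    In a Bayesian treatment this pseudo-likelihood (refined by imputing latent points between observations) is what enters the Metropolis-Hastings ratio.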

  • Maximum likelihood estimation in network models

    Date: 2011-11-03

    Time: 16:00-17:00

    Location: BURN 1205

    Abstract:

    This talk is concerned with maximum likelihood estimation (MLE) in exponential statistical models for networks (random graphs) and, in particular, with the beta model, a simple model for undirected graphs in which the degree sequence is the minimal sufficient statistic. The speaker will present necessary and sufficient conditions for the existence of the MLE of the beta model parameters that are based on a geometric object known as the polytope of degree sequences. Using this result, it is possible to characterize in a combinatorial fashion sample points leading to a non-existent MLE and non-estimability of the probability parameters under a non-existent MLE. The speaker will further indicate some conditions guaranteeing that the MLE exists with probability tending to 1 as the number of nodes increases. Much of this analysis applies also to other well-known models for networks, such as the Rasch model, the Bradley-Terry model and the more general p1 model of Holland and Leinhardt. These results are in fact instantiations of rather general geometric properties of exponential families with polyhedral support that will be illustrated with a simple exponential random graph model.
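
    When the MLE does exist, it can be computed by a simple fixed-point iteration (a sketch of the Chatterjee-Diaconis-Sly iteration for the beta model; the 5-node degree sequence below is our own toy example, chosen to lie inside the polytope of degree sequences):

```python
import math

# Beta model: edge (i, j) present with probability
# p_ij = exp(b_i + b_j) / (1 + exp(b_i + b_j)). The MLE solves
#   d_i = sum_{j != i} p_ij,
# which the fixed-point map
#   b_i <- log d_i - log sum_{j != i} exp(b_j) / (1 + exp(b_i + b_j))
# iterates to convergence whenever the MLE exists.
def beta_model_mle(d, iters=500):
    n = len(d)
    b = [0.0] * n
    for _ in range(iters):
        b = [math.log(d[i]) - math.log(sum(
                 math.exp(b[j]) / (1 + math.exp(b[i] + b[j]))
                 for j in range(n) if j != i))
             for i in range(n)]
    return b

def expected_degrees(b):
    n = len(b)
    return [sum(math.exp(b[i] + b[j]) / (1 + math.exp(b[i] + b[j]))
                for j in range(n) if j != i) for i in range(n)]

# Degree sequence of the 5-cycle with one chord: graphical, interior point.
d = [3, 2, 3, 2, 2]
b_hat = beta_model_mle(d)
```

    At the fixed point the fitted expected degrees reproduce the observed degree sequence, which is exactly the sufficiency statement in the abstract.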

  • Simulated method of moments estimation for copula-based multivariate models

    Date: 2011-10-28

    Time: 15:00-16:00

    Location: BURN 1205

    Abstract:

    This paper considers the estimation of the parameters of a copula via a simulated method of moments type approach. This approach is attractive when the likelihood of the copula model is not known in closed form, or when the researcher has a set of dependence measures or other functionals of the copula, such as pricing errors, that are of particular interest. The proposed approach naturally also nests method of moments and generalized method of moments estimators. Combining existing results on simulation based estimation with recent results from empirical copula process theory, we show the consistency and asymptotic normality of the proposed estimator, and obtain a simple test of over-identifying restrictions as a goodness-of-fit test. The results apply to both iid and time series data. We analyze the finite-sample behavior of these estimators in an extensive simulation study. We apply the model to a group of seven financial stock returns and find evidence of statistically significant tail dependence, and that the dependence between these assets is stronger in crashes than in booms.
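
    A toy version of the idea (our sketch, not the paper's estimator): match the sample Kendall's tau by simulating from the candidate copula over a grid of parameter values, here for a Clayton copula.

```python
import random

# Sample n pairs from a Clayton copula with parameter theta > 0 using the
# conditional-inverse method: V = (U^-theta (W^{-theta/(1+theta)} - 1) + 1)^{-1/theta}.
def rclayton(theta, n, seed):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u, w = rng.random(), rng.random()
        v = (u ** (-theta) * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
        out.append((u, v))
    return out

# Sample Kendall's tau: share of concordant minus discordant pairs.
def kendall_tau(pairs):
    n, conc = len(pairs), 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (pairs[i][0] - pairs[j][0]) * (pairs[i][1] - pairs[j][1])
            conc += 1 if s > 0 else -1
    return 2.0 * conc / (n * (n - 1))

data = rclayton(theta=2.0, n=400, seed=7)       # stand-in for observed data
tau_obs = kendall_tau(data)
grid = [0.5 + 0.25 * k for k in range(15)]      # theta in [0.5, 4.0]
theta_hat = min(grid, key=lambda th:
                (kendall_tau(rclayton(th, 400, seed=11)) - tau_obs) ** 2)
```

    Reusing the same seed across grid points (common random numbers) keeps the simulated moment a smooth function of the parameter, which is the standard device in simulation-based estimation.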

  • Bayesian modelling of GWAS data using linear mixed models

    Date: 2011-10-21

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Genome-wide association studies (GWAS) are used to identify physical positions (loci) on the genome where genetic variation is causally associated with a phenotype of interest at the population level. Typical studies are based on the measurement of several hundred thousand single nucleotide polymorphism (SNP) variants spread across the genome, in a few thousand individuals. The resulting datasets are large and require computationally efficient methods of statistical analysis.
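
    One reason linear mixed models remain computationally feasible at this scale (a sketch of the standard rotation trick used by efficient LMM solvers, not the speaker's Bayesian method): in y = Xβ + g + e with g ~ N(0, σg² K) and e ~ N(0, σe² I), an eigendecomposition K = U D Uᵀ turns generalized least squares into weighted least squares on rotated data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 60, 2, 100
Z = rng.standard_normal((n, m))
K = Z @ Z.T / m                           # genetic relatedness matrix
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = rng.standard_normal(n)
sg2, se2 = 0.7, 0.3                       # variance components (fixed here)

# Direct GLS: beta = (X' V^-1 X)^-1 X' V^-1 y with V = sg2*K + se2*I.
V = sg2 * K + se2 * np.eye(n)
Vi = np.linalg.inv(V)
beta_direct = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

# Rotation: V = U diag(sg2*D + se2) U', so after multiplying by U' the model
# has a diagonal covariance and GLS reduces to weighted OLS.
D, U = np.linalg.eigh(K)
w = 1.0 / (sg2 * D + se2)                 # inverse eigenvalues of V
Xr, yr = U.T @ X, U.T @ y
beta_rot = np.linalg.solve(Xr.T @ (w[:, None] * Xr), Xr.T @ (w * yr))
```

    After the one-time eigendecomposition, every likelihood evaluation over the variance components costs only O(n) reweighting rather than a fresh O(n³) solve.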

  • Dupuis: Modeling non-stationary extremes: The case of heat waves | Davis: Estimating extremal dependence in time series via the extremogram

    Date: 2011-10-14

    Time: 14:00-16:30

    Location: TROTTIER 1080

    Abstract:

    Dupuis: Environmental processes are often non-stationary since climate patterns cause systematic seasonal effects and long-term climate changes cause trends. The usual limit models are not applicable for non-stationary processes, but models from standard extreme value theory can be used along with statistical modeling to provide useful inference. Traditional approaches include letting model parameters be a function of covariates or using time-varying thresholds. These approaches are inadequate for the study of heat waves, however, and we show how a recent pre-processing approach by Eastoe and Tawn (2009) can be used in conjunction with an innovative change-point analysis to model daily maximum temperature. The model is then fitted to data from four U.S. cities and used to estimate the recurrence probabilities of runs over seasonally high temperatures. We show that the probability of long and intense heat waves has increased considerably over the past 50 years.
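
    The workhorse limit model behind such analyses is the generalized Pareto distribution (GPD) for threshold exceedances. A minimal sketch (our illustration, not the speakers' models): simulate GPD exceedances by inverse transform and recover (σ, ξ) by the method of moments.

```python
import random

# Inverse-CDF sampling from GPD(sigma, xi): F(x) = 1 - (1 + xi*x/sigma)^(-1/xi).
def rgpd(sigma, xi, n, seed=3):
    rng = random.Random(seed)
    return [sigma * ((1 - rng.random()) ** (-xi) - 1) / xi for _ in range(n)]

# Method of moments: mean = sigma/(1-xi), var = sigma^2 / ((1-xi)^2 (1-2xi))
# invert to xi = (1 - mean^2/var)/2 and sigma = mean*(1 + mean^2/var)/2.
def gpd_moment_fit(x):
    n = len(x)
    m = sum(x) / n
    v = sum((t - m) ** 2 for t in x) / (n - 1)
    xi_hat = 0.5 * (1 - m * m / v)
    sigma_hat = 0.5 * m * (m * m / v + 1)
    return sigma_hat, xi_hat

exc = rgpd(sigma=1.0, xi=0.1, n=5000)     # synthetic threshold exceedances
sigma_hat, xi_hat = gpd_moment_fit(exc)
```

    In the pre-processing approach, the GPD is fitted to exceedances of the *standardized* residual series rather than the raw temperatures.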

  • Nonexchangeability and radial asymmetry identification via bivariate quantiles, with financial applications

    Date: 2011-10-07

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    In this talk, the following topics will be discussed: A class of bivariate probability integral transforms and Kendall distribution; bivariate quantile curves, central and lateral regions; non-exchangeability and radial asymmetry identification; new measures of nonexchangeability and radial asymmetry; financial applications and a few open problems (joint work with Flavio Ferreira).

    Speaker

    Nikolai Kolev is a Professor of Statistics at the University of Sao Paulo, Brazil.
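
    Nonexchangeability can be probed directly from data with a simple plug-in statistic (our illustration, not the speaker's proposed measures): the largest gap |C(u,v) - C(v,u)| of the empirical copula over a grid. An exchangeable pair gives a value near zero; the pair (X, max(X, Y)) below has a genuinely nonexchangeable copula.

```python
import random

# Normalized ranks, mapping a sample into (0, 1].
def ranks(x):
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for pos, i in enumerate(order):
        r[i] = (pos + 1) / len(x)
    return r

# Max |C_hat(u, v) - C_hat(v, u)| over a grid, using the empirical copula.
def nonexch_stat(xs, ys, grid=10):
    n = len(xs)
    u, v = ranks(xs), ranks(ys)
    def C(a, b):
        return sum(1 for i in range(n) if u[i] <= a and v[i] <= b) / n
    pts = [(k + 1) / (grid + 1) for k in range(grid)]
    return max(abs(C(a, b) - C(b, a)) for a in pts for b in pts)

rng = random.Random(5)
x = [rng.random() for _ in range(4000)]
y = [rng.random() for _ in range(4000)]
sym = nonexch_stat(x, y)                              # independent: exchangeable
asym = nonexch_stat(x, [max(a, b) for a, b in zip(x, y)])
```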

  • Data sketching for cardinality and entropy estimation?

    Date: 2011-09-30

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    Streaming data are ubiquitous in a wide range of areas, from engineering, information technology, finance, and commerce to atmospheric physics and the earth sciences. The online approximation of properties of data streams is of great interest, but this approximation process is hindered by the sheer size of the data and the speed at which it is generated. Data stream algorithms typically allow only one pass over the data and maintain sub-linear representations of the data from which target properties can be inferred with high efficiency.
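
    A classic example of such a sub-linear representation (our sketch of the standard k-minimum-values idea, not necessarily the sketch discussed in the talk): hash each item to (0, 1), keep only the k smallest distinct hash values in one pass, and estimate the number of distinct items as (k - 1) / m, where m is the k-th smallest hash.

```python
import hashlib

# Hash an item to a pseudo-uniform value in [0, 1).
def hash01(item):
    h = hashlib.md5(str(item).encode()).hexdigest()
    return int(h, 16) / 16.0 ** 32

# k-minimum-values (KMV) cardinality sketch: O(k) memory, one pass.
def kmv_estimate(stream, k=256):
    mins = []                                # k smallest distinct hashes, sorted
    for item in stream:
        v = hash01(item)
        if len(mins) < k:
            if v not in mins:
                mins.append(v)
                mins.sort()
        elif v < mins[-1] and v not in mins:
            mins[-1] = v
            mins.sort()
    if len(mins) < k:
        return float(len(mins))              # fewer than k distinct items seen
    return (k - 1) / mins[-1]

# One pass over a stream with 10000 distinct items, each repeated 3 times.
stream = [i % 10000 for i in range(30000)]
est = kmv_estimate(stream)
```

    Duplicates hash to the same value and are ignored, so the memory footprint stays at k values regardless of stream length, with relative error on the order of 1/sqrt(k).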

  • What is singular learning theory?

    Date: 2011-09-23

    Time: 15:30-16:30

    Location: BURN 1205

    Abstract:

    In this talk, we give a basic introduction to Sumio Watanabe’s Singular Learning Theory, as outlined in his book “Algebraic Geometry and Statistical Learning Theory”. Watanabe’s key insight for studying singular models was to use a deep result in algebraic geometry known as Hironaka’s Resolution of Singularities. This result allows him to reparametrize the model in a normal form so that central limit theorems can be applied. In the second half of the talk, we discuss new algebraic methods in which we define fiber ideals for discrete/Gaussian models. We show that the key to understanding a singular model lies in monomializing its fiber ideal.
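
    A toy computation (our illustration, not from the talk) of what the normal form buys: once the Kullback-Leibler function is monomial, say K(w₁, w₂) = w₁²w₂², its zeta function factorizes and its largest pole can be read off directly,

```latex
\zeta(z) \;=\; \int_{[0,1]^2} \bigl(w_1^2 w_2^2\bigr)^{z} \, dw_1 \, dw_2
        \;=\; \Bigl(\int_0^1 w^{2z} \, dw\Bigr)^{2}
        \;=\; \frac{1}{(2z+1)^{2}} .
```

    The largest pole z = -1/2 has order 2, so the real log canonical threshold is λ = 1/2 with multiplicity m = 2, and the Bayes free energy expands as Fₙ = nSₙ + λ log n − (m − 1) log log n + Oₚ(1), in place of the (d/2) log n = log n term a regular two-parameter model would give.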