Date: 2017-11-10
Time: 15:30-16:30
Location: BURN 1205
Abstract:
One of the defining properties of deep learning is that models are chosen to have many more parameters than available training data. In light of this capacity for overfitting, it is remarkable that simple algorithms like SGD reliably return solutions with low test error. One roadblock to explaining these phenomena in terms of implicit regularization, structural properties of the solution, and/or easiness of the data is that many learning bounds are quantitatively vacuous when applied to networks learned by SGD in this “deep learning” regime. Logically, in order to explain generalization, we need nonvacuous bounds.

We return to an idea by Langford and Caruana (2001), who used PAC-Bayes bounds to compute nonvacuous numerical bounds on the generalization error of stochastic two-layer, two-hidden-unit neural networks via a sensitivity analysis. By optimizing the PAC-Bayes bound directly, we extend their approach and obtain nonvacuous generalization bounds for deep stochastic neural network classifiers with millions of parameters trained on only tens of thousands of examples. We connect our findings to recent and older work on flat minima and MDL-based explanations of generalization.

Time permitting, I will discuss recent work on computing even tighter generalization bounds associated with a learning algorithm introduced by Chaudhari et al. (2017), called Entropy-SGD. We show that Entropy-SGD indirectly optimizes a PAC-Bayes bound, but does so by optimizing the “prior” term, violating the hypothesis that the prior be independent of the data. We show how to fix this defect using differential privacy. The result is a new PAC-Bayes bound for data-dependent priors, which we show, up to some approximations, delivers even tighter generalization bounds.

Joint work with Gintare Karolina Dziugaite, based on https://arxiv.org/abs/1703.11008
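
For reference, one standard form of the PAC-Bayes theorem (McAllester; the kl form is often attributed to Langford and Seeger) that underlies the bounds discussed above is the following. The notation here is generic, and the exact logarithmic term and the surrogate objective used in the paper may differ, so read this as an illustrative statement rather than the precise bound optimized in the work. With probability at least 1 − δ over an i.i.d. sample of m examples, simultaneously for all distributions Q over classifiers,

\[
  \mathrm{kl}\!\left(\hat{e}_Q \,\middle\|\, e_Q\right) \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{m}{\delta}}{m-1},
\]

where P is a “prior” over classifiers fixed independently of the training data, \hat{e}_Q and e_Q are the empirical and true error rates of the randomized (Gibbs) classifier that predicts with a fresh draw from Q, and kl denotes the KL divergence between Bernoulli distributions. Optimizing the bound directly, as described above, then amounts to minimizing a differentiable surrogate of the right-hand side together with the empirical error over the parameters of Q, for example a Gaussian distribution over the network weights.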
Speaker:
Daniel Roy is an Assistant Professor in the Department of Statistical Sciences at the University of Toronto.