Date: 2021-09-17
Time: 15:30-16:30 (Montreal time)
https://mcgill.zoom.us/j/83436686293?pwd=b0RmWmlXRXE3OWR6NlNIcWF5d0dJQT09
Meeting ID: 834 3668 6293
Passcode: 12345
Abstract:
In recent years, highly expressive machine learning models, i.e., models that can express rich classes of functions, have become more and more commonly used due to their success in both regression and classification tasks; examples include deep neural networks, kernel machines, and more. From the point of view of classical statistical theory (minimax theory), rich models tend to have a higher minimax rate, i.e., any estimator must have a high risk (a “worst-case scenario” error). Therefore, it seems that for modern models the classical theory may be too conservative and strict. In this talk, we consider the most popular procedure for the regression task, namely Empirical Risk Minimization with squared loss (ERM), and we analyze its minimal squared error in both the random and the fixed design settings, under the assumption of a convex family of functions; that is, the minimal squared error that ERM attains when estimating any function in our class, in both settings. In the fixed design setting, we show that the error is governed by the global complexity of the entire class. In contrast, in random design, ERM may only adapt to simpler models if the local neighborhoods around the regression function are nearly as complex as the class itself, a somewhat counter-intuitive conclusion. We provide sharp lower bounds for the performance of ERM for both Donsker and non-Donsker classes. This talk is based on joint work with Alexander Rakhlin.
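For concreteness, the following is a minimal LaTeX sketch of the setup described above; the notation (a convex class $\mathcal{F}$, a regression function $f^*$, sample size $n$, covariate law $P$) is assumed here for illustration and is not taken from the talk.

% Data: (x_1, y_1), ..., (x_n, y_n) with y_i = f^*(x_i) + noise, where f^* lies in a convex class F.
% The ERM estimator minimizes the empirical squared loss over the class:
\[
  \hat{f}_n \;=\; \operatorname*{arg\,min}_{f \in \mathcal{F}} \;
  \frac{1}{n} \sum_{i=1}^{n} \bigl( y_i - f(x_i) \bigr)^2
  \qquad \text{(ERM with squared loss)}
\]
% Fixed design: the squared error is measured on the observed design points.
\[
  \frac{1}{n} \sum_{i=1}^{n} \bigl( \hat{f}_n(x_i) - f^*(x_i) \bigr)^2
\]
% Random design: the squared error is measured over a fresh covariate x drawn from P.
\[
  \mathbb{E}_{x \sim P} \bigl( \hat{f}_n(x) - f^*(x) \bigr)^2
\]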
Speaker:
Gil Kur is a PhD student at MIT. He is currently a visiting scholar working with Professor Elliot Paquette. His research focuses on nonparametric statistics, high-dimensional statistics, and convex geometry.