Date: 2022-10-07
Time: 15:30-16:30 (Montreal time)
https://mcgill.zoom.us/j/83436686293?pwd=b0RmWmlXRXE3OWR6NlNIcWF5d0dJQT09
Meeting ID: 834 3668 6293
Passcode: 12345
Abstract:
High-dimensional unstructured data such as images or sensor readings can often be collected cheaply in experiments, but they are challenging to use in a causal inference pipeline without extensive engineering and domain knowledge to extract the underlying latent factors. The long-term goal of causal representation learning is to find appropriate assumptions and methods to disentangle latent variables and learn the causal mechanisms that explain a system’s behaviour. In this talk, I’ll present results from a series of recent papers that describe how we can leverage assumptions about a system’s causal mechanisms to provably disentangle latent factors. I will also discuss the limitations of a commonly used injectivity assumption and present a hierarchy of settings that relax this assumption.
Speaker:
Jason Hartford is currently a postdoc at Mila with Yoshua Bengio. He previously completed his PhD at UBC with Kevin Leyton-Brown. His research focuses on using deep learning for causal inference and on designing deep network architectures for permutation-invariant data.
McGill Statistics Seminar schedule: https://mcgillstat.github.io/