Off-Policy Confidence Interval Estimation with Confounded Markov Decision Process
Rui Song · Feb 4, 2022
Time: 15:30-16:30 (Montreal time)
https://mcgill.zoom.us/j/83436686293?pwd=b0RmWmlXRXE3OWR6NlNIcWF5d0dJQT09
Meeting ID: 834 3668 6293
Passcode: 12345
Abstract:
In this talk, we consider constructing a confidence interval for a target policy’s value offline, based on pre-collected observational data, in infinite-horizon settings. Most existing works assume that no unmeasured variables confound the observed actions. This assumption, however, is likely to be violated in real applications such as healthcare and technological industries. We show that, given some auxiliary variables that mediate the effect of actions on the system dynamics, the target policy’s value is identifiable in a confounded Markov decision process. Based on this result, we develop an efficient off-policy value estimator that is robust to potential model misspecification and provides rigorous uncertainty quantification.
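For background intuition on the standard (unconfounded) setting that the talk generalizes, the following is a minimal sketch of off-policy value estimation with a normal-approximation confidence interval, using per-decision importance sampling on a toy two-state MDP invented here for illustration. It assumes the behavior policy is known and that there is no unmeasured confounding, which is precisely the assumption the talk relaxes; it is not the estimator presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-state, 2-action MDP (hypothetical, for illustration only).
n_states, n_actions, gamma, horizon = 2, 2, 0.9, 30

# Behavior and target policies: pi[s, a] = P(a | s).
behavior = np.array([[0.6, 0.4], [0.5, 0.5]])
target = np.array([[0.5, 0.5], [0.6, 0.4]])

# Transition kernel P[s, a, s'] and reward R[s, a].
P = np.array([[[0.7, 0.3], [0.2, 0.8]],
              [[0.4, 0.6], [0.9, 0.1]]])
R = np.array([[1.0, 0.0], [0.0, 1.0]])


def ope_is(n_traj):
    """Per-decision importance-sampling estimate of the target policy's
    discounted value, with a 95% normal-approximation confidence interval."""
    estimates = np.empty(n_traj)
    for i in range(n_traj):
        s, rho, val = 0, 1.0, 0.0
        for t in range(horizon):
            a = rng.choice(n_actions, p=behavior[s])
            rho *= target[s, a] / behavior[s, a]  # cumulative IS weight
            val += (gamma ** t) * rho * R[s, a]   # per-decision IS term
            s = rng.choice(n_states, p=P[s, a])
        estimates[i] = val
    mean = estimates.mean()
    half = 1.96 * estimates.std(ddof=1) / np.sqrt(n_traj)
    return mean, (mean - half, mean + half)


mean, (lo, hi) = ope_is(2000)
print(f"estimate {mean:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

When actions are confounded by unmeasured variables, the importance weights above are no longer identifiable from the data, which is what motivates the mediator-based identification result described in the abstract.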