Time: June 9, 2025 (Monday), 15:30–17:00
Venue: Room A1514, Science Building, Putuo Campus
Speaker: 于曙光, Ph.D. student, Shanghai University of Finance and Economics
Host: 谌自奇, Researcher, East China Normal University
Title: Two-way Deconfounder for Off-policy Evaluation in Causal Reinforcement Learning
Abstract:
This paper studies off-policy evaluation (OPE) in the presence of unmeasured confounders. Inspired by the two-way fixed effects regression model widely used in the panel data literature, we propose a two-way unmeasured confounding assumption to model the system dynamics in causal reinforcement learning. We then develop a two-way deconfounder algorithm that uses a neural tensor network to simultaneously learn both the unmeasured confounders and the system dynamics, from which a model-based estimator can be constructed for consistent policy value estimation. We illustrate the effectiveness of the proposed estimator through theoretical results and numerical experiments.
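To give a feel for the two-way decomposition behind the talk, here is a hypothetical minimal sketch, not the paper's algorithm: the neural tensor network is replaced by a simple additive linear model with scalar states and actions, the two "ways" are per-subject and per-time latent effects (as in two-way fixed effects regression), and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel: N subjects observed over T periods, scalar states/actions.
N, T = 20, 30
u_true = rng.normal(size=N)            # unmeasured subject-level confounder
v_true = rng.normal(size=T)            # unmeasured time-level confounder
s = rng.normal(size=(N, T))            # observed states
# Actions depend on the subject confounder, so an estimator that
# ignores u would be biased.
a = (u_true[:, None] + rng.normal(size=(N, T)) > 0).astype(float)
s_next = (0.5 * s + 0.3 * a + u_true[:, None] + v_true[None, :]
          + 0.1 * rng.normal(size=(N, T)))

# Deconfounding fit: jointly learn per-subject effects u, per-time
# effects v, and dynamics coefficients (alpha, beta) by gradient
# descent on the mean squared one-step prediction error.
u, v = np.zeros(N), np.zeros(T)
alpha, beta, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    err = alpha * s + beta * a + u[:, None] + v[None, :] - s_next
    alpha -= lr * np.mean(err * s)
    beta -= lr * np.mean(err * a)
    u -= lr * err.mean(axis=1)         # subject-level "fixed effects"
    v -= lr * err.mean(axis=0)         # time-level "fixed effects"

mse = float(np.mean(
    (alpha * s + beta * a + u[:, None] + v[None, :] - s_next) ** 2))
print(f"alpha={alpha:.2f} beta={beta:.2f} mse={mse:.3f}")
```

Once such a model of the system dynamics is learned, a model-based estimator can simulate trajectories under a target policy to estimate its value; the paper's contribution is doing this with a neural tensor network and proving consistency, which this linear toy omits.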
Speaker bio:
于曙光 is a Ph.D. student at the School of Statistics and Data Science, Shanghai University of Finance and Economics, advised by Associate Professor 周帆. His research interests include reinforcement learning, causal inference, random forests, and model diagnostics. His work has been published at top machine learning conferences such as NeurIPS and ICLR.