Wen Sun (Cornell University)
https://simons.berkeley.edu/talks/tbd...
Quantifying Uncertainty: Stochastic, Adversarial, and Beyond
Offline Reinforcement Learning (RL) is a learning paradigm where the RL agent learns only from a pre-collected static dataset and cannot interact further with the environment. Offline RL is a promising approach for safety-critical applications where randomized exploration is not safe. In this talk, we study offline RL in large-scale settings with rich function approximation. In the first part of the talk, we study the generalization properties of offline RL and give a general model-based offline RL algorithm that provably generalizes in large-scale Markov Decision Processes. Our approach is also robust in the sense that as long as there is a high-quality policy whose traces are covered by the offline data, our algorithm will find it. In the second part of the talk, we consider the offline Imitation Learning (IL) setting, where the RL agent has an additional set of high-quality expert demonstrations. In this setting, we give an IL algorithm that learns with polynomial sample complexity and achieves state-of-the-art performance on standard continuous-control robotics benchmarks.
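To make the "pessimism under limited coverage" idea concrete, here is a minimal illustrative sketch, not the algorithm presented in the talk: a toy tabular model-based offline RL loop that fits a model from a fixed dataset and penalizes state-action pairs with little data before planning. All sizes, penalties, and the behavior policy below are hypothetical choices made only for illustration.

```python
# Illustrative sketch only (not the speaker's algorithm): tabular model-based
# offline RL with a count-based pessimism penalty. All quantities below are
# hypothetical and chosen only to demonstrate the coverage/pessimism idea.
import numpy as np

rng = np.random.default_rng(0)
S, A = 5, 2            # hypothetical numbers of states and actions
gamma = 0.95           # discount factor

# Unknown true MDP (used here only to simulate an offline dataset).
true_P = rng.dirichlet(np.ones(S), size=(S, A))   # transition probabilities
true_R = rng.uniform(0, 1, size=(S, A))           # rewards

# Pre-collected static dataset of (s, a, r, s') tuples from a behavior policy.
dataset, s = [], 0
for _ in range(2000):
    a = rng.integers(A)                           # uniform behavior policy
    s_next = rng.choice(S, p=true_P[s, a])
    dataset.append((s, a, true_R[s, a], s_next))
    s = s_next

# Fit an empirical model (dynamics and rewards) from the offline data.
counts = np.zeros((S, A))
trans = np.zeros((S, A, S))
R_hat = np.zeros((S, A))
for s, a, r, s_next in dataset:
    counts[s, a] += 1
    trans[s, a, s_next] += 1
    R_hat[s, a] += r
P_hat = np.ones((S, A, S)) / S                    # uniform prior where no data
covered = counts > 0
P_hat[covered] = trans[covered] / counts[covered, None]
R_hat[covered] /= counts[covered]

# Pessimistic value iteration: subtract a bonus that is large where the
# offline data provides poor coverage, so the planner avoids those regions.
bonus = 1.0 / np.sqrt(np.maximum(counts, 1))
V = np.zeros(S)
for _ in range(200):
    Q = R_hat - bonus + gamma * (P_hat @ V)       # pessimistic Bellman backup
    V = Q.max(axis=1)

print("pessimistic greedy policy:", Q.argmax(axis=1))
```

The sketch only conveys the robustness intuition from the abstract: if a high-quality policy stays within the well-covered part of the dataset, its actions incur a small penalty and the pessimistic planner can recover it, while poorly covered actions are avoided.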