
Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning?【IIIS-Haihua Frontier Seminar Series】

Speaker: Ruosong Wang, Carnegie Mellon University
Time: 2019-12-30, 14:00-15:00
Venue: Block D, 15th Floor, Science & Technology Mansion, Tsinghua Science Park

Abstract:

Modern deep learning methods provide an effective means to learn good representations. However, is a good representation by itself sufficient for sample-efficient reinforcement learning? This question is largely unexplored: the existing literature mainly focuses on conditions that permit efficient reinforcement learning, with little understanding of which conditions are necessary. This work provides strong negative results for reinforcement learning methods with function approximation in which a good representation (feature extractor) is known to the agent, focusing on natural representational conditions relevant to value-based learning and policy-based learning. For value-based learning, we show that even if the agent has a highly accurate linear representation, it still needs to sample exponentially many trajectories in order to find a near-optimal policy. For policy-based learning, we show that even if the agent's linear representation can perfectly represent the optimal policy, it still needs to sample exponentially many trajectories in order to find a near-optimal policy. These lower bounds highlight that having a good (value-based or policy-based) representation is, in and of itself, insufficient for sample-efficient reinforcement learning.
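For concreteness, the two representational conditions in the abstract can be sketched as follows; the notation here (feature map \(\phi\), weight vector \(\theta\), approximation error \(\delta\), feature dimension \(d\)) is illustrative and the exact quantifiers and constants in the talk may differ. The value-based condition asks that the optimal Q-function be nearly linear in the given features, and the policy-based condition asks that the optimal policy be a greedy policy with respect to a linear score:

\[
\text{(value-based)}\quad \sup_{s,a}\,\bigl|\,Q^{*}(s,a) - \theta^{\top}\phi(s,a)\,\bigr| \le \delta,
\qquad
\text{(policy-based)}\quad \pi^{*}(s) \in \operatorname*{arg\,max}_{a}\ \theta^{\top}\phi(s,a)\ \ \text{for all } s,
\]

where \(\phi(s,a) \in \mathbb{R}^{d}\) is known to the agent. The lower bounds discussed in the talk state that even under such conditions, in the worst case any algorithm must sample exponentially many trajectories before it can return a near-optimal policy.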

 

Short Bio:

Ruosong Wang is currently a third-year Ph.D. student at Carnegie Mellon University, advised by Prof. Ruslan Salakhutdinov. He received his B.Eng. from IIIS, Tsinghua University. He has broad interests in the theory and applications of modern machine learning; his recent research focuses on theoretical foundations for reinforcement learning and deep learning.