Tuesday, October 31, 2017. 12:00 PM. NSH 3305.
David Abel -- Abstraction and Lifelong Reinforcement Learning
Abstract: Lifelong Reinforcement Learning (RL) presents a diversity of challenges. Agents must effectively transfer knowledge across tasks while simultaneously addressing exploration, credit assignment, and generalization. Abstraction can help overcome these hurdles by compressing the state space or empowering the action space of a learning agent, thereby reducing the computational and statistical burdens of learning. In this talk, I summarize our new results on the effect of abstractions on lifelong RL. First, we introduce a new class of value-preserving state abstractions whose optimal form can be computed efficiently, in contrast to existing NP-hardness results. Second, we provide a generic sample bound for computing high-confidence state abstractions in the lifelong setting. Third, we show experimentally that, on their own, state abstractions offer only marginal improvements to lifelong learning, but when paired with action abstractions they can enable efficient learning. Further, joint state-action abstractions induce a closed operator on representations, thereby yielding a simple recipe for constructing and analyzing hierarchies for RL.
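
To make the notion of a value-preserving state abstraction concrete, here is a minimal Python sketch of one standard instance from the abstraction literature: an approximate Q*-irrelevance abstraction, which aggregates states whose optimal Q-values agree within epsilon on every action. The q_star table, the epsilon threshold, and the build_state_abstraction helper are illustrative assumptions, not the specific class of abstractions presented in the talk.

    # Sketch of an approximate Q*-irrelevance state abstraction: states are
    # grouped whenever their optimal Q-values agree within epsilon on every
    # action. The q_star table and epsilon are illustrative assumptions.

    def build_state_abstraction(q_star, epsilon):
        """Greedily cluster states of q_star (state -> {action: Q*-value})
        so that clustered states have per-action Q*-values within epsilon."""
        clusters = []  # each cluster is a list of ground states
        for s, qs in q_star.items():
            for cluster in clusters:
                rep = q_star[cluster[0]]  # compare against the representative
                if all(abs(qs[a] - rep[a]) <= epsilon for a in qs):
                    cluster.append(s)
                    break
            else:
                clusters.append([s])
        # phi maps each ground state to its abstract-state index
        return {s: i for i, cluster in enumerate(clusters) for s in cluster}

    # Toy example: three states, two actions; s0 and s1 collapse together.
    q_star = {
        "s0": {"left": 1.00, "right": 0.50},
        "s1": {"left": 1.02, "right": 0.49},
        "s2": {"left": 0.10, "right": 0.90},
    }
    print(build_state_abstraction(q_star, epsilon=0.05))
    # -> {'s0': 0, 's1': 0, 's2': 1}

The resulting mapping phi collapses the ground state space, so any learner run over the abstract states faces a smaller problem while (approximately) preserving optimal values; this is the kind of computational and statistical saving the abstract alludes to.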