Abstract
We consider an MDP setting in which the reward function is allowed to
change during each time step of play (possibly in an adversarial
manner), yet the dynamics remain fixed. Similar to the experts
setting, we address the question of how well an agent can do when
compared to the reward achieved under the best stationary policy over
time. We provide efficient algorithms, which have regret bounds
with no dependence on the size of the state space. Instead, these
bounds depend only on a certain horizon time of the process and
logarithmically on the number of actions. We also show that when the
dynamics are also allowed to change over time, the problem becomes
computationally hard.
Joint work with Sham Kakade and Yishay Mansour.
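For concreteness, here is a minimal Python sketch of the setting described above: a small tabular MDP with fixed dynamics whose reward function changes every step, a simple Hedge (multiplicative-weights) distribution over actions kept at each state, and a crude comparison of its cumulative reward against every stationary deterministic policy in hindsight. The state/action sizes, learning rate, random dynamics, and per-state update rule are illustrative assumptions only, not the algorithm or the horizon-dependent regret bound from the talk.

    # Illustrative sketch only (not the talk's algorithm): fixed dynamics P,
    # a reward function r_t that may change every step, and a per-state Hedge
    # learner compared against the best stationary deterministic policy.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    S, A, T = 4, 3, 2000          # states, actions, horizon (assumed sizes)
    eta = np.sqrt(np.log(A) / T)  # Hedge learning rate

    # Fixed dynamics: P[s, a] is a distribution over next states.
    P = rng.dirichlet(np.ones(S), size=(S, A))

    # Per-state Hedge weights over actions.
    W = np.ones((S, A))

    def rollout(policy_fn, rewards):
        """Run one trajectory of length T from state 0 under policy_fn."""
        s, total = 0, 0.0
        for t in range(T):
            a = policy_fn(t, s)
            total += rewards[t][s, a]
            s = rng.choice(S, p=P[s, a])
        return total

    # The changing rewards are stubbed as an arbitrary pre-drawn sequence.
    rewards = [rng.random((S, A)) for _ in range(T)]

    def hedge_policy(t, s):
        p = W[s] / W[s].sum()
        a = rng.choice(A, p=p)
        # The reward vector r_t(s, .) at the visited state is then revealed
        # and used to update that state's weights.
        W[s] *= np.exp(eta * rewards[t][s])
        return a

    learner_reward = rollout(hedge_policy, rewards)

    # Best stationary deterministic policy in hindsight (brute force, A^S).
    best = max(
        rollout(lambda t, s, pi=pi: pi[s], rewards)
        for pi in itertools.product(range(A), repeat=S)
    )
    print(f"learner: {learner_reward:.1f}  best stationary: {best:.1f}  "
          f"regret: {best - learner_reward:.1f}")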