Reinforcement Learning for Agents with Many Sensors and Actuators Acting in Categorizable Environments
Josep M Porta
porta@science.uva.nl
IAS Group, Informatics Institute
University of Amsterdam
Kruislaan 403, 1098SJ, Amsterdam, The Netherlands
Enric Celaya
celaya@iri.upc.edu
Institut de Robòtica i Informàtica Industrial
Spanish Council of Scientific Research (CSIC)
Llorens i Artigas 4-6, 08028, Barcelona, Spain
Abstract:
In this paper, we address the problem of applying reinforcement learning to
agents that perceive the environment through many sensors and
that can perform parallel actions using many actuators, as is the case in
complex autonomous robots.
We argue that reinforcement learning can only be
successfully applied in this case if strong
assumptions are made about the characteristics of the environment in which
learning is performed, so that the relevant sensor readings and motor commands can
be readily identified.
The introduction of such assumptions
leads to strongly biased learning systems that may lose the
generality of traditional reinforcement-learning algorithms.
Along these lines, we observe that, in realistic situations, the reward
received by the robot depends only on a reduced subset of all the executed
actions, and that only a reduced subset of the sensor inputs (possibly different
in each situation and for each action) is relevant to predict the reward.
We formalize this property in the so-called categorizability assumption, and
we present an algorithm that
takes advantage of the categorizability of the environment,
reducing the learning time with respect to existing
reinforcement-learning algorithms.
Results of applying the algorithm to two realistic simulated robotic problems
(landmark-based navigation and gait generation for a six-legged robot)
are reported to validate our approach and to compare it with existing
flat and generalization-based reinforcement-learning approaches.
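As a rough illustration of what the categorizability assumption states, the following sketch uses our own shorthand, not the formal definition developed in the paper: s_1..s_n denote the sensor readings, a_1..a_m the elementary actions executed in parallel, r the reward, and I and J the indices of the relevant sensors and actions in the current situation.

% Illustrative shorthand only (not the paper's formal notation):
% the expected reward is assumed to be predictable from a small,
% situation-dependent subset of sensor readings and actions.
\[
  E\bigl[\, r \mid s_1,\dots,s_n,\; a_1,\dots,a_m \,\bigr]
  \;=\;
  E\bigl[\, r \mid \{s_i\}_{i \in I},\; \{a_j\}_{j \in J} \,\bigr],
  \qquad |I| \ll n, \quad |J| \ll m.
\]

The subsets I and J may differ in each situation and for each action; it is this locality that the algorithm exploits to shorten learning.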