Abstract:
Learning how to adjust to an opponent's position is critical to the
success of intelligent agents collaborating on specific tasks in
adversarial environments. This paper
describes our work on developing methods to learn to choose an action
based on a continuous-valued state attribute indicating the position
of an opponent. We use a framework in which teams of agents compete
in a simulator of a game of robotic soccer. We introduce a
memory-based supervised learning strategy which enables an agent to
choose to pass or shoot in the presence of a defender. In our memory
model, training examples affect neighboring generalized learned
instances with different weights. We conduct experiments in which the
agent incrementally learns to approximate a function with a continuous
domain. We then investigate how the agent performs in
nondeterministic variations of the training situations. Our
experiments indicate that when the random variations fall within some
bound of the initial training, the agent performs better with some
initial training than from a tabula rasa.
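The memory model described above can be illustrated with a minimal sketch. It is an assumption of this sketch (not the paper's actual implementation) that the continuous attribute is the defender's position discretized into bins, that each training example updates neighboring bins with Gaussian-decaying weights, and that the agent picks the action ("pass" or "shoot") with the higher weighted success estimate; the class name, bin count, and kernel width are all hypothetical.

```python
import math

class MemoryModel:
    """Hypothetical sketch of a weighted memory over a continuous
    state attribute (e.g., a defender's position in [0, 1]).
    Each training example affects neighboring bins with weights
    that decay with distance, as the abstract describes."""

    def __init__(self, lo=0.0, hi=1.0, n_bins=20, width=0.1):
        self.lo, self.hi, self.n = lo, hi, n_bins
        self.width = width  # kernel width (assumed value)
        # per-action, per-bin accumulators: [weighted successes, total weight]
        self.stats = {a: [[0.0, 0.0] for _ in range(n_bins)]
                      for a in ("pass", "shoot")}

    def _center(self, i):
        # midpoint of bin i on the continuous attribute's range
        return self.lo + (i + 0.5) * (self.hi - self.lo) / self.n

    def train(self, x, action, success):
        # update every bin, weighted by a Gaussian kernel around x
        for i in range(self.n):
            w = math.exp(-((x - self._center(i)) / self.width) ** 2)
            s = self.stats[action][i]
            s[0] += w * (1.0 if success else 0.0)
            s[1] += w

    def predict(self, x):
        # choose the action with the higher estimated success rate at x
        i = min(self.n - 1,
                max(0, int((x - self.lo) / (self.hi - self.lo) * self.n)))

        def rate(action):
            succ, tot = self.stats[action][i]
            return succ / tot if tot > 0 else 0.5  # prior when untrained

        return max(("pass", "shoot"), key=rate)
```

Incremental training then amounts to repeated `train` calls as outcomes are observed, with `predict` improving as weighted evidence accumulates near each region of the attribute's domain.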
Keywords:
Robotic Soccer, Memory-Based Learning, Incremental Learning,
Adaptive Learning, Continuous-function Learning
Tech Report Number:
CMU-CS-95-222
Beating a Defender in Robotic Soccer:
Memory-Based Learning of a Continuous Function
Peter Stone and Manuela Veloso
Mon Dec 11 15:42:40 EST 1995