Related Work

A ground-breaking system for Robotic Soccer, and the one that served as the inspiration for our work, is the Dynamo System developed at the University of British Columbia [18]. This system was designed to support several robots per team, but most work has been done in a 1 vs. 1 scenario. Sahota used this system to introduce a decision-making strategy called reactive deliberation, which was used to choose among seven hard-wired behaviors [17]. Our system differs from the Dynamo system in several ways, most notably in that teams consist of several robots, thus necessitating the development of cooperative behaviors. We also hope to do minimal hard-wiring, instead learning behaviors from the bottom up.

The Robotic Soccer system being developed in Asada's lab is very different from both the Dynamo system and our own [3, 25]. Asada's robots are larger and are equipped with on-board sensing capabilities. They have been used to develop low-level behaviors such as shooting and avoiding, as well as a reinforcement learning (RL) technique for combining behaviors [3, 25]. While the goals of this research are very similar to our own, the approach is different. Asada has developed a sophisticated robot system with many advanced capabilities, while we have chosen to focus on a simple, robust design that lets us concentrate our efforts on learning low-level behaviors and high-level strategies. We believe that both approaches are valuable for advancing the state of the art in robotic soccer research.

Although real robotic systems, such as those mentioned above and the many new ones being built for robotic soccer tournaments [22], are needed for studying certain robotic issues, it is often possible to conduct research more efficiently in a well-designed simulator. Several researchers have previously used simulated robotic soccer to study machine learning (ML) applications. Using the Dynasim soccer simulator [16, 17], Ford et al. used an RL approach with sensory predicates to learn to choose among low-level behaviors [5]. Using a simulator based closely upon the Dynasim system, Stone and Veloso used Memory-based Learning to allow a player to learn when to shoot and when to pass the ball [19]. They then used Neural Networks to teach a player to shoot a moving ball into the goal [21]. In the RoboCup Soccer Server, Matsubara et al. used a Neural Network to allow a player to learn when to shoot and when to pass [11] (as opposed to the Memory-based technique used by Stone and Veloso for a similar task). The work described in this article uses Neural Networks and Decision Trees to learn different behaviors in the RoboCup Soccer Server.
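To make the shoot-or-pass formulation concrete, the following sketch shows a minimal memory-based (1-nearest-neighbor) decision of the general kind used in [19]. It is purely illustrative: the two features (distance to the goal and the opening angle past the nearest defender), the stored examples, and the function names are our own assumptions, not details of any of the cited systems.

    # A minimal sketch of memory-based shoot/pass selection (hypothetical
    # features and data; not the implementation from the cited work).
    import math

    # Stored experiences: (feature vector, action that worked).
    memory = [
        ((2.0, 30.0), "shoot"),   # close to goal, wide open angle
        ((9.0, 5.0),  "pass"),    # far from goal, defender blocking
        ((4.0, 15.0), "shoot"),
        ((7.0, 40.0), "pass"),
    ]

    def decide(features):
        """Return the action of the nearest stored example (1-NN)."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        _, action = min(memory, key=lambda example: dist(example[0], features))
        return action

    print(decide((3.0, 25.0)))   # -> shoot, for this toy memory

A real player would use many more features and continually add new experiences to the memory; the point here is only the structure of the decision.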

A wide variety of MAS research is related to the layered learning approach espoused in this paper. Most significantly, Mataric uses Brooks' Subsumption Architecture [4] to build multiagent behaviors on top of a set of learned basis behaviors [10]. Mataric's basis behaviors are chosen to be necessary and sufficient for the learning task, while remaining as simple and robust as possible. Since Mataric's robots were to learn social behaviors such as flocking and foraging, they were equipped with basis behaviors such as the ability to follow each other and the ability to wander without running into obstacles. While our approach makes similar use of multiple behavior layers, we are more committed to using ML whenever possible in order to study the interactions between learned behaviors at different levels.
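As a rough illustration of building a higher-level behavior on top of basis behaviors, the sketch below composes three hypothetical primitives under a fixed-priority arbitration rule, loosely in the spirit of [10]. The behavior names, the percept encoding, and the priority ordering are assumptions made for illustration; they are not taken from Mataric's system.

    # A sketch of a layered behavior: simple basis behaviors arbitrated
    # by fixed priority (hypothetical primitives, not Mataric's code).

    def avoid_obstacles(percept):
        """Basis behavior: veer away if an obstacle is close, else defer."""
        return "turn_away" if percept.get("obstacle_close") else None

    def follow_teammate(percept):
        """Basis behavior: move toward a visible teammate, else defer."""
        return "move_toward_teammate" if percept.get("teammate_visible") else None

    def wander(percept):
        """Basis behavior: default exploratory motion; never defers."""
        return "move_randomly"

    def flock(percept):
        """Higher-level behavior: the first basis behavior that fires wins."""
        for behavior in (avoid_obstacles, follow_teammate, wander):
            action = behavior(percept)
            if action is not None:
                return action

    print(flock({"teammate_visible": True}))   # -> move_toward_teammate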

One of the learned behaviors described in this article involves players that assume different roles. Although the roles are fixed in the current implementation, the players will eventually need to change roles as a match progresses. Tambe discusses a framework in which agents can assume the roles of teammates in a helicopter-combat domain [23]. In a learning context, Prasad et al. have created design agents that can learn which role to fill [13]. We plan to combine role learning with dynamic role assumption as we progress to higher levels of learned behaviors (see Section 7).
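As a concrete (and purely hypothetical) illustration of dynamic role assumption, the sketch below reassigns roles greedily from a suitability score. The roles, the scoring function, and the greedy rule are our own assumptions; neither Tambe's framework [23] nor the learned role selection of Prasad et al. [13] is reproduced here.

    # A sketch of greedy dynamic role assignment (hypothetical roles and
    # suitability scores; not the mechanism of the cited frameworks).

    def assign_roles(players, roles, suitability):
        """Give each role, in order, to the best-fitting unassigned player."""
        assignment = {}
        unassigned = set(players)
        for role in roles:
            best = max(unassigned, key=lambda p: suitability(p, role))
            assignment[role] = best
            unassigned.remove(best)
        return assignment

    # Toy suitability: attackers should be far up the field (large x).
    x_position = {"p1": 10.0, "p2": 40.0, "p3": 75.0}
    def score(p, role):
        return x_position[p] if role == "attacker" else -x_position[p]

    print(assign_roles(x_position, ["attacker", "defender", "goalie"], score))
    # -> {'attacker': 'p3', 'defender': 'p1', 'goalie': 'p2'}

Re-running such an assignment as the match state changes is one simple way players could change roles as a match progresses.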

In addition to reasoning about the roles of teammates, Tambe's combat agents can also reason about the roles that opponents are playing in team behaviors [24]. By recognizing an opponent's action as part of a larger team action, an agent can more easily make sense of the individual opponent's behavior, with the goal of predicting the opponent's future actions. This work extends previous work aimed at having agents deduce other agents' intentions through observation [8].


