Several multiagent domains have been mentioned throughout the course of this survey, including design, planning, entertainment, games, air-traffic control, air combat, personal assistants, load-balancing, and robotic leg control. This section presents a single domain that embodies most multiagent issues.
Robotic soccer is a particularly good domain for studying MAS. It has been gaining popularity in recent years, with international competitions in Korea in 1996 [71] and in Japan in 1997 [72]. If recognized as a standard testbed, it can be used to evaluate different MAS techniques in a straightforward manner: teams implemented with different techniques can play against each other.
Although the pursuit domain serves us well for purposes of illustration, robotic soccer is much more complex and interesting as a general testbed for MAS. Even with many predators and several prey, the pursuit domain is not complex enough to simulate the real world. Although robotic soccer is a game, it is by no means a ``toy domain.'' As described below, most real-world complexities are retained. A key aspect of soccer's complexity is the need for agents not only to control themselves, but also to control the ball, which is a passive part of the environment.
In terms of the reasons to use MAS presented in Table 3, robotic soccer systems usually require separate agents for controlling the separate players, and they can benefit from the parallelism, robustness, and simpler programming of MAS. Systems whose players have onboard sensors are necessarily multiagent, since no single agent has access to all of the players' sensory inputs. Some competitions also stipulate in their rules that the robots must be controlled by separate agents. At the very least, the two teams must be controlled by separate agents. Even teams that could theoretically be controlled by a single agent stand to gain by using MAS. By processing the sensory inputs of the different players separately, multiple agents can control their players in parallel, perhaps contending with different tasks on the field. One player might be in position to defend its goal, while another is preparing an offensive attack. These players need not be controlled by the same agent: they can go about their tasks in parallel. Furthermore, if any of the agents fails for some reason (as often happens in real robotic systems), the other agents can attempt to compensate and continue playing. Finally, it is empirically much easier to program a single agent per player than it is to control an entire team centrally.
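The one-agent-per-player decomposition argued for above can be sketched as follows. This is a minimal illustration, not code from any actual robotic soccer system: the class, role names, and world representation are all hypothetical. Each agent senses only its own local slice of the world and acts in its own thread, so a defender and an attacker pursue their different tasks in parallel, and the failure of one thread does not stop the others.

```python
import threading

class PlayerAgent:
    """One autonomous agent per player: each processes only its own
    local sensory input and selects its own action (names hypothetical)."""
    def __init__(self, name, role):
        self.name = name
        self.role = role          # e.g. "defender" or "attacker"
        self.action = None

    def sense(self, world):
        # Each agent sees only its own local view, not the full state.
        return world.get(self.name, {})

    def act(self, world):
        percept = self.sense(world)
        # Trivial role-dependent policies, for illustration only.
        if self.role == "defender":
            self.action = "guard_goal"
        else:
            self.action = "advance_ball"

# Two players contend with different tasks in parallel threads; if one
# agent fails, the remaining agents simply continue playing.
world = {"p1": {"ball_dist": 30.0}, "p2": {"ball_dist": 5.0}}
agents = [PlayerAgent("p1", "defender"), PlayerAgent("p2", "attacker")]
threads = [threading.Thread(target=a.act, args=(world,)) for a in agents]
for t in threads:
    t.start()
for t in threads:
    t.join()
```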
Robotic soccer can be played either with real robots or in a simulator. Although real robotic systems are more costly and time consuming to develop than simulations, a number of groups have built them. The first robotic soccer system was the Dynamo system [73]. Sahota et al. built a 1 vs. 1 version of the game. Asada et al. have used vision-based RL with their soccer-playing robots [74]. Achim et al. discuss some of the robotic issues involved in building robotic soccer players [75].
Some robotic issues can only be studied in the real-world instantiation, but there are also many issues that can be studied in simulation. A particularly good simulator for this purpose is the ``soccerserver'' developed by Noda [76] and pictured in Figure 13.
Figure 13: The soccerserver system
This simulator is realistic in many ways: the players' vision is limited; the players can communicate by posting to a blackboard that is visible to all players; all players are controlled by separate processes; each player has 10 teammates and 11 opponents; each player has limited stamina; actions and sensors are noisy; and play occurs in real time. The simulator provides a domain and supports users who wish to build their own agents. Furthermore, evaluation of agents is straightforward: they can compete against each other, or perhaps against standard teams. Thus robotic soccer satisfies Decker's criteria for DAI testbeds [77].
The main goal of any testbed is to facilitate the trial and evaluation of ideas that have promise in the real world. A wide variety of MAS issues can be studied in simulated robotic soccer. In fact, of the seventeen MAS issues listed in Table 2, only one cannot be feasibly studied in the soccer simulator. All of the issues associated with heterogeneous non-communicating and heterogeneous communicating agents can be isolated. Within the homogeneous non-communicating scenario, only the ``local or global perspective'' issue does not arise: since the players have limited views of the world, they inherently have local rather than global perspectives.
Homogeneous non-communicating MAS can be studied using soccerserver by fixing the behavior of the opposing team and populating the team being studied with identical, mute players. To keep within the homogeneous agent scenario, the opponents must not be modeled as agents. In this context, the players can be reactive or deliberative to any degree. An extremely reactive agent might simply look for the ball and move straight at it, shooting whenever possible. At this extreme, the players may or may not have any knowledge that they are part of a team. On the other hand, players might model each other, thus enabling deliberative reasoning about whether to approach the ball or whether to move to a different part of the field in order to defend or to receive a pass. With players modeling each other, they may also reason about how to affect each other's behaviors in this inherently dynamic environment.
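The extremely reactive policy described above can be written in a few lines. This is a sketch under assumed conventions, not the soccerserver interface: the goal coordinates, shooting range, and action tuples are all hypothetical.

```python
import math

GOAL = (52.5, 0.0)        # hypothetical coordinates of the opponent goal
SHOOT_RANGE = 20.0        # hypothetical maximum shooting distance

def reactive_policy(player_pos, ball_pos):
    """Minimal reactive player: ignore teammates entirely, move
    straight at the ball, and shoot whenever possible."""
    px, py = player_pos
    bx, by = ball_pos
    if math.dist(player_pos, ball_pos) > 1.0:
        # Not at the ball yet: dash straight toward it.
        return ("dash", math.atan2(by - py, bx - px))
    if math.dist(ball_pos, GOAL) <= SHOOT_RANGE:
        # At the ball and within range of the goal: shoot.
        return ("kick", math.atan2(GOAL[1] - by, GOAL[0] - bx))
    # Otherwise push the ball toward the goal.
    return ("dribble", math.atan2(GOAL[1] - py, GOAL[0] - px))
```

Note that the policy needs no model of teammates or opponents at all, which is what places it at the purely reactive end of the spectrum.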
Robotic soccer is also useful for studying the issues associated with heterogeneous non-communicating agents. Since each player has several teammates with the same global goal and several opponents with the diametrically opposed goal, each player is both benevolent and competitive at the same time. This combination of collaborative and adversarial reasoning is a major feature of the domain. When trying to collaborate, players' actions are usually interdependent: to execute a successful pass, both the passer and the receiver must execute the appropriate actions. Thus modeling each other for the purpose of coordination is helpful. Social conventions, such as programmed notions of when a given agent will pass or which agents should play defense, can also help coordination. Since communication is still not allowed, the players must have a reliable method for filling the different team roles needed on a soccer team (defense, offense, goalie). Ideally, the players are able to switch roles during the course of a game when appropriate. Finally, if the teams are learning during the course of a single game or over several games, all the issues of evolving agents, including the ``arms race'' possibility and the credit/blame problem, arise.
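One way to fill team roles reliably without communication is a shared convention: every player runs the same deterministic rule on its estimate of all teammates' positions, so the players arrive at consistent role assignments independently. The sketch below is one illustrative such convention (the goal-line coordinate and role names are assumptions, not part of any published system); because the rule is re-evaluated as positions change, players naturally switch roles during a game.

```python
def assign_roles(positions, n_defenders=2):
    """Shared convention, computed independently by every player:
    the player nearest its own goal line plays goalie, the next
    n_defenders play defense, and the rest play offense.
    `positions` maps player id -> (x, y); own goal at x = -52.5
    (hypothetical field coordinates)."""
    own_goal_x = -52.5
    # Deterministic ordering by distance from the own goal line, so
    # all players compute the same assignment without communicating.
    order = sorted(positions,
                   key=lambda pid: abs(positions[pid][0] - own_goal_x))
    roles = {}
    for i, pid in enumerate(order):
        if i == 0:
            roles[pid] = "goalie"
        elif i <= n_defenders:
            roles[pid] = "defender"
        else:
            roles[pid] = "attacker"
    return roles
```

In practice each player only has a noisy local estimate of the positions, so agreement is approximate rather than guaranteed; the convention merely makes conflicts unlikely.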
Robotic soccer is perhaps best suited for the study of the most complex multiagent scenario: heterogeneous communicating agents. Since the agents can indeed communicate, the full potential of the domain is realized in this scenario. Since players post messages to the blackboard, they must share a language in order to understand each other. Protocols are also needed for commitment to team plays: the passer and receiver in a pass play must both agree to execute the pass. For more complex team plays, several players may need to commit to participate. But then the issue arises of how single-mindedly they must adhere to the committed play: when may they react to more pressing situations and ignore the commitment? For any team play, including a simple pass, timing is very important in such a real-time scenario. Thus, players must coordinate their actions very carefully. Finally, speech acts are particularly interesting in an environment that is both collaborative and adversarial. If the opponents can understand the same language, a planned utterance can affect the knowledge of both teammates and opponents. The utility of communication must be carefully considered and the possibility of lying in order to fool the opponent arises. Therefore, planned communicative acts, along with most of the other issues from Table 2, turn up in robotic soccer.
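A commitment protocol for the simple pass play described above might look like the following sketch. It assumes the blackboard-style communication from the simulator description; the message format and function names are illustrative inventions, and the receiver is free to decline, reflecting the question of how single-mindedly agents must adhere to a committed play.

```python
class Blackboard:
    """Toy blackboard visible to all players; the message tuples used
    here are purely illustrative, not an actual simulator protocol."""
    def __init__(self):
        self.messages = []

    def post(self, msg):
        self.messages.append(msg)

    def read(self):
        return list(self.messages)

def propose_pass(board, passer, receiver):
    """Passer proposes a pass play to a specific receiver."""
    board.post(("propose_pass", passer, receiver))

def respond(board, receiver, accept=True):
    """Receiver answers any proposal addressed to it; it may decline
    if a more pressing situation (e.g. defending the goal) arose."""
    for msg in board.read():
        if msg[0] == "propose_pass" and msg[2] == receiver:
            board.post(("accept" if accept else "reject", receiver, msg[1]))

def pass_committed(board, passer, receiver):
    """Both parties are committed only once the proposal has been
    posted and the receiver has accepted it."""
    msgs = board.read()
    return (("propose_pass", passer, receiver) in msgs and
            ("accept", receiver, passer) in msgs)
```

Note that because all players see the blackboard, opponents who share the language also learn of the planned pass, which is exactly why the utility of each utterance, and even deliberate deception, must be weighed.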
As well as addressing most of the issues inherent in MAS, robotic soccer is an excellent domain for multiagent Machine Learning. In another soccer simulator, Stone and Veloso use Memory-based Learning to allow a player to learn when to shoot and when to pass the ball [78]. They then use Neural Networks to teach a player to shoot a moving ball into the goal [79]. They use similar techniques in the soccerserver system as well. Matsubara et al. also use a Neural Network to allow a player to learn when to shoot and when to pass in the soccerserver system [80]. Once low-level behaviors have been developed, the opportunity to use ML techniques at the strategy level is particularly exciting.
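To give the flavor of the memory-based approach to the shoot-or-pass decision, a nearest-neighbor sketch follows. The state features, memory format, and parameters are illustrative assumptions, not the representation used in [78]: past episodes are stored as (state, action, success) triples, and a new decision is the majority vote of the k most similar successful episodes.

```python
import math

def knn_decide(memory, state, k=3):
    """Memory-based shoot/pass decision (illustrative): `memory` is a
    list of (state, action, success) episodes, where a state is a
    hypothetical feature pair (distance_to_goal, nearest_opponent_dist).
    Vote among the k successful episodes nearest to `state`."""
    successes = [(s, a) for (s, a, ok) in memory if ok]
    nearest = sorted(successes, key=lambda e: math.dist(e[0], state))[:k]
    votes = [a for (_, a) in nearest]
    return max(set(votes), key=votes.count)
```

A learning player would keep appending the outcome of each shot or pass to `memory`, so the decision boundary sharpens with experience, and the credit/blame problem surfaces immediately in how "success" is judged for each episode.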
The advantages of robotic soccer as a testbed for MAS are summarized in Table 9.
Table 9: Advantages of (simulated) robotic soccer as a MAS testbed