We have been pursuing research in the development of teams of autonomous agents that need to act in adversarial environments. In these domains, a single agent cannot achieve the overall team goals on its own; goal achievement necessarily requires collaboration among the members of the team. We have used three different testbeds in the robotic soccer domain to pursue this investigation: a rich simulation environment using the RoboCup soccer server [18], small wheeled robots built in our own lab [19], and Sony's fully autonomous legged robots [22]. We participated in the RoboCup-98 competitions at ICMAS in Paris and came in first place in each of these three leagues.
Although the three platforms are all in the domain of robotic soccer, the technical challenges each presents for building effective multi-agent teams are quite different. In particular, for both the simulator and the small-sized robots we have been able to develop robust teamwork approaches. With the Sony autonomous legged robots, in contrast, we have so far concentrated primarily on an automated color calibration algorithm and on probabilistic localization, which allow individual agents to perceive the surrounding world effectively [22]. Teamwork is still minimal on this platform. In this paper, therefore, we focus on the team organization and teamwork of our CMUnited-98 simulation and small-robot teams.
One main focus of our research is on algorithms for collaboration between agents in a team. An agent, as a member of a team, needs to be capable of making individual autonomous decisions while, at the same time, ensuring that those decisions contribute towards the team goals.
We introduce a flexible team architecture in which agents are organized in formations and units. Each agent plays a role within a unit and within a formation. In many multi-agent systems, one or a few agents are assigned, or assign themselves, the specific task to be solved at a particular moment; we view these as the active agents. The other team members are passive, waiting until they are needed to achieve some task. Concretely, in the robotic soccer domain, we view the agent that goes to the ball as the active agent, while its teammates are passive. The active agent has a clearly assigned task and therefore a clear plan to follow (e.g., move towards the ball), but it is much less clear what the plan for the passive agents should be. Since the agents of a team will most likely need to collaborate, it seemed to us that the passive agents could not simply remain ``passive.''
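As a minimal illustration of this organization (a sketch only, not our implementation, which is described in Section 3), roles and formations can be represented as simple data structures, and the active agent can, for the purpose of this sketch, be taken to be the agent closest to the ball; all names and the selection criterion here are assumptions made for illustration.

```python
# Illustrative sketch of a role/formation organization with an
# active/passive distinction. The closest-to-the-ball criterion is a
# simplifying assumption, not the criterion used by CMUnited-98.
from dataclasses import dataclass
from math import hypot

@dataclass
class Role:
    name: str      # e.g. "left-defender"
    home_x: float  # nominal home position of this role in the formation
    home_y: float

@dataclass
class Formation:
    name: str      # e.g. "defensive formation"
    roles: list    # one Role per team member

def active_agent(agent_positions, ball):
    """Index of the agent treated as active in this sketch: the one
    closest to the ball. All other agents are considered passive."""
    bx, by = ball
    return min(range(len(agent_positions)),
               key=lambda i: hypot(agent_positions[i][0] - bx,
                                   agent_positions[i][1] - by))
```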
Our initial team architecture allowed the passive agents to flexibly vary their positions within their roles as a function of the position of the ball alone. In doing so, their goal was to anticipate where they would be most likely to find the ball in the near future. In our CMUnited-97 teams, both in simulation and with the real robots, we effectively used this ball-dependent role-adjustment strategy. It represents a first level of single-agent anticipation aimed at better individual goal achievement.
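The following sketch shows what such a ball-dependent adjustment can look like: a passive agent's target position is its role's home position shifted toward the ball, bounded by role-specific ranges. The attraction factor and range limits are hypothetical values chosen for illustration, not those used in CMUnited-97.

```python
# Illustrative sketch of ball-dependent role adjustment for a passive
# agent. All numeric parameters are hypothetical tuning values.
def ball_dependent_position(home, ball, attraction=0.4,
                            max_dx=15.0, max_dy=10.0):
    """Target position computed only from the role's home position and
    the ball position: shift toward the ball, clipped to a role range."""
    hx, hy = home
    bx, by = ball
    dx = max(-max_dx, min(max_dx, attraction * (bx - hx)))
    dy = max(-max_dy, min(max_dy, attraction * (by - hy)))
    return (hx + dx, hy + dy)

# Example: a role with home position (-10, 20) and the ball at (30, 0)
# yields the adjusted position (5.0, 12.0).
print(ball_dependent_position((-10.0, 20.0), (30.0, 0.0)))
```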
Recently, however, we investigated a more elaborate team behavior for the passive agents. For this year's CMUnited-98 teams, we introduced a team-based notion of anticipation that goes beyond individual single-agent anticipation. The passive team agents position themselves strategically so as to optimize the chances that their teammates can successfully collaborate with them, in particular by passing to them. By considering the positions of the other agents and of the attacking goal, in addition to that of the ball, they are able to position themselves more usefully: they anticipate their future contributions to the team. This strategic anticipation is the main contribution of this paper. We believe that this new team collaboration algorithm was one of the main improvements over last year's CMUnited-97 robot champion team [21, 20].
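As an illustration only (the actual evaluation used by CMUnited-98 is presented in Section 4), a passive agent could score candidate positions by trading off the clearance of the passing lane from the ball against proximity to the attacking goal; the scoring function, candidate set, and weights below are assumptions, not our algorithm.

```python
# Illustrative sketch of strategic anticipation: pick the candidate
# position that best balances a clear passing lane from the ball
# against closeness to the attacking goal. Weights are hypothetical.
from math import hypot

def lane_clearance(ball, point, opponents):
    """Distance from the nearest opponent to the segment ball -> point."""
    bx, by = ball
    px, py = point
    dx, dy = px - bx, py - by
    seg_len2 = dx * dx + dy * dy or 1e-9
    def dist(o):
        t = max(0.0, min(1.0, ((o[0] - bx) * dx + (o[1] - by) * dy) / seg_len2))
        return hypot(o[0] - (bx + t * dx), o[1] - (by + t * dy))
    return min((dist(o) for o in opponents), default=float("inf"))

def anticipation_position(candidates, ball, opponents, goal,
                          w_lane=1.0, w_goal=0.5):
    """Choose, among candidate positions, the one maximizing a weighted
    combination of lane clearance and proximity to the attacking goal."""
    def score(c):
        return (w_lane * lane_clearance(ball, c, opponents)
                - w_goal * hypot(c[0] - goal[0], c[1] - goal[1]))
    return max(candidates, key=score)
```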
The paper is organized as follows. Section 2 describes the simulation and robotic soccer frameworks. Section 3 describes our initial base team architecture, contributing a flexible role-based team organization. Section 4 contributes the anticipation algorithm as a key behavior for the success of a team of agents and reports on the results obtained at RoboCup-98. Section 5 concludes the paper.