Robotic soccer is a rich domain for the study of multiagent learning issues, offering opportunities to study both collaborative and adversarial situations. However, before these situations can be studied, the agents must first learn some basic behaviors necessitated by the multiagent nature of the domain. Like human soccer players, they can first learn to make contact with a moving ball, then learn to aim it, and only then begin to consider beating an opponent and team-level strategies.
This article presents a robust, low-level learned behavior and describes several ways in which it can be extended and incorporated into collaborative and adversarial situations. Our ongoing research agenda includes improving the low-level behaviors while simultaneously working on the collaborative and adversarial learning issues presented here. The goal is to create high-level learned strategic behaviors by continuing to layer learned behaviors.
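To make the layering idea concrete, the following is a minimal sketch, not the article's implementation, of how learned behaviors might be composed: each layer treats the layer below as a primitive skill whose learned policy it invokes. All class names, method signatures, and the stubbed policies are hypothetical illustrations.

```python
import random


class InterceptSkill:
    """Lowest layer: a learned mapping from ball state to an interception action.
    The trained policy is stubbed here with a simple placeholder."""

    def action(self, ball_position, ball_velocity):
        # A trained function approximator would go here; we return a dummy move
        # toward the ball's predicted next position.
        return ("move_to",
                ball_position[0] + ball_velocity[0],
                ball_position[1] + ball_velocity[1])


class PassSkill:
    """Middle layer: learns where to aim, using the intercept layer to reach the ball."""

    def __init__(self, intercept: InterceptSkill):
        self.intercept = intercept

    def action(self, ball_position, ball_velocity, teammate_positions):
        move = self.intercept.action(ball_position, ball_velocity)
        # A learned evaluation of candidate receivers would replace this random pick.
        target = random.choice(teammate_positions)
        return move, ("kick_toward", target)


class TeamStrategy:
    """Top layer: strategic decisions built on the lower learned layers."""

    def __init__(self, passing: PassSkill):
        self.passing = passing

    def decide(self, ball_position, ball_velocity, teammate_positions):
        # Adversarial and collaborative strategic learning would sit here.
        return self.passing.action(ball_position, ball_velocity, teammate_positions)


if __name__ == "__main__":
    strategy = TeamStrategy(PassSkill(InterceptSkill()))
    print(strategy.decide((0.0, 0.0), (1.0, 0.5), [(10.0, 5.0), (-3.0, 8.0)]))
```

The point of the sketch is only the structure: higher layers do not relearn lower-level skills, they learn how to use them, mirroring the progression from ball contact, to aiming, to opponent- and team-level decisions described above.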