CORAL Research Publications


Multiagent Collaborative Task Learning through Imitation

Sonia Chernova and Manuela Veloso. Multiagent Collaborative Task Learning through Imitation. In 4th International Symposium on Imitation in Animals and Artifacts, Newcastle upon Tyne, UK, April 2007.

Download

[PDF] (219.6 kB)

Abstract

Learning through imitation is a powerful approach for acquiring new behaviors. Imitation-based methods have been successfully applied to a wide range of single-agent problems, consistently demonstrating faster learning rates than exploration-based approaches such as reinforcement learning. The potential for rapid behavior acquisition from human demonstration makes imitation a promising approach for learning in multiagent systems. In this work, we present results from our single-agent demonstration-based learning algorithm, which aims to reduce the agent's demand for teacher demonstrations over time. We then demonstrate how this approach can be applied to effectively teach a complex multiagent task that requires explicit coordination between agents. We believe this is the first application of demonstration-based learning in which distinct policies are taught to multiple agents simultaneously. We validate our approach with experiments in two complex simulated domains.
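To illustrate the general idea of demonstration-based learning with decreasing teacher involvement, here is a minimal, self-contained Python sketch. It is not the algorithm from the paper: the nearest-neighbor confidence test, the DemonstrationLearner class, the distance_threshold parameter, and the toy teacher are all illustrative assumptions.

import math
import random

class DemonstrationLearner:
    """Toy learner that requests a demonstration only when no
    sufficiently similar state has been demonstrated before
    (hypothetical confidence test, not the paper's method)."""

    def __init__(self, distance_threshold=0.15):
        self.distance_threshold = distance_threshold
        self.examples = []  # stored (state, action) demonstrations

    def _nearest(self, state):
        # Return the action of the closest stored state and its distance.
        best_action, best_dist = None, float("inf")
        for s, a in self.examples:
            d = math.dist(state, s)
            if d < best_dist:
                best_action, best_dist = a, d
        return best_action, best_dist

    def act(self, state, teacher):
        action, dist = self._nearest(state)
        if action is None or dist > self.distance_threshold:
            # Low confidence: ask the teacher and remember the answer.
            action = teacher(state)
            self.examples.append((state, action))
            return action, True   # True = demonstration was requested
        return action, False      # acted autonomously

def teacher(state):
    # Toy teacher policy: move along the larger coordinate.
    x, y = state
    return "left" if abs(x) >= abs(y) else "down"

learner = DemonstrationLearner()
random.seed(0)
requests_per_block = []
for block in range(5):
    requests = 0
    for _ in range(200):
        state = (random.random(), random.random())
        _, asked = learner.act(state, teacher)
        requests += asked
    requests_per_block.append(requests)

print(requests_per_block)

Running the sketch prints the number of demonstration requests in successive blocks of 200 states; the count falls as the stored examples cover more of the state space, which is the qualitative behavior the abstract describes (reduced demonstration demand over time).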

BibTeX Entry

@inproceedings{Chernova07aisb,
  title="Multiagent Collaborative Task Learning through Imitation",
  author="Sonia Chernova and Manuela Veloso",
  booktitle="4th International Symposium on Imitation in Animals and Artifacts",
  place="Newcastle upon Tyne, UK", month="April",
  year="2007",
  abstract={Learning through imitation is a powerful approach for acquiring new behaviors. Imitation-based methods have been successfully
applied to a wide range of single-agent problems, consistently demonstrating faster learning rates than exploration-based approaches
such as reinforcement learning. The potential for rapid behavior acquisition from human demonstration makes imitation a promising approach
for learning in multiagent systems. In this work, we present results from our single-agent demonstration-based learning algorithm,
which aims to reduce the agent's demand for teacher demonstrations over time. We then demonstrate how this approach can be applied to
effectively teach a complex multiagent task that requires explicit coordination between agents. We believe this is the first application
of demonstration-based learning in which distinct policies are taught to multiple agents simultaneously. We validate our approach with
experiments in two complex simulated domains.},
  bib2html_pubtype = {Workshop},
  bib2html_rescat = {Learning from Demonstration, Multi-Agent Systems, Multiagent Learning},
}

Generated by bib2html.pl (written by Patrick Riley) on Tue Oct 09, 2007 00:00:13

 

 