Tuesday, January 26, 2016. 12:00 PM. NSH 3305.
Gus Xia - Interactive Artificial Music Performers via Machine Learning
As both a computer scientist and a musician, I design intelligent systems that understand and extend human musical expression. To understand means to model the musical expression conveyed through acoustic, gestural, and emotional signals; to extend means to use this understanding to create expressive, interactive, and autonomous agents that serve both amateur and professional musicians. In particular, I create interactive artificial performers that learn musicianship from rehearsal experience and can then perform expressively in concert with humans. This work unifies machine learning with knowledge representation of music structure and performance skills in a human-computer interaction (HCI) framework. In this talk, I will describe the learning techniques and present robot musicians capable of playing collaboratively and reacting to musical nuance with facial and body gestures.
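The abstract does not spell out the learning techniques, but the core idea of learning expressive collaboration from rehearsal can be illustrated with a minimal sketch: fit a model on rehearsal data that maps a human partner's recent expressive parameters (timing deviations and dynamics) to the machine performer's next note. The sketch below is hypothetical and not the speaker's actual method; the synthetic "rehearsal" data, the feature window, and the use of plain linear regression are all assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic "rehearsal" data (assumption: real data would come from
# recorded human duo performances aligned to the score). For each note,
# we track the human's timing deviation from the score (seconds) and
# dynamics (normalized loudness).
n_notes, history = 200, 4
human_timing = np.cumsum(rng.normal(0, 0.01, n_notes))          # drifting tempo
human_dynamics = 0.5 + 0.3 * np.sin(np.linspace(0, 6, n_notes))  # phrasing arc

# Features: the human's last `history` timing deviations and dynamics.
X = np.stack([
    np.concatenate([human_timing[i - history:i],
                    human_dynamics[i - history:i]])
    for i in range(history, n_notes)
])
# Targets: the machine part's expressive parameters at the current note.
# Here we fabricate them as an echo of the human part; in a rehearsal
# setting they would be the second musician's recorded expression.
y = np.stack([human_timing[history:], human_dynamics[history:]], axis=1)

model = LinearRegression().fit(X, y)

# At performance time, predict the machine's timing and dynamics for
# the next note from the human's most recent playing.
recent = np.concatenate([human_timing[-history:], human_dynamics[-history:]])
pred_timing, pred_dynamics = model.predict(recent[None, :])[0]
print(f"predicted onset deviation: {pred_timing:+.3f} s, "
      f"dynamics: {pred_dynamics:.2f}")
```

The point of the sketch is only the shape of the problem: rehearsal recordings supply input-output pairs, and the learned mapping lets the artificial performer react to the human's expressive nuance in real time rather than playing back a fixed interpretation.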