Guide to the Carnegie Mellon University Multimodal Activity (CMU-MMAC) Database

People
- Fernando de la Torre
- Jessica Hodgins
- Adam Bargteil
- Xavier Martin
- Justin Macey
- Alex Collado
- Pep Beltran
Abstract
This document summarizes the technology, procedures, and database organization of the CMU Multimodal Activity Database (CMU-MMAC). The database contains multimodal measures of human activity from subjects performing cooking and food-preparation tasks. It was collected in Carnegie Mellon University's Motion Capture Lab, where a kitchen was built; to date, five subjects have been recorded cooking five different recipes: brownies, pizza, sandwich, salad, and scrambled eggs. The following modalities were recorded:
- Video: (1) three high spatial resolution (1024 × 768) color video cameras at low temporal resolution (30 Hz); (2) two low spatial resolution (640 × 480) color video cameras at high temporal resolution (60 Hz); (3) one wearable low spatial resolution (640 × 480) camera at low temporal resolution (12 Hz).
- Audio: (1) five balanced microphones; (2) a wearable watch.
- Motion capture: a Vicon motion capture system with 12 infrared MX-40 cameras, each recording 4-megapixel images at 120 Hz.
- Five 3-axis accelerometers and gyroscopes.
Several computers were used to record the various modalities; the computers were synchronized using the Network Time Protocol (NTP).
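Because each modality was recorded on a separate NTP-synchronized computer and at a different frame rate, aligning streams for analysis amounts to matching per-frame timestamps across modalities. Below is a minimal sketch of nearest-timestamp alignment in Python; it assumes each stream carries NTP-synchronized Unix timestamps, and the `nearest_indices` helper and the example timestamp values are illustrative assumptions, not part of the released database tools.

```python
import numpy as np

def nearest_indices(ref_times, query_times):
    """For each query timestamp, return the index of the closest
    reference timestamp. Both arrays are sorted, in seconds."""
    idx = np.searchsorted(ref_times, query_times)
    idx = np.clip(idx, 1, len(ref_times) - 1)
    left = ref_times[idx - 1]
    right = ref_times[idx]
    # Step back one index where the left neighbor is closer.
    idx = idx - ((query_times - left) < (right - query_times))
    return idx

# Hypothetical example: align 120 Hz mocap frames to 30 Hz video frames
# over a 10-second recording starting at t = 10 s.
video_t = 10.0 + np.arange(300) / 30.0
mocap_t = 10.0 + np.arange(1200) / 120.0
match = nearest_indices(mocap_t, video_t)
# match[k] is the mocap frame closest in time to video frame k.
```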
Citation
- Fernando de la Torre, Jessica Hodgins, Adam Bargteil, Alex Collado, Xavier Martin, Justin Macey, and Pep Beltran, "Guide to the Carnegie Mellon University Multimodal Activity (CMU-MMAC) Database," Tech. Report CMU-RI-TR-08-22, Robotics Institute, Carnegie Mellon University, April 2008. [PDF] [Bibtex]
- Fernando de la Torre, Jessica Hodgins, Javier Montano, and Sergio Valcarcel, "Detailed Human Data Acquisition of Kitchen Activities: the CMU-Multimodal Activity Database (CMU-MMAC)," CHI 2009 Workshop: Developing Shared Home Behavior Datasets to Advance HCI and Ubiquitous Computing Research, Boston, April 4, 2009. [PDF] [Bibtex]
Results
Grand Challenge Data Collection
Acknowledgements and Funding
This research is supported by the National Science Foundation under Grant No. EEEC-0540865. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Thanks to Minh Hoai and Joan Perez for helping with the camera calibration software. Thanks to Tomas Maldonado for providing the LabVIEW software to record the wearable sensors and camera. Thanks to Joan Perez, Ricardo Cervera, and Francisco Perez for volunteering to be captured and making such good brownies!
Copyright notice
Human Sensing Lab