IUBA Contractor Page


Organization: Carnegie Mellon University
Department: Robotics Institute

Subcontractors: David Sarnoff Research Center

Title of Effort: Cooperative Multi-Sensor Video Surveillance
Principal Investigators: Takeo Kanade (CMU), Peter Burt (Sarnoff)
Technical Leads: Robert Collins (CMU), Alan Lipton (CMU), Lambert Wixson (Sarnoff)

Technical Area: VSAM, Integrated Feasibility Demonstration

Technical Objectives:
  • Cooperative surveillance by multiple ground and airborne sensors to seamlessly track moving targets as they enter and leave the fields of view of individual sensors, or become temporarily occluded from one or more sensor viewpoints.
  • Scene-level representation of targets and their environment by integrating evolving visual, geometric, and symbolic sensor observations together with collateral scene data.
  • Active control of sensor parameters, sensor processing, and platform deployment in response to mission and task needs based on the evolving wide area representation.
  • Development of an experimental testbed that includes the sensors, hardware platforms, and software architecture needed to support data collection and experimental evaluation of VSAM technologies developed by the DARPA IU community.
Approach:
  CMU and Sarnoff will jointly build an IFD testbed system consisting of multiple sensor units deployed in the field, communicating with a base control unit.

  Sensor units:
    - multiple ground-based and airborne platforms
    - visible-light and IR sensing for day/night operation
    - Sarnoff's real-time video hardware for stabilization and mosaicing
    - pan, tilt, and zoom capabilities for active vision
    - onboard data compression and communication

  Base control unit:
    - multisensor fusion with collateral scene data
    - estimating geolocation of targets and platforms
    - sensor planning and control for cooperative surveillance
    - graphical operator control interface
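One base-control-unit function, estimating target geolocation, can be approximated by intersecting a camera viewing ray with a digital elevation map (DEM), in the spirit of the Collins et al. technical report in the publications list. The following is only a minimal illustrative sketch, not the testbed's implementation; the camera pose, ray direction, and flat DEM below are hypothetical values chosen for the example.

```python
import numpy as np

def geolocate_target(camera_pos, ray_dir, dem, origin, cell_size,
                     step=1.0, max_range=5000.0):
    """March along a viewing ray until it drops below the terrain surface.

    camera_pos : (x, y, z) camera location in DEM world coordinates
    ray_dir    : direction vector toward the target pixel
    dem        : 2-D array of terrain elevations
    origin     : (x, y) world coordinates of dem[0, 0]
    cell_size  : DEM grid spacing, in the same units as the positions
    Returns the first ray point at or below the terrain, or None.
    """
    pos = np.asarray(camera_pos, dtype=float)
    d = np.asarray(ray_dir, dtype=float)
    d = d / np.linalg.norm(d)          # unit step direction
    for t in np.arange(0.0, max_range, step):
        p = pos + t * d
        col = int((p[0] - origin[0]) / cell_size)
        row = int((p[1] - origin[1]) / cell_size)
        if not (0 <= row < dem.shape[0] and 0 <= col < dem.shape[1]):
            return None                # ray left the mapped area
        if p[2] <= dem[row, col]:
            return p                   # ray has reached the terrain
    return None

# Hypothetical setup: flat terrain at elevation 0, camera 100 m up,
# looking down at 45 degrees; the hit lands roughly 100 m away.
dem = np.zeros((200, 200))
hit = geolocate_target((0.0, 0.0, 100.0), (1.0, 0.0, -1.0),
                       dem, origin=(-500.0, -500.0), cell_size=10.0)
```

A finer `step` trades speed for geolocation accuracy; a real system would also interpolate between DEM cells rather than snapping to the nearest one.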
Military/Battlefield Relevance:
  The cooperative multi-sensor surveillance system will significantly enhance battlefield awareness by providing the commander with complete and continuous coverage of troop movements and target activities within a broad area. The approach also improves national security by enabling roving ground and air platforms to effectively patrol long perimeters, such as national borders and demilitarized zones. Furthermore, the ability to provide large-area coverage in cluttered environments with a small number of mobile platforms will spur technology transfer to commercial applications, such as building and parking lot security, warehouse guard duty, and monitoring restricted-access areas in airports. Combined ground and air surveillance capabilities also have promising applications in civilian law-enforcement operations.
Demonstrations Scheduled:
  On November 12, 1997, the first VSAM integrated feasibility demonstration was successfully held at the CMU Bushy Run facility in Murrysville, PA. A VSAM IFD demo photo album is available.
  The 1998 VSAM demo will be held October 8-9 on the campus of Carnegie Mellon University.
Recent Publications:
  "Moving Target Classification and Tracking from Real-time Video"
  Lipton, Fujiyoshi and Patil (gzipped postscript, 3213352 bytes)
  to appear, WACV 98, Princeton NJ, October 1998.
  "Real-time Human Motion Analysis by Image Skeletonization"
  Fujiyoshi and Lipton (gzipped postscript, 298017 bytes)
  to appear, WACV 98, Princeton NJ, October 1998.
  "Using a DEM to Determine Geospatial Object Trajectories"
  Collins, Tsin, Miller and Lipton (gzipped postscript, 1063603 bytes)
  to appear, CMU technical report CMU-RI-TR-98-19, 1998.
  IUW97 PI Overview (gzipped postscript, 183963 bytes)
  DARPA Image Understanding Workshop, New Orleans, LA, May 11-14, 1997, pp. 3-10.
  VSAM IFD Specification Documents (html link)
Relevant Images:
  VSAM IFD Research Overview

Links to Additional Sites:
  The VSAM HomePage
  DARPA
  CMU Robotics Institute


This page is maintained by the Robotics Institute at Carnegie Mellon University.
Please address comments to rcollins@cs.cmu.edu