VSAM IFD Web Presentation
Under the three-year Video Surveillance and Monitoring (VSAM) project (1997-1999),
the Robotics Institute at Carnegie Mellon University (CMU) and the Sarnoff
Corporation developed a suite of video understanding technologies for autonomous
video surveillance. These algorithms automatically "parse" people
and vehicles from raw video, determine their geolocations, and insert them
into a dynamic scene visualization. A prototype, end-to-end surveillance
system, the Integrated Feasibility Demonstration (IFD) testbed, consisting
of a network of active video sensors, has been constructed.
Within this testbed, multiple sensors cooperate to provide continuous coverage
of people and vehicles moving throughout a cluttered urban environment.
This web presentation gives an overview of the testbed system and the automated
surveillance technologies. More details can be found in a set of
published papers.
IFD Testbed System
A variety of sensor processing units (SPUs) have
been incorporated into the VSAM IFD testbed system.
NVESD's Islander aircraft
provides an airborne SPU platform.
Operator Control Room, located in the Planetary Robotics Building (PRB)
on the CMU campus.
Single-Camera Surveillance
Hybrid detection algorithm
(adaptive background subtraction and three-frame differencing) for moving
object detection.
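
For a concrete picture of how the two cues combine, here is a minimal
Python/OpenCV sketch of the general idea (an illustration, not the IFD code;
the function name, thresholds, grayscale frames, and float32 background
model are assumptions):

    import cv2
    import numpy as np

    def hybrid_detect(prev2, prev1, curr, bg, alpha=0.05, tau=20):
        # Three-frame differencing: a pixel counts as "moving" only if it
        # differs from BOTH previous frames, which suppresses ghosting
        # where background has just been uncovered.
        moving = (cv2.absdiff(curr, prev1) > tau) & \
                 (cv2.absdiff(curr, prev2) > tau)
        # Background subtraction fills in the interiors of slowly moving
        # or uniformly colored objects that frame differencing misses.
        fg = cv2.absdiff(curr.astype(np.float32), bg) > tau
        # Keep only foreground blobs seeded by at least one moving pixel.
        n, labels = cv2.connectedComponents(fg.astype(np.uint8))
        keep = np.unique(labels[moving & fg])
        mask = np.isin(labels, keep[keep != 0])
        # Adapt the background only where nothing was detected, so slow
        # illumination change is absorbed without eroding tracked objects.
        bg[~mask] = alpha * curr.astype(np.float32)[~mask] + \
                    (1 - alpha) * bg[~mask]
        return mask, bg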
Object detection
using temporally layered adaptive background subtraction.
Object detection
from a rotating camera by perspective alignment with a collection of reference
images.
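
Detection from a rotating camera is possible because a pure rotation about
the camera's center induces a homography between views. A hedged sketch of
the idea in Python/OpenCV (modern ORB feature matching is a stand-in; the
registration method actually used in the system may differ):

    import cv2
    import numpy as np

    def detect_from_rotating_camera(curr, ref, tau=25):
        # Register the stored reference image to the current view.
        orb = cv2.ORB_create(1000)
        k1, d1 = orb.detectAndCompute(ref, None)
        k2, d2 = orb.detectAndCompute(curr, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        # The warped reference aligns with the static scene; whatever
        # survives the difference is a candidate moving object.
        h, w = curr.shape
        warped = cv2.warpPerspective(ref, H, (w, h))
        return cv2.absdiff(curr, warped) > tau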
Tracking people and
vehicles from Wean. Active tracking keeps a person within the field
of view.
Tracking objects
from Smith cam and display of trajectory trails.
One minute in the
life of Smith cam.
Multi-Camera Surveillance
Two sensors cooperate
to actively track a vehicle through a cluttered environment.
Example of multi-camera
slaving -- tracking a person.
Example of multi-camera
slaving -- tracking a vehicle.
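
Slaving is possible because targets are handed between sensors as 3-D
geolocations rather than image positions: once one sensor geolocates a
target, any other sensor can compute the pan and tilt needed to look at the
same spot. A minimal sketch of that pointing step (the east/north/up
convention and names are assumptions, not the testbed's interface):

    import numpy as np

    def pan_tilt_to_target(slave_pos, target):
        # Offset from the slave sensor to the target in local
        # east/north/up (ENU) coordinates -- an assumed convention.
        d = np.asarray(target, float) - np.asarray(slave_pos, float)
        pan = np.degrees(np.arctan2(d[0], d[1]))                  # azimuth from north
        tilt = np.degrees(np.arctan2(d[2], np.hypot(d[0], d[1]))) # elevation
        return pan, tilt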
Airborne Surveillance
NVESD's Islander aircraft.
NVESD air support
"bread truck" and receiving dish.
Tracking from airborne
SPU using real-time image stabilization.
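
A hedged software sketch of frame-to-frame stabilization (a stand-in for
the real-time implementation flown on the aircraft): track corners between
consecutive frames and warp each new frame back onto its predecessor.

    import cv2
    import numpy as np

    def stabilize(prev, curr):
        # Track strong corners from the previous frame into the current one.
        pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                      qualityLevel=0.01, minDistance=8)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)
        good = status.ravel() == 1
        # Fit a similarity transform mapping current points back onto the
        # previous frame, i.e. undoing the aircraft's apparent motion.
        M, _ = cv2.estimateAffinePartial2D(nxt[good], pts[good])
        h, w = curr.shape
        return cv2.warpAffine(curr, M, (w, h))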
Acquisition
of a reference mosaic and its use in sensor fixation on a geodetic scene
point.
Footprints of the airborne
sensor being autonomously multi-tasked among three geodetic
scene coordinates.
Site Models and Geolocation
CMU campus site model, stored as a Compact Terrain Database (CTDB).
Geolocation by intersecting
viewing rays with the terrain.
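
A minimal sketch of this ray-terrain intersection (the stepping scheme and
the dem_height elevation lookup are assumptions; the testbed intersected
rays with the CTDB site model): march along the viewing ray from the camera
until it first dips below the terrain surface.

    import numpy as np

    def geolocate(cam_pos, ray_dir, dem_height, step=1.0, max_range=5000.0):
        # cam_pos: camera location (x, y, z); ray_dir: viewing ray through
        # the target pixel; dem_height(x, y): terrain elevation lookup.
        ray = np.asarray(ray_dir, float)
        ray /= np.linalg.norm(ray)
        pos = np.asarray(cam_pos, float)
        t = 0.0
        while t < max_range:
            p = pos + t * ray
            if p[2] <= dem_height(p[0], p[1]):  # ray has pierced the terrain
                return p   # estimated geolocation of the object
            t += step
        return None        # no terrain hit within range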
Geolocation to determine
a vehicle's trajectory.
Human-Computer Interface
Using the GUI to set a
region of interest (ROI) in the scene.
Three soldiers.
Insertion into ModSAF.
Raju leaves town.
Insertion into ModStealth.
Thermal 1. Insertion
into ModStealth.
Thermal 2. Insertion
into ModStealth.
Thermal 3. Insertion
into ModStealth.
Thermal 4. Insertion
into ModStealth.
Acknowledgments: The VSAM IFD team would like to thank the U.S.
Army Night Vision and Electronic Sensors Directorate Lab at Davison Airfield,
Ft. Belvoir, Virginia for their help with the airborne operations.
We would also like to thank Chris Kearns and Andrew Fowles for their assistance
at the Fort Benning MOUT site, and Steve Haes and Joe Findley at BDM/TEC
for their help with the CTDB site model and distributed simulation visualization
software.
How this presentation was created: The text/html for this presentation
was created using Netscape Composer. Movies were edited using Asymetrix
Digital Video Producer version 4.0, with occasional use of VideoMach
(shareware) to crop clips and adjust their brightness/contrast/gamma.