The goal of this work
is to build video cameras whose spatial and temporal resolutions
can be changed post-capture depending on the scene. Building such
cameras is difficult for two reasons. First, current video cameras
impose the same spatial resolution and frame rate on the entire captured
spatio-temporal volume. Second, both of these parameters are fixed before
the scene is captured. We propose the components of a video-camera
design: a sampling scheme, a method for processing the captured data,
and hardware, which together offer post-capture variable spatial and
temporal resolutions, independently at each image location. Using the
motion information in the captured data, the appropriate resolution for
each location is chosen automatically. Our techniques make it possible
to capture fast-moving objects without motion blur while simultaneously
preserving high spatial resolution for static scene parts within the
same video sequence.
Our sampling scheme requires a fast per-pixel shutter on the sensor array,
which we have implemented using a co-located camera-projector system.
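To make the post-capture trade-off concrete, here is a minimal sketch (not the authors' released code) of the idea under a simplifying assumption: each pixel in a 2x2 tile is exposed during a different quarter of the frame time by the per-pixel shutter. Every tile can then be decoded either as one full-spatial-resolution frame or as four quarter-resolution sub-frames, chosen per tile from a (hypothetical) local motion magnitude. The names `shutter_pattern` and `decode_tile` are illustrative, not from the paper.

```python
# Hedged sketch: staggered per-pixel shutter over a 2x2 tile, with a
# post-capture, motion-aware choice of spatial vs. temporal resolution.

def shutter_pattern():
    """Sub-frame index (0..3) at which each pixel of a 2x2 tile is exposed."""
    return [[0, 1],
            [2, 3]]

def decode_tile(tile, motion_mag, thresh=1.0):
    """Pick a tile's post-capture interpretation from its motion magnitude.

    tile: 2x2 list of captured samples, one per staggered shutter slot.
    Static tiles keep full spatial detail: ("spatial", 2x2 image).
    Moving tiles trade it for time: ("temporal", 4 sub-frame samples).
    """
    if motion_mag < thresh:
        return "spatial", tile
    order = shutter_pattern()
    flat = [tile[r][c] for r in range(2) for c in range(2)]
    idx = [order[r][c] for r in range(2) for c in range(2)]
    # Reorder samples by the sub-frame at which each shutter slot was open.
    return "temporal", [s for _, s in sorted(zip(idx, flat))]
```

For example, the same captured tile yields `("spatial", [[10, 11], [12, 13]])` when its motion estimate is below threshold, and `("temporal", [10, 11, 12, 13])` when motion is large, which mirrors the paper's point that the interpretation is decided after capture, independently at each location.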
[Figure] High Spatial Resolution | High Frame Rate | Motion-Aware Video
Given a fixed voxel budget, a high-spatial-resolution (SR) camera
suffers large motion blur and aliasing. A high-speed camera
results in low SR even for the static and slow-moving parts of the
scene (the drums). With our sampling and reconstruction scheme, the
spatio-temporal resolution can be decided post-capture, independently
at each location, in a content-aware manner: notice the reduced motion
blur for the hands and the high SR for the slow-moving and static parts
of the scene.
Publications
"Flexible Voxels for Motion-Aware Videography"
Mohit Gupta, Amit Agrawal, Ashok Veeraraghavan, Srinivasa G. Narasimhan,
European Conference on Computer Vision (ECCV),
September 2010.
Video
(Video Result Playlist)
Playback requires Apple QuickTime 7.5.
Acknowledgements
This research was supported in part by ONR grants N00014-08-1-0330
and DURIP N00014-06-1-0762, an Okawa Foundation grant, NSF CAREER award
IIS-0643628, and Mitsubishi Electric Research Labs.
The authors thank Jinwei Gu and Shree K. Nayar for the use of the MULE projector.