Warning:
This page is provided for historical and archival purposes only. While the seminar dates are correct, we offer no guarantee of informational accuracy or link validity. Contact information for the speakers, hosts, and seminar committee is certainly out of date.
A geometric method called "space-sweep stereo" has recently been developed to perform correspondenceless 3D scene reconstruction from multiple images. This novel multi-image stereo algorithm rapidly derives a coarse 3D scene segmentation by backprojecting image features onto a virtual planar surface that is swept through object space, methodically bringing image patches from all potential multi-image correspondences into proximity where they can be tested for compatibility. By combining information and making occupancy decisions in 3D object space, we directly extract 3D scene structure without explicitly deriving feature correspondences across the multiple views, thereby avoiding the inherent combinatorial and theoretical limitations of multi-image epipolar matching.
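The voting idea behind the sweep can be illustrated with a minimal sketch. The code below is not the speaker's implementation; it assumes calibrated cameras given as 3x4 projection matrices `P[i]` and binary edge maps `edge_maps[i]`, and the function name, grid parameters, and `min_views` threshold are all illustrative choices.

```python
# Minimal sketch of a space-sweep style voting scheme (illustrative, not the
# authors' code). Assumes each camera i has a 3x4 projection matrix P[i] and a
# binary (H, W) edge map edge_maps[i], e.g. from a Canny detector.
import numpy as np

def space_sweep_votes(edge_maps, P, x_range, y_range, z_values, cell=1.0, min_views=3):
    """Sweep a plane Z = z through object space and count, for every plane
    cell, how many views backproject an edge feature onto that cell.
    Cells supported by at least `min_views` views are flagged as likely
    scene structure (a coarse occupancy decision made in 3D)."""
    xs = np.arange(x_range[0], x_range[1], cell)
    ys = np.arange(y_range[0], y_range[1], cell)
    X, Y = np.meshgrid(xs, ys)                        # plane cell centres
    occupied = []
    for z in z_values:
        # Homogeneous world coordinates of every cell on the current plane.
        pts = np.stack([X.ravel(), Y.ravel(),
                        np.full(X.size, z), np.ones(X.size)])   # (4, N)
        votes = np.zeros(X.size, dtype=int)
        for edges, Pi in zip(edge_maps, P):
            uvw = Pi @ pts                            # project cells into image i
            u = np.round(uvw[0] / uvw[2]).astype(int)
            v = np.round(uvw[1] / uvw[2]).astype(int)
            h, w = edges.shape
            inside = (uvw[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
            hit = np.zeros(X.size, dtype=bool)
            hit[inside] = edges[v[inside], u[inside]] > 0
            votes += hit                              # accumulate evidence in object space
        # Occupancy is decided directly in 3D; no pairwise feature
        # correspondence between images is ever established.
        mask = (votes >= min_views).reshape(X.shape)
        occupied.append((z, np.column_stack([X[mask], Y[mask],
                                             np.full(int(mask.sum()), z)])))
    return occupied
```

Because the decision is made per plane cell rather than per feature match, the cost grows linearly with the number of views instead of combinatorially with the number of candidate correspondences.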
My talk presents a technical overview of the space-sweep stereo method along with three illustrative applications: 1) reconstruction of 3D building and road edges from aerial imagery, using binary Canny edge maps extracted from the images; 2) computation of a structural salience measure that determines whether a given volume of space contains a statistically significant number of structural edges, without first performing precise reconstruction of those edges, again illustrated in an aerial imagery scenario; and 3) close-range reconstruction of a moving object from video sequences captured in the RI Virtualized Reality dome, in which interframe disparity vectors from 52 video sequences are combined simultaneously to generate a coarse three-dimensional delineation of a moving person immersed in the dome.
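One way to phrase a structural salience decision of the kind mentioned in application 2 is as a significance test on the accumulated vote counts. The sketch below is my own hedged illustration, not the speaker's formulation; the null model, the `p_clutter` clutter probability, and the function name are all assumptions.

```python
# Hedged sketch of a structural-salience style test: does a volume of space
# hold significantly more edge votes than image clutter would produce by chance?
from scipy.stats import binom

def volume_is_salient(vote_counts, n_views, p_clutter, alpha=0.01):
    """vote_counts: per-cell vote totals inside the volume of interest.
    Null model (an assumption): each view hits a cell accidentally with
    probability p_clutter, so the total accidental vote count is
    Binomial(n_cells * n_views, p_clutter). The volume is declared salient
    when the observed total is improbably large under that null."""
    counts = list(vote_counts)
    total = sum(counts)
    n_trials = len(counts) * n_views
    # Probability of seeing a total at least this large by chance alone.
    p_value = binom.sf(total - 1, n_trials, p_clutter)
    return p_value < alpha
```

A test of this form needs only the raw vote counts, which matches the abstract's point that salience can be assessed without first reconstructing the edges precisely.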
This abstract appears on the World Wide Web at http://www.frc.ri.cmu.edu/~mcm/seminar/feb.14.html