The Dense Estimation of Motion and Appearance
Hulya Yalcin,
Michael Black,
Ronan Fablet
Segmenting image sequences into meaningful layers is fundamental to many applications such as surveillance, tracking, and video summarization. Background subtraction techniques are popular for their simplicity and, while they provide a dense (pixelwise) estimate of foreground/background, they typically ignore image motion, which can provide a rich source of information about scene structure. Conversely, layered motion estimation techniques typically ignore the temporal persistence of image appearance and provide parametric (rather than dense) estimates of optical flow. Recent work adaptively combines motion and appearance estimation in a mixture model framework to achieve robust tracking. Here we extend mixture model approaches to cope with dense motion and appearance estimation. We develop a unified Bayesian framework to simultaneously estimate the appearance of multiple image layers and their corresponding dense flow fields from image sequences. Both the motion and appearance models adapt over time, and the probabilistic formulation can be used to provide a segmentation of the scene into foreground/background regions. This extension of mixture models includes priors for the spatial and temporal coherence of motion and appearance. Experimental results show that the simultaneous estimation of appearance models and flow fields in multiple layers improves the estimation of optical flow at motion boundaries and provides a better segmentation of the scene than either motion or background subtraction alone.

Download the paper [pdf]
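To illustrate the core idea of per-pixel mixture models with adaptive appearance, the sketch below shows a minimal two-layer (foreground/background) version: each pixel's ownership is the posterior under a mixture of Gaussian appearance likelihoods, and both layers' appearance models are then updated in proportion to those ownership weights. This is only a schematic of the general approach, not the paper's method; it omits the flow fields and the spatial/temporal coherence priors, and all parameter values (`sigma_bg`, `sigma_fg`, `prior_fg`, `rate`) are illustrative assumptions.

```python
import numpy as np

def gaussian_likelihood(residual, sigma):
    # Per-pixel Gaussian likelihood of an intensity residual with noise std sigma.
    return np.exp(-0.5 * (residual / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

def layer_ownership(frame, bg_model, fg_model,
                    sigma_bg=10.0, sigma_fg=20.0, prior_fg=0.3):
    # Posterior probability that each pixel belongs to the foreground layer,
    # under a two-component per-pixel mixture over appearance residuals.
    # (Illustrative parameters; the paper also uses motion likelihoods and
    # coherence priors, which are omitted here.)
    lik_bg = gaussian_likelihood(frame - bg_model, sigma_bg)
    lik_fg = gaussian_likelihood(frame - fg_model, sigma_fg)
    num = prior_fg * lik_fg
    den = num + (1.0 - prior_fg) * lik_bg
    return num / np.maximum(den, 1e-12)

def update_models(frame, bg_model, fg_model, w_fg, rate=0.1):
    # EM-style adaptive update: each layer's appearance model moves toward the
    # observed frame in proportion to that layer's ownership weight.
    w_bg = 1.0 - w_fg
    new_bg = bg_model + rate * w_bg * (frame - bg_model)
    new_fg = fg_model + rate * w_fg * (frame - fg_model)
    return new_bg, new_fg

if __name__ == "__main__":
    # Toy frame: dark background with a bright central region.
    frame = np.zeros((4, 4))
    frame[1:3, 1:3] = 100.0
    bg = np.zeros((4, 4))          # background appearance model
    fg = np.full((4, 4), 100.0)    # foreground appearance model
    w = layer_ownership(frame, bg, fg)
    bg, fg = update_models(frame, bg, fg, w)
    print(w[1, 1] > 0.9, w[0, 0] < 0.1)
```

In the full framework the ownership weights would also incorporate a motion likelihood (how well each layer's flow field predicts the pixel) and spatial coherence, so that segmentation and flow estimation reinforce each other.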
Carnegie Mellon University, Robotics Institute
5000 Forbes Av., Pittsburgh, PA, 15213
hulya@ri.cmu.edu