Detecting Ground Shadows in Outdoor Consumer Photographs

Teaser
From an input image (left), ground shadow boundaries are detected (middle),
and eventually removed (right).

Colorbar: probability of shadow.

Detecting shadows in images can significantly improve the performance of several vision tasks, such as object detection and tracking. Recent approaches have mainly relied on illumination invariants, which can fail severely when image quality is poor, as is the case for most consumer-grade photographs such as those on Google or Flickr. We present a practical algorithm to automatically detect shadows cast by objects onto the ground from a single consumer photograph. Our key hypothesis is that the range of materials constituting the ground in outdoor scenes is relatively limited, most commonly asphalt, brick, stone, mud, grass, and concrete. As a result, the appearance of shadows on the ground does not vary as widely as that of general shadows and can thus be learned from a labelled set of images. Our detector consists of a three-tier process: (a) training a decision tree classifier on a set of shadow-sensitive features computed around each image edge, (b) a CRF-based optimization that groups detected shadow edges into coherent shadow contours, and (c) incorporating any existing classifier that is specifically trained to detect ground regions in images. Our results demonstrate good detection accuracy (85%) on several challenging images. Since most objects of interest to vision applications (such as pedestrians, vehicles, and signs) are attached to the ground, we believe our detector can find wide applicability.
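The first tier of this pipeline is, at its core, a supervised classifier over per-edge features. The sketch below is only a rough illustration of that idea: it trains a decision tree on placeholder feature vectors using scikit-learn. The feature dimensionality, the synthetic data, and the library choice are assumptions made for illustration and are not the paper's implementation (see the Code section below for the actual detector).

```python
# Hypothetical sketch of tier (a): a decision tree over per-edge features.
# Everything here (feature size, synthetic data, labels) is a placeholder.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Placeholder: one 8-D "shadow-sensitive" feature vector per image edge
# (e.g., intensity and color ratios across the edge), with binary labels
# marking whether the edge lies on a ground-shadow boundary.
features = rng.normal(size=(2000, 8))
labels = rng.integers(0, 2, size=2000)

clf = DecisionTreeClassifier(max_depth=8)
clf.fit(features, labels)

# At test time, each edge receives a shadow probability; tier (b), the CRF,
# would then group these per-edge scores into coherent shadow contours.
edge_probabilities = clf.predict_proba(features[:5])[:, 1]
print(edge_probabilities)
```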

Publications

"Detecting Ground Shadows in Outdoor Consumer Photographs"
Jean-François Lalonde, Alexei A. Efros, and Srinivasa G. Narasimhan
European Conference on Computer Vision (ECCV),
September 2010.
[PDF] [BibTeX]

Poster

Download the poster that was presented at ECCV 2010 here: [PDF, 9MB]

Dataset


Download the dataset (with shadow boundary annotations) used to train and evaluate the shadow classifier presented in this paper. Please cite the paper if you use the data in a publication.

Code


Download the shadow detector and use it on your own images! Please cite the paper if you use the code in a publication.

Funding


This research is supported by:

- Okawa Foundation
- NSF IIS-0546547
- ONR N00014-08-1-0330
- ONR DURIP N00014-06-1-0762
- NSF IIS-0643628
- Microsoft Research Fellowship

Copyright notice


The documents contained in these directories are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder, except when identified by Creative Commons License 2.0, in which case the license applies to both the original and modified versions of the images.