Byron Spice
Tuesday, August 7, 2012
Visual Data Mining of Google Street View Identifies Cities' Distinctive Details
PITTSBURGH - Paris is one of those cities that has a look all its own, something that goes beyond landmarks such as the Eiffel Tower or Notre Dame. Researchers at Carnegie Mellon University and INRIA/Ecole Normale Supérieure in Paris have developed visual data mining software that can automatically detect the sometimes subtle features - street signs, streetlamps, balcony railings - that give Paris and other cities their distinctive look.
The software analyzed more than 250 million visual elements gleaned from 40,000 Google Street View images of Paris, London, New York, Barcelona and eight other cities to find those that were both frequent and discriminative - common within one city but rare in the others. This yielded sets of geo-informative visual elements unique to each city, such as cast-iron balconies in Paris, fire escapes in New York City and bay windows in San Francisco.
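The underlying criterion - frequent in one city, rare everywhere else - can be illustrated with a short sketch. The Python fragment below is a hypothetical approximation of that idea, not the team's released code: the nearest-neighbor purity test, the function name and the choice of k are all our assumptions.

```python
# Illustrative (hypothetical) test of whether a candidate patch is
# "geo-informative": its nearest neighbors in descriptor space should
# come overwhelmingly from the same city it was sampled from.
import numpy as np

def geo_informativeness(candidate, descriptors, labels, target_city, k=20):
    """Score one candidate patch against a labeled pool of patches.

    candidate   -- 1-D feature vector for the candidate patch
    descriptors -- 2-D array, one row per patch drawn from all cities
    labels      -- city label for each row of descriptors
    target_city -- city the candidate was sampled from
    k           -- number of nearest neighbors to inspect (assumed value)
    """
    dists = np.linalg.norm(descriptors - candidate, axis=1)
    nearest = np.argsort(dists)[:k]
    # High purity means the element discriminates its city; finding k
    # close matches at all means it is frequent rather than a one-off.
    return np.mean([labels[i] == target_city for i in nearest])
```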
The discovered visual elements can be useful for a variety of computational geography tasks. Examples include mapping architectural correspondences and influences within and across cities, or finding representative elements at different geo-spatial scales such as a continent, a city, or a specific neighborhood.
Researchers will present their findings Aug. 9 at SIGGRAPH 2012, the International Conference on Computer Graphics and Interactive Techniques, at the Los Angeles Convention Center.
Alexei Efros, associate professor of robotics and computer science at CMU, noted that although finding patterns in very large databases - so-called Big Data mining - is widely used, it has so far been limited to text or numerical data. "Visual data is much more difficult, so the field of visual data mining is still in its infancy, but I believe it holds a lot of promise. Our data mining technique was able to go through millions of image patches automatically - something that no human would be patient enough to do," said Efros, who collaborated with colleagues including Abhinav Gupta, assistant research professor of robotics, and Carl Doersch, a Ph.D. student in CMU's Machine Learning Department. "In the long run, we wish to automatically build a digital visual atlas of not only architectural but also natural geo-informative features for the entire planet."
For this study, the researchers started with 25,000 randomly selected visual elements from city images gathered from Google Street View. A machine learning program then analyzed these visual elements to determine which details made them different from similar visual elements in other cities. After several iterations, the software identified the top-scoring patches for identifying a city. For Paris, those patches corresponded to doors, balconies, windows with railings, street signs (the shape and color of the signs, not the street names on the signs), and special Parisian lampposts. It had more trouble identifying geo-informative elements for U.S. cities, which the researchers attributed to the relative lack of stylistic coherence in American cities, with their melting pot of styles and influences.
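The iterative refinement described above can be sketched in a few lines of Python. The version below is a hedged reconstruction, not the authors' implementation: it stands in an off-the-shelf linear SVM for the learning step, and the function name, the fixed number of rounds and the re-mining size top_k are assumptions.

```python
# Hypothetical sketch of the iterative mining loop: train a detector
# for one candidate element against patches from other cities, re-mine
# its strongest matches in the target city, and repeat.
import numpy as np
from sklearn.svm import LinearSVC

def refine_element(seed, city_patches, other_city_patches, rounds=3, top_k=5):
    """Grow one candidate patch cluster into a discriminative detector.

    seed               -- descriptors of the initial patch cluster
    city_patches       -- pool of patch descriptors from the target city
    other_city_patches -- descriptors from all other cities (negatives)
    """
    positives = seed
    for _ in range(rounds):
        X = np.vstack([positives, other_city_patches])
        y = np.r_[np.ones(len(positives)), np.zeros(len(other_city_patches))]
        detector = LinearSVC(C=0.1).fit(X, y)
        # Re-mine: keep the target-city patches the detector fires on
        # most strongly; they become the positive set for the next round.
        scores = detector.decision_function(city_patches)
        positives = city_patches[np.argsort(scores)[-top_k:]]
    return detector
```

In this scheme a separate detector would be trained for each candidate element, and only the highest-scoring ones - the doors, balconies and lampposts mentioned above - would be kept.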
"We let the data speak for itself," said Gupta, noting the entire process is automated, yet produces a set of images that convey a better stylistic feel for a city than a set of random images.
Doersch said this process requires a significant amount of computing time, keeping 150 processors working overnight. By comparison, art directors for the 2007 Pixar movie "Ratatouille" spent a week running around Paris taking photos so they could capture the look and feel of Paris in their computer model of the city.
More information, including a video, is available on the project web site, http://graphics.cs.cmu.edu/projects/whatMakesParis/.
In addition to Efros, Gupta and Doersch, the research team included Saurabh Singh, a former research assistant in the Robotics Institute, and Josef Sivic, a researcher at INRIA / Ecole Normale Supérieure.
Byron Spice | 412-268-9068 | bspice@cs.cmu.edu