CANE: Context Aware Navigation Environment
The goal of this project is to develop next-generation navigation systems that combine GPS (Global Positioning System),
computer vision, pattern recognition, and visualization technologies. We will integrate these technologies into a context-aware
navigation environment (CANE) that helps a driver manage information during navigation. The system will draw on
data about the driver, the vehicle, and the environment to make intelligent decisions about the content, medium, and timing
of the information it delivers. It will output audio information and/or display visual information on a full-windshield
head-up display (HUD). The displayed content may combine an easily navigable map, turn-by-turn
directions, points of interest (POI), adjacent traffic, and more. The system will process data such as the driver's
behavior, vehicle status, road conditions, the route plan, real-time traffic, and weather, and generate displays
suited to the driving scenario and supportive of the driver's needs.
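To make the idea of context-aware delivery concrete, the Python sketch below shows a toy policy that picks a medium and content from a handful of context signals. All names (DrivingContext, choose_delivery) and thresholds are hypothetical illustrations, not actual CANE components, and the real system would fuse far more signals than shown.

```python
# A toy context-aware delivery policy (hypothetical names and thresholds;
# the actual CANE system fuses many more signals than shown here).
from dataclasses import dataclass
from enum import Enum

class Medium(Enum):
    HUD = "hud"      # full-windshield head-up display
    AUDIO = "audio"  # spoken prompt

@dataclass
class DrivingContext:
    speed_kmh: float
    traffic_density: float  # 0.0 (empty road) .. 1.0 (congested)
    next_turn_m: float      # distance to the next maneuver, in meters
    bad_weather: bool

def choose_delivery(ctx: DrivingContext):
    """Pick the medium and content for the current driving context.

    When the driver is likely busy (dense traffic, imminent turn, poor
    weather), prefer a brief audio prompt; otherwise render richer visual
    guidance (map, turn arrows, POI and traffic overlays) on the HUD.
    """
    busy = (ctx.traffic_density > 0.7
            or ctx.next_turn_m < 150
            or ctx.bad_weather)
    if busy:
        return Medium.AUDIO, f"Turn in {ctx.next_turn_m:.0f} meters"
    return Medium.HUD, "map + turn arrow + POI and traffic overlay"

print(choose_delivery(DrivingContext(60.0, 0.8, 120.0, False)))
# -> (<Medium.AUDIO: 'audio'>, 'Turn in 120 meters')
```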
Trip Planning & Route Sharing
People usually know their local driving environments well: they know the best way to
a given destination (a shopping mall, a park, etc.). When people share a route face-to-face, they often sketch a rough map
with the street names and directions along the route. We have developed a multimodal system that lets a user share a
route electronically just as they would on paper. The system supports route sharing for trip planning through drawing on a map,
showing landmark images, and providing synthesized voice instructions, naturally incorporating human knowledge into trip planning.
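One way to picture such a shared route is as a sequence of annotated steps, each pairing a driving instruction with an optional landmark photo and feeding a text-to-speech engine. The sketch below is a minimal, hypothetical data structure (RouteStep, SharedRoute), not the system's actual format.

```python
# One plausible representation of a shared route (hypothetical names;
# the published system's actual format is not reproduced here).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RouteStep:
    street: str
    instruction: str                      # e.g., "Turn right onto Murray Ave"
    landmark_image: Optional[str] = None  # photo shown to the recipient

@dataclass
class SharedRoute:
    destination: str
    steps: List[RouteStep] = field(default_factory=list)

    def voice_script(self) -> List[str]:
        # Utterances to hand to a text-to-speech engine, one per step.
        return [s.instruction for s in self.steps]

route = SharedRoute("Riverside Mall")
route.steps.append(RouteStep("Forbes Ave", "Head east on Forbes Ave"))
route.steps.append(RouteStep("Murray Ave", "Turn right onto Murray Ave",
                             landmark_image="photos/gas_station.jpg"))
print(route.voice_script())
```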
Road Sign Detection & Recognition
Text on road signs carries much useful information for driving: it describes the current traffic situation, defines right-of-way, provides
warnings about potential risks, and permits or prohibits roadway access. We have developed a system that automatically detects
text on road signs from video and provides real-time traffic information to the driver. The system builds on a fast and robust framework
for incrementally detecting road-sign text, which applies a divide-and-conquer strategy to decompose the
original task into two subtasks: localization of road signs and detection of text on the signs. The framework provides a novel
way to detect text from video by integrating 2D image features in each video frame (e.g., color, edges) with 3D geometric structure
information of road signs extracted from the video sequence.
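A minimal OpenCV sketch of the two-stage structure follows. The green-sign color range, area filter, and edge-density score are toy stand-ins for the framework's actual detectors, and the 3D geometric verification across frames is omitted.

```python
# Two-stage sketch: localize candidate signs, then look for text only
# inside localized regions. All thresholds here are illustrative.
import cv2

def localize_signs(frame):
    """Stage 1: propose candidate sign regions by color."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (40, 80, 40), (85, 255, 255))  # greenish signs
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]
    return [(x, y, w, h) for (x, y, w, h) in boxes if w * h > 1000]

def text_likelihood(frame, box):
    """Stage 2: score a localized region for text (crude edge density)."""
    x, y, w, h = box
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    return float(cv2.Canny(roi, 100, 200).mean())

cap = cv2.VideoCapture("drive.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for box in localize_signs(frame):
        if text_likelihood(frame, box) > 20.0:
            print("possible sign text at", box)
cap.release()
```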
Landmark-Based Navigation
Current in-vehicle systems give turn-by-turn guidance with abstract visual instructions and fall short of minimizing the driver's
cognitive load. Human drivers, by contrast, naturally navigate by landmarks. We are developing multimedia techniques
for achieving landmark-based navigation in the next generation of in-vehicle route guidance systems. Our current work focuses on three
main classes of landmarks: 1) road signs, 2) store signs, and 3) buildings. We will develop technologies for labeling, detecting, and recognizing
these landmarks in images and videos. Furthermore, we will combine these technologies and evaluate the resulting system using a
full-windshield display.
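As a simple illustration of how recognized landmarks could feed route guidance, the sketch below anchors a turn instruction to the nearest landmark. The class labels mirror the three classes above, while the function and field names are hypothetical.

```python
# Turning an abstract instruction into a landmark-anchored one
# (hypothetical names; the real system recognizes landmarks from video).
from dataclasses import dataclass

LANDMARK_CLASSES = ("road_sign", "store_sign", "building")

@dataclass
class Landmark:
    name: str
    cls: str           # one of LANDMARK_CLASSES
    distance_m: float  # distance along the route from the vehicle

def landmark_instruction(turn, landmarks):
    """Anchor a turn instruction to the nearest recognized landmark,
    falling back to a distance-based instruction when none is visible."""
    if not landmarks:
        return f"Turn {turn} in 200 meters"
    nearest = min(landmarks, key=lambda lm: lm.distance_m)
    return f"Turn {turn} at the {nearest.name}"

print(landmark_instruction("left", [
    Landmark("Shell station", "store_sign", 180.0),
    Landmark("red brick church", "building", 220.0),
]))
# -> "Turn left at the Shell station"
```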
Related Publications:
- W. Wu, F. Blaicher, J. Yang, T. Seder, and D. Cui, A Prototype of Landmark-Based Car Navigation Using a Full-Windshield Head-Up Display System, ACM International Conference on Multimedia, Workshop on Ambient Media Computing, 2009.
- W. Wu and J. Yang, Semi-Automatically Labeling Objects in Images, IEEE Transactions on Image Processing, vol. 18, no. 6, pp. 1340-1349, 2009.
- W. Wu and J. Yang, Object Fingerprints for Content Analysis with Applications to Street Landmark Localization, Proceedings of the ACM International Conference on Multimedia (MM 2008), 2008.
- W. Wu and J. Yang, Semi-Supervised Learning of Object Categories from Paired Local Features, Proceedings of the ACM International Conference on Image and Video Retrieval (CIVR 2008), 2008.
- H. Cheng, Z. Liu, N. Zheng, and J. Yang, Enhancing a Driver's Situation Awareness Using a Global View Map, Proceedings of the IEEE International Conference on Multimedia and Expo (ICME 2007), pp. 1019-1022, 2007.
- W. Wu and J. Yang, SmartLabel: An Object Labeling Tool Using Iterated Harmonic Energy Minimization, Proceedings of the 14th ACM International Conference on Multimedia (MM 2006), pp. 891-900, 2006.
- W. Wu, J. Yang, and J. Zhang, A Multimedia System for Route Sharing and Video-based Navigation, Proceedings of the IEEE International Conference on Multimedia and Expo (ICME 2006), pp. 73-76, 2006.
- W. Wu, X. Chen, and J. Yang, Detection of Text on Road Signs from Video, IEEE Transactions on Intelligent Transportation Systems, vol. 6, no. 4, pp. 378-390, 2005.
- W. Wu, X. Chen, and J. Yang, Incremental Detection of Text on Road Signs from Video with Application to a Driving Assistant System, Proceedings of ACM Multimedia 2004 (MM 2004), pp. 852-859, 2004.
Last updated January 2010