CANE: Context Aware Navigation Environment 

==========

The goal of this project is to develop next-generation navigation systems that combine GPS (Global Positioning System), computer vision, pattern recognition, and visualization technologies. We will integrate these technologies into a context-aware navigation environment (CANE) that helps a driver manage information while navigating. The system will draw on data about the driver, the vehicle, and the environment to make intelligent decisions about the content, medium, and timing of information delivery. It will output audio information and/or display visual information on a full-windshield head-up display (HUD). The displayed content might combine an easily maneuverable map, turn-by-turn navigation directions, illustrations of Points of Interest (POI), adjacent traffic, and so on. The system will process data such as the driver's behavioral information, vehicle status, road conditions, the route plan, real-time traffic, and weather, and will generate displays suited to the driving scenario and supportive of the driver's needs.
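
As a rough sketch of the kind of decision logic described above, the Python below picks a delivery channel for one piece of content from a snapshot of driving context. All names, fields, and thresholds here (`DrivingContext`, `driver_workload`, the 0.8 cutoff, and so on) are illustrative assumptions, not part of the actual CANE design; a deployed policy would be tuned or learned from the driver, vehicle, and environment data the project collects.

```python
from dataclasses import dataclass
from enum import Enum

class Modality(Enum):
    HUD_VISUAL = "hud"      # draw on the full-windshield head-up display
    AUDIO = "audio"         # speak it instead
    DEFER = "defer"         # hold the message until workload drops

@dataclass
class DrivingContext:
    speed_kmh: float        # vehicle status
    driver_workload: float  # 0.0 (idle) .. 1.0 (overloaded), e.g. from behavior sensing
    is_turning: bool        # from the route plan
    visibility: float       # 0.0 (fog, night) .. 1.0 (clear), from weather data

def choose_delivery(ctx: DrivingContext, priority: float) -> Modality:
    """Pick how (and whether) to deliver one piece of content right now."""
    if ctx.driver_workload > 0.8 and priority < 0.9:
        return Modality.DEFER       # driver is busy; hold low-priority content
    if ctx.is_turning or ctx.visibility < 0.3:
        return Modality.AUDIO       # keep the windshield uncluttered
    return Modality.HUD_VISUAL      # normal case: put it on the HUD

# Example: a low-priority POI notice arriving mid-turn is spoken, not drawn.
ctx = DrivingContext(speed_kmh=45.0, driver_workload=0.5, is_turning=True, visibility=0.9)
print(choose_delivery(ctx, priority=0.4))   # Modality.AUDIO
```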

Trip Planning & Route Sharing

People usually know their local driving environment well: they know the best way to reach a given destination (a shopping mall, a park, etc.). When people share a route face-to-face, they often draw a rough map with street names and directions along the route. We have developed a multimodal system that lets a user share a route electronically just as they would on paper. The system supports route sharing for trip planning through drawing on a map, showing landmark images, and providing synthesized voice instructions, naturally incorporating human knowledge into trip planning.
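
A route shared this way is essentially a small multimodal document. As an illustration only (the structures and field names below are assumptions, not the system's actual format), one plausible encoding pairs the hand-drawn strokes with per-street maneuvers, optional landmark photos, and text destined for a speech synthesizer:

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class Maneuver:
    street: str
    action: str                          # e.g. "turn left", "continue"
    voice_text: str                      # fed to a speech synthesizer
    landmark_image: str = ""             # optional path/URL of a landmark photo

@dataclass
class SharedRoute:
    author: str
    destination: str
    sketch_strokes: list = field(default_factory=list)   # hand-drawn map polylines
    maneuvers: list = field(default_factory=list)

route = SharedRoute(
    author="alice",
    destination="Riverside Park",
    sketch_strokes=[[(0.0, 0.0), (0.4, 0.1), (0.4, 0.7)]],
    maneuvers=[
        Maneuver("Main St", "continue", "Follow Main Street for two blocks"),
        Maneuver("Oak Ave", "turn left", "Turn left at the gas station",
                 landmark_image="landmarks/gas_station.jpg"),
    ],
)
print(json.dumps(asdict(route), indent=2))   # ready to send to another user
```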

Road Sign Detection & Recognition

Text on road signs carries much useful information for driving: it describes the current traffic situation, defines right-of-way, warns about potential risks, and permits or prohibits roadway access. We have developed a system that automatically detects text on road signs in video and provides real-time traffic information to the driver. The system is built on a fast and robust framework for incrementally detecting text on road signs from video. The framework applies a divide-and-conquer strategy that decomposes the original task into two subtasks: localizing road signs and detecting text on the signs. It offers a novel way to detect text in video by integrating 2D image features in each video frame (e.g., color, edges) with 3D geometric structure information about the road signs extracted from the video sequence.
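
To make the divide-and-conquer structure concrete, here is a minimal Python sketch assuming the OpenCV 4 bindings (`cv2`): stage one proposes sign-like regions from 2D color and edge cues, a simple cross-frame persistence test stands in for the framework's 3D geometric-consistency check, and the final text-recognition stage is left as a stub. The thresholds and grid sizes are illustrative, not the published method's values.

```python
import cv2
import numpy as np

def localize_sign_candidates(frame_bgr):
    """Stage 1: propose sign-like regions from 2D cues (color + edges)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    saturated = cv2.inRange(hsv, np.array([0, 80, 80]), np.array([180, 255, 255]))
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.dilate(cv2.Canny(gray, 100, 200), np.ones((5, 5), np.uint8))
    mask = cv2.bitwise_and(saturated, edges)      # colorful AND near strong edges
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]

def sign_text_regions(video_path):
    """Stage 2 feeder: keep only candidates that persist across frames
    (a cheap stand-in for the 3D structure check), then hand each stable
    region to a text detector / OCR engine (not implemented here)."""
    cap = cv2.VideoCapture(video_path)
    hits = {}                                     # coarse grid cell -> frame count
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for (x, y, w, h) in localize_sign_candidates(frame):
            cell = (x // 40, y // 40)             # coarse spatial bin
            hits[cell] = hits.get(cell, 0) + 1
            if hits[cell] == 5:                   # seen in ~5 frames: likely a real sign
                yield frame[y:y + h, x:x + w]     # ROI to pass to OCR
    cap.release()
```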

Landmark-Based Navigation

Current in-vehicle systems give turn-by-turn guidance with abstract visual instructions and fall short of minimizing the driver's cognitive load. Human drivers, by contrast, naturally navigate by landmarks. We are developing multimedia techniques for landmark-based navigation in the next generation of in-vehicle route guidance systems. Our current work focuses on three main classes of landmarks: 1) road signs, 2) store signs, and 3) buildings. We will develop technologies for labeling, detecting, and recognizing these landmarks in images and videos. Furthermore, we will combine these technologies and evaluate the resulting system using a full-windshield display.
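
As a sketch of how a recognized landmark could feed the guidance layer (the names, coordinates, and salience heuristic below are all hypothetical, not part of our system), the snippet anchors a turn instruction to the most visible landmark near the maneuver point:

```python
import math
from dataclasses import dataclass

@dataclass
class Landmark:
    name: str
    kind: str          # "road sign" | "store sign" | "building"
    lat: float
    lon: float
    salience: float    # 0..1, how easy the landmark is to spot from the road

def distance_m(lat1, lon1, lat2, lon2):
    """Equirectangular approximation; accurate enough at intersection scale."""
    k = 111_320.0      # metres per degree of latitude
    dx = (lon2 - lon1) * k * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * k
    return math.hypot(dx, dy)

def landmark_instruction(turn_lat, turn_lon, action, landmarks, max_m=60.0):
    """Replace 'turn right in 200 m' with a landmark-anchored instruction."""
    nearby = [lm for lm in landmarks
              if distance_m(turn_lat, turn_lon, lm.lat, lm.lon) <= max_m]
    if not nearby:
        return f"{action} at the next intersection"
    best = max(nearby, key=lambda lm: lm.salience)   # most visible landmark wins
    return f"{action} at the {best.name}"

lms = [Landmark("red brick church", "building", 40.4435, -79.9450, 0.9),
       Landmark("STOP sign", "road sign", 40.4433, -79.9452, 0.6)]
print(landmark_instruction(40.4434, -79.9451, "Turn right", lms))
# -> "Turn right at the red brick church"
```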

Last updated: January 2010

----------