Call for Papers

US lawmakers have recently passed legislation that allows fully autonomous vehicles to share public roads. With their potential to revolutionize the transport experience, and to improve road safety and traffic efficiency, there is a strong push by vehicle manufacturers and government agencies to bring autonomous vehicles to the broad market. The recent demonstrations at the DARPA Grand Challenges and by industry leaders have established that the core technical barrier to achieving autonomous vehicles is road scene understanding. However, although vehicle infrastructure, signage, and the rules of the road have been designed to be interpreted fully by visual inspection, the use of computer vision in current autonomous vehicles is minimal. There is a perception that a wide gap exists between what the automotive industry needs to successfully deploy camera-based autonomous vehicles and what is currently possible using computer vision techniques.

The goal of this workshop is to bring together leaders from academia and industry to determine the true extent of this gap, to identify the most relevant computer vision problems to solve, and to learn from others about proposed avenues and solutions. Within the scope of the workshop are core computer vision tasks such as dynamic 3D reconstruction, pedestrian and vehicle detection, and predictive scene understanding, all required capabilities for an autonomous vehicle. In particular, we will cover (but not limit ourselves to) the following questions in this workshop:
Submission

Papers should describe original, unpublished work on the above or closely related topics. Each paper will receive a double-blind review, moderated by the workshop chairs. Authors should take the following into account:
The author kit provides LaTeX2e and Word templates for submissions, and an example paper that demonstrates the format. Please refer to this example for detailed formatting instructions. A paper ID will be allocated to you during submission. Please replace the asterisks in the example paper with your paper's ID before uploading your file.

Important Dates
Submission deadline: September 10, 2013
Author notification: October 1, 2013
Camera-ready: October 10, 2013
Workshop: December 2, 2013, Room 105
Committees

General Chairs
Bart Nabbe, Tandent Vision Science, USA
Yaser Sheikh, Carnegie Mellon, USA
Program Chairs
Uwe Franke, Daimler AG, Germany
Martial Hebert, Carnegie Mellon, USA
Fernando De la Torre, Carnegie Mellon, USA
Raquel Urtasun, Toyota Technological Institute, USA
Program Committee
Ijaz Akhter, MPI, Tübingen, Germany
Mykhaylo Andriluka, TU Darmstadt, Germany
Alper Ayvaci, Honda Research Institute, USA
Hernan Badino, NREC, USA
Alexander Barth, Daimler AG, USA
Paulo Borges, CSIRO Brisbane, Australia
Goksel Dedeoglu, Texas Instruments, USA
Frank Dellaert, Georgia Tech., USA
Andras Ferencz, Mobileye, USA
Andreas Geiger, KIT, Germany
Abdelaziz Khiat, Nissan, Japan
Sanjeev Koppal, Texas Instruments, USA
Dirk Langer, Volkswagen, USA
Philip Lenz, KIT, Germany
Dan Levi, General Motors, USA
Jesse Levinson, Stanford University, USA
Simon Lucey, CSIRO Brisbane, Australia
Srinivasa Narasimhan, Carnegie Mellon, USA
Michael Samples, Toyota, USA
Bernt Schiele, Max Planck Institut Informatik, Germany
Jianbo Shi, UPenn, USA
Christoph Stiller, KIT, Germany
Wende Zhang, GM, USA
Invited Talks

Uwe Franke, Daimler AG, Germany
Srinivasa Narasimhan, Carnegie Mellon University
Raquel Urtasun, University of Toronto
Making Bertha See
Bio: Uwe Franke received the Ph.D. degree in electrical engineering from the Technical University of Aachen, Germany, in 1988. Since 1989 he has been with Daimler Research and Development, working continuously on the development of vision-based driver assistance systems. Since 2000 he has headed Daimler's Image Understanding Group and is a well-known expert in real-time stereo vision and image understanding. His recent work is on the optimal fusion of stereo and motion, called 6D-Vision. The stereo technology developed by his group is the basis for the stereo camera system of the new Mercedes S- and E-Class vehicles introduced in 2013. Besides fully autonomous emergency braking, these cars offer autonomous driving in traffic jams.
Programmable Headlights: Smart and Safe Lighting Solutions for the Road Ahead
Bio: Srinivasa Narasimhan is an Associate Professor in the Robotics Institute at Carnegie Mellon University. His group focuses on novel techniques for imaging, illumination, and light transport to enable applications in vision, graphics, robotics, and medical imaging. His work has received several awards: the Ford URP Award (2013), Best Paper Runner-Up Prize (ACM I3D 2013), Best Paper Honorable Mention Award (IEEE ICCP 2012), Best Paper Award (IEEE PROCAMS 2009), the Okawa Research Grant (2009), the NSF CAREER Award (2007), the Adobe Best Paper Award (IEEE Workshop on Physics-Based Methods in Computer Vision, ICCV 2007), and the IEEE Best Paper Honorable Mention Award (IEEE CVPR 2000). He is a co-inventor of smart headlights, which made several top-10 lists of promising technologies, including those of Car and Driver and Edmunds. He is also a co-inventor of the Aqualux 3D display, assorted pixels, motion-aware cameras, and a low-power outdoor 'Kinect'. He co-chaired the International Symposium on Volumetric Scattering in Vision and Graphics in 2007, the IEEE Workshop on Projector-Camera Systems (PROCAMS) in 2010, and the IEEE International Conference on Computational Photography (ICCP) in 2011; is co-editing a special journal issue on Computational Photography in 2013; and serves on the editorial board of the International Journal of Computer Vision.
Visual Scene Understanding for Autonomous Systems
Bio: Raquel Urtasun is an Assistant Professor at the University of Toronto. Previously she was an Assistant Professor at TTI-Chicago, a philanthropically endowed academic institute located on the campus of the University of Chicago. She was a visiting professor at ETH Zurich during the spring semester of 2010. Before that, she was a postdoctoral research scientist at UC Berkeley and ICSI, and a postdoctoral associate at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. She completed her Ph.D. at the Computer Vision Laboratory at EPFL, Switzerland, in 2006, working with Pascal Fua and with David Fleet of the University of Toronto. She has been an area chair of multiple learning and vision conferences (e.g., NIPS, UAI, ICML, ICCV, CVPR, ECCV) and has served on the committees of numerous international computer vision and machine learning conferences. Her major interests are statistical machine learning and computer vision, with a particular focus on non-parametric Bayesian statistics, latent variable models, structured prediction, and their application to semantic scene understanding.
Best Paper Award

A best paper award will be recommended during peer review by the program committee and selected by the workshop chairs. The winner will receive a recognition certificate and a check for USD 500, sponsored by Tandent Vision Science.