This tutorial addresses Visual SLAM: the problem of building a sparse or dense 3D model of the scene while traveling through it, and simultaneously recovering the trajectory of the platform/camera. Visual SLAM has received much attention in the computer vision community in the last few years, as more challenging data sets have become available and visual SLAM is beginning to run on mobile cameras and to be used in augmented reality (AR) and other applications. We will provide an introduction to the core concepts underlying current sparse, dense, and semantic visual SLAM systems.
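
To make the "simultaneously recovering the trajectory" idea concrete, below is a minimal pose-graph sketch in Python using GTSAM, the factor-graph library developed in the organizers' groups. It is not part of the tutorial materials: the square trajectory, the odometry steps, and the loop-closure measurement are synthetic, chosen only to illustrate how relative-pose constraints are jointly optimized into a consistent trajectory.

import numpy as np
import gtsam
from gtsam.symbol_shorthand import X

graph = gtsam.NonlinearFactorGraph()
# 6-dof noise: first three entries rotation (rad), last three translation (m).
noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1] * 6))

# Anchor the first pose, then chain relative-pose (odometry) constraints:
# each step moves 2 m forward and turns 90 degrees, tracing a square.
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), noise))
step = gtsam.Pose3(gtsam.Rot3.Yaw(np.pi / 2), gtsam.Point3(2.0, 0.0, 0.0))
for i in range(4):
    graph.add(gtsam.BetweenFactorPose3(X(i), X(i + 1), step, noise))

# Loop closure: the camera recognizes it has returned to the starting pose.
graph.add(gtsam.BetweenFactorPose3(X(4), X(0), gtsam.Pose3(), noise))

# Perturbed initial guesses, then nonlinear least-squares optimization.
truth = [(0.0, 0.0, 0.0), (np.pi / 2, 2.0, 0.0), (np.pi, 2.0, 2.0),
         (3 * np.pi / 2, 0.0, 2.0), (0.0, 0.0, 0.0)]
initial = gtsam.Values()
for i, (yaw, x, y) in enumerate(truth):
    initial.insert(X(i), gtsam.Pose3(gtsam.Rot3.Yaw(yaw + 0.1),
                                     gtsam.Point3(x + 0.2, y - 0.2, 0.0)))
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)

A full visual SLAM system adds landmark variables and camera projection factors to the same graph, which is one way the "sparse 3D model" and the trajectory end up being estimated jointly.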
Organizers
Frank Dellaert, Georgia Institute of Technology
Michael Kaess, Carnegie Mellon University
Invited Lecturers
Stephan Weiss, NASA Jet Propulsion Laboratory
Richard Newcombe, University of Washington
Chris Beall, Georgia Institute of Technology
Schedule
7:30 – 8:30 | Breakfast
8:30 – 10:15 | AM Session 1: Visual Odometry
10:15 – 10:45 | Coffee Break
10:45 – 12:30 | AM Session 2: Visual SLAM
12:30 – 13:30 | Lunch Buffet
13:30 – 15:25 | PM Session 1: Advanced Topics
15:25 – 15:55 | Coffee Break
15:55 – 17:00 | PM Session 2: Dense SLAM