FITG experiments: four scanners with occlusion and clutter


Progress this year:

Problem areas:

Inherent limitations of the approach:

(Many of these problems could be helped by using a real 3D scanner such as the XUV laser in some way.  We have a currently-broken proof of concept of this in the Jay integration of the XUV laser into the 2D scanner framework.  More generally, segmentation could be done in 3D.  There would likely be CPU speed issues due to the large amount of data to be processed.)
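
As a rough illustration of what segmentation in 3D might look like (and why the CPU cost is a worry), here is a minimal Euclidean-clustering sketch in Python.  This is not the Jay integration code; the 0.5 m cluster radius and the point count are assumed placeholder values.

# Illustration only: simple Euclidean clustering of a 3D point cloud.
import numpy as np
from scipy.spatial import cKDTree
from collections import deque

def euclidean_clusters(points, radius=0.5):
    """Group 3D points into clusters whose neighbors lie within `radius`."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    cluster = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        queue = deque([seed])
        labels[seed] = cluster
        while queue:
            i = queue.popleft()
            for j in tree.query_ball_point(points[i], radius):
                if labels[j] == -1:
                    labels[j] = cluster
                    queue.append(j)
        cluster += 1
    return labels

# Even a modest 3D sensor produces far more points per second than a 2D SICK
# line scan, which is where the CPU speed concern comes from.
labels = euclidean_clusters(np.random.rand(5000, 3) * 20.0)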

Configuration:


Testing was done on XUV2 using four SICKs, one each on the front, the back, and the two sides.  Since each scanner has a 180-degree FOV and the scanners are oriented at 90-degree increments, most of the area around the robot is visible to two scanners; only rectangular regions extending directly out from the front, back, and sides are seen by a single scanner.
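
To make the coverage geometry concrete, here is a small sketch that counts which scanners can see a point given in the vehicle frame.  It is not the actual FITG code; the mounting positions, vehicle dimensions, and the idealized 180-degree FOV model are assumptions.

# Sketch: how many of the four SICKs see a given vehicle-frame point.
import math

# ((x, y) mount position in meters, facing angle in radians), vehicle frame:
# +x forward, +y left.  Placeholder half-length 1.5 m, half-width 1.0 m.
SCANNERS = {
    "front": (( 1.5,  0.0),  0.0),
    "back":  ((-1.5,  0.0),  math.pi),
    "left":  (( 0.0,  1.0),  math.pi / 2),
    "right": (( 0.0, -1.0), -math.pi / 2),
}

def visible_from(point):
    """Return the names of scanners whose 180-degree FOV contains the point."""
    px, py = point
    seen = []
    for name, ((sx, sy), facing) in SCANNERS.items():
        bearing = math.atan2(py - sy, px - sx)           # direction to the point
        diff = (bearing - facing + math.pi) % (2 * math.pi) - math.pi
        if abs(diff) <= math.pi / 2:                     # within +/- 90 deg of facing
            seen.append(name)
    return seen

print(visible_from((5.0, 5.0)))   # off the front-left corner: ['front', 'left']
print(visible_from((5.0, 0.0)))   # straight ahead on the centerline: ['front'] only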

Simple example image:

This image shows the driver walking alongside the robot, passing near a tree.  The "1" is the track ID, and "V1.1" indicates a speed of 1.1 meters/sec.  The blue arc is the two-second projection of the current motion, and the red trail is the previous path (up to 20 seconds' worth).  The front of the XUV is the pointy end, though the actual XUV doesn't come to a point in front.
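
For reference, the overlay quantities can be computed from the track state roughly as sketched below.  This is an illustration only, not the actual display code; the constant-speed, constant-turn-rate model for the two-second arc and the example numbers are assumptions (apart from track 1's 1.1 m/s speed, which comes from the image).

# Sketch of the track-overlay quantities (blue arc and red trail).
import math

def project_arc(x, y, heading, speed, turn_rate, horizon=2.0, steps=20):
    """Sample points along a constant-speed, constant-turn-rate projection."""
    pts = []
    dt = horizon / steps
    for _ in range(steps):
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        heading += turn_rate * dt
        pts.append((x, y))
    return pts

def trim_trail(trail, now, max_age=20.0):
    """Keep only the last 20 seconds of (timestamp, x, y) history (the red trail)."""
    return [(t, x, y) for (t, x, y) in trail if now - t <= max_age]

# The walker in the example: track 1 moving at 1.1 m/s (pose and turn rate made up).
arc = project_arc(x=3.0, y=-2.0, heading=0.2, speed=1.1, turn_rate=0.1)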



Complex situation:

Here's a more complex situation with three walkers, trees, brush, and ground returns:


Un-annotated version:


Smaller 400x400 version of this picture, zoomed in around the robot, showing only two walkers.


Occlusion:



Un-annotated version:


Clutter:

This shows a close approach of the walker to the front of a tree.  It is not a maximally close approach, but it is the closest one I found in this particular dataset.
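
One way to quantify "close" here is to measure the gap between the walker's returns and the tree's returns and compare it to the segmentation break distance; if the gap drops below that distance, the two can merge into a single segment.  The sketch below is only an illustration; the 0.3 m break distance is an assumed value, not the tracker's actual parameter.

# Sketch: clearance between the walker's returns and the tree's returns (2D points).
import numpy as np

def min_gap(walker_pts, tree_pts):
    """Smallest distance between any walker return and any tree return."""
    d = np.linalg.norm(walker_pts[:, None, :] - tree_pts[None, :, :], axis=2)
    return d.min()

BREAK_DIST = 0.3  # assumed segmentation break distance, meters

def would_merge(walker_pts, tree_pts):
    return min_gap(walker_pts, tree_pts) < BREAK_DIST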


Un-annotated version:


Still pictures:

Videos: