FITG experiments: Occlusion and Clutter
Configuration:
Testing was done on the XUV using the XUV scanner. This is
a two-axis scanner giving 3D volume images. The tracker operates
on 2D range data, similar to what would be returned by a single-axis
scanner with a horizontal scan plane. The 3D data was converted
into a virtual 2D scan: at each azimuth, the range return reported is the range of the closest feature that was sufficiently steep and tall. This conversion is similar to a simple obstacle detector.
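The conversion code itself is not reproduced here; the following Python sketch shows the general idea, assuming the 3D returns arrive grouped by azimuth and sorted by range, and using made-up height and slope thresholds in place of the detector's real values.

    # Hypothetical thresholds; the real obstacle-detector values are not given here.
    MIN_HEIGHT = 0.3      # meters an object must rise above local ground ("tall")
    MIN_SLOPE  = 1.0      # rise over run between consecutive returns ("steep")
    MAX_RANGE  = 60.0     # reported when an azimuth has no qualifying return

    def virtual_2d_scan(columns):
        # columns: one list per azimuth of (range, height) returns, closest first.
        # For each azimuth, report the range of the closest feature that is both
        # steep and tall enough, as a simple obstacle detector would.
        ranges = []
        for column in columns:
            hit = MAX_RANGE
            prev = None
            for r, h in column:
                steep = prev is None or (r > prev[0] and
                                         (h - prev[1]) / (r - prev[0]) >= MIN_SLOPE)
                if h >= MIN_HEIGHT and steep:
                    hit = r
                    break
                prev = (r, h)
            ranges.append(hit)
        return ranges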
Interpretation of Tracker Display:
In these videos, the robot position (here represented by the car icon)
is fixed in the center of the screen, so the robot appears to be
stationary even though it is moving. All of the fixed objects (trees,
etc.) appear to be moving instead. A person is walking in front of or
to the right of the robot, and appears largely motionless because he is
deliberately walking so as to remain in the field of view of the
scanner.
The XUV scanner has a 90 degree field of view, so it cannot
provide 360 degree surround sensing. We set the scanner head in a
fixed orientation and coached the test subject to walk so as to remain
in the field of view. We plan future tests with a SICK scanner
mounted on the side, which will greatly reduce the field-of-view
problem but will create problems with ground returns.
Each tracked object is assigned a track ID. To ease
visualization, data related to different tracks is displayed in
different colors. The individual points associated with
each track are displayed, along
with superimposed features, shown as a straight line, 90 degree corner,
bounding box or X. If we are unable to classify a track as either
fixed or moving, then it is shown in gray.
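The record kept for each track is not shown in these notes; the sketch below is only a guess at what such a per-track display record might look like, with the color palette, field names and enum values invented for illustration.

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional

    class Feature(Enum):        # shape superimposed on a track's points
        LINE = auto()           # straight line
        CORNER = auto()         # 90 degree corner
        BOX = auto()            # bounding box
        X = auto()              # "X" marker

    class Classification(Enum):
        UNKNOWN = auto()
        FIXED = auto()
        MOVING = auto()

    PALETTE = ["yellow", "white", "cyan", "magenta", "green"]   # hypothetical colors

    @dataclass
    class TrackDisplay:
        track_id: int
        parent_id: Optional[int]     # set when the track split off another track
        feature: Feature
        classification: Classification

        def color(self):
            # Unclassified tracks are drawn in gray; otherwise one color per ID.
            if self.classification is Classification.UNKNOWN:
                return "gray"
            return PALETTE[self.track_id % len(PALETTE)]

        def label(self):
            # e.g. "135:129" means that track 135 split off from track 129.
            if self.parent_id is not None:
                return "%d:%d" % (self.track_id, self.parent_id)
            return str(self.track_id)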
When we are reasonably confident of our motion estimate, we display a
red curve showing the predicted motion over the next two
seconds. Additional motion data is then displayed as text after the
track ID: the velocity (meters/sec), acceleration (meters/sec^2) and
turn rate (degrees/sec). For small objects such as a walking person,
the acceleration is forced to zero, so we are effectively using a
constant velocity / constant turn rate model.
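The tracker's filter equations are not reproduced here, but as an illustration of what a constant velocity / constant turn rate prediction looks like, the sketch below rolls a state forward over the two-second display horizon. The function name, step size, and use of radians (the display shows degrees/sec) are our choices.

    import math

    def predict_cv_ct(x, y, heading, speed, turn_rate, horizon=2.0, step=0.1):
        # Predict the path of a constant velocity / constant turn rate target
        # (zero acceleration) over the next 'horizon' seconds -- roughly what
        # the red curve shows.  heading in rad, speed in m/s, turn_rate in rad/s.
        path = []
        steps = int(round(horizon / step))
        for i in range(1, steps + 1):
            t = i * step
            if abs(turn_rate) < 1e-6:            # straight-line limit
                px = x + speed * t * math.cos(heading)
                py = y + speed * t * math.sin(heading)
            else:                                 # arc of constant curvature
                h = heading + turn_rate * t
                px = x + (speed / turn_rate) * (math.sin(h) - math.sin(heading))
                py = y + (speed / turn_rate) * (math.cos(heading) - math.cos(h))
            path.append((px, py))
        return path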
If no data was associated with a track on the last scan, then a "?" is
printed before the track ID. This happens when a track is briefly
occluded or lost in clutter, and also when the track passes out of the
scanner field of view. In this test, tracks are dropped
when no data is associated for two seconds.
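The following is a minimal sketch of this bookkeeping, with class and method names invented for illustration rather than taken from the tracker:

    COAST_TIMEOUT = 2.0   # seconds without associated data before a track is dropped

    class TrackBookkeeping:
        def __init__(self, track_id, now):
            self.track_id = track_id
            self.last_update = now       # time of the last scan that hit this track
            self.hit_last_scan = True

        def on_scan(self, now, associated):
            # Called once per scan; 'associated' says whether any points matched.
            self.hit_last_scan = associated
            if associated:
                self.last_update = now

        def label(self):
            # A leading "?" marks a track that is coasting on its motion model.
            return ("" if self.hit_last_scan else "?") + str(self.track_id)

        def should_drop(self, now):
            # Dropped after two seconds with no associated data.
            return now - self.last_update > COAST_TIMEOUT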
Vegetation:
Vegetation such as brush and tall grasses produces noisy scan
data. In this image, the points representing the person (circled
in red) are buried in clutter:
(mpeg)
We can successfully track through this clutter as long as the
trajectory does not change too dramatically while the object is
obscured.
(mpeg)
Simple occlusion:
If the person walks between the scanner and the tree, staying more
than 0.3 meters away from the tree, and does not make any dramatic
changes in direction, then we can track through this occlusion:
(mpeg)
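The 0.3 meter figure suggests a distance-based segmentation step upstream of the tracker. Under that assumption, the sketch below shows the general idea; the gap threshold constant and function name are ours, not the system's.

    import math

    SEGMENT_GAP = 0.3   # meters; hypothetical, chosen to match the figure above

    def segment_scan(points):
        # Group 2D scan points (ordered by azimuth) into clusters, starting a
        # new cluster whenever consecutive returns are more than SEGMENT_GAP
        # apart.  A person closer than the gap to a tree ends up in the tree's
        # cluster, which is what the blooper sequence below shows.
        clusters, current = [], []
        for p in points:
            if current and math.hypot(p[0] - current[-1][0],
                                      p[1] - current[-1][1]) > SEGMENT_GAP:
                clusters.append(current)
                current = []
            current.append(p)
        if current:
            clusters.append(current)
        return clusters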
Blooper reel:
We tested under conditions which we knew would be difficult for the
current tracker. After some parameter tweaks, the overall
performance was somewhat better than expected. However, there is
still room for more work. In this
case, the person walked very close behind the tree, then hugged the
tree so as to reverse direction by the time he had fully rounded the
tree. Track 135:129 is the person approaching the
tree. The :129 indicates that this track split off from track
129. The tree is track 125 (white). As the person closely approaches
the tree, the change in color from yellow to white shows that the
person's points are being segmented together with the tree's.
Track 135:129 keeps coasting with its original velocity, and
eventually dies. Then the tree track (125) splits as the person once
again moves far enough away to be segmented separately. Somewhat
arbitrarily, the ID 125 is now assigned to the person, while the tree
is assigned a new ID, 136:125.