Contents: Overview, MapBuilder requests, MapBuilderNode, Request parameters, Local and world maps, Navigation markers, Gaze points
You tell the MapBuilder what you want it to do by constructing a MapBuilderRequest instance and passing it to the MapBuilder. When the MapBuilder has completed your request, it posts an event whose generator ID is mapbuilderEGID. The results of the MapBuilder operation will be a collection of shapes in camera, local, or world space, as specified in the request.
Shape type identifiers such as blobDataType are defined in the file DualCoding/ShapeTypes.h.
MapBuilderRequest mapreq(MapBuilderRequest::cameraMap);
mapreq.addObjectColor(blobDataType,"pink");
const unsigned int request_id = mapbuilder.executeRequest(mapreq);
erouter->addListener(this, EventBase::mapbuilderEGID, request_id);
Take a moment to browse the MapBuilderRequest class documentation to see the options available.
MapBuilderNode provides a MapBuilderRequest member called mapreq that holds the request, which you can modify. Here is a sample program written in the state machine shorthand notation that uses the MapBuilder to find pink blobs in the current camera image, and then reports the number of blobs that were found. Note: whenever you define a state machine that uses any part of the DualCoding vision system, which includes the MapBuilder, the behavior's parent node (CountPinkBlobs in the example below) should be a subclass of VisualRoutinesStateNode.
$nodeclass CountPinkBlobs : VisualRoutinesStateNode {

  $nodeclass MyRequest : MapBuilderNode($,MapBuilderRequest::cameraMap) : constructor {
    mapreq.addObjectColor(blobDataType,"pink");
  }

  $nodeclass ReportResult : VisualRoutinesStateNode : doStart {
    NEW_SHAPEVEC(blobs, BlobData, select_type<BlobData>(camShS));
    ostringstream os;
    os << "I saw " << blobs.size() << " pink blobs";
    sndman->speak(os.str());
  }

  $setupmachine{
    startnode: MyRequest =MAP=> ReportResult
  }
}
clearShapes
tells the MapBuilder to clear the shape
space before extracting shapes into it. If your behavior does
repetitive map building operations, you'll want to clear out the shape
space at the start of each request. clearShapes
defaults
to true. Set it to false if you want to manually control the
contents of the shape space using operations like
camShS.clear()
, or by explicitly deleting the shapes you
don't want to retain.
rawY
tells the MapBuilder whether to leave an
intensity (Y-channel) image named "rawY" in camera space, which can be
useful for debugging using the SketchGUI. The rawY
parameter defaults to true.
minBlobAreas
specifies, for various colors, the
minimum number of pixels a blob must have to be recognized by the
MapBuilder. The type of minBlobAreas
is
std::map<color_index, int>
. To ease the task of
specifying minimum areas, the addMinBlobArea method can be used. The
default setting is no minimum for any color.
pursueShapes
tells the MapBuilder that it is free to
move the camera in order to get a better view of shapes that are only
partly visible in the current image. For example, if a line runs out
of the camera frame, the only way to determine the line's true extent
is to move the camera until the endpoint is found. In visually noisy
environments, the MapBuilder might think it sees something but, upon
further inspection, it turns out to be just a shadow or reflection or
false edge. When pursueShapes
is true, the MapBuilder
tries to confirm each shape by getting several views of it, moving the
camera at least once. Note: this function was originally created for
the AIBO, which had a lot of camera noise, so it is very conservative
and tries to get strong confirmation for each shape. The algorithm
needs to be tuned for more efficient performance on the Create and
Chiara platforms, as presently it's rather slow. The default value of
pursueShapes
is false.
The second reason that camera space might not be suitable for a map request is that most robot cameras have a narrow field of view, typically about 60 degrees. (Compare this to a human's 200 degree field of view; for rodents it's 300 degrees.) When the camera is pointed straight ahead, the robot can see very little to its side. The vertical field of view is similarly limited. If we want the MapBuilder to search a larger area by moving the camera around, its results will have to be expressed in a coordinate system other than the camera frame. Usually a local map is used.
If you're building a local map repetitively without moving the body,
just the camera, you can safely set clearShapes
to false,
because the MapBuilder automatically matches new shapes against
existing local map contents, avoiding the creation of multiple copies
of shapes with the same body-centered (local) coordinates.
The planar world assumption says that shapes such as lines and ellipses are assumed to lie in the ground plane. This assumption allows the MapBuilder to translate from camera-centered coordinates to body-centered (local) coordinates given the current camera pose. However, we do not always want to make this assumption for blobs, and generally we cannot make it for navigation markers. The MapBuilder includes special provisions for dealing with these cases.
Navigation markers do not obey the planar world assumption, because they do not usually lie in the ground plane. They may be affixed to the walls of the environment, or they may be free-standing, like the cylinders with colored bands used in RoboCup and Tapia robotics competitions. Since they don't lie in the ground plane, we need another way to determine the distance of a marker from the robot based on its camera coordinates and the camera pose. If we know the height of the marker above the ground plane, we can calculate its distance with good accuracy provided that the camera height is not too close to the marker height. (If the camera and marker are at the same height, any small error in position measurement will result in a large change in estimated distance, rendering the result unstable and unreliable.)
To have the MapBuilder search a larger area, use the
searchArea option. (This only makes sense if the camera
is moveable, i.e., if the robot's "head" has pan/tilt capability. If
your robot uses a camera that is fixed relative to the body, skip this
section.) The value of searchArea must be a shape in
local or world space.
The simplest search area specification is a point. If you set
mapreq.searchArea
to a
Shape<PointData>
, the MapBuilder will point the
camera at that location before taking an image and processing it.
Usually points are given in local (body-centered) coordinates. While
cameraMap points are specified in pixels, localMap and worldMap points
are specified in millimeters.
In the example below, we want to search for blue ellipses on the left side of the robot. We construct a gaze point in local space that the robot should fixate on before grabbing a camera frame and looking for ellipses. We don't want the MapBuilder to clear the local shape space because that would destroy the gaze point, so we clear the space manually. Note that this must be done in MyRequest's doStart method, not the constructor, because every time we reenter the node we will need to construct a fresh gaze point, having erased the old one with localShS.clear().
$nodeclass FindLeftBlue : VisualRoutinesStateNode {

  $nodeclass MyRequest : MapBuilderNode($,MapBuilderRequest::localMap) : doStart {
    localShS.clear();
    NEW_SHAPE(gazePt, PointData,
              new PointData(localShS, Point(300,1000,0,egocentric)));
    mapreq.searchArea = gazePt;
    mapreq.clearShapes = false;
    mapreq.addObjectColor(ellipseDataType,"blue");
  }

  $nodeclass Report : VisualRoutinesStateNode : doStart {
    cout << "I see " << localShS.allShapes().size() << " objects." << endl;
  }

  $setupmachine{
    startnode: MyRequest =MAP=> Report =T(5000)=> startnode
  }
}
Instead of specifying a single gaze point, it is often more useful to
specify a series of points the robot should look at in order to
efficiently search a region of space. You can do this by setting the
request's searchArea
field to a Shape<PolygonData>.
The vertices of the polygon will serve as a series of fixation points
for the MapBuilder. If you're searching the ground around the front
of the robot, the function Lookout::groundSearchPoints() will return
a vector of points that you can use to form your polygon:
$nodeclass MyRequest : MapBuilderNode($,MapBuilderRequest::localMap) : doStart {
  localShS.clear();
  vector<Point> gazePts = Lookout::groundSearchPoints();
  NEW_SHAPE(gazePoly, PolygonData, new PolygonData(localShS, gazePts));
  mapreq.searchArea = gazePoly;
  mapreq.clearShapes = false;   // keep the gaze poly around
  mapreq.addObjectColor(ellipseDataType,"blue");
}
For more efficient searching, you can construct a vector of gaze points yourself instead of relying on the default list of ground search points.