Vision and Mobile Robotics Laboratory | Software

Model Building Tutorial

Model building is the process in which multiple range images are registered and then integrated into a single seamless surface model. Multiple programs from Mesh Toolbox are applied iteratively during model building.

Step 1: Take Range Images

Before model building, range images of the object to be modeled should be taken. The range images should cover all surfaces of the object, and each range image should have at least one other range image with which it has at least 50% overlap. For single objects, good results have been obtained with 10 to 20 views, each taken about 45 degrees apart.

If a textured model is desired, then a camera image, along with the projection matrix that maps sensed 3-D points to image coordinates (before any alignment), must be available for each range image. Since IntegrateMeshes expects the sensor origin to come from a projection matrix during integration, each range image must have a projection matrix associated with it even if it has no camera image. See Data Formats for hints on how to construct projection matrices that only encode the sensor origin.
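As a concrete illustration of how a 3x4 projection matrix maps a sensed 3-D point to image coordinates, here is a minimal sh/awk sketch. The matrix entries (a simple pinhole with focal length 100 and principal point (64, 64)) and the test point are made up for illustration and are not tied to any particular sensor or to the Mesh Toolbox data formats.

```shell
#!/bin/sh
# Apply a 3x4 projection matrix P to a homogeneous 3-D point (x,y,z,1),
# yielding pixel coordinates (u,v). All numbers are illustrative only.
u_v=$(awk 'BEGIN {
  # P, row-major: pinhole with focal length 100, principal point (64,64)
  split("100 0 64 0  0 100 64 0  0 0 1 0", P, " ")
  x = 1; y = 2; z = 10; w = 1
  s = P[9]*x + P[10]*y + P[11]*z + P[12]*w   # third row: projective scale
  u = (P[1]*x + P[2]*y + P[3]*z + P[4]*w) / s
  v = (P[5]*x + P[6]*y + P[7]*z + P[8]*w) / s
  printf "%g %g", u, v
}')
echo "$u_v"   # -> 74 84
```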

Step 2: Convert Range Images to Surface Meshes

The next step is to convert the range images into surface meshes. For dense range images, this is done by connecting range pixels across rows, across columns, and diagonally across rows and columns. Some sensors or sensing algorithms do not produce dense range images (e.g., feature-based stereo); in this case, the connectivity of the 3-D points should come from a triangulation of the row and column coordinates of the range pixels. LEDA has algorithms for computing triangulations of 2-D points. To maintain sensor independence, Mesh Toolbox does not include any functionality for creating surface meshes from range images.

Step 3: Preprocess Surface Meshes

Once all of the range images have been taken and turned into surface meshes, each surface mesh should be preprocessed: long edges (caused by surface discontinuities) are removed, and the mesh is smoothed and resampled for registration. For model building, mesh preprocessing should create a coarse mesh and a fine mesh for each range view.

Naming conventions are required by some programs in Mesh Toolbox; for the rest of this tutorial we will use the following convention. Suppose that three views of a robot have been taken and are to be integrated into a single model. The views are then named robot1, robot2, and robot3, and their coarse counterparts robotCoarse1, robotCoarse2, and robotCoarse3.

Using this convention, a c-shell script can be used to simplify preprocessing of the meshes.

This script assumes that the resolution of the input data is about 0.2 and that the resolution of the coarse mesh should be around 0.4; modify the resolutions in the script if necessary. The output of PreprocessMeshScript is two meshes per view; for view 1 they are named robot1.wrl and robotCoarse1.wrl.
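The preprocessing loop can be sketched as below (in plain sh rather than csh). The program name PreprocessMesh, its -res flag, and the .pts input extension are assumptions, not the documented Mesh Toolbox interface; substitute the real invocations from your installation.

```shell
#!/bin/sh
# Sketch of a per-view preprocessing loop. PreprocessMesh, -res, and the
# .pts extension are assumptions -- not the documented interface.
FINE_RES=0.2     # approximate resolution of the input data
COARSE_RES=0.4   # target resolution of the coarse mesh
CMDS=$(for i in 1 2 3; do
  echo "PreprocessMesh -res $FINE_RES robot${i}.pts robot${i}.wrl"
  echo "PreprocessMesh -res $COARSE_RES robot${i}.pts robotCoarse${i}.wrl"
done)
# Commands are echoed rather than executed; replace the echoes with the
# real binaries once Mesh Toolbox is on your PATH.
echo "$CMDS"
```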

Step 4: Transform Meshes to Single Coordinate System

Before integration, all of the meshes need to be put in a single coordinate system. This is accomplished by first aligning the coarse meshes using SpinRecognize and then refining the transformations using ICP. Since not all views overlap, it may be necessary to concatenate transformations (ConcatenateTrans) to put all views in a single coordinate system.

Suppose we would like to put all of the views in the coordinate system of robot1, and suppose that robot1 overlaps with robot2 and robot2 overlaps with robot3, but robot1 does not overlap with robot3. First, align robot1 and robot2 using the coarse meshes.
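A sketch of the coarse alignment step follows; the output transform name follows this tutorial's robot.N.M.trans convention, but the SpinRecognize argument order is an assumption rather than the documented command line.

```shell
#!/bin/sh
# Sketch: coarse alignment of view 1 and view 2 via spin-image matching.
# Argument order is an assumption, not the documented interface.
ALIGN="SpinRecognize robotCoarse1.wrl robotCoarse2.wrl robot.1.2.trans"
echo "$ALIGN"   # replace the echo with the real invocation
```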

If the alignment is correct, then refine the transformation using ICP.

If the alignment is not correct, check that the normals for both views point in the same direction, and adjust the SpinRecognize parameters from their defaults if necessary. Next, repeat the process for the remaining views.
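The refinement and the second pairwise registration can be sketched as follows. The program names ICP and SpinRecognize come from the tutorial; the argument order is an assumption.

```shell
#!/bin/sh
# Sketch: refine the robot1/robot2 alignment, then register robot2 and
# robot3. Argument order is an assumption, not the documented interface.
STEPS=$(
  echo "ICP robot1.wrl robot2.wrl robot.1.2.trans"
  echo "SpinRecognize robotCoarse2.wrl robotCoarse3.wrl robot.2.3.trans"
  echo "ICP robot2.wrl robot3.wrl robot.2.3.trans"
)
echo "$STEPS"   # echoed for illustration; run the real binaries instead
```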

The second registration puts robot3 in the coordinate system of robot2. To put robot3 in the coordinate system of robot1, concatenate the transformations robot.1.2.trans and robot.2.3.trans.

This results in the transformation robot.1.3.trans.
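The concatenation step can be sketched as below; the ConcatenateTrans argument order is an assumption.

```shell
#!/bin/sh
# Sketch: combine the two pairwise transforms into robot.1.3.trans.
# Argument order is an assumption, not the documented interface.
CONCAT="ConcatenateTrans robot.1.2.trans robot.2.3.trans robot.1.3.trans"
echo "$CONCAT"   # replace the echo with the real invocation
```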

Step 5: Integrate Meshes

Once the transformations that map each surface mesh into a single coordinate system are known, the meshes can be integrated using IntegrateMeshes. In the example, the meshes are located with respect to robot1, so integration without appearance blending can be performed with a suitable IntegrateMeshes command.
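A hypothetical invocation is sketched below. The -size and -sigma flags mirror the parameters the tutorial discusses, but the exact command line is an assumption; the per-view transforms (robot.1.2.trans, robot.1.3.trans) must also be supplied in whatever form IntegrateMeshes expects.

```shell
#!/bin/sh
# Sketch: integrate the three aligned meshes into one model without
# appearance blending. Flags and argument order are assumptions.
INTEGRATE="IntegrateMeshes -size 0.4 -sigma 0.2 robot1.wrl robot2.wrl robot3.wrl robotModel.wrl"
echo "$INTEGRATE"   # replace the echo with the real invocation
```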

If the integrated mesh is too coarse or too fine, increase or decrease the size parameter. If the integrated model is too noisy, increase the sigma values.

To create an integrated model with appearance, use IntegrateMeshes with appearance blending enabled.
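A hypothetical appearance-blending variant is sketched below; the -appearance flag and the output name are assumptions. Camera images and projection matrices for each view must be available as described in Step 1.

```shell
#!/bin/sh
# Sketch: integration with appearance blending. The -appearance flag and
# the output name are assumptions, not the documented interface.
INTEGRATE_TEX="IntegrateMeshes -appearance -size 0.4 robot1.wrl robot2.wrl robot3.wrl robotModelTex.wrl"
echo "$INTEGRATE_TEX"   # replace the echo with the real invocation
```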

For IntegrateMeshes to work properly, the transformations that align each mesh with the base mesh must be known along with the projection matrices that encode sensor origin for each view.

The example presented is simple because it deals with only three views. To prevent accumulation of errors in transformations due to concatenation, consult Chapter 4 of Spin-Images: A Representation for 3-D Surface Matching, which describes how registration and integration of meshes can be applied iteratively to limit the accumulation of error during surface registration. Essentially, the idea is to register a group of adjacent views and then integrate them; repeat this registration and integration for each remaining group of views; then register the resulting integrated surface meshes with each other and integrate them. This process is repeated until a complete model of the object is obtained.



The VMR Lab is part of the Vision and Autonomous Systems Center within the Robotics Institute in the School of Computer Science, Carnegie Mellon University.