Vision and Mobile Robotics Laboratory | Software
IntegrateMeshes is a program that integrates multiple aligned meshes into a single seamless surface mesh. It reads in a set of meshes, transformations that align all of the meshes to a base mesh, and projection matrices that determine the viewing direction for the meshes; it outputs the integrated mesh. IntegrateMeshes can also be used to blend appearance information collected from multiple views. The theory behind this integration algorithm is given in Digital Equipment Corporation Cambridge Research Laboratory Technical Report CRL-TR-96-4, Registration and Integration of Textured 3-D Data. This technical report also appears in shorter form as a conference paper at 3DIM '97.
IntegrateMeshes uses a naming convention to simplify the command line options. All files read in for integration must start with the same model prefix, and this same model prefix is used to name output files. Suppose that we have three views of a robot to be integrated; a reasonable way to name the surface meshes is robot1.wrl, robot2.wrl and robot3.wrl. IntegrateMeshes also requires that all views be aligned with a single base view. Suppose that the alignments of robot2.wrl and robot3.wrl with robot1.wrl are known. The transformation matrices must then be named robot.1.2.trans, which aligns robot2.wrl with robot1.wrl, and robot.1.3.trans, which aligns robot3.wrl with robot1.wrl. Furthermore, the projection matrices which define the sensor origin of the views (in view coordinates) must be named robot1.pa, robot2.pa and robot3.pa. If appearance blending is not used, then IntegrateMeshes outputs two files: the integrated surface robot.wrl and the integrated points and sensor origins in the base coordinate system robot.points.wrl.
If appearance is being blended, then the projection matrices must map 3-D coordinates (x,y,z,1) of each view into image coordinates (u,v,w) before alignment of the view with the base view. That is, the projection matrix should describe the projection in view coordinates, not world coordinates. Each view must also have a 24-bit RGB TIFF image describing the appearance of the view. For our example above, images named robot1.tiff, robot2.tiff and robot3.tiff must exist. The outputs after appearance blending are the integrated points and sensor origins in the base coordinate system robot.points.wrl and the texture mapped model robot.texture.wrl with texture images robot.*.tiff, which can be viewed with vrweb.
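The mapping from (x,y,z,1) to (u,v,w) described above can be sketched as follows. This assumes the projection matrix is a 3x4 array of doubles; the helper name project_point and the matrix layout are illustrative, not the actual .pa file format.

```c
#include <assert.h>

/* Map a homogeneous 3-D point (x, y, z, 1) in view coordinates to image
 * coordinates (u, v, w).  The pixel position is then (u/w, v/w).
 * The 3x4 matrix layout is an assumption for illustration. */
void project_point(const double P[3][4], double x, double y, double z,
                   double *u, double *v, double *w)
{
    double p[4] = { x, y, z, 1.0 };
    double r[3];
    for (int i = 0; i < 3; i++) {
        r[i] = 0.0;
        for (int j = 0; j < 4; j++)
            r[i] += P[i][j] * p[j];
    }
    *u = r[0];
    *v = r[1];
    *w = r[2];
}
```

Dividing u and v by w performs the perspective division that yields the final image coordinates.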
main.c this file contains the main controlling function for the integration of surface meshes
marchingcubes.c this file contains all of the functions for creating marching cubes cases and then deciding when to apply each case to a cube of implicit function values.
probability.c this file contains functions to compute probability contribution of points inserted into the voxel space.
texture.c this file contains all of the functions for reading panoramic images and creating a cell of texture for each face in the surface mesh created from integration.
voxel.c this file contains all of the functions for updating and maintaining a binary tree indexed space of voxels.
vrml.c this file contains functions for outputting VRML files describing the integration algorithm.
cube.h this file defines all classes used for marching cubes.
integrateMesh.h this file defines global variables for IntegrateMeshes function.
probability.h this file defines prototypes of functions in probability.c.
texture.h this file defines prototypes for functions in texture.c.
viewMesh.h this file defines the viewpoint, viewface and view_mesh classes. These classes are less memory-expensive versions of the meshpoint, meshface and surface_mesh classes, optimized for describing texture information.
voxel.h this file defines voxel and voxel_space classes.
vrml.h this file contains vrml.c function prototypes.
Typing IntegrateMeshes - prints the following options (format, description, default):
Usage: IntegrateMeshes (See IntegrateMeshes.html for complete usage)
%S set model prefix name [required]
%d set base view index [required]
-views ... view indices [1 2]
-size %F set voxel size of object [0.25]
-sigmas %F %F %F %F set point probability stdevs (as bs ae be) [.375 .375 .375 .375]
-lambda %F error(=1) vs surface balancer(=0) [0.5]
-pt %F minimum probability threshold [1]
-bb %F %F %F %F %F %F bounding box min max [-10 -10 -10 30 30 30 ]
-texture turn on texture integration [off]
-tcw %d set texture cell width (2^n) [8]
-max_wt turn on max weight texture blending [off]
-slices %S output slices with this prefix [off]
-dThresh %F distance threshold
-transDir %S transform directory
-wrlDir %S input wrl directory
%S set model prefix name [required]
IntegrateMeshes reads in surface meshes, transformation matrices and projection matrices. All of the files read in must start with the model prefix given by this option. For example, if the model prefix is robot, then IntegrateMeshes will read a surface mesh file like robot1.wrl, a transformation matrix like robot.1.2.trans and a projection matrix like robot1.pa. See the Object Modeling Tutorial for more details.
%d set base view index [required]
IntegrateMeshes expects all of the surface meshes to be aligned with the base view. In other words, transformation matrices that map each view to be integrated into the coordinate system of the base view must be known. If the base view index is 1 and the model prefix is robot, then all surface meshes to be integrated must be aligned with robot1.wrl, and the transformation matrices robot.1.*.trans must be present in the current directory.
-views ... view indices [1 2]
This option specifies the indices of the views to be integrated by IntegrateMeshes. For example, if the views option is followed by 1 2 3 8, the model prefix is robot and the base view is 1, then the files robot1.wrl, robot2.wrl, robot3.wrl, robot8.wrl, robot1.pa, robot2.pa, robot3.pa, robot8.pa, robot.1.2.trans, robot.1.3.trans and robot.1.8.trans must be present in the current directory. Note that the transformation matrix robot.1.1.trans is not necessary, because it is automatically the identity transformation.
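The naming convention above can be captured in a small helper. The function view_file_names is hypothetical, written here only to make the convention concrete; IntegrateMeshes builds its file names internally.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the file names IntegrateMeshes expects for one view, given the
 * model prefix, the base view index and the view index.  The base view
 * itself needs no transform file (it is implicitly the identity). */
void view_file_names(const char *prefix, int base, int view,
                     char *wrl, char *pa, char *trans)
{
    sprintf(wrl, "%s%d.wrl", prefix, view);
    sprintf(pa, "%s%d.pa", prefix, view);
    if (view == base)
        trans[0] = '\0';   /* no transform needed for the base view */
    else
        sprintf(trans, "%s.%d.%d.trans", prefix, base, view);
}
```

For prefix robot, base view 1 and view 3, this produces robot3.wrl, robot3.pa and robot.1.3.trans.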
-size %F set voxel size of object [0.25]
The size of the voxels in the voxel space. Decreasing this number creates finer integrated surface meshes. Since the memory requirements of the mesh increase with the number of voxels, decreasing this number will increase the memory needed. Size is set in absolute data units.
-sigmas %F %F %F %F set point probability stdevs (as bs ae be) [.375 .375 .375 .375]
Sets the standard deviations of the cylindrical gaussian probability functions that are used to insert points into the voxel space. as and bs are the surface spread sigmas: as is the sigma along the tangent plane and bs is the spread along the surface normal. Usually as should be bigger than bs. ae and be are the error standard deviations that are aligned with the viewing direction: ae is the spread perpendicular to the viewing direction and be is the spread along the viewing direction. Usually be is greater than ae. The maximum sigma should not exceed 3 times the size of the voxels to prevent excessive running times.
-lambda %F error(=1) vs surface balancer(=0) [0.5]
Sets the relative weight of the two cylindrical Gaussian probability functions that determine how a point contributes to the surface probability field. lambda can vary between 0 and 1: 0 means use only the surface probability Gaussian, 1 means use only the error probability Gaussian, and values in between give a weighted blend of the two Gaussians.
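The blend controlled by -lambda amounts to a convex combination of the two probabilities, assuming the weighting is the straightforward one implied by the description (the exact expression in probability.c is not reproduced here):

```c
#include <assert.h>

/* Convex combination of the surface and error probabilities controlled
 * by -lambda: 0 selects the surface Gaussian, 1 the error Gaussian. */
double blend_probability(double lambda, double p_surface, double p_error)
{
    return lambda * p_error + (1.0 - lambda) * p_surface;
}
```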
-pt %F minimum probability threshold [1]
The threshold on probability that voxels must surpass to be considered for surface generation. Increase this number for sparser but more likely meshes. Values between 1 and 4 work well.
-bb %F %F %F %F %F %F bounding box min max [-1e8 -1e8 -1e8 1e8 1e8 1e8]
The bounding box of the voxel space. Points must be within this bounding box to contribute to the voxel space. To minimize memory requirements, the bounding box should be close to the minimum volume occupied by the data.
-texture turn on texture integration [off]
Perform texture integration in addition to shape integration. Since texture integration is time consuming, only engage this option when you are satisfied with the surface shape.
-tcw %d set texture cell width [8]
Set the width in pixels of the texture cell used to apply texture to each face in the integrated surface mesh. This number must be a power of 2. Unless the texture created is fuzzy, keep this number set to 8.
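The power-of-2 requirement on the cell width can be checked with a standard bit trick; the helper name is illustrative, not part of IntegrateMeshes.

```c
#include <assert.h>

/* A texture cell width is valid when it is a positive power of two.
 * A power of two has exactly one bit set, so w & (w - 1) clears it. */
int valid_texture_cell_width(int w)
{
    return w > 0 && (w & (w - 1)) == 0;
}
```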
-max_wt turn on max weight texture blending [off]
This option changes the texture blending from a linear blend of color at each pixel in a texture cell to one that takes the color of the panorama with maximum weight.
-slices %S output slices with this prefix [off]
This option will force output of a sequence of implicit surface function and probability of surface slice images (TIFF) along the z-axis through the voxel space. The argument to slices is the prefix attached to the slices. Used for analysis and not much else.
-dThresh %F distance threshold
This option controls the maximum distance from the consensus surface a point on any of the component meshes may be, and still contribute to the color of the surface. When determining the color of a vertex on the consensus surface, we see which vertices on the individual views are closer than this threshold to it, and take a linear combination of the colors.
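The selection-and-blend step described above can be sketched for a single color channel as follows. Equal weights are used here for simplicity; the linear combination actually computed by IntegrateMeshes may weight views differently.

```c
#include <assert.h>

/* Blend the colors of view vertices that lie within dThresh of a
 * consensus-surface vertex.  dist[i] is the distance of view vertex i,
 * color[i] its color in one channel.  Returns 1 and writes the blended
 * color to *out, or returns 0 if no view vertex is close enough.
 * Equal weighting is an assumption for illustration. */
int blend_vertex_color(const double *dist, const double *color, int n,
                       double dThresh, double *out)
{
    double sum = 0.0;
    int count = 0;
    for (int i = 0; i < n; i++) {
        if (dist[i] <= dThresh) {
            sum += color[i];
            count++;
        }
    }
    if (count == 0)
        return 0;          /* surface vertex receives no color */
    *out = sum / count;
    return 1;
}
```

Raising -dThresh lets more distant view vertices influence the color; lowering it keeps the color local but risks leaving vertices uncolored.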
-transDir %S transform directory
Sets the directory where the transform files are located. Without this option, the transforms for each view are assumed to be in the current directory.
-wrlDir %S input wrl directory
Sets the directory where the wrl files are located. Without this option, the wrl files are assumed to be in the current directory.
Assume that the current directory contains robot1.wrl, robot2.wrl, robot3.wrl, robot8.wrl, robot1.pa, robot2.pa, robot3.pa, robot8.pa, robot1.tiff, robot2.tiff, robot3.tiff, robot8.tiff, robot.1.2.trans, robot.1.3.trans and robot.1.8.trans. An example of usage that will integrate the four aligned robot views described above using the default settings is:
IntegrateMeshes robot 1 -views 1 2 3 8
This command will place the integrated model in robot.wrl and the aligned point sets in robot.points.wrl. The same example as above that will produce a coarser model by changing the size and sigmas values is:
IntegrateMeshes robot 1 -views 1 2 3 8 -size .5 -sigmas .75 .375 .375 .75
The same example as above that uses only the surface probability gaussian for its sensor model and has a modified bounding box is:
IntegrateMeshes robot 1 -views 1 2 3 8 -size .5 -sigmas .75 .375 .375 .75 -lambda 0 -bb -5 -5 -10 25 15 25
The same example as above with the addition of linear appearance blending is:
IntegrateMeshes robot 1 -views 1 2 3 8 -size .5 -sigmas .75 .375 .375 .75 -lambda 0 -bb -5 -5 -10 25 15 25 -texture
Finally, the same example as above but with max weighted texture blending and the output of slice images with finer texture maps is:
IntegrateMeshes robot 1 -views 1 2 3 8 -size .5 -sigmas .75 .375 .375 .75 -lambda 0 -bb -5 -5 -10 25 15 25 -texture -tcw 16 -max_wt -slices robot.slice