ShapeMap 3-D: Efficient shape mapping through dense touch and vision

Download: PDF.

“ShapeMap 3-D: Efficient shape mapping through dense touch and vision” by S. Suresh, Z. Si, J. Mangelson, W. Yuan, and M. Kaess. In Proc. IEEE Intl. Conf. on Robotics and Automation, ICRA, (Philadelphia, PA, USA), May 2022, pp. 7073-7080.

Abstract

Knowledge of 3-D object shape is of great importance to robot manipulation tasks, but may not be readily available in unstructured environments. While vision is often occluded during robot-object interaction, high-resolution tactile sensors can give a dense local perspective of the object. However, tactile sensors have a limited sensing area, and the shape representation must faithfully approximate non-contact regions. A further key challenge is efficiently incorporating these dense tactile measurements into a 3-D mapping framework. In this work, we propose an incremental shape mapping method that uses a GelSight tactile sensor and a depth camera. Local shape is recovered from tactile images via a learned model trained in simulation. Through efficient inference on a spatial factor graph informed by a Gaussian process, we build an implicit surface representation of the object. We demonstrate visuo-tactile mapping in both simulated and real-world experiments, incrementally building 3-D reconstructions of household objects.
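To make the core representation concrete, below is a minimal sketch (in Python, not the authors' code) of a Gaussian-process implicit surface: sparse surface measurements, standing in for fused visuo-tactile contact points, are regressed into a signed-distance field whose zero level set approximates the object surface and whose posterior variance flags unobserved regions. The toy 2-D data, RBF kernel, and hyperparameters are illustrative assumptions, not values from the paper.

# Minimal GP implicit-surface sketch (illustrative; not the paper's implementation).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy "measurements": points on a unit circle (a stand-in for fused
# visuo-tactile contact points), each with an outward surface normal.
theta = rng.uniform(0.0, 2.0 * np.pi, 40)
surface_pts = np.c_[np.cos(theta), np.sin(theta)]
normals = surface_pts.copy()              # for a unit circle, normal = position

# Standard GPIS trick: anchor the signed-distance field with off-surface
# points displaced along the normals.
eps = 0.1
X = np.vstack([
    surface_pts,                          # SDF = 0 on the surface
    surface_pts + eps * normals,          # SDF = +eps just outside
    surface_pts - eps * normals,          # SDF = -eps just inside
])
y = np.concatenate([
    np.zeros(len(surface_pts)),
    np.full(len(surface_pts), +eps),
    np.full(len(surface_pts), -eps),
])

# GP regression over the SDF; the posterior std quantifies uncertainty in
# regions never touched or seen.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)
gp.fit(X, y)

# Query a grid; the zero level set of `mean` is the reconstructed surface.
g = np.linspace(-1.5, 1.5, 60)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
mean, std = gp.predict(grid, return_std=True)
near_surface = grid[np.abs(mean) < 0.02]
print(f"{len(near_surface)} grid cells near the zero level set; "
      f"mean predictive std = {std.mean():.3f}")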


BibTeX entry:

@inproceedings{Suresh22icra,
   author = {S. Suresh and Z. Si and J. Mangelson and W. Yuan and M. Kaess},
   title = {{ShapeMap 3-D}: Efficient shape mapping through dense touch and vision},
   booktitle = {Proc. IEEE Intl. Conf. on Robotics and Automation, ICRA},
   pages = {7073--7080},
   address = {Philadelphia, PA, USA},
   month = may,
   year = {2022}
}
Last updated: November 10, 2024