AONeuS: A Neural Rendering Framework for Acoustic-Optical Sensor Fusion

“AONeuS: A Neural Rendering Framework for Acoustic-Optical Sensor Fusion” by M. Qadri, K. Zhang, A. Hinduja, M. Kaess, A. Pediredla, and C.A. Metzler. In Proc. ACM SIGGRAPH, (Denver, CO, USA), July 2024.

Abstract

Underwater perception and 3D surface reconstruction are challenging problems with broad applications in construction, security, marine archaeology, and environmental monitoring. Treacherous operating conditions, fragile surroundings, and limited navigation control often dictate that submersibles restrict their range of motion and, thus, the baseline over which they can capture measurements. In the context of 3D scene reconstruction, it is well known that smaller baselines make reconstruction more challenging. Our work develops a physics-based multimodal acoustic-optical neural surface reconstruction framework (AONeuS) capable of effectively integrating high-resolution RGB measurements with low-resolution depth-resolved imaging sonar measurements. By fusing these complementary modalities, our framework can reconstruct accurate high-resolution 3D surfaces from measurements captured over heavily restricted baselines. Through extensive simulations and in-lab experiments, we demonstrate that AONeuS dramatically outperforms recent RGB-only and sonar-only inverse-differentiable-rendering-based surface reconstruction methods. A website visualizing the results of our paper is available at https://aoneus.github.io/.
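
As a rough illustration of what fusing these complementary modalities looks like in a neural rendering framework, the PyTorch sketch below (not the authors' code) trains a single neural SDF against both an RGB rendering loss and a sonar rendering loss, plus a standard eikonal regularizer. The NeuralSDF architecture, the toy_render stand-in (real NeuS-style volume rendering and sonar elevation-arc integration are substantially more involved), the batch layout, and the loss weights are all illustrative assumptions:

import torch
import torch.nn as nn

class NeuralSDF(nn.Module):
    """Shared implicit surface: maps 3D points to a signed distance."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        return self.net(pts)

def eikonal_loss(sdf: NeuralSDF, pts: torch.Tensor) -> torch.Tensor:
    """Encourage |grad(SDF)| = 1, the usual implicit-surface regularizer."""
    pts = pts.requires_grad_(True)
    d = sdf(pts)
    (grad,) = torch.autograd.grad(d.sum(), pts, create_graph=True)
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()

def toy_render(sdf: NeuralSDF, pts: torch.Tensor) -> torch.Tensor:
    """Toy stand-in for a differentiable renderer. A real implementation
    would integrate along camera rays (RGB) or over the unresolved
    elevation arc of each range/azimuth bin (imaging sonar)."""
    return torch.sigmoid(-sdf(pts))

def training_step(sdf, render_rgb, render_sonar, batch,
                  w_rgb=1.0, w_sonar=1.0, w_eik=0.1):
    """Both modalities supervise the same SDF; the weights balance them."""
    loss_rgb = (render_rgb(sdf, batch["rays"]) - batch["pixels"]).abs().mean()
    loss_sonar = (render_sonar(sdf, batch["bins"]) - batch["sonar"]).abs().mean()
    loss_eik = eikonal_loss(sdf, batch["sample_pts"])
    return w_rgb * loss_rgb + w_sonar * loss_sonar + w_eik * loss_eik

# Smoke test with random data, just to show the losses compose and backprop.
sdf = NeuralSDF()
batch = {
    "rays": torch.rand(128, 3),  "pixels": torch.rand(128, 1),
    "bins": torch.rand(128, 3),  "sonar":  torch.rand(128, 1),
    "sample_pts": torch.rand(256, 3),
}
training_step(sdf, toy_render, toy_render, batch).backward()

The point of the sketch is only that both measurement streams differentiate through the same implicit surface. In the actual framework, the sonar term contributes range information that survives a restricted baseline, while the RGB term contributes high-resolution angular detail.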

BibTeX entry:

@inproceedings{Qadri24siggraph,
   author = {M. Qadri and K. Zhang and A. Hinduja and M. Kaess and A. Pediredla and C.A. Metzler},
   title = {{AONeuS}: A Neural Rendering Framework for Acoustic-Optical Sensor Fusion},
   booktitle = {Proc. ACM SIGGRAPH},
   address = {Denver, CO, USA},
   month = jul,
   year = {2024}
}
Last updated: November 10, 2024