PointRecon: Online Point-based 3D Reconstruction via Ray-based 2D-3D Matching


¹Oregon State University
²Adobe Research

Abstract

We propose a novel online, point-based 3D reconstruction method for posed monocular RGB videos. Our model maintains a global point cloud representation of the scene, continuously updating the features and 3D locations of points as new images are observed. It expands the point cloud with newly detected points while carefully removing redundancies. Both the point cloud updates and the depth predictions for new points are achieved through a novel ray-based 2D-3D feature matching technique, which is robust to errors in previously predicted point positions. In contrast to offline methods, our approach can process arbitrarily long sequences and provides real-time updates. Additionally, the point cloud imposes no pre-defined resolution or scene-size constraints, and its unified global representation ensures view consistency across perspectives. Experiments on the ScanNet dataset show that our method achieves state-of-the-art reconstruction quality among online MVS approaches.
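To make the ray-based 2D-3D matching idea concrete, below is a minimal NumPy sketch, not the paper's implementation: it scores the 3D points lying near a pixel's viewing ray against that pixel's 2D feature and predicts depth as a similarity-weighted average of the candidate points' depths along the ray. The function name ray_match_depth, the perpendicular-distance radius, and the dot-product scoring are illustrative assumptions standing in for the learned matching module.

import numpy as np

def ray_match_depth(cam_center, ray_dir, pts_xyz, pts_feat, pix_feat, radius=0.1):
    """Match one pixel's ray against nearby 3D points and predict its depth.

    cam_center: (3,) camera origin in world coordinates
    ray_dir:    (3,) unit ray direction through the pixel
    pts_xyz:    (N, 3) point-cloud positions
    pts_feat:   (N, C) point features
    pix_feat:   (C,)   2D feature of the pixel
    radius:     keep only points within this perpendicular distance of the ray
    """
    rel = pts_xyz - cam_center                       # (N, 3)
    t = rel @ ray_dir                                # signed depth of each point along the ray
    perp = np.linalg.norm(rel - t[:, None] * ray_dir, axis=1)
    keep = (t > 0) & (perp < radius)                 # candidates in front of the camera, near the ray
    if not np.any(keep):
        return None
    scores = pts_feat[keep] @ pix_feat               # feature similarity (dot product)
    w = np.exp(scores - scores.max())
    w /= w.sum()                                     # softmax over candidate points
    return float(w @ t[keep])                        # depth = similarity-weighted point depth

# Toy usage: points scattered around a ray pointing along +z.
rng = np.random.default_rng(0)
pts = rng.normal(scale=0.05, size=(50, 3)) + np.array([0.0, 0.0, 2.0])
feats = rng.normal(size=(50, 16))
depth = ray_match_depth(np.zeros(3), np.array([0.0, 0.0, 1.0]), pts, feats, feats[0])
print(f"predicted depth: {depth:.2f} m")

Because the prediction is anchored to candidates gathered along the ray rather than to any single previously estimated point, moderate errors in earlier point positions shift the weights rather than break the match.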

Workflow of PointRecon. We begin with monocular depth prediction on the first image, lifting its 2D points into 3D space to form the initial point cloud. For each subsequent image, we match the 2D image features against the 3D point cloud features, both to update the features and 3D positions of the existing points and to predict a depth map for the new image. Finally, the newly lifted points are merged into the existing point cloud.
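The two geometric steps in this workflow, lifting a predicted depth map into a world-space point cloud and merging new points into the existing cloud, can be sketched as follows. This is a minimal NumPy sketch under a pinhole camera model; the distance-threshold deduplication is a simple stand-in assumption for the paper's redundancy removal, and the function names unproject_depth and merge_points are hypothetical.

import numpy as np

def unproject_depth(depth, K, cam_to_world):
    """Lift a depth map to a world-space point cloud (pinhole camera model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project pixels to camera coordinates.
    x = (u - K[0, 2]) / K[0, 0] * depth
    y = (v - K[1, 2]) / K[1, 1] * depth
    pts_cam = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    # Transform to world coordinates with the camera-to-world pose.
    return pts_cam @ cam_to_world[:3, :3].T + cam_to_world[:3, 3]

def merge_points(existing, new, merge_radius=0.02):
    """Append new points, dropping those within merge_radius of an existing point."""
    if existing.shape[0] == 0:
        return new
    d2 = ((new[:, None, :] - existing[None, :, :]) ** 2).sum(-1)  # brute-force pairwise distances
    keep = d2.min(axis=1) > merge_radius ** 2
    return np.concatenate([existing, new[keep]], axis=0)

# Toy usage with a flat 4x4 depth map and an identity camera pose.
K = np.array([[500.0, 0.0, 2.0], [0.0, 500.0, 2.0], [0.0, 0.0, 1.0]])
pose = np.eye(4)
cloud = np.empty((0, 3))
cloud = merge_points(cloud, unproject_depth(np.full((4, 4), 2.0), K, pose))
print(cloud.shape)  # (16, 3)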

More ScanNet Reconstruction Videos

Upper left: input image; upper right: rendered depth.
Lower left: first-person-view reconstruction; lower right: bird's-eye-view reconstruction.



BibTeX

@article{ziwen2024pointrecon,
  title={PointRecon: Online Point-based 3D Reconstruction via Ray-based 2D-3D Matching},
  author={Ziwen, Chen and Xu, Zexiang and Fuxin, Li},
  journal={arXiv preprint arXiv:2410.23245},
  year={2024}
}