We propose a novel online, point-based 3D reconstruction method from posed monocular RGB videos. Our model maintains a global point cloud representation of the scene, continuously updating the features and 3D locations of points as new images are observed. It expands the point cloud with newly detected points while carefully removing redundancies. The point cloud updates and depth predictions for new points are achieved through a novel ray-based 2D-3D feature matching technique, which is robust against errors in previous point position predictions. In contrast to offline methods, our approach processes infinite-length sequences and provides real-time updates. Additionally, the point cloud imposes no pre-defined resolution or scene size constraints, and its unified global representation ensures view consistency across perspectives. Experiments on the ScanNet dataset show that our method achieves state-of-the-art quality among online MVS approaches.
Figure: upper left – input image; upper right – rendered depth; lower left – first-person-view reconstruction; lower right – bird's-eye-view reconstruction.
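To make the per-frame update concrete, below is a minimal, hypothetical sketch of one online step: each camera ray is matched against the existing point cloud, matched points are refined, and unmatched rays spawn new back-projected points. The function names and the purely geometric nearest-point-to-ray matching rule are illustrative stand-ins for the learned ray-based 2D-3D feature matching; this is not the actual PointRecon implementation.

```python
# Hypothetical sketch of an online point-cloud update; not the PointRecon code.
import numpy as np

def point_to_ray_distance(points, origin, direction):
    """Perpendicular distance from each 3D point to a ray (origin, unit direction)."""
    v = points - origin                        # (N, 3) vectors from ray origin to points
    t = np.clip(v @ direction, 0.0, None)      # projection length along the ray (>= 0)
    closest = origin + t[:, None] * direction  # closest ray point for each cloud point
    return np.linalg.norm(points - closest, axis=1)

def process_frame(cloud_xyz, rays_o, rays_d, pred_depth, match_radius=0.05):
    """One online step: refine cloud points hit by a ray, add points for unmatched rays."""
    cloud_xyz = cloud_xyz.copy()
    new_points = []
    for o, d, depth in zip(rays_o, rays_d, pred_depth):
        if len(cloud_xyz) > 0:
            dist = point_to_ray_distance(cloud_xyz, o, d)
            i = int(np.argmin(dist))
            if dist[i] < match_radius:
                # Matched: nudge the existing point toward this ray's depth estimate.
                cloud_xyz[i] = 0.5 * (cloud_xyz[i] + (o + depth * d))
                continue
        # Unmatched: expand the cloud with a newly back-projected point.
        new_points.append(o + depth * d)
    if new_points:
        cloud_xyz = np.vstack([cloud_xyz, np.asarray(new_points, dtype=cloud_xyz.dtype)])
    return cloud_xyz

# Toy usage: start from an empty cloud and integrate one frame's rays.
cloud = np.empty((0, 3), dtype=np.float32)
rays_o = np.zeros((4, 3), dtype=np.float32)           # camera at the origin
rays_d = np.eye(3, dtype=np.float32)[[2, 2, 2, 2]]    # all rays roughly along +z
rays_d += np.random.default_rng(0).normal(0, 0.02, rays_d.shape).astype(np.float32)
rays_d /= np.linalg.norm(rays_d, axis=1, keepdims=True)
cloud = process_frame(cloud, rays_o, rays_d, pred_depth=np.full(4, 2.0, np.float32))
print(cloud.shape)  # new points added for the unmatched rays
```

In the paper the matching is learned from 2D image features and 3D point features rather than the geometric threshold used above; the sketch only conveys the overall maintain-match-expand loop.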
@article{ziwen2024pointrecon,
  title={PointRecon: Online Point-based 3D Reconstruction via Ray-based 2D-3D Matching},
  author={Ziwen, Chen and Xu, Zexiang and Fuxin, Li},
  journal={arXiv preprint arXiv:2410.23245},
  year={2024}
}