3D Reconstruction from Multi-View Images
Project Description
This project aims to develop a novel algorithm that reconstructs a 3D model of a scene from multiple RGB images captured from different viewpoints. Building on the existing Gaussian Splatting framework, we will pursue more accurate and detailed geometric representations. To improve reconstruction quality, we will investigate incorporating additional constraints, such as a multi-view photometric consistency loss and a curvature-based regularizer.
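As a rough illustration of the kind of constraint mentioned above, the sketch below shows a simple multi-view photometric consistency loss in PyTorch. The function name, tensor shapes, and the choice of an L1 penalty are illustrative assumptions for this listing, not the project's actual implementation.

```python
# Illustrative sketch (assumed PyTorch implementation, not the project's code):
# a simple multi-view photometric consistency loss between a rendered image
# and a set of reference views.
import torch
import torch.nn.functional as F

def photometric_consistency_loss(rendered, references):
    """L1 photometric loss averaged over reference views.

    rendered:   (3, H, W) image rendered from the current 3D representation.
    references: list of (3, H, W) reference images from other viewpoints,
                assumed to be already warped/aligned to the rendered view.
    """
    losses = [F.l1_loss(rendered, ref) for ref in references]
    return torch.stack(losses).mean()
```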
Supervisor
TAN, Ping
Quota
1
Course type
UROP1000
UROP1100
Applicant's Roles
The applicant is expected to optimize 3D primitive representations to reconstruct geometry from multi-view images. The applicant will work closely with an experienced senior PhD student to design loss functions, implement the proposed methods in code, and evaluate performance on multi-view image datasets.
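For context on the evaluation step, a widely used image-quality metric in multi-view reconstruction is PSNR. The minimal sketch below (in PyTorch) is only an illustration of what such an evaluation might look like; the metric choice and value range are assumptions, not requirements of the project.

```python
# Hedged sketch: PSNR between a rendered image and a held-out reference image.
import torch

def psnr(rendered, target, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = torch.mean((rendered - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```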
Applicant's Learning Objectives
Through this project, the applicant will gain foundational knowledge in 3D computer vision and computer graphics, including camera geometry and 3D transformations—key concepts for understanding 3D vision and robotics. The applicant will also acquire hands-on experience in multi-view 3D reconstruction, from theoretical design to practical implementation.
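As an example of the camera-geometry concepts mentioned above, the sketch below projects a 3D world point into pixel coordinates using a pinhole camera model. The function, pose, and intrinsic values are hypothetical and serve only to illustrate the kind of background the applicant will learn.

```python
# Minimal sketch of pinhole camera projection: transform a world-space point
# into the camera frame, then project it to pixel coordinates.
import numpy as np

def project_point(X_world, R, t, K):
    """Project a 3D world point to pixel coordinates.

    X_world: (3,) point in world coordinates.
    R, t:    world-to-camera rotation (3x3) and translation (3,).
    K:       (3x3) camera intrinsic matrix.
    """
    X_cam = R @ X_world + t          # rigid-body transform into the camera frame
    x = K @ X_cam                    # apply intrinsics
    return x[:2] / x[2]              # perspective divide -> (u, v) pixels

# Hypothetical example: identity pose, focal length 500, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
u, v = project_point(np.array([0.1, -0.2, 2.0]), np.eye(3), np.zeros(3), K)
```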
Complexity of the project
Moderate