Sunday, March 6, 2011

how to acquire a 3D point cloud from multiple camera views

Microsoft's Photosynth can do that. How do they calibrate the camera?

In CV class, I learned to reconstruct 3D scenes from two views. With only two camera views and no camera calibration, a projective reconstruction is the best you can get. If you want to recover the original structure, you have to supply some knowledge of the real scene, such as which lines are parallel or perpendicular, or the coordinates of more than five points. Usually we use a checkerboard to calibrate the camera.
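(For reference, here is a minimal sketch of that checkerboard calibration step using OpenCV in Python. The board size, the calib/ folder, and the file pattern are just assumptions for illustration, not anything Photosynth actually does.)

```python
# Minimal checkerboard calibration sketch with OpenCV.
import glob
import cv2
import numpy as np

pattern_size = (9, 6)                        # inner corners per row/column (assumption)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib/*.jpg"):        # hypothetical folder of checkerboard photos
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K holds the intrinsics (focal length, principal point); dist the lens distortion.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection RMS:", rms)
print("K =\n", K)
```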

In Photosynth, we are not asked to pick out lines that are orthogonal or parallel, nor to take photos with a checkerboard. But we do offer more than two views. What do we gain from that? Can we calibrate the camera without providing any extra information beyond the pixels? Andrew says maybe they can get the camera parameters from the photo itself (for instance, from the EXIF metadata). Well, that is one possible way, but what if we don't have that information?
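(Here is a rough sketch of that idea: pull a 35mm-equivalent focal length out of the EXIF header, build an approximate K from it, and then recover a metric, up-to-scale point cloud from two views with OpenCV. The file names are placeholders, and it assumes the EXIF tag is actually present.)

```python
import cv2
import numpy as np
from PIL import Image

def intrinsics_from_exif(path):
    """Rough pinhole K from the EXIF 35mm-equivalent focal length.

    Assumes tag 41989 (FocalLengthIn35mmFilm) exists; 36 mm is the
    width of a full 35mm frame.
    """
    img = Image.open(path)
    exif = img._getexif() or {}
    focal_35 = float(exif[41989])
    w, h = img.size
    f_px = w * focal_35 / 36.0               # focal length in pixels
    return np.array([[f_px, 0, w / 2.0],
                     [0, f_px, h / 2.0],
                     [0, 0, 1.0]])

# Hypothetical pair of overlapping photos of the same scene.
K = intrinsics_from_exif("view1.jpg")
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# Match SIFT features between the two views.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# With K known, the essential matrix gives a metric pose (up to scale).
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, inliers = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
good = inliers.ravel() > 0

# Triangulate the inlier matches into a 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)
cloud = (pts4d[:3] / pts4d[3]).T             # N x 3 points, overall scale unknown
print(cloud.shape)
```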
