3D Model Matching with Viewpoint-Invariant Patches (VIP) Reporter: 鄒嘉恆 Date: 10/06/2009
Introduction This paper introduces the viewpoint-invariant patch (VIP), a feature for robust 3D model registration and large-scale scene reconstruction.
Outline Viewpoint-Invariant Patch (VIP) Hierarchical estimation of 3D similarity transformation Experimental results and evaluation Conclusion
VIP - Viewpoint normalization Warp the image texture onto the local tangent plane Project the texture into a fronto-parallel (orthographic) view Extract the VIP descriptor from the normalized patch
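The three normalization steps above could be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the pinhole camera model `p = K(RX + t)`, and the nearest-neighbour sampling are all assumptions made for this sketch.

```python
import numpy as np

def normalize_patch(image, K, R, t, x, n, d, half_size, res=32):
    """Sample the image on an orthographic grid lying in the patch's
    tangent plane, yielding a fronto-parallel texture from which a
    SIFT descriptor can then be extracted."""
    b = np.cross(n, d)                    # in-plane axis orthogonal to d
    u = np.linspace(-half_size, half_size, res)
    patch = np.zeros((res, res))
    for i, vi in enumerate(u):
        for j, uj in enumerate(u):
            X = x + uj * d + vi * b       # 3D point on the tangent plane
            p = K @ (R @ X + t)           # project into the source camera
            px, py = p[0] / p[2], p[1] / p[2]
            xi, yi = int(round(px)), int(round(py))
            if 0 <= yi < image.shape[0] and 0 <= xi < image.shape[1]:
                patch[i, j] = image[yi, xi]   # nearest-neighbour sample
    return patch
```

A real implementation would use bilinear or homography-based warping (e.g. OpenCV's `cv2.warpPerspective`) rather than per-pixel loops.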
VIP - VIP generation VIP is defined as (x, σ, n, d, s) x : 3D position σ : patch size n : surface normal d : dominant orientation s : SIFT descriptor of the normalized patch
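As a concrete sketch, the five components listed above could be held in a structure like this (the field types and the 128-D SIFT length are illustrative assumptions):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VIP:
    x: np.ndarray    # 3D position of the patch centre
    sigma: float     # patch size in the scene
    n: np.ndarray    # unit surface normal of the local plane
    d: np.ndarray    # dominant gradient orientation in the patch plane
    s: np.ndarray    # SIFT descriptor of the viewpoint-normalized texture
```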
Hierarchical estimation of 3D similarity transformation A 3D similarity transformation can be hypothesized from a single VIP correspondence (x1, σ1, n1, d1, s1), (x2, σ2, n2, d2, s2) scaling : s = σ2 / σ1 rotation : R such that R n1 = n2 and R d1 = d2 translation : t = x2 − s R x1
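Because each VIP carries a full local frame (normal plus dominant orientation), one correspondence fixes all seven degrees of freedom. A sketch of this computation, under the frame convention assumed here (dominant direction, in-plane binormal, normal):

```python
import numpy as np

def similarity_from_vip(x1, sig1, n1, d1, x2, sig2, n2, d2):
    """Hypothesize a 3D similarity (s, R, t) mapping model 1 onto
    model 2 from a single VIP correspondence."""
    s = sig2 / sig1                              # scale from patch-size ratio
    # orthonormal frame of each patch: dominant dir, binormal, normal
    F1 = np.column_stack([d1, np.cross(n1, d1), n1])
    F2 = np.column_stack([d2, np.cross(n2, d2), n2])
    R = F2 @ F1.T                                # rotation aligning frame 1 to 2
    t = x2 - s * R @ x1                          # translation closing the map
    return s, R, t
```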
Hierarchical estimation of 3D similarity transformation Hierarchical Efficient Hypothesis Test (HEHT) method 3 stages : scaling, rotation, translation, each solved with RANSAC-style voting over the VIP correspondences.
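The idea of the three-stage test can be sketched as follows: each VIP match contributes one (s, R, t) hypothesis, and consensus is found first on scale, then on rotation among the scale survivors, then on translation, instead of one joint search over the full 7-DoF similarity. The thresholds and voting scheme below are illustrative, not the paper's exact procedure.

```python
import numpy as np

def rot_angle(Ra, Rb):
    """Angle in radians between two rotation matrices."""
    c = (np.trace(Ra @ Rb.T) - 1.0) / 2.0
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def heht(hyps, s_tol=0.05, r_tol=0.1, t_tol=0.1):
    """Prune per-match similarity hypotheses (s, R, t) in three stages."""
    # stage 1: scale consensus in log space
    def support_s(h):
        return sum(abs(np.log(g[0] / h[0])) < s_tol for g in hyps)
    best_s = max(hyps, key=support_s)[0]
    hyps = [h for h in hyps if abs(np.log(h[0] / best_s)) < s_tol]
    # stage 2: rotation consensus among scale survivors
    def support_r(h):
        return sum(rot_angle(g[1], h[1]) < r_tol for g in hyps)
    best_r = max(hyps, key=support_r)[1]
    hyps = [h for h in hyps if rot_angle(h[1], best_r) < r_tol]
    # stage 3: translation consensus among the remaining hypotheses
    def support_t(h):
        return sum(np.linalg.norm(g[2] - h[2]) < t_tol for g in hyps)
    best_t = max(hyps, key=support_t)[2]
    return [h for h in hyps if np.linalg.norm(h[2] - best_t) < t_tol]
```

Testing scale first is cheap (a 1-D vote) and discards most outliers before the more expensive rotation and translation stages.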
Experimental results and evaluation Two metrics : the number of inlier correspondences, and the re-detection rate.
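One plausible reading of the second metric is the fraction of one model's features that are re-found as inlier matches in the other model; as a hedged illustration only:

```python
def redetection_rate(n_inliers, n_features):
    """Fraction of a model's features re-found as inlier matches
    (one plausible reading of the slide's metric, not a definition
    taken from the paper)."""
    return n_inliers / n_features
```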
Experimental results and evaluation Structure from Motion (SfM) is used to compute depth maps and camera positions for each sequence. Camera positions are defined relative to the pose of the first camera in each sequence.
Experimental results and evaluation Number of inliers Re-detection rate
Experimental results and evaluation Scene 1:
Experimental results and evaluation Scene 2:
Experimental results and evaluation Scene 3:
Conclusion Their evaluation demonstrates that VIP features improve on existing methods for robust and accurate 3D model alignment.