
1 MESA LAB Multi-view image stitching
Guimei Zhang, MESA (Mechatronics, Embedded Systems and Automation) LAB, School of Engineering, University of California, Merced
E: guimei.zh@163.com  Phone: 209-658-4838  Lab: CAS Eng 820 (T: 228-4398)
Monday, June 16, 2014, 4:00-6:00 PM
Applied Fractional Calculus Workshop Series @ MESA Lab @ UCMerced

2 MESA LAB Introduction
Why work on image stitching?
1. Stitching generates one panoramic image from a series of smaller, overlapping images.
2. The stitched image can have higher resolution than one acquired with a dedicated panoramic camera, and a panoramic camera is also more expensive.

3 MESA LAB Introduction

4 MESA LAB Introduction
Applications: interactive panoramic viewing of images, architectural walk-throughs, multi-node movies, and other applications that model the 3D environment from images acquired in the real world (digital surface models, digital terrain models, true orthophotos, and full 3D models).

5 MESA LAB Introduction
True orthophoto and full 3D models.

6 MESA LAB Introduction
What is multi-view? Images are captured at different times, from different viewpoints, or with different sensors, such as cameras, laser scanners, radar, multispectral cameras, and so on.

7 MESA LAB 2. Method
Flowchart for producing a panoramic image.

8 MESA LAB Introduction
The main work is as follows:
1. Image acquisition
2. How to perform effective image registration
3. How to carry out image merging

9 MESA LAB Introduction: Image acquisition
1. Use one camera at different times or from different viewpoints, so that the images are related by a rotation, a translation, or both (R, T).
2. Use several cameras located at different viewpoints (R, T).
3. Use different sensors, such as a camera, laser scanner, radar, or multispectral scanner, and so on (fusing multi-sensor information).

10 MESA LAB Introduction
Camera and tripod for image acquisition by camera rotation; geometry of overlapping images. A coordinate transformation needs to be performed.

11 MESA LAB Introduction
Since the orientation of each imaging plane differs across the acquisition, the acquired images need to be projected onto a common surface, such as a cylinder or a sphere, before image registration can be performed; that is, a coordinate transformation must be applied.
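A minimal sketch of one such projection, cylindrical warping with NumPy and OpenCV; the function name cylindrical_warp, the inverse-mapping approach, and the assumption that the focal length f is known in pixels are illustrative choices, not details from the slides:

```python
import numpy as np
import cv2

def cylindrical_warp(img, f):
    """Project a planar image onto a cylinder of radius f (focal length in pixels)."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    # For every pixel of the output cylinder, compute its source position
    # in the original (planar) image, then remap with bilinear interpolation.
    ys, xs = np.indices((h, w), dtype=np.float32)
    theta = (xs - cx) / f                 # angle around the cylinder axis
    hgt = (ys - cy) / f                   # height along the cylinder axis
    X = np.tan(theta)                     # back-project to the plane Z = 1
    Y = hgt / np.cos(theta)
    map_x = (f * X + cx).astype(np.float32)
    map_y = (f * Y + cy).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```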

12 MESA LAB Introduction: Image registration
To form a larger image from a set of overlapping images, it is necessary to find the transformation matrix (usually a rigid transformation with only the parameters R and T) that aligns the images. Image registration aims to find this transformation matrix for two or more overlapping images, because the projection (from the viewpoint through any position in the aligned images into the 3D world) is unique.
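As a small illustration of this alignment step, the snippet below estimates a 2D rotation-plus-translation (with uniform scale) between two overlapping images from matched point pairs using OpenCV's estimateAffinePartial2D with RANSAC; the point coordinates are made up for the example and would normally come from a feature matcher:

```python
import cv2
import numpy as np

# Hypothetical matched point pairs between two overlapping images (pixel coordinates).
pts_src = np.float32([[10, 20], [200, 40], [180, 220], [30, 200]])
pts_dst = np.float32([[55, 18], [245, 42], [222, 218], [74, 197]])

# Estimate a 2x3 matrix [R|T] (rotation, translation, uniform scale) robustly.
M, inliers = cv2.estimateAffinePartial2D(pts_src, pts_dst, method=cv2.RANSAC)
print("Estimated [R|T]:\n", M)
```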

13 MESA LAB Introduction
Example: 5th frame image, 6th frame image, and the registered image.

14 MESA LAB Multi-view point cloud registration and stitching based on SIFT features
Outline: 1. Motivation  2. Method  3. Experiments  4. Conclusion  5. Discussion
SIFT: scale-invariant feature transform

15 MESA LAB 1. Motivation
Problems with existing methods for multi-view point cloud registration in large scenes:
1. Restricted by the camera's viewing angle, images from one or two viewpoints capture only local information about the scene;

16 MESA LAB 1. Motivation
2. Existing methods need special markers to be placed in large reconstruction scenes.
3. Existing methods also require iterative ICP (iterative closest point) computation and cannot eliminate the interference of holes and invalid 3D points.

17 MESA LAB 2. Method
Building on the work of Bendels [8], we put forward a new algorithm for multi-view registration and stitching:
1. Generate a 2D texture image of the effective point cloud;
2. Extract SIFT features and match them in the 2D effective texture image;

18 MESA LAB 2. Method
3. Map the extracted SIFT features and their matching relationships onto the 3D point cloud data, obtaining features and correspondences for the multi-view 3D point clouds;
4. Finally, stitch the multi-view point clouds.

19 MESA LAB 2. Method
2.1 Generating the texture image of effective point cloud data
Why: 3D point clouds inevitably contain holes and noise. To reduce their effect on the registration and stitching precision of multi-view point clouds, we use a mutual mapping between the 3D point cloud and the 2D texture image to obtain the texture image of the effective point cloud data.

20 MESA LAB 2. Method
2.1 Generating the texture image of effective point cloud data
How: First, we project the 3D point cloud onto a 2D plane; second, we apply 8-neighborhood seed filling and region filling to the projected binary image, obtaining the projection image of the effective point cloud data.
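A rough sketch of this projection-and-filling step, assuming an orthographic projection of the cloud onto the XY plane, a 512x512 grid, a small morphological closing, and OpenCV's floodFill for the hole filling (all of these are illustrative choices, not specified on the slides):

```python
import numpy as np
import cv2

def effective_mask(points, grid=512):
    """Project a 3D point cloud onto the XY plane and fill the resulting region."""
    xy = points[:, :2]
    mn, mx = xy.min(axis=0), xy.max(axis=0)
    px = ((xy - mn) / (mx - mn + 1e-9) * (grid - 1)).astype(int)
    mask = np.zeros((grid, grid), np.uint8)
    mask[px[:, 1], px[:, 0]] = 255                      # binary projection image
    # Close small gaps so the projected points form a connected silhouette.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Seed-fill the exterior background from a corner (8-connectivity), then
    # invert it to recover interior holes and OR them back into the region.
    flood = mask.copy()
    ff_mask = np.zeros((grid + 2, grid + 2), np.uint8)
    cv2.floodFill(flood, ff_mask, (0, 0), 255, flags=8)
    return mask | cv2.bitwise_not(flood)                # effective-region mask
```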

21 MESA LAB 2. Method
2.1 Generating the texture image of effective point cloud data
Texture image of the effective point cloud data of a scene.

22 MESA LAB 2. Method
2.2 SIFT feature extraction and matching
1. Extract SIFT features. SIFT is a local feature proposed by David Lowe [7]; the extracted SIFT features are invariant under translation, scale, and rotation. We use the SIFT algorithm to extract 2D features and then the RANSAC method [9] to eliminate mismatches.
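The sketch below shows a conventional SIFT-plus-RANSAC pipeline with OpenCV on two grayscale texture images; the 0.75 ratio-test threshold and the homography model used by RANSAC are common defaults, not details given on the slides:

```python
import cv2
import numpy as np

def sift_matches(img1, img2):
    """Extract SIFT features from two texture images and keep RANSAC inliers."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    # Nearest-neighbor matching with Lowe's ratio test to drop ambiguous matches.
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]
    p1 = np.float32([kp1[m.queryIdx].pt for m in good])
    p2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # RANSAC on a homography model rejects the remaining mismatches.
    _, inl = cv2.findHomography(p1, p2, cv2.RANSAC, 3.0)
    inl = inl.ravel().astype(bool)
    return p1[inl], p2[inl]
```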

23 MESA LAB 2. Method
2.3 3D feature point extraction
Each pixel of the effective texture image, obtained by point cloud texture mapping, has a one-to-one correspondence with a 3D point in the cloud [10], as shown in the figure (correspondence relationship).

24 MESA LAB 2. Method
The method is as follows:
(1) Extract SIFT feature points in the 2D texture image and compute their pixel coordinates.
(2) Because the 2D feature points correspond one-to-one to 3D points, we can look up the coordinates of the corresponding 3D feature points in the point cloud.
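A small sketch of this 2D-to-3D lookup, assuming the point cloud is stored as an organized (H, W, 3) array that is pixel-aligned with the texture image and marks invalid points with NaN (this storage layout is an assumption, not stated on the slides):

```python
import numpy as np

def lift_to_3d(features_2d, cloud_hw3):
    """Map 2D keypoint pixel coordinates to 3D points of an organized cloud."""
    xs = np.round(features_2d[:, 0]).astype(int)
    ys = np.round(features_2d[:, 1]).astype(int)
    pts3d = cloud_hw3[ys, xs]                    # one-to-one pixel-to-point lookup
    valid = ~np.isnan(pts3d).any(axis=1)         # drop features that fall on holes
    return pts3d, valid
```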

25 MESA LAB 2. Method
2.4 3D point cloud stitching
Multi-view 3D point cloud stitching transforms point clouds expressed in different coordinate systems into a common one; the main problem is to estimate the coordinate transformation, i.e., the rotation matrix R and the translation vector T.

26 MESA LAB 2. Method
From the matching point pairs obtained in the previous step, we can estimate the coordinate transformation, that is, the parameters R and T that minimize the objective function
E(R, T) = Σᵢ ‖R·pᵢ + T − qᵢ‖²,
where pᵢ and qᵢ are matching point pairs of the 3D point clouds from two consecutive viewpoints.
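One standard way to minimize this least-squares objective in closed form is the SVD-based (Kabsch-style) solution sketched below; the slides do not specify the solver, so this is only an illustrative choice:

```python
import numpy as np

def estimate_rigid_transform(p, q):
    """Closed-form least-squares (R, T) aligning point set p onto q, both Nx3."""
    mu_p, mu_q = p.mean(axis=0), q.mean(axis=0)
    H = (p - mu_p).T @ (q - mu_q)                # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # correct an improper (reflected) solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = mu_q - R @ mu_p
    return R, T

# Stitching: transform the new view's cloud into the reference frame.
# stitched = np.vstack([cloud_q, (R @ cloud_p.T).T + T])
```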

27 MESA LAB 3. Experiments
(a) SIFT features on effective texture image 1; (b) effective texture image 2; (c) SIFT features of 3D point cloud 1; (d) SIFT features of 3D point cloud 2.

28 MESA LAB 3. Experiments
Experimental results: our method, Ref. [8], Ref. [1].

29 MESA LAB 3. Experiments
Performance evaluation criteria:
- Accuracy: registration rate and stitching error rate
- Efficiency: time consumed

30 MESA LAB 3. Experiments

31 MESA LAB 05/05/2014 AFC Workshop Series @ MESALAB @ UCMerced Slide-31/1024

32 MESA LAB 05/05/2014 AFC Workshop Series @ MESALAB @ UCMerced Slide-32/1024

33 MESA LAB 4. Conclusion
1. We use SIFT features of the effective texture image to register and stitch dense multi-view point clouds. The texture image of the effective point cloud is obtained through mutual mapping between the 3D point cloud and the 2D texture image, so the algorithm eliminates the interference of holes and invalid points;

34 MESA LAB 4. Conclusion
2. Ensuring that an effective 3D feature corresponding to every 2D feature can be found in the 3D point cloud eliminates unnecessary mismatches, so matching efficiency and matching precision are improved;

35 MESA LAB 4. Conclusion
3. Our algorithm stitches using the correct matching point pairs, so it avoids the stepwise iteration of the ICP algorithm, decreases the computational complexity of matching, and reduces the stitching error caused by mismatches.

36 MESA LAB

37 MESA LAB Thanks

