
1 Stereo. Outline: reminder of 3D geometry; introduction
How humans see depth; the principle of triangulation; epipolar geometry; fundamental matrix; rectification; correspondence (correlation based, feature based); other 3D reconstruction methods

2 Reminder from the last lecture
Single-view modeling

3 Projection in 2D

4 Vanishing points (2D)

5 Two point perspective

6 Vanishing lines / Multiple vanishing points
Any set of parallel lines on the plane defines a vanishing point. The union of all these vanishing points is the horizon line, also called the vanishing line. Note that different planes (can) define different vanishing lines.

7 Vanishing point

8 Perspective cues

9 Perspective cues

10 Perspective cues

11 Comparing heights

12 Measuring height

13 Cross ratio: scene cross ratio and image cross ratio

14 Cross ratio
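A standard statement of the cross-ratio identity these slides rely on (the particular grouping of the four points may differ from the figure on the slide):

```latex
% Cross ratio of four collinear scene points P_1,...,P_4:
\[
\mathrm{CR}(P_1,P_2,P_3,P_4)=
\frac{\lVert P_3-P_1\rVert\,\lVert P_4-P_2\rVert}
     {\lVert P_3-P_2\rVert\,\lVert P_4-P_1\rVert}
\]
% The same expression evaluated on the image points p_1,...,p_4 gives the same
% value: the cross ratio is preserved under perspective projection, which is
% why ratios of scene lengths can be measured in the image.
```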

15 Stereo. What is stereo in computer vision? What is it good for?

16 Introduction. An image seen from a single camera carries no direct sense of depth. In this chapter we will see how to recover a sense of depth from images.

17 Stereo Many slides adapted from Steve Seitz

18 Introduction. Computer stereo vision is the extraction of 3D information from digital images. Human beings use stereo vision to sense distance.

19 How humans see depth

20 Motivation / Applications
Stereo vision is highly important in fields such as robotics: robot navigation, car navigation, extracting information about the relative position of 3D objects, making 3D movies, and video games that use stereo.

21 Motivation cont.

22 Goal. Given two or more images of the same object (taken from different viewpoints), we want to recover the object in the real world.

23 Goal cont. “Demo”

24 The principle of triangulation cont.
Given the projections of a 3D point X in two or more images (with known focal length), find the coordinates of the point. The 3D point lies at the intersection of the rays passing through the matching image points and the associated optical centers.

25 Depth from disparity
[Figure: two cameras with optical centers O and O′, focal length f, baseline B, scene point X at depth z, image points x and x′.] The triangles (X, O, O′) and (X, x, x′) are similar, so considering these similar triangles gives the depth–disparity relation z = f·B / (x − x′), and the discussion reduces to a single horizontal scan line.

26 Stereo vision. Two cameras, left and right, with optical centers OL and OR.
The virtual image plane is the projection of the actual image plane through the optical center. The baseline b is the separation between the optical centers. A scene point P is imaged at pL and pR, e.g. pL = 9, pR = 3, so the disparity is d = pL − pR = 6 pixels. Disparity is the amount by which the two images of P are displaced relative to each other. Depth: z = b·f / (p·d), where p is the pixel width.
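A minimal numeric sketch of the depth formula above, z = b·f / (p·d), using the pixel values from the slide; the baseline, focal length, and pixel width are made-up illustrative numbers:

```python
def depth_from_disparity(b, f, p, d):
    """Depth z = (baseline * focal length) / (pixel width * disparity in pixels)."""
    return (b * f) / (p * d)

# Values from the slide: pL = 9, pR = 3  ->  disparity d = pL - pR = 6 pixels.
d = 9 - 3
# Hypothetical camera parameters (not given on the slide):
b = 0.10      # baseline in metres
f = 0.008     # focal length in metres
p = 10e-6     # pixel width in metres
print(depth_from_disparity(b, f, p, d))   # ~13.3 m
```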

27 Depth from disparity Disparity is inversely proportional to depth!
[Figure: the same two-camera setup.] Small disparity => large depth Z′.

28 Depth from disparity Disparity is inversely proportional to depth!
[Figure: the same two-camera setup.] Large disparity => small depth Z.

29 Reconstruction. Sometimes the rays never intersect because of feature localization error, so we look for the point closest to both rays (see the sketch below).
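A minimal sketch of the closest-point idea, assuming each viewing ray is given as an origin (the optical center) and a direction; the example rays are made up:

```python
import numpy as np

def midpoint_triangulation(o1, d1, o2, d2):
    """Return the point closest (in a least-squares sense) to two 3D rays.

    Each ray is o + t*d with origin o and direction d.  We solve for the
    parameters (t1, t2) of the closest points on the two rays and return
    the midpoint of the segment connecting them.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve [d1 -d2] [t1 t2]^T = o2 - o1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)                     # 3x2
    t, *_ = np.linalg.lstsq(A, o2 - o1, rcond=None)
    p1 = o1 + t[0] * d1
    p2 = o2 + t[1] * d2
    return 0.5 * (p1 + p2)

# Two rays that nearly (but not exactly) intersect near (1, 1, 5):
o1, d1 = np.array([0., 0., 0.]), np.array([1., 1., 5.])
o2, d2 = np.array([1., 0., 0.]), np.array([0., 1.01, 5.])
print(midpoint_triangulation(o1, d1, o2, d2))
```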

30 Stereo with converging cameras
Short baseline: large common field of view, but large depth error.

31 Stereo with converging cameras
Large baseline: small depth error, but small common field of view.

32 Verging optical axes. The two optical axes intersect at a fixation point:
the common field of view is increased; the depth error is small; correspondence is more difficult.

33 Problems in stereo vision
Matching a point across the two images (the correspondence problem) is hard, so we use epipolar geometry to constrain the search.

34 Epipolar geometry. For each point p in image plane 1 there is a set of points in image plane 2 that can match it (e.g. p′, q′). [Figure: points p, q and p′, q′, optical centers O and O′.]

35 Epipolar geometry. Baseline – line connecting the two camera centers

36 Epipolar geometry Baseline – line connecting the two camera centers
Epipolar plane – plane containing the baseline (a 1D family of such planes). Epipoles – the intersections of the baseline with the image planes; equivalently, the projections of the other camera center, or the vanishing points of the motion direction.

37 Epipolar line, epipolar plane, epipole

38 The epipole

39 Epipolar constraint

40 Potential matches for p have to lie on the corresponding epipolar line l′.
Potential matches for p′ have to lie on the corresponding epipolar line l. The points e and e′ are called the epipoles.

41 Vector cross product to matrix vector multiplication
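The identity this slide's title refers to, written out in standard notation (the symbol [a]× for the skew-symmetric matrix is a common convention, not necessarily the one on the slide):

```latex
\[
a \times b = [a]_{\times}\, b,
\qquad
[a]_{\times} =
\begin{pmatrix}
 0    & -a_3 &  a_2 \\
 a_3  &  0   & -a_1 \\
-a_2  &  a_1 &  0
\end{pmatrix}
\]
```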

42 Essential matrix. T = translation, R = rotation

43 Essential matrix. Coplanarity constraint between the translation T and the two viewing rays (through p and p′): all three lie in the epipolar plane.

44 Essential matrix. Essential matrix E = R S, where S is the skew-symmetric matrix of the translation T, so the coplanarity constraint becomes pᵣᵀ E pₗ = 0.
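A minimal numeric sketch of this construction, E = R S with S the skew-symmetric matrix of T, checking the coplanarity/epipolar constraint on one synthetic point; the rotation, translation, and point are made up:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix S such that S @ b == np.cross(t, b)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

# Made-up relative pose: small rotation about the y-axis, mostly horizontal baseline.
theta = np.deg2rad(10)
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
T = np.array([0.2, 0.0, 0.01])      # right camera centre in left-camera coordinates

E = R @ skew(T)                     # essential matrix E = R S

# A 3D point in left-camera coordinates and its coordinates in the right camera,
# using the convention P_r = R (P_l - T) that matches E = R S:
P_l = np.array([0.3, -0.1, 2.0])
P_r = R @ (P_l - T)

# Normalized image points (divide by depth); the constraint p_r^T E p_l ~ 0 holds.
p_l = P_l / P_l[2]
p_r = P_r / P_r[2]
print(p_r @ E @ p_l)                # ~0 up to floating-point error
```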

45 Homogeneous coordinates / Conversion
Converting to homogeneous coordinates (homogeneous image coordinates, homogeneous scene coordinates): append a coordinate with value 1; the coordinates are then defined only up to a proportionality factor. Converting from homogeneous coordinates: divide by the last coordinate.
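The two conversions written out explicitly:

```latex
\[
(x,y)\;\mapsto\;(x,y,1), \qquad (x,y,z)\;\mapsto\;(x,y,z,1)
\]
\[
(x,y,w)\;\mapsto\;\left(\tfrac{x}{w},\tfrac{y}{w}\right), \qquad
(x,y,z,w)\;\mapsto\;\left(\tfrac{x}{w},\tfrac{y}{w},\tfrac{z}{w}\right)
\]
```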

46 Intrinsic transformation - Principal Point

47 Camera transformation
2D point (3×1) = camera-to-pixel coordinate transformation matrix (3×3) × perspective projection matrix (3×4) × world-to-camera coordinate transformation matrix (4×4) × 3D point (4×1)
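A minimal sketch of this projection chain with made-up matrices (the intrinsics, pose, and point are illustrative, not from the slides):

```python
import numpy as np

# Intrinsics (camera-to-pixel transform, 3x3); made-up values.
K = np.array([[800.,   0., 320.],
              [  0., 800., 240.],
              [  0.,   0.,   1.]])

# Perspective projection matrix (3x4).
Pi0 = np.hstack([np.eye(3), np.zeros((3, 1))])

# World-to-camera rigid transform (4x4): here a pure translation for simplicity.
T_wc = np.eye(4)
T_wc[:3, 3] = [0.0, 0.0, 1.0]

# A 3D world point in homogeneous coordinates (4x1).
X = np.array([0.2, 0.1, 4.0, 1.0])

x_h = K @ Pi0 @ T_wc @ X          # homogeneous 2D point (3x1)
x = x_h[:2] / x_h[2]              # pixel coordinates
print(x)
```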

48 Intrinsic transformation

49 Fundamental matrix. So far we assumed that the camera calibration is known. What if we don't know the camera calibration? We use the fundamental matrix to handle this case.

50 Fundamental matrix. Assume we know the camera calibration K: then the un-normalized (pixel) coordinates are p = K p̂, where p̂ are the normalized coordinates (see the sketch below).
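Although the following slides derive F through the projection matrices, a compact sketch of the standard relation between F, E, and the calibration K may help; it re-uses the made-up pose from the earlier essential-matrix sketch and a made-up K shared by both cameras:

```python
import numpy as np

def skew(t):
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

# Same made-up pose as before: E = R S with S = skew(T).
theta = np.deg2rad(10)
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
T = np.array([0.2, 0.0, 0.01])
E = R @ skew(T)

# Made-up (identical) intrinsics for both cameras.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])

# With pixel coordinates p = K p_hat, the constraint p_hat_r^T E p_hat_l = 0
# becomes p_r^T F p_l = 0 with:
F = np.linalg.inv(K).T @ E @ np.linalg.inv(K)

# Check on one synthetic correspondence (P_r = R (P_l - T)):
P_l = np.array([0.3, -0.1, 2.0]); P_r = R @ (P_l - T)
p_l = K @ (P_l / P_l[2]); p_r = K @ (P_r / P_r[2])
print(p_r @ F @ p_l)   # ~0
```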

51 Fundamental matrix

52 Fundamental matrix. The projection matrix M is not square (it is 3×4, and its pseudo-inverse is 4×3), so how do we get M⁻¹?
Assuming M is known, we use the pseudo-inverse M⁺ instead.

53 Fundamental matrix. Given a point in the left camera, what can we learn from the fundamental matrix? Every point on the epipolar line corresponding to that point satisfies the epipolar equation.

54 Assume we have obtained the equation of the epipolar line.

55 Fundamental matrix cont.
Given a point x in the left camera, the epipolar line in the right camera is l′ = F x.

56 Fundamental matrix cont.
F is a 3×3 matrix with 9 components. It is a rank-2 matrix (due to S) and has 7 degrees of freedom. Given a point x in the left camera, the epipolar line in the right camera is l′ = F x.

57 Computing the fundamental matrix
F (the fundamental matrix) has 9 entries. In order to compute F we need some point correspondences. How many points do we need?

58 Computing the fundamental matrix
Since F is defined only up to scale, one entry can be fixed to 1. All epipolar lines intersect at the epipole. Let e be the epipole in the left image: no matter what x′ is, the equation x′ᵀ F e = 0 is always true, so F e = 0. The epipole is therefore an eigenvector of F with eigenvalue 0 (the null vector of F); see the sketch below.
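A small sketch of this fact, recovering the epipole as the null vector of F via SVD; the F used here is just an arbitrary rank-2 matrix built for illustration:

```python
import numpy as np

# A made-up rank-2 "fundamental-like" matrix: a skew-symmetric matrix times a
# full-rank diagonal matrix (the skew factor makes the product rank 2).
F = np.array([[0., -1., 0.5],
              [1.,  0., -0.2],
              [-0.5, 0.2, 0.]]) @ np.diag([1.0, 2.0, 3.0])

# The right null vector of F (last row of V^T in the SVD) is the epipole e: F e = 0.
U, s, Vt = np.linalg.svd(F)
e = Vt[-1]
print(F @ e)           # ~0, so x'^T F e = 0 for every x'
print(e / e[-1])       # epipole in inhomogeneous image coordinates
```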

59 Computing the fundamental matrix
F has 7 degrees of freedom (9 entries, minus one for the overall scale and one for the rank-2 constraint). Each point correspondence gives one equation, so at least 7 correspondences are needed; the linear eight-point method uses 8 or more.

60 Computing fundamental matrix

61 Fundamental matrix cont.
One point correspondence gives one linear equation in the entries of F (from x′ᵀ F x = 0).

62 Computing fundamental matrix
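A minimal sketch of the standard normalized eight-point algorithm, which stacks one equation xᵣᵀ F xₗ = 0 per correspondence and solves for F with SVD; this is the usual linear method, not necessarily the exact procedure on the slides, and the function and variable names are illustrative:

```python
import numpy as np

def normalize(pts):
    """Translate/scale points so the centroid is at the origin, mean distance ~ sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1]])
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ pts_h.T).T, T

def eight_point(pts_l, pts_r):
    """Estimate F from >= 8 correspondences; pts are Nx2 pixel coordinates."""
    xl, Tl = normalize(pts_l)
    xr, Tr = normalize(pts_r)
    # One row per correspondence: x_r^T F x_l = 0, linear in the 9 entries of F.
    A = np.column_stack([xr[:, 0] * xl[:, 0], xr[:, 0] * xl[:, 1], xr[:, 0],
                         xr[:, 1] * xl[:, 0], xr[:, 1] * xl[:, 1], xr[:, 1],
                         xl[:, 0], xl[:, 1], np.ones(len(xl))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint by zeroing the smallest singular value.
    U, s, Vt = np.linalg.svd(F)
    F = U @ np.diag([s[0], s[1], 0]) @ Vt
    return Tr.T @ F @ Tl          # undo the normalization
```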

63 Epipolar lines

64 Epipolar lines

65 Rectification. Why do we need rectification?
Because after rectification the matching point lies in the same row, so the correspondence search becomes a 1D search along a scan line.

66 Rectification Rectification: warping the input images (perspective transformation) so that epipolar lines are horizontal

67 Rectification Image Reprojection
Reproject the image planes onto a common plane parallel to the baseline. Notice that only the focal point of each camera really matters. (Seitz)

68 Rectification slide: R. Szeliski

69 Rectification. Any stereo pair can be rectified by rotating and scaling the two image planes (a homography per image). We will assume images have been rectified, so that the image planes of the cameras are parallel, the focal points are at the same height, and the focal lengths are the same. Then epipolar lines fall along the horizontal scan lines of the images (a sketch follows below).
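A minimal rectification sketch using OpenCV's stereo rectification (one possible implementation, not necessarily the method on the slides); the calibration values and image file names are placeholders:

```python
import cv2
import numpy as np

# Placeholder calibration: identical intrinsics, no distortion, relative pose (R, T).
K1 = K2 = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
dist1 = dist2 = np.zeros(5)
R = np.eye(3)                             # relative rotation between the cameras
T = np.array([[0.1], [0.0], [0.0]])       # baseline along x
size = (640, 480)

# Compute rectifying rotations R1, R2 and new projection matrices P1, P2.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, dist1, K2, dist2, size, R, T)

# Build per-camera remapping tables and warp the images.
map1x, map1y = cv2.initUndistortRectifyMap(K1, dist1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, dist2, R2, P2, size, cv2.CV_32FC1)
left = cv2.imread("left.png")             # placeholder file names
right = cv2.imread("right.png")
left_rect = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)
```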

70 Correspondence. For every point in image plane 1 there is a set of points that may match it in image plane 2. How do we find the best matching point? Correlation based: attempt to establish correspondence by matching image intensities, usually over a window of pixels in each image. Feature based: attempt to establish correspondence by matching sparse sets of image features (edges, ...).

71 Correspondence search via correlation
[Figure: left and right scan lines; matching cost as a function of disparity.] Slide a window along the right scan line and compare the contents of that window with the reference window in the left image. Matching cost: SSD or normalized correlation.

72 Correlation methods. Window-based matching costs between a left window WL and a right window WR, summed over the window pixels:
Sum of squared differences (SSD) = Σ (WL − WR)²
Absolute difference (AD) = Σ |WL − WR|
Cross correlation (CC) = Σ WL · WR
Normalized correlation (NC) = Σ (WL − W̄L)(WR − W̄R) / √(Σ (WL − W̄L)² · Σ (WR − W̄R)²)
MC =
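A minimal sketch of the window-based SSD search along one scan line described above; the array names, window size, and disparity range are illustrative:

```python
import numpy as np

def ssd_disparity_at(left, right, y, x, half=3, max_disp=32):
    """Disparity at pixel (x, y) of the left image by SSD over a (2*half+1)^2 window.

    Assumes grayscale numpy arrays from a rectified pair, and that (x, y) is at
    least `half + max_disp` pixels away from the image border.  Slides the window
    along the same row of the right image and returns the disparity with the
    smallest SSD matching cost.
    """
    ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    best_d, best_cost = 0, np.inf
    for d in range(0, max_disp + 1):
        xr = x - d                               # candidate column in the right image
        if xr - half < 0:
            break
        cand = right[y - half:y + half + 1, xr - half:xr + half + 1].astype(np.float64)
        cost = np.sum((ref - cand) ** 2)         # SSD matching cost
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```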

73 Window size. If we take too small a window we keep more detail but get more noise; if we take too large a window the matching is less sensitive to noise but fine detail is blurred. Example window sizes: W = 3 and W = 20.

74 Disparity map. A disparity map expresses the matches found by the correspondence (correlation) step between the left and right images: it uses gray-level intensity to express the disparity between the two matched windows. Bright values indicate large disparity and dark values indicate small disparity.

75 Disparity map cont. If we perform this matching process for every pixel in the left-hand image, finding its match in the right-hand frame and computing the distance between them, we end up with an image in which every pixel contains the distance/disparity value for that pixel of the left image (see the sketch below).
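A minimal sketch of producing such a disparity map with OpenCV's block matcher, which implements a correlation-style window search; the file names and parameter values are placeholders:

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)      # placeholder rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher: numDisparities must be a multiple of 16, blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)                   # fixed-point disparities

# Scale to a displayable 8-bit image: bright = large disparity, dark = small.
disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", disp_vis)
```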

76 Disparity map Left image Right image

77 Correlation methods. Do not work well enough in some cases (images with few details / little texture). Easy to implement. Produce a dense disparity map.

78 Failures of correspondence search
Textureless surfaces; occlusions; repetition.

79 Feature based approach

80 Feature based. Features: edge points, lines, corners. Matching algorithm:
Extract features in the stereo pair; define a similarity measure; search for correspondences using the similarity measure and the epipolar geometry (a sketch follows below).
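A small sketch of these three steps using OpenCV ORB corner features and brute-force matching, as a modern stand-in for the edge/line/corner features listed above; file names and thresholds are illustrative:

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)      # placeholder rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# 1. Extract features in the stereo pair.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(left, None)
kp2, des2 = orb.detectAndCompute(right, None)

# 2. Define a similarity measure: Hamming distance between binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# 3. Use the epipolar geometry: for a rectified pair, matches must lie on
#    (nearly) the same row, so discard matches with a large vertical offset.
good = [m for m in matches
        if abs(kp1[m.queryIdx].pt[1] - kp2[m.trainIdx].pt[1]) < 1.5]
```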

81 Feature based methods. For line features: l = length, θ = orientation,
m = coordinates of the midpoint, i = average intensity along the line, and the w's are the weights used in the similarity measure.

82 Feature based approach
Pros: relatively insensitive to illumination changes; good for man-made scenes with strong lines but weak texture or textureless surfaces; works well on edges; faster than the correlation approach. Cons: gives only a sparse depth map; matching may be tricky.

83 Winner take all. Two pixels in image plane 1 may correspond to the same pixel in image plane 2. [Figure: pixels a, b, c.]

84 Ordering

85 Global approach. Use dynamic programming to enforce the ordering constraint along each scan line: if a is to the left of b in one image, then a's match must be to the left of b's match in the other image. Every pixel gets matched.

86 Global approach

87 Global approach. We want to minimize the cost of the correspondence along each scan line (going from left to right): minimize over the disparity assignment d(·) the total cost Σ_{x=1}^{n} c(x, y, d(x)).
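A minimal sketch of such a scan-line dynamic programme: align one left and one right scan line with a per-pixel match cost plus an occlusion penalty, then read the disparities off the optimal path; the cost function and penalty value are illustrative:

```python
import numpy as np

def dp_scanline(left_row, right_row, occlusion=50.0):
    """Dynamic-programming matching of one rectified scan-line pair.

    D[i, j] = best cost of aligning left_row[:i] with right_row[:j]; a diagonal
    step matches the two pixels, while horizontal/vertical steps leave a pixel
    occluded.  The ordering constraint is implicit in the monotone alignment path.
    """
    n, m = len(left_row), len(right_row)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * occlusion
    D[0, :] = np.arange(m + 1) * occlusion
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = (float(left_row[i - 1]) - float(right_row[j - 1])) ** 2
            D[i, j] = min(D[i - 1, j - 1] + c,        # match
                          D[i - 1, j] + occlusion,    # left pixel occluded
                          D[i, j - 1] + occlusion)    # right pixel occluded
    # Backtrack to recover disparities for the matched left pixels.
    disp = np.zeros(n)
    i, j = n, m
    while i > 0 and j > 0:
        c = (float(left_row[i - 1]) - float(right_row[j - 1])) ** 2
        if D[i, j] == D[i - 1, j - 1] + c:
            disp[i - 1] = (i - 1) - (j - 1)
            i, j = i - 1, j - 1
        elif D[i, j] == D[i - 1, j] + occlusion:
            i -= 1
        else:
            j -= 1
    return disp
```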

88

89 Three views. Matches between points in the first two images can be checked by re-projecting the corresponding three-dimensional point into the third image.

90

