Stereo. Outline: reminder of 3D geometry; introduction; how humans see depth; the principle of triangulation; epipolar geometry; the fundamental matrix; rectification; correspondence (correlation-based, feature-based); other 3D reconstruction methods
Reminder from the last lecture: single-view modeling
Projection in 2D
Vanishing points (2D)
Two point perspective
Vanishing lines. Multiple vanishing points: any set of parallel lines on a plane defines a vanishing point. The union of all these vanishing points is the horizon line, also called the vanishing line. Note that different planes (can) define different vanishing lines.
Vanishing point
Perspective cues
Perspective cues
Perspective cues
Comparing heights Vanishing point Vanishing point
Measuring height
Cross ratio scene cross ratio Image cross ratio
Cross ratio
Stereo. What is stereo in computer vision? What is it good for?
Introduction. An image seen from a single camera carries no sense of depth. In this chapter we will see how to produce images that convey a sense of depth.
Stereo Many slides adapted from Steve Seitz
Introduction. Computer stereo vision is the extraction of 3D information from digital images. Human beings use stereo vision to sense distance.
How human see depth
Motivation and applications. Stereo vision is highly important in fields such as robotics: robot navigation; car navigation; extracting information about the relative position of 3D objects; making 3D movies; video games that use stereo.
Motivation cont.
Goal. Given two or more images of the same object (from different viewpoints), we want to recover the object in the real world.
Goal cont. "Demo"
The principle of triangulation cont. Given projections of a 3D point X in two or more images (with known focal lengths), find the coordinates of the point. The 3D point lies at the intersection of the rays passing through the matching image points and the associated optical centers.
Depth from disparity. Focal length f, image points x and x', baseline B, depth z, optical centers O and O', scene point X. The triangles (X, O, O') and (X, x, x') are similar, so we can reduce the discussion to a horizontal line. From the similar triangles: disparity = x - x' = B f / z.
Stereo vision. Two cameras, left and right, with optical centers OL and OR. The virtual image plane is the projection of the actual image plane through the optical center. The baseline b is the separation between the optical centers. A scene point P is imaged at pL and pR, e.g. pL = 9, pR = 3, so the disparity d = pL - pR = 6. Disparity is the amount by which the two images of P are displaced relative to each other. Depth: z = b f / (p d), where p is the pixel width.
Depth from disparity. Disparity is inversely proportional to depth! Small disparity => large depth z'.
Depth from disparity. Disparity is inversely proportional to depth! Large disparity => small depth z.
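The inverse relation between disparity and depth can be sketched numerically; the baseline, focal length, and disparity values below are made-up illustrations, not from the slides:

```python
# Depth from disparity: by similar triangles, disparity d = x - x' = B*f / z,
# so depth z = B*f / d.
def depth_from_disparity(baseline, focal_length, disparity):
    """Depth z from baseline B (metres), focal length f (pixels), disparity d (pixels)."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    return baseline * focal_length / disparity

# Assumed values: B = 0.1 m, f = 500 px.
print(depth_from_disparity(0.1, 500, 10))  # large disparity -> small depth
print(depth_from_disparity(0.1, 500, 1))   # small disparity -> large depth
```

Doubling the disparity halves the estimated depth, matching z = B f / d.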
Reconstruction. In practice the two rays may never intersect because of feature localization error, so we look for the point closest to both rays.
Stereo with converging cameras. Short baseline: large common field of view; large depth error.
Stereo with converging cameras. Long baseline: small depth error; small common field of view.
Verging optical axes. The two optical axes intersect at a fixation point: the common field of view is increased and the depth error is small, but correspondence is more difficult.
Problems in stereo vision. Matching a point between the two images (the correspondence problem) is hard, so we use epipolar geometry to constrain the search.
Epipolar geometry. For each point p in image plane 1 there is a set of points in image plane 2 that can match it (q', p').
Epipolar geometry. Baseline – line connecting the two camera centers.
Epipolar geometry Baseline – line connecting the two camera centers Epipolar Plane – plane containing baseline (1D family) Epipoles = intersections of baseline with image planes = projections of the other camera center = vanishing points of the motion direction
Epipolar line, epipolar plane, epipole
The epipole
Epipolar constraint
Potential matches for p have to lie on the corresponding epipolar line l'. Potential matches for p' have to lie on the corresponding epipolar line l. The points e, e' are called epipoles.
Vector cross product as matrix-vector multiplication: a x b = [a]_x b, where [a]_x is a skew-symmetric 3x3 matrix.
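This identity, used in the essential-matrix derivation, can be checked with a minimal NumPy sketch:

```python
import numpy as np

def skew(a):
    """Skew-symmetric matrix [a]_x such that skew(a) @ b equals the cross product a x b."""
    return np.array([[   0.0, -a[2],  a[1]],
                     [  a[2],   0.0, -a[0]],
                     [ -a[1],  a[0],   0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
assert np.allclose(skew(a) @ b, np.cross(a, b))
```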
Essential matrix. T = translation, R = rotation between the two cameras.
Essential matrix. Coplanarity constraint between the vectors x', T, and Rx: x' . (T x Rx) = 0.
Essential matrix. Writing the cross product as a matrix multiplication gives x'^T [T]_x R x = 0, so the essential matrix is E = [T]_x R and the constraint is x'^T E x = 0.
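A sketch of the epipolar constraint with E = [T]_x R; the rotation angle, translation, and 3D point below are arbitrary assumed values:

```python
import numpy as np

def skew(t):
    return np.array([[0.0, -t[2], t[1]], [t[2], 0.0, -t[0]], [-t[1], t[0], 0.0]])

# Assumed relative pose: small rotation about the y-axis, translation along x.
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
T = np.array([1.0, 0.0, 0.0])

E = skew(T) @ R  # essential matrix E = [T]_x R

# Epipolar constraint x'^T E x = 0 for a synthetic 3D point.
X1 = np.array([0.5, 0.2, 4.0])  # point in camera-1 coordinates
X2 = R @ X1 + T                 # the same point in camera-2 coordinates
x1 = X1 / X1[2]                 # normalized image coordinates, camera 1
x2 = X2 / X2[2]                 # normalized image coordinates, camera 2
print(x2 @ E @ x1)              # ~0 up to floating-point error
```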
Homogeneous coordinates. Converting to homogeneous coordinates (homogeneous image coordinates, homogeneous scene coordinates): append a coordinate with value 1; the result is defined only up to scale (a proportional coordinate system). Converting from homogeneous coordinates: divide by the last coordinate and drop it.
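The append-1 / divide-by-last-coordinate conversions can be written as two one-line helpers (a sketch):

```python
import numpy as np

def to_homogeneous(p):
    """Append a coordinate with value 1."""
    return np.append(p, 1.0)

def from_homogeneous(ph):
    """Divide by the last coordinate and drop it."""
    return ph[:-1] / ph[-1]

p = np.array([3.0, 4.0])
ph = to_homogeneous(p)                             # [3., 4., 1.]
assert np.allclose(from_homogeneous(2.0 * ph), p)  # defined only up to scale
```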
Intrinsic transformation - Principal Point
Camera transformation: 2D point (3x1) = camera-to-pixel coordinate transformation matrix (3x3) x perspective projection matrix (3x4) x world-to-camera coordinate transformation matrix (4x4) x 3D point (4x1).
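The chain above can be sketched with assumed intrinsics (focal length 500 px, principal point (320, 240)) and identity extrinsics:

```python
import numpy as np

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])          # camera-to-pixel matrix (3x3), assumed values
Pi = np.hstack([np.eye(3), np.zeros((3, 1))])  # perspective projection matrix (3x4)
T = np.eye(4)                                  # world-to-camera transform (4x4), identity here

X = np.array([0.2, -0.1, 2.0, 1.0])            # 3D point (4x1, homogeneous)
x = K @ Pi @ T @ X                             # 2D point (3x1, homogeneous)
u, v = x[:2] / x[2]
print(u, v)                                    # pixel coordinates
```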
Intrinsic transformation
Fundamental matrix. So far we assumed the camera calibration is known. What if we do not know the camera calibration? We use the fundamental matrix to solve this problem.
Fundamental matrix. Assume we know the camera calibration K. Then x = K x_hat, where x is the un-normalized (pixel) coordinate and x_hat the normalized coordinate.
Fundamental matrix
Fundamental matrix. The projection matrix M is not square, so how do we get M^-1? Assuming we are given M, we use the pseudo-inverse M+ instead.
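A sketch of the pseudo-inverse of a non-square projection matrix; the matrix M below is an arbitrary assumed example:

```python
import numpy as np

# A 3x4 projection matrix is not square, so M^-1 does not exist;
# the Moore-Penrose pseudo-inverse M+ (a 4x3 matrix) plays its role.
M = np.hstack([np.eye(3), np.array([[0.0], [0.0], [1.0]])])
M_pinv = np.linalg.pinv(M)

print(M_pinv.shape)                        # (4, 3)
print(np.allclose(M @ M_pinv, np.eye(3)))  # M has full row rank, so M @ M+ = I
```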
Fundamental matrix. Given a point x in the left camera, what can we learn from the fundamental matrix? Every point x' on the corresponding epipolar line satisfies the equation x'^T F x = 0.
Thus l' = F x gives the equation of the epipolar line.
Fundamental matrix cont. Given a point x in the left camera, the epipolar line in the right camera is l' = F x.
Fundamental matrix cont. F is a 3x3 matrix with 9 components; it has rank 2 (due to the skew-symmetric factor S) and 7 degrees of freedom. Given a point x in the left camera, the epipolar line in the right camera is l' = F x.
Computing the fundamental matrix. F (the fundamental matrix) has 9 entries. To compute F we need some point correspondences. How many points do we need?
Computing the fundamental matrix. All epipolar lines intersect at the epipole. Let e be the epipole in the left image: no matter what x' is, the equation x'^T F e = 0 is always true, so F e = 0. The epipole is an eigenvector of F with eigenvalue 0.
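Since F e = 0, the epipole can be recovered as the null vector of F via SVD; the epipole and the 3x3 factor below are assumed toy values:

```python
import numpy as np

def skew(t):
    return np.array([[0.0, -t[2], t[1]], [t[2], 0.0, -t[0]], [-t[1], t[0], 0.0]])

# Build a rank-2 fundamental-like matrix F = [e']_x H from assumed values.
e_right = np.array([1.0, 0.5, 1.0])
H = np.array([[1.0, 0.1, 0.0],
              [0.0, 1.0, 0.2],
              [0.1, 0.0, 1.0]])  # arbitrary invertible 3x3 (assumption)
F = skew(e_right) @ H

# The left epipole e satisfies F e = 0: take the right singular vector
# belonging to the smallest (zero) singular value.
_, S, Vt = np.linalg.svd(F)
e_left = Vt[-1]
print(S[2])        # ~0: F has rank 2
print(F @ e_left)  # ~[0, 0, 0]
```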
Computing the fundamental matrix. F has 7 degrees of freedom, and each correspondence (x_i, x'_i) gives one equation, so we need a minimum of 7 point correspondences, i = {1, ..., 7}.
Computing fundamental matrix
Fundamental matrix cont. One equation for each point correspondence.
Computing fundamental matrix
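The linear (eight-point) estimation of F can be sketched on synthetic, noise-free correspondences. The pose and points below are assumed values; since the cameras are taken as calibrated (identity K), the true F coincides with the essential matrix:

```python
import numpy as np

def skew(t):
    return np.array([[0.0, -t[2], t[1]], [t[2], 0.0, -t[0]], [-t[1], t[0], 0.0]])

# Synthetic setup (assumed): camera 2 slightly rotated, translated mostly along x.
rng = np.random.default_rng(0)
theta = 0.05
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
T = np.array([1.0, 0.1, 0.0])
F_true = skew(T) @ R  # calibrated cameras, so F coincides with E here

Xs = rng.uniform([-1, -1, 3], [1, 1, 6], size=(8, 3))  # eight 3D points
x1 = np.column_stack([Xs[:, 0] / Xs[:, 2], Xs[:, 1] / Xs[:, 2], np.ones(8)])
X2 = (R @ Xs.T).T + T
x2 = np.column_stack([X2[:, 0] / X2[:, 2], X2[:, 1] / X2[:, 2], np.ones(8)])

# Each correspondence gives one linear equation x2^T F x1 = 0 in the 9 entries of F.
A = np.array([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
_, _, Vt = np.linalg.svd(A)
F = Vt[-1].reshape(3, 3)

# Enforce rank 2 by zeroing the smallest singular value.
U, S, Vt = np.linalg.svd(F)
F = U @ np.diag([S[0], S[1], 0.0]) @ Vt

# The estimate should agree with F_true up to scale (and sign).
F /= np.linalg.norm(F)
Ft = F_true / np.linalg.norm(F_true)
err = min(np.linalg.norm(F - Ft), np.linalg.norm(F + Ft))
print(err)  # ~0 on exact data
```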
Epipolar lines
Epipolar lines
Rectification. Why do we need rectification? So that matching points lie in the same row, making the correspondence search one-dimensional.
Rectification Rectification: warping the input images (perspective transformation) so that epipolar lines are horizontal
Rectification. Image reprojection: reproject the image planes onto a common plane parallel to the baseline. Notice that only the focal point of each camera really matters. (Seitz)
Rectification slide: R. Szeliski
Rectification. Any stereo pair can be rectified by rotating and scaling the two image planes (a homography per image). We will assume the images have been rectified, so: the image planes of the cameras are parallel; the focal points are at the same height; the focal lengths are the same. Then the epipolar lines fall along the horizontal scan lines of the images.
Correspondence. For every point in image plane 1 there is a set of points that may match in image plane 2. How do we find the best matching point? Correlation-based: attempt to establish correspondence by matching image intensities, usually over a window of pixels in each image. Feature-based: attempt to establish correspondence by matching sparse sets of image features (edges, ...).
Correspondence search via correlation. Left image, right image, scanline, matching cost, disparity. Slide a window along the right scanline and compare the contents of that window with the reference window in the left image. Matching cost: SSD or normalized correlation.
Correlation methods, computed over a window W: sum of squared differences SSD = sum_W (I_L - I_R)^2; absolute difference AD = sum_W |I_L - I_R|; cross-correlation CC = sum_W I_L * I_R; normalized correlation NC = sum_W I_L * I_R / sqrt(sum_W I_L^2 * sum_W I_R^2).
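The window costs above can be written directly in NumPy (a sketch; the 2x2 window is a made-up example, and the NCC variant below is the common zero-mean form):

```python
import numpy as np

def ssd(wl, wr):
    """Sum of squared differences."""
    return np.sum((wl - wr) ** 2)

def sad(wl, wr):
    """Sum of absolute differences."""
    return np.sum(np.abs(wl - wr))

def ncc(wl, wr):
    """Zero-mean normalized cross-correlation, in [-1, 1]."""
    a = wl - wl.mean()
    b = wr - wr.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))

wl = np.array([[1.0, 2.0], [3.0, 4.0]])
print(ssd(wl, wl), sad(wl, wl))  # 0.0 0.0 for identical windows
print(ncc(wl, 2.0 * wl + 5.0))   # 1.0: NCC is invariant to affine intensity changes
```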
Window size. If the window is too small we get more detail but more noise; if the window is too large the matching is less sensitive to noise but detail is lost. W=3, W=20.
Disparity map. A disparity map expresses the matches found by the correspondence (correlation) between the left and right images: it uses the gray-level intensity to express the disparity between the two matched windows (a bright value means the disparity is large, a dark value means it is small).
Disparity map cont. If we perform this matching process for every pixel in the left image, finding its match in the right frame and computing the distance between them, we end up with an image in which every pixel contains the distance/disparity value for that pixel of the left image.
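The per-pixel matching described above can be sketched as brute-force SSD block matching on a rectified pair; the toy images below are synthetic assumed data:

```python
import numpy as np

def disparity_map(left, right, window=3, max_disp=16):
    """Brute-force SSD block matching along rectified scanlines (toy sketch)."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            wl = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - half) + 1):
                wr = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.sum((wl - wr) ** 2)
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Toy pair: the right image is the left image shifted 2 px (assumed synthetic data).
left = np.tile(np.arange(16, dtype=float) ** 2, (8, 1))
right = np.roll(left, -2, axis=1)
disp = disparity_map(left, right, window=3, max_disp=16)
print(disp[4, 3:15])  # interior pixels recover the true shift of 2 px
```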
Disparity map Left image Right image
Correlation method. Does not work well in some cases (images with few details). Easy to implement. Produces a dense disparity map.
Failures of correspondence search: textureless surfaces, occlusions, repetition.
Feature based approach
Feature based. Features: edge points, lines, corners. Matching algorithm: extract features in the stereo pair; define a similarity measure; search for correspondences using the similarity measure and the epipolar geometry.
Feature based methods (matching line features): l = length of the line; orientation; m = coordinates of the midpoint; i = average intensity along the line; w = the weights.
Feature based approach. Pros: relatively insensitive to illumination changes; good for man-made scenes with strong lines but weak texture or textureless surfaces; works well on edges; faster than the correlation approach. Cons: gives only a sparse depth map; may be tricky.
Winner take all. Two pixels in image plane 1 may (incorrectly) correspond to the same pixel in image plane 2.
Ordering
Global approach. Use dynamic programming to enforce the ordering constraint on pixels along a scanline: if a is to the left of b, then a's match must be to the left of b's match. Every pixel is matched.
Global approach
Global approach. We want to minimize the cost of the correspondence along each scanline (going from left to right): min over d of sum_{x=1}^{n} c(x, y, d(x)).
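A sketch of scanline dynamic programming that minimizes the sum of matching costs plus a simple smoothness term; the cost table below is a made-up toy example:

```python
import numpy as np

def dp_scanline(cost, smooth=1.0):
    """Minimize sum_x c(x, d(x)) + smooth * |d(x) - d(x-1)| along one scanline
    by dynamic programming. cost has shape (n_pixels, n_disparities)."""
    n, m = cost.shape
    D = cost.copy()                        # D[x, j]: best cost ending at disparity j
    back = np.zeros((n, m), dtype=np.int64)
    for x in range(1, n):
        # trans[j, k]: cost of coming from disparity k at x-1 to disparity j at x.
        trans = D[x - 1][None, :] + smooth * np.abs(
            np.arange(m)[:, None] - np.arange(m)[None, :])
        back[x] = np.argmin(trans, axis=1)
        D[x] = cost[x] + np.min(trans, axis=1)
    # Backtrack the optimal disparity path.
    d = np.zeros(n, dtype=np.int64)
    d[-1] = np.argmin(D[-1])
    for x in range(n - 1, 0, -1):
        d[x - 1] = back[x, d[x]]
    return d

# Toy cost: true disparity 2 everywhere except a noisy pixel that DP smooths over.
cost = np.full((5, 4), 5.0)
cost[:, 2] = 0.0                 # disparity 2 is cheap everywhere
cost[2] = [0.0, 5.0, 1.0, 5.0]   # the noisy pixel alone would prefer disparity 0
print(dp_scanline(cost, smooth=2.0))  # -> [2 2 2 2 2]
```

The smoothness penalty makes the noisy pixel keep disparity 2, whereas per-pixel winner-take-all would jump to 0 there.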
Three views. Matches between points in the first two images can be checked by reprojecting the corresponding three-dimensional point into the third image.