Shape from Stereo: Disparity between two images; Photogrammetry; Finding Corresponding Points (correlation-based and feature-based methods)

2 Introduction  We can see objects in depth by exploiting the difference between the images in our left and right eyes.  Stereo is one of many depth cues, but the easiest to understand.  Points on the surfaces of objects are imaged in different relative positions depending on their distances from the viewer.

3 Disparity between the two images  Suppose that we rigidly attach two cameras to each other so that their optical axes are parallel and separated by a distance T. The line connecting the lens centers is called the baseline.  Assume that the baseline is perpendicular to the optical axes, and orient the x-axis so that it is parallel to the baseline.

4 Disparity between the two images

5  From similar triangles, a point at distance Z is imaged with disparity d = fT/Z, so distance is inversely proportional to disparity. (The distance to near objects can therefore be measured accurately, while that to far objects cannot.)  The disparity is directly proportional to T, the distance between lens centers. (The accuracy of the depth determination increases with increasing baseline T. Unfortunately, as the separation of the cameras increases, the two images become less similar.)  The disparity is also proportional to the effective focal distance f, because the images are magnified as the focal length is increased.
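The inverse relation between distance and disparity can be sketched as follows. The focal length and baseline values are made-up example numbers, not from the slides, and all quantities are assumed to be in the same metric units:

```python
import numpy as np

f = 8.0      # assumed effective focal distance, mm
T = 120.0    # assumed baseline, mm

def depth_from_disparity(d):
    """Invert d = f*T/Z: distance is inversely proportional to disparity."""
    d = np.asarray(d, dtype=float)
    return f * T / d

# Near objects (large disparity) are measured more accurately than far
# ones: the same disparity error changes Z far more when d is small.
print(depth_from_disparity([4.0, 2.0, 0.5]))   # -> 240, 480, 1920 mm
```

Halving the disparity doubles the estimated distance, which is why depth resolution degrades quickly for distant objects.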

7 Disparity between the two images  A point in the environment visible from both camera stations gives rise to a pair of image points called a conjugate pair.  Note that the point in the right image corresponding to a specified point in the left image must lie somewhere on a particular line, because the two have the same y-coordinate. This line is the epipolar line.

8 Photogrammetry  In practice, the two cameras used to obtain a stereo pair will not be aligned exactly, as we have assumed so far in our simplified analysis.  It is difficult to arrange for the optical axes to be exactly parallel and for the baseline to be exactly perpendicular to the optical axes.  In fact, if the two cameras are to be exposed to more or less the same collection of objects, they may have to be turned toward each other.

9 Photogrammetry  One of the most important practical applications of stereo is in photogrammetry. In this field the shape of the surface of an object is determined from overlapping photographs taken by carefully calibrated cameras.  Adjacent pairs of photographs are presented to the left and right eyes in a device called a stereo comparator, which makes it possible for an observer to accurately measure the disparity of identifiable points on the surface.  We must determine the relation between the cameras' positions and orientations when the exposures were made. This process, called relative orientation, determines the transformation between the two coordinate systems.

10 Photogrammetry  The transformation between the two camera stations can be treated as a rigid body motion and decomposed into a rotation and a translation.  If r_l = (x_l, y_l, z_l)^T is the position of P measured in the left camera coordinate system and r_r = (x_r, y_r, z_r)^T is the position of the same point measured in the right camera coordinate system, then r_r = R r_l + r_0,  where R is a 3x3 orthonormal matrix representing the rotation and r_0 is an offset vector corresponding to the translation. R^T R = I, where I is the 3x3 identity matrix.
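The rigid-motion relation r_r = R r_l + r_0 can be illustrated with a small sketch. The rotation (90 degrees about the z-axis) and the offset vector are assumed example values, not taken from the slides:

```python
import numpy as np

# Assumed example rotation: 90 degrees about the z-axis.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
r0 = np.array([10.0, 0.0, 0.0])   # assumed offset between camera stations

# R is orthonormal: R^T R = I.
assert np.allclose(R.T @ R, np.eye(3))

r_l = np.array([1.0, 2.0, 3.0])   # point P in left-camera coordinates
r_r = R @ r_l + r0                # the same point in right-camera coordinates
print(r_r)                        # -> [ 8.  1.  3.]
```

The orthonormality check R^T R = I is what distinguishes a genuine rotation from an arbitrary linear map; relative orientation consists of estimating R and r_0 from corresponding image measurements.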

11 Finding Corresponding Points  We now consider the correspondence problem: determining which point in one image corresponds to a given point in the other image. Correlation-based methods Feature-based methods

12 Correlation Based Stereo Methods  In correlation-based methods, depth is computed at each pixel.  A gray-level patch around a pixel in the left image is correlated with candidate patches in the right image, and the disparity of the best match is taken.

13 Algorithm CORR-MATCHING The input is a stereo pair of images Il (left) and Ir (right). Let pl and pr be pixels in the left and right image, 2W+1 the width (in pixels) of the correlation window, R(pl) the search region in the right image associated with pl, and ψ(u,v) a function of two pixel values u, v. For each pixel pl = [i, j]^T of the left image: 1. for each displacement d = [d1, d2]^T ∈ R(pl) compute c(d) = Σ_{k=-W..W} Σ_{l=-W..W} ψ( Il(i+k, j+l), Ir(i+k-d1, j+l-d2) ); 2. the disparity of pl is the vector that maximizes c(d) over R(pl). The output is an array of disparities (the disparity map), one per each pixel of Il.

14 Two widely adopted choices for the function ψ(u,v) are ψ(u,v) = u·v, which yields the cross-correlation between the window in the left image and the search region in the right image, and ψ(u,v) = −(u−v)^2, which yields the so-called SSD (sum of squared differences) or block matching (maximizing c(d) then minimizes the sum of squared differences).
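The CORR-MATCHING algorithm above, with the SSD choice of ψ, can be sketched in a few lines. This is a minimal illustration, not an efficient implementation; it assumes rectified images, so the search region R(pl) reduces to horizontal displacements d = (d1, 0) with 0 ≤ d1 ≤ d_max, and the window size and disparity range are arbitrary example values:

```python
import numpy as np

def corr_matching(Il, Ir, W=2, d_max=8, psi=lambda u, v: -(u - v) ** 2):
    """CORR-MATCHING sketch with psi(u,v) = -(u-v)^2, i.e. SSD as a
    score to maximize. Assumes rectified images: the search region is a
    horizontal segment along the epipolar line. Returns one horizontal
    disparity per left-image pixel (0 at unmatched borders)."""
    H, Wd = Il.shape
    disp = np.zeros((H, Wd), dtype=int)
    for i in range(W, H - W):
        for j in range(W + d_max, Wd - W):
            patch_l = Il[i - W:i + W + 1, j - W:j + W + 1]
            best, best_d = -np.inf, 0
            for d in range(d_max + 1):           # candidate displacements
                patch_r = Ir[i - W:i + W + 1, j - d - W:j - d + W + 1]
                c = psi(patch_l, patch_r).sum()  # c(d) over the window
                if c > best:
                    best, best_d = c, d
            disp[i, j] = best_d
    return disp

# Toy check: a random left image, and a right image that is the left
# image shifted so every point has a known true disparity of 3 pixels.
rng = np.random.default_rng(0)
Il = rng.random((20, 30))
Ir = np.roll(Il, -3, axis=1)                     # x_r = x_l - d_true
disp = corr_matching(Il, Ir, W=2, d_max=6)
print(np.unique(disp[2:18, 8:25]))               # wrap-free interior -> [3]
```

With ψ = u·v instead, textured regions with high overall brightness can spuriously dominate, which is why normalized cross-correlation or SSD is usually preferred in practice.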