Introduction to 3D Imaging: Perceiving 3D from 2D Images
Presentation transcript:

1 Introduction to 3D Imaging: Perceiving 3D from 2D Images
How can we derive 3D information from one or more 2D images? There have been two approaches:
1. Intrinsic images: a 2D representation that stores some 3D properties of the scene.
2. 3D shape from X: methods of inferring 3D depth information from various sources.

2 What can you determine about
1. the sizes of objects,
2. the distances of objects from the camera?
What knowledge do you use to analyze this image?

3 What objects are shown in this image? How can you estimate distance from the camera? What feature changes with distance?

4 Intrinsic Images: 2.5-D
The idea of intrinsic images is to label features of a 2D image with information that tells us something about the 3D structure of the scene. (Figure: an example image with an occluding edge and a convex edge labeled.)

5 Contour Labels for Intrinsic Images
- convex crease (+)
- concave crease (-)
- blade (>)
- limb (>>)
- shadow (S)
- illumination boundary (I)
- reflectance boundary (M)

6 Labeling Simple Line Drawings
Huffman and Clowes showed that blocks-world drawings can be labeled (with +, -, >) based on real-world constraints. Labeling a simple blocks-world image is a consistent labeling problem! Waltz extended the work to cracks and shadows and developed one of the first discrete relaxation algorithms, known as Waltz filtering.
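To make discrete relaxation concrete, here is a minimal Python sketch (not from the slides; the function name, toy label sets, and compatibility test are illustrative assumptions). Each node of a constraint graph, such as a junction in a line drawing, starts with a set of candidate labels, and a label is repeatedly discarded if some neighboring node has no surviving label compatible with it.

```python
def discrete_relaxation(domains, neighbors, consistent):
    """Prune candidate labels until every surviving label at a node is
    supported by at least one surviving label at each neighboring node.

    domains:    dict node -> set of candidate labels (modified in place)
    neighbors:  dict node -> list of adjacent nodes
    consistent: consistent(node_a, label_a, node_b, label_b) -> bool
    """
    changed = True
    while changed:
        changed = False
        for node, labels in domains.items():
            for label in list(labels):
                # keep the label only if every neighbor can agree with it
                supported = all(
                    any(consistent(node, label, nb, other) for other in domains[nb])
                    for nb in neighbors[node]
                )
                if not supported:
                    labels.discard(label)
                    changed = True
    return domains

# Toy usage: two junctions sharing one edge; a "label" is just the mark on
# that edge, and two labels are compatible when they are equal.
domains = {"J1": {"+", "-", ">"}, "J2": {"+"}}
neighbors = {"J1": ["J2"], "J2": ["J1"]}
same = lambda na, la, nb, lb: la == lb
print(discrete_relaxation(domains, neighbors, same))  # J1 shrinks to {'+'}
```

Real Waltz filtering applies this same pruning loop to junction labelings drawn from the Huffman-Clowes catalogue, where two junctions are compatible when they assign the same label to their shared line segment.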

7 Problems with this Approach
Research on how to do these labelings was confined to perfect blocks-world images. There was no way to extend it to real images with missing segments, broken segments, disconnected junctions, etc. It led some groups down the wrong path for a while.

8 3D Shape from X
Mainly research: shading, silhouette, texture.
Used in practice: stereo, light striping, motion.

9 Perspective Imaging Model: 1D
(Figure: light from a 3D object point B passes through the camera lens at the center of projection O onto the real image plane behind the lens; the front image plane, at focal distance f in front of O, is the one we use and carries the image of B.)
By similar triangles,

    xi / f = xc / zc

10 Perspective in 2D (Simplified)
(Figure: a 3D object point P = (xc, yc, zc) = (xw, yw, zw) projects along a ray through the camera center onto image point P′ = (xi, yi, f), where the image plane lies at focal distance f along the optical axis Zc.)
Here camera coordinates equal world coordinates. By similar triangles,

    xi / f = xc / zc        yi / f = yc / zc

so

    xi = (f/zc) * xc        yi = (f/zc) * yc
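As a quick check of these equations, here is a minimal Python sketch (the function name and example numbers are mine, not from the slides):

```python
def project_point(xc, yc, zc, f):
    """Pinhole perspective projection of a camera-frame point (xc, yc, zc)
    onto the front image plane at focal distance f:
    xi = (f/zc)*xc, yi = (f/zc)*yc."""
    if zc <= 0:
        raise ValueError("point must be in front of the camera (zc > 0)")
    return (f / zc) * xc, (f / zc) * yc

# Example: a point 2 m in front of a camera with f = 0.05 m.
xi, yi = project_point(0.4, 0.1, 2.0, 0.05)   # -> (0.01, 0.0025)
```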

11 3D from Stereo
(Figure: a single 3D point projected into a left image and a right image.)
Disparity: the difference in image location of the same 3D point when projected under perspective into two different cameras:

    d = xleft - xright

12 Depth Perception from Stereo
Simple model: parallel optic axes. (Figure: left camera L and right camera R, each with focal length f, are separated by baseline b along the X axis and view a 3D point P = (x, z); xl and xr are the image-plane coordinates of P in the left and right images. The y-axis is perpendicular to the page.)
By similar triangles,

    xl / f = x / z        xr / f = (x - b) / z        yl / f = yr / f = y / z

Subtracting the first two relations gives the disparity: xl - xr = f*b / z.

13 Resultant Depth Calculation
For stereo cameras with parallel optical axes, focal length f, baseline b, and corresponding image points (xl, yl) and (xr, yr) with disparity d:

    z = f*b / (xl - xr) = f*b / d
    x = xl*z/f   or   b + xr*z/f
    y = yl*z/f   or   yr*z/f

This method of determining depth from disparity is called triangulation.
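A direct transcription of these formulas into Python (a sketch; the function name and the example numbers are mine):

```python
def triangulate_parallel(xl, yl, xr, f, b):
    """Recover (x, y, z) for a parallel-axis stereo pair from corresponding
    image points (xl, yl) and (xr, yr = yl), focal length f, and baseline b."""
    d = xl - xr                        # disparity
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of both cameras")
    z = f * b / d                      # depth from disparity
    x = xl * z / f                     # equivalently b + xr * z / f
    y = yl * z / f
    return x, y, z

# Example: f = 500 (pixels), b = 0.1 m, disparity of 25 pixels -> z = 2 m.
print(triangulate_parallel(xl=125.0, yl=50.0, xr=100.0, f=500.0, b=0.1))
# -> (0.5, 0.2, 2.0)
```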

14 Finding Correspondences
If the correspondence is correct, triangulation works VERY well. But correspondence finding is not perfectly solved for the general stereo problem. For some very specific applications, it can be solved for those specific kinds of images, e.g., the windshield of a car.

15 Three Main Matching Methods
1. Cross-correlation using small windows (gives dense matches).
2. Symbolic feature matching, usually using segments/corners (gives sparse matches).
3. Use the newer interest operators, e.g., SIFT (sparse matches).
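To illustrate method 1, here is a minimal Python/NumPy sketch (names and parameters are mine): for a pixel in the left image, it slides a small window along the same row of the right image, which is the epipolar line for a rectified pair, and returns the disparity with the highest normalized cross-correlation.

```python
import numpy as np

def match_window(left, right, row, col, half=3, max_disp=64):
    """Return the disparity in [0, max_disp] whose right-image window best
    matches the left-image window centered at (row, col), using normalized
    cross-correlation. Images are 2D float arrays from a rectified pair;
    the windows are assumed to lie fully inside both images (no border handling)."""
    win_l = left[row - half:row + half + 1, col - half:col + half + 1]
    win_l = win_l - win_l.mean()
    best_d, best_score = 0, -np.inf
    for d in range(0, max_disp + 1):
        c = col - d                          # candidate column in the right image
        if c - half < 0:
            break
        win_r = right[row - half:row + half + 1, c - half:c + half + 1]
        win_r = win_r - win_r.mean()
        denom = np.sqrt((win_l ** 2).sum() * (win_r ** 2).sum())
        if denom == 0:
            continue                         # textureless window, skip
        score = (win_l * win_r).sum() / denom
        if score > best_score:
            best_d, best_score = d, score
    return best_d, best_score
```

In practice one would compute this for every pixel and add checks such as left-right consistency; the sketch shows only the core window search.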

16 Epipolar Geometry Constraint: 1. Normal Pair of Images
(Figure: camera centers C1 and C2, separated by baseline b, view a 3D point P whose images are P1 and P2; C1, C2, and P define the epipolar plane.)
The epipolar plane cuts through the image planes, forming two epipolar lines. The match for P1 (or P2) in the other image must lie on the same epipolar line.

17 Epipolar Geometry: General Case
(Figure: in the general case, cameras C1 and C2 view a 3D point P with images P1 and P2; each image contains an epipole, e1 or e2, through which its epipolar lines pass.)

18 Constraints
(Figure: cameras C1 and C2 with epipoles e1 and e2 viewing points P and Q.)
1. Epipolar constraint: matching points lie on corresponding epipolar lines.
2. Ordering constraint: matches usually appear in the same order along corresponding epipolar lines (see the sketch below).
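As an illustration of the ordering constraint only (not part of the slides; names are mine), this small Python sketch checks whether a set of putative matches along a pair of corresponding epipolar lines preserves left-to-right order.

```python
def preserves_ordering(matches):
    """matches: list of (x_left, x_right) positions of putative matches along a
    pair of corresponding epipolar lines. Returns True if sorting by the left
    coordinate leaves the right coordinates in non-decreasing order, i.e. the
    ordering constraint holds."""
    rights = [xr for _, xr in sorted(matches)]
    return all(a <= b for a, b in zip(rights, rights[1:]))

print(preserves_ordering([(10, 8), (25, 20), (40, 37)]))   # True
print(preserves_ordering([(10, 20), (25, 8)]))             # False: order flips
```

The constraint can be violated, for example by a thin object in front of a background surface, so real matchers treat it as a soft check rather than a hard rule.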

19 Structured Light
3D data can also be derived using a single camera and a light source that can produce stripe(s) on the 3D object. (Figure: a light source casts a light stripe across the object, which is viewed by a camera.)

20 Structured Light 3D Computation
(Figure: the camera center is at the origin (0, 0, 0); the light source is offset by baseline b along the x-axis and projects its plane of light at angle θ to that axis; a 3D point (x, y, z) on the stripe images to (x′, y′, f) in the image plane.)
The 3D point is recovered from its image point by

    [x y z] = ( b / (f cot θ - x′) ) [x′ y′ f]
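A direct transcription of this formula into Python (a sketch; function name and example numbers are mine):

```python
import math

def light_stripe_point(xp, yp, f, b, theta):
    """Recover the 3D point (x, y, z) seen at image point (xp, yp) when the
    light plane, offset by baseline b along the x-axis, makes angle theta
    (radians) with that axis and f is the focal distance. Implements
    [x y z] = (b / (f*cot(theta) - xp)) * [xp yp f]."""
    denom = f / math.tan(theta) - xp      # f*cot(theta) - x'
    if abs(denom) < 1e-9:
        raise ValueError("degenerate geometry (denominator near zero)")
    scale = b / denom
    return scale * xp, scale * yp, scale * f

# Example with assumed numbers: f = 100, b = 50, theta = 45 degrees.
print(light_stripe_point(xp=20.0, yp=10.0, f=100.0, b=50.0, theta=math.radians(45)))
# -> (12.5, 6.25, 62.5)
```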

21 Depth from Multiple Light Stripes What are these objects?

22 Our (former) System: 4-camera light-striping stereo
(Figure: a projector casts light stripes onto a 3D object on a rotation table, viewed by four cameras.)