Review Previous section: – Feature detection and matching – Model fitting and outlier rejection

Project 2 questions?

Review: Interest points Keypoint detection: repeatable and distinctive – Corners, blobs, stable regions – Harris, DoG, MSER

Harris Detector [Harris88]. Second moment matrix M = g(σ) * [I_x², I_xI_y; I_xI_y, I_y²]. Steps: 1. Image derivatives I_x, I_y. 2. Squares of derivatives I_x², I_y², I_xI_y. 3. Gaussian filter: g(I_x²), g(I_y²), g(I_xI_y). 4. Cornerness function (large when both eigenvalues are strong): har = det(M) − α·trace(M)² = g(I_x²)·g(I_y²) − [g(I_xI_y)]² − α·[g(I_x²) + g(I_y²)]². 5. Non-maximum suppression (optionally, blur first).
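A minimal NumPy/SciPy sketch of these five steps; the smoothing scale sigma, the constant α = 0.04, the non-maximum-suppression window, and the threshold are illustrative assumptions rather than values fixed by the slide:

```python
import numpy as np
from scipy import ndimage

def harris_corners(img, sigma=1.0, alpha=0.04, thresh=1e-6):
    """Harris cornerness map following the five steps on the slide."""
    # 1. Image derivatives
    Ix = ndimage.sobel(img, axis=1, mode='reflect')
    Iy = ndimage.sobel(img, axis=0, mode='reflect')
    # 2. Squares (products) of derivatives
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    # 3. Gaussian-weighted sums: entries of the second moment matrix
    Sxx = ndimage.gaussian_filter(Ixx, sigma)
    Syy = ndimage.gaussian_filter(Iyy, sigma)
    Sxy = ndimage.gaussian_filter(Ixy, sigma)
    # 4. Cornerness: det(M) - alpha * trace(M)^2
    har = Sxx * Syy - Sxy ** 2 - alpha * (Sxx + Syy) ** 2
    # 5. Non-maximum suppression: keep local maxima above a threshold
    local_max = ndimage.maximum_filter(har, size=5)
    corners = (har == local_max) & (har > thresh)
    return har, np.argwhere(corners)
```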

Review: Local Descriptors. Most features can be thought of as templates, histograms (counts), or combinations. Most available descriptors focus on edge/gradient information – capture texture information – color rarely used. (K. Grauman, B. Leibe)

Review: Hough transform. Each point (x, y) in image space maps to a line in (m, b) parameter space, and each line y = mx + b in image space maps to a single point (m, b). Slide from S. Savarese.
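As a rough illustration, the transform can be implemented as a vote accumulator. The sketch below uses the (ρ, θ) line parameterization rather than the (m, b) one drawn on the slide, since the slope m is unbounded for near-vertical lines; the grid resolutions are arbitrary choices:

```python
import numpy as np

def hough_lines(edge_points, img_diag, n_theta=180, n_rho=200):
    """Accumulate votes for lines rho = x*cos(theta) + y*sin(theta).

    edge_points: iterable of (x, y); img_diag: image diagonal length in pixels.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-img_diag, img_diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int64)
    for x, y in edge_points:
        # Each edge point votes for every line passing through it
        rho_vals = x * np.cos(thetas) + y * np.sin(thetas)
        rho_idx = np.digitize(rho_vals, rhos) - 1
        acc[rho_idx, np.arange(n_theta)] += 1
    return acc, rhos, thetas   # peaks in acc correspond to detected lines
```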

Review: RANSAC Algorithm: 1. Sample (randomly) the number of points required to fit the model (#=2) 2. Solve for model parameters using samples 3. Score by the fraction of inliers within a preset threshold of the model Repeat 1-3 until the best model is found with high confidence
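A hedged sketch of these three steps for 2D line fitting (two points per minimal sample, matching the "#=2" on the slide); the iteration count and inlier threshold are illustrative assumptions:

```python
import numpy as np

def ransac_line(points, n_iters=1000, thresh=1.0, rng=None):
    """Fit a 2D line with RANSAC; points is an (N, 2) array."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(n_iters):
        # 1. Sample (randomly) the minimal number of points needed (2 for a line)
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        # 2. Solve for model parameters: line through p and q as ax + by + c = 0
        a, b = q[1] - p[1], p[0] - q[0]
        c = -(a * p[0] + b * p[1])
        norm = np.hypot(a, b)
        if norm == 0:
            continue  # degenerate sample (duplicate point)
        # 3. Score by the number of inliers within the preset threshold
        dists = np.abs(points @ np.array([a, b]) + c) / norm
        inliers = np.flatnonzero(dists < thresh)
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (a / norm, b / norm, c / norm)
    return best_model, best_inliers
```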

Review: 2D image transformations Szeliski 2.1

Stereo: Epipolar geometry. CS143, Brown. James Hays. Slides by Kristen Grauman.

Multiple views: stereo vision, structure from motion, optical flow. (Figures from Hartley and Zisserman, and Lowe.)

Why multiple views? Structure and depth are inherently ambiguous from single views. Images from Lana Lazebnik

Why multiple views? Structure and depth are inherently ambiguous from single views. (Figure: scene points P1 and P2 on the same viewing ray through the optical center project to the same image point, P1’ = P2’.)

What cues help us to perceive 3d shape and depth?

Shading [Figure from Prados & Faugeras 2006]

Focus/defocus [figs from H. Jin and P. Favaro, 2002]. Images from the same point of view with different camera parameters yield 3d shape / depth estimates.

Texture [From A.M. Loh, The recovery of 3-D structure using visual texture patterns, PhD thesis].

Perspective effects Image credit: S. Seitz

Motion. Figures from L. Zhang.

Occlusion. René Magritte's famous painting Le Blanc-Seing (literal translation: "The Blank Signature") roughly translates as "free hand" or "free rein".

Estimating scene shape. "Shape from X": Shading, Texture, Focus, Motion… Stereo: shape from "motion" between two views – infer the 3d shape of the scene from two (or multiple) images taken from different viewpoints. Main idea (figure): a scene point seen from two optical centers, with its projections on the two image planes.

Outline Human stereopsis Stereograms Epipolar geometry and the epipolar constraint –Case example with parallel optical axes –General case with calibrated cameras

Human eye (fig from Shapiro and Stockman). Rough analogy with the human visual system: Pupil/Iris – control the amount of light passing through the lens; Retina – contains the sensor cells, where the image is formed; Fovea – highest concentration of cones.

Human stereopsis: disparity. Human eyes fixate on a point in space – they rotate so that the corresponding images form at the centers of the foveae.

Human stereopsis: disparity. Disparity occurs when the eyes fixate on one object; other objects appear at different visual angles.

Human stereopsis: disparity. Disparity: d = r − l = D − F. (Forsyth & Ponce)

Random dot stereograms Julesz 1960: Do we identify local brightness patterns before fusion (monocular process) or after (binocular)? To test: pair of synthetic images obtained by randomly spraying black dots on white objects

Random dot stereograms Forsyth & Ponce

Random dot stereograms

When viewed monocularly, they appear random; when viewed stereoscopically, we see 3d structure. Conclusion: human binocular fusion is not directly associated with the physical retinas; it must involve the central nervous system – an imaginary "cyclopean retina" that combines the left and right image stimuli as a single unit. High-level scene understanding is not required for stereo.

Stereo photography and stereo viewers. Invented by Sir Charles Wheatstone, 1838. Take two pictures of the same subject from two slightly different viewpoints and display them so that each eye sees only one of the images. (Image from fisher-price.com)

Public Library, Stereoscopic Looking Room, Chicago, by Phillips, 1923

Autostereograms. Exploit disparity as a depth cue using a single image (single-image random dot stereogram, single-image stereogram). Images from magiceye.com.

Autostereograms. Images from magiceye.com.

Estimating depth with stereo. Stereo: shape from "motion" between two views. We'll need to consider: info on camera pose ("calibration"), and image point correspondences. (Figure: scene point, optical centers, image planes.)

Stereo vision: two cameras with simultaneous views, or a single moving camera and a static scene.

Camera parameters. Intrinsic parameters: image coordinates relative to the camera → pixel coordinates (focal length, pixel sizes (mm), image center point, radial distortion parameters). Extrinsic parameters: camera frame 1 → camera frame 2 (rotation matrix and translation vector). We'll assume for now that these parameters are given and fixed.
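A small pinhole-model sketch of how the intrinsics and extrinsics compose to map a world point to pixel coordinates, ignoring radial distortion; all matrix entries below are made-up example values, not calibration results from the lecture:

```python
import numpy as np

# Intrinsics K: focal length (in pixels) and image center (example values)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsics: rotation R and translation t from world frame to camera frame
# (example: camera at the world origin looking down the z axis)
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])

def project(X_world):
    """Project a 3D world point to pixel coordinates: x ~ K [R | t] X."""
    X_cam = R @ X_world + t      # world frame -> camera frame (extrinsics)
    x = K @ X_cam                # camera frame -> homogeneous pixel coords (intrinsics)
    return x[:2] / x[2]          # perspective divide

print(project(np.array([0.1, -0.2, 2.0])))   # e.g. a point 2 m in front of the camera
```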

Outline Human stereopsis Stereograms Epipolar geometry and the epipolar constraint –Case example with parallel optical axes –General case with calibrated cameras

Geometry for a simple stereo system First, assuming parallel optical axes, known camera parameters (i.e., calibrated cameras):

(Figure: simple stereo setup – baseline between the left and right optical centers, focal length, world point P, left and right image points, depth of P.)

Geometry for a simple stereo system. Assume parallel optical axes and known camera parameters (i.e., calibrated cameras). What is the expression for Z? Similar triangles (p_l, P, p_r) and (O_l, P, O_r) give (T − (x_l − x_r)) / (Z − f) = T / Z, so Z = f·T / (x_l − x_r), where T is the baseline, f the focal length, and x_l − x_r the disparity.

Depth from disparity. Given image I(x, y) and image I′(x′, y′), the disparity map D(x, y) relates corresponding points by (x′, y′) = (x + D(x, y), y). So if we could find the corresponding points in two images, we could estimate relative depth…
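Under the simple parallel-axis geometry above, converting a disparity map to depth is one line of arithmetic. A minimal sketch, assuming f is given in pixels, B (the baseline) in metres, and that zero disparity means "no estimate":

```python
import numpy as np

def depth_from_disparity(D, f, B):
    """Convert a disparity map D(x, y) in pixels to depth Z = f * B / d.

    f: focal length in pixels; B: baseline in metres (both assumed known from calibration).
    """
    Z = np.full_like(D, np.inf, dtype=float)
    valid = D > 0                    # zero/negative disparity -> no depth estimate
    Z[valid] = f * B / D[valid]
    return Z
```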

Basic stereo matching algorithm If necessary, rectify the two stereo images to transform epipolar lines into scanlines For each pixel x in the first image – Find corresponding epipolar scanline in the right image – Examine all pixels on the scanline and pick the best match x’ – Compute disparity x-x’ and set depth(x) = fB/(x-x’)
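A minimal (and deliberately slow) sketch of this scanline search on already-rectified grayscale images, using an SSD window cost; the window size and maximum disparity are illustrative assumptions:

```python
import numpy as np

def block_matching(left, right, max_disp=64, win=5):
    """Per-pixel disparity by scanning the corresponding row of the right image."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
            costs = []
            for d in range(max_disp + 1):   # candidate matches x' = x - d on the same scanline
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1].astype(float)
                costs.append(np.sum((ref - cand) ** 2))   # SSD matching cost
            disp[y, x] = int(np.argmin(costs))  # disparity x - x'; depth(x) = f*B / disparity
    return disp
```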

Correspondence search. Slide a window along the right scanline and compare the contents of that window with the reference window in the left image. Matching cost: SSD or normalized correlation. (Figure: left/right scanlines, matching cost plotted against disparity.)

Correspondence search: SSD. (Figure: left/right scanlines with the SSD cost curve.)

Correspondence search: normalized correlation. (Figure: left/right scanlines with the correlation curve.)
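The two matching costs from the preceding slides as small functions over equal-sized patches; a sketch, with the epsilon guard against flat (zero-variance) patches added as an assumption:

```python
import numpy as np

def ssd(patch_a, patch_b):
    """Sum of squared differences: lower means a better match."""
    return np.sum((patch_a.astype(float) - patch_b.astype(float)) ** 2)

def normalized_correlation(patch_a, patch_b, eps=1e-8):
    """Zero-mean normalized cross-correlation: values near 1 mean a better match."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)) + eps
    return np.sum(a * b) / denom
```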

Effect of window size (W = 3 vs. W = 20). Smaller window: + more detail, − more noise. Larger window: + smoother disparity maps, − less detail.

Failures of correspondence search Textureless surfaces Occlusions, repetition Non-Lambertian surfaces, specularities

Results with window search. (Figure: data, window-based matching result, ground truth.)

How can we improve window-based matching? So far, matches are independent for each point What constraints or priors can we add?

Summary Depth from stereo: main idea is to triangulate from corresponding image points. Epipolar geometry defined by two cameras –We’ve assumed known extrinsic parameters relating their poses Epipolar constraint limits where points from one view will be imaged in the other –Makes search for correspondences quicker Terms: epipole, epipolar plane / lines, disparity, rectification, intrinsic/extrinsic parameters, essential matrix, baseline

Coming up –Stereo Algorithms –Structure from Motion