
**875: Recent Advances in Geometric Computer Vision & Recognition**

Jan-Michael Frahm Spring 2014

Introductions

**Grade Requirements**

Presentation of 2 papers in class:

- 30 min talk, 10 min questions
- Papers for selection must come from top journals (IJCV, PAMI, CVIU, IVCJ) or top conferences: CVPR (2010, 2011), ICCV (2011), ECCV (2010); approval is needed for all other venues
- Final project: evaluation and extension of a recent method from the above

**Grading**

- 20% first presentation
- 20% second presentation
- 30% final project
- 30% attendance & class participation

**Schedule**

- Jan. 7th: Introduction
- Jan. 7th: Uncertainty in Stereo (guest Philippos Mordohai; substitute for the Jan. 13th class)
- Jan. 15th: Large-scale image localization, basic concepts; first paper selection (large-scale localization)
- Jan. 20th: MLK holiday, no class
- Jan. 22nd–29th: Large-scale localization, basic concepts
- Feb. 3rd: 1st round of presentations starts
- Mar. 10th, 12th: Spring break (no class)
- Mar. 17th: Modeling dynamic objects/scenes, basic concepts; second paper selection; final project definition
- Mar. 19th: Modeling dynamic objects
- Mar. 24th: 2nd round of presentations starts
- Apr. 21st, 23rd: Final project presentations

**How to give a great presentation**

Structure of the talk:

- Motivation (motivate and explain the problem)
- Overview
- Related work (short, concise discussion)
- Approach
- Experiments
- Conclusion and future work

**How to give a great presentation**

- Use large enough fonts
- At most 5–6 one-line bullet items on a slide
- Keep it simple: no complex formulas in your talk
- (Links on the original slide: examples of bad PowerPoint slides; a how-to for presentations)

**How to give a great presentation**

- Abstract the material of the talk: provide understanding beyond the details
- Use pictures to illustrate: find pictures on the internet, or create a graphic (in PPT or a graph tool)
- Animate complex pictures

**How to give a good presentation**

- Avoid bad color schemes: no red on blue, it looks awful
- Avoid using a laser pointer (especially if you are nervous); add pointing elements to your presentation instead
- Practice to stay within your time!
- Don't rush through the talk!

**Brush up on Stereo Reconstruction**

**Stereo**

Stereo: extraction of 3D information from 2D images. (Figure: images → 3D point cloud.)

Binocular stereo: given a calibrated binocular stereo pair, fuse it to produce a depth image. Humans can do it. Stereograms: invented by Sir Charles Wheatstone, 1838.

**Depth Recovery by Stereo**

(Figure: search space of depth hypotheses d1–d7 along the epipolar line; photo-consistency between the reference image and the matching image.)

**Depth Recovery from Stereo**

Matching cost: pixel similarity, measured by color differences. (Figure: search space of depth hypotheses d1–d9 along the epipolar line between reference and matching image; ground-truth pixel matching and the resulting depth map.)

**Matching criteria**

- Raw pixel values (correlation)
- Band-pass filtered images [Jones & Malik 92]
- "Corner"-like features [Zhang, …]
- Edges [many people…]
- Gradients [Seitz 89; Scharstein 94]
- Rank statistics [Zabih & Woodfill 94]
- Intervals [Birchfield & Tomasi 96]

Overview of matching metrics and their performance: H. Hirschmüller and D. Scharstein, "Evaluation of Stereo Matching Costs on Images with Radiometric Differences", PAMI 2008. (slide: R. Szeliski)

Adaptive weighting is boundary-preserving, but more costly. (Figure: comparison of aggregation schemes.)


**Simplest Case: Parallel images**

- Image planes of the cameras are parallel to each other and to the baseline
- Camera centers are at the same height
- Focal lengths are the same
- Then the epipolar lines fall along the horizontal scanlines of the images

(slide: S. Lazebnik)

**Essential matrix for parallel images**

Epipolar constraint with R = I and t = (T, 0, 0)ᵀ. (Figure: parallel cameras with image points x and x′ and translation t.)
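
Writing the constraint out for this configuration (a standard derivation, with normalized image points x = (u, v, 1)ᵀ and x′ = (u′, v′, 1)ᵀ) shows why the epipolar lines become horizontal scanlines:

```latex
E = [t]_\times R = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -T \\ 0 & T & 0 \end{pmatrix},
\qquad
x'^{\top} E x = (u',\, v',\, 1) \begin{pmatrix} 0 \\ -T \\ T v \end{pmatrix}
= T\,(v - v') = 0
\;\Rightarrow\; v' = v .
```

Corresponding points must share the same row v, which is exactly the rectified setup of the previous slide.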


**Aggregation Structure**

(Figure: pixelwise matching costs as a function of depth over the search space.)

**Aggregation Structure**

Cost aggregation: cutting the cost volume. (Figure: the cost volume spanning the search space.)

**Aggregation Structure**

Sum of Absolute Differences (SAD): combine the cost of the center pixel with the costs of its neighboring pixels, treating all neighbors equally; the support window lies on a fronto-parallel plane. (Figure: cost volume and the resulting depth map.)
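
The equal-weight (SAD) aggregation above can be sketched as a box filter over each depth slice of the cost volume, followed by a per-pixel argmin. This is a minimal illustration, not code from the slides; `aggregate_sad` and the integral-image trick are my own choices:

```python
import numpy as np

def aggregate_sad(cost_volume, w=2):
    """Equal-weight (SAD) aggregation of a pixelwise cost volume.

    cost_volume: (D, H, W) array of pixelwise costs (e.g. absolute
    color differences), one slice per depth hypothesis.  Each slice is
    box-filtered over a (2w+1)x(2w+1) fronto-parallel window; the depth
    map is the per-pixel argmin over the aggregated slices.
    """
    D, H, W = cost_volume.shape
    k = 2 * w + 1
    agg = np.empty((D, H, W))
    for d in range(D):
        p = np.pad(cost_volume[d], w, mode='edge')
        # box filter via 2D cumulative sums (an integral image)
        ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
        ii[1:, 1:] = p.cumsum(axis=0).cumsum(axis=1)
        agg[d] = ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]
    return agg.argmin(axis=0)
```

Because every neighbor votes equally, an isolated pixelwise outlier is outvoted by its window, which is exactly what the aggregation step is for.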

**Aggregation Structure**

Adaptive weight (Yoon and Kweon, PAMI 2006): combine the weighted cost of the center pixel with the weighted costs of neighboring pixels, with weights derived from color differences and spatial distances. (Figure: cost volume and the resulting depth map.)
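
A sketch of the adaptive-weight idea in (paraphrased) Yoon–Kweon notation, where Δc and Δg are the color difference and spatial distance between pixel p and a neighbor q, and γ_c, γ_p are bandwidth parameters:

```latex
w(p,q) = \exp\!\left( -\,\frac{\Delta c_{pq}}{\gamma_c} \;-\; \frac{\Delta g_{pq}}{\gamma_p} \right),
\qquad
E(p, \bar p_d) = \frac{\sum_{q, \bar q} w(p,q)\, w(\bar p_d, \bar q)\, e(q, \bar q)}
                      {\sum_{q, \bar q} w(p,q)\, w(\bar p_d, \bar q)}
```

Large color differences or spatial distances shrink the weight, so support effectively comes only from neighbors on the same surface, which preserves depth boundaries.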

**Aggregation Structure**

Adaptive weight on an oriented plane instead of a fronto-parallel one (Lu et al., CVPR 2013). (Figure: cost volume and the resulting depth map.)

**Stereo: epipolar geometry**

Match features along epipolar lines. (Figure: epipolar plane, viewing ray, and epipolar line.) (slide: R. Szeliski)

**Your basic stereo algorithm**

This should look familiar...

- For each epipolar line
- For each pixel in the left image: compare with every pixel on the same epipolar line in the right image; pick the pixel with the minimum match cost

Improvement: match windows. (slide: R. Szeliski)
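
A minimal sketch of this window-based search in Python, assuming rectified grayscale images and a SAD window cost (`block_match`, `max_disp`, and `w` are illustrative names, not from the slides):

```python
import numpy as np

def block_match(left, right, max_disp=16, w=2):
    """Brute-force window matching on a rectified stereo pair.

    For each left-image pixel, slide a (2w+1)x(2w+1) window along the
    same scanline of the right image (disparities 0..max_disp) and keep
    the disparity with the minimum SAD cost.
    """
    h, width = left.shape
    disp = np.zeros((h, width), dtype=np.int32)
    L = np.pad(left.astype(np.float32), w, mode='edge')
    R = np.pad(right.astype(np.float32), w, mode='edge')
    for y in range(h):
        for x in range(width):
            ref = L[y:y + 2 * w + 1, x:x + 2 * w + 1]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cand = R[y:y + 2 * w + 1, x - d:x - d + 2 * w + 1]
                cost = float(np.abs(ref - cand).sum())  # SAD window cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

The triple loop makes the cost structure explicit; real implementations vectorize over the image and reuse window sums.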

**Depth Map Computation**

Let N be the image resolution (the total number of pixels). Local methods pick, per pixel, the depth with the minimum cost; global methods additionally model pairwise interactions between pixels. (Figure: complexity illustrated on images of N, aN, and bN pixels.) Scharstein and Szeliski, "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms", IJCV 2002.

**Depth from disparity**

(Figure: similar triangles over focal length f, image points x and x′, baseline B, depth z, camera centers O and O′, and scene point X.) Disparity is inversely proportional to depth!
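
In symbols, the similar triangles in the figure give disparity d = x − x′ = fB/z, so depth follows as a one-liner (an illustrative helper, not from the slides):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Triangulate depth for a rectified stereo pair.

    From similar triangles: d = x - x' = f * B / z, hence z = f * B / d.
    f_px: focal length in pixels; baseline_m: camera separation in
    meters; disparity_px: disparity in pixels (must be > 0).
    """
    return f_px * baseline_m / disparity_px
```

For example, with f = 700 px and B = 0.1 m, a 35 px disparity corresponds to a depth of 2 m.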

**Quadratic precision loss with depth!**

Depth sampling for integer-pixel disparity. (Figure: the depth samples spread out quadratically with depth.)
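
Differentiating the disparity relation z = fB/d from the previous slide makes the quadratic loss explicit:

```latex
\left| \frac{\partial z}{\partial d} \right| = \frac{fB}{d^{2}} = \frac{z^{2}}{fB}
\quad\Rightarrow\quad
\Delta z \approx \frac{z^{2}}{fB}\, \Delta d ,
```

so a fixed disparity quantization step Δd costs four times the depth accuracy at twice the distance.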

Depth sampling for a wider baseline. (Figure: the same disparity step spans a smaller depth interval when the baseline B is larger.)

**Depth sampling is in O(resolution⁶)**

Accuracy is closeness to the solution.

**Finding correspondences**

- Apply a feature matching criterion (e.g., correlation or Lucas-Kanade) at all pixels simultaneously
- Search only over epipolar lines (many fewer candidate positions)

(slide: R. Szeliski)

**Correspondence search**

Slide a window along the right scanline and compare its contents with the reference window in the left image. Matching cost: SSD or normalized correlation. (Figure: left and right scanlines; matching cost as a function of disparity.) (slide: S. Lazebnik)
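
The two matching costs named above can be sketched as small window-cost helpers (illustrative names, not from the slides):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equal-size windows."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return float((d * d).sum())

def ncc(a, b):
    """Normalized cross-correlation between two equal-size windows."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

NCC subtracts the window means and normalizes, so it is invariant to affine intensity changes between the views; SSD is not, which matters when the cameras have different exposure or gain.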

**Correspondence search**

(Figure: left and right scanlines; SSD cost profile.) (slide: S. Lazebnik)

**Correspondence search**

(Figure: left and right scanlines; normalized-correlation cost profile.) (slide: S. Lazebnik)

**Neighborhood size**

- Smaller neighborhood: more details
- Larger neighborhood: fewer isolated mistakes

(Figure: results for w = 3 and w = 20.) (slide: R. Szeliski)

**Failures of correspondence search**

- Occlusions, repetition
- Textureless surfaces
- Non-Lambertian surfaces, specularities

(slide: S. Lazebnik)

**How can we improve window-based matching?**

The similarity constraint is local: each reference window is matched independently. We need to enforce non-local correspondence constraints. (slide: S. Lazebnik)



**Non-local constraints**

- Uniqueness: for any point in one image, there should be at most one matching point in the other image
- Ordering: corresponding points should be in the same order in both views

(Figure: a configuration in which the ordering constraint doesn't hold.) (slide: S. Lazebnik)

**Non-local constraints**

- Uniqueness: for any point in one image, there should be at most one matching point in the other image
- Ordering: corresponding points should be in the same order in both views
- Smoothness: we expect disparity values to change slowly (for the most part)

(slide: S. Lazebnik)

**Multiple-baseline stereo results**

M. Okutomi and T. Kanade, "A Multiple-Baseline Stereo System," IEEE Trans. on Pattern Analysis and Machine Intelligence, 15(4), 1993.

**Plane Sweep Stereo**

- Choose a reference view
- Sweep a family of planes at different depths with respect to the reference camera
- Each plane defines a homography warping each input image into the reference view

(Figure: input images and the reference camera.) R. Collins, "A space-sweep approach to true multi-image matching," CVPR.
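
Under one common sign convention (plane nᵀX = d in the reference-camera frame, with [R | t] mapping reference-camera to source-camera coordinates), the plane-induced homography can be sketched as follows. This is an illustrative helper; conventions vary (other texts use nᵀX + d = 0 and a minus sign), so treat the signs as an assumption:

```python
import numpy as np

def plane_homography(K_ref, K_src, R, t, n, d):
    """Homography induced by the plane n^T X = d (reference frame).

    Maps reference-image pixels to the source image for a sweep plane
    with unit normal n at depth parameter d, where a reference-frame
    point X is seen by the source camera as X_src = R X + t:
        H = K_src (R + t n^T / d) K_ref^{-1}
    """
    n = np.asarray(n, dtype=float).reshape(3, 1)
    t = np.asarray(t, dtype=float).reshape(3, 1)
    return K_src @ (R + t @ n.T / d) @ np.linalg.inv(K_ref)
```

For a point X on the plane, nᵀX/d = 1, so H maps its reference-view projection exactly onto its source-view projection; for points off the plane the warp disagrees, which is what the photo-consistency test detects.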

**Real-time 3D reconstruction from video**

"Real-Time Plane-sweeping Stereo with Multiple Sweeping Directions", CVPR 2007. (Figure: 3D scene and warped images; SAD as similarity, darker is higher similarity.) Sweeping multiple orthogonal directions (a multi-way sweep) gives more precise results.


**3D reconstruction from video**

(Figure: input video frames, view 1 … view N.)

