Structured light and active ranging techniques Class 8.

Presentation transcript:

Structured light and active ranging techniques Class 8

Last Wednesday: stereo. Per-pixel optimization, per-scanline optimization, full-image optimization.
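A minimal Python sketch of the per-pixel (winner-take-all) case, assuming a rectified grayscale pair given as NumPy arrays; the function name, window size and disparity range are illustrative. Per-scanline optimization would replace the final argmin with dynamic programming along each row, and full-image optimization with a global solver such as graph cuts.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def block_match(left, right, max_disp=64, win=5):
        # Per-pixel winner-take-all stereo on a rectified pair: for every pixel,
        # keep the disparity with the lowest windowed sum-of-squared-differences.
        left = left.astype(np.float32)
        right = right.astype(np.float32)
        h, w = left.shape
        cost = np.empty((max_disp, h, w), dtype=np.float32)
        for d in range(max_disp):
            diff = np.full((h, w), 1e6, dtype=np.float32)        # large cost where the shift is invalid
            diff[:, d:] = (left[:, d:] - right[:, :w - d]) ** 2  # squared difference at disparity d
            cost[d] = uniform_filter(diff, size=win)             # aggregate over a win x win window
        return cost.argmin(axis=0)                               # disparity map, one winner per pixel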

Rectification recap (figure): original image pair, planar rectification, polar rectification.

Plane-sweep multi-view matching. Simple algorithm for multiple cameras. No rectification necessary, but also no gain. Doesn’t deal with occlusions. Collins ’96; Roy and Cox ’98 (graph cuts); Yang et al. ’02/’03 (GPU).
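A rough Python sketch of the plane-sweep idea, assuming calibrated grayscale views with shared intrinsics K, known poses (R, t) mapping the reference frame to each other view, fronto-parallel depth planes, and OpenCV for the warps; like the basic algorithm, it ignores occlusions, and all names are illustrative.

    import numpy as np
    import cv2

    def plane_sweep(ref, others, K, poses, depths):
        # Hypothesize fronto-parallel planes at several depths in the reference frame,
        # warp every other view onto each plane via the plane-induced homography,
        # and keep the depth with the best photo-consistency at each pixel.
        h, w = ref.shape
        ref = ref.astype(np.float32)
        best_cost = np.full((h, w), np.inf, dtype=np.float32)
        best_depth = np.zeros((h, w), dtype=np.float32)
        K_inv = np.linalg.inv(K)
        n = np.array([0.0, 0.0, 1.0])                            # normal of the fronto-parallel plane
        for depth in depths:
            cost = np.zeros((h, w), dtype=np.float32)
            for img, (R, t) in zip(others, poses):               # (R, t): reference frame -> this view
                H = K @ (R + np.outer(t, n) / depth) @ K_inv     # homography induced by the plane n.X = depth
                warped = cv2.warpPerspective(img, H, (w, h),
                                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
                cost += np.abs(ref - warped.astype(np.float32))  # photo-consistency error against the reference
            better = cost < best_cost
            best_cost[better] = cost[better]
            best_depth[better] = depth
        return best_depth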

3D photography course schedule (tentative)
Date | Lecture | Exercise
Sept 26 | Introduction | -
Oct. 3 | Geometry & Camera model | Camera calibration
Oct. 10 | Single View Metrology | Measuring in images
Oct. 17 | Feature Tracking/matching (Friedrich Fraundorfer) | Correspondence computation
Oct. 24 | Epipolar Geometry | F-matrix computation
Oct. 31 | Shape-from-Silhouettes (Li Guan) | Visual-hull computation
Nov. 7 | Stereo matching | Project proposals
Nov. 14 | Structured light and active range sensing | Papers
Nov. 21 | Structure from motion | Papers
Nov. 28 | Multi-view geometry and self-calibration | Papers
Dec. 5 | Shape-from-X | Papers
Dec. 12 | 3D modeling and registration | Papers
Dec. 19 | Appearance modeling and image-based rendering | Final project presentations

Today’s class: unstructured light, structured light, time-of-flight. (Some slides from Szymon Rusinkiewicz, Brian Curless.)

A Taxonomy

Unstructured light: project texture to disambiguate stereo.

Space-time stereo Davis, Ramamoorthi, Rusinkiewicz, CVPR’03

Space-time stereo Davis, Ramamoorthi, Rusinkiewicz, CVPR’03

Space-time stereo Zhang, Curless and Seitz, CVPR’03

Space-time stereo results Zhang, Curless and Seitz, CVPR’03
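A minimal Python sketch of the core idea, assuming rectified grayscale frame sequences from the two cameras stacked as (time, height, width) arrays; function and parameter names are mine. The matching cost is summed over time as well as over a spatial window, which is what keeps a small window discriminative when the projected illumination varies from frame to frame.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def spacetime_disparity(left_seq, right_seq, max_disp=64, win=3):
        # Space-time stereo sketch: the cost of disparity d is the SSD summed over
        # the whole frame sequence, then aggregated over a small spatial window.
        left_seq = left_seq.astype(np.float32)
        right_seq = right_seq.astype(np.float32)
        T, h, w = left_seq.shape
        cost = np.empty((max_disp, h, w), dtype=np.float32)
        for d in range(max_disp):
            diff = np.full((h, w), 1e6, dtype=np.float32)                       # invalid shift region
            diff[:, d:] = ((left_seq[:, :, d:] - right_seq[:, :, :w - d]) ** 2).sum(axis=0)
            cost[d] = uniform_filter(diff, size=win)                            # spatial aggregation
        return cost.argmin(axis=0)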

Light Transport Constancy Davis, Yang, Wang, ICCV05

Triangulation

Triangulation: Moving the Camera and Illumination. Moving them independently leads to problems with focus and resolution. Most scanners mount the camera and light source rigidly and move them as a unit.

Triangulation: Moving the Camera and Illumination

(Rioux et al. 87)

Triangulation: Extending to 3D. Possibility #1: add another mirror (flying spot). Possibility #2: project a stripe, not a dot. (Figure: laser stripe projected onto the object, viewed by the camera.)
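A minimal Python sketch of the single-stripe geometry, assuming the camera intrinsics K and the laser plane n·X = d have been calibrated in the camera frame; the numbers in the example are made up.

    import numpy as np

    def triangulate_stripe_point(u, v, K, plane_n, plane_d):
        # Back-project pixel (u, v) to a viewing ray through the camera centre,
        # then intersect the ray with the calibrated laser plane n.X = d.
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in camera coordinates
        scale = plane_d / (plane_n @ ray)                # stretch the ray until it hits the plane
        return scale * ray                               # 3D point in the camera frame

    # Illustrative values only: a pixel on the imaged stripe and a plane tilted towards the camera.
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    point = triangulate_stripe_point(350.0, 260.0, K, np.array([0.8, 0.0, 0.6]), 0.5)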

Triangulation Scanner Issues. Accuracy proportional to working volume (typical is ~1000:1). Scales down to small working volumes (e.g. 5 cm working volume, 50 µm accuracy). Does not scale up (baseline too large…). Two-line-of-sight problem (shadowing from either camera or laser). Triangulation angle: non-uniform resolution if too small, shadowing if too big (useful range: 15°-30°).

Triangulation Scanner Issues. Material properties (dark, specular), subsurface scattering, laser speckle, edge curl, texture embossing.

Space-time analysis Curless ‘95

Space-time analysis Curless ‘95

Projector as camera

Multi-Stripe Triangulation. To go faster, project multiple stripes. But which stripe is which? Answer #1: assume surface continuity (e.g. Eyetronics’ ShapeCam).

Real-time system Koninckx and Van Gool

Multi-Stripe Triangulation. To go faster, project multiple stripes. But which stripe is which? Answer #2: colored stripes (or dots).

Multi-Stripe Triangulation. To go faster, project multiple stripes. But which stripe is which? Answer #3: time-coded stripes.

Time-Coded Light Patterns. Assign each stripe a unique illumination code over time [Posdamer 82]. (Figure axes: space, time.)

Better codes… Gray code: neighbors differ in only one bit.
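A minimal Python sketch of binary-reflected Gray coding for the stripe indices; the pattern count is illustrative. Bit k of the code says whether the stripe is lit in the k-th projected pattern, and because neighboring stripes differ in only one bit, a decoding error at a stripe boundary shifts the index by at most one.

    def gray_encode(index):
        # Binary-reflected Gray code: adjacent stripe indices differ in exactly one bit.
        return index ^ (index >> 1)

    def gray_decode(code):
        # Invert the Gray code by XOR-ing in all right-shifted copies of the code.
        index, shift = code, 1
        while (code >> shift) > 0:
            index ^= code >> shift
            shift += 1
        return index

    # 10 patterns encode 2**10 = 1024 distinct stripes.
    assert all(gray_decode(gray_encode(s)) == s for s in range(1024))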

Poor man’s scanner Bouguet and Perona, ICCV’98

Pulsed Time of Flight. Basic idea: send out a pulse of light (usually laser) and time how long it takes to return.

Pulsed Time of Flight. Advantages: large working volume (up to ~100 m). Disadvantages: not-so-great accuracy (at best ~5 mm); requires getting timing to ~30 picoseconds; does not scale with working volume. Often used for scanning buildings, rooms, archeological sites, etc.
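The ~30 picosecond figure follows directly from the round trip: a range resolution Δd requires timing resolution Δt = 2Δd/c. A quick check in Python:

    c = 3.0e8                      # speed of light in m/s
    delta_d = 0.005                # 5 mm target range accuracy
    delta_t = 2 * delta_d / c      # round-trip timing resolution required
    print(delta_t)                 # about 3.3e-11 s, i.e. roughly 30 picoseconds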

Depth cameras. 2D array of time-of-flight sensors, e.g. Canesta’s CMOS 3D sensor. Jitter too big on a single measurement, but averages out over many (10,000 measurements → 100× improvement).
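The factor of 100 is just averaging: assuming independent per-measurement jitter, the error of the mean falls as 1/√N, and √10,000 = 100.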

Depth cameras. Superfast shutter + standard CCD: cut the light off while the pulse is coming back, then I ~ Z; but I ~ albedo too (use an unshuttered reference view to normalize). 3DV’s Z-cam.
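A rough Python sketch of the shuttered-camera normalization under an idealized linear gate (the response of a real camera such as the Z-cam is more complicated); the function name and the z_near/z_far parameters are illustrative.

    import numpy as np

    def depth_from_shuttered(shuttered, reference, z_near, z_far, eps=1e-6):
        # The gated image only integrates the part of the returning pulse that arrives
        # before the shutter closes, so its intensity falls off with depth; dividing by
        # the unshuttered reference image cancels albedo and illumination strength.
        ratio = np.clip(shuttered / (reference + eps), 0.0, 1.0)
        # Idealized assumption: the ratio drops linearly from 1 at z_near to 0 at z_far.
        return z_near + (1.0 - ratio) * (z_far - z_near)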

AM Modulation Time of Flight. Modulate a laser at frequency ν_m; it returns with a phase shift Δφ, giving depth z = c·Δφ / (4π·ν_m). Note the ambiguity in the measured phase! Range ambiguity of λ_m/2 = c / (2·ν_m).
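Illustrative numbers: modulating at ν_m = 10 MHz gives a modulation wavelength λ_m = c/ν_m = 30 m, so depth is only unambiguous over c/(2·ν_m) = 15 m; measuring the phase to 1° would then resolve depth to roughly 15 m / 360 ≈ 4 cm.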

AM Modulation Time of Flight. Accuracy / working volume tradeoff (e.g., noise ~ 1/500 of the working volume). In practice, often used for room-sized environments (cheaper and more accurate than pulsed time of flight).

Shadow Moiré

Depth from focus/defocus Nayar’95

Next class: structure from motion