--- Some Recent Progress. Bo Fu, University of Kentucky.

Paper List
1. Structured Light 3D Scanning in the Presence of Global Illumination (CVPR 2011) [systematic errors introduced by global illumination]. Extension: A Practical Approach to 3D Scanning in the Presence of Inter-reflections, Subsurface Scattering and Defocus (IJCV 2012).
2. Implementing High Resolution Structured Light by Exploiting Projector Blur (WACV 2012) [increases depth resolution].
3. Vision Processing for Real-time 3D Data Acquisition Based on Coded Structured Light (IEEE Trans. on Image Processing, 2008) [reduces acquisition time].

Structured Light 3D Scanning in the Presence of Global Illumination. Mohit Gupta et al., Carnegie Mellon University.

Introduction. Motivation: one important assumption of most structured light techniques does not always hold, namely that scene points receive illumination only directly from the light source.

Introduction. Main idea: design patterns that modulate global illumination and prevent the errors at capture time itself. (Figure labels: Gray code, max min-SW, XOR-02.)

Errors due to Global Illumination. Short range effects: sub-surface scattering, defocus. Long range effects: inter-reflection, diffusion.

1. Inter-reflection error

Solution

2. Sub-surface scattering error

Solution

Error formulation

α: projector defocus fraction

Error formulation. Correct binarization requires the intensity captured under a pattern to exceed the intensity captured under its inverse: α L_d + L_g^P > (1−α) L_d + L_g^P̄, where L_g^P and L_g^P̄ are the global components received under the pattern and its inverse (L_g^P + L_g^P̄ = L_g). Note: without global illumination (L_g = 0) and without defocus (α = 1), this condition automatically holds.
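The binarization rule can be sketched numerically. A minimal illustration, not the paper's code: the intensity values and the parameter beta (the assumed fraction of the global component received while the pattern is on) are made up.

```python
# Binarize a pixel from a pattern / inverse-pattern image pair.
# A pixel is labeled "on" (directly lit by the pattern) when its
# intensity under the pattern exceeds its intensity under the inverse.
def binarize(intensity_pattern, intensity_inverse):
    return intensity_pattern > intensity_inverse

# Model intensities for one directly lit pixel (illustrative model):
# L_d: direct radiance, L_g: global radiance, alpha: defocus fraction,
# beta: fraction of global light received while the pattern is on.
def intensities(L_d, L_g, alpha, beta):
    on = alpha * L_d + beta * L_g                # pattern projected
    off = (1 - alpha) * L_d + (1 - beta) * L_g   # inverse projected
    return on, off

# No global light, no defocus: binarization is trivially correct.
on, off = intensities(L_d=1.0, L_g=0.0, alpha=1.0, beta=0.5)
assert binarize(on, off)

# Strong global light received mostly under the inverse pattern
# (small beta) flips the decision: a decoding error.
on, off = intensities(L_d=0.2, L_g=1.0, alpha=1.0, beta=0.1)
print(binarize(on, off))  # False: the bit is decoded incorrectly
```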

Error formulation. Long range effects: diffuse and specular inter-reflection. Consequence: low-frequency decoding errors. Since the low-frequency patterns correspond to the higher-order bits, this results in large errors in the recovered shape.

Error formulation. Short range effects: sub-surface scattering and defocus. Consequence: loss of depth resolution.

Patterns for error prevention. How can patterns be designed to prevent both long range and short range effects? Patterns with only high frequencies prevent long range errors; patterns with only low frequencies prevent short range errors. It is possible to design codes with only high-frequency patterns for long range effects, while for short range effects, patterns with a large minimum stripe width can be designed.

Pattern for error prevention

1. For long range effects: XOR codes (figure labels: base pattern, Gray code, XOR).

Pattern for error prevention 1. For long range effect
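The XOR construction above can be sketched in a few lines. A minimal illustration using the stripe-width-2 Gray-code pattern as the base, i.e. XOR-02; the paper's XOR-04 variant uses the stripe-width-4 pattern instead. Every low-frequency pattern is XORed with the high-frequency base, so all projected patterns have only high spatial frequencies, and decoding XORs the base back in.

```python
# Conventional Gray-code column patterns and their XOR-02 variant.
def gray_code_patterns(num_bits):
    gray = [c ^ (c >> 1) for c in range(2 ** num_bits)]
    # patterns[i][x]: the i-th bit (MSB first) projected at column x
    return [[(g >> (num_bits - 1 - i)) & 1 for g in gray]
            for i in range(num_bits)]

def xor02_patterns(patterns):
    base = patterns[-1]  # highest-frequency pattern (stripe width 2)
    return [[p ^ b for p, b in zip(row, base)]
            for row in patterns[:-1]] + [base]

def decode(projected_bits, base_bit):
    # projected_bits: binarized XOR-02 bits for one pixel (MSB first);
    # base_bit: binarized bit of the base pattern for that pixel.
    gray_bits = [b ^ base_bit for b in projected_bits[:-1]] + [base_bit]
    g = 0
    for b in gray_bits:
        g = (g << 1) | b
    # Gray -> binary: cumulative XOR of right shifts
    x = 0
    while g:
        x ^= g
        g >>= 1
    return x

pats = gray_code_patterns(4)
xpats = xor02_patterns(pats)
# Round trip: decoding the XOR-02 bits at column x recovers x.
for x in range(16):
    bits = [row[x] for row in xpats]
    assert decode(bits, pats[-1][x]) == x
print("round trip ok")
```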

Pattern for error prevention. 2. For short range effects: design codes with a large minimum stripe width (min-SW), a problem well studied in combinatorial mathematics [1]. [1] Goddyn and Gvozdjak, "Binary gray codes with long bit runs," The Electronic Journal of Combinatorics, 2003.

Pattern for error prevention. Ensemble of codes for general scenes, using four codes optimized for different effects: XOR-04 and XOR-02 for long range effects; Gray codes with maximum min-SW and conventional Gray codes for short range effects. Rule: if any two agree within a small threshold, that value is returned as the true depth.
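The consensus rule can be sketched per pixel. An illustrative sketch only; the threshold and depth values below are made up.

```python
# Per-pixel consensus over depth estimates from the four codes: if any
# two estimates agree within a tolerance, return their mean as the
# depth; otherwise mark the pixel as unreliable (None).
from itertools import combinations

def consensus_depth(estimates, tol=0.01):
    for a, b in combinations(estimates, 2):
        if abs(a - b) <= tol:
            return (a + b) / 2
    return None  # no two codes agree: leave the pixel unresolved

# Two codes corrupted differently, two in agreement:
print(consensus_depth([1.502, 1.500, 1.731, 0.988]))  # 1.501
print(consensus_depth([1.2, 1.5, 1.9, 0.7]))          # None
```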

Experiments: please refer to the paper (the IJCV version is preferred).

Implementing High Resolution Structured Light by Exploiting Projector Blur. Camillo J. Taylor, University of Pennsylvania.

Introduction. Motivation 1: with standard structured light decoding schemes, one is limited by the resolution of the projector; the quantization of the projector ultimately limits the accuracy of the reconstruction.

Introduction. Motivation 2: the growing disparity between the resolution of the image sensor and the resolution of the projector.

Introduction. Main idea: by exploiting the blur induced by the optics of the projector, subpixel correspondences between the camera frame and the projector frame can be established. Major comparison: Li Zhang, Shree Nayar. Projection Defocus Analysis for Scene Capture and Image Display (TOG 2006).

Approach. The effective irradiance that a pixel in the projector contributes to a point in the scene depends on the displacement between the projection of that scene point onto the projector frame and the center of the pixel. This falloff is modeled as a Gaussian. I: observed scene radiance measured at a pixel in the camera; f: BRDF at the scene point; E: irradiance supplied to the corresponding scene point by the projector.

Approach. k: stripe index; σ: width of the blur kernel at the scene point; δ: projection of the scene point in the projector frame (with subpixel precision).

Approach. I0: scene irradiance due to ambient illumination (known); δ: floating-point offset between −0.5 and 7.5.
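The idea of recovering a subpixel offset from projector blur can be illustrated with a simplified model. This is an assumed sketch, not the paper's exact formulation: the blur width SIGMA, the stripe period, and the shifted patterns below are made up, with the search range chosen to match the slide's δ ∈ [−0.5, 7.5].

```python
# Model: the camera intensity under each shifted binary stripe pattern
# is the pattern weighted by a Gaussian falloff around the (subpixel)
# projector coordinate delta. delta is then recovered by a brute-force
# least-squares search over candidate offsets.
import math

SIGMA = 1.2   # assumed blur kernel width at the scene point
PERIOD = 8    # stripe period in projector pixels

def gaussian(x, sigma=SIGMA):
    return math.exp(-x * x / (2 * sigma * sigma))

def predicted_intensity(delta, pattern):
    # pattern[p] in {0,1} over one period; projector pixel p's
    # contribution falls off with its (wrapped) distance to delta.
    num = den = 0.0
    for p, on in enumerate(pattern):
        d = min(abs(delta - p), PERIOD - abs(delta - p))
        w = gaussian(d)
        num += on * w
        den += w
    return num / den

def estimate_delta(observed, patterns, step=0.01):
    # exhaustive search over candidate offsets in [-0.5, 7.5]
    best, best_err = None, float("inf")
    c = -0.5
    while c <= 7.5:
        err = sum((predicted_intensity(c, pat) - o) ** 2
                  for pat, o in zip(patterns, observed))
        if err < best_err:
            best, best_err = c, err
        c += step
    return best

# Four cyclic shifts of a 4-on / 4-off stripe pattern:
base = [1, 1, 1, 1, 0, 0, 0, 0]
patterns = [base[s:] + base[:s] for s in (0, 2, 4, 6)]
true_delta = 2.37
observed = [predicted_intensity(true_delta, p) for p in patterns]
print(estimate_delta(observed, patterns))  # close to 2.37
```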

Experiments: for more results, please refer to the original paper.

Vision Processing for Real-time 3D Data Acquisition Based on Coded Structured Light. S. Y. Chen et al., City University of Hong Kong.

Motivation: conventional structured light systems cannot be applied to moving surfaces, since multiple patterns must be projected sequentially.

Main idea: a single grid pattern for 3D reconstruction, combined with fast matching strategies.

Thank you