What Does Motion Reveal About Transparency? Moshe Ben-Ezra and Shree K. Nayar, Columbia University. ICCV Conference, October 2003, Nice, France. This work was supported by an NSF ITR Award IIS.

Transparency is Very Challenging: detecting the existence of a transparent object, and finding its shape and pose.

Real and Virtual Features [figure: Lambertian, specular, and transparent cases; viewpoints V1 and V2 observe a real feature F or a displaced virtual feature F'].

Environmental Matting (Zongker et al., SIGGRAPH 99) [figure: alternating pattern, object, camera]. Does not recover shape and pose. Requires a controlled environment.

Shape from Polarization in Highlight (Saito et al., CVPR '99) [figure: object, camera, light source, rotating polarizer, surface normal N]. Limited to a single interface at the object's surface. Requires a controlled environment.

Shape from Refraction and Motion (H. Murase, PAMI 1992) [figure: camera, water surface, fixed pattern]. Single interface only.

Motion is Key to Transparency

Transparent Shape from Motion. Given: views and a parametric model (such as a superellipse). Recover: shape, the values of the parameters (e, n), and pose, rotation R and translation T. A general analytic solution does not exist.

Transparency from Motion: rays from a distant feature, traced in reverse through the object, are parallel to each other regardless of the complexity of their paths.
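A minimal sketch (assuming NumPy; the ray directions themselves would come from reverse-tracing, which is not shown here) of how that parallelism can be quantified as the mean angular deviation of the reversed rays from their common direction:

```python
import numpy as np

def parallelism_error(directions):
    """Mean angular deviation (radians) of a set of ray directions from their
    mean direction. For rays reverse-traced from one distant feature, this
    should approach zero when the hypothesized shape and pose are correct."""
    d = np.asarray(directions, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)   # normalize each direction
    mean_dir = d.mean(axis=0)
    mean_dir = mean_dir / np.linalg.norm(mean_dir)
    cosines = np.clip(d @ mean_dir, -1.0, 1.0)
    return float(np.mean(np.arccos(cosines)))
```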

Approach: Initialization [figure: image plane].

Approach: Initial Guess

Approach: Refine

Error Function: for each distant feature i, the captured rays r_{i,1} … r_{i,n} are reverse-traced through the hypothesized object and their exit directions are compared against a common reference direction such as (0,0,1); the unknowns are the object's shape parameter vector and its pose R, T.
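A sketch of such an error, under the assumption of a hypothetical trace_through_object routine that refracts a captured ray through both interfaces of the parametric shape and returns its exit direction (the actual tracing uses Snell's law, shown later):

```python
import numpy as np

def reconstruction_error(shape_params, R, T, rays_per_feature, trace_through_object):
    """Sum, over distant features, of the angular spread of the exit directions
    obtained by reverse-tracing that feature's captured rays through the
    hypothesized object (shape_params, pose R, T).

    `trace_through_object(ray, shape_params, R, T)` is a hypothetical routine
    returning the exit direction of one ray; it is not defined here."""
    total = 0.0
    for rays in rays_per_feature:              # rays r_{i,1} ... r_{i,n} of feature i
        dirs = np.array([trace_through_object(r, shape_params, R, T) for r in rays])
        dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
        mean_dir = dirs.mean(axis=0)
        mean_dir = mean_dir / np.linalg.norm(mean_dir)
        total += float(np.mean(np.arccos(np.clip(dirs @ mean_dir, -1.0, 1.0))))
    return total
```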

Simulation Setup [figure: parallel rays from distant features pass through the transparent object to the camera; side rays are also shown].

Example (Simulation): a single shape parameter, a symmetric superellipse (n = e), optimized with Newton-Raphson from an initial guess.
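A minimal sketch of one-parameter Newton-Raphson minimization with finite-difference derivatives; the `error` callable is assumed to wrap the reconstruction error above with n = e:

```python
def newton_raphson_1d(error, e0, h=1e-4, iters=20, tol=1e-8):
    """Minimize a one-parameter error E(e) by Newton-Raphson on E'(e) = 0,
    estimating E' and E'' with central finite differences. A sketch only."""
    e = e0
    for _ in range(iters):
        d1 = (error(e + h) - error(e - h)) / (2 * h)              # E'(e)
        d2 = (error(e + h) - 2 * error(e) + error(e - h)) / h**2  # E''(e)
        if abs(d2) < 1e-12:
            break
        step = d1 / d2
        e -= step
        if abs(step) < tol:
            break
    return e
```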

Evaluation (Simulation) [figure: for a sphere, a lens, a cube, and a water pipe, the ground truth, initial guess, computed result, and shape error are shown].

Real Experiment: Sphere

Features

Initial Guess

Setup: Initial Guess. Diameter: 8 inches.

Setup: Result. Ground truth diameter: 3 inches. Computed: 3.18 inches.

Result

Real Test: Water-Filled Pipe

Features

Initial Guess

Setup: Initial Guess. Diameter: 200.0 mm, Thickness: 20.0 mm.

Setup: Result. Ground truth: Diameter 117.0 mm, Thickness 3.0 mm. Computed: Diameter 116.1 mm, Thickness 2.3 mm.

Result

Real Test: Superquadric

Features

Initial Guess

Result. Ground truth: e = ? Computed: e = 0.18.

Summary: recovers shape and pose parameters, handles multiple interfaces, and requires no segmentation.

Parameterizations of Interest: polynomials (modeling surfaces, lenses), CAD models (shape of industrial objects), dynamic models (time-dependent parameters).

Assumptions: Camera parameters are known. Features are far* and trackable. A proper model and a hypothesis (an initial guess) are given. (* One possible assumption.)

Real Tests Setup

Implementation: Features were manually selected and tracked (9 views). The captured rays, a model, the refractive index, and a hypothesis were given as inputs. Shape and pose were recovered using simple gradient descent (with derivatives).
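A sketch of the kind of simple gradient descent referred to here (not the authors' exact implementation), using forward-difference derivatives over a stacked shape-and-pose parameter vector:

```python
import numpy as np

def gradient_descent(error, params0, lr=1e-3, h=1e-5, iters=200):
    """Gradient descent on error(p), where p stacks the shape parameters and a
    pose parameterization; derivatives are estimated by forward differences."""
    p = np.asarray(params0, dtype=float)
    for _ in range(iters):
        e0 = error(p)
        grad = np.zeros_like(p)
        for i in range(p.size):
            dp = p.copy()
            dp[i] += h
            grad[i] = (error(dp) - e0) / h   # forward-difference partial derivative
        p -= lr * grad
    return p
```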

The Physics of Transparency [figure: refraction at the first interface (μ1 → μ2) and the second interface (μ2 → μ1), with surface normals N1, N2 and angles θ1, θ2, θ3].
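Refraction at each interface follows Snell's law; a standard vector-form sketch (assuming NumPy), which would be applied once at the first interface (μ1 → μ2) and again at the second (μ2 → μ1):

```python
import numpy as np

def refract(d, n, mu1, mu2):
    """Refract unit direction d at a surface with unit normal n (pointing
    against d), going from refractive index mu1 to mu2. Returns None on total
    internal reflection. Shown to illustrate the two-interface tracing."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    eta = mu1 / mu2
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                          # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n
```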

Parametric Shape Examples: a superellipse (2 parameters) and spherical harmonics (8 parameters); no analytic solution exists.
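For concreteness, a sketch of a superellipsoid inside-outside function; the two exponents play the role of the slide's two shape parameters, and the semi-axes a, b, c are assumptions added here for illustration:

```python
import numpy as np

def superellipsoid(x, y, z, e1, e2, a=1.0, b=1.0, c=1.0):
    """Inside-outside function of a superellipsoid: equals 1 on the surface,
    is less than 1 inside, and greater than 1 outside."""
    xy = (np.abs(x / a) ** (2.0 / e2) + np.abs(y / b) ** (2.0 / e2)) ** (e2 / e1)
    return xy + np.abs(z / c) ** (2.0 / e1)
```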