Computational Photography


Computational Photography: Light Fields
Connelly Barnes
Slides from Alexei A. Efros, James Hays, Marc Levoy, and others

What is light?
Electromagnetic radiation (EMR) moving along rays in space. R(λ) is EMR, measured in units of power (watts); λ is wavelength.
Useful things:
- Light travels in straight lines
- In vacuum, radiance emitted = radiance arriving, i.e. there is no transmission loss

What do we see? 3D world 2D image Figures © Stephen E. Palmer, 2002

What do we see? 3D world 2D image Painted backdrop

On Simulating the Visual Experience
Just feed the eyes the right data; no one will know the difference!
Philosophy: the ancient question, "Does the world really exist?"
Science fiction: many, many books on the subject, e.g. slowglass from "Light of Other Days"
Latest take: The Matrix
Physics: light can be stopped for up to a minute ("Scientists stop light")
Computer Science: Virtual Reality
To simulate, we need to know: what does a person see?

The Plenoptic Function
Q: What is the set of all things that we can ever see?
A: The Plenoptic Function (Adelson & Bergen)
Let's start with a stationary person and try to parameterize everything that he can see…
Figure by Leonard McMillan

Grayscale snapshot
P(θ,φ) is intensity of light
- Seen from a single viewpoint
- At a single time
- Averaged over the wavelengths of the visible spectrum
(Can also do P(x,y), but spherical coordinates are nicer.)

Color snapshot
P(θ,φ,λ) is intensity of light
- Seen from a single viewpoint
- At a single time
- As a function of wavelength

A movie
P(θ,φ,λ,t) is intensity of light
- Seen from a single viewpoint
- Over time
- As a function of wavelength

Holographic movie
P(θ,φ,λ,t,VX,VY,VZ) is intensity of light
- Seen from ANY viewpoint
- Over time
- As a function of wavelength

The Plenoptic Function
P(θ,φ,λ,t,VX,VY,VZ)
Can reconstruct every possible view, at every moment, from every position, at every wavelength. Contains every photograph, every movie, everything that anyone has ever seen! It completely captures our visual reality! Not bad for a function…
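As a toy sketch, the full plenoptic function can be pictured as a densely sampled 7D lookup table. The array shape, axis order, and function name below are illustrative assumptions; no practical capture samples all seven dimensions at once:

```python
import numpy as np

# Hypothetical sketch: the plenoptic function tabulated on a tiny grid.
# Axis order (assumed): theta, phi, lambda, t, Vx, Vy, Vz.
rng = np.random.default_rng(0)
P = rng.random((8, 16, 3, 4, 2, 2, 2))

def plenoptic(theta, phi, lam, t, vx, vy, vz):
    """Nearest-sample lookup into the tabulated plenoptic function."""
    return P[theta, phi, lam, t, vx, vy, vz]

# A grayscale snapshot P(theta, phi): fix wavelength, time, and viewpoint.
snapshot = P[:, :, 0, 0, 0, 0, 0]
print(snapshot.shape)  # (8, 16)
```

Lower-dimensional objects in the slides above (snapshot, movie, holographic movie) are just slices of this table with more or fewer axes held fixed.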

Sampling Plenoptic Function (top view) Just lookup – Google Street View

Model geometry or just capture images?

How would you recreate this scene?

Ray
Let's not worry about time and color: P(θ,φ,VX,VY,VZ) is 5D (3D position + 2D direction).
Slide by Rick Szeliski and Michael Cohen

How can we use this?
(Figure: lighting, surface, camera; no change in radiance along the ray.)

Ray Reuse
An infinite line is 4D (2D direction + 2D position). Assume radiance is constant along the ray: vacuum, non-dispersive medium.
Slide by Rick Szeliski and Michael Cohen

Only need plenoptic surface

Synthesizing novel views Slide by Rick Szeliski and Michael Cohen

Lumigraph / Lightfield
Outside a convex space enclosing the scene ("stuff"), the function over empty space is 4D.
Slide by Rick Szeliski and Michael Cohen

Lumigraph - Organization
2D position (s) + 2D direction (θ)
Slide by Rick Szeliski and Michael Cohen

Lumigraph - Organization
2D position + 2D position: the two-plane parameterization (u, s)
Slide by Rick Szeliski and Michael Cohen

Lumigraph - Organization
Two-plane parameterization: a ray is indexed by its intersection (s,t) with one plane and (u,v) with the other.
Slide by Rick Szeliski and Michael Cohen
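The two-plane indexing can be sketched as a pair of ray-plane intersections. The plane placements (z = 0 for s,t and z = 1 for u,v) and the function name are illustrative assumptions:

```python
import numpy as np

# Sketch: map a ray to two-plane light field coordinates (s, t, u, v)
# by intersecting it with the st-plane (z = 0) and the uv-plane (z = 1).
def ray_to_stuv(origin, direction, z_st=0.0, z_uv=1.0):
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    assert d[2] != 0, "ray must cross both planes"
    a = (z_st - o[2]) / d[2]      # ray parameter at the st-plane
    b = (z_uv - o[2]) / d[2]      # ray parameter at the uv-plane
    s, t = (o + a * d)[:2]
    u, v = (o + b * d)[:2]
    return s, t, u, v

# A ray from (0, 0, -1) heading along +z hits both planes at x = y = 0.
print(ray_to_stuv((0, 0, -1), (0, 0, 1)))  # (0.0, 0.0, 0.0, 0.0)
```

Four numbers per ray is exactly the 4D reduction from the earlier slide: radiance is constant along the line, so the line itself is the index.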

Lumigraph - Organization
Hold s,t constant, let u,v vary: an image.
Slide by Rick Szeliski and Michael Cohen

Lumigraph / Lightfield
Note: the Stanford and Washington groups reverse their notation with regard to u,v and s,t.

Lumigraph - Capture, Idea 1
Move the camera carefully over the s,t plane using a gantry (see the Light Field paper).
Slide by Rick Szeliski and Michael Cohen

Lumigraph - Capture, Idea 2
Move the camera anywhere; find the camera pose from markers; rebin the samples (see the Lumigraph paper).
Slide by Rick Szeliski and Michael Cohen

Lumigraph - Rendering
For each output pixel, determine (s,t,u,v); then either use the closest discrete RGB sample or interpolate nearby values.
Slide by Rick Szeliski and Michael Cohen

Lumigraph - Rendering
Nearest: take the closest s and closest u and draw it.
Blend 16 nearest: quadrilinear interpolation.
Slide by Rick Szeliski and Michael Cohen
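The "blend 16 nearest" step can be sketched as quadrilinear interpolation over a regularly sampled 4D array. The layout `L[s, t, u, v]` is an assumption, not the papers' exact storage format:

```python
import numpy as np

# Sketch: quadrilinear interpolation blends the 2^4 = 16 samples that
# surround a fractional (s, t, u, v) coordinate in a 4D light field.
def quadrilinear(L, s, t, u, v):
    base = [int(np.floor(c)) for c in (s, t, u, v)]
    frac = [c - b for c, b in zip((s, t, u, v), base)]
    out = 0.0
    for corner in range(16):                  # one bit per dimension
        w, idx = 1.0, []
        for axis in range(4):
            bit = (corner >> axis) & 1
            w *= frac[axis] if bit else 1.0 - frac[axis]
            idx.append(min(base[axis] + bit, L.shape[axis] - 1))
        out += w * L[tuple(idx)]
    return out

L = np.arange(16.0).reshape(2, 2, 2, 2)
print(quadrilinear(L, 1, 0, 1, 0))          # 10.0: exact samples reproduced
print(quadrilinear(L, 0.5, 0.5, 0.5, 0.5))  # 7.5: mean of all 16 corners
```

The "nearest" variant is the degenerate case: round each coordinate and read one sample instead of blending sixteen.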

Lumigraph Rendering Use rough depth information to improve rendering quality


Lumigraph Rendering: without using geometry vs. using approximate geometry.

Light Field Results (Marc Levoy): Light Field Viewer

Stanford multi-camera array
640 × 480 pixels × 30 fps × 128 cameras
- Synchronized timing
- Continuous streaming
- Flexible arrangement

Light field photography using a handheld plenoptic camera
Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz, and Pat Hanrahan
Commercialized as Lytro.

Conventional versus light field camera

Conventional versus light field camera (uv-plane, st-plane)

Prototype camera
Medium format cameras: sensor twice as large as 35mm full frame, quite expensive ($10k+).
- Contax medium format camera
- Kodak 16-megapixel sensor
- Adaptive Optics microlens array: 125μ square-sided microlenses
4000 × 4000 pixels ÷ 292 × 292 lenses ≈ 14 × 14 pixels per lens

Digitally stopping down the aperture (i.e. decrease aperture size, increase depth of field)
Stopping down = summing only the central portion of each microlens.
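The summation above can be sketched over an idealized raw capture. The `raw[j, i, y, x]` layout (one p×p pixel block per microlens) and the function name are illustrative assumptions:

```python
import numpy as np

# Sketch: digital stop-down = sum only a central window of pixels
# under each microlens, discarding rays from the aperture's edges.
def stop_down(raw, keep=4):
    p = raw.shape[2]
    lo = (p - keep) // 2
    return raw[:, :, lo:lo + keep, lo:lo + keep].sum(axis=(2, 3))

raw = np.ones((292, 292, 14, 14))   # 292 x 292 lenses, 14 x 14 px each
img = stop_down(raw, keep=4)
print(img.shape)  # (292, 292): one output pixel per microlens
```

Summing the full 14×14 block instead would reproduce an ordinary full-aperture photograph at microlens resolution.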

Digital refocusing
Refocusing = summing windows extracted from several microlenses.
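Refocusing can be sketched as shift-and-add: each (y, x) position under the microlenses is one sub-aperture view, and shifting those views against each other before averaging moves the synthetic focal plane. The integer shifts and the `alpha` focus parameter below are a simplification of the actual resampling, not the paper's exact formulation:

```python
import numpy as np

# Sketch: shift-and-add refocusing over raw[j, i, y, x] plenoptic data.
def refocus(raw, alpha):
    J, I, p, _ = raw.shape
    out = np.zeros((J, I))
    for y in range(p):
        for x in range(p):
            view = raw[:, :, y, x]          # one sub-aperture view
            # shift proportional to aperture position and focus setting
            dy = int(round(alpha * (y - p // 2)))
            dx = int(round(alpha * (x - p // 2)))
            out += np.roll(np.roll(view, dy, axis=0), dx, axis=1)
    return out / (p * p)

raw = np.random.default_rng(1).random((32, 32, 7, 7))
print(refocus(raw, 0.0).shape)  # (32, 32): alpha = 0 is a plain average
```

With `alpha = 0` no view is shifted, so the result is the ordinary full-aperture image; nonzero `alpha` sweeps the focal plane forward or backward.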

Example of digital refocusing

Digitally moving the observer
Moving the observer = moving the window we extract from the microlenses.
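Moving the observer then amounts to picking a different window (here a single pixel) under every microlens, which selects one sub-aperture viewpoint. The `raw[j, i, y, x]` layout is the same illustrative assumption as above:

```python
import numpy as np

# Sketch: extracting a sub-aperture view = taking the same pixel
# position under every microlens.
def sub_aperture_view(raw, y, x):
    return raw[:, :, y, x]

raw = np.random.default_rng(2).random((64, 64, 14, 14))
left_view = sub_aperture_view(raw, 7, 0)    # observer shifted left
right_view = sub_aperture_view(raw, 7, 13)  # observer shifted right
print(left_view.shape)  # (64, 64)
```

Sliding the window continuously produces the small parallax shifts shown in the "moving the observer" examples.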

Example of moving the observer

Unstructured Lumigraph Rendering
A further enhancement of lumigraphs: do not use the two-plane parameterization.
- Store the original pictures: no resampling
- Hand-held camera, moved around an environment

Unstructured Lumigraph Rendering
To reconstruct views, assign a penalty to each original ray:
- Distance to the desired ray, using approximate geometry
- Resolution
- Feathering near the edges of the image
Construct a "camera blending field"; render using texture mapping.

Unstructured Lumigraph Rendering Blending field Rendering

Unstructured Lumigraph Rendering Results

3D Lumigraph
One row of the s,t plane, i.e., hold t constant.

3D Lumigraph
One row of the s,t plane (hold t constant), leaving (s,u,v): a "row of images".

P(x,t) by David Dewey

2D: Image
What is an image? All rays through a point. A panorama?
Slide by Rick Szeliski and Michael Cohen

Image: image plane, 2D position.

Spherical Panorama
All light rays through a point form a panorama, totally captured in a 2D array P(θ,φ). Where is the geometry?
See also: 2003 New Year's Eve, http://www.panoramas.dk/fullscreen3/f1.html

Other ways to sample the Plenoptic Function
Moving in time: the spatio-temporal volume P(θ,φ,t). Useful to study temporal changes; long an interest of artists (Claude Monet, Haystacks studies).

Time-Lapse
Spatio-temporal volume P(θ,φ,t): Factored Time Lapse Video; Google/Time Magazine Time Lapse.

Space-time images
Other ways to slice the plenoptic function… (axes: x, y, t)
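Slicing the spatio-temporal volume can be sketched directly with array indexing; the shapes below are illustrative:

```python
import numpy as np

# Sketch: a video as a spatio-temporal volume V[t, y, x].
video = np.random.default_rng(3).random((120, 240, 320))  # t, y, x

frame = video[0]          # ordinary image: fix time t = 0
xt_slice = video[:, 100]  # space-time image: fix scanline y = 100
print(frame.shape, xt_slice.shape)  # (240, 320) (120, 320)
```

The x-t slice stacks one scanline over time; moving objects trace slanted streaks through it, which is what makes such slices useful for studying temporal change.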

Other Lightfield Acquisition Devices
Spherical motion of the camera around an object samples the space of directions uniformly; a second arm moves the light source to measure the BRDF.

The “Theatre Workshop” Metaphor (Adelson & Pentland, 1996)
Desired image: Sheet-metal worker, Painter, Lighting Designer

Painter (images)

Lighting Designer (environment maps) Show Naimark SF MOMA video http://www.debevec.org/Naimark/naimark-displacements.mov

Sheet-metal Worker (geometry) Let surface normals do all the work!

… working together
Want to minimize cost: each one does what's easiest for him.
- Geometry: big things
- Images: detail
- Lighting: illumination effects