Fourier Slice Photography


Fourier Slice Photography Ren Ng Stanford University

Conventional Photograph Okay, so here's the conventional photograph that we would have gotten. Notice that it's focused on Matt, and that Jacquie and the others are out of focus.

Light Field Photography Capture the light field inside the camera body Okay, the first thing that I want to do is define this "light field" term that I've been using. The light field is just a representation for the light traveling along all rays in free space. Inside the camera, we can parameterize all the rays by where they originate on the lens plane, and where they terminate on the sensor. So the light traveling from (u,v) on the lens to (x,y) on the sensor is given by L(u,v,x,y). Note that this is a four dimensional function – the space of rays is 4D. The concept of the light field is the second most cited idea in computer graphics. It was introduced in 1996 by Marc Levoy and Pat Hanrahan at Stanford, and by Steve Gortler and colleagues at Harvard and Microsoft Research.

Hand-Held Light Field Camera Medium format digital camera Camera in-use 16 megapixel sensor Microlens array

Light Field in a Single Exposure

Light Field Inside the Camera Body

Digital Refocusing

Digital Refocusing But by simulating the light… etc…

Questions About Digital Refocusing What is the computational complexity? Are there efficient algorithms? What are the limits on refocusing? How far can we move the focal plane?

Overview Fourier Slice Photography Theorem Fourier Refocusing Algorithm Theoretical Limits of Refocusing Here's an overview of what I'll be talking about in this section. First, I'd like to go over the derivation of the theorem at the heart of this whole analytic approach. The theorem says that in the Fourier domain, photographs are just 2D slices in the 4D light field. That's much simpler than in the spatial domain, where photographs are integral projections of the light field. Then I want to show you how the theorem can be applied to make theoretical analysis of our camera easier, and how it gives us a fast algorithm for digital refocusing.

Previous Work Integral photography Lippmann 1908, Ives 1930 Lots of variants, especially in 3D TV Okoshi 1976, Javidi & Okano 2002 Closest variant is plenoptic camera Adelson & Wang 1992 Fourier analysis of light fields Chai et al. 2000 Refocusing from light fields Isaksen et al. 2000, Stewart et al. 2003

Fourier Slice Photography Theorem In the Fourier domain, a photograph is a 2D slice in the 4D light field. Photographs focused at different depths correspond to 2D slices at different trajectories.
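In operator form, the theorem can be sketched as follows (a reconstruction from the surrounding discussion; normalization constants are omitted, and the exact slice trajectory is given later with the imaging equations):

```latex
% Fourier Slice Photography theorem, operator form: a photograph focused
% at depth alpha is the inverse 2D Fourier transform of a 2D slice of the
% 4D Fourier transform of the light field.
\[
  \mathcal{P}_\alpha \;=\; \mathcal{F}^{-2} \circ \mathcal{S}_\alpha \circ \mathcal{F}^{4}
\]
% The trajectory of the slice operator S_alpha depends on the focal depth
% alpha; for alpha = 1 (focus at the sampled plane) it reduces to the
% k_u = k_v = 0 plane, i.e. the DC directional frequencies.
```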

Digital Refocusing by Ray-Tracing x Lens Sensor

Digital Refocusing by Ray-Tracing x Imaginary film Lens Sensor

Digital Refocusing by Ray-Tracing x Imaginary film Lens Sensor

Digital Refocusing by Ray-Tracing x Imaginary film Lens Sensor

Digital Refocusing by Ray-Tracing x Imaginary film Lens Sensor

Refocusing as Integral Projection x u u x Imaginary film Lens Sensor

Refocusing as Integral Projection x u u x Imaginary film Lens Sensor

Refocusing as Integral Projection x u u x Imaginary film Lens Sensor

Refocusing as Integral Projection x u u x Imaginary film Lens Sensor
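The integral projection these slides build up can be sketched in a few lines. This is a hypothetical toy version using a 2D light field L(u, x) (one aperture dimension, one spatial dimension) rather than the full 4D case; the function name and nearest-sample resampling are illustrative, not the paper's implementation.

```python
import numpy as np

def refocus(light_field, alpha):
    """Spatial-domain refocusing of a toy 2D light field L(u, x):
    for each aperture sample u, resample the sub-image along the
    sheared ray x' = u + alpha*(x - u), then sum over u."""
    num_u, num_x = light_field.shape
    x = np.arange(num_x)
    photo = np.zeros(num_x)
    for i in range(num_u):
        u = i - num_u // 2                       # center the aperture samples
        xp = np.round(u + alpha * (x - u)).astype(int)
        photo += light_field[i, np.clip(xp, 0, num_x - 1)]
    return photo / num_u
```

With alpha = 1 every ray lands where it started, so the result is just the average over the aperture; other values of alpha shear the light field before summing, which is exactly moving the imaginary film plane. Doing this for a full 4D light field costs O(N^4) per photograph, which is what motivates the Fourier algorithm later in the talk.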

Classical Fourier Slice Theorem Integral Projection 2D Fourier Transform 1D Fourier Transform Okay so here's the diagram for the Fourier Slice Photography Theorem again. So all existing approaches compute the refocused photograph from the light field by working in the spatial domain. The idea for a faster algorithm is to work in the Fourier domain. Slicing

Classical Fourier Slice Theorem Integral Projection 2D Fourier Transform 1D Fourier Transform Slicing

Classical Fourier Slice Theorem Integral Projection Spatial Domain Fourier Domain Slicing
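The classical theorem on this slide can be checked numerically in a few lines. This is a self-contained illustration (not from the talk), using an arbitrary 2D array in place of a continuous function:

```python
import numpy as np

# Classical Fourier slice theorem: the 1D Fourier transform of an integral
# projection of a 2D function equals a 1D slice, through the origin, of its
# 2D Fourier transform. Verified with NumPy's discrete transforms.
rng = np.random.default_rng(0)
f = rng.random((64, 64))              # an arbitrary 2D function f(y, x)

projection = f.sum(axis=0)            # integral projection along y
fft_of_projection = np.fft.fft(projection)

central_slice = np.fft.fft2(f)[0, :]  # slice at zero frequency in y

assert np.allclose(fft_of_projection, central_slice)
```

Rotating the projection direction rotates the slice correspondingly; the photography theorem generalizes this 2D-to-1D relationship to a 4D-to-2D one, with the slice trajectory set by the focal depth.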

Fourier Slice Photography Theorem Integral Projection Spatial Domain Fourier Domain Slicing

Fourier Slice Photography Theorem Integral Projection 4D Fourier Transform Slicing

Fourier Slice Photography Theorem Integral Projection 4D Fourier Transform 2D Fourier Transform Slicing

Photographic Imaging Equations Spatial-Domain Integral Projection Fourier-Domain Slicing So here are the definitions of these operators. We already saw in the spatial domain that imaging is this double integral. And in the Fourier domain, here's the definition of the slicing operator, which shows that the value at any point in the photograph is given by the value at a corresponding point in the light field. So I think this makes it really clear why this theorem is useful. On the theoretical side, if you have to do any math, it's much nicer to work with the Fourier definition, where you don't have to deal with the integral symbols. And on the practical side, if you have to do any computation, it might be better to work in the Fourier domain, where you can just look up an array element rather than summing a bunch of array elements.

Photographic Imaging Equations Spatial-Domain Integral Projection Fourier-Domain Slicing
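The equations on this slide were images and did not survive as text. As a reconstruction from the surrounding discussion (normalization factors such as 1/alpha^2 are omitted, and the (u, v, x, y) ray ordering follows the earlier slides), they are roughly:

```latex
% Spatial domain: a photograph refocused by factor alpha is an integral
% projection of the light field over the lens aperture (u, v):
\[
  E_\alpha(x, y) \;\propto\; \iint
    L\!\left(u,\, v,\; u + \tfrac{x - u}{\alpha},\; v + \tfrac{y - v}{\alpha}\right)
    \, du \, dv
\]
% Fourier domain: the same photograph is a single 2D slice of the 4D
% transform -- a lookup, not an integral:
\[
  \hat{E}_\alpha(k_x, k_y) \;\propto\;
    \hat{L}\big((1-\alpha)\,k_x,\; (1-\alpha)\,k_y,\; \alpha\,k_x,\; \alpha\,k_y\big)
\]
```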

Theorem Limitations Film parallel to lens Everyday camera, not view camera Aperture fully open Closing aperture requires spatial mask

Overview Fourier Slice Photography Theorem Fourier Refocusing Algorithm Theoretical Limits of Refocusing

Existing Refocusing Algorithms Existing refocusing algorithms are expensive O(N^4) where light field has N samples in each dimension All are variants on integral projection Isaksen et al. 2000 Vaish et al. 2004 Levoy et al. 2004 Ng et al. 2005 The point I want to make here is that all existing digital refocusing approaches are variants on spatial-domain integration, from the original paper demonstrating refocusing by Isaksen, McMillan and Gortler in 2000, through work on synthetic photography at Stanford in the last two years. The point is that digital refocusing is expensive – O(N^4) if we have N samples in each dimension – because to compute any photograph we have to project and integrate the entire light field.

Refocusing in Spatial Domain Integral Projection 4D Fourier Transform 2D Fourier Transform Slicing

Refocusing in Fourier Domain Integral Projection Inverse 2D Fourier Transform 4D Fourier Transform Slicing

Asymptotic Performance Fourier-domain slicing algorithm Pre-process: O(N^4 log N) Refocusing: O(N^2 log N) Spatial-domain integration algorithm Refocusing: O(N^4) So the asymptotic performance in the Fourier domain is O(N^4 log N) for the pre-process, and O(N^2 log N) per photograph, dominated by the inverse 2D transform. And of course that's to be compared against the spatial-domain algorithm, which is O(N^4) for each refocusing step. I also want to mention that practically, the absolute performance of the two algorithms is about the same for resolutions captured with our prototype. At directional resolutions twice as high, the Fourier algorithm is an order of magnitude faster.
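A toy version of the Fourier-domain algorithm, again on a 2D light field L(u, x) so that one FFT stands in for the 4D transform. The function name and the nearest-neighbour slice resampling are illustrative; the paper uses a proper resampling filter, discussed next.

```python
import numpy as np

def fourier_refocus(light_field, alpha):
    """Fourier-domain refocusing of a toy 2D light field L(u, x):
    transform once, extract a 1D slice along the trajectory
    (k_u, k_x) = ((1 - alpha)k, alpha*k), inverse-transform the slice."""
    num_u, num_x = light_field.shape
    G = np.fft.fftn(light_field)           # pre-process, done once per light field
    k = np.fft.fftfreq(num_x) * num_x      # integer frequencies of the photo
    # Nearest-neighbour resampling of the slice; the wrap-around models the
    # clipping to the light field's bounds seen in the band-limited analysis.
    ku = np.round((1 - alpha) * k).astype(int) % num_u
    kx = np.round(alpha * k).astype(int) % num_x
    return np.real(np.fft.ifft(G[ku, kx])) / num_u
```

After the one-time transform, each refocused photo costs only a slice lookup plus an inverse 2D transform, which is where the O(N^2 log N) figure on the slide comes from in the full 4D setting. As a sanity check, alpha = 1 reduces to the all-in-focus average over the aperture, matching spatial-domain integration.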

Resampling Filter Choice Kaiser-Bessel filter (width 2.5) Gold standard (spatial integration) Triangle filter (quadrilinear)

Overview Fourier Slice Photography Theorem Fourier Refocusing Algorithm Theoretical Limits of Refocusing

Problem Statement Assume a light field camera with An f/A lens N x N pixels under each microlens If we compute refocused photographs from these light fields, over what range can we move the focal plane? Analytical assumption Assume band-limited light fields

Band-Limited Analysis Okay, so what do we get with this band-limited assumption? Well, here is the continuous situation. This shows the Fourier transform of the continuous light field in 2D, which is in general unbounded, and here is the optical photograph that forms at a particular focal depth: it's just a slice in the light field. The conventional camera band-limits this continuous photograph by cutting off its high frequencies, and if we focus at a different depth, getting a different slice, the band-limit is the same. In the plenoptic camera, the band-limit isn't directly on the photo, but rather on the light field. So we cut off the high frequencies in the light field. Now, when we produce a photograph with digital refocusing, we extract the slice here, but the slice is clipped to the bounds of the light field. If we refocus at a different depth, the width of the bandwidth changes. One thing to note is that the refocused photograph is just a band-limited version of the continuous photograph. By comparing the bandwidths, we can tell how close digital refocusing is getting us to the ideal result.

Band-Limited Analysis Band-width of measured light field Light field shot with camera

Photographic Imaging Equations Spatial-Domain Integral Projection Fourier-Domain Slicing

Results of Band-Limited Analysis Assume a light field camera with An f/A lens N x N pixels under each microlens From its light fields we can Refocus exactly within depth of field of an f/(A·N) lens In our prototype camera Lens is f/4 12 x 12 pixels under each microlens Theoretically refocus within depth of field of an f/48 lens
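The prototype figure on this slide is just the product of the two numbers; a one-line sketch (the function name is mine, not the paper's):

```python
def refocus_f_number(lens_f_number, pixels_per_microlens):
    """Exact-refocusing range from the band-limited analysis: the depth of
    field of an f/(A*N) lens, given an f/A main lens and N x N sensor
    pixels under each microlens."""
    return lens_f_number * pixels_per_microlens

print(refocus_f_number(4, 12))  # prototype: f/4 lens, 12x12 pixels -> 48
```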

Light Field Photo Gallery Okay, now some more photographic results.

Stanford Quad

Rodin’s Burghers of Calais

Palace of Fine Arts, San Francisco

Palace of Fine Arts, San Francisco

Waiting to Race

Start of the Race

Summary of Main Contributions Formal theorem about relationship between light fields and photographs Computational application gives asymptotically fast refocusing algorithm Theoretical application gives analytic solution for limits of refocusing

Future Work Apply general signal-processing techniques Cross-fertilization with medical imaging

Thanks and Acknowledgments Collaborators on camera tech report Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz and Pat Hanrahan Readers and listeners Ravi Ramamoorthi, Brian Curless, Kayvon Fatahalian, Dwight Nishimura, Brad Osgood, Mike Cammarano, Vaibhav Vaish, Billy Chen, Gaurav Garg, Jeff Klingner Anonymous SIGGRAPH reviewers Funding sources NSF, Microsoft Research Fellowship, Stanford Birdseed Grant

Questions? “Start of the race”, Stanford University Avery Pool, July 2005