Light field photography and videography Marc Levoy Computer Science Department Stanford University

List of projects: high performance imaging using large camera arrays; light field photography using a handheld plenoptic camera; dual photography

High performance imaging using large camera arrays Bennett Wilburn, Neel Joshi, Vaibhav Vaish, Eino-Ville Talvala, Emilio Antunez, Adam Barth, Andrew Adams, Mark Horowitz, Marc Levoy (Proc. SIGGRAPH 2005)

Stanford multi-camera array: 640 × 480 pixels × 30 fps × 128 cameras; synchronized timing; continuous streaming; flexible arrangement

Ways to use large camera arrays: widely spaced – light field capture; tightly packed – high-performance imaging; intermediate spacing – synthetic aperture photography

Intermediate camera spacing: synthetic aperture photography
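A minimal sketch of the shift-and-average idea behind synthetic aperture photography (my simplification: pure translations instead of the calibrated homographies the array actually uses, integer pixel shifts, and hypothetical function and parameter names):

```python
import numpy as np

def synthetic_aperture_focus(images, offsets, disparity_per_mm):
    """Shift-and-average a set of camera images to focus on one depth.

    images           : list of HxW (or HxWx3) arrays, one per camera
    offsets          : list of (dx_mm, dy_mm) camera positions relative to a reference camera
    disparity_per_mm : pixels of shift per mm of baseline for the chosen focal depth
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (dx, dy) in zip(images, offsets):
        # integer pixel shift toward the reference view (sub-pixel shifts and
        # full homographies are ignored in this sketch)
        sx = int(round(dx * disparity_per_mm))
        sy = int(round(dy * disparity_per_mm))
        acc += np.roll(img, shift=(-sy, -sx), axis=(0, 1))
    return acc / len(images)
```

Sweeping disparity_per_mm sweeps the synthetic focal plane through the scene; objects off that plane average away into a blur, which is what makes the large synthetic aperture useful.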

Example using 45 cameras [Vaish CVPR 2004]

Tiled camera array: world's largest video camera; no parallax for distant objects; poor lenses limit image quality; seamless mosaicing isn't hard. Can we match the image quality of a cinema camera?

Tiled panoramic image (before geometric or color calibration)

Tiled panoramic image (after calibration and blending)

Tiled camera array: world's largest video camera; no parallax for distant objects; poor lenses limit image quality; seamless mosaicing isn't hard; per-camera exposure metering; HDR within and between tiles. Can we match the image quality of a cinema camera?

(Image comparison: same exposure in all cameras; individually metered; checkerboard of exposures.)

High-performance photography as multi-dimensional sampling: spatial resolution, field of view, frame rate, dynamic range, bits of precision, depth of field, focus setting, color sensitivity

Spacetime aperture shaping: shorten exposure time to freeze motion → dark; stretch contrast to restore level → noisy; increase (synthetic) aperture to capture more light → decreases depth of field

Center of aperture: few cameras, long exposure → high depth of field, low noise, but action is blurred. Periphery of aperture: many cameras, short exposure → freezes action, low noise, but low depth of field.
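One way to see why the many-camera, short-exposure configuration stays low-noise (my reasoning, not stated on the slide): the total light gathered scales roughly as N × t, the number of cameras times the per-camera exposure time, so N short exposures collect about as many photons as a single exposure of length N·t. Shot noise therefore stays comparable, while each individual frame is short enough to freeze motion.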

Light field photography using a handheld plenoptic camera Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz and Pat Hanrahan (Proc. SIGGRAPH 2005 and TR)

Conventional versus light field camera

Conventional versus light field camera (diagram labels: uv-plane, st-plane)

Prototype camera: 4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels per lens. Contax medium format camera; Kodak 16-megapixel sensor; Adaptive Optics microlens array; 125μ square-sided microlenses.
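A quick check of that division (my arithmetic, not from the slide): 4000 ÷ 292 ≈ 13.7, i.e. roughly 14 sensor pixels span each 125μ microlens in each direction, which is where the 14 × 14 figure comes from.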

Mechanical design: microlenses float 500μ above the sensor; focused using 3 precision screws

Prior work: integral photography – microlens array + film; application is autostereoscopic effect. [Adelson 1992] – proposed this camera; built an optical bench prototype using relay lenses; application was stereo vision, not photography.

Digitally stopping down: stopping down = summing only the central portion of each microlens
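A minimal sketch of digital stopping-down, assuming the decoded light field is stored as a 4D numpy array lf[s, t, u, v] with (s, t) indexing microlenses and (u, v) indexing the pixels under each microlens; the layout and function name are assumptions, not the paper's implementation:

```python
import numpy as np

def stop_down(lf, radius):
    """Simulate a smaller aperture by summing only the central (u, v)
    samples under each microlens.

    lf     : 4D array indexed [s, t, u, v]
    radius : half-width of the central window to keep, in pixels
    """
    S, T, U, V = lf.shape
    cu, cv = U // 2, V // 2
    window = lf[:, :, cu - radius:cu + radius + 1, cv - radius:cv + radius + 1]
    # summing fewer angular samples corresponds to a smaller aperture:
    # more depth of field, less light
    return window.sum(axis=(2, 3))
```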

Digital refocusing: refocusing = summing windows extracted from several microlenses
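In the same hypothetical lf[s, t, u, v] representation, refocusing can be sketched as shift-and-add over the sub-aperture images; the integer shifts, the alpha parameterization, and the names below are my simplifications, not the rendering used in the paper (which resamples at sub-pixel precision):

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field.

    lf    : 4D array indexed [s, t, u, v]
    alpha : controls the virtual focal plane; 0 reproduces the original focus,
            other values move it nearer or farther (sign convention arbitrary)
    """
    S, T, U, V = lf.shape
    cu, cv = U // 2, V // 2
    out = np.zeros((S, T), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # each (u, v) is a sub-aperture view; shift it in (s, t) by an
            # amount proportional to its offset from the aperture center
            ds = int(round(alpha * (u - cu)))
            dt = int(round(alpha * (v - cv)))
            out += np.roll(lf[:, :, u, v], shift=(ds, dt), axis=(0, 1))
    return out / (U * V)
```

Sweeping alpha produces a focal stack, which is what the light field microscope slide later turns into volume data.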

A digital refocusing theorem: an f/N light field camera with P × P pixels under each microlens can produce views as sharp as an f/(N × P) conventional camera – or – it can produce views with a shallow depth of field (f/N) focused anywhere within the depth of field of an f/(N × P) camera.
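A concrete instantiation using the prototype's numbers (my arithmetic): with the f/4 main lens and roughly 14 × 14 pixels per microlens, the theorem gives views as sharp as an f/(4 × 14) = f/56 conventional photograph, or shallow f/4 views refocusable anywhere within that f/56 depth of field.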

Example of digital refocusing

Refocusing portraits

Action photography

Extending the depth of field (image comparison: conventional photograph, main lens at f/22; conventional photograph, main lens at f/4; light field, main lens at f/4, after all-focus algorithm [Agarwala 2004])

Macrophotography

Digitally moving the observer: moving the observer = moving the window we extract from the microlenses
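In the same hypothetical lf[s, t, u, v] layout, moving the observer is just a change of which (u, v) sample is read out under every microlens:

```python
import numpy as np

def move_observer(lf, du, dv):
    """Extract a sub-aperture view: pick the same off-center (u, v) sample
    under every microlens, which is equivalent to looking through one part
    of the main lens aperture.

    lf     : 4D array indexed [s, t, u, v]
    du, dv : offset of the chosen sample from the aperture center
    """
    S, T, U, V = lf.shape
    return lf[:, :, U // 2 + du, V // 2 + dv]
```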

Example of moving the observer

Moving backward and forward

Implications: cuts the unwanted link between exposure (due to the aperture) and depth of field; trades off (excess) spatial resolution for the ability to refocus and adjust the perspective; sensor pixels should be made even smaller, subject to the diffraction limit: 36mm × 24mm ÷ 2.5μ pixels = 266 megapixels; 20K × 13K pixels; 4000 × 2666 pixels × 20 × 20 rays per pixel

Can we build a light field microscope? Ability to photograph moving specimens; digital refocusing → focal stack → deconvolution microscopy → volume data

Dual Photography Pradeep Sen, Billy Chen, Gaurav Garg, Steve Marschner, Mark Horowitz, Marc Levoy, Hendrik Lensch (Proc. SIGGRAPH 2005)

Helmholtz reciprocity (diagram: light, scene, camera)

Helmholtz reciprocity (diagram: the same scene with the light and camera positions exchanged)

Measuring transport along a set of paths (diagram: projector, scene, photocell)

Reversing the paths (diagram: point light, scene, camera)

Forming a dual photograph (diagram: projector, scene, photocell; "dual" light; "dual" camera)

Forming a dual photograph (diagram: image of scene; "dual" light; "dual" camera)

Physical demonstration: light replaced with projector; camera replaced with photocell; projector scanned across the scene. (Images: conventional photograph, with light coming from the right; dual photograph, as seen from the projector's position and as illuminated from the photocell's position.)

Related imaging methods: time-of-flight scanners – if they return reflectance as well as range – but their light source and sensor are typically coaxial; scanning electron microscope. (Image: Velcro® at 35× magnification, Museum of Science, Boston.)

The 4D transport matrix (diagram: projector, scene, camera / photocell)

The 4D transport matrix: the projector emits a pattern P (a pq × 1 vector of projector pixels) and the camera records an image C (an mn × 1 vector of camera pixels); light transport through the scene is described by the mn × pq matrix T, so that C = T P.

The 4D transport matrix, applying Helmholtz reciprocity: the transposed matrix relates the dual quantities, C′ = Tᵀ P′, where P′ is mn × 1, Tᵀ is pq × mn, and C′ is pq × 1.
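A toy numpy sketch of the two relations above; the measurement loop assumes a hypothetical project_and_capture interface and ignores noise, HDR capture, and the adaptive multiplexing described later:

```python
import numpy as np

def measure_transport(project_and_capture, pq, mn):
    """Brute-force measurement of the transport matrix T (mn x pq):
    light one projector pixel at a time and record the camera image.

    project_and_capture(pattern) is assumed to return an mn-vector camera
    image for a given pq-vector projector pattern (hypothetical interface).
    """
    T = np.zeros((mn, pq))
    for j in range(pq):
        pattern = np.zeros(pq)
        pattern[j] = 1.0
        T[:, j] = project_and_capture(pattern)
    return T

def dual_photograph(T, dual_illumination):
    """Helmholtz reciprocity: the scene as seen from the projector's
    position, lit from the camera's position, is C' = T^T P',
    where P' (mn x 1) is the chosen 'dual' illumination."""
    return T.T @ dual_illumination
```

The forward product T @ pattern relights the ordinary photograph with a new projector pattern, so a single measured T serves both the primal and the dual views.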

Example (images: conventional photograph, with light coming from the right; dual photograph, as seen from the projector's position)

Properties of the transport matrix: little interreflection → sparse matrix; many interreflections → dense matrix; convex object → diagonal matrix; concave object → full matrix. Can we create a dual photograph entirely from diffuse reflections?

Dual photography from diffuse reflections (image: the camera's view)

The relighting problem: subject captured under multiple lights; one light at a time, so the subject must hold still; point lights are used, so we can't relight with cast shadows. (Image: Paul Debevec's Light Stage 3.)

The 6D transport matrix

The advantage of dual photography: capture of a scene as illuminated by different lights cannot be parallelized; capture of a scene as viewed by different cameras can be parallelized

Measuring the 6D transport matrix (diagram: projector, scene, mirror array, camera / camera array)

Relighting with complex illumination: step 1: measure the 6D transport matrix T; step 2: capture a 4D light field; step 3: relight the scene using the captured light field. (Diagram: projector, scene, camera array.) As before, C′ = Tᵀ P′, where P′ is the captured light field ((mn × uv) × 1), Tᵀ is pq × (mn × uv), and C′ is pq × 1.
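To make the sizes concrete (hypothetical numbers, not from the talk): with a 1024 × 768 projector (pq = 786,432) and a camera array of uv = 6 cameras at 640 × 480 (mn = 307,200), T has mn × uv = 1,843,200 rows and pq = 786,432 columns, and step 3 reduces to the single product C′ = Tᵀ P′ on a captured light field vector of length 1,843,200. The sheer size of T is why the adaptive measurement scheme on the next slide matters.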

Running time: the different rays within a projector can in fact be parallelized to some extent; this parallelism can be discovered using a coarse-to-fine adaptive scan; can measure a 6D transport matrix in 5 minutes

Can we measure an 8D transport matrix? (Diagram: projector array, scene, camera array.)

C = T P (P: pq × 1, C: mn × 1, T: mn × pq)