Lecture 33: Computational photography CS4670: Computer Vision Noah Snavely

Photometric stereo

Limitations
Big problems:
– doesn't work for shiny or semi-translucent objects
– shadows, inter-reflections
Smaller problems:
– camera and lights have to be distant
– calibration requirements: measure light source directions and intensities, and the camera response function
Newer work addresses some of these issues. Some pointers for further reading:
– Zickler, Belhumeur, and Kriegman, "Helmholtz Stereopsis: Exploiting Reciprocity for Surface Reconstruction," IJCV, Vol. 49, No. 2/3, 2002.
– Hertzmann and Seitz, "Example-Based Photometric Stereo: Shape Reconstruction with General, Varying BRDFs," IEEE Trans. PAMI, 2005.

Finding the direction of the light source
P. Nillius and J.-O. Eklundh, "Automatic estimation of the projected light source direction," CVPR 2001.

Application: detecting composite photos
[two example images: one fake photo, one real photo] Which is the real photo? Estimated light source directions that are inconsistent across a photo can reveal a composite.

The ultimate camera What does it do?

The ultimate camera
– Infinite resolution
– Infinite zoom control
– Desired object(s) are in focus
– No noise
– No motion blur
– Infinite dynamic range (can see dark and bright things)
– ...

Creating the ultimate camera
The "analog" camera has changed very little in over 100 years; we're unlikely to get there following this path. More promising is to combine "analog" optics with computational techniques: "computational cameras" or "computational photography."
This lecture will survey techniques for producing higher quality images by combining optics and computation. Common themes:
– take multiple photos
– modify the camera

Noise reduction
Take several images and average them. Why does this work?
Basic statistics: the variance of the mean decreases with the number of samples $n$: $\mathrm{Var}(\bar{X}) = \sigma^2 / n$, so averaging $n$ photos reduces the noise standard deviation by a factor of $\sqrt{n}$.
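A minimal sketch of this effect (my own, not from the lecture), simulated with NumPy: average $n$ noisy copies of a synthetic scene and watch the residual error fall roughly like $1/\sqrt{n}$.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0.2, 0.8, size=(64, 64))        # stand-in "scene"

for n in [1, 4, 16, 64]:
    # n exposures of the same scene, each with additive Gaussian noise
    shots = clean + rng.normal(0.0, 0.1, size=(n, 64, 64))
    avg = shots.mean(axis=0)
    rmse = np.sqrt(np.mean((avg - clean) ** 2))
    print(f"n={n:3d}  RMSE={rmse:.4f}")             # roughly 0.1 / sqrt(n)
```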

Field of view We can artificially increase the field of view by compositing several photos together (project 2).

Improving resolution: gigapixel images
Max Lyons, 2003: fused 196 telephoto shots.
A few other notable examples:
– Obama inauguration (gigapan.org)
– HDView (Microsoft Research)

Improving resolution: super resolution What if you don’t have a zoom lens?

Intuition (slides from Yossi Rubner & Miki Elad)
For a given band-limited image, the Nyquist sampling theorem states that if the uniform sampling is fine enough relative to the image's bandwidth (at or above the Nyquist rate), perfect reconstruction is possible.

Due to our limited camera resolution, however, we sample on an insufficient (too coarse) 2D grid.

However, if we take a second picture, shifting the camera slightly to the right, we obtain a second set of samples.

Similarly, by shifting down we get a third image.

And finally, by shifting down and to the right we get the fourth image.

By combining all four images the desired sampling density is obtained, and thus perfect reconstruction is guaranteed.
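A toy illustration of this interleaving idea (my sketch, not from the lecture): given four low-resolution images offset from one another by exactly one high-resolution pixel, the high-resolution image is recovered by placing each image's samples on its own sub-grid.

```python
import numpy as np

def interleave4(tl, tr, bl, br):
    """Fuse four half-resolution images, offset by one full-res pixel,
    into one full-resolution image (pure global translation assumed)."""
    h, w = tl.shape
    hi = np.empty((2 * h, 2 * w), dtype=tl.dtype)
    hi[0::2, 0::2] = tl   # no shift
    hi[0::2, 1::2] = tr   # shifted right
    hi[1::2, 0::2] = bl   # shifted down
    hi[1::2, 1::2] = br   # shifted down and right
    return hi

# Check: sampling a known image on the four shifted grids and fusing
# them reproduces the image exactly.
full = np.arange(64, dtype=float).reshape(8, 8)
parts = [full[r::2, c::2] for r in (0, 1) for c in (0, 1)]
assert np.array_equal(interleave4(*parts), full)
```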

Example
3:1 scale-up in each axis using 9 images, with pure global translation between them.

Dynamic Range Typical cameras have limited dynamic range

HDR images: merge multiple inputs
[histograms of pixel count vs. scene radiance for several exposures]

HDR images: merged result
[histogram of pixel count vs. radiance for the merged image]

Camera is not a photometer!
– Limited dynamic range: 8 bits capture only 2 orders of magnitude of light intensity, while we can see ~10 orders of magnitude
– Unknown, nonlinear response: pixel intensity ≠ amount of light (# photons, or "radiance")
Solution: recover the response curve from multiple exposures, then reconstruct the radiance map.
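A minimal merge sketch (my own, assuming the log-inverse response g has already been recovered, e.g. in the style of Debevec & Malik's least-squares fit): each exposure votes for the log radiance of a pixel, weighted to trust mid-range intensities most.

```python
import numpy as np

def merge_hdr(images, exposure_times, g):
    """Estimate a log-radiance map from differently exposed images.

    images: list of uint8 arrays (same shape); exposure_times: seconds;
    g: recovered response, g[z] = log(exposure) for pixel value z (len 256).
    """
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        z = img.astype(int)
        # Hat weighting: trust mid-tones, down-weight under/over-exposed pixels
        w = np.minimum(z, 255 - z).astype(np.float64)
        num += w * (g[z] - np.log(t))
        den += w
    return num / np.maximum(den, 1e-8)   # log radiance per pixel
```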

Camera response function

Capture and composite several photos
Works for:
– field of view
– resolution
– signal-to-noise
– dynamic range
– focus
But sometimes you can do better by modifying the camera…

Focus
What if we want to produce images where the desired object is guaranteed to be in focus? Or where everything is in focus?

Light field camera [Ng et al., 2005]

Conventional vs. light field camera
[side-by-side diagrams of a conventional camera and a light field camera]

Rays are reorganized into many smaller images corresponding to subapertures of the main lens

Prototype camera
4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels per lens
– Contax medium format camera
– Kodak 16-megapixel sensor
– Adaptive Optics microlens array: 125 μm square-sided microlenses
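To make the geometry concrete, here is a sketch (mine, not Ng's code) of pulling sub-aperture views out of a raw lenslet image with roughly these dimensions: taking pixel (u, v) under every microlens yields one view, i.e. the scene as seen through one sub-aperture of the main lens.

```python
import numpy as np

LENSES, PER_LENS = 292, 14              # 292×292 lenslets, 14×14 pixels each
raw = np.random.rand(LENSES * PER_LENS, LENSES * PER_LENS)  # raw sensor image

def subaperture_view(raw, u, v, per_lens=PER_LENS):
    """The image formed by pixel (u, v) under every microlens."""
    return raw[u::per_lens, v::per_lens]   # shape: (292, 292)

center = subaperture_view(raw, 7, 7)    # rays through the aperture center
left   = subaperture_view(raw, 7, 2)    # rays entering left of center
# Stepping (u, v) across the 14×14 grid sweeps the virtual viewpoint.
```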

What can we do with the captured rays? Change viewpoint

Example of digital refocusing
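Refocusing itself is shift-and-add (again a sketch under the same assumed layout as above, not the actual implementation): to focus at a different depth, shift each sub-aperture view in proportion to its offset from the aperture center before averaging.

```python
import numpy as np

def refocus(views, alpha):
    """views: n×n nested list of sub-aperture images (as extracted above).
    alpha: focal-plane parameter; alpha = 0 reproduces the original focus."""
    n = len(views)
    c = (n - 1) / 2.0
    acc = np.zeros_like(views[0][0], dtype=np.float64)
    for u in range(n):
        for v in range(n):
            du = int(round(alpha * (u - c)))   # shift proportional to
            dv = int(round(alpha * (v - c)))   # sub-aperture offset
            # np.roll wraps at the borders; acceptable for a sketch
            acc += np.roll(views[u][v], (du, dv), axis=(0, 1))
    return acc / (n * n)
```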

All-in-focus images
Combine the sharpest parts of all of the individual refocused images; alternatively, use a single pixel from each lenslet subimage (a pinhole-like sub-aperture view, so everything is in focus).
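One simple way to fuse a refocused stack into an all-in-focus image (a sketch of the general focal-stacking idea, not necessarily the lecture's method): measure local sharpness with a Laplacian and keep, per pixel, the stack slice where sharpness peaks.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def all_in_focus(stack):
    """stack: (n, H, W) array of images refocused at n different depths."""
    # Local sharpness: smoothed squared Laplacian response per slice.
    sharp = np.stack([uniform_filter(laplace(s) ** 2, size=9) for s in stack])
    best = sharp.argmax(axis=0)             # (H, W) index of sharpest slice
    return np.take_along_axis(stack, best[None], axis=0)[0]
```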

All-in-focus
If you only want to produce an all-in-focus image, there are simpler alternatives:
– Wavefront coding [Dowski 1995]
– Coded aperture [Levin et al., SIGGRAPH 2007], [Raskar et al., SIGGRAPH 2007]: can also produce a change in focus (à la Ng's light field camera)

Why are images blurry?
– Camera focused at the wrong distance
– Depth of field
– Motion blur
How can we remove the blur?

Motion blur
Especially difficult to remove, because the blur kernel is unknown:
$b = k \otimes x$
with both the kernel $k$ and the sharp image $x$ unknown; only the blurry image $b$ is observed.

Multiple possible solutions
[figure: several different sharp image / blur kernel pairs, each satisfying $b = k \otimes x$ for the same blurry image $b$]
Slide courtesy Rob Fergus.
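A tiny demonstration of this ambiguity (my sketch): the "no-blur" explanation, where the kernel is an impulse and the "sharp" image is just the blurry photo itself, reproduces the observation exactly, which is why priors are needed to rule it out.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
sharp = rng.uniform(size=(32, 32))
kernel = np.ones((1, 5)) / 5.0                      # horizontal motion blur
blurry = convolve2d(sharp, kernel, mode="same")

# Trivial alternative "solution": delta kernel + the blurry image itself.
delta = np.zeros((1, 5)); delta[0, 2] = 1.0
blurry2 = convolve2d(blurry, delta, mode="same")
print(np.allclose(blurry, blurry2))                 # True: fits the data too
```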

Priors can help
Priors on natural images can rank candidate explanations: image A is more "natural" than image B.

Natural image statistics
Histograms of image gradients in natural images show a characteristic distribution with heavy tails: mostly near-zero gradients, with occasional large ones at edges.
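A sketch of how such a statistic is measured (an assumed workflow; any grayscale image in [0, 1] will do): compute horizontal finite differences and look at the log-histogram, which for natural images is much more peaked and heavy-tailed than a Gaussian.

```python
import numpy as np

def log_gradient_histogram(img, bins=101):
    """Log-density of horizontal gradients of a grayscale image in [0, 1]."""
    dx = np.diff(img, axis=1).ravel()               # horizontal gradients
    hist, edges = np.histogram(dx, bins=bins, range=(-1, 1), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Heavy tails show up as a slow, roughly linear decay in log-density,
    # rather than the parabola a Gaussian would give.
    return centers, np.log(hist + 1e-12)
```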