ECEN 4616/5616 Optoelectronic Design
Class website with past lectures, various files, and assignments: (The first assignment will be posted here on 1/22)
To view video recordings of past lectures, go to: and select “course login” from the upper right corner of the page.
Lecture #25: 3/12/14

LightField Photography
“Shoot first, Focus later”: “LightField” photography refers to cameras able to capture enough information about an object that the resulting image can be ‘re-focused’ later. The information captured is called the “LightField”.

LightField Photography
The lightfield is a geometrical optics concept whose elements are rays. Consider a lens interacting with rays coming from several objects, all at different distances from the lens. In a conventional camera, a detector may be placed at some distance, Z, behind the lens. What the detector records is a 2D record of where the rays strike the detector. If the detector happens to be at the conjugate distance from some object, then an image of that object will be recorded. If the detector and object are not at conjugate distances (that is, related by the imaging equation 1/z_object + 1/z_image = 1/f), then only a blur will be recorded. The only thing we can deduce about the rays from the image is where they ended up.
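As a quick numeric illustration of the imaging equation (a minimal Python sketch; the 50 mm focal length and 2 m object distance are made-up values, not from the slides):

```python
# Thin-lens imaging equation: 1/z_o + 1/z_i = 1/f
# Illustrative numbers: a 50 mm lens imaging an object 2 m away.
f = 0.050    # focal length (m)
z_o = 2.0    # object distance (m)

z_i = 1.0 / (1.0 / f - 1.0 / z_o)   # conjugate (image) distance
print(f"Conjugate distance: {z_i * 1000:.1f} mm")   # ~51.3 mm
```

A detector placed at about 51.3 mm records a sharp image of that object; at any other distance, the rays from it land as a blur.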

LightField Photography
Suppose that we are able to record, for each ray, not only where it arrives on the detector, but where it was previously: its intersections with two ray-measurement planes (#1 and #2 in the figure). We now have 4 measurements for each ray: its 2D position at each of two planes. This is called the “4D Light Field”. Since rays travel in straight lines, we also have all the information we need to calculate where each ray would be at any position, and hence the image that would be captured if a detector were at that position.
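Since the two-plane representation determines each ray completely, propagating a ray to any other plane is just linear interpolation. A minimal Python sketch (the function name and the plane separation d are illustrative):

```python
def ray_at_z(x1, y1, x2, y2, d, z):
    """Position at distance z past plane 1 of a ray that hits
    plane 1 at (x1, y1) and plane 2 (a distance d away) at (x2, y2)."""
    t = z / d                  # fraction of the plane-1-to-plane-2 distance
    return x1 + t * (x2 - x1), y1 + t * (y2 - y1)

# A ray hitting plane 1 at (0, 0) and plane 2 (10 mm away) at (1, 0) mm
# has reached (2.5, 0) mm after 25 mm of travel:
print(ray_at_z(0.0, 0.0, 1.0, 0.0, d=10.0, z=25.0))   # (2.5, 0.0)
```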

LightField Photography
Somewhat surprisingly, the concept of the lightfield is over 100 years old. Perhaps the first clear description of the concept was by Gabriel Lippmann, who called it “Integral Photography”. How is this a measure of the 4D light field? Assume that the object, A, is far away compared to the size of the lenslets. Then each lenslet records four numbers about each ray (at the Z-position of the lenslet array):
1. Its x-y position at the lenslet plane (to the precision of the lenslet size).
2. Its direction of travel, encoded in the x-y position at which the ray is recorded behind the individual lenslet.
Note that knowing the position and direction of a ray is equivalent to knowing its position at two planes: either set of data uniquely determines a straight line in 3D space, and hence uniquely determines a ray.
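A minimal sketch of decoding one sample from Lippmann's layout into (position, direction) form, under a paraxial approximation; the lenslet pitch p, lenslet focal length fl, and the sign convention are illustrative assumptions, not from the slides:

```python
def decode_sample(i, j, dx, dy, p, fl):
    """Ray recorded by lenslet (i, j) at pixel offset (dx, dy) from the
    lenslet axis; p = lenslet pitch, fl = lenslet focal length.
    Returns (position at lenslet plane, direction angles), paraxial."""
    x, y = i * p, j * p                  # position, to lenslet precision
    tx, ty = -dx / fl, -dy / fl          # direction encoded by pixel offset
    return (x, y), (tx, ty)
```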

Lenticular Printing
Lippmann planned to display the developed photograph behind the same lenslet array, so that viewers would see a different perspective, depending on their position and direction of view. (The computational effort to re-focus the light field at another position would have been prohibitive in 1908, as it would have had to be done entirely by hand.) The Lippmann work led to the development of “Lenticular Printing”: the method of producing 3D images (and now, short videos) on pieces of paper, using a micro-array of cylindrical lenslets over a set of prepared image slices. In the figure, each color represents a different image, separated into slices; the array of cylindrical lenslets (seen edge-on) projects each image in a different direction, so different images are visible from different directions of view.
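A minimal Python/NumPy sketch of the interlacing step that prepares the image slices, assuming one cylindrical lenslet per source-image column; real lenticular software must also match the print resolution to the lenslet pitch:

```python
import numpy as np

def interlace(views):
    """views: list of N images, each (H, W, 3).
    Returns an (H, W*N, 3) print in which each lenticule covers
    column c of every view, so each view is seen from one direction."""
    n = len(views)
    h, w, ch = views[0].shape
    out = np.zeros((h, w * n, ch), dtype=views[0].dtype)
    for c in range(w):                       # one lenticule per column
        for v, img in enumerate(views):
            out[:, c * n + v] = img[:, c]    # slice v under lenticule c
    return out
```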

Lenticular Printing
Here’s another illustration of how each eye (or each viewer position) can see a different picture. Today, you can buy sheets of lenticular material and make your own ‘light-field’ displays using publicly licensed software and a printer. Lenticular prints project a 3D ‘slice’ of the 4D lightfield, since they don’t change if the viewer moves along the long axis of the lenslets.

Lenticular Printing
Lenticular printing has developed many uses, such as greeting cards, movie posters, and other advertisements. Shown here is a lenticular print used to demonstrate a high-zoom-ratio lens via a table-top poster, along with examples of movie posters.

LightField Photography
Today, however, we have the computational capability to fully manipulate the 4D light field: re-focusing to different planes, or even focusing onto curved surfaces (figure labels: curved object, 4D lightfield detector, calculated image plane). This flexibility might be used in a microscope, say, for imaging the surface of a cell’s nucleus. There is no other way that such an image could be made.
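One way to realize focusing onto a curved surface computationally is to refocus the light field at several depths and pick, per output pixel, the depth matching the surface. A minimal sketch (assumed inputs: a refocus routine such as the one sketched later in this lecture, a per-pixel depth map of the surface, and a set of candidate depths):

```python
import numpy as np

def focus_on_surface(lightfield, depth_map, depths, refocus):
    """Compose an image focused on a curved surface from a focal stack.
    depth_map: (H, W) surface depth per output pixel; depths: (D,) samples."""
    stack = np.stack([refocus(lightfield, d) for d in depths])   # (D, H, W)
    idx = np.abs(depths[:, None, None] - depth_map[None]).argmin(axis=0)
    h, w = depth_map.shape
    return stack[idx, np.arange(h)[:, None], np.arange(w)[None, :]]
```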

LightField Photography
There have been many ideas (and experiments) on how to measure the light field, but the currently commercially successful ideas are variations on Lippmann’s original design, sometimes called a “Plenoptic” camera instead of a Lightfield camera. Layout of a lightfield detector (figure labels: camera lens, lenslet array, pixel array, exit pupil):
1. Each lenslet in the array images the exit pupil onto the detector.
2. Each image of the pupil (on the detector) represents one pixel in the final image.
The resolution of the final image is that of the lenslet array, not the pixel array; the 4D lightfield needs 1-2 orders of magnitude more information to define than a single image. Each pixel of the image at the detector plane is simply the sum of all the pixels in the exit pupil image for its lenslet.
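A minimal sketch of that summation, assuming an idealized raw capture in which each lenslet's pupil image occupies an aligned s x s block of sensor pixels (a real camera needs calibration, alignment, and resampling first):

```python
import numpy as np

def natural_image(raw, s):
    """raw: (H*s, W*s) plenoptic capture, s x s pixels per lenslet.
    Returns the (H, W) image with one pixel per lenslet: the sum of
    each lenslet's pupil image."""
    hs, ws = raw.shape
    h, w = hs // s, ws // s
    return raw.reshape(h, s, w, s).sum(axis=(1, 3))
```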

LightField Photography
Re-Focusing the Lightfield: To re-create the image at a different location, extend each ray in the detected lightfield to the Z-location of the virtual detector, and add its value to the (virtual) pixel (lenslet) reached. (The figure shows an object, the exit pupil plane, lenslets A, B, and C with detector pixels a, b, and c behind them at the real detector location, and virtual pixels A’, B’, and C’ with a’, b’, and c’ at the virtual detector location.) The image pixel B’ in the virtual detector is made from rays 1, 2, and 3, whose values are the detector pixels a, b, and c, behind image pixels (lenslets) A, B, and C. The 4D lightfield can be recreated at the virtual location as well, by adding the rays to the corresponding pixels in the virtual detector: a’, b’, and c’.
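In the common 4D array layout lf[u, v, y, x] (pupil position (u, v), lenslet (y, x)), this ray-extension becomes a shift-and-add: each sub-aperture image is shifted in proportion to its pupil position, then all are averaged. A minimal sketch; the parameter alpha (ratio of virtual to real detector distance) and the array layout are assumptions, not from the slides:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(lf, alpha):
    """Shift-and-add refocus of a 4D light field lf[u, v, y, x].
    alpha = 1 reproduces the image at the real detector plane."""
    U, V, H, W = lf.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Rays through pupil position (u, v) shift in proportion to
            # their angle when the detector plane moves.
            du = (u - (U - 1) / 2) * (1 - 1 / alpha)
            dv = (v - (V - 1) / 2) * (1 - 1 / alpha)
            out += nd_shift(lf[u, v], (du, dv), order=1)
    return out / (U * V)
```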

LightField Photography
Raw light field image and sub-images (labeled A, B, and C in the figure), showing that each ‘pixel’ of the image is made of an image of the exit pupil of the optics. From “Light Field Photography with a Hand-held Plenoptic Camera”, Stanford Tech Report CTSR. (This paper is an excellent introduction to the optical and algorithmic design issues involved in lightfield photography.)

LightField Photography
There are several companies now marketing LightField Cameras.

LightField Photography
‘Natural’ image from summing the pupil images of the raw light-field photograph. (This is the image that would have resulted if the lenslet array had been replaced by a detector array of the same resolution.) “Re-Focused” images are produced by combining different pixels from the pupil images.

LightField Photography
(Figure: raw image and re-focused image.) It is not necessary for the lenslet array to be focused on the exit pupil. One company has marketed a lightfield camera in which the sub-images behind each lenslet reproduce segments of the object.

LightField Photography
(Figure: another raw image and its re-focused result.)

LightField Photography
We’ve been talking about light field detectors made by putting lenslet arrays over higher-resolution digital detectors. From our discussion of pinhole photography, however, it’s evident that the lenslet array could be replaced by a simple pinhole array and still make a lightfield detector, at some cost in sensitivity (figure labels: camera lens, pinhole array, pixel array, exit pupil).

LightField Photography
Here is a website that demonstrates how to convert a digital camera to a lightfield camera at low cost (~$5) using a printed pinhole array: “Building your own Light Field Camera”.

Light Field Microscope
The light field capture concept would seem to be of great interest to microscopists, as it would allow post-capture exploration of the depth details of the object. The extreme loss of resolution, however, has prevented the technique from being used as more than just a demonstration. Here is a schematic of a confocal scanning microscope (the object is scanned in x-y), modified into a scanning light field microscope by replacing the single detector with a pixelated detector: instead of an image at one depth, the entire light field is captured over the scan.