Announcements Panorama artifacts are online – send your votes to Li.

Light Readings: Andrew Glassner, Principles of Digital Image Synthesis (Vol. 1), Morgan Kaufmann Publishers, 1995 (class handout). Slides by Ted Adelson.

Properties of light
Today: What is light? How do we measure it?
Next time: How does light propagate? How does light interact with matter? Shape from shading.

What is light? Electromagnetic radiation (EMR) moving along rays in space. R(λ) is EMR, measured in units of power (watts); λ is wavelength. Light field: we can describe all of the light in the scene by specifying the radiation (or "radiance") along all light rays arriving at every point in space and from every direction.

The light field Known as the plenoptic function. If you know R, you can predict how the scene would appear from any viewpoint. How? It gives a complete description of scene appearance. Assume radiance does not change along a ray – what does this assume about the world? Parameterize each ray by its intersections (u, v) and (s, t) with two planes; the wavelength and time parameters are usually dropped. How could you capture a light field? Note that t here is a plane coordinate, not time.
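The two-plane parameterization above can be sketched numerically: a discretized light field is just a 4D array indexed by (u, v, s, t), and rendering a new view amounts to sampling one ray per output pixel. The array shape and the sample values here are made-up illustrative placeholders, not captured data.

```python
import numpy as np

# Hypothetical discretized light field: radiance indexed by (u, v, s, t).
# (u, v) indexes the first plane, (s, t) the second; t is a plane
# coordinate, not time.
U, V, S, T = 8, 8, 16, 16
rng = np.random.default_rng(0)
L = rng.random((U, V, S, T))  # stand-in for captured radiance samples

def sample_ray(u, v, s, t):
    """Radiance along the ray through (u, v) on plane 1 and (s, t) on plane 2.

    Nearest-neighbor lookup; a real renderer would interpolate
    (quadrilinearly) between the 16 surrounding samples.
    """
    return L[int(round(u)), int(round(v)), int(round(s)), int(round(t))]

# A novel view is synthesized by tracing one such ray per output pixel.
print(sample_ray(3.2, 4.7, 10.1, 2.9))
```

Because radiance is assumed constant along each ray, the same 4D table answers queries from any viewpoint outside the two planes.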

Stanford light field gantry

More info on light fields If you're interested to read more:
The plenoptic function – original reference: E. Adelson and J. Bergen, "The Plenoptic Function and the Elements of Early Vision," in M. Landy and J. A. Movshon (eds.), Computational Models of Visual Processing, MIT Press, 1991.
L. McMillan and G. Bishop, "Plenoptic Modeling: An Image-Based Rendering System," Proc. SIGGRAPH 1995.
The light field – M. Levoy and P. Hanrahan, "Light Field Rendering," Proc. SIGGRAPH 1996.
S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, "The Lumigraph," Proc. SIGGRAPH 1996.

What is light? Electromagnetic radiation (EMR) moving along rays in space. R(λ) is EMR, measured in units of power (watts); λ is wavelength. Perceiving light: How do we convert radiation into "color"? What part of the spectrum do we see?

The visible light spectrum We “see” electromagnetic radiation in a range of wavelengths

Light spectrum The appearance of light depends on its power spectrum: how much power (or energy) there is at each wavelength (compare daylight with a tungsten bulb). Our visual system converts a light spectrum into "color" – this is a rather complex transformation.

The human visual system Color perception: light hits the retina, which contains photosensitive cells – rods and cones. These cells convert the spectrum into a few discrete values.

Density of rods and cones Rods and cones are non-uniformly distributed on the retina. Rods are responsible for intensity, cones for color. Fovea: a small region (1 or 2°) at the center of the visual axis containing the highest density of cones (and no rods). There is less visual acuity in the periphery – many rods are wired to the same neuron.

Demonstrations of visual acuity With one eye shut, at the right distance, all of these letters should appear equally legible (Glassner, 1.7).

Demonstrations of visual acuity With left eye shut, look at the cross on the left. At the right distance, the circle on the right should disappear (Glassner, 1.8).

Brightness contrast and constancy The apparent brightness depends on the surrounding region. Brightness contrast: a constant colored region seems lighter or darker depending on the surround. Brightness constancy: a surface looks the same under widely varying lighting conditions.

Light response is nonlinear Our visual system has a large dynamic range We can resolve both light and dark things at the same time One mechanism for achieving this is that we sense light intensity on a logarithmic scale –an exponential intensity ramp will be seen as a linear ramp Another mechanism is adaptation –rods and cones adapt to be more sensitive in low light, less sensitive in bright light.
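The logarithmic claim above can be checked with a toy calculation (the values are illustrative): if perceived brightness is proportional to log intensity, an exponentially increasing intensity ramp produces equal perceptual steps, i.e. it is seen as a linear ramp.

```python
import math

# Physical intensities doubling at each step: an exponential ramp.
intensities = [2 ** k for k in range(8)]  # 1, 2, 4, ..., 128

# Toy model of the visual response: brightness ~ log(intensity).
perceived = [math.log(i) for i in intensities]

# The perceptual steps are all equal (log 2), so the exponential
# intensity ramp is perceived as a linear brightness ramp.
steps = [b - a for a, b in zip(perceived, perceived[1:])]
assert all(abs(s - math.log(2)) < 1e-12 for s in steps)
```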

Visual dynamic range

After images Tired photoreceptors send out a negative response after a strong stimulus.

Color perception Three types of cones. Each is sensitive in a different region of the spectrum, but the regions overlap: Short (S) corresponds to blue, Medium (M) to green, Long (L) to red. Different sensitivities: we are more sensitive to green than to red. Colorblindness is a deficiency in at least one type of cone.

Color perception Rods and cones act as filters on the spectrum. To get the output of a filter, multiply its response curve by the spectrum and integrate over all wavelengths; each cone yields one number. Q: How can we represent an entire spectrum with 3 numbers? A: We can't! Most of the information is lost. As a result, two different spectra may appear indistinguishable – such spectra are known as metamers.

Perception summary The mapping from radiance to perceived color is quite complex! We throw away most of the data; we apply a logarithm; brightness is affected by pupil size; there are brightness contrast and constancy effects; and there are afterimages.

Camera response function Now, how about the mapping from radiance to pixels? It's also complex, but better understood. This mapping is known as the film or camera response function. How can we recover radiance values given pixel values? Why should we care? It is useful if we want to estimate material properties, shape from shading requires radiance, and it enables creating high dynamic range images. What does the response function depend on?

Recovering the camera response Method 1: Carefully model every step in the pipeline – measure the aperture, model the film, the digitizer, etc. This is *really* hard to get right. Method 2: Calibrate (estimate) the response function – image several objects with known radiance, measure the pixel values, and fit a function from radiance to pixel intensity (the response curve).
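Method 2 can be sketched as a small curve fit. The gamma-shaped "true" response and the calibration radiances below are invented for illustration; the point is only the fit-a-function-to-(radiance, pixel)-pairs step.

```python
import numpy as np

# Invented ground truth to be recovered: pixel = radiance ** 0.45.
def true_response(radiance):
    return radiance ** 0.45

# Method 2: image objects of known radiance, record their pixel values.
radiance = np.linspace(0.05, 1.0, 20)   # known calibration targets
pixels = true_response(radiance)        # measured pixel values

# Fit a function (here a power law) to the (radiance, pixel) pairs:
# log(pixel) = g * log(radiance), solved by linear least squares.
gamma, = np.linalg.lstsq(np.log(radiance)[:, None], np.log(pixels),
                         rcond=None)[0]
print(round(float(gamma), 3))  # recovers the exponent, ~0.45
```

Real calibration would use many pixels, noise, and a more flexible model than a single power law, but the structure is the same: known radiance in, pixel value out, fit the curve between them.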

Recovering the camera response Method 3: Calibrate the response function from several images. Consider taking images with shutter speeds 1/1000, 1/100, 1/10, and 1. Q: What is the relationship between the radiance or pixel values in consecutive images? A: Each image receives 10 times as much exposure (exposure = radiance × time), and pixel intensity = f(exposure); we can use this to recover the camera response function. For more info: P. E. Debevec and J. Malik, "Recovering High Dynamic Range Radiance Maps from Photographs," Proc. SIGGRAPH 97, August 1997.

High dynamic range imaging Techniques: Debevec; Columbia.

High dynamic range imaging Basic approach: Choose a single pixel. Plot its pixel value as a function of exposure. Repeat for many other pixels. Fit a response function. Invert it to obtain exposure from pixel values.
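The basic approach above can be sketched end to end under a strong simplifying assumption: the response is a known power law, so "fitting" and "inverting" are each one line. The shutter speeds match the slide; the simulated scene and gamma value are made up. Debevec and Malik solve the general case without assuming a functional form for the response.

```python
import numpy as np

rng = np.random.default_rng(1)
radiance = rng.uniform(0.01, 5.0, size=100)    # unknown scene radiance
times = np.array([1/1000, 1/100, 1/10, 1.0])   # shutter speeds, 10x apart

def camera(exposure):
    """Simulated camera: clip to the sensor range, then a gamma response."""
    return np.clip(exposure, 0.0, 1.0) ** 0.45

# Capture: one image per shutter speed, pixel = f(radiance * time).
images = [camera(radiance * t) for t in times]

# Invert the (here assumed known) response, divide out the exposure time,
# and average the unsaturated estimates to recover radiance per pixel.
estimates = []
for img, t in zip(images, times):
    valid = img < 1.0                            # ignore saturated pixels
    est = np.where(valid, img ** (1 / 0.45) / t, np.nan)
    estimates.append(est)
recovered = np.nanmean(estimates, axis=0)

# Every pixel is unsaturated in the shortest exposure, so all are recovered.
print(float(np.nanmax(np.abs(recovered - radiance))))  # ~0
```

The combination of exposures is what buys the dynamic range: bright pixels are trusted from the short exposures, dim pixels from the long ones.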