Lecture 21: Light, reflectance and photometric stereo

Lecture 21: Light, reflectance and photometric stereo CS6670: Computer Vision Noah Snavely

Announcements Final projects Midterm reports due November 24 (next Tuesday) by 11:59pm (upload to CMS) State the problem Describe progress so far, any problems that have come up

What is light? Electromagnetic radiation (EMR) moving along rays in space. R(λ) is EMR, measured in units of power (watts); λ is wavelength. Perceiving light: how do we convert radiation into “color”? What part of the spectrum do we see?

Visible light We “see” electromagnetic radiation in a range of wavelengths

Light spectrum: the appearance of light depends on its power spectrum, i.e., how much power (or energy) there is at each wavelength (compare fluorescent light, daylight, and a tungsten bulb). Our visual system converts a light spectrum into “color”; this is a rather complex transformation.

The human visual system: color perception. Light hits the retina, which contains photosensitive cells: rods and cones. These cells convert the spectrum into a few discrete values.

Density of rods and cones Rods and cones are non-uniformly distributed on the retina Rods responsible for intensity, cones responsible for color Fovea - Small region (1 or 2°) at the center of the visual field containing the highest density of cones (and no rods). Less visual acuity in the periphery—many rods wired to the same neuron

Demonstrations of visual acuity With one eye shut, at the right distance, all of these letters should appear equally legible (Glassner, 1.7).

Demonstrations of visual acuity With left eye shut, look at the cross on the left. At the right distance, the circle on the right should disappear (Glassner, 1.8).

Brightness contrast and constancy The apparent brightness depends on the surrounding region brightness contrast: a constant colored region seems lighter or darker depending on the surrounding intensity: http://www.sandlotscience.com/Contrast/Checker_Board_2.htm brightness constancy: a surface looks the same under widely varying lighting conditions. “Approximate brightness constancy, a similar effect, makes us tend to see objects in terms of their reflecting power rather than the amount of light they actually reflect. Thus we can almost always identify a piece of white paper as white even though it is placed in shadow where it actually reflects much less light to the eye than a piece of gray paper in full illumination.”

Light response is nonlinear Our visual system has a large dynamic range We can resolve both light and dark things at the same time One mechanism for achieving this is that we sense light intensity on a logarithmic scale an exponential intensity ramp will be seen as a linear ramp Another mechanism is adaptation rods and cones adapt to be more sensitive in low light, less sensitive in bright light.

Visual dynamic range

After images Tired photoreceptors Send out negative response after a strong stimulus http://www.sandlotscience.com/Aftereffects/Andrus_Spiral.htm

Color perception: there are three types of cones, each sensitive in a different region of the spectrum, though the regions overlap. Short (S) corresponds to blue, Medium (M) to green, and Long (L) to red. They have different sensitivities: we are more sensitive to green than red. Colorblindness is a deficiency in at least one type of cone.

Color perception: rods and cones act as filters on the spectrum. To get the output of a filter, multiply its response curve by the spectrum and integrate over all wavelengths; each cone yields one number. Q: How can we represent an entire spectrum with 3 numbers? A: We can't! Most of the information is lost. As a result, two different spectra may appear indistinguishable; such spectra are known as metamers. http://www.cs.brown.edu/exploratories/freeSoftware/repository/edu/brown/cs/exploratories/applets/spectrum/metamers_guide.html
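
As a formula (a standard formulation; the symbols σ_k and Φ are introduced here for illustration and are not from the slides): if σ_k(λ) is the response curve of cone type k ∈ {S, M, L} and Φ(λ) is the power spectrum of the incoming light, the output of that cone is

$$ c_k \;=\; \int \sigma_k(\lambda)\,\Phi(\lambda)\,d\lambda, \qquad k \in \{S, M, L\}. $$

The full spectrum is thus projected onto just three numbers (c_S, c_M, c_L); any two spectra with the same three projections are metamers.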

Perception summary The mapping from radiance to perceived color is quite complex! We throw away most of the data We apply a logarithm Brightness affected by pupil size Brightness contrast and constancy effects Afterimages The same is true for cameras But we have tools to correct for these effects Coming soon: Computational Photography lecture

Light transport

Light sources. Basic types: point source; directional source (a point source that is infinitely far away); area source (a union of point sources). More generally, a light field can describe *any* distribution of light sources. What happens when light hits an object?

Reflectance spectrum (albedo) To a first approximation, surfaces absorb some wavelengths of light and reflect others These spectra are multiplied by the spectra of the incoming light
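
In symbols (a minimal sketch; the albedo notation a(λ) is introduced here, not taken from the slides): if Φ_in(λ) is the incoming spectrum and a(λ) ∈ [0, 1] is the reflectance spectrum (albedo), then to a first approximation the reflected spectrum is

$$ \Phi_{\text{out}}(\lambda) \;=\; a(\lambda)\,\Phi_{\text{in}}(\lambda). $$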

Specular reflection / transmission. Materials: physics -> conductors and insulators (mirrors and glass); complex interface -> glossy behavior; complex subsurface -> diffuse. (from Steve Marschner)

Non-smooth-surfaced materials Looked at physical basis of specular reflection What about other types? from Steve Marschner

(Figure: direct vs. global components of illumination.)

Classic reflection behavior: ideal specular (Fresnel), rough specular, Lambertian. (from Steve Marschner)

What happens when a light ray hits an object? Some of the light gets absorbed (converted to other forms of energy, e.g., heat). Some gets transmitted through the object, possibly bent through “refraction”; a transmitted ray could possibly bounce back. Some gets reflected; as we saw before, it could be reflected in multiple directions (possibly all directions) at once. Let's consider the case of reflection in detail.

The BRDF. Given an incoming ray and an outgoing ray (directions measured relative to the surface normal), what proportion of the incoming light is reflected along the outgoing ray? The answer is given by the BRDF: the Bidirectional Reflectance Distribution Function.
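
A standard way to write the BRDF (the notation here follows common usage and may differ from the slide's): with incoming direction (θ_i, φ_i) and outgoing direction (θ_e, φ_e) measured relative to the surface normal,

$$ \rho(\theta_i, \phi_i;\, \theta_e, \phi_e) \;=\; \frac{L_e(\theta_e, \phi_e)}{E_i(\theta_i, \phi_i)}, $$

the ratio of outgoing radiance along the outgoing ray to the irradiance arriving along the incoming ray.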

Constraints on the BRDF. Energy conservation: the quantity of outgoing light ≤ the quantity of incident light, i.e., the integral of the BRDF ≤ 1. Helmholtz reciprocity: reversing the path of light produces the same reflectance, ρ(θ_i, φ_i; θ_e, φ_e) = ρ(θ_e, φ_e; θ_i, φ_i).

Diffuse reflection: dull, matte surfaces like chalk or latex paint. Microfacets scatter incoming light randomly; the effect is that light is reflected equally in all directions.

Diffuse reflection is governed by Lambert's law: viewed brightness does not depend on the viewing direction, but it does depend on the direction of illumination. This is the model most often used in computer vision. (Notation: L, N, V are unit vectors; I_e = outgoing radiance; I_i = incoming radiance.)
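
As an equation (the standard Lambertian shading model; k_d is the diffuse albedo, matching the symbol list above):

$$ I_e \;=\; k_d\,(\mathbf{N}\cdot\mathbf{L})\, I_i, $$

so the BRDF of a Lambertian surface is simply the constant k_d (up to a 1/π factor in strict radiometric conventions): brightness is independent of V but falls off as the cosine of the angle between N and L.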

Specular reflection. For a perfect mirror, light is reflected about N; near-perfect mirrors have a highlight around the mirror direction R. A common model:
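
The common model referred to here is most likely the Phong-style specular lobe (a sketch, with R the mirror direction of L about N and n_s the shininess exponent):

$$ I_e \;=\; k_s\,(\mathbf{V}\cdot\mathbf{R})^{\,n_s}\, I_i, \qquad \mathbf{R} \;=\; 2(\mathbf{N}\cdot\mathbf{L})\,\mathbf{N} - \mathbf{L}. $$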

(Figures: specular reflection under a moving light source, and the effect of changing the exponent n_s.)

Phong illumination model: a Phong approximation of surface reflectance. Assume reflectance is modeled by three components: a diffuse term, a specular term, and an ambient term (to compensate for inter-reflected light). (Notation: L, N, V are unit vectors; I_e = outgoing radiance; I_i = incoming radiance; I_a = ambient light; k_a = ambient light reflectance factor; (x)_+ = max(x, 0).)
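
Combining the three terms, the usual Phong form consistent with the symbol list above (a sketch; the exact clamping and grouping on the slide may differ):

$$ I_e \;=\; k_a\, I_a \;+\; k_d\, I_i\,(\mathbf{N}\cdot\mathbf{L})_{+} \;+\; k_s\, I_i\,(\mathbf{V}\cdot\mathbf{R})_{+}^{\,n_s}. $$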

BRDF models. Phenomenological: Phong [75], Ward [92], Lafortune et al. [97], Ashikhmin et al. [00]. Physical: Cook-Torrance [81], Dichromatic [Shafer 85], He et al. [91]. Here we're listing only some well-known examples.

Measuring the BRDF: a gonioreflectometer is a device for capturing the BRDF by moving a camera + light source; it needs careful control of illumination and environment. (Figures: a traditional gonioreflectometer and a design by Greg Ward.)

BRDF databases. MERL (Matusik et al.): 100 isotropic and 4 non-isotropic materials, densely sampled. CUReT (Columbia-Utrecht): 60 samples, more sparsely sampled, but also bidirectional texture functions (BTFs).

Questions? 3-minute break

Photometric Stereo (image: Merle Norman Cosmetics, Los Angeles). Readings: R. Woodham, Photometric Method for Determining Surface Orientation from Multiple Images. Optical Engineering 19(1):139-144 (1980).

Diffuse reflection: the image intensity of a point P. Simplifying assumptions: I = R_e, i.e., the camera response function f is the identity function (we can always achieve this in practice by solving for f and applying f^{-1} to each pixel in the image); R_i = 1, i.e., the light source intensity is 1 (we can achieve this by dividing each pixel in the image by R_i).
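
Under these assumptions the image intensity of a Lambertian point P reduces to (standard form, matching the Lambertian model earlier in the lecture):

$$ I \;=\; k_d\,(\mathbf{N}\cdot\mathbf{L}). $$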

Shape from shading. Suppose you can directly measure the angle between the normal and the light source. That's not quite enough information to compute surface shape, but it can be if you add some additional information, for example: assume a few of the normals are known (e.g., along the silhouette); constraints on neighboring normals ("integrability"); smoothness. It is hard to get this to work well in practice; plus, how many real objects have constant albedo?

Photometric stereo (setup: light directions L_1, L_2, L_3, surface normal N, viewing direction V). We can write this as a matrix equation:
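
Concretely (the notation is the conventional one, since the slide's equation is not in the transcript): with per-pixel intensities I_1, I_2, I_3 measured under light directions L_1, L_2, L_3, and writing G = k_d N,

$$ \begin{pmatrix} I_1 \\ I_2 \\ I_3 \end{pmatrix} \;=\; \begin{pmatrix} \mathbf{L}_1^{\top} \\ \mathbf{L}_2^{\top} \\ \mathbf{L}_3^{\top} \end{pmatrix} \mathbf{G}, \qquad \text{i.e. } \; \mathbf{I} = \mathsf{L}\,\mathbf{G}, \quad \mathbf{G} = k_d\,\mathbf{N}. $$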

Solving the equations
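
For three non-coplanar light directions the 3×3 matrix L is invertible, so at each pixel (a standard derivation; the slide's own equations are not in the transcript):

$$ \mathbf{G} = \mathsf{L}^{-1}\mathbf{I}, \qquad k_d = \lVert \mathbf{G} \rVert, \qquad \mathbf{N} = \frac{\mathbf{G}}{\lVert \mathbf{G} \rVert}. $$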

More than three lights: we get better results by using more lights. Least-squares solution: G = (L^T L)^{-1} L^T I; solve for N and k_d as before (see the sketch below). What's the size of L^T L? (3×3, regardless of the number of lights.)
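
A minimal NumPy sketch of this least-squares step, assuming a lights array L of shape (n, 3) and a vector I of n measured intensities at one pixel (array names, shapes, and the synthetic example are illustrative, not from the lecture):

```python
import numpy as np

def photometric_stereo_pixel(L, I):
    """Recover albedo k_d and unit normal N at a single pixel.

    L : (n, 3) array of unit light directions, one per image (n >= 3).
    I : (n,)   array of measured intensities under those lights.
    Solves I = L @ G in the least-squares sense, where G = k_d * N,
    i.e. G = (L^T L)^{-1} L^T I -- note that L^T L is only 3x3.
    """
    G, *_ = np.linalg.lstsq(L, I, rcond=None)
    kd = np.linalg.norm(G)
    N = G / kd if kd > 0 else np.zeros(3)
    return kd, N

# Synthetic check: four lights, known normal and albedo.
if __name__ == "__main__":
    N_true = np.array([0.3, 0.2, 1.0])
    N_true /= np.linalg.norm(N_true)
    kd_true = 0.7
    L = np.array([[0.0, 0.0, 1.0],
                  [0.5, 0.0, 1.0],
                  [0.0, 0.5, 1.0],
                  [0.3, 0.3, 1.0]])
    L /= np.linalg.norm(L, axis=1, keepdims=True)
    I = kd_true * (L @ N_true)   # Lambertian intensities (all positive here)
    kd, N = photometric_stereo_pixel(L, I)
    print(kd, N)                 # should match kd_true and N_true
```

With more lights, the same call simply stacks more rows into L, which is what makes the least-squares formulation convenient.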