Light and shading Source: A. Efros.

Image formation
What determines the brightness of an image pixel?
- Light source properties
- Surface shape and orientation
- Surface reflectance properties
- Optics
- Sensor characteristics
- Exposure
Slide by L. Fei-Fei

Fundamental radiometric relation
(Figure: lens geometry with scene point P, image point P', focal distance z, lens diameter d, and off-axis angle α.)
- L: radiance emitted from P toward P'. Radiance is the energy carried by a ray: power per unit area perpendicular to the direction of travel, per unit solid angle. Units: watts per square meter per steradian.
- E: irradiance falling on P' from the lens. Irradiance is the energy arriving at a surface: incident power per unit area (not foreshortened). Units: watts per square meter.
What is the relationship between E and L?
Szeliski 2.2.3
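The relation between scene radiance and image irradiance can be sketched numerically. This is a minimal illustration of the standard formula E = L · (π/4) · (d/z)² · cos⁴α; the lens diameter, distance, and radiance values below are made up for the example.

```python
import math

def image_irradiance(L, d, z, alpha):
    """Fundamental radiometric relation (Szeliski 2.2.3):
    E = L * (pi/4) * (d/z)**2 * cos(alpha)**4
    L: scene radiance (W/m^2/sr), d: lens diameter,
    z: distance from lens to image plane, alpha: off-axis angle."""
    return L * (math.pi / 4.0) * (d / z) ** 2 * math.cos(alpha) ** 4

# On-axis irradiance for illustrative values:
E0 = image_irradiance(L=100.0, d=0.01, z=0.05, alpha=0.0)
# At 30 degrees off-axis, irradiance drops by cos^4(30 deg) = 0.5625:
E30 = image_irradiance(L=100.0, d=0.01, z=0.05, alpha=math.radians(30))
print(E30 / E0)  # 0.5625
```

Note that E is linear in L, scales with the lens area (d²), and falls off with the fourth power of cos α, exactly the three properties discussed on the next slide.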

Fundamental radiometric relation
E = L · (π/4) · (d/z′)² · cos⁴α
- Image irradiance is linearly related to scene radiance.
- Irradiance is proportional to the area of the lens and inversely proportional to the squared distance z′ between the lens and the image plane.
- Irradiance falls off as the angle α between the viewing ray and the optical axis increases.
Note the difference between the cos⁴α effect and mechanical vignetting: the former has nothing to do with light getting lost in the lens system; it is a basic radiometric property of even a perfect lens. Mechanical vignetting is usually much stronger.
Szeliski 2.2.3

Fundamental radiometric relation: application
S. B. Kang and R. Weiss, "Can we calibrate a camera using an image of a flat, textureless Lambertian surface?", ECCV 2000.
- The brightest point corresponds to the principal point (where the optical axis intersects the image plane).
- The shape of the curves of constant brightness tells us about the shape (scaling factors) of the pixels.
- The equation also includes the focal length (z′). Thus, we can in principle figure out all the intrinsic camera parameters using the constraints given by the fundamental radiometric relation.

From light rays to pixel values
Camera response function: the mapping f from irradiance to pixel values.
- Useful if we want to estimate material properties.
- Enables us to create high dynamic range images.
For more info: P. E. Debevec and J. Malik, "Recovering High Dynamic Range Radiance Maps from Photographs," SIGGRAPH 97.
Chakrabarti, Scharstein, and Zickler (BMVC 2009) on the remapping process: "the camera modifies the tristimulus values so that they fit within the limited gamut and dynamic range of the output color space, and it does so in a way that is most visually pleasing (as opposed to most accurate). Referred to as color rendering, this is a proprietary art that may include luminance histogram analysis, corrections to hue and saturation, and even local corrections for things such as skin tones. In most cases, this nonlinear color rendering process is scene-dependent and is not a fixed property of a camera."
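Inverting the response function is the first step in estimating irradiance from pixel values. The sketch below assumes a simple gamma-type response as a stand-in; as the quote above notes, real cameras have more complex, often scene-dependent, curves, which is why Debevec and Malik estimate the response from multiple exposures rather than assume it.

```python
import numpy as np

def linearize(pixels, gamma=2.2):
    """Invert a hypothetical gamma-type response f(E) = E**(1/gamma)
    to recover values proportional to sensor irradiance.
    The gamma = 2.2 curve is an illustrative assumption, not a
    property of any particular camera."""
    p = np.asarray(pixels, dtype=np.float64) / 255.0  # normalize to [0, 1]
    return p ** gamma                                  # approximate inverse of f

# A mid-gray pixel (128) maps to well under half the irradiance of white:
E = linearize([0, 128, 255])
print(E)  # roughly [0.0, 0.22, 1.0]
```

The point of the example: pixel values are a nonlinear encoding of irradiance, so averaging or comparing raw pixel values does not correspond to averaging or comparing light.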

The interaction of light and surfaces
What happens when a light ray hits a point on an object?
- Some of the light gets absorbed: converted to other forms of energy (e.g., heat).
- Some gets transmitted through the object: possibly bent through refraction, or scattered inside the object (subsurface scattering).
- Some gets reflected: possibly in multiple directions at once.
- Really complicated things can happen (e.g., fluorescence).
Bidirectional reflectance distribution function (BRDF): how bright a surface appears when viewed from one direction when light falls on it from another. Definition: the ratio of the radiance in the emitted direction to the irradiance in the incident direction.
Source: Steve Seitz
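The BRDF definition can be made concrete with the simplest case, a Lambertian surface, whose BRDF is the constant albedo/π (the π keeps total reflected power no greater than incident power). A minimal sketch:

```python
import math

def lambertian_brdf(albedo):
    """For a Lambertian surface the BRDF is constant: f_r = albedo / pi,
    independent of the incident and viewing directions."""
    return albedo / math.pi

def reflected_radiance(brdf_value, E_incident):
    """BRDF definition: the ratio of emitted radiance to incident
    irradiance, so L_out = f_r * E_in."""
    return brdf_value * E_incident

# Illustrative numbers: albedo 0.5, incident irradiance pi W/m^2.
L_out = reflected_radiance(lambertian_brdf(albedo=0.5), E_incident=math.pi)
print(L_out)  # 0.5
```

For general materials the BRDF is a four-dimensional function of the incident and emitted directions, which is why measured BRDFs can be so complicated.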

BRDFs can be incredibly complicated…

Diffuse reflection
- Light is reflected equally in all directions: dull, matte surfaces like chalk or latex paint.
- Microfacets scatter incoming light randomly; the net effect is that light is reflected equally in all directions.
- The brightness of the surface depends on the angle of incidence of the illumination.
(Figure: the same surface appears brighter or darker as the illumination angle changes.)

Diffuse reflection: Lambert's law
B = ρ (N · S)
- B: radiosity (total power leaving the surface per unit area, regardless of direction)
- ρ: albedo (fraction of incident irradiance reflected by the surface)
- N: unit surface normal
- S: source vector (magnitude proportional to the intensity of the source)
The BRDF of a Lambertian surface is constant.

Specular reflection
- Radiation arriving along a source direction leaves along the specular direction (the source direction reflected about the normal).
- Some fraction is absorbed, some reflected.
- On real surfaces, energy usually goes into a lobe of directions around the specular direction.
- Phong model: reflected energy falls off as a power of the cosine of the angle between the specular direction and the viewing direction.
- Lambertian + specular model: the sum of a diffuse term and a specular term.
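The Phong falloff can be sketched directly from its geometric description: reflect the source direction about the normal, then raise the cosine of the angle to the view direction to a shininess exponent (a parameter chosen for illustration here, not prescribed by the slide).

```python
import numpy as np

def phong_specular(view, source, normal, shininess):
    """Phong model: specular energy falls off as cos(delta)**n, where
    delta is the angle between the view direction and the specular
    direction (the source direction reflected about the normal)."""
    n = np.asarray(normal, float); n = n / np.linalg.norm(n)
    s = np.asarray(source, float); s = s / np.linalg.norm(s)
    v = np.asarray(view, float);   v = v / np.linalg.norm(v)
    r = 2.0 * np.dot(n, s) * n - s        # source reflected about the normal
    return max(np.dot(v, r), 0.0) ** shininess

N = [0, 0, 1]; S = [1, 0, 1]              # light arriving at 45 degrees
print(phong_specular(view=[-1, 0, 1], source=S, normal=N, shininess=10))  # mirror direction: 1.0
print(phong_specular(view=[0, 0, 1], source=S, normal=N, shininess=10))   # off the peak: small
```

Raising the exponent narrows the lobe, which is the "changing the exponent" effect shown on the next slide; a low exponent gives a broad, glossy highlight and a high exponent a tight, mirror-like one.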

Specular reflection: effect of moving the light source, and of changing the exponent.

Role of specularity in computer vision

Photometric stereo (shape from shading) Can we reconstruct the shape of an object based on shading cues? Luca della Robbia, Cantoria, 1438

Photometric stereo
Assume:
- A Lambertian object
- A local shading model (each point on a surface receives light only from sources visible at that point)
- A set of known light source directions S1, S2, …, Sn
- A set of pictures of the object, obtained in exactly the same camera/object configuration but using different sources
- Orthographic projection
Goal: reconstruct the object's shape and albedo.
F&P 2nd ed., sec. 2.2.4

Surface model: Monge patch F&P 2nd ed., sec. 2.2.4

Image model
- Known: source vectors Sj and pixel values Ij(x, y)
- Unknown: surface normal N(x, y) and albedo ρ(x, y)
- Assume that the response function of the camera is a linear scaling by a factor of k
- Lambert's law then gives: Ij(x, y) = k ρ(x, y) (N(x, y) · Sj)
F&P 2nd ed., sec. 2.2.4

Least squares problem
For each pixel, set up a linear system:
  I(x, y) = S g(x, y)
where I is the known n × 1 vector of pixel values, S is the known n × 3 matrix of source vectors, and g(x, y) is the unknown 3 × 1 vector.
- Obtain the least-squares solution for g(x, y), which we defined as ρ(x, y) N(x, y).
- Since N(x, y) is the unit normal, the albedo ρ(x, y) is given by the magnitude of g(x, y).
- Finally, N(x, y) = g(x, y) / ρ(x, y).
F&P 2nd ed., sec. 2.2.4
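The per-pixel least-squares step can be written in a few lines of NumPy, solving all pixels at once. This is a minimal sketch on synthetic data (a flat patch), not a full pipeline: it omits shadow masking and assumes the linear camera scaling k has been folded into the source magnitudes.

```python
import numpy as np

def photometric_stereo(I, S):
    """Per-pixel least squares for Lambertian photometric stereo.
    I: (n, h, w) stack of images under n known distant sources.
    S: (n, 3) matrix of source vectors.
    Solves S @ g = I for g = albedo * N at every pixel."""
    n, h, w = I.shape
    g, _, _, _ = np.linalg.lstsq(S, I.reshape(n, -1), rcond=None)  # (3, h*w)
    g = g.T.reshape(h, w, 3)
    albedo = np.linalg.norm(g, axis=2)                  # rho = |g|
    normals = g / np.maximum(albedo[..., None], 1e-12)  # N = g / rho
    return albedo, normals

# Synthetic check: a flat 2x2 patch with normal (0, 0, 1) and albedo 0.5,
# seen under three non-coplanar sources.
S = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
N_true = np.array([0.0, 0.0, 1.0]); rho = 0.5
I = (rho * S @ N_true)[:, None, None] * np.ones((1, 2, 2))
albedo, normals = photometric_stereo(I, S)
print(albedo[0, 0])   # 0.5
print(normals[0, 0])  # [0, 0, 1]
```

At least three non-coplanar source directions are needed for S to have rank 3; with more images the system is overdetermined and the least-squares solution averages out noise.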

Example Recovered albedo Recovered normal field F&P 2nd ed., sec. 2.2.4

Recovering a surface from normals
Recall the surface is written as a Monge patch (x, y, f(x, y)). This means the normal is proportional to (−∂f/∂x, −∂f/∂y, 1).
If we write the estimated vector as g = (g1, g2, g3), then we obtain values for the partial derivatives of the surface:
  ∂f/∂x = −g1 / g3,  ∂f/∂y = −g2 / g3
F&P 2nd ed., sec. 2.2.4

Recovering a surface from normals
Integrability: for the surface f to exist, the mixed second partial derivatives must be equal:
  ∂²f/∂x∂y = ∂²f/∂y∂x
(in practice, the estimated derivatives should at least be similar).
We can then recover the surface height at any point by integrating the partial derivatives along some path, e.g. down a column and then along a row. For robustness, integrals should be taken over many different paths and the results averaged.
F&P 2nd ed., sec. 2.2.4
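A minimal sketch of path integration, assuming the normal convention N ∝ (−∂f/∂x, −∂f/∂y, 1) and unit pixel spacing, and averaging just two paths (row-first and column-first) rather than many:

```python
import numpy as np

def integrate_normals(normals):
    """Recover height f(x, y) from an (h, w, 3) normal field by path
    integration. With N proportional to (-f_x, -f_y, 1), the partials
    are f_x = -N_x / N_z and f_y = -N_y / N_z. Heights are integrated
    along two different paths and averaged for robustness."""
    fx = -normals[..., 0] / normals[..., 2]
    fy = -normals[..., 1] / normals[..., 2]
    # Path 1: down the first column (fy), then along each row (fx).
    h1 = np.cumsum(fy[:, :1], axis=0) + np.cumsum(fx, axis=1)
    # Path 2: along the first row (fx), then down each column (fy).
    h2 = np.cumsum(fx[:1, :], axis=1) + np.cumsum(fy, axis=0)
    return 0.5 * (h1 + h2)

# A planar surface f(x, y) = x + 2y has constant normal along (-1, -2, 1):
N = np.tile(np.array([-1.0, -2.0, 1.0]) / np.sqrt(6.0), (4, 4, 1))
h = integrate_normals(N)
print(h)  # heights grow by 1 per column and 2 per row
```

On noisy estimated normals the two paths disagree (the integrability condition only holds approximately), which is why averaging over many paths, or solving a global Poisson-type problem, works better in practice.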

Surface recovered by integration F&P 2nd ed., sec. 2.2.4

Assignment 1: from the input images, compute the estimated albedo, estimated normals, and integrated height map.

Limitations
- Orthographic camera model
- Simplistic reflectance and lighting model
- No shadows
- No interreflections
- No missing data
- Integration is tricky

Finding the direction of the light source
Full 3D case: I(x, y) = N(x, y) · S(x, y) + A, where A is a constant ambient term.
For points on the occluding contour, the surface normal lies in the image plane and is known, so the projected light direction can be estimated by least squares from the contour alone.
We assume the object has uniform albedo, which gets "rolled into" the magnitude of the light source term.
P. Nillius and J.-O. Eklundh, "Automatic estimation of the projected light source direction," CVPR 2001
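The occluding-contour case reduces to a small linear system: each contour point contributes one equation I = N · S + A in the three unknowns (Sx, Sy, A). A minimal sketch on synthetic data (the circular contour and light direction below are made up for the example):

```python
import numpy as np

def light_from_contour(normals_2d, intensities):
    """Estimate the projected light direction (Sx, Sy) and ambient term A
    from points on the occluding contour, where the normal lies in the
    image plane and is known. Solves I = N . S + A by least squares;
    uniform albedo is absorbed into the magnitude of S."""
    N = np.asarray(normals_2d, float)              # (m, 2) contour normals
    M = np.hstack([N, np.ones((N.shape[0], 1))])   # columns: Nx, Ny, 1
    x, _, _, _ = np.linalg.lstsq(M, np.asarray(intensities, float), rcond=None)
    return x[:2], x[2]                             # (Sx, Sy), A

# Synthetic contour: normals around a circle, light from direction (1, 0),
# ambient term 0.1 (shadowing of back-facing points is ignored here).
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
normals = np.stack([np.cos(theta), np.sin(theta)], axis=1)
I = normals @ np.array([1.0, 0.0]) + 0.1
S, A = light_from_contour(normals, I)
print(S, A)  # recovers (1, 0) and 0.1
```

With real images, points in attached shadow violate the linear model, so a robust or clamped variant of the fit is needed; the sketch ignores this.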

Finding the direction of the light source P. Nillius and J.-O. Eklundh, “Automatic estimation of the projected light source direction,” CVPR 2001

Application: Detecting composite photos Real photo Fake photo M. K. Johnson and H. Farid, Exposing Digital Forgeries by Detecting Inconsistencies in Lighting, ACM Multimedia and Security Workshop, 2005.