776 Computer Vision Jan-Michael Frahm & Enrique Dunn Spring 2012.


Photometric stereo (shape from shading) Can we reconstruct the shape of an object based on shading cues? Luca della Robbia, Cantoria, 1438

Photometric stereo Assume: o A Lambertian object o A local shading model (each point on a surface receives light only from sources visible at that point) o A set of known light source directions S_1, …, S_n o A set of pictures of an object, obtained in exactly the same camera/object configuration but using different sources o Orthographic projection Goal: reconstruct object shape and albedo Forsyth & Ponce, Sec. 5.4 slide: S. Lazebnik

Surface model: Monge patch Forsyth & Ponce, Sec. 5.4

Image model Known: source vectors S_j and pixel values I_j(x,y) We also assume that the response function of the camera is a linear scaling by a factor of k Combine the unknown normal N(x,y) and albedo ρ(x,y) into one vector g, and the scaling constant k and source vectors S_j into another vector V_j: I_j(x,y) = k ρ(x,y) N(x,y) · S_j = g(x,y) · V_j slide: S. Lazebnik

Least squares problem For each pixel, we obtain a linear system: I (n × 1, known) = V (n × 3, known) g(x,y) (3 × 1, unknown) Obtain least-squares solution for g(x,y) Since N(x,y) is the unit normal, ρ(x,y) is given by the magnitude of g(x,y) (and it should be less than 1) Finally, N(x,y) = g(x,y) / ρ(x,y) slide: S. Lazebnik
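The per-pixel least-squares step can be sketched in a few lines of NumPy. The source matrix V and the pixel intensities below are synthetic stand-ins chosen for illustration, not values from the slides:

```python
import numpy as np

# Synthetic setup for one pixel: n = 5 known (scaled) source rows V_j and
# the resulting noiseless intensities I_j. Solve V g = I for g = rho * N.
rng = np.random.default_rng(0)
V = rng.normal(size=(5, 3))                # 5 known light-source rows V_j
g_true = 0.8 * np.array([0.0, 0.0, 1.0])   # albedo 0.8, normal facing the camera
I = V @ g_true                             # noiseless pixel values I_j

g, *_ = np.linalg.lstsq(V, I, rcond=None)  # least-squares solution of V g = I
albedo = np.linalg.norm(g)                 # rho(x, y) = |g(x, y)|
normal = g / albedo                        # N(x, y) = g(x, y) / rho(x, y)
```

With noise-free intensities the least-squares solution recovers the albedo and unit normal exactly; in practice the residual grows with image noise.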

Example Recovered albedo Recovered normal field Forsyth & Ponce, Sec. 5.4

Recovering a surface from normals Recall the surface is written as (x, y, f(x, y)) This means the normal has the form: N(x,y) = (1/√(f_x² + f_y² + 1)) (−f_x, −f_y, 1)ᵀ If we write the estimated vector g as g = (g_1, g_2, g_3)ᵀ then we obtain values for the partial derivatives of the surface: f_x(x,y) = −g_1(x,y)/g_3(x,y), f_y(x,y) = −g_2(x,y)/g_3(x,y) slide: S. Lazebnik

Recovering a surface from normals Integrability: for the surface f to exist, the mixed second partial derivatives must be equal: ∂f_x/∂y = ∂f_y/∂x (in practice, they should at least be similar) We can now recover the surface height at any point by integration along some path, e.g. f(x, y) = ∫₀ˣ f_x(s, 0) ds + ∫₀ʸ f_y(x, t) dt + constant (for robustness, can take integrals over many different paths and average the results) slide: S. Lazebnik
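A minimal sketch of path integration on a synthetic gradient field. The surface is a plane, so the cumulative sums can be checked exactly, and the two-path average mirrors the robustness suggestion above:

```python
import numpy as np

# Synthetic example: a planar surface f = 0.1 x + 0.2 y, whose partials
# are constant, so the path integrals can be verified by hand.
h, w = 32, 32
y, x = np.mgrid[0:h, 0:w].astype(float)
f_true = 0.1 * x + 0.2 * y
fx = np.full((h, w), 0.1)                    # estimated df/dx at every pixel
fy = np.full((h, w), 0.2)                    # estimated df/dy at every pixel

# Path 1: integrate fy down the first column, then fx along each row.
f1 = np.cumsum(fx, axis=1) - fx[:, :1]       # row-wise integral, zero at column 0
f1 += (np.cumsum(fy[:, 0]) - fy[0, 0])[:, None]
# Path 2: integrate fx along the first row, then fy down each column.
f2 = np.cumsum(fy, axis=0) - fy[:1, :]
f2 += (np.cumsum(fx[0, :]) - fx[0, 0])[None, :]

f = 0.5 * (f1 + f2)                          # average the two path integrals
```

For an integrable (here: exact) gradient field the two paths agree everywhere; with noisy normals they differ, and averaging over many paths suppresses the noise.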

Surface recovered by integration Forsyth & Ponce, Sec. 5.4

Reading: Szeliski

Typical image operations image: R. Szeliski

Color Phillip Otto Runge

What is color? Color is the result of interaction between physical light in the environment and our visual system Color is a psychological property of our visual experiences when we look at objects and lights, not a physical property of those objects or lights (S. Palmer, Vision Science: Photons to Phenomenology) slide: S. Lazebnik

Electromagnetic spectrum Human Luminance Sensitivity Function

Interaction of light and surfaces Reflected color is the result of interaction of light source spectrum with surface reflectance slide: S. Lazebnik

Spectra of some real-world surfaces (metamers) image: W. Freeman

Standardizing color experience We would like to understand which spectra produce the same color sensation in people under similar viewing conditions Color matching experiments Foundations of Vision, by Brian Wandell, Sinauer Assoc., 1995

Color matching experiment 1 The observer adjusts the amounts of three primaries p_1, p_2, p_3 until the mixture matches the test light; those amounts are the primary color amounts needed for a match Source: W. Freeman

Color matching experiment 2 For some test lights, no mixture of the three primaries produces a match; we say a “negative” amount of p_2 was needed to make the match, because we added it to the test color’s side The primary color amounts needed for a match: p_1, p_2, p_3 Source: W. Freeman

Trichromacy In color matching experiments, most people can match any given light with three primaries o Primaries must be independent For the same light and same primaries, most people select the same weights o Exception: color blindness Trichromatic color theory o Three numbers seem to be sufficient for encoding color o Dates back to the 18th century (Thomas Young) slide: S. Lazebnik

Grassmann’s Laws Color matching appears to be linear If two test lights can be matched with the same set of weights, then they match each other: o Suppose A = u_1 P_1 + u_2 P_2 + u_3 P_3 and B = u_1 P_1 + u_2 P_2 + u_3 P_3. Then A = B. If we mix two test lights, then mixing the matches will match the result: o Suppose A = u_1 P_1 + u_2 P_2 + u_3 P_3 and B = v_1 P_1 + v_2 P_2 + v_3 P_3. Then A + B = (u_1 + v_1) P_1 + (u_2 + v_2) P_2 + (u_3 + v_3) P_3. If we scale the test light, then the matches get scaled by the same amount: o Suppose A = u_1 P_1 + u_2 P_2 + u_3 P_3. Then kA = (ku_1) P_1 + (ku_2) P_2 + (ku_3) P_3. slide: S. Lazebnik

Linear color spaces Defined by a choice of three primaries The coordinates of a color are given by the weights of the primaries used to match it mixing two lights produces colors that lie along a straight line in color space mixing three lights produces colors that lie within the triangle they define in color space slide: S. Lazebnik

How to compute the weights of the primaries to match any spectral signal Given: a choice of three primaries p_1, p_2, p_3 and a target color signal Find: weights of the primaries needed to match the color signal Matching functions: the amount of each primary needed to match a monochromatic light source at each wavelength slide: S. Lazebnik

RGB space Primaries are monochromatic lights (for monitors, they correspond to the three types of phosphors) Subtractive matching required for some wavelengths RGB matching functions RGB primaries slide: S. Lazebnik

How to compute the weights of the primaries to match any spectral signal Let c(λ) be one of the matching functions, and let t(λ) be the spectrum of the signal. Then the weight of the corresponding primary needed to match t is w = ∫ c(λ) t(λ) dλ slide: S. Lazebnik
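As a sketch, the integral can be approximated by a Riemann sum over sampled wavelengths. The matching function and signal below are made-up Gaussians, not real colorimetric data:

```python
import numpy as np

# Approximate w = integral of c(lambda) t(lambda) d lambda on toy spectra.
lam = np.linspace(400.0, 700.0, 301)        # wavelengths (nm), 1 nm spacing
c = np.exp(-((lam - 550.0) / 40.0) ** 2)    # hypothetical matching function c(lambda)
t = np.exp(-((lam - 560.0) / 60.0) ** 2)    # hypothetical signal spectrum t(lambda)

dlam = lam[1] - lam[0]
weight = float(np.sum(c * t) * dlam)        # Riemann-sum approximation of the integral
```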

Nonlinear color spaces: HSV Perceptually meaningful dimensions: Hue, Saturation, Value (Intensity) RGB cube on its vertex slide: S. Lazebnik
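Python's standard library implements the RGB-to-HSV transform directly, which makes the geometry above easy to probe:

```python
import colorsys

# Pure red maps to hue 0 with full saturation and full value...
h_r, s_r, v_r = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
# ...while pure green sits a third of the way around the hue circle.
h_g, s_g, v_g = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)
```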

Color perception Color/lightness constancy o The ability of the human visual system to perceive the intrinsic reflectance properties of the surfaces despite changes in illumination conditions Instantaneous effects o Simultaneous contrast o Mach bands Gradual effects o Light/dark adaptation o Chromatic adaptation o Afterimages J. S. Sargent, The Daughters of Edward D. Boit, 1882 slide: S. Lazebnik

Simultaneous contrast/Mach bands Source: D. Forsyth

Chromatic adaptation The visual system changes its sensitivity depending on the luminances prevailing in the visual field o The exact mechanism is poorly understood Adapting to different brightness levels o Changing the size of the iris opening (i.e., the aperture) changes the amount of light that can enter the eye o Think of walking into a building from full sunshine Adapting to different color temperature o The receptive cells on the retina change their sensitivity o For example: if there is an increased amount of red light, the cells receptive to red decrease their sensitivity until the scene looks white again o We actually adapt better in brighter scenes: This is why candlelit scenes still look yellow slide: S. Lazebnik

White balance When looking at a picture on screen or print, we adapt to the illuminant of the room, not to that of the scene in the picture When the white balance is not correct, the picture will have an unnatural color “cast” incorrect white balance correct white balance slide: S. Lazebnik

White balance Film cameras: o Different types of film or different filters for different illumination conditions Digital cameras: o Automatic white balance o White balance settings corresponding to several common illuminants o Custom white balance using a reference object slide: S. Lazebnik

White balance Von Kries adaptation o Multiply each channel by a gain factor Best way: gray card o Take a picture of a neutral object (white or gray) o Deduce the weight of each channel: if the object is recorded as r_w, g_w, b_w, use weights 1/r_w, 1/g_w, 1/b_w slide: S. Lazebnik

White balance Without gray cards: we need to “guess” which pixels correspond to white objects Gray world assumption o The image average r_ave, g_ave, b_ave is gray o Use weights 1/r_ave, 1/g_ave, 1/b_ave Brightest pixel assumption o Highlights usually have the color of the light source o Use weights inversely proportional to the values of the brightest pixels Gamut mapping o Gamut: convex hull of all pixel colors in an image o Find the transformation that matches the gamut of the image to the gamut of a “typical” image under white light Use image statistics, learning techniques slide: S. Lazebnik
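A gray-world sketch of von Kries adaptation in NumPy (the image and its color cast are synthetic): each channel is scaled so that the image average becomes gray.

```python
import numpy as np

# Synthetic image with a reddish cast applied to the red channel.
rng = np.random.default_rng(1)
img = rng.uniform(0.2, 0.8, size=(16, 16, 3))
img[..., 0] *= 1.4                             # simulate a reddish color cast

channel_avg = img.mean(axis=(0, 1))            # r_ave, g_ave, b_ave
gains = channel_avg.mean() / channel_avg       # weights proportional to 1/channel average
balanced = img * gains                         # per-channel von Kries scaling
```

After scaling, the three channel averages are equal, which is exactly the gray-world assumption; the same gain machinery applies to the gray-card weights 1/r_w, 1/g_w, 1/b_w.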

White balance by recognition Key idea: for each of the semantic classes present in the image, compute the illuminant that transforms the pixels assigned to that class so that the average color of that class matches the average color of the same class in a database of “typical” images J. Van de Weijer, C. Schmid and J. Verbeek, Using High-Level Visual Information for Color Constancy, ICCV 2007. slide: S. Lazebnik

Mixed illumination When there are several types of illuminants in the scene, different reference points will yield different results Reference: moon / Reference: stone slide: S. Lazebnik

Spatially varying white balance E. Hsu, T. Mertens, S. Paris, S. Avidan, and F. Durand, “Light Mixture Estimation for Spatially Varying White Balance,” SIGGRAPH 2008. Input / Alpha map / Output slide: S. Lazebnik

Uses of color in computer vision Color histograms for image matching slide: S. Lazebnik

Uses of color in computer vision Image segmentation and retrieval C. Carson, S. Belongie, H. Greenspan, and J. Malik, Blobworld: Image segmentation using Expectation-Maximization and its application to image querying, ICVIS. slide: S. Lazebnik

Uses of color in computer vision Skin detection M. Jones and J. Rehg, Statistical Color Models with Application to Skin Detection, IJCV 2002. slide: S. Lazebnik

Uses of color in computer vision Robot soccer M. Sridharan and P. Stone, Towards Eliminating Manual Color Calibration at RoboCup. RoboCup-2005: Robot Soccer World Cup IX, Springer Verlag, 2006. Source: K. Grauman

Uses of color in computer vision Building appearance models for tracking D. Ramanan, D. Forsyth, and A. Zisserman. Tracking People by Learning their Appearance. PAMI 2007. slide: S. Lazebnik

Linear filtering slide: S. Lazebnik

Motivation: Image denoising How can we reduce noise in a photograph? slide: S. Lazebnik

Moving average (“box filter”) Let’s replace each pixel with a weighted average of its neighborhood The weights are called the filter kernel What are the weights for the average of a 3x3 neighborhood? Source: D. Lowe
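The answer for a 3x3 neighborhood is a kernel of nine equal weights summing to 1; a quick NumPy check:

```python
import numpy as np

# The 3x3 box-filter kernel: nine equal weights that sum to 1,
# so each output pixel is the mean of its 3x3 neighborhood.
kernel = np.full((3, 3), 1.0 / 9.0)

# Sanity check on a constant image: the local mean equals the constant.
img = np.full((5, 5), 7.0)
center = float(np.sum(img[1:4, 1:4] * kernel))
```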

Defining convolution Let f be the image and g be the kernel. The output of convolving f with g is denoted f * g: (f * g)[m, n] = Σ_{k,l} f[m − k, n − l] g[k, l] Convention: kernel is “flipped” MATLAB functions: conv2, filter2, imfilter Source: F. Durand
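A NumPy sketch of “valid” convolution that makes the kernel flip explicit (a hand-rolled analogue of MATLAB's conv2(f, g, 'valid'), for illustration only; with an asymmetric kernel the flip visibly matters):

```python
import numpy as np

def conv2_valid(f, g):
    """2-D convolution over the 'valid' region, via the flipped-kernel definition."""
    gf = g[::-1, ::-1]                       # flip the kernel in both axes
    H = f.shape[0] - g.shape[0] + 1
    W = f.shape[1] - g.shape[1] + 1
    return np.array([[np.sum(f[i:i + g.shape[0], j:j + g.shape[1]] * gf)
                      for j in range(W)] for i in range(H)])

f = np.arange(9.0).reshape(3, 3)             # [[0 1 2] [3 4 5] [6 7 8]]
g = np.array([[0.0, 0.0], [1.0, 0.0]])       # asymmetric kernel
out = conv2_valid(f, g)                      # picks f[i, j+1]; correlation would pick f[i+1, j]
```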

Key properties Linearity: filter(f 1 + f 2 ) = filter(f 1 ) + filter(f 2 ) Shift invariance: same behavior regardless of pixel location: filter(shift(f)) = shift(filter(f)) Theoretical result: any linear shift-invariant operator can be represented as a convolution slide: S. Lazebnik

Properties in more detail Commutative: a * b = b * a o Conceptually no difference between filter and signal Associative: a * (b * c) = (a * b) * c o Often apply several filters one after another: (((a * b 1 ) * b 2 ) * b 3 ) o This is equivalent to applying one filter: a * (b 1 * b 2 * b 3 ) Distributes over addition: a * (b + c) = (a * b) + (a * c) Scalars factor out: ka * b = a * kb = k (a * b) Identity: unit impulse e = […, 0, 0, 1, 0, 0, …], a * e = a slide: S. Lazebnik
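These properties can be spot-checked numerically with 1-D convolution (np.convolve computes the full convolution):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 0.5])
c = np.array([2.0, -1.0])

comm = np.allclose(np.convolve(a, b), np.convolve(b, a))        # a*b == b*a
assoc = np.allclose(np.convolve(np.convolve(a, b), c),
                    np.convolve(a, np.convolve(b, c)))          # (a*b)*c == a*(b*c)
e = np.array([1.0])                                             # unit impulse
ident = np.allclose(np.convolve(a, e), a)                       # a*e == a
```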

Annoying details What is the size of the output? MATLAB: filter2(g, f, shape) o shape = ‘full’: output size is sum of sizes of f and g o shape = ‘same’: output size is same as f o shape = ‘valid’: output size is difference of sizes of f and g slide: S. Lazebnik
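np.convolve accepts the same three mode names, so the size rules can be verified directly (here with a length-10 signal and a length-3 kernel):

```python
import numpy as np

f = np.ones(10)
g = np.ones(3)

n_full = np.convolve(f, g, mode='full').size    # 10 + 3 - 1 = 12
n_same = np.convolve(f, g, mode='same').size    # 10, same as f
n_valid = np.convolve(f, g, mode='valid').size  # 10 - 3 + 1 = 8
```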

Annoying details What about near the edge? o the filter window falls off the edge of the image o need to extrapolate o methods (MATLAB): clip filter (black): imfilter(f, g, 0) wrap around: imfilter(f, g, ‘circular’) copy edge: imfilter(f, g, ‘replicate’) reflect across edge: imfilter(f, g, ‘symmetric’) Source: S. Marschner
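NumPy's np.pad offers corresponding modes (the mapping below is my assumption, not from the slides): 'constant' for clipping to black, 'wrap' for wrap-around, 'edge' for copying the edge, and 'symmetric' for reflecting across it.

```python
import numpy as np

row = np.array([1, 2, 3])

clip = np.pad(row, 2, mode='constant')    # clip filter (black): [0 0 1 2 3 0 0]
wrap = np.pad(row, 2, mode='wrap')        # wrap around:         [2 3 1 2 3 1 2]
edge = np.pad(row, 2, mode='edge')        # copy edge:           [1 1 1 2 3 3 3]
refl = np.pad(row, 2, mode='symmetric')   # reflect across edge: [2 1 1 2 3 3 2]
```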

Practice with linear filters Original ? Source: D. Lowe

Practice with linear filters Original Filtered (no change) Source: D. Lowe

Practice with linear filters Original ? Source: D. Lowe

Practice with linear filters Original Shifted left by 1 pixel Source: D. Lowe

Practice with linear filters Original ? Source: D. Lowe

Practice with linear filters Original Blur (with a box filter) Source: D. Lowe

Practice with linear filters Original ? (Note that filter sums to 1) Source: D. Lowe

Practice with linear filters Original Sharpening filter: accentuates differences with the local average Source: D. Lowe

Sharpening

Sharpening with unsharp masking What does blurring take away? original − smoothed (5x5) = detail Let’s add it back: original + detail = sharpened slide: S. Lazebnik
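A 1-D sketch of the unsharp-masking arithmetic on a step edge (synthetic signal, 3-tap box blur rather than the 5x5 Gaussian): subtracting the smoothed signal leaves the detail, and adding it back overshoots at the edge.

```python
import numpy as np

signal = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])       # a step edge
smoothed = np.convolve(signal, np.ones(3) / 3.0, mode='same')
detail = signal - smoothed                # what blurring takes away
sharpened = signal + detail               # over/undershoot at the edge, as intended
```

The overshoot above 1 and undershoot below 0 on either side of the step are exactly the halo that makes a sharpened edge look crisper.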