Specularity, the Zeta-image, and Information-Theoretic Illuminant


Specularity, the Zeta-image, and Information-Theoretic Illuminant Estimation
Mark S. Drew1, Hamid Reza Vaezi Joze1, and Graham D. Finlayson2
1School of Computing Science, Simon Fraser University, Vancouver, BC, Canada (mark@cs.sfu.ca)
2School of Computing Sciences, The University of East Anglia, Norwich, England (graham@cmp.uea.ac.uk)

Zeta-image: Goal: discover the light color (yet another illuminant estimator!?). Relative Chromaticity: the main idea is that we can get at a good solution for the chromaticity of the light by dividing the image chromaticity 3-vector ρ by the candidate light chromaticity e, giving the "relative chromaticity" χ = ρ ⊘ e [where ⊘ is component-wise division].
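As a minimal numerical sketch of this step (assuming NumPy; the pixel RGB and candidate light below are made-up illustrative values, not numbers from the paper):

```python
import numpy as np

def chromaticity(rgb):
    """Chromaticity: rho = {R, G, B} / (R + G + B)."""
    return rgb / rgb.sum(axis=-1, keepdims=True)

def relative_chromaticity(rho, e):
    """chi = rho (/) e: component-wise division by the candidate light chromaticity."""
    return rho / e

rho = chromaticity(np.array([5.0, 3.0, 2.0]))   # hypothetical pixel RGB -> [0.5, 0.3, 0.2]
e = np.array([1 / 3, 1 / 3, 1 / 3])             # hypothetical candidate: white light
chi = relative_chromaticity(rho, e)             # [1.5, 0.9, 0.6]
```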

Algorithm: we then show that, over pixels that are specular or white, the log of the relative chromaticity, ψ = log(χ), is perpendicular to the light chromaticity e in color space. This gives a useful handle for recovering e.

Proof, background: a simple image formation model, with k = R,G,B, in which each pixel color combines surface, light, and specularity terms. "Neutral interface": Lee, JOSA 1986, "Method for computing the scene-illuminant chromaticity". Light is "white enough": Borges, JOSA 1991, "Trichromatic approximation method for surface illumination".

Proof…  = {R,G,B}/{R+G+B) Now chromaticity: Relative chrom.:

Proof… …and the relative chromaticity takes a simple form!

Proof… Now let’s head for a Planar Constraint:

Proof…

Definition: therefore, let the Zeta-image† be ζ(x) = −ψ(x)·e = Σ_k e_k log(e_k / ρ_k(x)), where e is the chromaticity of the illuminant and ρ(x) is the chromaticity of the pixel at position x. Properties: ζ has the structure of the Kullback-Leibler Divergence from Information Theory, and ζ is low (near zero) at specularities. († Patent applied for.)
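A sketch of this definition in its KL reading (assuming NumPy; the light and pixel chromaticities below are hypothetical examples):

```python
import numpy as np

def zeta(rho, e):
    """Zeta-image: zeta(x) = -psi(x) . e with psi = log(rho / e),
    i.e. KL(e || rho(x)); near zero where the pixel chromaticity matches e."""
    psi = np.log(rho / e)              # log relative chromaticity
    return -(psi * e).sum(axis=-1)

e = np.array([0.40, 0.35, 0.25])                  # hypothetical light chromaticity
z_spec = zeta(e, e)                               # specular pixel: rho == e, so zeta == 0
z_matte = zeta(np.array([0.60, 0.25, 0.15]), e)   # matte surface pixel: zeta > 0
```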

Zeta-image for illumination estimation. Planar constraint: for near-specular pixels or white surfaces, the Log-Relative-Chromaticity values ψ are orthogonal to the light chromaticity; or, equivalently, the Zeta-image is near zero. So the best light chromaticity e for an image is the one that minimizes the Zeta-image over near-specular pixels (or white surfaces): guess a domain Ω, then search.

Search, explained: assume a candidate e and form ψ; the lowest 10-percentile, say, of the dot-product values ψ·e could be near-specular pixels. Over a grid of possible light chromaticities e, minimize the dot-product values over candidate illuminants for those lowest-10-percentile pixels.

Or, Theorem: the analytic solution is the geometric mean of ρ(x) over x ∈ Ω, where Ω is a set of bright pixels. Search is more accurate than Analytic, but slower.
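A sketch of the analytic estimator (assuming NumPy; Ω here is just a fabricated set of bright-pixel chromaticities):

```python
import numpy as np

def analytic_light(rho_omega):
    """Geometric mean of the chromaticities over the bright set Omega,
    renormalized so the estimate is itself a chromaticity (sums to 1)."""
    g = np.exp(np.log(rho_omega).mean(axis=0))
    return g / g.sum()

omega = np.array([[0.40, 0.35, 0.25],
                  [0.44, 0.33, 0.23],
                  [0.36, 0.37, 0.27]])   # hypothetical bright-pixel chromaticities
e_hat = analytic_light(omega)
```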

Domain  for Analytic Solution: We start by approximating  as top-5% brightness pixels. Could be any other method to indicate near- specular pixels and white surface regions. Detecting Failure  i.e., detecting images not having specularity or white surfaces in top brightness pixels:  can stem from areas of images belonging to the brightest surface which happens to tend to be some particular surface color. we can simply check if these pixels are in the possible chromaticity gamut of illuminants.  can be a bag-of-pixels from all over the image. we can investigate the distribution of  in chromaticity space.

Does this work?... divide by correct light, bottom 20% of Zeta: yes!

…Does this work? Incorrect light, bottom 20% of Zeta: no! (Correct light, shown as float (inverted) Zeta.)

Details of the Analytic solution: how did we come to the geometric mean? Solve the minimization of the summed Zeta values over the bright-pixel set Ω.

…Details of Analytic… How do we know ζ is positive? If the components were probabilities, then ζ has the structure of the Kullback-Leibler Divergence: the extra bits needed to code samples from e_k when using a codebook based on ρ_k. Hence it is positive!

…Details of Analytic… Final step: form ζ using the geometric-mean chromaticity over the bright pixels, then trim to the lowest-10% values of ζ and recalculate the geometric mean. Does this work?
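The final trimming step could look like this in NumPy (a sketch; the top-5% and lowest-10% thresholds follow the slides, while the synthetic image values are illustrative):

```python
import numpy as np

def zeta_map(rho, e):
    """Per-pixel zeta = -log(rho/e) . e."""
    return -(np.log(rho / e) * e).sum(axis=-1)

def analytic_with_trim(rgb):
    """Geomean estimate over the top-5% brightest pixels, then a refit on the
    lowest-10%-zeta subset of those bright pixels."""
    rho = rgb / rgb.sum(axis=-1, keepdims=True)
    brightness = rgb.sum(axis=-1)
    bright = brightness >= np.percentile(brightness, 95)   # top-5% brightness
    g = np.exp(np.log(rho[bright]).mean(axis=0))
    e0 = g / g.sum()                                       # first geomean estimate
    z = zeta_map(rho[bright], e0)
    lowzeta = z <= np.percentile(z, 10)                    # trim to lowest-10% zeta
    g = np.exp(np.log(rho[bright][lowzeta]).mean(axis=0))
    return g / g.sum()                                     # recalculated geomean

# Synthetic image: 20 bright specular pixels with the true light chromaticity,
# plus many dim matte pixels of other colors.
e_true = np.array([0.40, 0.35, 0.25])
bright_rgb = np.tile(10 * e_true, (20, 1))
dim_rgb = np.tile(np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]), (190, 1))
rgb = np.vstack([bright_rgb, dim_rgb])
e_hat = analytic_with_trim(rgb)
```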

float (inverted) Zeta; bright; lowzeta = bright & (zeta < quantile(zeta, 0.10));

plots of  for correct light e (O) and for analytic solution e (x) :

In the experiments, we mask off the ColorChecker.

So far, Algorithm 1: use either the analytic answer, or a simple hierarchical grid search over the light chromaticity. Algorithm 2: planar constraint applied as post-processing: use any algorithm's answer for e; take the SVD of the lowest-10% dot-product pixels: this improves the light estimate!
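Algorithm 2 might be sketched as follows (assuming NumPy; the refined light is taken as the right singular vector with the smallest singular value, i.e. the normal of the plane fitted to the near-specular ψ vectors; the synthetic pixels below are constructed to satisfy the planar constraint exactly):

```python
import numpy as np

def refine_light(rho, e0, pct=10):
    """Planar-constraint post-processing: psi = log(rho/e0); keep pixels with
    the lowest pct-percent zeta; the plane normal (last right singular vector)
    is the refined light-chromaticity estimate."""
    psi = np.log(rho / e0)
    z = -(psi * e0).sum(axis=-1)
    keep = psi[z <= np.percentile(z, pct)]
    _, _, vt = np.linalg.svd(keep, full_matrices=True)
    n = vt[-1]                          # normal to the fitted plane
    if n.sum() < 0:                     # fix the sign: chromaticity is positive
        n = -n
    return n / n.sum()

e_true = np.array([0.40, 0.35, 0.25])
v1 = np.array([0.35, -0.40, 0.00])      # v1 . e_true == 0
v2 = np.array([0.25, 0.00, -0.40])      # v2 . e_true == 0
psi_plane = np.array([a * v1 + b * v2
                      for a, b in [(1, 0), (0, 1), (1, 1), (-1, 0.5), (0.3, -0.7)]])
rho = e_true * np.exp(psi_plane)        # synthetic near-specular pixels
e_refined = refine_light(rho, e_true, pct=100)   # every pixel is specular here
```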

Fallback position: if either of the two chromaticity checks fails, use Grey-Edge.

Post-processing: Results.

Illumination Estimation: Results. (Analytic)

Thanks! Funding: Natural Sciences and Engineering Research Council of Canada.