Presentation transcript:

Color constancy at a pixel [Finlayson et al., CIC8, 2000]. Idea: plot log(R/G) vs. log(B/G) for 24 patches under 14 daylights.

[Plot: log(B/G) vs. log(R/G).] For every patch, the direction of the change due to light color is about the same!

Why all linear and in the same direction? The image formation equation: $\rho_k = \sigma \int E(\lambda)\, S(\lambda)\, Q_k(\lambda)\, d\lambda$, $k = 1..3$, where $\rho_k$ is the color response, $\sigma$ the shading (intensity), $E(\lambda)$ the light SPD, $S(\lambda)$ the surface reflectance, and $Q_k(\lambda)$ the sensor sensitivity. Now let's make some assumptions.
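A minimal numerical sketch of this image-formation equation, assuming spectra discretized at 10 nm over 400-700 nm; the illuminant, reflectance, and Gaussian sensor curves below are made-up placeholders, not the measured data from the talk:

import numpy as np

# Wavelength samples (nm), 400-700 nm at 10 nm spacing -- 31 samples,
# matching the 31 x 3 sensor matrix mentioned later in the talk.
lam = np.arange(400, 710, 10)

# Placeholder spectra: E = illuminant SPD, S = surface reflectance,
# Q = 31 x 3 matrix of sensor sensitivities (columns ~ R, G, B).
E = np.ones_like(lam, dtype=float)              # flat "white" light (assumption)
S = 0.5 + 0.4 * np.sin(lam / 50.0)              # arbitrary smooth reflectance
Q = np.stack([np.exp(-((lam - c) / 30.0) ** 2)  # Gaussian sensor curves
              for c in (610, 540, 450)], axis=1)

sigma = 0.8  # Lambertian shading factor

# rho_k = sigma * integral of E(l) S(l) Q_k(l) dl, approximated by a sum.
rho = sigma * (E * S) @ Q
print(rho)   # one response per channel, k = 1..3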

Assumption 1: Light is approximately Planckian (or satisfies some other 1-D assumption). Wien's approximation of a Planckian source: $E(\lambda, T) \simeq I\, k_1\, \lambda^{-5}\, e^{-k_2/(\lambda T)}$. Note the single 1-D parameter: T = temperature = light color.
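As a quick sanity check (not part of the talk), the snippet below compares Wien's approximation against the exact Planck form at visible wavelengths; the relative error is exp(-k2/(lam*T)), which is small over most of the visible range for common illuminant temperatures and grows only toward the red end at very high color temperatures:

import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # physical constants (SI)
k2 = h * c / kB                            # ~1.4388e-2 m*K

def planck(lam, T):
    """Exact Planck radiance (up to a constant factor)."""
    return lam ** -5 / (np.exp(k2 / (lam * T)) - 1.0)

def wien(lam, T):
    """Wien's approximation: drop the -1 in the denominator."""
    return lam ** -5 * np.exp(-k2 / (lam * T))

lam = np.linspace(400e-9, 700e-9, 31)      # visible wavelengths (m)
for T in (2500.0, 6500.0, 10000.0):        # typical illuminant temperatures (K)
    rel_err = np.max(np.abs(wien(lam, T) - planck(lam, T)) / planck(lam, T))
    print(T, rel_err)                      # error = exp(-k2/(lam*T))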

Assumption 2: Narrow-band sensors. The Sony DXC-930 camera has fairly narrow-band sensitivities. Using spectral sharpening, we can make almost all sensor sets have this property [Finlayson, Drew, Funt].

Modified image formation. The k-th response, after substituting the narrow-band assumption $Q_k(\lambda) = q_k\,\delta(\lambda - \lambda_k)$ and the Planckian (Wien) assumption, becomes $\rho_k = \sigma\, I\, k_1 \lambda_k^{-5}\, e^{-k_2/(\lambda_k T)}\, S(\lambda_k)\, q_k$. Take logs: $\log\rho_k = \log(\sigma I) + \log\big(k_1 \lambda_k^{-5} S(\lambda_k)\, q_k\big) - k_2/(\lambda_k T)$. Response = light intensity + surface + light color.

Implications. We have k equations of the form $\log\rho_k = \log(\sigma I) + \log s_k - k_2/(\lambda_k T)$, with $s_k = k_1 \lambda_k^{-5} S(\lambda_k)\, q_k$. The intensity term $\log(\sigma I)$ is common to all equations and can be removed by simple differencing at this pixel. This results in k-1 independent equations of the form $\log(\rho_k/\rho_p) = \log(s_k/s_p) - \frac{k_2}{T}\big(\frac{1}{\lambda_k} - \frac{1}{\lambda_p}\big)$: a reflectance term plus a light-color term.

Implications (continued). [Plot: log chromaticities of 7 surfaces viewed under 10 lights.] (1) If there are 3 sensors we have two independent equations of this form. (2) For a single surface viewed under different colored lights, the log chromaticities must fall on a line. (3) Different surfaces induce lines with the *same* orientation.

[Figure: the log-chromaticity plane splits into a "luminance" (lighting-change) direction and an orthogonal 1-D invariant, gray direction.] One degree of freedom is invariant to light change.

More formally: form band ratios and take logs to define the 2-vector $\chi = \big(\log(\rho_R/\rho_G),\ \log(\rho_B/\rho_G)\big)$. Then $\chi = \mathbf{s} + \frac{1}{T}\,\mathbf{e}$, where $\mathbf{s}$ depends only on the surface and $\mathbf{e}$ only on the sensors: a line in 2D, with the same direction $\mathbf{e}$ for every surface.

What is this good for? With certain restrictions, from a 3-band color image we can derive a 1-D grayscale image which is: illuminant-invariant, and so shadow-free.
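A minimal sketch of the per-pixel invariant under the assumptions above. The lighting-change direction is taken as a known angle theta here; the value used below is a placeholder, and a later slide covers how to estimate it from calibration data:

import numpy as np

def invariant_grayscale(rgb, theta):
    """Project log band-ratio chromaticities onto the direction
    orthogonal to the lighting-change direction (angle theta, radians).
    rgb: H x W x 3 float array with strictly positive values."""
    eps = 1e-6
    r, g, b = rgb[..., 0] + eps, rgb[..., 1] + eps, rgb[..., 2] + eps
    # 2-D log chromaticity: chi = (log(R/G), log(B/G))
    chi = np.stack([np.log(r / g), np.log(b / g)], axis=-1)
    # Unit vector orthogonal to the lighting direction e = (cos t, sin t)
    e_perp = np.array([-np.sin(theta), np.cos(theta)])
    # 1-D invariant: component of chi along e_perp (lighting shifts cancel)
    return chi @ e_perp

# Hypothetical usage with a random image and a placeholder angle:
img = np.random.rand(4, 4, 3) + 0.1
gray = invariant_grayscale(img, theta=np.deg2rad(160.0))  # placeholder angle
print(gray.shape)  # (4, 4)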

Then use edge information to integrate back an image without shadows [Finlayson, Hordley & Drew, ECCV 2002]. The original and invariant edge maps are approximately the same, except that the invariant edge map has no shadow edges.
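A rough sketch of the re-integration idea, not the exact ECCV 2002 procedure: zero the gradients of each log channel where the original image has a strong edge but the invariant image does not (a likely shadow edge), then re-integrate by solving a Poisson equation. The thresholds, periodic boundary handling, and plain Jacobi solver below are simplistic placeholders:

import numpy as np

def remove_shadow_edges(log_channel, invariant, t_orig=0.1, t_inv=0.05, iters=500):
    """Sketch only: edit gradients of one log image channel, then re-integrate."""
    # Forward differences (last row/column padded so shapes match).
    gx = np.diff(log_channel, axis=1, append=log_channel[:, -1:])
    gy = np.diff(log_channel, axis=0, append=log_channel[-1:, :])
    ix = np.diff(invariant, axis=1, append=invariant[:, -1:])
    iy = np.diff(invariant, axis=0, append=invariant[-1:, :])

    # Shadow-edge mask: strong gradient in the original, weak in the invariant.
    shadow = (np.hypot(gx, gy) > t_orig) & (np.hypot(ix, iy) < t_inv)
    gx[shadow], gy[shadow] = 0.0, 0.0

    # Divergence of the edited gradient field (backward differences).
    div = (np.diff(gx, axis=1, prepend=gx[:, :1]) +
           np.diff(gy, axis=0, prepend=gy[:1, :]))

    # Jacobi iterations for the Poisson equation lap(u) = div
    # (np.roll gives periodic boundaries -- acceptable for a sketch).
    u = log_channel.copy()
    for _ in range(iters):
        nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
              np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = (nb - div) / 4.0
    return u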

Other tasks: tracking, etc. Tracking result for a moving hand under lamp light [Jiang and Drew, 2003].

But a problem: it doesn't always remove all shadows; this depends on the camera sensors.

How do we find the light-color change direction? [Plot: mean-subtracted log-chromaticities for the Sony DXC-930 camera.] Use a robust line-finder.
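One simple way to estimate that direction from calibration images (a sketch; the talk uses a robust line-finder rather than this plain least-squares fit): pool the mean-subtracted log-chromaticities of all patches and take their dominant principal direction:

import numpy as np

def lighting_direction(chi_per_patch):
    """chi_per_patch: list of (n_lights, 2) arrays of log-chromaticities,
    one array per calibration patch. Returns a unit 2-vector giving the
    common direction of illuminant-induced change."""
    residuals = np.vstack([c - c.mean(axis=0) for c in chi_per_patch])
    # Dominant principal direction of the mean-subtracted points.
    _, _, vt = np.linalg.svd(residuals, full_matrices=False)
    return vt[0]

# Hypothetical usage with synthetic data: 24 patches, each shifted along
# a common direction e by 14 "lights", plus a little noise.
t = np.deg2rad(160.0)
e = np.array([np.cos(t), np.sin(t)])
patches = [np.random.randn(1, 2) + np.linspace(-1, 1, 14)[:, None] * e
           + 0.01 * np.random.randn(14, 2) for _ in range(24)]
print(lighting_direction(patches))  # close to +/- e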

Problem: the invariant image isn't invariant across illuminants.

It gets worse: the Kodak DCS420 camera is much less sharp.

How to proceed? Try spectral sharpening, since we wish to make the sensors more narrow-band. Or just optimize directly, making the invariant image more invariant, e.g. by optimizing the color-matching functions:

Invariant image for patches → apply the optimized sensors to any image. [Images: before optimization of sensors vs. after optimization of sensors.]

How to optimize? First, let's use a linear matrixing transform, taking the 31 × 3 sensor matrix Q (wavelength samples × sensors) to a new sensor set Q' = Q M, where M is 3 × 3. Should we sharpen to get M?

Should we sharpen to get M? There's a problem: if we made sensors that were all the same, the definition would make the invariant go to zero, so the more the sensors are alike, the "better". Sharpening and flattening both "work"...

So we need a term to steer away from a rank-reduced M. Optimize on the squared correlation coefficient $R^2$ and encourage high effective rank, computed from the singular values of M. Initialize with the data-driven spectral-sharpening matrix.
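A hedged sketch of what such an optimization could look like; the objective, weighting, and entropy-based effective-rank term below are illustrative guesses rather than the exact formulation used in the talk. It fits a single common direction to the log-chromaticity shifts produced by the transformed sensors Q' = Q M, scores the fit by R², and penalizes rank collapse of M:

import numpy as np
from scipy.optimize import minimize

def objective(m_flat, Q, E_list, S, lam_weight=1.0):
    """Score a candidate 3x3 matrix M (flattened). Q: 31 x 3 sensors,
    E_list: list of illuminant SPDs (each length 31), S: 31 x n_patches
    reflectances. Illustrative objective: maximize R^2 of a one-direction
    fit, minus a penalty that discourages a rank-reduced M."""
    M = m_flat.reshape(3, 3)
    Qp = Q @ M                                     # transformed sensors (31 x 3)
    # Responses for every (light, patch): rho[l, p, k]
    rho = np.stack([(E[:, None] * S).T @ Qp for E in E_list])
    rho = np.clip(rho, 1e-9, None)
    chi = np.stack([np.log(rho[..., 0] / rho[..., 1]),
                    np.log(rho[..., 2] / rho[..., 1])], axis=-1)
    resid = chi - chi.mean(axis=0, keepdims=True)  # remove each patch's mean
    pts = resid.reshape(-1, 2)
    # R^2 taken here as the variance share explained by one direction.
    s = np.linalg.svd(pts, compute_uv=False)
    r2 = s[0] ** 2 / (s[0] ** 2 + s[1] ** 2)
    # Effective-rank term from the singular values of M (entropy-based).
    sv = np.linalg.svd(M, compute_uv=False)
    p = sv / sv.sum()
    eff_rank = np.exp(-(p * np.log(p + 1e-12)).sum())
    return -(r2 + lam_weight * eff_rank)           # minimize the negative

# Hypothetical usage, initialized at the identity (the talk initializes
# with a data-driven spectral-sharpening matrix instead):
# res = minimize(objective, np.eye(3).ravel(), args=(Q, E_list, S))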

So optimize M. E.g., for the color-matching functions, $R^2$ goes from 0.43 to 0.94.

HP912 camera: $R^2$ goes from 0.86 to 0.93; entropy: … bits/pixel.

Real image: entropy … bits/pixel with an M applied.

Thanks!