Color-Invariant Motion Detection under Fast Illumination Changes. Paper by: Ming Xu and Tim Ellis. CIS 750. Presented by: Xiangdong Wen. Advisor: Prof. Latecki

Agenda
>Introduction
>Color Fundamentals
>Color-Invariant Motion Detection
>Experimental Results
>Discussion
>Conclusion

Introduction
>Motion detection algorithms: based on a differencing operation of image intensities between a frame and a background image.
>Background image: reflects the static elements in a scene.
>The background image needs to be updated because of:
>Lack of a target-free training period;
>Gradual illumination variations;
>Background objects which then move.
>Updating scheme: linear interpolation between the previous background value and the new observation (a minimal sketch follows after this slide).
>Gaussian mixture model: based on gray-level or RGB color intensities;
>can detect a large proportion of changes;
>cannot follow fast illumination changes:
>moving clouds,
>long shadows,
>switching of artificial lighting.
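For orientation only, a minimal sketch of such a linear-interpolation (running average) background update in Python; the learning rate alpha is an illustrative assumption, not a value from the paper:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average update: interpolate between the previous background
    estimate and the new observation.  `alpha` is a hypothetical learning
    rate; the transcript does not fix its value."""
    return (1.0 - alpha) * background.astype(np.float64) + alpha * frame.astype(np.float64)
```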

>Marchant and Onyango (2000) proposed a physics-based method for shadow compensation in scenes illuminated by daylight:
>represented the daylight as a black body;
>assumed the color filters to be of infinitely narrow bandwidth.
>Result: as illumination changes, the ratio (R/B)/(G/B)^A depends on surface reflection only (A can be calculated from the daylight model and the camera).
>Finlayson et al. (2000) used the same scheme.
>Result: the Log-Chromaticity Differences (LCDs) ln(R/G) and ln(B/G) are independent of light intensity, and there exists a weighted combination of LCDs which is independent of light intensity and light color, identifying a particular object surface under varying illumination.

Adaptive schemes in color-invariant detection of motion under varying illumination
>Wren et al. (1997) used the normalised components U/Y and V/Y of the YUV color space to remove shadows in an indoor scene:
>a single adaptive Gaussian represents the probability density of a pixel belonging to the background;
>the scene without people has to be learned beforehand in order to locate people.
>Raja et al. (1998) used hue (H) and saturation (S) of an HSI color space to decouple the influence of illumination changes in an indoor scene:
>a Gaussian mixture model was used to estimate the probabilities of each pixel belonging to a multi-colored foreground object;
>each Gaussian models one color of the foreground object and was learned in a training stage.

Motion detection in outdoor environments illuminated by daylight
>A reflection model influenced by ambient objects is used.
>Large-scale illumination change mainly arises from varying cloud cover.
>The dominant illumination comes from either direct sunlight or reflection from clouds.
>The normalised rgb color space is used to eliminate the influence of varying illumination.
>A Gaussian mixture model is used to model each pixel of the background, providing multi-background modelling capabilities for complex outdoor scenes.

Colour Fundamentals
An image taken with a color camera is composed of sensor responses:
C_k = ∫ E(λ) S(λ) Q_k(λ) dλ
where E(λ) is the illumination, λ the wavelength, S(λ) the reflectance of the object surface, and Q_k(λ) the camera sensitivity of channel k; C_k is the image intensity in channel k.
>The appearance of objects is a result of the interaction between illumination and reflectance.
>To track an object surface, it is desirable to separate the variation of the illumination from that of the surface reflection.

Shadow model (1)
In an outdoor environment, fast illumination changes occur at the regions where shadows emerge or disappear:
>large-scale (arising from moving cloud);
>small-scale (from objects themselves).
Shadow model (Gershon et al. 1986):
>there is only one illuminant in the scene;
>some of the light does not reach the object because of blocking objects,
>creating a shadow region and a directly lit region on the object;
>the shadow region is illuminated by the reflection from each ambient object j:

Shadow model cont.
>The reflected light from the object surface:
>for the directly lit region:
>for the shadow region:
>Assume the chromatic average of the ambient objects is gray, i.e. it is relatively balanced in all visible wavelengths, and
where c is independent of wavelength and may vary over space.

Shadow model cont.
The assumption is realistic for the fast-moving cloud case, in which the only illuminant is the sunlight and both the blocking and ambient objects are gray (or white) clouds. Under this assumption, the reflected light from the directly lit and shadow regions stays in proportion for a given object surface, and thus the image intensities in all color channels stay in proportion whether lit or shadowed. The proportionality between the RGB color channels can be represented using the normalised color components:
r = R/(R+G+B), g = G/(R+G+B), b = B/(R+G+B)
where each component will stay constant for a given object surface under varying illumination. (A minimal conversion sketch follows after this slide.)
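A minimal sketch of this conversion in Python, assuming an H x W x 3 RGB frame; the epsilon guard against division by zero is my addition, not part of the paper:

```python
import numpy as np

def normalized_rgb(frame):
    """Compute the normalized color components r = R/(R+G+B),
    g = G/(R+G+B), b = B/(R+G+B) for an H x W x 3 RGB frame."""
    frame = frame.astype(np.float64)
    total = frame.sum(axis=2, keepdims=True) + 1e-6  # avoid division by zero
    return frame / total
```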

Color-Invariant Motion Detection
>A single Gaussian is sufficient to model a pixel value for one channel of the RGB components resulting from a particular surface under particular lighting, and to account for acquisition noise.
>A single adaptive Gaussian is sufficient to model each RGB channel if lighting changes gradually over time: the estimated background value is interpolated between the previous estimate and the new observation. It cannot follow an RGB component under fast illumination changes.
>A normalized color component (rgb) for a given object surface tends to be constant under lighting changes and is appropriate to model with an adaptive Gaussian.
>Multiple adaptive Gaussians (a mixture of Gaussians) are used to model a pixel at which multiple object surfaces may appear as the background, e.g. swaying trees.

Color-Invariant Motion Detection cont.
Let the pixel value at time t be X_t, modeled by a mixture of N Gaussian distributions. The probability of observing the pixel value is:
P(X_t) = Σ_{i=1..N} P(X_t | B_i) P(B_i)
where P(X_t | B_i) is the Gaussian probability density function G of the i-th background B_i, and P(B_i) is the mixture weight reflecting the likelihood that this distribution accounts for the observed data.

Scheme
>Every new observation, X_t, is checked against the N Gaussian distributions.
>A match is defined as an observation within about 3 standard deviations of a distribution.
>If none of the N distributions matches the current pixel value, the least probable distribution is replaced by the new observation.
>For the matched distribution i, the parameters are updated as:
>For the unmatched distributions:
>The distribution(s) with the greatest weight is (are) considered as the background model. (A hedged sketch of one common update scheme follows after this slide.)
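The transcript omits the parameter-update equations. For orientation, here is a per-pixel sketch in Python in the common Stauffer-Grimson style; the learning rate alpha, the replacement weight 0.05, and the match ranking are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

class PixelGMM:
    """Mixture of N 1-D adaptive Gaussians for one normalized color
    component of a single pixel.  Update rules follow the common
    Stauffer-Grimson form, not necessarily the paper's exact equations."""

    def __init__(self, n=3, alpha=0.02, init_dev=0.05):
        self.weight = np.full(n, 1.0 / n)
        self.mean = np.random.rand(n)          # arbitrary initialization
        self.var = np.full(n, init_dev ** 2)
        self.alpha = alpha
        self.init_dev = init_dev

    def update(self, x):
        """Update the mixture with observation x; return True if x matches a
        distribution of greatest weight (i.e. is considered background)."""
        std = np.sqrt(self.var)
        matches = np.abs(x - self.mean) < 3.0 * std   # match = within ~3 std
        if not matches.any():
            # Replace the least probable distribution with the new observation.
            k = int(np.argmin(self.weight))
            self.mean[k], self.var[k] = x, self.init_dev ** 2
            self.weight[k] = 0.05
            is_background = False
        else:
            # Pick the best-matching distribution (highest weight/std ratio).
            k = int(np.argmax(np.where(matches, self.weight / std, -1.0)))
            rho = self.alpha                          # simplified learning rate
            self.mean[k] += rho * (x - self.mean[k])
            self.var[k] += rho * ((x - self.mean[k]) ** 2 - self.var[k])
            self.weight = (1.0 - self.alpha) * self.weight
            self.weight[k] += self.alpha
            is_background = (k == int(np.argmax(self.weight)))
        self.weight /= self.weight.sum()
        return is_background
```

In practice one such model would be kept per pixel and per normalized color component.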

Experimental Results
To assess the significance of the color-invariant motion detection:
>the model was evaluated at both the pixel and frame levels using a set of image sequences;
>the image sequence was captured at a frame rate of 2 Hz;
>each frame was compressed in JPEG format;
>frame size: 384x288 pixels.
This sequence well represents the abundant contexts of a daylit outdoor environment:
>fast illumination changes, waving trees, shading of tree canopies,
>highlights of specular reflection, as well as pedestrians.

The absolute (RGB) and normalised (rgb) color components at selected pixels through time.
(a) No foreground object is present.
(b) Foreground objects are present.
>The absolute color components (RGB) change greatly with the illumination.
>The normalized color components (rgb) for a background pixel have flat profiles under illumination changes.
>For each foreground pixel, at least one rgb component appears as an apparent spike.

The parameter updating procedure of the Gaussian background model for one color component.
(a) A lit region with foreground objects.
(b) A shadowed region without foreground objects.
The thin lines represent the upper and lower bound profiles (mean plus/minus deviation), respectively.

Comparing the RGB and rgb results under little illumination change:
>The results are coherent.
>Because of the different emphasis of image contexts, the "blobs" appear as different shapes.

The RGB and rgb results under a major illumination change.
(a) A large area of the background is detected as a huge foreground object.
(c) Ground truth targets are clearly visible under fast illumination changes.

Discussion (1)
>Appropriate selection of the initial deviation:
>an underestimate of the initial deviation prohibits many "ground truth" background pixels from being adapted into the background models;
>an overestimate of the initial deviation needs a longer learning period at the start of an image sequence.
>Currently it is manually selected and globally uniform, according to the noise level in shaded regions, where the absolute noise level in the rgb components is high.
>In the future it may be selected automatically according to the local spatial variation of the rgb components at the start time (a rough sketch follows after this slide).
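One possible way to realize that automatic selection, shown only as an illustration: estimate a per-pixel deviation from the local spatial standard deviation of the normalized rgb components in the first frame. The window size and the use of a simple local standard deviation are assumptions, not the paper's method:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_rgb_deviation(first_frame_rgb, window=7):
    """Per-pixel local standard deviation of the normalized rgb components
    of the first frame, usable as a per-pixel initial deviation."""
    x = first_frame_rgb.astype(np.float64)
    mean = uniform_filter(x, size=(window, window, 1))
    mean_sq = uniform_filter(x * x, size=(window, window, 1))
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
```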

Discussion (2)
>The rgbl color space:
>combines the intensity I with the rgb color space;
>is an invertible transformation from RGB space;
>avoids the loss of the intensity information;
>robustly determines the shadowed region:
>the rgb components are stable;
>the I component is significantly lower.
>Two kinds of pixels may be excluded from consideration (see the sketch after this slide):
>saturated RGB components can make the corresponding rgb components unconstrained;
>the rgb components in over-dark regions are very noisy.
>To alleviate this problem:
>use cameras with auto iris control;
>gamma correction.
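A hedged illustration of excluding saturated and over-dark pixels before computing the rgb components; the thresholds below are assumptions for 8-bit images, not values from the paper:

```python
import numpy as np

def valid_pixel_mask(frame_rgb, low=10, high=245):
    """True where a pixel is neither saturated (rgb unconstrained) nor
    over-dark (rgb very noisy); `low` and `high` are illustrative."""
    not_saturated = (frame_rgb < high).all(axis=2)
    not_too_dark = frame_rgb.sum(axis=2) > 3 * low
    return not_saturated & not_too_dark
```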

Conclusions
>A Gaussian mixture model based on the rgb color space has been presented for maintaining a background image for motion detection.
>The scheme is especially successful when applied to outdoor scenes illuminated by daylight and is robust to fast illumination changes arising from moving cloud and self-shadows.
>The success results from a realistic reflection model in which shadows are present.

Thank you!