1 Learning and Removing Cast Shadows through a Multidistribution Approach Nicolas Martel-Brisson, Andre Zaccarin IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE VOL.29, NO. 7, July 2007

2 Outline
- Introduction
- Proposed approach and underlying assumption
- Gaussian Mixture Models
- Learning shadow distributions
- Experiments
- Conclusion

3 Introduction
Shadow detection algorithms can be classified as property-based or model-based.
- Property-based: use geometry, brightness, or color, without any a priori knowledge of the scene.
- Model-based: well suited to particular situations.

4 Introduction
How to describe a shadow:
- A shadow reduces luminance values while maintaining chromaticity values.
- In an RGB color space, background values under a cast shadow are proportional to background values under direct lighting.
- Cucchiara et al. use the hypothesis that shadows reduce surface brightness and saturation while maintaining hue in the HSV color space.
- Schreer et al. observe that shadows reduce the YUV pixel values linearly.
- etc.

5 Introduction
However, these algorithms, which rely on chroma constancy under cast shadows, can falsely label foreground pixels as cast shadows.

6 Introduction Without a priori knowledge of the scene, the shadow volume defined by property-based shadow detection algorithms cannot be reduced without increasing the number of missed detections. RGB background values under a cast shadow will not necessarily be proportional to RGB values under direct light.

7 Proposed approach and underlying assumption
The approach combines two mixture models: a Gaussian Mixture Model (GMM), whose candidate shadow states are periodically transferred to a Gaussian Mixture Shadow Model (GMSM).

8 GMM vs. GMSM
- What is modeled? GMM: the distribution of values at a fixed pixel over the video sequence, including background, foreground, shadow, etc. GMSM: only shadow values.
- What is it learned from? GMM: all pixel values at the same position in the video sequence. GMSM: only shadow pixel values that satisfy a property-based shadow description.
- Input form: GMM: a random variable in a color space (the pixel value X_t). GMSM: the parameters of shadow distributions taken from the GMM.
- Ordering rule: GMM: states sorted in decreasing order of w_k/σ_k. GMSM: states sorted in decreasing order of their a priori probabilities.
- Updating period: GMM: every frame. GMSM: at fixed frame intervals.

9 Proposed approach and underlying assumption
Exploit the repetitiveness of the appearance of cast shadows to learn shadowed surface values.
- Shadow values are not as frequent as background values, but their rate of appearance is higher than that of random foreground values.
- Shadow values are therefore associated with frequently seen distributions that are labeled as foreground by the GMM.

10 Proposed approach and underlying assumption
- To prevent shadow values from being quickly discarded, we increase their learning rate, leading to stable shadow distributions.
- A second multidistribution learning process, the Gaussian Mixture Shadow Model (GMSM): to prevent shadow distributions from being discarded like other foreground distributions, we store the parameters of the shadow distributions.
- Complex and changing scene conditions may require learning and storing the parameters of more than one shadow distribution.

11 Proposed approach and underlying assumption
Four distinct aspects:
- The properties of shadowed surfaces are learned.
- The approach adapts to the scene's nonuniform and time-varying lighting conditions.
- It can also learn shadow distributions whose RGB values deviate slightly from the hypothesis that they are proportional to the background RGB values.
- Regions where moving cast shadows cannot be detected are excluded from the shadow model.

12 Gaussian Mixture Models
A fixed number of states K, typically between 3 and 5, is defined. Each pixel value X_t is a sample, in a color space, of a random variable X. Each state k is described by a Gaussian probability density function η(X_t; μ_k, Σ_k) and an a priori probability w_k, so the pixel value distribution is modeled as
P(X_t) = Σ_{k=1..K} w_{k,t} · η(X_t; μ_{k,t}, Σ_{k,t})

13 Gaussian Mixture Models
The K states are ordered by decreasing values of w_k/σ_k. The first B states, whose combined a priori probability of appearing is greater than a threshold T, are labeled as background states; the other states are labeled as foreground states.
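
To make slides 12-13 concrete, here is a minimal per-pixel sketch in Python/NumPy of the mixture representation and of the background-state selection rule. The State class, the single shared per-channel variance, and all names (mixture_density, background_count, the threshold value T=0.7) are illustrative assumptions, not the paper's implementation.

    import numpy as np

    class State:
        """One Gaussian state of the per-pixel mixture."""
        def __init__(self, mean, var, weight):
            self.mean = np.asarray(mean, dtype=float)  # mean color (e.g., RGB)
            self.var = float(var)                      # shared per-channel variance
            self.weight = float(weight)                # a priori probability w_k

    def mixture_density(states, x):
        """Evaluate P(X_t) = sum_k w_k * eta(X_t; mu_k, sigma_k) for one pixel value."""
        x = np.asarray(x, dtype=float)
        p = 0.0
        for s in states:
            d = x - s.mean
            norm = (2.0 * np.pi * s.var) ** (x.size / 2.0)
            p += s.weight * np.exp(-0.5 * float(d @ d) / s.var) / norm
        return p

    def background_count(states, T=0.7):
        """Return B: the number of leading states (sorted by w/sigma) whose
        cumulative a priori probability first exceeds the threshold T."""
        ordered = sorted(states, key=lambda s: s.weight / np.sqrt(s.var), reverse=True)
        total = 0.0
        for b, s in enumerate(ordered, start=1):
            total += s.weight
            if total > T:
                return b
        return len(states)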

14 Gaussian Mixture Models
A pixel value X_t is associated with the state k having the smallest label among the states satisfying |X_t - μ_k| < 2.5·σ_k. If the pixel value cannot be associated with any existing distribution, a new state is created around this value with a low initial a priori probability, and the least probable state is dropped.

15 Gaussian Mixture Models
The a priori probabilities and the distribution parameters are then updated:
w_{k,t} = (1 - α)·w_{k,t-1} + α·M_{k,t}
μ_t = (1 - ρ)·μ_{t-1} + ρ·X_t
σ²_t = (1 - ρ)·σ²_{t-1} + ρ·(X_t - μ_t)ᵀ(X_t - μ_t)
where α is the learning rate, ρ is the learning rate of the matched distribution, and M_{k,t} is 1 for the matched state and 0 otherwise.
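
A sketch of the match-and-update step from slides 14-15, reusing the State class above. The 2.5-sigma match threshold follows the slides; the learning rate, the initial variance of a new state, and the simplified choice of rho are assumptions for illustration.

    import numpy as np

    ALPHA = 0.005        # learning rate (illustrative value)
    MATCH_SIGMA = 2.5    # match threshold, in standard deviations

    def update_gmm(states, x, alpha=ALPHA):
        """Match pixel value x to a state and update the mixture in place."""
        x = np.asarray(x, dtype=float)
        matched = None
        # check the most probable states first (smallest label after ordering)
        for s in sorted(states, key=lambda s: s.weight / np.sqrt(s.var), reverse=True):
            if np.all(np.abs(x - s.mean) < MATCH_SIGMA * np.sqrt(s.var)):
                matched = s
                break
        if matched is None:
            # no match: replace the least probable state with one centered on x
            states.sort(key=lambda s: s.weight)
            states[0] = State(x, var=30.0 ** 2, weight=alpha)  # low initial weight
            matched = states[0]
        # a priori probability update: w_k <- (1 - alpha) w_k + alpha M_k
        for s in states:
            m = 1.0 if s is matched else 0.0
            s.weight = (1.0 - alpha) * s.weight + alpha * m
        # mean and variance update of the matched state only
        rho = min(1.0, alpha / max(matched.weight, 1e-6))  # simplified choice of rho
        matched.mean = (1.0 - rho) * matched.mean + rho * x
        diff = x - matched.mean
        matched.var = (1.0 - rho) * matched.var + rho * float(diff @ diff)
        # keep the weights a probability distribution
        total = sum(s.weight for s in states)
        for s in states:
            s.weight /= total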

16 Learning shadow distributions
Three steps:
1. Identification of pixel values that could represent cast shadows.
2. Generation of stable shadow distributions within the GMM.
3. Learning and storing the parameters of shadow distributions with the GMSM.

17 Learning shadow distributions
Identification of pixels whose values could describe a shadowed surface:
- Use a property-based description of a shadowed surface.
- If a pixel value X_t matches a nonbackground state of the GMM, we then verify whether the pixel value matches the description of a shadowed surface for one of the background states k = 1, ..., B.
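
A possible form of the property-based candidate test, under the proportional-attenuation hypothesis mentioned in the introduction (all channels of a shadowed background value are scaled by roughly the same ratio). The ratio bounds and tolerance are assumed values; the experiments later use YUV-, brightness/chromaticity-, and HSV-based descriptions instead.

    import numpy as np

    def could_be_shadow(x, background_means, r_min=0.4, r_max=0.95, tol=0.1):
        """True if x could be a shadowed version of one of the background means,
        i.e., all color channels are attenuated by a similar ratio."""
        x = np.asarray(x, dtype=float)
        for mu in background_means:
            mu = np.asarray(mu, dtype=float)
            ratios = x / np.maximum(mu, 1e-6)
            r = float(ratios.mean())
            if r_min <= r <= r_max and np.all(np.abs(ratios - r) < tol):
                return True
        return False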

18 Learning shadow distributions
Generation of stable shadow distributions within the GMM.

19 Learning shadow distributions
When a pixel value is associated with a state, the a priori probability of that state increases as
w_{k,t} = (1 - α)·w_{k,t-1} + α·M_{k,t}
where α is the learning parameter and M_{k,t} is equal to 1 for the state associated with the pixel value and zero for the other states.

20 Learning shadow distributions
When the pixel value could describe a shadow over the background surface, we increase the learning rate of the state associated with this pixel value. By imposing a maximum value on the a priori probability of state k=1, we conserve the most frequently appearing foreground states, which are most likely to be shadow states.
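
A sketch of the boosted weight update for shadow candidates, continuing the update_gmm sketch above; the boost factor and the weight cap are illustrative values, not taken from the paper.

    def update_weight(state, is_matched, is_shadow_candidate,
                      alpha=0.005, boost=10.0, w_max=0.3):
        """Weight update with a larger learning rate when the matched value looks
        like a cast shadow; the cap keeps one weight from growing without bound."""
        a = alpha * boost if (is_matched and is_shadow_candidate) else alpha
        m = 1.0 if is_matched else 0.0
        state.weight = min((1.0 - a) * state.weight + a * m, w_max)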

21 Learning shadow distributions
Gaussian mixture shadow models: periodically transfer to a second mixture model, the GMSM, the Gaussian probability density functions of candidate shadow states, with their parameters (means and variances). First, test whether the mean of each such distribution could describe a shadowed surface.

22 Learning shadow distributions
If the test is true, the parameters are then compared to the existing GMSM distributions; when a match is found, the parameters of the matched GMSM state are updated toward the transferred distribution.

23 Learning shadow distributions
If there is no match, a new state is added to the GMSM, up to a maximum of K_s states. The a priori probabilities are then normalized and the states are sorted in decreasing order of their a priori probabilities.
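
A sketch of the periodic transfer into the GMSM described in slides 21-23, reusing the State class from the GMM sketch. The candidate argument stands for a converged GMM state whose mean passed the shadow test; the match rule, the learning rate alpha_s, and k_s = 3 are assumptions.

    import numpy as np

    def transfer_to_gmsm(candidate, gmsm_states, k_s=3, match_sigma=2.5, alpha_s=0.05):
        """Merge a candidate shadow distribution into the GMSM, or add it as a new
        state (up to k_s states), then renormalize and sort the weights."""
        for s in gmsm_states:
            if np.all(np.abs(candidate.mean - s.mean) < match_sigma * np.sqrt(s.var)):
                # matched: pull the shadow state toward the incoming distribution
                s.mean = (1.0 - alpha_s) * s.mean + alpha_s * candidate.mean
                s.var = (1.0 - alpha_s) * s.var + alpha_s * candidate.var
                s.weight += alpha_s
                break
        else:
            new_state = State(candidate.mean, candidate.var, alpha_s)
            if len(gmsm_states) < k_s:
                gmsm_states.append(new_state)
            else:
                gmsm_states.sort(key=lambda s: s.weight)  # drop the least probable state
                gmsm_states[0] = new_state
        total = sum(s.weight for s in gmsm_states)
        for s in gmsm_states:
            s.weight /= total
        gmsm_states.sort(key=lambda s: s.weight, reverse=True)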

24 Learning shadow distributions
Shadow detection: if a pixel value X_t is labeled as foreground, it is then compared to the shadow states of the GMSM. If the match condition is met for one of the first B_s (converged) GMSM states, the pixel is labeled as a moving cast shadow.
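
A sketch of the detection rule above; b_s is the number of converged GMSM shadow states, and the 2.5-sigma match is the same assumption as in the GMM sketch.

    import numpy as np

    def is_cast_shadow(x, gmsm_states, b_s, match_sigma=2.5):
        """Label a foreground pixel value x as moving cast shadow if it matches
        one of the first b_s GMSM states."""
        x = np.asarray(x, dtype=float)
        for s in gmsm_states[:b_s]:
            if np.all(np.abs(x - s.mean) < match_sigma * np.sqrt(s.var)):
                return True
        return False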

25 Learning shadow distributions
Summary of the GMM/GMSM algorithm.

26 Learning shadow distributions

27

28 Experiments: YUV-based description of shadowed surfaces
First estimate the attenuation ratio using the luminance component Y, then verify that both the U and V components are also reduced by a similar ratio.
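
A sketch of the YUV test just described, assuming U and V are stored as signed offsets around zero; the ratio bounds and the chroma tolerance are assumed values.

    def yuv_shadow_test(yuv, yuv_bg, r_min=0.4, r_max=0.95, tol=10.0):
        """Estimate the attenuation ratio from Y and check that U and V are
        reduced by a similar ratio."""
        y, u, v = (float(c) for c in yuv)
        yb, ub, vb = (float(c) for c in yuv_bg)
        if yb <= 0.0:
            return False
        r = y / yb                       # attenuation estimated from luminance
        if not (r_min <= r <= r_max):
            return False
        return abs(u - r * ub) < tol and abs(v - r * vb) < tol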

29 Experiments: YUV-based description of shadowed surfaces
Office scene with complex illumination. (a) Mean value of the first GMM background state. (b) A frame in the sequence. (c) Mean value of the first GMSM state. (d) Mean value of the second GMSM state.

30 Experiments: YUV-based description of shadowed surfaces
Distributions of the normalized background state (blue) and shadow states (red).

31 Experiments: YUV-based description of shadowed surfaces
Histogram of the background pixels acquired during the video sequence and the computed background distribution.

32 Experiments: YUV-based description of shadowed surfaces
Histogram of the cast shadow pixels acquired during the video sequence and the computed shadow distribution.

33 Experiments: YUV-based description of shadowed surfaces
Foreground and shadow detection. (a) Foreground detection from the GMM. (b) Shadow detection with the first GMSM state. (c) Shadow detection with any of the B_s GMSM states.

34 (a) Mean value of the first background state of the GMM. (b) Frame in the sequence. (c) Mean value of the first GMSM state. (d) Foreground detection from the GMM. (e) Shadow detection from the YUV description. (f) Shadow detection from the GMSM. (g) Foreground detection: Img(d) - Img(e). (h) Foreground detection: Img(d) - Img(f).

35 Experiments: YUV-based description of shadowed surfaces
Shadow volumes from the YUV (gray) and GMSM (blue) models and the background volume (red). The red circle represents the YUV value of the pixel, which belongs to the foreground.

36 Experiments: YUV-based description of shadowed surfaces
(a) Mean value of the first background state of the GMM. (b) Mean value of the first GMSM state. (c) Frame in the sequence. (d) Mean value of the second GMSM state. (e) Foreground detection from the GMM. (f) Shadow detection from the GMSM.

37 Experiments: Brightness and chromaticity distortion model
Horprasert et al. proposed a pixel-based segmentation model in RGB color space that decomposes each background value into a brightness distortion (α) and a chromaticity distortion (CD). The expected background value is computed from N training frames representing the static background.

38 Experiments: Brightness and chromaticity distortion model
Brightness distortion:
α_i = ( I_R·μ_R/σ_R² + I_G·μ_G/σ_G² + I_B·μ_B/σ_B² ) / ( (μ_R/σ_R)² + (μ_G/σ_G)² + (μ_B/σ_B)² )
Chromaticity distortion:
CD_i = sqrt( ((I_R - α_i·μ_R)/σ_R)² + ((I_G - α_i·μ_G)/σ_G)² + ((I_B - α_i·μ_B)/σ_B)² )
where I is the current pixel value and μ, σ are the per-channel mean and standard deviation of the background.

39 Experiments: Brightness and chromaticity distortion model
During the training phase, the variation b of the chromaticity distortion is evaluated:
b_i = sqrt( (1/N) · Σ_{t=1..N} CD_i(t)² )
Normalized chromaticity distortion: CD_i / b_i.
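
A sketch of the brightness and chromaticity distortion computation as it is commonly written for the Horprasert et al. model; mu and sigma are the per-channel background mean and standard deviation estimated from the N training frames, and all function names are illustrative.

    import numpy as np

    def brightness_chroma_distortion(i_rgb, mu, sigma):
        """Return (alpha, CD): the brightness distortion that best scales the
        background toward the observed value, and the remaining chromaticity
        distortion."""
        i_rgb, mu, sigma = (np.asarray(a, dtype=float) for a in (i_rgb, mu, sigma))
        sigma = np.maximum(sigma, 1e-6)
        alpha = float(np.sum(i_rgb * mu / sigma ** 2) / np.sum((mu / sigma) ** 2))
        cd = float(np.sqrt(np.sum(((i_rgb - alpha * mu) / sigma) ** 2)))
        return alpha, cd

    def chroma_variation(cd_training):
        """Root-mean-square variation b of CD over the training frames; the
        normalized chromaticity distortion is then CD / b."""
        cd_training = np.asarray(cd_training, dtype=float)
        return float(np.sqrt(np.mean(cd_training ** 2)))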

40 Experiments: Brightness and chromaticity distortion model
Pixel labeling:
- Background pixels: small normalized brightness distortion and small normalized chromaticity distortion.
- Cast shadow: small normalized chromaticity distortion and a brightness value lower than the background value.
- Unclassified pixels are labeled foreground.
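
A sketch of these labeling rules; the threshold values are assumptions, not the paper's.

    def label_pixel(alpha, cd_norm, tau_cd=3.0, tau_lo=0.9, tau_hi=1.1, tau_dark=0.4):
        """Classify one pixel from its brightness distortion alpha and its
        normalized chromaticity distortion."""
        if cd_norm < tau_cd:
            if tau_lo <= alpha <= tau_hi:    # brightness close to the background value
                return "background"
            if tau_dark <= alpha < tau_lo:   # darker, same chromaticity: cast shadow
                return "shadow"
        return "foreground"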

41 Experiments: Brightness and chromaticity distortion model
(a) Mean value of the first background state of the GMM. (b) Frame in the sequence. (c) Mean value of the first GMSM state. (d) Foreground detection from the GMM. (e) Shadow detection from the brightness and chromaticity distortion model. (f) Shadow detection from the GMSM. (g) Foreground detection: Img(d) - Img(e). (h) Foreground detection: Img(d) - Img(f).

42 Experiments: Brightness and chromaticity distortion model
Shadow volumes from the BCD (gray) and GMSM (blue) models and the background volume (red). The red circle represents the YUV value of the pixel, which belongs to the foreground.

43 Highway. (a) Mean value of the first background state of GMM. (b) Mean value of the first GMSM state. (c), (g), and (k) Frames from the sequence. (d), (h), and (l) Foreground detection from the GMM. (e), (i), and (m) Shadow detection from the brightness and chromaticity distortion model. (f), (j), and (n) Shadow detection from the GMSM.

44 Experiments: Brightness and chromaticity distortion model
Highway: background (red), shadow volumes using the BCD model (gray), and the GMSM (blue). The red circle represents the RGB value of the pixel belonging to a foreground car.

45 Experiments: HSV shadow model
Cucchiara et al. proposed that shadows cast on a surface reduce the brightness (V) value while maintaining the chromaticity (H and S) properties. A pixel is labeled as a shadow candidate if its V value, relative to the background V value, is attenuated within a fixed range while its H and S values remain close to the background values.
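
A sketch of the HSV shadow condition attributed to Cucchiara et al., assuming H in [0, 1) (circular) and S, V in [0, 1]; the threshold values are illustrative.

    def hsv_shadow_test(hsv, hsv_bg, ratio_lo=0.4, ratio_hi=0.9, tau_s=0.1, tau_h=0.1):
        """A pixel is a shadow candidate if V is attenuated within a fixed range
        while hue and saturation stay close to the background values."""
        h, s, v = hsv
        hb, sb, vb = hsv_bg
        if vb <= 0.0:
            return False
        ratio_ok = ratio_lo <= v / vb <= ratio_hi
        sat_ok = abs(s - sb) <= tau_s
        dh = abs(h - hb)
        hue_ok = min(dh, 1.0 - dh) <= tau_h   # circular distance on the hue axis
        return ratio_ok and sat_ok and hue_ok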

46

47 Conclusion
The proposed approach uses a GMM to learn, from their repeated appearance, the properties of shadowed background surfaces, and builds a second mixture model (the GMSM) for moving cast shadows on background surfaces.

