Shadow Removal Seminar


1 Shadow Removal Seminar
Color Invariance

2 What we have covered so far
What causes shadows and what shadows mean for us (how the human brain interprets shadows).
How to create shadows graphically.
Some shadow detection techniques.

3 This lecture – Overview
Intro
Invariance and color invariance
Shadow classification
Shadow segmentation

4 Intro – Shadows
Generation of shadows
Shadow types: cast shadows, self shadows
Shadows: as we already know, a shadow occurs when an object partially or totally occludes direct light from a source of illumination. Shadows can be divided into two cases: self shadows and cast shadows.

5 Intro – Invariance
Invariant: a feature (quantity, property or function) that remains unchanged when a particular transformation is applied to it.
What invariance is used for.
Invariance in images.
Matlab demo (see the sketch below).
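The Matlab demo itself is not part of the transcript; as a stand-in, here is a minimal Python sketch (my own, not the original demo) showing the idea of invariance in images: normalized rgb chromaticities do not change when the overall illumination intensity is scaled.

    import numpy as np

    def normalized_rgb(rgb):
        """Normalized rgb chromaticities: each channel divided by R+G+B."""
        s = rgb.sum(axis=-1, keepdims=True)
        return rgb / np.maximum(s, 1e-12)   # guard against division by zero

    pixel = np.array([120.0, 80.0, 40.0])   # an arbitrary RGB value
    darker = 0.4 * pixel                    # same surface under weaker light

    print(normalized_rgb(pixel))    # roughly [0.50, 0.33, 0.17]
    print(normalized_rgb(darker))   # identical values: the chromaticity is invariant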

6 Intro – Shadow detection techniques
Shadow detection techniques classification: model based, property based.
Model based techniques rely on models representing the a priori knowledge of the geometry of the scene, the objects, and the illumination. Property based techniques identify shadows by using features such as the geometry, brightness or color of shadows. (from "Cast shadow segmentation using invariant color features")

7 Shadow identification and classification using invariant color models
Elena Salvador, Andrea Cavallaro, Touradj Ebrahimi, 2001

8 Overview
Goal
Constraints
Color invariants
Algorithm steps
Results
Conclusions

9 Goal: Extraction and classification of shadows in color images.

10 Constraints
A simple environment is assumed, where shadows are cast on a flat or nearly flat, non-textured surface.
Objects are uniformly colored.
Only one light source illuminates the scene.
Shadows and objects are within the image.
The light source must be strong.

11 Color Invariants
Photometric color invariants: definition.
Models of photometric color invariants: normalized rgb; hue (H) and saturation (S); (C1,C2,C3) and (L1,L2,L3).
Photometric color invariants are functions which describe the color configuration of each image point while discounting shading, shadows and highlights.

12 Color Invariants - cont
The C1C2C3 color invariant features are defined as the angles of the body reflection vector in RGB-space (see the sketch below). In fact, any expression defining colors on the same linear color cluster spanned by the body reflection vector in RGB-space is an invariant for the dichromatic reflection model with white illumination, and from that assumption comes this color invariant model. Denoting the angles of the body reflection vector, these features are consequently invariants for matte, dull objects. (Color Based Object Recognition, Theo Gevers and Arnold W.M. Smeulders, 1999)
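The formula image is not reproduced in the transcript; the sketch below computes the C1C2C3 features following the cited Gevers and Smeulders (1999) definition (each component is the arctangent of one channel over the maximum of the other two). The helper name and the example pixel values are my own.

    import numpy as np

    def c1c2c3(rgb):
        """C1C2C3 invariants (Gevers & Smeulders, 1999).
        rgb: float array of shape (..., 3). Returns an array of the same shape."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        eps = 1e-12                                   # guard against division by zero
        c1 = np.arctan(r / np.maximum(np.maximum(g, b), eps))
        c2 = np.arctan(g / np.maximum(np.maximum(r, b), eps))
        c3 = np.arctan(b / np.maximum(np.maximum(r, g), eps))
        return np.stack([c1, c2, c3], axis=-1)

    # A pixel and the same pixel under weaker light (all channels scaled down)
    # map to the same invariant values, which is the property the method relies on.
    print(c1c2c3(np.array([120.0, 80.0, 40.0])))
    print(c1c2c3(np.array([60.0, 40.0, 20.0])))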

13 Algorithm steps
Shadow candidates identification:
Edge detection
Finding the outer points of the edge map
Intensities at those points used as reference
Morphological processing used to close the contours of the edge map
As we already know, shadows result from the obstruction of light from the light source, so the luminance values in a shadow region are smaller than those in the surrounding lit regions. In this section (shadow candidates identification) I'll describe the method used here to find the shadow candidates. 1) First, an edge map is obtained by applying the Sobel operator to the luminance. 2) Horizontal and vertical scanning is performed on the edge map in order to find its outer points. The intensities at the detected points are used as reference to determine whether the pixels in the inner part of the edge map are darker and therefore candidates to be shadow points. Note that since luminance is a color feature that is sensitive to shadows and shadings, the map contains both object and shadow edges. By using this edge map in the dark-regions extraction process, we restrict the search for shadow candidate regions to the portion of the image that is occupied by the object and its cast shadow.
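A simplified sketch of this identification step, under my own assumptions (the function name, Sobel threshold, darkness ratio and the row-only scan are illustrative; the paper also scans vertically):

    import numpy as np
    from scipy import ndimage

    def shadow_candidates(rgb, edge_thresh=50.0, dark_ratio=0.9):
        """Simplified sketch of candidate identification.
        rgb: float image of shape (H, W, 3). Returns a boolean candidate mask."""
        # 1) Edge map from the Sobel operator applied to the luminance.
        lum = rgb.mean(axis=-1)                      # crude luminance
        gx = ndimage.sobel(lum, axis=1)
        gy = ndimage.sobel(lum, axis=0)
        edges = np.hypot(gx, gy) > edge_thresh
        # Morphological processing to close the contours of the edge map.
        edges = ndimage.binary_closing(edges, structure=np.ones((3, 3)))

        # 2) Horizontal scan: the outer edge points of each row give the
        #    reference intensity; darker pixels between them are candidates.
        candidates = np.zeros_like(edges, dtype=bool)
        for y in range(lum.shape[0]):
            cols = np.flatnonzero(edges[y])
            if cols.size < 2:
                continue
            left, right = cols[0], cols[-1]
            reference = 0.5 * (lum[y, left] + lum[y, right])
            inner = lum[y, left:right + 1]
            candidates[y, left:right + 1] = inner < dark_ratio * reference
        # (A vertical scan would be performed analogously and combined.)
        return candidates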

14 Algorithm steps - cont
Shadow classification:
Applying photometric color invariants
Edge detection
Classification
Once the dark regions have been extracted from the image, color information can be used to classify shadow regions on the object (self shadow) and shadow regions on the background (cast shadows). By performing edge detection on the invariant color features, an edge map which does not contain the edges corresponding to shadow boundaries is obtained (c). The color edge map and the dark-regions map are then used as input for the classification level. The process for classifying the dark regions is similar to that used at the identification level: the input color edge map is scanned in the horizontal and vertical directions to find its outer points. The detected points indicate the outer edge points of the object. Points in the dark-region mask that lie within the detected edge points are classified as self shadow points; the outer points are classified as cast shadow points.
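A matching sketch of the classification step (again my own simplification, not the authors' code): dark-region pixels lying between the outer points of the invariant color edge map are labelled self shadow, the rest cast shadow.

    import numpy as np

    def classify_dark_regions(dark_mask, invariant_edges):
        """Label dark-region pixels as self shadow (inside the object boundary
        found in the invariant edge map) or cast shadow (outside it).
        dark_mask, invariant_edges: boolean arrays of shape (H, W).
        Returns an int array: 0 = not shadow, 1 = self shadow, 2 = cast shadow."""
        labels = np.zeros(dark_mask.shape, dtype=np.int8)
        for y in range(dark_mask.shape[0]):
            row_dark = np.flatnonzero(dark_mask[y])
            if row_dark.size == 0:
                continue
            cols = np.flatnonzero(invariant_edges[y])
            if cols.size < 2:
                labels[y, row_dark] = 2              # no object boundary on this row
                continue
            left, right = cols[0], cols[-1]          # outer edge points of the object
            inside = (row_dark >= left) & (row_dark <= right)
            labels[y, row_dark[inside]] = 1          # self shadow
            labels[y, row_dark[~inside]] = 2         # cast shadow
        # (A vertical scan would refine this in the same way.)
        return labels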

15 Algorithm steps - summary

16 Results

17 Conclusions
This method succeeds in detecting and classifying shadows under environmental constraints that are less restrictive than those of other methods.
A strategy still needs to be defined to describe the object color while discounting the effect of self shadow.

18 Cast shadow segmentation using invariant color features
Elena Salvador, Andrea Cavallaro and Touradj Ebrahimi 2004

19 Overview
Goal
Constraints
Spectral properties of shadows
Dichromatic reflection model
Photometric color invariants
Algorithm steps
Results

20 Goal: Detection of cast shadows in video and in still images.

21 Constraints
The ambient light is assumed to be proportional to the occluded direct light.
Inter-object reflection among different surfaces is not taken into account.
Video: the camera is not moving.
The reason for the first bullet: ambient light can have different spectral characteristics with respect to direct light. The case of outdoor scenes, where the diffuse light from the sky differs in spectral composition from the direct light from the sun, provides an example. Since we aim in this work at avoiding calibration procedures and camera-dependent computations, so as to propose a segmentation algorithm that can be applied even when no control over the imaging conditions and the scene is possible, we assume the first bullet.

22 Dichromatic reflection model
Radiance of light.
When an object obstructs the direct light, only the ambient term remains.
Let S_R(λ), S_G(λ), S_B(λ) be the spectral sensitivities of the R, G and B sensors of a color camera.
The appearance of a surface is the result of the interaction among the illumination, the surface reflectance properties, and the responses of the chromatic mechanism; in a color camera this chromatic mechanism is composed of three color filters. To model the physical interaction between illumination and an object's surface we consider the dichromatic reflection model. Radiance of light: Lr is the light reflected at a given point p on a surface in 3D, La is the ambient reflection term, Lb the body reflection term, Ls the surface reflection term, and λ is the wavelength. If there is no direct illumination because an object obstructs the direct light, the radiance of the reflected light reduces to the ambient term alone, which represents the intensity of the reflected light at a point in a shadow region.
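The equation images are missing from the transcript; a LaTeX reconstruction based only on the terms named in the notes above (La, Lb, Ls, wavelength λ, surface point p):

    L_r(\lambda, p) = L_a(\lambda, p) + L_b(\lambda, p) + L_s(\lambda, p)

    % direct light blocked: only the ambient reflection remains
    L_r^{\mathrm{shadow}}(\lambda, p) = L_a(\lambda, p)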

23 Dichromatic reflection model - cont
The color components of the reflected intensity that reach the camera sensors.
Sensor measurements in direct light.
Sensor measurements for a point in shadow.
Image irradiance.
The color components of the reflected intensity reach the sensors at point (x,y) in the 2D image plane. Ci, with i in {R,G,B}, are the sensor responses, E(λ,x,y) is the image irradiance at (x,y), and S_Ci(λ) is one of {S_R(λ), S_G(λ), S_B(λ)}. The interval of integration is determined by S_Ci(λ), which is non-zero over a bounded interval of wavelengths v. Since the image irradiance is proportional to the scene radiance, for a pixel position (x,y) representing a point p in direct light the sensor measurements give a color vector C(x,y)_lit, where α is the proportionality factor between radiance and irradiance. For a point in shadow the measurements give a color vector C(x,y)_shadow = (R_shadow, G_shadow, B_shadow). It follows that each of the three RGB components, if positive and not zero, decreases when passing from a lit region to a shadowed one. Note: irradiance is the radiant flux incident upon a unit area of a surface; for sunlight it is the number of watts received per square metre of the Earth's surface.
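Likewise, a LaTeX reconstruction of the sensor equations described above (the slide's exact notation may differ; i ranges over the R, G, B sensors and v is the wavelength interval where S_{C_i} is non-zero):

    C_i(x, y) = \int_{v} E(\lambda, x, y)\, S_{C_i}(\lambda)\, d\lambda

    C_i^{\mathrm{lit}}(x, y) = \alpha \int_{v} L_r(\lambda, p)\, S_{C_i}(\lambda)\, d\lambda
    \qquad
    C_i^{\mathrm{shadow}}(x, y) = \alpha \int_{v} L_a(\lambda, p)\, S_{C_i}(\lambda)\, d\lambda

    % hence, componentwise:
    R_{\mathrm{shadow}} < R_{\mathrm{lit}}, \quad
    G_{\mathrm{shadow}} < G_{\mathrm{lit}}, \quad
    B_{\mathrm{shadow}} < B_{\mathrm{lit}}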

24 Dichromatic reflection model - cont
The conclusions are:

25 Color invariance The color invariants are the same as in the previous article.

26 Algorithm steps
Hypothesis generation: dichromatic model
Accumulation of evidence: color invariance test, geometric properties test
Decision

27 Hypothesis generation
Still images: find edges with the Sobel operator; use reference pixels to find suspected shadow areas.
Video: analysis is performed only in areas identified by a motion detector; the reference image represents the background of the scene; to obtain more robustness the analysis is performed over a window.
Still images: the edge map is obtained by applying the Sobel operator separately to the color channels and then combining the results with a logical OR. A contour point (x,y) becomes a candidate shadow contour point if the reference pixel value is greater than the current pixel value, where the reference pixel is taken from the first-level neighborhood.
Video: the reference pixel (xr,yr) belongs to a reference image which represents the background of the scene; the reference image can be either a frame in the sequence or a reconstructed one, and the reference pixel is at the same location as (x,y) in the image under analysis. In the noise-free case the condition I(xr,yr) - I(x,y) > 0 for each color channel indicates that the pixel is in shadow. To obtain more robustness, for each pixel position the analysis is performed over a window (see the sketch below).
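A minimal sketch of the video test just described (the window size and the majority rule are my own illustrative choices):

    import numpy as np
    from scipy import ndimage

    def shadow_hypothesis_video(frame, reference, win=5):
        """Hypothesis generation for video (simplified sketch).
        A pixel is a shadow candidate if, within a window, the current frame
        is darker than the reference background in every color channel.
        frame, reference: float arrays of shape (H, W, 3)."""
        darker = np.all(reference - frame > 0, axis=-1).astype(float)
        # Window operation for robustness to noise: require a majority of the
        # neighbourhood to satisfy the per-channel condition.
        local_fraction = ndimage.uniform_filter(darker, size=win)
        return local_fraction > 0.5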

28 Hypothesis generation - cont
Result of the first level: the candidate shadow points belonging to the edge map.

29 Accumulation of evidence – overview
The color invariance property is used to strengthen or cancel the hypothesized shadow areas. The existence of a shadow line and a hidden line is checked. The result of the first level of analysis is the identification of a set of candidate shadow pixels. Photometric invariant color features and spatial constraints are exploited at this level of the shadow segmentation process: the invariant color features are compared, for every pixel, with the features of the reference pixel. If the value of the invariant color features has not changed with respect to the reference, the hypothesized shadow is strengthened.

30 Accumulation of evidence – Still Images
Color edge detection performed in the invariant space. Morphological dilation applied on the edge map. Isolated pixels removed.
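A compact sketch of these three operations for the still-image case (the threshold, structuring element and isolated-pixel rule are illustrative assumptions, not the paper's exact parameters):

    import numpy as np
    from scipy import ndimage

    def invariant_edge_map(invariant_img, edge_thresh=0.1):
        """Color edge detection in the invariant space, followed by dilation
        and removal of isolated pixels (simplified sketch).
        invariant_img: float array of shape (H, W, K) of invariant features."""
        edges = np.zeros(invariant_img.shape[:2], dtype=bool)
        for c in range(invariant_img.shape[-1]):      # per invariant channel, then OR
            gx = ndimage.sobel(invariant_img[..., c], axis=1)
            gy = ndimage.sobel(invariant_img[..., c], axis=0)
            edges |= np.hypot(gx, gy) > edge_thresh
        edges = ndimage.binary_dilation(edges, structure=np.ones((3, 3)))
        # Remove isolated pixels: keep only edge pixels with at least one
        # edge neighbour in their 8-neighbourhood.
        neighbours = ndimage.convolve(edges.astype(int), np.ones((3, 3), int)) - edges
        return edges & (neighbours > 0)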

31 Accumulation of evidence – in video
Compute invariant feature values and compare them with the reference.
Geometric property test: the position of the shadow with respect to the object is tested.
The identification of the pixels satisfying the first evidence is achieved by analyzing the difference in the invariant feature values. The reference picture and the current picture are converted to the invariant space and then subtracted from each other. In the ideal case the shadowed area should give d(x,y) = 0, but this does not occur in real images, so, as in the still-image case, the analysis is performed over a window and a threshold is set: if d(x,y) < threshold, the pixel is a shadow. Once the set of pixels is obtained, the position of shadows with respect to objects is tested (geometric property). In case a hypothesized shadow is fully included in an object, the shadow line is not present, and the shadow hypothesis is then weakened.
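A sketch of the invariant-feature comparison for video (the choice of invariant transform, window size and threshold are assumptions for illustration):

    import numpy as np
    from scipy import ndimage

    def invariance_test(frame_inv, reference_inv, win=5, threshold=0.05):
        """Compare invariant color features of the current frame with those of
        the reference image. d(x, y) should be near zero inside true shadows,
        so small window-averaged differences strengthen the shadow hypothesis.
        frame_inv, reference_inv: e.g. the c1c2c3 features of the current
        frame and of the background image (see the earlier sketch)."""
        d = np.abs(frame_inv - reference_inv).sum(axis=-1)   # per-pixel difference
        d_window = ndimage.uniform_filter(d, size=win)       # robustness to noise
        return d_window < threshold                          # True: still a shadow candidate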

32 Information integration
Results of integrating all stages.
Once the additional evidences have been extracted, a decision-making step is performed. This final step allows the fusion of the different pieces of information: the result is a rejection of the initial hypothesis in case the rules are not respected; otherwise the hypothesis is confirmed. If the analysis of the photometric invariant color features on the candidate shadow is not successful, the pixel is labeled as a material change. If the analysis is successful, the candidate shadow undergoes further analysis by means of the geometrical constraints; this final verification is required to eliminate the last ambiguities. (C) is the color edge map of the invariant features, containing material boundaries for which the shadow hypothesis is weakened; (D) is the integration of shadow evidence from the spectral analyses of (B) and (C); (E) and (F) are refinements by means of geometric analysis, providing the shadow line and hidden shadow line (E) and the complete shadow contours (F).

33 Results
In video there is a problem with shadows that do not move; outdoor scenes are also much harder.

34 References
Shadow identification and classification using invariant color models. Elena Salvador, Andrea Cavallaro, Touradj Ebrahimi, 2001.
Cast shadow segmentation using invariant color features. Elena Salvador, Andrea Cavallaro and Touradj Ebrahimi, 2004.

35 The End…

36 Sobel operator
Performs a 2-D spatial gradient measurement on an image and so emphasizes regions of high spatial gradient that correspond to edges. Basic Sobel convolution masks (see the sketch below).
The Sobel operator performs a 2-D spatial gradient measurement on an image and so emphasizes regions of high spatial gradient that correspond to edges. Typically it is used to find the approximate absolute gradient magnitude at each point in an input grayscale image. The masks are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, one mask for each of the two perpendicular orientations. The masks can be applied separately to the input image to produce separate measurements of the gradient component in each orientation (call these Gx and Gy); these can then be combined to find the absolute magnitude of the gradient at each point, |G| = sqrt(Gx^2 + Gy^2), and the orientation of that gradient.
Note about convolution: one of the most powerful techniques in all of image processing is convolution. Convolution is the modification of a pixel's value on the basis of the values of neighboring pixels. Images are convolved by multiplying each pixel and its neighbors by a numerical matrix, called a kernel. This matrix is moved over each pixel in the image, each pixel under the matrix is multiplied by the appropriate matrix value, the total is summed and normalized, and the central pixel is replaced by the result: C(x,y) = sum(sum(P(i,j)*M(i,j))) / sum(sum(M(i,j))).
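Since the mask image is not in the transcript, the standard 3x3 Sobel masks and the resulting gradient magnitude can be sketched as follows (numpy, with scipy used for the convolution):

    import numpy as np
    from scipy import ndimage

    # Standard Sobel convolution masks for the horizontal (Gx) and
    # vertical (Gy) gradient components.
    SOBEL_X = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)
    SOBEL_Y = np.array([[ 1,  2,  1],
                        [ 0,  0,  0],
                        [-1, -2, -1]], dtype=float)

    def sobel_magnitude(gray):
        """Apply both masks to a float grayscale image and combine them
        into the gradient magnitude |G| = sqrt(Gx^2 + Gy^2)."""
        gx = ndimage.convolve(gray, SOBEL_X)
        gy = ndimage.convolve(gray, SOBEL_Y)
        return np.sqrt(gx ** 2 + gy ** 2)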

37 Pseudo-convolution kernels in general
We can use a pseudo-convolution operator to perform these two steps in one pass. With the 3x3 neighborhood labelled
P1 P2 P3
P4 P5 P6
P7 P8 P9
the approximate magnitude is given by |G| = |(P1+2*P2+P3)-(P7+2*P8+P9)| + |(P3+2*P6+P9)-(P1+2*P4+P7)|. Often this absolute magnitude is the only output the user sees: the two components of the gradient are conveniently computed and added in a single pass over the input image using the pseudo-convolution operator.
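A small sketch of the single-pass approximation |G| ~ |Gx| + |Gy| described above, using the same kernels as on the previous slide:

    import numpy as np
    from scipy import ndimage

    def sobel_magnitude_approx(gray):
        """Approximate gradient magnitude |G| ~ |Gx| + |Gy|, which avoids the
        square root and can be accumulated in a single pass over the image."""
        gx = ndimage.convolve(gray, np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float))
        gy = ndimage.convolve(gray, np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], float))
        return np.abs(gx) + np.abs(gy)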

38 Morphological dilation of images
The state of any given pixel in the output image is determined by applying a rule to the corresponding pixel and its neighbors in the input image. The rule used to process the pixels defines the operation as a dilation or an erosion. Dilation: The value of the output pixel is the maximum value of all the pixels in the input pixel's neighborhood. In a binary image, if any of the pixels is set to the value 1, the output pixel is set to 1. (the neighborhood in this example is the structuring element).
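A minimal sketch of binary dilation with a 3x3 structuring element, implementing exactly the maximum-over-neighbourhood rule described above:

    import numpy as np

    def binary_dilate_3x3(image):
        """Binary dilation: an output pixel is 1 if any pixel in the 3x3
        neighbourhood of the corresponding input pixel is 1."""
        padded = np.pad(image.astype(bool), 1, mode='constant', constant_values=False)
        out = np.zeros(image.shape, dtype=bool)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out |= padded[1 + dy: 1 + dy + image.shape[0],
                              1 + dx: 1 + dx + image.shape[1]]
        return out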

39 Examples of photometric color invariants
(L1,L2,L3) (Color Based Object Recognition, Theo Gevers and Arnold W.M. Smeulders, 1999)
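The formula image is not included in the transcript; to the best of my recollection of the cited Gevers and Smeulders (1999) paper, the l1l2l3 invariants are normalized squared color differences, which can be sketched as:

    import numpy as np

    def l1l2l3(rgb):
        """(l1, l2, l3) invariants: normalized squared color differences
        (as I recall them from Gevers & Smeulders, 1999; verify against the paper).
        rgb: float array of shape (..., 3)."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        rg, rb, gb = (r - g) ** 2, (r - b) ** 2, (g - b) ** 2
        denom = np.maximum(rg + rb + gb, 1e-12)      # guard against gray pixels
        return np.stack([rg / denom, rb / denom, gb / denom], axis=-1)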

