
1 Image Emotional Semantic Query Based On Color Semantic Description Wei-Ning Wang, Ying-Lin Yu Department of Electronic and Information Engineering, South China University of Technology, Guangzhou, P.R.China Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, Guangzhou, 18-21 August 2005

2 Outline Introduction Color image segmentation Semantic Descriptions of Colors Image query using semantic descriptions of colors Result

3 Introduction Content-based image retrieval (CBIR) systems support image searches based on perceptual features such as color, texture, and shape. However, users prefer to search with keywords at the semantic level rather than with low-level features (semantic-based retrieval). Color is one of the main visual cues, and because of the strong relationship between colors and human emotions, an emotional semantic query model based on color semantic description is proposed in this study.

4 Introduction Our method Segment images using color clustering in L*a*b* space. Generate semantic terms with a fuzzy clustering algorithm and describe each image region and the whole image with these terms. Present an image query scheme based on the image color semantic description, which allows users to query images with emotional semantic words.

5 Color image segmentation In this paper, we propose an effective image segmentation method that involves three stages:
1. Image preprocessing. Edges are removed and the images are smoothed with a Gaussian kernel.
2. Color space conversion. The color space is converted from RGB into L*a*b*.
3. Color clustering in L*a*b* space.

6 Color segmentation
1. Initialize the clustering centroids: K^(1) initial centroids μ_j^(1) and empty cluster sets Q_j^(1) (j = 1, 2, …, K^(1)).
2. In the i-th iteration, for each pixel P(L, a, b) in the image, find the cluster j whose centroid minimizes the Euclidean distance ||P - μ_j^(i)|| in L*a*b* space, and put the pixel into Q_j^(i).

7 Color segmentation
3. Update the cluster centroids as μ_j^(i) = (1/N_j) Σ_{P ∈ Q_j^(i)} P, where N_j is the number of pixels in cluster Q_j^(i).
4. For each cluster Q_j1^(i), if there exists a cluster Q_j2^(i) whose centroid lies closer than a merging threshold, merge Q_j1^(i) and Q_j2^(i) into Q_j^(i+1), update the merged centroid accordingly, and set K^(i+1) = K^(i) - 1.

8 Color segmentation
5. Repeat stages 2 to 4 until all clusters converge. After segmentation, the image is divided into K color regions (see the sketch below).
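A minimal sketch of this clustering loop, assuming Euclidean distance in L*a*b* and NumPy; the initial cluster count and the merging threshold are illustrative values, not specified in the slides:

```python
import numpy as np

def segment_lab(pixels_lab, k_init=8, merge_thresh=10.0, max_iter=50):
    """Cluster L*a*b* pixels (N x 3 array) with K-means-style updates plus
    merging of centroids that drift closer than merge_thresh.
    k_init and merge_thresh are illustrative values, not from the paper."""
    rng = np.random.default_rng(0)
    centroids = pixels_lab[rng.choice(len(pixels_lab), k_init, replace=False)]
    for _ in range(max_iter):
        # Assign each pixel to its nearest centroid (Euclidean distance in L*a*b*).
        dists = np.linalg.norm(pixels_lab[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update each centroid as the mean of its assigned pixels.
        new_centroids = np.array([
            pixels_lab[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(len(centroids))
        ])
        # Merge centroids that are closer than the threshold.
        merged, used = [], set()
        for j in range(len(new_centroids)):
            if j in used:
                continue
            group = [j]
            for l in range(j + 1, len(new_centroids)):
                if l not in used and np.linalg.norm(new_centroids[j] - new_centroids[l]) < merge_thresh:
                    group.append(l)
                    used.add(l)
            merged.append(new_centroids[group].mean(axis=0))
        merged = np.array(merged)
        if len(merged) == len(centroids) and np.allclose(merged, centroids, atol=1e-3):
            break
        centroids = merged
    return centroids, labels
```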

9 Semantic Descriptions of Colors We propose a color description model that automatically generates semantic terms for the segmented image regions and for the whole image through a fuzzy clustering algorithm. –LCH space conversion –Fuzzy Clustering –Regional Semantic Description of Colors –Global Semantic Description of Images

10 LCH space conversion The L*C*h space is selected because its definitions and measurements match human visual perception. -L: lightness -C: chroma (color saturation) -h: hue angle
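The L*a*b* to L*C*h conversion is a polar transform of the a*, b* plane; a small sketch (hue angle in degrees):

```python
import numpy as np

def lab_to_lch(lab):
    """Convert L*a*b* values (N x 3) to L*, C* (chroma), h (hue angle in degrees)."""
    L, a, b = lab[:, 0], lab[:, 1], lab[:, 2]
    C = np.hypot(a, b)                      # chroma: distance from the neutral axis
    h = np.degrees(np.arctan2(b, a)) % 360  # hue angle, wrapped to [0, 360)
    return np.stack([L, C, h], axis=1)
```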

11 Fuzzy Clustering Based on the findings of color-naming research, we develop a set of color semantic terms to name and describe colors.

12 Fuzzy Clustering Input a data sequence x_1, …, x_n, where x_i denotes the feature L* or C* of the i-th region and n is the number of color regions.
1. Initialize the 5 membership functions: c_0 = min{x_1, x_2, …, x_n}, c_6 = max{x_1, x_2, …, x_n}, and compute c_1, c_2, …, c_5 as c_j = c_0 + j(c_6 - c_0)/6.

13 Fuzzy Clustering
2. For each x_i, compute μ_ij using the following rules:
Rule 1: if x_i <= c_1, then μ_i,1 = 1 and μ_i,k = 0 for k ≠ 1.
Rule 2: if x_i >= c_5, then μ_i,5 = 1 and μ_i,k = 0 for k ≠ 5.
Rule 3: if c_j < x_i <= c_j+1, then μ_ij = (c_j+1 - x_i)/(c_j+1 - c_j), μ_i,j+1 = 1 - μ_ij, and μ_i,k = 0 for k ≠ j, j+1.
Here μ_ij is the membership value of the i-th pattern in the j-th semantic term, 1 <= i <= n, 1 <= j <= 5.

14 Example: for c_2 < x_i <= c_3, μ_i,2 = (c_3 - x_i)/(c_3 - c_2) and μ_i,3 = 1 - μ_i,2.

15 Fuzzy Clustering
3. Update the class centroids c_1, c_2, …, c_5 from the current memberships (here taken as the membership-weighted mean, c_j = Σ_i μ_ij·x_i / Σ_i μ_ij).
4. Repeat steps 2 and 3 until the centroids no longer change (see the sketch below).
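A sketch of this one-dimensional fuzzy clustering for a single feature (L* or C*), using the triangular membership rules of slide 13 and, as an assumption, a membership-weighted mean for the centroid update:

```python
import numpy as np

def fuzzy_cluster_1d(x, n_terms=5, max_iter=100, tol=1e-4):
    """Fuzzy-cluster a 1-D feature (e.g. region L* or C*) into n_terms semantic terms."""
    x = np.asarray(x, dtype=float)
    c0, c6 = x.min(), x.max()
    # Initial centroids c_1..c_5, evenly spaced between the minimum and maximum.
    c = c0 + np.arange(1, n_terms + 1) * (c6 - c0) / (n_terms + 1)
    for _ in range(max_iter):
        mu = np.zeros((len(x), n_terms))
        for i, xi in enumerate(x):
            if xi <= c[0]:
                mu[i, 0] = 1.0                       # Rule 1
            elif xi >= c[-1]:
                mu[i, -1] = 1.0                      # Rule 2
            else:                                    # Rule 3: xi lies between c_j and c_{j+1}
                j = np.searchsorted(c, xi) - 1
                mu[i, j] = (c[j + 1] - xi) / (c[j + 1] - c[j])
                mu[i, j + 1] = 1.0 - mu[i, j]
        # Centroid update: membership-weighted mean (assumed; the slide's formula is not shown).
        new_c = (mu * x[:, None]).sum(axis=0) / np.maximum(mu.sum(axis=0), 1e-12)
        if np.max(np.abs(new_c - c)) < tol:
            break
        c = new_c
    return c, mu
```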

16 Fuzzy Clustering (Figures: membership functions of lightness; membership functions of saturation.)

17 Fuzzy Clustering (Figure: membership functions of hue.)

18 Fuzzy Clustering According to the above membership functions, all the semantic terms (hue, lightness, saturation) can be generated automatically from the L*C*h values. A color is then described by a 3-dimensional semantic vector containing a lightness term, a saturation term and a hue term, giving 150 (5*5*6) vectors in total: Q_n = [q_1, q_2, q_3], n = 1, 2, …, 150, with q_1 = 1, …, 5, q_2 = 1, …, 5, q_3 = 1, …, 6. Example: [5, 2, 1] = very light weak red.
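For illustration, the index-to-term mapping implied by the example (lightness and saturation term lists are taken from the slides; the hue order is assumed to match the list on slide 23):

```python
LIGHTNESS = ["very dark", "dark", "medium", "light", "very light"]       # q1 = 1..5
SATURATION = ["colorless", "weak", "moderate", "strong", "very strong"]  # q2 = 1..5
HUE = ["red", "orange", "yellow", "green", "blue", "purple"]             # q3 = 1..6

def describe(q):
    """Render a semantic vector [q1, q2, q3] as its term description."""
    q1, q2, q3 = q
    return f"{LIGHTNESS[q1 - 1]} {SATURATION[q2 - 1]} {HUE[q3 - 1]}"

print(describe([5, 2, 1]))  # -> "very light weak red"
```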

19 Fuzzy Clustering When the saturation term is "colorless", the color is determined by lightness alone, so we use the semantic terms "black", "dark grey", "medium grey", "light grey" and "white" instead; the 30 colorless combinations collapse into these 5 terms, leaving 125 semantic vectors in total. For a given color S_i, we compute its membership in each of the 125 semantic vectors and select the vector with the largest membership as the semantic description of S_i. Following this method, we can obtain semantic terms for every color.

20 Regional Semantic Description of Colors Suppose a segmented region is composed of K pixels and S_i, i = 1, 2, …, K, is the color of the i-th pixel; the membership of pixel S_i in each of the 125 combinations, μ_Qn(S_i), n = 1, 2, …, 125, can be computed. Accumulating over all pixels gives the membership of the whole region for the 125 combinations, i.e. its membership histogram. The combination with the largest membership is selected as the regional semantic description (see the sketch below).
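A sketch of the regional description step; `pixel_membership` is a hypothetical helper that would combine the hue, lightness and saturation membership functions into a 125-element vector for one pixel:

```python
import numpy as np

def region_description(pixels_lch, pixel_membership, n_vectors=125):
    """Describe one segmented region by its dominant semantic vector.

    pixels_lch: (K, 3) array of L*, C*, h values for the region's pixels.
    pixel_membership: hypothetical helper returning a length-125 membership
    vector mu_Qn(S_i) for a single pixel.
    """
    hist = np.zeros(n_vectors)
    for pixel in pixels_lch:
        hist += pixel_membership(pixel)   # accumulate memberships over all K pixels
    hist /= len(pixels_lch)               # normalized membership histogram of the region
    return int(hist.argmax()), hist       # index of the dominant semantic vector + histogram
```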

21 Global Semantic Description of Images To describe the global character of an image, its average lightness, average saturation and average color contrast are defined as follows. Suppose the image I has n pixels. Global lightness = (1/n) Σ_i L_i. Global saturation = (1/n) Σ_i C_i. Global contrast is computed from the a_i and b_i values of the pixels. Here L_i, C_i, a_i and b_i denote the L*, C*, a* and b* values of the i-th pixel in the image, respectively.
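A sketch of the three global descriptors; the contrast formula is not reproduced on the slide, so the version below (mean distance of (a*, b*) from their image means) is only an assumption:

```python
import numpy as np

def global_description(L, C, a, b):
    """Global descriptors of an image from per-pixel L*, C*, a*, b* arrays."""
    global_lightness = L.mean()    # average lightness
    global_saturation = C.mean()   # average saturation
    # Average color contrast: sketched here as the mean distance of (a*, b*)
    # from their image means -- an assumption, since the slide's formula is not shown.
    global_contrast = np.hypot(a - a.mean(), b - b.mean()).mean()
    return global_lightness, global_saturation, global_contrast
```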


23 Image Query using Semantic Descriptions of Colors
A query is a global condition, a regional condition, or a combination of such conditions joined by "and", "or" and "not".
A global condition compares a global feature (lightness, saturation or contrast) with a term, e.g. "global lightness is greater than very dark".
A regional condition has the form "the percentage of the <region condition> <comparison> <percentage>", where a region condition tests hue, lightness and saturation terms and may itself be combined with "not", "and" and "or".
<hue term> ::= red | orange | yellow | green | blue | purple.
<comparison> ::= is greater than | is equal to | is less than | is greater than or equal to | is less than or equal to.
<lightness term> ::= very dark | dark | medium | light | very light.
<saturation term> ::= colorless | weak | moderate | strong | very strong.
<contrast term> ::= very low | low | medium | high | very high.
<grey-level term> ::= black | dark grey | middle grey | light grey | white.

24 Result - sad (the percentage of the ((light is less than medium and hue is equal to blue) or (light is less than middle grey)) is greater than 40%) or (global average lightness is less than dark)

25 Result - warm (the percentage of the ((light is greater than dark and saturation is greater than weak) and (hue equal to red or hue equal to orange or hue equal to yellow)) is greater than 20%) and (global light is greater than very dark).
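For illustration, a hedged sketch of how the "warm" query could be evaluated against per-region descriptions; the region fields, the area-fraction interpretation of the 20% threshold and the term orderings are assumptions, not the paper's implementation:

```python
# Illustrative evaluation of the "warm" query above. Each region is assumed to
# carry its area fraction and its semantic terms (field names are hypothetical).
WARM_HUES = {"red", "orange", "yellow"}
LIGHTNESS_ORDER = ["very dark", "dark", "medium", "light", "very light"]
SATURATION_ORDER = ["colorless", "weak", "moderate", "strong", "very strong"]

def is_warm(regions, global_lightness_term):
    """regions: list of dicts like
    {"area": 0.3, "hue": "red", "lightness": "light", "saturation": "strong"}."""
    warm_area = sum(
        r["area"] for r in regions
        if LIGHTNESS_ORDER.index(r["lightness"]) > LIGHTNESS_ORDER.index("dark")
        and SATURATION_ORDER.index(r["saturation"]) > SATURATION_ORDER.index("weak")
        and r["hue"] in WARM_HUES
    )
    # Warm regions must cover more than 20% of the image, and the image
    # must not be globally very dark.
    return warm_area > 0.20 and LIGHTNESS_ORDER.index(global_lightness_term) > 0
```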

