
Entropy and some applications in image processing Neucimar J. Leite Institute of Computing


1 Entropy and some applications in image processing Neucimar J. Leite Institute of Computing neucimar@ic.unicamp.br

2 Outline
– Introduction: intuitive understanding
– Entropy as global information
– Entropy as local information: edge detection, texture analysis
– Entropy as minimization/maximization constraints: global thresholding, the deconvolution problem

3 Information entropy (Shannon's entropy) An information-theoretic concept closely related to the following question: what is the minimum amount of data needed to represent a given information content? For images (compression problems): how little data is sufficient to completely describe an image without (much) loss of information?

4 Intuitive understanding: entropy relates the amount of uncertainty about an event to its probability distribution. Event: randomly draw a ball from an urn. Many equally likely colors mean high uncertainty (maximum entropy); a dominant color means low uncertainty; a single color means no uncertainty (zero entropy).

5 Example 1. Event: a coin flip, E in {heads, tails}, with P(heads) = P(tails) = 1/2; code heads as 0 and tails as 1. Self-information: the number of units of information needed to represent an event E, inversely related to its probability: I(E) = -log2 P(E). For the coin flip, I(heads) = I(tails) = -log2(1/2) = 1 bit.

6 Example 2: the amount of information conveyed by an event E is I(E) = -log2 P(E). Entropy is the average information over all possible events: H = -sum_i P(E_i) log2 P(E_i).

7 Coding eight distinct, equally likely balls with a fixed-length code (000 001 010 011 100 101 110 111) uses 3 bits/ball. The entropy is H = -8 × (1/8) log2(1/8) = 3 bits/ball, so an equal-length binary code for independent, equally likely data achieves no compression (degree of information compression = 1).

8 Medium uncertainty (5 red, 1 black, 1 blue, 1 green ball): H = -(5/8 log2(5/8) + 1/8 log2(1/8) + 1/8 log2(1/8) + 1/8 log2(1/8)) = 1.54 bits/ball. No uncertainty (all eight balls the same color): H = -1 · log2(1) = 0.

9 Medium uncertainty: H = 1.54 bits/ball, while a fixed-length 2-bit code for the four colors (00, 01, 10, 11) spends 2 bits/ball. Since 2 bits/ball > 1.54 bits/ball, the code carries about 22% redundancy. We need an encoding method that eliminates this code redundancy.
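These figures are easy to verify numerically. A minimal sketch (Python; the function name is mine, not from the slides) that reproduces the 1.54 bits/ball and 3 bits/ball values:

import math

def entropy(probs):
    # Shannon entropy in bits: H = -sum(p * log2(p)); zero-probability events are skipped
    return -sum(p * math.log2(p) for p in probs if p > 0)

# 5 red, 1 black, 1 blue, 1 green ball out of 8
print(entropy([5/8, 1/8, 1/8, 1/8]))   # ~1.55 bits/ball (the slide rounds to 1.54)

# 8 distinct, equally likely balls
print(entropy([1/8] * 8))              # 3.0 bits/ball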

10-14 The Huffman encoding (source reduction, built up step by step on slides 10-14):

Ball     Probability   Reduction 1   Reduction 2
red      5/8           5/8           5/8
black    1/8           2/8           3/8
blue     1/8           1/8
green    1/8

Code assignment, working backwards through the reductions: Reduction 2: 5/8 → (1), 3/8 → (0); Reduction 1: 5/8 → (1), 2/8 → (01), 1/8 → (00).

15 Resulting variable-length code:

Ball     Code
red      1
black    00
blue     011
green    010

Average length = 5/8 × 1 + 1/8 × 2 + 1/8 × 3 + 1/8 × 3 = 1.625 bits/ball, a saving of about 18.75% relative to the 2-bit fixed-length code.
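As a sanity check on the table, a small Huffman construction (a sketch of the standard algorithm, not the author's code; the heap-based approach and all names are mine) reproduces the same code lengths and average:

import heapq

def huffman_code_lengths(probs):
    # Build a Huffman tree bottom-up; each symbol's code length equals
    # the number of merges it takes part in.
    heap = [(p, i, {sym: 0}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        p1, _, lens1 = heapq.heappop(heap)
        p2, _, lens2 = heapq.heappop(heap)
        merged = {sym: depth + 1 for sym, depth in {**lens1, **lens2}.items()}
        heapq.heappush(heap, (p1 + p2, next_id, merged))
        next_id += 1
    return heap[0][2]

probs = {"red": 5/8, "black": 1/8, "blue": 1/8, "green": 1/8}
lengths = huffman_code_lengths(probs)
print(lengths)                                    # red: 1, the 1/8 symbols: 2, 3, 3 (tie-breaking may differ from the table)
print(sum(probs[s] * lengths[s] for s in probs))  # 1.625 bits/ball on average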

16 A 512 x 512 8-bit image with entropy 4.11 bits/pixel, and its rate after Huffman encoding (bits/pixel). Variable-length coding alone does not take advantage of the high pixel-to-pixel correlation of images: a pixel can largely be predicted from the values of its neighbors, so removing this interpixel redundancy leads to a lower entropy (bits/pixel).

17 Two example images: entropy 7.45 bits/pixel and 7.35 bits/pixel, with their respective rates after Huffman encoding.

18 Coding the interpixel differences highlights these redundancies: the difference images have entropy 4.73 instead of 7.45 bits/pixel and 5.97 instead of 7.35 bits/pixel, and the compression ratios after Huffman encoding improve accordingly (instead of 1.07 and 1.08 for the original images).
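The gain from coding interpixel differences can be illustrated with NumPy; the synthetic image below stands in for the slide's photographs (no real data from the slides is used):

import numpy as np

def image_entropy(img):
    # First-order entropy (bits/pixel) estimated from the gray-level histogram
    values, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Synthetic smooth 8-bit image with strong pixel-to-pixel correlation
x = np.linspace(0, 255, 512)
img = np.clip(np.add.outer(x, x) / 2 + np.random.randn(512, 512) * 3, 0, 255).astype(np.uint8)

# Horizontal interpixel differences (first column kept unchanged)
diff = img.astype(np.int16)
diff[:, 1:] = diff[:, 1:] - diff[:, :-1]

print(image_entropy(img))   # entropy of the original image
print(image_entropy(diff))  # noticeably lower for correlated images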

19 Entropy as local information: the edge detection example

20 Edge detection examples using gradient convolution masks (Sobel-type kernels: [-1 0 1; -2 0 2; -1 0 1] and [-1 -2 -1; 0 0 0; 1 2 1]).

21 Entropy-based edge detection: low entropy values correspond to low frequencies, i.e., uniform image regions; high entropy values correspond to high frequencies, i.e., image edges.

22 Binary entropy function: H(p) = -p log2(p) - (1 - p) log2(1 - p), which is 0 at p = 0 and p = 1 and reaches its maximum of 1 bit at p = 0.5 (the same curve is shown again on slides 23-27).

28 Binary entropy function: Isotropic edge detection

29 H in a 3x3 neighborhood:

30 5x5 neighborhood:

31 7x7 neighborhood:

32 9x9 neighborhood:
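A sketch of this kind of isotropic, entropy-based detector: the image is binarized and the binary entropy of the foreground fraction is computed in a sliding window (the binarization threshold, window handling, and function names are my assumptions; the slides may compute H from the gray-level histogram of each window instead):

import numpy as np
from scipy.ndimage import uniform_filter

def binary_entropy(p):
    # H(p) = -p log2(p) - (1 - p) log2(1 - p), with H(0) = H(1) = 0
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def entropy_edges(img, threshold=128, size=3):
    # Local binary entropy in a size x size window of the binarized image
    binary = (img >= threshold).astype(float)
    p = uniform_filter(binary, size=size)   # local fraction of foreground pixels
    return binary_entropy(p)

# Example: a vertical step edge gives high entropy along the transition only
img = np.zeros((64, 64), dtype=np.uint8)
img[:, 32:] = 255
edges = entropy_edges(img, size=3)

Larger windows (5x5, 7x7, 9x9) thicken the detected edges, as in the results above.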

33 Texture analysis: similarity grouping based on brightness, color, slope, size, etc. The perceived patterns of lightness, directionality, coarseness, regularity, and so on can be used to describe and segment an image.

34 Texture description: the statistical approach characterizes textures as smooth, coarse, periodic, etc., based on the intensity histogram taken as a probability density function. Example descriptors: let z_i be a random variable denoting gray level and p(z_i) the intensity histogram in a region. Mean, a measure of average intensity: m = sum_i z_i p(z_i).

35 Other moments of different orders, e.g., the standard deviation, a measure of average contrast: sigma = sqrt( sum_i (z_i - m)^2 p(z_i) ). Entropy, a measure of randomness: H = -sum_i p(z_i) log2 p(z_i).
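The three descriptors can be computed directly from the region histogram; a short sketch (the function name is mine) that produces the kind of values shown in the table on the next slide:

import numpy as np

def texture_descriptors(region):
    # Mean, standard deviation (average contrast) and entropy of a gray-level region
    levels, counts = np.unique(region, return_counts=True)
    p = counts / counts.sum()
    mean = np.sum(levels * p)
    std = np.sqrt(np.sum((levels - mean) ** 2 * p))
    entropy = -np.sum(p * np.log2(p))
    return mean, std, entropy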

36
Texture     Average intensity   Average contrast   Entropy
smooth      87.5                10.8               5.3
coarse      121.2               74.2               7.8
periodic    99.6                34.0               6.5

37 Descriptors and segmentation: ?

38 Gray-level co-occurrence matrix: Haralick's descriptors. The matrix conveys information about the relative positions of pixels having similar gray-level values (example: a small image with gray levels 1-3 and its co-occurrence matrix M_d(a, b) for d = 1).

39 M_d(i, j) is the probability that a pixel with gray level i has a pixel with level j at a distance of d pixels away in a given direction (e.g., d = 2 in the horizontal direction). For the entropy descriptor H computed over M: large empty regions of M carry little information content, while cluttered regions carry a large information content.
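A minimal co-occurrence matrix and its entropy descriptor; the offset convention (d pixels to the right), the normalization, and the names are my assumptions:

import numpy as np

def cooccurrence_matrix(img, d=1, levels=256):
    # M_d(i, j): normalized count of pixel pairs with gray level i at (r, c) and j at (r, c + d)
    M = np.zeros((levels, levels))
    left = img[:, :-d].ravel()
    right = img[:, d:].ravel()
    np.add.at(M, (left, right), 1)
    return M / M.sum()

def glcm_entropy(M):
    # Entropy descriptor H of the co-occurrence matrix
    p = M[M > 0]
    return -np.sum(p * np.log2(p))

Here img is assumed to be a 2-D array of integer gray levels; a nearly uniform (cluttered) M gives a large H, while a few isolated peaks in an otherwise empty M give a small H.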

40 Obviously, more complex texture analysis based on statistical descriptors should consider combining information related to image scale, moments, contrast, homogeneity, directionality, etc.

41 Entropy as minimization/maximization constraints

42 Global thresholding examples: thresholds placed at the image mean and between histogram peaks.

43 For images with levels 0-255, the probability that a given pixel has a value less than or equal to t is P(t) = sum_{i=0..t} p_i. Now consider two classes: class A, the levels 0..t with probabilities p_i / P(t), and class B, the levels t+1..255 with probabilities p_i / (1 - P(t)).

44 The optimal threshold is the value of t that maximizes H(t) = H_A(t) + H_B(t), where H_A(t) = -sum_{i=0..t} (p_i / P(t)) log2(p_i / P(t)) and H_B(t) = -sum_{i=t+1..255} (p_i / (1 - P(t))) log2(p_i / (1 - P(t))).
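A sketch of this maximum-entropy threshold selection (Kapur-style), following the formulation above; the histogram handling and names are mine:

import numpy as np

def max_entropy_threshold(img, levels=256):
    # Return the t that maximizes H_A(t) + H_B(t) over the normalized histogram
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, levels - 1):
        P_t = p[:t + 1].sum()
        if P_t <= 0 or P_t >= 1:
            continue
        pa = p[:t + 1] / P_t            # class A distribution
        pb = p[t + 1:] / (1 - P_t)      # class B distribution
        h = -np.sum(pa[pa > 0] * np.log2(pa[pa > 0])) \
            - np.sum(pb[pb > 0] * np.log2(pb[pb > 0]))
        if h > best_h:
            best_t, best_h = t, h
    return best_t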

45 Examples:

46 Entropy as a fuzziness measure. In fuzzy set theory an element x belongs to a set S with a degree of membership given by a membership function p_x(x). Example of a membership function for a given threshold t: p_x(x) gives the degree to which x belongs to the object or to the background, whose gray-level averages are mu_0(t) and mu_1(t), respectively.

47 How can the degree of fuzziness be measured? Example: for a binary image (t = 0), the fuzziness is 0.

48 Using Shannon's function for two classes, S(mu) = -mu log2(mu) - (1 - mu) log2(1 - mu), the entropy of the entire fuzzy set of dimension M x N is E(t) = (1 / (M N)) sum_x S(p_x(x)). For segmentation purposes, the threshold t is chosen so that E(t) is minimum, i.e., t minimizes the fuzziness.
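A sketch of this minimum-fuzziness thresholding (in the spirit of Huang and Wang); the membership function used here, 1 / (1 + |x - class average| / C), and the constant C are assumptions, since the slides' exact membership function is not reproduced above:

import numpy as np

def shannon_function(mu):
    mu = np.clip(mu, 1e-12, 1 - 1e-12)
    return -mu * np.log2(mu) - (1 - mu) * np.log2(1 - mu)

def min_fuzziness_threshold(img, levels=256):
    # Return the t that minimizes the fuzziness E(t) of the image seen as a fuzzy set
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    x = np.arange(levels)
    C = levels - 1                       # normalization constant (assumption)
    best_t, best_e = 0, np.inf
    for t in range(1, levels - 1):
        w0, w1 = hist[:t + 1].sum(), hist[t + 1:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:t + 1] * x[:t + 1]).sum() / w0   # background gray-level average
        mu1 = (hist[t + 1:] * x[t + 1:]).sum() / w1   # object gray-level average
        m = np.where(x <= t, mu0, mu1)
        membership = 1.0 / (1.0 + np.abs(x - m) / C)
        E = (hist * shannon_function(membership)).sum() / hist.sum()
        if E < best_e:
            best_t, best_e = t, E
    return best_t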

49 Segmentation examples

50 Maximum Entropy Restoration: the deconvolution problem

51 The image degradation model: the original image f(x, y) is blurred by the system h(x, y) and corrupted by noise, giving the degraded image g(x, y) = h(x, y) * f(x, y) + n(x, y).

52 The restoration problem: given g, h, and the noise statistics, find an estimate f̂ such that the residual g - h * f̂ is consistent with the noise. Since there may exist many functions f̂ satisfying this constraint, we can use entropy maximization as an additional constraint to select an "optimum" restoration (original, degraded, and restored images).
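One simple way to realize this idea numerically (not necessarily the formulation used in the slides) is gradient ascent on the image entropy penalized by the residual norm; the PSF handling, step size, and weight below are illustrative assumptions:

import numpy as np
from scipy.signal import convolve2d

def max_entropy_restore(g, h, lam=0.05, step=0.1, n_iter=500):
    # Maximize  sum(-f log f) - lam * ||h * f - g||^2  with f kept positive
    f = np.full_like(g, g.mean(), dtype=float)
    h_flip = h[::-1, ::-1]
    for _ in range(n_iter):
        residual = convolve2d(f, h, mode="same") - g
        grad_data = convolve2d(residual, h_flip, mode="same")   # gradient of 0.5 * ||residual||^2
        grad_entropy = -(np.log(f) + 1.0)                       # gradient of -sum f log f
        f = f + step * (grad_entropy - 2.0 * lam * grad_data)
        f = np.clip(f, 1e-8, None)                              # keep the estimate positive
    return f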

53 Other restoration methods applied to the degraded image (with PSF h): Wiener filtering, Lucy-Richardson, and maximum entropy.

54 Conclusions: entropy has been used extensively in various image processing applications. Other examples include distortion prediction, image evaluation, registration, multiscale analysis, high-level feature extraction, and classification.

