
1 Computer Graphics & Image Processing Lecture 3 Image Enhancement in Spatial Domain

2 Email: alijaved@uettaxila.edu.pk Office Room #: 7
Lecturer, Software Engineering Department, U.E.T Taxila

3 Image Enhancement
Process an image so that the result is more suitable than the original image for a specific application
Image enhancement is subjective (problem/application oriented)
Image enhancement methods:
Spatial domain: direct manipulation of pixels in an image (on the image plane)
Frequency domain: processing the image by modifying the Fourier transform of the image
Many techniques are based on various combinations of methods from these two categories

4 Image Enhancement

5 Basic Concepts
Spatial domain enhancement methods can be generalized as g(x,y) = T[f(x,y)]
f(x,y): input image; g(x,y): processed (output) image
T[·]: an operator on f (or a set of input images), defined over a neighborhood of (x,y)
Neighborhood of (x,y): a square or rectangular sub-image area centered at (x,y)

6 Basic Concepts

7 Basic Concepts
g(x,y) = T[f(x,y)]
Pixel/point operation: neighborhood of size 1×1, so g depends only on f at (x,y); T is a gray-level/intensity transformation (mapping) function
Let r = f(x,y) and s = g(x,y) denote the gray levels of f and g at (x,y); then s = T(r)
Local operations: g depends on a predefined number of neighbors of f at (x,y); implemented using mask processing or filtering
Masks (filters, windows, kernels, templates): a small (e.g. 3×3) 2-D array in which the values of the coefficients determine the nature of the process

8 3 basic gray-level transformation functions
Linear function: negative and identity transformations
Logarithm function: log and inverse-log transformations
Power-law function: nth power and nth root transformations

9 Identity Function
Output intensities are identical to input intensities; included in the graph only for completeness.

10 Image Negatives Reverses the gray level order
For L gray levels the transformation function is s =T(r) = (L - 1) - r
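The negative transformation above is a one-liner in practice. A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def negative(img, L=256):
    """Image negative: s = T(r) = (L - 1) - r, applied element-wise."""
    return (L - 1) - img

r = np.array([[0, 128, 255]], dtype=np.uint8)
print(negative(r))  # dark pixels become bright and vice versa
```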

11 Log Transformations
Function: s = c·log(1 + r)

12 Log Transformations
Properties:
For lower amplitudes of the input image, the range of gray levels is expanded
For higher amplitudes of the input image, the range of gray levels is compressed
Application: suitable when the dynamic range of a processed image far exceeds the capability of the display device (e.g. display of the Fourier spectrum of an image)
Also called "dynamic-range compression / expansion"
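A small NumPy sketch of the log transformation, with c chosen (one common convention, assumed here) so the maximum input maps to the top of the output range:

```python
import numpy as np

def log_transform(img, L=256):
    """s = c*log(1 + r); c is chosen so the maximum input maps to L - 1,
    expanding low amplitudes and compressing high ones."""
    img = img.astype(np.float64)
    c = (L - 1) / np.log1p(img.max())
    return np.rint(c * np.log1p(img)).astype(np.uint8)
```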

13 Log Transformations

14 Inverse Log Transformations
The opposite of the log transformation: expands the values of bright pixels in an image while compressing the darker-level values.

15 Power-Law Transformation

16 Power-Law Transformation
s = c·r^γ
For γ < 1: expands values of dark pixels, compresses values of brighter pixels
For γ > 1: compresses values of dark pixels, expands values of brighter pixels
If γ = 1 and c = 1: identity transformation (s = r)
A variety of devices (image capture, printing, display) respond according to a power law and need to be corrected
Gamma (γ) correction: the process used to correct this power-law response phenomenon
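The γ < 1 / γ > 1 behaviour can be sketched in NumPy, working on intensities normalised to [0, 1] (a common convention, assumed here):

```python
import numpy as np

def power_law(img, gamma, c=1.0, L=256):
    """s = c * r**gamma, computed on intensities normalised to [0, 1]."""
    r = img.astype(np.float64) / (L - 1)
    s = np.clip(c * np.power(r, gamma), 0.0, 1.0)
    return np.rint(s * (L - 1)).astype(np.uint8)
```

With γ = 1/2.5 = 0.4 this brightens a dark mid-tone; with γ = 2.5 it darkens it, matching the gamma-correction discussion that follows.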

17 Power-Law Transformation

18 Gamma Correction
Cathode ray tube (CRT) devices have an intensity-to-voltage response that is a power function, with γ varying from 1.8 to 2.5, so the displayed picture becomes darker
Gamma correction is done by preprocessing the image before inputting it to the monitor with s = c·r^(1/γ)
Example: monitor γ = 2.5 → correction exponent 1/γ = 1/2.5 = 0.4

19 Power-Law Transformation: Example

20 Power-Law Transformation: Example

21 Piecewise-Linear Transformation: Contrast Stretching
Goal: increase the dynamic range of the gray levels in low-contrast images
Low-contrast images can result from:
poor illumination
lack of dynamic range in the imaging sensor
a wrong lens-aperture setting during image acquisition

22 Contrast Stretching Example

23 Piecewise-Linear Transformation: Gray-Level Slicing
Highlights a specific range of gray levels in an image: display a high value for all gray levels in the range of interest and a low value for all other gray levels
(a) transformation highlights range [A,B] of gray levels and reduces all others to a constant level
(b) transformation highlights range [A,B] but preserves all other levels
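Both variants can be sketched with a boolean mask (function and parameter names are illustrative):

```python
import numpy as np

def gray_slice(img, a, b, high=255, low=0, preserve=False):
    """Highlight gray levels in [a, b].
    preserve=False: variant (a), all other levels reduced to `low`.
    preserve=True:  variant (b), all other levels kept unchanged."""
    mask = (img >= a) & (img <= b)
    out = img.copy() if preserve else np.full_like(img, low)
    out[mask] = high
    return out
```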

24 Piecewise-Linear Transformation: Bit-Plane Slicing
Highlights the contribution made to the total image appearance by specific bits
Suppose each pixel is represented by 8 bits: bit-plane 7 is the most significant, bit-plane 0 the least significant
Higher-order bits contain the majority of the visually significant data
Useful for analyzing the relative importance played by each bit of the image
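Extracting one bit plane is a shift and a mask; a minimal sketch:

```python
import numpy as np

def bit_plane(img, plane):
    """Extract bit-plane `plane` (0 = least significant, 7 = most significant)
    of an 8-bit image as a binary image."""
    return (img >> plane) & 1
```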

25 8 Bit Planes
(figure: the individual bit planes, bit-plane 7 through bit-plane 4 shown)

26 Histograms

27 Example Histogram

28 Example Histogram

29 Histogram Examples

30 Contrast Stretching through Histogram
If rmax and rmin are the maximum and minimum gray levels of the input image and L is the total number of gray levels of the output image, the transformation function for contrast stretching is
s = (r − rmin) × (L − 1) / (rmax − rmin)
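The min/max stretch described above, as a small NumPy sketch:

```python
import numpy as np

def contrast_stretch(img, L=256):
    """s = (r - r_min) * (L - 1) / (r_max - r_min)"""
    r = img.astype(np.float64)
    rmin, rmax = r.min(), r.max()
    return np.rint((r - rmin) * (L - 1) / (rmax - rmin)).astype(np.uint8)
```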

31 Histogram Equalization

32 Histogram Equalization

33 Histogram Equalization

34 Histogram Equalization
Spreading out the frequencies in an image (equalising the image) is a simple way to improve dark or washed-out images
The formula for histogram equalisation is sk = Σ (j = 0..k) nj / n, where
rk: input intensity
sk: processed intensity
k: the intensity range (e.g. 0.0 – 1.0)
nj: the frequency of intensity j
n: the sum of all frequencies
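The cumulative-sum formula maps directly onto a lookup table; a minimal NumPy sketch for an 8-bit image (scaling sk to [0, L−1] rather than [0, 1]):

```python
import numpy as np

def equalize(img, L=256):
    """s_k = (L - 1) * sum_{j=0..k} n_j / n  (cumulative-histogram mapping)."""
    hist = np.bincount(img.ravel(), minlength=L)
    cdf = np.cumsum(hist) / img.size
    lut = np.rint((L - 1) * cdf).astype(np.uint8)
    return lut[img]
```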

35 Histogram Equalization: Example

36 Histogram Equalization: Example
Initial image and the image after equalization. Notice that the minimum value (52) is now 0 and the maximum value (154) is now 255.

37 Histogram Equalization: Example

38 Histogram Equalization: Example
4×4 image, gray scale = [0,9]; (figure: histogram of the number of pixels at each gray level)

39 Histogram Equalization: Example
(table: for each gray level j, the cumulative pixel count is mapped with s = (cumulative count / 16) × 9 and rounded, e.g. 3.3 → 3, 6.1 → 6, 8.4 → 8)

40 Histogram Equalization: Example
Output image, gray scale = [0,9]; (figure: histogram after equalization)

41 Mathematical/Logical Operations on Images
Addition: averaging images for noise removal
Subtraction: removal of background from images; image enhancement; image matching; moving/displaced object tracking
Multiplication: superimposing texture on an image; convolution and correlation of images
AND and OR operations: removing unnecessary areas of an image through mask operations

42 Image Averaging for Noise Reduction

43 Image Averaging for Noise Reduction

44 Image Averaging for Noise Reduction

45 Image Subtraction
Takes two images as input and produces a third image whose pixel values are those of the first image minus the corresponding pixel values of the second image
Variants:
It is also often possible to use a single image as input and subtract a constant value from all the pixels
Output the absolute difference between pixel values, rather than the straightforward signed output

46 Image Subtraction
The subtraction of two images is performed in a single pass: Q(i,j) = P1(i,j) − P2(i,j)
If the operator computes absolute differences between the two input images: Q(i,j) = |P1(i,j) − P2(i,j)|
If it is simply desired to subtract a constant value C from a single image: Q(i,j) = P(i,j) − C

47 Image Subtraction
If the operator calculates absolute differences, the output pixel values cannot fall outside the valid range
In the other two cases the pixel values may become negative; this is one good reason for using absolute differences
How do we solve the problem of negative pixels?

48 Image Subtraction: How to Solve the Problem of Negative Pixels
Suppose we have an 8-bit grayscale image (value range 0 to 255); the result of image subtraction may lie in the range −255 to +255
One scheme is to add 255 to every pixel and then divide by 2
The method is easy and fast
Limitations: truncation errors can cause loss of accuracy, and the full range of the display may not be utilized

49 Image Subtraction: How to Solve the Problem of Negative Pixels
Another scheme:
first, find the minimum gray value of the subtracted image
second, find the maximum gray value of the subtracted image
set the minimum value to zero and the maximum to 255, adjusting the remaining values to the interval [0, 255] by multiplying each shifted value by 255/(max − min)
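The two schemes above can be sketched in NumPy (function names are illustrative):

```python
import numpy as np

def sub_offset(p1, p2):
    """Scheme 1: add 255 to each difference, then divide by 2."""
    d = p1.astype(np.int16) - p2.astype(np.int16)
    return ((d + 255) // 2).astype(np.uint8)

def sub_rescale(p1, p2):
    """Scheme 2: shift the minimum difference to 0, scale the maximum to 255."""
    d = p1.astype(np.float64) - p2.astype(np.float64)
    d -= d.min()
    if d.max() > 0:
        d *= 255.0 / d.max()
    return np.rint(d).astype(np.uint8)
```

Note the widening cast before subtracting: subtracting two uint8 arrays directly would wrap around instead of going negative.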

50 Examples of Image Subtraction

51 Examples of Image Subtraction

52 Example: Background Removal Using Image Subtraction

53 Example: Background Removal Using Image Subtraction

54 Image Multiplication
Like other image arithmetic operators, multiplication comes in two main forms
The first form takes two input images and produces an output image in which each pixel value is the product of the corresponding pixel values of the two inputs
The second form takes a single input image and multiplies each pixel value by a specified constant; this latter form is probably the more widely used and is generally called scaling
How it works: the multiplication of two images is performed in a single pass using Q(i,j) = P1(i,j) × P2(i,j); scaling by a constant uses Q(i,j) = P(i,j) × C

55 Image Multiplication: Guidelines for Use
There are many specialist uses for scaling. In general, a scaling factor greater than one brightens an image, and a factor less than one darkens it
Scaling generally produces a much more natural brightening/darkening effect than simply adding an offset to the pixels, since it better preserves the relative contrast of the image
For instance, a picture of a model robot taken under low lighting conditions becomes much clearer when every pixel is simply scaled by a factor of 3
However, when using pixel multiplication we should make sure that the calculated pixel values do not exceed the maximum possible value: if we scale the same image by a factor of 5 in an 8-bit representation, all pixels whose original value is greater than 51 exceed the maximum value and are (in this implementation) wrapped around from 255 back to 0

56 Examples of Image Multiplication

57 Examples of Image Multiplication
Multiplication also provides a good way of "shading" artwork: it can introduce a sense of diffuse lighting into paintings or 3D CG objects
Notice in the example how the warm and cool grays not only darken the final 3D rendering but also influence its color temperature

58 Examples of Image Multiplication
Multiplication provides a good way to color line drawings. Here you can really see the "black times anything is black, white times anything is that thing unchanged" rule in action.

59 Logic Operations
Logic operations are performed on gray-level images; the pixel values are processed as binary numbers
Light represents a binary 1 and dark represents a binary 0
The NOT operation is equivalent to the negative transformation
Useful for image analysis, e.g. in texture analysis and classification

60 Example of Logical Operations using Masks

61 Neighbourhood Operations
Neighbourhood operations simply operate on a larger neighbourhood of pixels than point operations
Neighbourhoods are mostly a rectangle around a central pixel
Any size of rectangle and any shape of filter are possible

62 Local Enhancement through Spatial Filtering
The output intensity value at (x,y) depends not only on the input intensity value at (x,y) but also on a specified number of neighboring intensity values around (x,y)
Spatial masks (also called windows, filters, kernels, templates) are convolved over the entire image for local enhancement (spatial filtering)
The size of the mask determines the number of neighboring pixels that influence the output value at (x,y)
The values (coefficients) of the mask determine the nature and properties of the enhancement technique

63 Simple Neighbourhood Operations
Some simple neighbourhood operations include:
Min: set the pixel value to the minimum in the neighbourhood
Max: set the pixel value to the maximum in the neighbourhood
Median: the midpoint value of the set (e.g. for [1, 7, 15, 18, 24] the median is 15); sometimes the median works better than the average

64 Simple Neighbourhood Operations Example
(figure: grids of pixel values for the original and enhanced images)

65 The Spatial Filtering Process
A 3×3 neighbourhood of original image pixels (a b c / d e f / g h i) is combined with a 3×3 filter (r s t / u v w / x y z):
e_processed = v*e + r*a + s*b + t*c + u*d + w*f + x*g + y*h + z*i
The above is repeated for every pixel in the original image to generate the filtered image

66 Local Enhancement through Spatial Filtering

67 Basics of Spatial Filtering
Given a 3×3 mask with coefficients w1, w2, …, w9 covering pixels with gray levels z1, z2, …, z9, the response z = w1·z1 + w2·z2 + … + w9·z9 gives the output intensity value for the processed image (stored in a new array) at the location of z5 in the input image

68 Basics of Spatial Filtering
Mask operation near the image border: a problem arises when part of the mask lies outside the image plane. To handle it:
1. Discard the problem pixels (e.g. 512×512 input gives a 510×510 output with a 3×3 mask)
2. Zero padding: expand the input image by padding zeros (512×512 input, 514×514 padded input); zero padding is not good, because it creates artificial lines or edges on the border
3. We normally use the gray levels of the border pixels to fill the expanded region (for a 3×3 mask); for larger masks, a border region equal to half the mask size is mirrored into the expanded region
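Options 2 and 3 above map directly onto NumPy's padding modes:

```python
import numpy as np

img = np.arange(1, 10).reshape(3, 3)
zero_pad = np.pad(img, 1, mode="constant")  # option 2: zero padding
edge_pad = np.pad(img, 1, mode="edge")      # option 3: replicate border gray levels
mirrored = np.pad(img, 2, mode="reflect")   # mirror a border region for larger masks
```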

69 Mask Operation Near the Image Border

70 Types of Spatial Filtering

71 Spatial Filtering for Smoothing
For blurring/noise reduction; blurring is usually used in preprocessing, e.g. to remove small details from an image prior to object extraction, or to bridge small gaps in lines or curves
Equivalent to low-pass filtering in the frequency domain, because smaller (high-frequency) details are removed by neighborhood averaging (averaging filters)
Implementation: the simplest spatial averaging filter is a square m×m mask whose coefficients all equal 1/m², which preserves the overall gray levels (averaging)
Applications: reduce noise; smooth false contours
Side effect: edge blurring

72 Smoothing Filters

73 Smoothing Spatial Filters
One of the simplest spatial filtering operations we can perform is smoothing: simply average all of the pixels in a neighbourhood around a central value
Especially useful in removing noise from images; also useful for highlighting gross detail
A simple 3×3 averaging filter has every coefficient equal to 1/9
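The 1/9 averaging filter can be sketched directly (border handling by edge replication is an assumption; any of the border strategies discussed earlier would do):

```python
import numpy as np

def mean_filter(img):
    """3x3 averaging filter: every output pixel is the mean of its 3x3
    neighbourhood; borders are handled by replicating edge pixels."""
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    acc = np.zeros((h, w))
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            acc += p[dy : dy + h, dx : dx + w]  # sum the 9 shifted views
    return np.rint(acc / 9).astype(np.uint8)
```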

74 Smoothing Spatial Filtering
Example: a 3×3 neighbourhood with pixel values (104 100 108 / 99 106 98 / 95 90 85) filtered with the 1/9 averaging mask:
e = 1/9 × (104 + 100 + 108 + 99 + 106 + 98 + 95 + 90 + 85) ≈ 98
The above is repeated for every pixel in the original image to generate the smoothed image

75 Spatial Filtering for Smoothing (Example)

76 Spatial Filtering for Smoothing (Example)

77 Weighted Smoothing Filters
More effective smoothing filters can be generated by giving different pixels in the neighbourhood different weights in the averaging function
Pixels closer to the central pixel are more important
Often referred to as weighted averaging
Example: a 3×3 weighted averaging filter with coefficients 1/16, 2/16 and 4/16 (centre)

78 Order-Statistics Filtering
Nonlinear spatial filters: the output is based on the ordering of the gray levels in the masked area (sub-image)
Examples: median filtering, max and min filtering
Median filtering assigns the median of all the gray levels in the mask to the center of the mask; particularly effective when:
the noise pattern consists of strong, spiky components (impulse noise, salt-and-pepper)
edges are to be preserved
points with distinct gray levels should be forced to be more like their neighbors
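A straightforward (unoptimised) 3×3 median-filter sketch, again replicating edge pixels at the border:

```python
import numpy as np

def median_filter(img):
    """3x3 median filter: replaces each pixel with the median of its
    neighbourhood; removes salt-and-pepper spikes while preserving edges."""
    p = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.median(p[y : y + 3, x : x + 3])
    return out
```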

79 Median Filtering

80 Median Filtering (Example)

81 Strange Things Happen at the Edges!
At the edges of an image we are missing the pixels needed to form a full neighbourhood

82 Strange Things Happen at the Edges!
There are a few approaches to dealing with missing edge pixels:
Pad the image, typically with either all-white or all-black pixels
Replicate border pixels
Truncate the image

83 Simple Neighbourhood Operations Example
(figure: grid of image pixel values)

84 Correlation & Convolution
The filtering we have discussed so far is referred to as correlation, with the filter itself referred to as the correlation kernel
Convolution is a similar operation, with one subtle difference: the kernel is rotated by 180° before being applied, so for a 3×3 neighbourhood (a b c / d e f / g h i) and filter (r s t / u v w / x y z):
e_processed = v*e + z*a + y*b + x*c + w*d + u*f + t*g + s*h + r*i
For symmetric filters it makes no difference
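The 180° rotation is the whole difference; a minimal sketch that makes it explicit:

```python
import numpy as np

def correlate(img, k):
    """Valid-mode 2-D correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(img[y : y + 3, x : x + 3] * k)
    return out

def convolve(img, k):
    """Convolution = correlation with the kernel rotated by 180 degrees."""
    return correlate(img, np.flip(k))
```

With a symmetric kernel both give identical output; an asymmetric kernel reveals the flip.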

85 Spatial Filtering for Image Sharpening
Background: to highlight fine detail in an image or to enhance blurred detail
Applications: electronic printing, medical imaging, industrial inspection, autonomous target detection (smart weapons), …
Foundation (blurring vs sharpening): blurring/smoothing is performed by spatial averaging (analogous to integration); sharpening is performed by noting the gray-level changes in the image, that is, by differentiation

86 Spatial Filtering for Image Sharpening
Operation of image differentiation:
Enhances edges and discontinuities (magnitude of output gray level >> 0)
De-emphasizes areas with slowly varying gray-level values (output gray level ≈ 0)
Mathematical basis of filtering for image sharpening: first-order and second-order derivatives, gradients, implementation by mask filtering

87 Edge Detection
What is an edge? An edge is a change, but not every change is an edge
An edge is a noticeable or abrupt change; a small change is not noticeable in the range 0 to 255
We define a threshold: if the change is more than the specified threshold, we declare an edge point
Where the change is gradual you cannot pinpoint where the edge lies, so the change must be abrupt
For each pixel we look in the horizontal, vertical and diagonal directions: dx/ds for the horizontal direction, dy/ds for the vertical direction

88 Edge Detection

89 Derivatives
First-order derivative: a basic definition of the first-order derivative of a one-dimensional function f(x) is the difference ∂f/∂x = f(x+1) − f(x)
Second-order derivative: similarly, the second-order derivative of a one-dimensional function f(x) is the difference ∂²f/∂x² = f(x+1) + f(x−1) − 2f(x)

90 First and Second Order Derivatives

91 Example for Discrete Derivatives

92 Comparison between f'' and f'
f' generally produces thicker edges in an image
f'' has a stronger response to fine detail
f' generally has a stronger response to a gray-level step
f'' produces a double response at step changes in gray level
f'' is generally better suited than f' for image enhancement
The major application of f' is edge extraction; f' used together with f'' results in an impressive enhancement effect

93 Laplacian for Image Enhancement

94 Laplacian for Image Enhancement

95 Laplacian for Image Enhancement
The image background is removed by Laplacian filtering; the background can be recovered simply by adding the original image to the Laplacian output
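The "filter, then add the original back" step can be sketched with one common Laplacian mask (the 4-neighbour mask with a positive centre is an assumption; other sign conventions exist and would require subtracting instead):

```python
import numpy as np

def laplacian_sharpen(img):
    """Sharpen by adding the Laplacian response (mask 0,-1,0 / -1,4,-1 / 0,-1,0)
    back to the original image, recovering the background."""
    f = img.astype(np.float64)
    p = np.pad(f, 1, mode="edge")
    lap = 4 * f - p[:-2, 1:-1] - p[2:, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:]
    return np.clip(f + lap, 0, 255).astype(np.uint8)
```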

96 Laplacian for Image Enhancement (Example)

97 Laplacian for Image Enhancement (Example)

98 Image Sharpening Based on Unsharp Masking

99 High Boost Filtering
Principal application: when the input image is darker than desired, a high-boost filter makes the image lighter and more natural

100 High Boost Filtering Masks

101 High Boost Filtering Masks

102 1st Derivative Filtering: The Gradient
Implementing 1st derivative filters is difficult in practice
For a function f(x, y), the gradient of f at coordinates (x, y) is the column vector ∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

103 1st Derivative Filtering (cont…)
The magnitude of this vector is given by: |∇f| = (Gx² + Gy²)^(1/2)
For practical reasons this can be simplified as: |∇f| ≈ |Gx| + |Gy|

104 1st Derivative Filtering (cont…)
Now we want to define digital approximations and their filter masks
For simplicity we use a 3×3 region with gray levels z1 … z9, where z5 denotes f(x,y) and z1 denotes f(x−1, y−1)
A simple approximation of the first derivative is Gx = z8 − z5 and Gy = z6 − z5

105 1st Derivative Filtering (cont…)
A simple approximation of the first derivative is Gx = z8 − z5 and Gy = z6 − z5
Two other definitions, proposed by Roberts, use the cross-differences Gx = z9 − z5 and Gy = z8 − z6

106 1st Derivative Filtering (cont…)
If we use absolute values, then |∇f| ≈ |z9 − z5| + |z8 − z6|
The masks corresponding to these equations are the Roberts cross-gradient operators:
(−1 0 / 0 1) and (0 −1 / 1 0)
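The Roberts cross-gradient can be sketched with two diagonal differences over shifted array views:

```python
import numpy as np

def roberts(img):
    """Roberts cross-gradient: |z9 - z5| + |z8 - z6| at each 2x2 position."""
    f = img.astype(np.float64)
    gx = f[1:, 1:] - f[:-1, :-1]   # z9 - z5
    gy = f[1:, :-1] - f[:-1, 1:]   # z8 - z6
    return np.abs(gx) + np.abs(gy)
```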

107 Gradient Operators

108 Gradient Operators
Normally the smallest mask used is of size 3×3
Based on the concept of approximating the gradient, several spatial masks have been proposed

109 Gradient Operators

110 Gradient Processing (Example)

111 NOTE The sum of the coefficients in all these masks is 0, indicating that they give a response of 0 in an area of constant gray level.

112 Canny Edge Detection Algorithm Steps
The Canny edge detection operator was developed by John F. Canny in 1986 and uses a multi-stage algorithm to detect a wide range of edges in images
Algorithm steps:
Image smoothing
Gradient computation
Edge direction computation
Non-maximum suppression
Hysteresis thresholding

113 Image Smoothing
Reduces image noise that can lead to erroneous output
Performed by convolution of the input image with a Gaussian filter, e.g. the 5×5 kernel for σ = 1.4:
(1/159) × (2 4 5 4 2 / 4 9 12 9 4 / 5 12 15 12 5 / 4 9 12 9 4 / 2 4 5 4 2)

114 Image Smoothing

115 Gradient Computation
Determines intensity changes; high intensity changes indicate edges
Performed by convolving the smoothed image with masks that estimate the horizontal and vertical derivatives, e.g. the Sobel masks:
x: (−1 0 1 / −2 0 2 / −1 0 1)   y: (1 2 1 / 0 0 0 / −1 −2 −1)

116 Gradient Computation
The gradient magnitude is determined by adding the X and Y gradient images: |G| = |Gx| + |Gy|

117 Edge Direction Computation
Edge directions are determined from the X and Y gradient images: Θ(x,y) = tan⁻¹(Gy / Gx)
Edge directions are then classified by their nearest 45° angle

118 Edge Direction Computation
(figure: directions quantized to 0°, 45°, 90°, 135°)

119 Non-Maximum Suppression
Given estimates of the image gradients, a search is carried out to determine whether the gradient magnitude assumes a local maximum in the gradient direction:
if the rounded angle is 0°, the point is considered to be on the edge if its intensity is greater than the intensities in the north and south directions
if the rounded angle is 90°, the point is on the edge if its intensity is greater than the intensities in the west and east directions
if the rounded angle is 135°, the point is on the edge if its intensity is greater than the intensities in the north-east and south-west directions
if the rounded angle is 45°, the point is on the edge if its intensity is greater than the intensities in the north-west and south-east directions
This is worked out by passing a 3×3 grid over the intensity map; from this stage, referred to as non-maximum suppression, a set of edge points is obtained in the form of a binary image

120 Non-Maximum Suppression

121 Thresholding
Reduce the number of false edges by applying a threshold T: all values below T are changed to 0
Selecting a good value for T is difficult:
some false edges will remain if T is too low
some edges will disappear if T is too high
some edges will disappear due to softening of the edge contrast by shadows

122 Hysteresis Thresholding
Thresholding with hysteresis requires two thresholds: high and low
We begin by applying the high threshold, which marks out the edges we can be fairly sure are genuine
Starting from these, and using the directional information derived earlier, edges are traced through the image; while tracing an edge we apply the lower threshold, allowing us to trace faint sections of edges as long as we have found a starting point
Once this process is complete we have a binary image in which each pixel is marked as either an edge pixel or a non-edge pixel

123 Hysteresis Thresholding
Apply two thresholds to the suppressed image, with T2 = 2·T1, giving two output images:
the image from T2 contains fewer edges but has gaps in the contours
the image from T1 has many false edges
Combine the results from T1 and T2:
link the edges of T2 into contours until we reach a gap
link the edge from T2 with edge pixels from a T1 contour until a T2 edge is found again

124 Hysteresis Thresholding
T2 gives strong edge pixels; T1 gives weak edge pixels (T2 ≥ T1)
Link the edge pixels of T2 until we reach a gap; link the edge from T2 with edge pixels from a T1 contour until a T2 edge is found again

125 Hysteresis Thresholding
(figure: a T2 contour with gaps filled from T1)
Linking: search the 3×3 neighbourhood of each pixel and connect the centre pixel with the neighbour having the greater value
Search in the direction of the edge (direction of the gradient)

126 Hysteresis Thresholding
Determines the final edge pixels using a high and a low threshold
The image is scanned for pixels with a gradient intensity higher than the high threshold; pixels above the high threshold are added to the edge output
All neighbors of a newly added pixel are recursively scanned and added if they lie above the low threshold
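The recursive scan described above can be sketched as a stack-based flood fill over the gradient-magnitude image (8-connectivity is an assumption):

```python
import numpy as np

def hysteresis(mag, t_low, t_high):
    """Keep pixels above t_high, plus pixels above t_low that are connected
    to a strong pixel through an 8-connected chain of weak pixels."""
    strong = mag >= t_high
    weak = mag >= t_low
    out = strong.copy()
    stack = list(zip(*np.nonzero(strong)))
    h, w = mag.shape
    while stack:
        y, x = stack.pop()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and weak[ny, nx] and not out[ny, nx]:
                    out[ny, nx] = True
                    stack.append((ny, nx))
    return out
```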

127 Hysteresis Thresholding

128 Mask used to estimate the Gradient

129 Combining Spatial Enhancement Methods (cont…)
(a) bone scan; (b) Laplacian filter of bone scan (a); (c) sharpened version of the bone scan achieved by subtracting (b) from (a); (d) Sobel filter of bone scan (a)

130 Combining Spatial Enhancement Methods (cont…)
(e) image (d) smoothed with a 5×5 averaging filter; (f) the product of (c) and (e), used as a mask; (g) sharpened image, the sum of (a) and (f); (h) result of applying a power-law transformation to (g)

131 Combining Spatial Enhancement Methods (cont…)
Compare the original and final images

132 Any questions?

