Spatial-based Enhancements Lecture 5 prepared by R. Lathrop 10/99 updated 2/05 ERDAS Field Guide 5th Ed. Ch 5:154-162.


1 Spatial-based Enhancements Lecture 5 prepared by R. Lathrop 10/99 updated 2/05 ERDAS Field Guide 5th Ed. Ch 5:154-162

2 Where in the World?

3 Learning objectives
Remote sensing science concepts
– Concept of spatial frequency
– Texture
– Edge Enhancement/Sharpening
– Edge Detection/Extraction
– Global vs. local operator: Fourier vs. kernel convolution
Math concepts
– Kernel convolution
Skills
– Spatial enhancement: kernel convolution and FFT

4 Spatial frequency
Spatial frequency is the number of changes in brightness value per unit distance in any part of an image.
Low frequency: tonally smooth, gradual changes
High frequency: tonally rough, abrupt changes

5 Spatial Frequencies
Zero spatial frequency | Low spatial frequency | High spatial frequency
Example from ERDAS IMAGINE Field Guide, 5th ed.

6 Spatial vs. Spectral Enhancement Spatial-based Enhancement modifies a pixel’s values based on the values of the surrounding pixels (local operator) Spectral-based Enhancement modifies a pixel’s values based solely on the pixel’s values (point operator)

7 Moving Window concept Kernel scans across row, then down a row and across again, and so on.

8 Focal Analysis
Mathematical calculation on the pixel DN values within a moving window (mean, median, std. dev., majority).
The focal value is written to the center pixel of the moving window.
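
The moving-window focal statistic described above can be sketched in a few lines of NumPy (an illustration, not the ERDAS implementation; the function name `focal_filter` and the reflect-padding choice for border pixels are my own assumptions):

```python
import numpy as np

def focal_filter(img, size=3, stat=np.mean):
    """Focal analysis: apply a statistic (mean, median, ...) over a
    size x size moving window and write the result to the center pixel.
    Border pixels are handled by reflecting the image at its edges."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = stat(padded[i:i + size, j:j + size])
    return out
```

Passing `np.median` instead of `np.mean` gives the median focal filter used for noise removal on later slides.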

9 Noise Removal
Noise: extraneous, unwanted signal response.
Observed Signal = True Signal + Noise
SNR measures the radiometric accuracy of the data; we want a high signal-to-noise ratio (SNR).
Over low reflectance targets (i.e., dark pixels such as clear water) the noise may swamp the actual signal.

10 Noise Removal
Noise removal techniques restore the image to as close an approximation of the original scene as possible.
Bit errors: random pixel-to-pixel variations; average the neighborhood (e.g., 3x3) using a moving window (convolution kernel)
Destriping: correct a defective sensor
Line drop: average the lines above and below

11 Example: mean or median focal analysis for noise filtering

12 Example: Line drop
Line above:   105 156 178 154 167 200 202 205
Dropped line: --- --- --- --- --- --- --- ---
Line below:   107 152 166 165 173 204 204 207

Interpolated (average of the lines above and below fills the dropped line):
105 156 178 154 167 200 202 205
106 154 172 160 170 202 203 206
107 152 166 165 173 204 204 207
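
A minimal sketch of this line-drop repair in NumPy (illustrative; `repair_line_drop` is a hypothetical name, and note that averaging 154 and 165 gives 159.5, which the example rounds to 160):

```python
import numpy as np

def repair_line_drop(img, bad_row):
    """Replace a dropped scan line with the average of the lines
    directly above and below it."""
    fixed = img.astype(float).copy()
    fixed[bad_row] = (img[bad_row - 1] + img[bad_row + 1]) / 2.0
    return fixed
```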

13 Texture
Texture: variation in BVs in a local region; gives an estimate of local variability. Can be used as another layer of data in the classification/interpretation process.
1st order statistics: mean Euclidean distance
2nd order: range, variance, std. dev.
3rd order: skewness
4th order: kurtosis
Window size will affect the results; larger moving window sizes are often needed for proper enhancement.
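
A second-order (variance) texture layer like the one shown on the next slide can be sketched as follows (illustrative; the name `texture_variance` and the reflect-padding choice are assumptions):

```python
import numpy as np

def texture_variance(img, size=3):
    """Texture layer: the variance of BVs within a size x size moving
    window, written to the center pixel (borders handled by reflection)."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].var()
    return out
```

Tonally smooth regions produce low variance, rough regions high variance, which is why this works as an extra classification layer.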

14 Example Image: Original IKONOS pan

15 Texture: variance
3x3 texture | 7x7 texture

16 Statistical or Sigma Filters: often used for radar imagery enhancement
The center pixel is replaced by the average of all pixel values within the moving window that fall within the designated sigma range: µ ± σ.
Sigma may represent the coefficient of variation. The default sigma value (set to 0.15 in ERDAS IMAGINE) can be modified with multipliers to widen or narrow the range of values within the moving window used to calculate the average.
Use the filter sequentially with increasing multipliers to preserve fine detail while smoothing the image.
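
The idea of averaging only the values inside the sigma range can be sketched for a single window position (a simplified illustration using the window's own mean and standard deviation; ERDAS IMAGINE's actual sigma definition, based on the coefficient of variation, differs in detail):

```python
import numpy as np

def sigma_filter_window(window, multiplier=1.0):
    """Sigma filter at one window position: the center pixel is replaced
    by the average of only those window values lying within
    mean +/- multiplier * std of the window."""
    vals = window.astype(float).ravel()
    mu, sigma = vals.mean(), vals.std()
    keep = vals[np.abs(vals - mu) <= multiplier * sigma]
    return float(keep.mean())
```

An outlier (e.g., a radar speckle spike) falls outside the sigma range and is excluded from the average, so fine detail within the range is preserved.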

17 Spatial-based Enhancement
Low vs. high frequency enhancement
Edge Enhancement/Sharpening
Edge Detection/Extraction
Many spatial-based enhancement (filtering) techniques use kernel convolution, a type of local operation.

18 Example: kernel convolution
Input image (BVs):
8 6 6 6 6
2 8 6 6 6
2 2 8 6 6
2 2 2 8 6
2 2 2 2 8
Convolution kernel:
-1 -1 -1
-1 16 -1
-1 -1 -1
Example from ERDAS IMAGINE Field Guide, 5th ed.

19 Pixel Convolution
BV = [ Σ(i=1..q) Σ(j=1..q) f(i,j) × d(i,j) ] / F
where i = row location, j = column location
f(i,j) = the coefficient of the convolution kernel at position i, j
d(i,j) = the BV of the original data at position i, j
q = the dimension of the kernel (assuming a square kernel)
F = the sum of the coefficients of the kernel, or 1 if the sum of coefficients is zero
BV = output pixel value

20 Example: kernel convolution
Kernel:        Original:
-1 -1 -1       8 6 6
-1 16 -1       2 8 6
-1 -1 -1       2 2 8
Row 1: (-1)(8) + (-1)(6) + (-1)(6) = -8 - 6 - 6 = -20
Row 2: (-1)(2) + (16)(8) + (-1)(6) = -2 + 128 - 6 = 120
Row 3: (-1)(2) + (-1)(2) + (-1)(8) = -2 - 2 - 8 = -12
Sum = -20 + 120 - 12 = 88
F = 16 - 8 = 8
Output BV = 88 / 8 = 11
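
The worked example above can be reproduced directly in NumPy (an illustrative sketch; `convolve_window` is a hypothetical name):

```python
import numpy as np

def convolve_window(window, kernel):
    """One step of kernel convolution: weighted sum of the window,
    divided by F (the kernel coefficient sum, or 1 if that sum is zero)."""
    f = kernel.sum()
    if f == 0:
        f = 1  # zero-sum kernel: division by zero is impossible
    return float((window * kernel).sum()) / f

kernel = np.array([[-1, -1, -1],
                   [-1, 16, -1],
                   [-1, -1, -1]])
window = np.array([[8, 6, 6],
                   [2, 8, 6],
                   [2, 2, 8]])
# weighted sum = -20 + 120 - 12 = 88; F = 8; output BV = 88 / 8 = 11
```

The same function handles the zero-sum edge-detection kernels of later slides, where F is forced to 1.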

21 116666 011666 201166 220116 2220 11 Example: kernel convolution 86666286662286622286222288666628666228662228622228 InputOutput Edge

22 Low vs. high spatial frequency enhancements Low frequency enhancers (low pass filters): Emphasize general trends, smooth image High frequency enhancers (high pass filters): Emphasize local detail, highlight edges

23 Example: Low Frequency Enhancement
Kernel:
1 1 1
1 1 1
1 1 1
Low value surrounded by higher values:
Original: 204 200 197    Output: 204 200 197
          201 100 209            201 191 209
          198 200 210            198 200 210
High value surrounded by lower values:
Original: 64  60  57     Output: 64 60 57
          61 125  69             61 65 69
          58  60  70             58 60 70
From ERDAS Field Guide p. 111

24 Low pass filter
Original IKONOS pan | 7x7 low pass

25 Gaussian filter
The Gaussian smoothing filter is similar to a low pass mean filter but uses a kernel that represents the shape of a Gaussian ("bell-shaped") curve.
Example graphic taken from http://homepages.inf.ed.ac.uk/rbf/HIPR2/gsmooth.htm
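
Such a bell-shaped kernel can be built by sampling the 2-D Gaussian and normalizing so the coefficients sum to 1 (an illustrative sketch; `gaussian_kernel` and its default size/sigma are assumptions):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Convolution kernel sampled from a 2-D Gaussian curve and
    normalized so its coefficients sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()
```

Unlike the uniform mean kernel, the weights fall off with distance from the center, so nearby pixels influence the output more than distant ones.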

26 Example: High Frequency Enhancement
Kernel:
-1 -1 -1
-1 16 -1
-1 -1 -1
Low value surrounded by higher values:
Original: 204 200 197    Output: 204 200 197
          201 120 209            201  39 209
          198 200 199            198 200 199
High value surrounded by lower values:
Original: 64  60  57     Output: 64  60  57
          61 125  69             61 187  69
          58  60  70             58  60  70
From ERDAS Field Guide p. 111

27 High Pass filter
3x3 high pass:   3x3 edge enhance:
-1 -1 -1         -1 -1 -1
-1 17 -1         -1  9 -1
-1 -1 -1         -1 -1 -1

28 Edge detection
Edge detection process: smooth out areas of low spatial frequency and highlight edges (i.e., local changes between bright and dark features).
Zero-sum kernels:
– linear edge/line detecting templates
– directional (compass templates)
– non-directional (Laplacian)

29 Zero sum kernels
Zero sum kernels: the sum of all coefficients in the kernel equals zero. In this case F is set to 1, since division by zero is impossible. The output is:
– zero in areas where all input values are equal
– low in areas of low spatial frequency
– extreme in areas of high spatial frequency (high values become higher, low values lower)

30 Example: Linear Edge Detecting Templates
Vertical:   Horizontal:
-1 0 1      -1 -1 -1
-1 0 1       0  0  0
-1 0 1       1  1  1
Diagonal (NW-SE):   Diagonal (NE-SW):
 0  1  1             1  1  0
-1  0  1             1  0 -1
-1 -1  0             0 -1 -1
Example: vertical template convolution
Original (each row): 2 2 2 8 8 8
Output (each row): 0 18 18 0 0

31 Linear Edge Detection
Horizontal edge:   Vertical edge:
-1 -2 -1           -1 0 1
 0  0  0           -2 0 2
 1  2  1           -1 0 1

32 Linear Line Detecting Templates
Narrow (single pixel wide) line features (e.g., rivers and roads) are output as pairs of edges by linear edge detecting templates. To map such a feature as a single line, a linear line detecting template can be used:
Vertical:   Horizontal:
-1 2 -1     -1 -1 -1
-1 2 -1      2  2  2
-1 2 -1     -1 -1 -1

33 Example: Linear Line Detecting Templates
Vertical:   Horizontal:
-1 2 -1     -1 -1 -1
-1 2 -1      2  2  2
-1 2 -1     -1 -1 -1

34 Linear Line Detection
Horizontal:   Vertical:
-1 -1 -1      -1 2 -1
 2  2  2      -1 2 -1
-1 -1 -1      -1 2 -1

35 Compass gradient masks
Produce a maximum output for brightness value changes from the specified direction. For example, a North compass gradient mask enhances changes that increase in a northerly direction, i.e., from south to north:
North:
 1  1  1
 1 -2  1
-1 -1 -1

36 Example: Compass gradient masks
North:       South:
 1  1  1     -1 -1 -1
 1 -2  1      1 -2  1
-1 -1 -1      1  1  1
Example: North vs. South gradient mask
Original:   North output:   South output:
8 8 8       ...              ...
8 8 8        0   0   0        0   0   0
8 8 8       18  18  18      -18 -18 -18
2 2 2       18  18  18      -18 -18 -18
2 2 2        0   0   0        0   0   0
2 2 2       ...              ...

37 Directional gradient filters
Directional gradient filters produce output images whose BVs are proportional to the difference between neighboring pixel BVs in a given direction, i.e., they calculate the directional gradient.
Spatial differencing: calculating spatial derivatives (differencing a pixel from its neighbor or some other lag distance); doesn't use the kernel convolution approach.
Vertical: BV(i,j) = BV(i,j) - BV(i,j+1) + K
Horizontal: BV(i,j) = BV(i,j) - BV(i-1,j) + K
A constant K is added to make the output positive (usually K = 127).

38 Directional gradient filters
Example: horizontal spatial difference, differencing each pixel from its neighbor along the row (K = 0 here for clarity)
Original:       Output:
2 2 2 8 8 8     0 0  6 0 0
2 2 2 8 8 8     0 0  6 0 0
8 8 8 2 2 2     0 0 -6 0 0
8 8 8 2 2 2     0 0 -6 0 0
Positive values signify an increase left to right; negative values a decrease left to right.
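
This spatial differencing needs no kernel at all; a one-line NumPy version (illustrative; `horizontal_difference` is a hypothetical name) reproduces the example:

```python
import numpy as np

def horizontal_difference(img, k=0):
    """Spatial differencing along each row: difference every pixel from
    its left-hand neighbor, plus a constant K (often 127) to keep the
    output positive for display."""
    img = img.astype(float)
    return img[:, 1:] - img[:, :-1] + k
```

The output has one fewer column than the input, since each value is a difference between adjacent pixels.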

39 Non-directional Edge Enhancement
The Laplacian is a second derivative and is insensitive to direction. It highlights points, lines and edges in the image and suppresses uniform, smoothly varying regions.
 0 -1  0      1 -2  1
-1  4 -1     -2  4 -2
 0 -1  0      1 -2  1

40 Nonlinear Edge Detection
Sobel edge detector: a nonlinear combination of pixels
Sobel = SQRT(X^2 + Y^2)
X:          Y:
-1 0 1       1  2  1
-2 0 2       0  0  0
-1 0 1      -1 -2 -1
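
The Sobel combination can be sketched for a single 3x3 window position (illustrative; `sobel_magnitude` is a hypothetical name):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
SOBEL_Y = np.array([[ 1,  2,  1],
                    [ 0,  0,  0],
                    [-1, -2, -1]])

def sobel_magnitude(window):
    """Nonlinear Sobel edge response for one 3x3 window:
    sqrt(X^2 + Y^2) of the two directional template responses."""
    gx = (window * SOBEL_X).sum()
    gy = (window * SOBEL_Y).sum()
    return float(np.hypot(gx, gy))
```

Because the two directional responses are combined by a square root of squares, the result is non-negative and insensitive to edge direction.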

41 Nondirectional edge filter
Laplacian filter | Sobel filter

42 Edge Enhancement Edge enhancement process: First detect/map the edges Add or subtract the edges back into the original image to increase contrast in the vicinity of the edge

43 Edge enhancement
Original IKONOS pan | Laplacian edge image | Original – edge = edge enhanced

44 Unsharp masking to enhance detail
Original IKONOS pan | 7x7 low pass | Original – low pass = edge enhanced

45 High Pass Filter (HPF) method for Image Fusion
Capture high frequency information from the high spatial resolution panchromatic image using some form of high pass filter.
This high frequency information is then added into the low spatial resolution multi-spectral imagery.
Often produces less distortion of the original spectral characteristics of the imagery, but is also less visually attractive.

46 Edge Mapping/Extraction
BV thresholding of the edge detector output creates a binary map of edges vs. non-edges.
Threshold too low: too many isolated pixels classified as edges and edge boundaries too thick.
Threshold too high: boundaries will consist of thin, broken segments.

47 Edge Mapping/Extraction: example using Sobel filter
The edge image represents a continuous range of values. Can you determine a threshold?

48 Edge Mapping/Extraction: example using Sobel filter
Edge-extracted image, DN = 15 | Edge-extracted image, DN = 20

49 Adaptive Filtering The selection of a single threshold to differentiate an edge that is applicable across the entire image may be difficult An adaptive filtering approach that looks at the relative difference on a more local scale (i.e., within a moving window) may achieve better results.

50 Band by Band vs. PCA Most spatial convolution filtering works waveband by waveband. Some edges may be more pronounced in some spectral wavebands vs. others. Alternatively, PCA can be used to composite the multispectral wavebands and run the convolution filtering/edge detection on the PCA Brightness component

51 Fourier Transform Fourier analysis is a mathematical technique for separating an image into its various spatial frequency and directional components. Fourier or spectral analysis models the data as a weighted sum of cosine and sine waveforms of varying direction and spatial frequency. Think of waves with a high vs. low spatial frequency

52 Fourier Transform
The frequency domain can be displayed to view the magnitude and direction of the different frequency components; unwanted components can then be filtered out and the result back-transformed to image space.
The FT is a global rather than local operator.
Useful for noise removal or enhancing particular spatial frequency components.
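
The filter-in-frequency-space workflow (transform, mask, back-transform) can be sketched with NumPy's FFT routines (an illustration of a circular low-pass mask; `fft_low_pass` and the `cutoff` parameterization are my own assumptions):

```python
import numpy as np

def fft_low_pass(img, cutoff):
    """Global low-pass filter in the frequency domain: transform the
    image, zero every component farther than `cutoff` from the spectrum
    center (where the low frequencies sit), then back-transform."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    r = np.arange(rows) - rows // 2
    c = np.arange(cols) - cols // 2
    dist = np.hypot(r[:, None], c[None, :])
    spectrum[dist > cutoff] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
```

Replacing the circular mask with a wedge-shaped one gives the directional (wedge) filter shown on a later slide; in both cases the operator is global, since every output pixel depends on the whole image.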

53 Fourier Analysis Example Side scan sonar image of sea bottom Fourier spectrum

54 Fourier Analysis Example
Fourier spectrum: low frequencies towards the center, high frequencies towards the edges.
Periodic image noise often shows as a thin line in the spectrum, oriented perpendicular to the noise pattern in the original image.

55 Fourier Analysis Example Low pass filter Back transformed image

56 Fourier Analysis Example
Wedge filter | Back-transformed image

57 Discrete Wavelet Transform
The wavelet transform is similar to the FFT.
DWT: uses short, discrete "wavelets" parameterized as a finite sized moving window, rather than a continuous global sine wave (as in the FT).
A form of multi-resolution analysis: starts at the scale of the entire image and then successively breaks it down into smaller windows (2x the frequency, then 4x, 8x, etc.), using high and low pass filters to decompose the image into low-pass, vertical, horizontal and diagonal components.
Graphic from http://www.jinr.ru/programs/jinrlib/wasp/docs/html/img145.png
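
One level of this decomposition can be sketched with the Haar wavelet, the simplest case (an illustration; ERDAS IMAGINE may use other wavelet bases, and the `haar_dwt2` name and band labels are my own):

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet decomposition: averaging and
    differencing pairs of columns, then pairs of rows, yields a
    half-size low-pass image plus three detail (edge) images."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0    # column averages
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0    # column differences
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0  # low-pass quarter
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0  # detail quarters
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh
```

Applying the function again to the low-pass quarter gives the next decomposition level, which is how the multi-pass pyramids on the following slides are built.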

58 Discrete Wavelet Transform
Graphic from http://norum.homeunix.net/~carl/wavelet/ (this site has a DWT tutorial)
Graphic from http://www.jinr.ru/programs/jinrlib/wasp/docs/html/img145.png

59 Discrete Wavelet Transform: 5 pass DWT
Graphics from http://www.jinr.ru/programs/jinrlib/wasp/docs/html/img145.png and http://norum.homeunix.net/~carl/wavelet/

60 Wavelet Transform for image fusion
Use the wavelet transform on the high spatial resolution panchromatic imagery; decompose until you reach the same spatial resolution as the low spatial resolution multi-spectral data.
The multi-spectral and high-res imagery should have relative pixel sizes differing by a factor of 2.
The multi-spectral image is substituted for the low-pass image derived from the high-res imagery.
Reverse the wavelet decomposition to produce high spatial resolution imagery carrying the multi-spectral information.

61 Wavelet Transform for image fusion
High spectral res image + high spatial res image → resample, histogram match → DWT (h, v, d detail components) → fused image
Adapted from ERDAS IMAGINE Field Guide

62 Wavelet Transform for image fusion
Must either compress the multi-spectral info into a single band (e.g., through PCA) or process single bands sequentially.
The two images should be spectrally identical, i.e., only include the multi-spectral bands that fall within the range of the high-res panchromatic band.
Often produces less distortion of the original spectral characteristics of the imagery.
The DWT is also used for image compression, with only the low-pass information retained.

63 Spatial-based enhancement Concept of spatial frequency Texture Low vs. Hi frequency enhancement Edge Enhancement/Sharpening Edge Detection/Extraction Global vs. local operator: Fourier vs. kernel convolution

64 How to remove noise from an image? How to highlight edges within the image? How to fuse imagery?

65 Homework
1. Homework: Spatial Filtering
2. Reading Ch. 8:276-329
3. Reading ERDAS Ch. 6:157-160, 189-201
4. Article review due Wednesday (Feb. 14, 2007)

