# Chapter 3 Image Enhancement in the Spatial Domain.


Image Enhancement in the Spatial Domain
The spatial domain: the image plane. A digital image is a Cartesian coordinate system of discrete rows and columns; at the intersection of each row and column is a pixel, and each pixel has a value, which we will call intensity.
The frequency domain: a (2-dimensional) discrete Fourier transform of the spatial domain. We will discuss it in Chapter 4.
Enhancement: to "improve" the usefulness of an image by applying some transformation to it. Often the improvement is to make the image "better" looking, such as by increasing the intensity or contrast.

Background
A mathematical representation of spatial domain enhancement:

g(x, y) = T[f(x, y)]

where
f(x, y): the input image
g(x, y): the processed (output) image
T: an operator on f, defined over some neighborhood of (x, y)

Gray-level Transformation

Some Basic Gray Level Transformations

Image Negatives
Let the range of gray levels be [0, L-1]. The negative transformation is

s = (L - 1) - r
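As a minimal sketch of the negative transformation (assuming NumPy and an 8-bit image, so L = 256):

```python
import numpy as np

L = 256  # number of gray levels; an 8-bit image is assumed

def negative(image):
    """Image negative: map each gray level r to s = (L - 1) - r."""
    return (L - 1) - image.astype(int)

# a tiny hypothetical test image
img = np.array([[0, 64], [128, 255]])
neg = negative(img)
```

Black (0) maps to white (255) and vice versa, which is useful for visualizing dark detail in predominantly dark images.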

Log Transformations

s = c log(1 + r)

where c is a constant and r ≥ 0

Power-Law Transformation

s = c r^γ

where c and γ are positive constants
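A hedged sketch of both transforms, assuming NumPy and intensities normalized to [0, 1] before the mapping (the scaling factors here are one illustrative choice, not the only one):

```python
import numpy as np

def log_transform(image, c=1.0):
    """s = c * log(1 + r) on intensities normalized to [0, 1],
    rescaled so that r = 1 maps back to full intensity when c = 1."""
    r = image / 255.0
    s = c * np.log1p(r) / np.log(2.0)
    return np.clip(s * 255, 0, 255).astype(np.uint8)

def power_law(image, gamma, c=1.0):
    """s = c * r**gamma (gamma correction) on normalized intensities."""
    r = image / 255.0
    s = c * np.power(r, gamma)
    return np.clip(s * 255, 0, 255).astype(np.uint8)

dark = np.array([[16, 64]])
```

With gamma < 1 the curve lifts dark values (brightening), and with gamma > 1 it compresses them (darkening), which is the behavior exploited in the gamma-correction examples that follow.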

Power-Law Transformation Example 1: Gamma Correction

Power-Law Transformation
Example 2: Gamma Correction

Power-Law Transformation
Example 3: Gamma Correction

Piecewise-Linear Transformation Functions Case 1: Contrast Stretching
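A sketch of contrast stretching as a piecewise-linear map, assuming NumPy; the control points (r1, s1) and (r2, s2) are free parameters chosen by the user:

```python
import numpy as np

def contrast_stretch(image, r1, s1, r2, s2, L=256):
    """Piecewise-linear contrast stretching through the control points
    (0, 0), (r1, s1), (r2, s2), (L-1, L-1)."""
    r = image.astype(float)
    out = np.empty_like(r)
    lo, hi = r <= r1, r > r2
    mid = ~lo & ~hi
    out[lo] = s1 / r1 * r[lo]
    out[mid] = s1 + (s2 - s1) / (r2 - r1) * (r[mid] - r1)
    out[hi] = s2 + (L - 1 - s2) / (L - 1 - r2) * (r[hi] - r2)
    return np.clip(out, 0, L - 1).astype(np.uint8)

img = np.array([[30, 90, 125, 160, 240]])
stretched = contrast_stretch(img, r1=90, s1=20, r2=160, s2=235)
```

Choosing s1 < r1 and s2 > r2 steepens the middle segment, spreading the mid-range gray levels over a wider output range.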

Piecewise-Linear Transformation Functions Case 2:Gray-level Slicing
Figure panels: an image, and the result of using the transformation in (a).

Piecewise-Linear Transformation Functions Case 3:Bit-plane Slicing
It can highlight the contribution made to total image appearance by specific bits. Suppose each pixel in an image is represented by 8 bits. The image is then composed of eight 1-bit planes, ranging from bit-plane 0 for the least significant bit to bit-plane 7 for the most significant bit.
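Extracting a bit-plane is a shift and a mask; a minimal sketch, assuming NumPy and an 8-bit image:

```python
import numpy as np

def bit_plane(image, k):
    """Extract bit-plane k of an 8-bit image (0 = least significant,
    7 = most significant); the result is a binary (0/1) image."""
    return (image >> k) & 1

img = np.array([[0b10110010, 0b00000001]], dtype=np.uint8)

# the original image is recovered exactly by summing the weighted planes
recon = sum(bit_plane(img, k).astype(int) << k for k in range(8))
```

The high-order planes carry most of the visually significant data, while the low-order planes contribute subtle detail, as the fractal-image slides that follow illustrate.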

Piecewise-Linear Transformation Functions
Bit-plane Slicing: A Fractal Image

Piecewise-Linear Transformation Functions
Bit-plane Slicing: A Fractal Image (figure panels show bit-planes 7 down to 1)

Histogram Processing

Histogram Processing

Histogram Equalization
To improve the contrast of an image: transform the image so that the transformed image has a nearly uniform distribution of pixel values.
Transformation: s = T(r). Assume r has been normalized to the interval [0, 1], with r = 0 representing black and r = 1 representing white.
The transformation function satisfies the following conditions:
(a) T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1
(b) 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1

Histogram Equalization
For example:

Histogram Equalization
Histogram equalization is based on a transformation of the probability density function of a random variable. Let p_r(r) and p_s(s) denote the probability density functions of the random variables r and s, respectively. If p_r(r) and T(r) are known, then the probability density function p_s(s) of the transformed variable s can be obtained from

p_s(s) = p_r(r) |dr/ds|

Define a transformation function

s = T(r) = ∫_0^r p_r(w) dw

where w is a dummy variable of integration; the right side of this equation is the cumulative distribution function of the random variable r.

Histogram Equalization
Given this transformation function T(r), we have ds/dr = p_r(r), so

p_s(s) = p_r(r) |dr/ds| = p_r(r) · 1/p_r(r) = 1, for 0 ≤ s ≤ 1

i.e., p_s(s) is now a uniform probability density function. T(r) depends on p_r(r), but the resulting p_s(s) is always uniform.

Histogram Equalization
In the discrete version: the probability of occurrence of gray level r_k in an image is

p_r(r_k) = n_k / n, k = 0, 1, ..., L-1

n: the total number of pixels in the image
n_k: the number of pixels that have gray level r_k
L: the total number of possible gray levels in the image
The transformation function is

s_k = T(r_k) = Σ_{j=0}^{k} p_r(r_j) = Σ_{j=0}^{k} n_j / n

Thus, an output image is obtained by mapping each pixel with level r_k in the input image into a corresponding pixel with level s_k.
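The discrete mapping above can be sketched as a lookup table built from the cumulative histogram (assuming NumPy; the final scaling by L - 1 converts the [0, 1] result back to displayable gray levels):

```python
import numpy as np

def equalize(image, L=256):
    """Discrete histogram equalization: s_k is the cumulative sum
    sum_{j<=k} n_j / n, scaled by (L - 1) for display."""
    hist = np.bincount(image.ravel(), minlength=L)
    cdf = np.cumsum(hist) / image.size          # running sums of p_r(r_j)
    lut = np.round((L - 1) * cdf).astype(np.uint8)
    return lut[image]

img = np.array([[0, 0], [0, 255]])
eq = equalize(img)
```

Because the lookup table is built once and applied to every pixel, the cost is dominated by a single pass over the image.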

Histogram Equalization

Histogram Equalization

Histogram Equalization
Transformation functions (1) through (4) were obtained from the histograms of the images in Fig 3.17(1), using Eq. (3.3-8).

Histogram Matching Histogram matching is similar to histogram equalization, except that instead of trying to make the output image have a flat histogram, we would like it to have a histogram of a specified shape, say pz(z). We skip the details of implementation.

Local Enhancement
The histogram processing methods discussed above are global, in the sense that pixels are modified by a transformation function based on the gray-level content of the entire image. However, there are cases in which it is necessary to enhance details over small areas in an image. (Figure panels: original, global enhancement, local enhancement.)

Use of Histogram Statistics for Image Enhancement
Moments can be determined directly from a histogram much faster than they can from the pixels directly. Let r denote a discrete random variable representing discrete gray levels in the range [0, L-1], and let p(r_i) denote the normalized histogram component corresponding to the i-th value of r. Then the n-th moment of r about its mean is defined as

μ_n(r) = Σ_{i=0}^{L-1} (r_i - m)^n p(r_i)

where m is the mean value of r:

m = Σ_{i=0}^{L-1} r_i p(r_i)

For example, the second moment (also the variance of r) is

μ_2(r) = σ²(r) = Σ_{i=0}^{L-1} (r_i - m)² p(r_i)
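A minimal sketch of computing the mean and variance from the histogram rather than the raw pixels (assuming NumPy and an 8-bit image):

```python
import numpy as np

def histogram_stats(image, L=256):
    """Mean and variance computed from the normalized histogram p(r_i)
    rather than from the pixels directly."""
    p = np.bincount(image.ravel(), minlength=L) / image.size
    r = np.arange(L)
    m = np.sum(r * p)                 # mean: sum of r_i p(r_i)
    var = np.sum((r - m) ** 2 * p)    # second central moment
    return m, var

img = np.array([[10, 10], [10, 10]])
m, var = histogram_stats(img)
```

The speed advantage comes from summing over at most L = 256 histogram bins instead of every pixel, which matters when the same statistics are recomputed for many local windows.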

Use of Histogram Statistics for Image Enhancement
Two uses of the mean and variance for enhancement purposes:
The global mean and variance (computed over the entire image) are useful for adjusting overall contrast and intensity.
The mean and standard deviation of a local region are useful for correcting large-scale changes in intensity and contrast. ( See equations and )

Use of Histogram Statistics for Image Enhancement
Example: Enhancement based on local statistics

Use of Histogram Statistics for Image Enhancement
Example: Enhancement based on local statistics

Use of Histogram Statistics for Image Enhancement
Example: Enhancement based on local statistics

Enhancement Using Arithmetic/Logic Operations
Two images of the same size can be combined using the operations of addition, subtraction, multiplication, division, and logical AND, OR, XOR, and NOT. Such operations are performed on pairs of corresponding pixels. Often only one of the images is a real picture while the other is a machine-generated mask; the mask is often a binary image consisting only of pixel values 0 and 1. Example: Figure 3.27

Enhancement Using Arithmetic/Logic Operations
Figure panels: masking with AND, and masking with OR.

Image Subtraction Example 1

Image Subtraction Example 2
When subtracting two images, negative pixel values can result, so if you want to display the result it may be necessary to readjust the dynamic range by scaling.

Image Averaging
When taking pictures in reduced lighting (i.e., low illumination), image noise becomes apparent. A noisy image g(x, y) can be defined by

g(x, y) = f(x, y) + η(x, y)

where
f(x, y): the original image
η(x, y): additive noise
One simple way to reduce this granular noise is to take several identical pictures and average them:

ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

As K increases, the variance of the noise in ḡ decreases as σ²/K, smoothing out the randomness.

Noise Reduction by Image Averaging Example: Adding Gaussian Noise
Figure 3.30 (a): An image of Galaxy Pair NGC3314. Figure 3.30 (b): Image corrupted by additive Gaussian noise with zero mean and a standard deviation of 64 gray levels. Figure 3.30 (c)-(f): Results of averaging K=8,16,64, and 128 noisy images.

Noise Reduction by Image Averaging Example: Adding Gaussian Noise
Figure 3.31 (a): From top to bottom: Difference images between Fig (a) and the four images in Figs (c) through (f), respectively. Figure 3.31 (b): Corresponding histogram.

Basics of Spatial Filtering
In spatial filtering (vs. frequency domain filtering), the output image is computed directly by simple calculations on the pixels of the input image. Spatial filtering can be either linear or nonlinear. For each output pixel, some neighborhood of input pixels is used in the computation. In general, linear filtering of an image f of size M×N with a filter mask of size m×n is given by

g(x, y) = Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s, t) f(x+s, y+t)

where a = (m-1)/2 and b = (n-1)/2. This concept is called convolution; filter masks are sometimes called convolution masks or convolution kernels.
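The double sum above translates almost directly into code; a minimal sketch assuming NumPy, with zero-padding as one (of several possible) boundary treatments:

```python
import numpy as np

def linear_filter(f, w):
    """g(x,y) = sum_{s,t} w(s,t) f(x+s, y+t), with zero-padding
    at the borders (one common choice of boundary handling)."""
    m, n = w.shape
    a, b = (m - 1) // 2, (n - 1) // 2
    fp = np.pad(f.astype(float), ((a, a), (b, b)))
    g = np.zeros(f.shape)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.sum(w * fp[x:x + m, y:y + n])
    return g

identity = np.zeros((3, 3)); identity[1, 1] = 1.0
img = np.arange(16.0).reshape(4, 4)
```

With the identity mask (a single 1 at the center) the output reproduces the input exactly, a useful sanity check for any filtering implementation.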

Basics of Spatial Filtering

Basics of Spatial Filtering
Nonlinear spatial filtering usually uses a neighborhood too, but some other mathematical operation is used. These can include conditional operations (if ..., then ...), order statistics (sorting pixel values in the neighborhood), etc. Because the neighborhood includes pixels on all sides of the center pixel, some special procedure (e.g., padding) must be used along the top, bottom, left, and right sides of the image so that the processing does not try to use pixels that do not exist.

Smoothing Spatial Filters
Smoothing linear filters: averaging filters (lowpass filters in Chapter 4)
Box filter: all coefficients equal, e.g., the 3×3 mask (1/9) × [1 1 1; 1 1 1; 1 1 1]
Weighted average filter: center pixels weighted more heavily, e.g., the 3×3 mask (1/16) × [1 2 1; 2 4 2; 1 2 1]

Smoothing Spatial Filters
The general implementation for filtering an M×N image with a weighted averaging filter of size m×n is given by

g(x, y) = [ Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s, t) f(x+s, y+t) ] / [ Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s, t) ]

where a = (m-1)/2 and b = (n-1)/2.

Smoothing Spatial Filters Image smoothing with masks of various sizes

Smoothing Spatial Filters
Another Example

Order-Statistic Filters
Median filter: replaces each pixel value with the median of the gray levels in its neighborhood; particularly effective at reducing impulse noise (salt-and-pepper noise) while preserving edges.
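A minimal sketch of a median filter, assuming NumPy and edge replication at the borders (again, one of several possible boundary choices):

```python
import numpy as np

def median_filter(f, size=3):
    """Replace each pixel with the median of its size x size
    neighborhood; borders are handled by edge replication."""
    a = size // 2
    fp = np.pad(f, a, mode="edge")
    g = np.empty_like(f)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.median(fp[x:x + size, y:y + size])
    return g

img = np.full((5, 5), 100)
img[2, 2] = 255          # an isolated "salt" pixel
clean = median_filter(img)
```

An isolated outlier can never be the median of a 3×3 window, so it is removed completely, whereas a linear averaging filter would merely spread it over the neighborhood.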

Sharpening Spatial Filters
Sharpening filters are based on computing spatial derivatives of an image. The first-order derivative of a one-dimensional function f(x) is

∂f/∂x = f(x+1) - f(x)

The second-order derivative of a one-dimensional function f(x) is

∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)

Sharpening Spatial Filters
An Example

Use of Second Derivatives for Enhancement
The Laplacian
Development of the Laplacian method. The two-dimensional Laplacian operator for continuous functions is

∇²f = ∂²f/∂x² + ∂²f/∂y²

The Laplacian is a linear operator.

Use of Second Derivatives for Enhancement
The Laplacian
In discrete form, using the second-order differences above in each direction:

∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)

Use of Second Derivatives for Enhancement
The Laplacian
To sharpen an image, the Laplacian of the image is subtracted from the original image (when the center coefficient of the Laplacian mask is negative):

g(x, y) = f(x, y) - ∇²f(x, y)

Example: Figure 3.40

Use of Second Derivatives for Enhancement
The Laplacian: Simplifications
Substituting the discrete Laplacian into g(x, y) = f(x, y) - ∇²f(x, y) gives a single composite mask for g(x, y):

g(x, y) = 5f(x, y) - [f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1)]

so sharpening can be done in one pass with the mask [0 -1 0; -1 5 -1; 0 -1 0], rather than computing and subtracting the Laplacian separately.
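The composite-mask form can be sketched directly with array slicing (assuming NumPy; edge replication at the borders is an illustrative choice):

```python
import numpy as np

def sharpen(f):
    """Composite-mask sharpening:
    g(x,y) = 5 f(x,y) - [f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1)],
    with edge replication at the borders."""
    fp = np.pad(f.astype(float), 1, mode="edge")
    return (5 * fp[1:-1, 1:-1]
            - fp[2:, 1:-1] - fp[:-2, 1:-1]
            - fp[1:-1, 2:] - fp[1:-1, :-2])

flat = np.full((4, 4), 50)
```

On a constant region the four neighbor terms exactly cancel the extra 4f(x, y), so flat areas pass through unchanged while edges and fine detail are boosted.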

Use of First Derivatives for Enhancement
The Gradient
Development of the gradient method. The gradient of a function f at coordinates (x, y) is defined as the two-dimensional column vector

∇f = [G_x, G_y]^T = [∂f/∂x, ∂f/∂y]^T

The magnitude of this vector is given by

mag(∇f) = (G_x² + G_y²)^{1/2} ≈ |G_x| + |G_y|
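A minimal sketch of the gradient magnitude using simple forward differences (assuming NumPy; practical edge detectors often use Sobel or Roberts masks instead):

```python
import numpy as np

def gradient_magnitude(f):
    """Approximate mag(grad f) = sqrt(Gx^2 + Gy^2) with forward
    differences Gx = f(x+1,y) - f(x,y), Gy = f(x,y+1) - f(x,y)."""
    f = f.astype(float)
    gx, gy = np.zeros(f.shape), np.zeros(f.shape)
    gx[:-1, :] = f[1:, :] - f[:-1, :]
    gy[:, :-1] = f[:, 1:] - f[:, :-1]
    return np.hypot(gx, gy)

step = np.zeros((4, 4)); step[:, 2:] = 10.0   # a vertical step edge
edges = gradient_magnitude(step)
```

The output is large only where intensity changes, so a step edge produces a single bright line at the transition and zero response in the flat regions.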

Use of First Derivatives for Enhancement

Use of First Derivatives for Enhancement