# Image Processing IB Paper 8 – Part A

Ognjen Arandjelović
http://mi.eng.cam.ac.uk/~oa214/

Lecture Roadmap

- Lecture 1: Geometric image transformations
- Lecture 2: Colour and brightness enhancement
- Lecture 3: Denoising and image filtering

– Image Denoising and Filtering –

Image Noise Sources

Image noise may be produced by several sources:

- Quantization
- Photonic
- Thermal
- Electronic

Denoising

To perform denoising effectively, we need to consider the following issues:

- Signal (uncorrupted image) model: typically piece-wise constant or linear
- Noise model (from the physics of image formation): additive or multiplicative; Gaussian, white, salt and pepper, ...

Salt and Pepper Noise

Gaussian Noise

Modelling Noise

Most often noise is additive:

g(x, y) = f(x, y) + n(x, y)

where g(x, y) is the observed pixel luminance, f(x, y) the true luminance and n(x, y) the noise process.
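The additive model can be sketched in a few lines of numpy; the flat 100-grey-level patch and the noise standard deviation of 10 are illustrative choices, not values from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# True luminance f: a flat 100-grey-level patch (a toy "image")
f = np.full((64, 64), 100.0)

# Zero-mean additive Gaussian noise n with standard deviation 10
n = rng.normal(loc=0.0, scale=10.0, size=f.shape)

# Observed luminance: g = f + n
g = f + n
```

Because the noise is zero-mean, the observed patch still averages close to the true luminance, while its standard deviation reflects the noise level.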

Additive Gaussian Noise – Example A clear original image was corrupted by additive white Gaussian noise: Original, uncorrupted image | Additive Gaussian noise

Additive Gaussian Noise – Example A clear original image was corrupted by additive white Gaussian noise: Additive Gaussian noise

Additive Gaussian Noise – Example Taking a slice through the image can help us visualize the behaviour of noise better:

Temporal Average for Video Denoising

A video feed of a static scene can easily be denoised by temporal averaging, under the assumption of zero-mean additive noise:

f̂(x, y) = (1/N) Σ_i g_i(x, y)

where f̂(x, y) is the pixel luminance estimate and g_i(x, y) the pixel luminance in frame i. The average noise energy is reduced by a factor of N.
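A minimal numpy sketch of temporal averaging over a simulated static scene; the scene, noise level sigma = 20 and N = 100 frames are illustrative assumptions. Since noise energy (variance) drops by a factor of N, the residual standard deviation should be roughly sigma / sqrt(N) = 2:

```python
import numpy as np

rng = np.random.default_rng(1)

# Static scene: every frame is the same true image plus fresh zero-mean noise
true_image = np.full((32, 32), 120.0)
sigma = 20.0
N = 100

frames = [true_image + rng.normal(0.0, sigma, true_image.shape)
          for _ in range(N)]

# Temporal average over the N frames estimates each pixel's true luminance
estimate = np.mean(frames, axis=0)

# Residual noise standard deviation, expected near sigma / sqrt(N)
residual_std = (estimate - true_image).std()
```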

Temporal Averaging – Example Consider our noisy CCTV image from the previous lecture and the result of brightness enhancement: Original image | Brightness-enhanced image

Temporal Averaging – Example The effect of temporal averaging over 100 frames is dramatic: the clarity of image detail is much improved. Note, however, that moving objects cause blur.

Spatial Averaging Although attractive, a static video feed is usually not available. However, a similar technique can be used by noting that images are mostly smoothly varying. Original smoothly varying signal | The signal corrupted with zero-mean Gaussian noise

Simple Spatial Averaging Thus, we can attempt to denoise the signal by simple spatial averaging: the result of averaging over each 7-pixel neighbourhood (each sample ± 3 pixels)
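A 1D sketch of this 7-point spatial average, assuming a sine wave as the smoothly varying signal and a noise standard deviation of 0.1 (both illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Smoothly varying 1D signal corrupted by zero-mean Gaussian noise
x = np.linspace(0.0, 2.0 * np.pi, 200)
clean = np.sin(x)
noisy = clean + rng.normal(0.0, 0.1, x.shape)

# Average each sample with its 3 neighbours on either side (7-point window)
window = np.ones(7) / 7.0
denoised = np.convolve(noisy, window, mode="same")

def rms(err):
    return float(np.sqrt(np.mean(err ** 2)))

# Compare errors on the interior, away from boundary effects
err_before = rms((noisy - clean)[10:-10])
err_after = rms((denoised - clean)[10:-10])
```

Averaging 7 independent noise samples reduces the noise standard deviation by about sqrt(7), so `err_after` comes out well below `err_before`.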

Simple Spatial Averaging – Example Using our synthetically corrupted image: Additive Gaussian noise | Spatially averaged using a 5 × 5 neighbourhood

Simple Spatial Averaging – Example Consider the difference between the uncorrupted image and the corrupted and denoised images: Before averaging, RMS difference = 29 | After averaging, RMS difference = 12

Simple Spatial Averaging – Analysis The result of averaging looks good, but a closer inspection reveals some loss of detail: Difference image | Magnified patch

Simple Spatial Averaging – Analysis

To analyze the filtering effects formally, rewrite the original averaging expression as a convolution integral with a rectangular pulse h(x):

(g * h)(x) = ∫ g(u) h(x − u) du

1D Convolution A quick convolution recap: flip h(x) and slide it over f(x).

Discrete 1D Convolution

When dealing with discrete signals, the integral becomes a sum:

(f * h)[n] = Σ_k f[k] h[n − k]

Flip the kernel and slide it over the signal, e.g. the signal ... 234 233 228 240 241 ... convolved with the kernel 0 0 1 2 2 1 accumulates products such as 1·228 + 2·240 + 2·241 + ...
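The flip-and-slide sum can be written out directly and checked against `np.convolve`; the short signal and unnormalized smoothing kernel below echo the numbers on the slide but are otherwise arbitrary:

```python
import numpy as np

# Discrete convolution: (f * h)[n] = sum over k of f[k] * h[n - k]
f = np.array([234, 233, 228, 240, 241])
h = np.array([1, 2, 2, 1])  # an unnormalized smoothing kernel

# Direct flip-and-slide implementation, for comparison with np.convolve
out_len = len(f) + len(h) - 1
direct = np.zeros(out_len, dtype=int)
for n in range(out_len):
    for k in range(len(f)):
        if 0 <= n - k < len(h):
            direct[n] += f[k] * h[n - k]
```

The hand-rolled loop and `np.convolve(f, h)` produce identical outputs.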

2D Convolution

The concept of linear filtering as convolution with a filter (or kernel) extends to 2D, and the integral becomes:

(f * h)(x, y) = ∬ f(u, v) h(x − u, y − v) du dv

We shall be dealing with separable filters only, for which this is equivalent to two 1D convolutions (one along the rows, one along the columns).
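Separability can be verified numerically. This sketch assumes an arbitrary random image and the symmetric kernel [1, 2, 1]/4, whose 2D version is the outer product of the 1D one (the kernel is symmetric, so flipping it for convolution makes no difference):

```python
import numpy as np

rng = np.random.default_rng(3)
image = rng.normal(size=(16, 16))

# A separable 2D kernel is the outer product of two 1D kernels
k = np.array([1.0, 2.0, 1.0]) / 4.0
K = np.outer(k, k)

# Direct 2D convolution ("same" size, zero padding)
padded = np.pad(image, 1)
direct = np.zeros_like(image)
for i in range(image.shape[0]):
    for j in range(image.shape[1]):
        direct[i, j] = np.sum(padded[i:i + 3, j:j + 3] * K)

# The same result via two 1D convolutions: along rows, then along columns
rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
separable = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
```

For an n × n image and an m × m kernel, the separable route costs O(n²m) multiplications instead of O(n²m²), which is why separable filters are preferred.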

Simple Spatial Averaging – Analysis By considering the effects of convolution in the frequency domain, we can now see why there was loss of detail: the Fourier transform of the rectangular pulse function is the sinc function, so high frequencies are damped.

White Noise Model This insight allows us to design the denoising filter in a principled way, by considering the SNR over different frequencies: Signal frequency spectrum | Noise frequency spectrum (energy against frequency)

White Noise Model Comparing the signal and noise energy across frequencies tells us which frequencies to pass and which not to pass.

The Ideal LPF Again

As when we dealt with reconstructing a signal from a set of samples, we can low-pass filter by convolving with the sinc function in the spatial domain:

- The key limitation is that the sinc function has a wide spatial support
- Thus, in practice we often use filters that offer a better trade-off in terms of spatial support and bandwidth

Gaussian Low-Pass Filter The Gaussian LPF is one of the most commonly used LPFs. It possesses the attractive property of minimal space-bandwidth product. 1D Gaussian | 2D Gaussian as a surface | 2D Gaussian as an image
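A sampled, normalized Gaussian kernel is easy to build; the values sigma = 1.5 and radius = 4 below are illustrative, not from the slides. Like the box filter, the Gaussian is separable, so its 2D kernel is the outer product of two 1D kernels:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """Sampled, normalized 1D Gaussian kernel with 2*radius + 1 taps."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

k1d = gaussian_kernel_1d(sigma=1.5, radius=4)

# Separability: the 2D Gaussian is the outer product of two 1D Gaussians
k2d = np.outer(k1d, k1d)
```

Normalizing so the taps sum to 1 ensures flat image regions pass through unchanged.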

Gaussian LPF – Toy Example Using the Gaussian filter on our toy 1D example produces a nearly perfect filtering result: RMS error reduction from 0.1 to 0.02

Gaussian LPF – Example Using our synthetically corrupted image: Additive Gaussian noise | Low-pass filtered using a Gaussian

Low, Band and High-Pass Filters A quick recap of relevant terminology (gain against frequency): Low-pass | Band-pass | High-pass

Low, Band and High-Pass Filters

A summary of main uses:

- Low-pass: denoising
- High-pass: removal of non-informative low-frequency components
- Band-pass: combination of low-pass and high-pass filtering effects

Gaussian High-Pass Filter

A high-pass filter can be constructed simply from the Gaussian LPF. Convolution with the delta function leaves a signal unchanged, so subtracting the Gaussian low-pass filter g(x) from the delta function yields a high-pass filter:

h_hp(x) = δ(x) − g(x)
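The delta-minus-Gaussian construction can be checked numerically; the 9-tap kernel and sigma = 1.5 are illustrative. Because the high-pass taps sum to zero, any constant signal produces zero response:

```python
import numpy as np

# Gaussian low-pass kernel
x = np.arange(-4, 5)
g = np.exp(-x ** 2 / (2.0 * 1.5 ** 2))
g /= g.sum()

# Discrete delta: 1 at the centre, 0 elsewhere
delta = np.zeros_like(g)
delta[len(delta) // 2] = 1.0

# High-pass kernel: delta minus the low-pass kernel
h_hp = delta - g

# A constant signal gives (numerically) zero response everywhere
const = np.full(50, 7.0)
response = np.convolve(const, h_hp, mode="valid")
```

This is the property noted on the next slide: the high-pass output does not depend on the signal mean.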

Gaussian HPF – Toy Example Consider the effects of high-pass filtering our 1D toy example: Original signal | High-pass filter output. The result does not depend on the signal mean; maximal responses occur around discontinuities.

Gaussian HPF – Example Consider the effects of high-pass filtering an image: Original image | High-pass filtered image. Information-rich intensity discontinuities are extracted.

High Frequency Image Content An example of the importance of high-frequency content: Low-pass filter + High-pass filter = ?

High Frequency Image Content And the result of the experiment is…

HPFs in Face Recognition High-pass filters are used in face recognition to achieve quasi-illumination invariance: Original image of a localized face | High-pass filtered

Filter Design – Matched Filters Consider the convolution sum of a discrete signal with a particular filter: when is the filter response maximal?

Filter Design – Matched Filters The summation is the same as a vector dot product. The response is thus maximal when the two vectors are parallel, i.e. when the filter matches the local patch it overlaps.
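A small matched-filter demonstration; the template values, signal length and background clutter below are all illustrative assumptions. The response at each offset is exactly a dot product between the filter and the patch it overlaps, so a unit-norm filter responds most strongly where the signal contains a copy of the filter itself:

```python
import numpy as np

# Unit-norm template (the "matched filter")
template = np.array([1.0, 3.0, 3.0, 1.0])
template /= np.linalg.norm(template)

signal = np.zeros(32)
signal[10:14] = template                  # embed the matching pattern
signal += 0.05 * np.sin(np.arange(32))    # mild background clutter

# Correlation (convolution with the flipped filter) at every valid offset:
# each entry is the dot product of the template with one signal patch
response = np.correlate(signal, template, mode="valid")
```

The response peaks at offset 10, where the filter exactly overlaps the embedded pattern.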

Filter Design – Intensity Discontinuities Using the observation that the maximal filter response is exhibited when the filter matches the overlapping signal, we can start designing more complex filters. A kernel with maximal response to intensity edges: [0.5, 0.0, −0.5]

Filter Design – Intensity Discontinuities Better yet, perform Gaussian smoothing first to suppress noise: convolving the edge kernel with a Gaussian kernel yields a noise-suppressing kernel with a high response to intensity edges.
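A sketch of this combined kernel, assuming the three-tap edge kernel [0.5, 0.0, −0.5] from the previous slide, a unit-sigma Gaussian, and an illustrative noisy step edge. Since convolution is associative, smoothing then edge-filtering equals filtering once with the pre-convolved kernel:

```python
import numpy as np

rng = np.random.default_rng(4)

# Gaussian smoothing kernel (suppresses noise)
x = np.arange(-3, 4)
g = np.exp(-x ** 2 / 2.0)
g /= g.sum()

# Three-tap kernel with maximal response at intensity edges
edge = np.array([0.5, 0.0, -0.5])

# One combined noise-suppressing edge kernel, by associativity of convolution
smoothed_edge = np.convolve(g, edge)

# Apply it to a noisy step edge: the response peaks at the discontinuity
signal = np.concatenate([np.zeros(30), np.ones(30)])
noisy = signal + rng.normal(0.0, 0.05, signal.shape)
response = np.convolve(noisy, smoothed_edge, mode="same")
peak = int(np.abs(response).argmax())
```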

Unsharp Masking Enhancement The main principle of unsharp masking is to extract high-frequency information and add it onto the original image to enhance edges: image → HPF → + → output. Original edge | Enhanced edge

Unsharp Masking Enhancement Unsharp mask filtering performs noise reduction and edge enhancement in one go, by combining a Gaussian LPF with a Laplacian of Gaussian kernel: Gaussian smoothing + convolution with the negative Laplacian of Gaussian = result
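A 1D sketch of the unsharp-masking principle: the high-frequency residual (input minus its Gaussian low-pass) is added back onto the input. The blurred step edge, unit-sigma Gaussian and amount = 1.0 are illustrative assumptions:

```python
import numpy as np

# 1D toy: a slightly blurred step edge (a short linear ramp)
signal = np.concatenate([np.zeros(20), np.linspace(0.0, 1.0, 9), np.ones(20)])

# Gaussian low-pass kernel
x = np.arange(-3, 4)
g = np.exp(-x ** 2 / 2.0)
g /= g.sum()

# Unsharp masking: add the high-frequency residual back onto the input
low = np.convolve(signal, g, mode="same")
high = signal - low                 # high-pass component (delta minus LPF)
sharpened = signal + 1.0 * high     # amount = 1.0

# The edge transition becomes steeper than in the input (interior samples
# only, to avoid zero-padding effects at the array ends)
max_slope_in = np.abs(np.diff(signal))[5:-5].max()
max_slope_out = np.abs(np.diff(sharpened))[5:-5].max()
```

The characteristic overshoot either side of the edge is exactly the "enhanced edge" profile sketched on the slide.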

Unsharp Masking – Example Consider the following synthetic example: Gaussian smoothed then corrupted with Gaussian noise

Unsharp Masking – Example The result after unsharp masking the corrupted image.

– That is All for Today –
