Presentation transcript:

[Figure 31: M-channel maximally decimated filter bank. The input x(n) drives M analysis filters H_0(z), H_1(z), ..., H_{M-1}(z); each branch output θ_k(n) is decimated by M to give v_k(n), expanded by M to give f_k(n), filtered by the synthesis filter G_k(z) to give y_k(n), and the branch outputs are summed to form y(n).]

Without the decimators and interpolators, the input/output relationship is straightforward:

Y(z) = [ Σ_{k=0}^{M-1} G_k(z) H_k(z) ] X(z)   (32)

With the decimators and interpolators, we have to take it step by step.
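As a quick numerical sanity check of eqn. 32 (a sketch with arbitrary random FIR filters, not from the slides): without decimators and expanders, summing the filtered branch outputs is identical to filtering x(n) once with the single filter Σ_k G_k(z)H_k(z).

```python
import numpy as np

rng = np.random.default_rng(5)
M = 3
h = [rng.standard_normal(4) for _ in range(M)]      # analysis filters h_k(n)
g = [rng.standard_normal(4) for _ in range(M)]      # synthesis filters g_k(n)
x = rng.standard_normal(64)

# Left side of eqn. 32: run each branch (no decimation) and sum the outputs
y = sum(np.convolve(np.convolve(x, hk), gk) for hk, gk in zip(h, g))
# Right side: the equivalent single filter sum_k G_k(z) H_k(z)
t = sum(np.convolve(hk, gk) for hk, gk in zip(h, g))
assert np.allclose(y, np.convolve(x, t))
```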

The analysis filter in branch k gives

Θ_k(z) = H_k(z) X(z)   (33)

Decimating by M produces the decimated signal together with its images (W = e^{-j2π/M}):

V_k(z) = (1/M) Σ_{l=0}^{M-1} Θ_k(z^{1/M} W^l)   (34)

Expanding by M restores the original rate:

F_k(z) = V_k(z^M) = (1/M) Σ_{l=0}^{M-1} Θ_k(z W^l) = (1/M) Σ_{l=0}^{M-1} H_k(z W^l) X(z W^l)   (35)

The synthesis filter in branch k gives

Y_k(z) = G_k(z) F_k(z) = (1/M) G_k(z) Σ_{l=0}^{M-1} H_k(z W^l) X(z W^l)   (36)

Summing the branches gives the overall output

Y(z) = Σ_{k=0}^{M-1} Y_k(z) = (1/M) Σ_{l=0}^{M-1} X(z W^l) Σ_{k=0}^{M-1} G_k(z) H_k(z W^l)   (37)
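The decimate-then-expand step (eqn. 35) can be verified numerically. In the time domain, the cascade ↓M followed by ↑M simply zeroes every sample whose index is not a multiple of M; in the frequency domain this equals the average of M modulated (image) copies of the spectrum. A minimal numpy sketch, using the DFT as a sampled version of the z-transform on the unit circle:

```python
import numpy as np

M, N = 4, 32            # decimation factor M must divide the DFT length N
rng = np.random.default_rng(0)
theta = rng.standard_normal(N)          # branch signal before the decimator

# Decimate by M, then expand by M (insert M-1 zeros between samples)
v = theta[::M]
f = np.zeros(N)
f[::M] = v                              # f(n) = theta(n) on multiples of M, else 0

# Frequency-domain check of eqn. 35: F[k] = (1/M) * sum_l Theta[(k - l*N/M) mod N]
F = np.fft.fft(f)
Theta = np.fft.fft(theta)
images = sum(np.roll(Theta, l * N // M) for l in range(M)) / M
assert np.allclose(F, images)
```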

In matrix form,

Y(z) = (1/M) [G_0(z) G_1(z) ... G_{M-1}(z)] H(z) x(z)   (38)

where x(z) = [X(z) X(zW) ... X(zW^{M-1})]^T and

[H(z)]_{l,k} = H_k(z W^l)   (39)

H(z) is known as the Alias Component (AC) matrix. Note that Y(z) is a 1x1 matrix (a scalar)!

The terms with l ≠ 0 in eqn. 37 constitute the aliasing error. The aliasing-error-free condition is

Σ_{k=0}^{M-1} G_k(z) H_k(z W^l) = 0,  l = 1, 2, ..., M-1   (40)

When it holds, the output reduces to

Y(z) = (1/M) [ Σ_{k=0}^{M-1} G_k(z) H_k(z) ] X(z) = T(z) X(z)   (41)

and perfect reconstruction further requires the distortion function T(z) to be a pure delay, T(z) = c z^{-d}.
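For M = 2 these conditions can be checked by hand. A minimal sketch using the Haar (2-channel QMF) pair and the standard alias-cancelling synthesis choice G_0(z) = H_1(-z), G_1(z) = -H_0(-z) (an illustrative filter choice, not one prescribed by the slides):

```python
import numpy as np

s = 1 / np.sqrt(2)
h0 = np.array([s,  s]); h1 = np.array([s, -s])        # Haar analysis pair
# Alias-cancelling synthesis choice: G0(z) = H1(-z), G1(z) = -H0(-z)
alt = lambda h: h * (-1.0) ** np.arange(len(h))        # maps H(z) -> H(-z)
g0, g1 = alt(h1), -alt(h0)

# Eqn. 40 (M=2): alias term A(z) = G0(z)H0(-z) + G1(z)H1(-z) must vanish
A = np.convolve(g0, alt(h0)) + np.convolve(g1, alt(h1))
# Eqn. 41: distortion T(z) = (1/2)[G0(z)H0(z) + G1(z)H1(z)]
T = 0.5 * (np.convolve(g0, h0) + np.convolve(g1, h1))

print("alias term:", A)        # -> [0, 0, 0]
print("T(z) coeffs:", T)       # -> [0, 1, 0], i.e. T(z) = z^-1: perfect reconstruction
```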

The condition for perfect reconstruction is simple in theory: choose the synthesis filters so that

[G_0(z) G_1(z) ... G_{M-1}(z)] H(z) = M c z^{-d} [1 0 ... 0]

i.e. [G_0(z) ... G_{M-1}(z)] = M c z^{-d} [1 0 ... 0] H^{-1}(z). However, H^{-1}(z) is complicated to compute and generally results in IIR (possibly unstable) synthesis filters.

An effective solution employs polyphase decomposition. Recalling the (Type-1) polyphase decomposition of the analysis filters,

H_k(z) = Σ_{l=0}^{M-1} z^{-l} E_{k,l}(z^M)

or simply, stacking the M analysis filters,

[H_0(z) H_1(z) ... H_{M-1}(z)]^T = E(z^M) [1 z^{-1} ... z^{-(M-1)}]^T

i.e., the analysis bank is characterized by its polyphase matrix E(z).

A similar (Type-2) treatment of the synthesis filters gives

G_k(z) = Σ_{l=0}^{M-1} z^{-(M-1-l)} R_{l,k}(z^M)

or simply

[G_0(z) G_1(z) ... G_{M-1}(z)] = [z^{-(M-1)} z^{-(M-2)} ... 1] R(z^M)

i.e., the synthesis bank is characterized by its polyphase matrix R(z).
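The Type-1 decomposition is easy to verify numerically: the l-th polyphase component is just the subsequence e_l(n) = h(nM + l), and upsampling each component by M and delaying it by l rebuilds the original filter. A minimal sketch with an arbitrary length-8 filter:

```python
import numpy as np

M = 4
h = np.arange(8, dtype=float)            # arbitrary length-8 FIR filter h(n)
# Type-1 polyphase components: e_l(n) = h(nM + l), l = 0..M-1
E = [h[l::M] for l in range(M)]

# Rebuild H(z) = sum_l z^{-l} E_l(z^M): upsample each component by M, delay by l
H = np.zeros(len(h))
for l, e in enumerate(E):
    up = np.zeros(len(h))
    up[l::M] = e                         # coefficients of z^{-l} E_l(z^M)
    H += up
assert np.allclose(H, h)
```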

[Diagram: the analysis filters feeding the decimators are redrawn as a delay chain (z^{-1} elements) followed by the polyphase matrix E(z^M); likewise the synthesis filters can be replaced with their polyphase components, the matrix R(z^M) followed by a delay chain and the expanders.]

[Diagram: using the noble identities, E(z^M) is moved to the right of the decimators (↓M) and R(z^M) to the left of the expanders (↑M), so both polyphase matrices operate at the low rate as E(z) and R(z). Choosing R(z)E(z) = c z^{-d} I results in perfect reconstruction.]
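In the simplest (zero-order) special case, E(z) is a constant orthogonal matrix and R(z) = E^T, so R E = I trivially; the polyphase structure then reduces to a block transform applied at the low rate. A minimal sketch of this special case (the random orthogonal matrix is illustrative, not from the slides):

```python
import numpy as np

M = 4
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((M, M)))   # constant orthogonal polyphase matrix
E, R = Q, Q.T                                      # R(z)E(z) = I (zero-order case)

x = rng.standard_normal(32)                        # length a multiple of M
blocks = x.reshape(-1, M).T                        # blocked (polyphase) signal: one row per phase
y = (R @ (E @ blocks)).T.reshape(-1)               # analysis then synthesis at the low rate
assert np.allclose(y, x)                           # perfect reconstruction, zero delay
```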

To implement an M-channel perfect-reconstruction bank with FIR filters, to begin with, note that R(z)E(z) = c z^{-d} I requires R(z) = c z^{-d} E^{-1}(z); for R(z) to be FIR, E(z) must be chosen so that its inverse is FIR (for example, det E(z) a pure delay, or E(z) paraunitary).

Let the polyphase matrices be chosen in this form, and compute P(z) = R(z)E(z).

It can be seen that both the analysis and synthesis filters are FIR.

All the analysis filters can be generated from a single prototype filter.
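One standard way to generate all the analysis filters from a single prototype (a sketch of the DFT-modulated construction; the slide does not specify which modulation it uses, and the sinc prototype below is purely illustrative) is h_k(n) = h_0(n) e^{j2πkn/M}, so that H_k(z) = H_0(zW^k), i.e. every band is a frequency-shifted copy of the prototype response:

```python
import numpy as np

M, L = 4, 16
n = np.arange(L)
h0 = np.sinc((n - (L - 1) / 2) / M) / M            # illustrative lowpass prototype
# DFT-modulated bank: h_k(n) = h0(n) * exp(j*2*pi*k*n/M)  =>  H_k(z) = H0(z W^k)
banks = [h0 * np.exp(2j * np.pi * k * n / M) for k in range(M)]

w = np.linspace(0, 2 * np.pi, 512, endpoint=False)
H = lambda h: np.array([np.sum(h * np.exp(-1j * wi * n)) for wi in w])
H0 = H(h0)
for k in range(M):
    # |H_k| is |H0| shifted by 2*pi*k/M (512/M frequency bins)
    assert np.allclose(np.abs(H(banks[k])), np.abs(np.roll(H0, k * 512 // M)))
```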

Spectral Analysis. Given f = [f_0, f_1, ..., f_{N-1}]^T and an orthonormal basis Φ = {φ_r, r = 0, 1, ..., N-1}, i.e.

φ_r^T φ_s = δ(r - s)

the spectral (generalized Fourier) coefficients of f(n) are defined as

θ_r = φ_r^T f   (66)

with the inverse

f = Σ_{r=0}^{N-1} θ_r φ_r   (67)

Eqns. 66 and 67 define the orthonormal transform and its inverse.

Spectral Analysis. If the members of Φ are sinusoidal sequences, the transform is known as the Fourier Transform. The Parseval theorem expresses conservation of energy under an orthonormal transform:

Σ_{n=0}^{N-1} |f_n|^2 = Σ_{r=0}^{N-1} |θ_r|^2   (68)
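Eqns. 66-68 can be checked with the unitary DFT basis (for a complex basis, the transpose in eqn. 66 becomes a conjugate transpose). A minimal sketch:

```python
import numpy as np

N = 8
n, r = np.meshgrid(np.arange(N), np.arange(N))
Phi = np.exp(-2j * np.pi * r * n / N) / np.sqrt(N)   # orthonormal (unitary) DFT basis, rows phi_r

f = np.random.default_rng(2).standard_normal(N)
theta = Phi @ f                    # eqn. 66: forward transform
f_rec = Phi.conj().T @ theta       # eqn. 67: inverse transform
assert np.allclose(f_rec, f)
# eqn. 68 (Parseval): energy is conserved
assert np.isclose(np.sum(np.abs(f)**2), np.sum(np.abs(theta)**2))
```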

[Figure 32: An Application - Spectral Analysis. Orthonormal spectral analyser implemented with a multirate filter bank: f(n) feeds N analysis branches, each decimated by N, producing the coefficients θ_0, θ_1, ..., θ_{N-1}.]

An Application - Spectral Analysis. Transform efficiency is measured by decorrelation and energy compactness.
- Correlation: neighboring samples can be predicted from the current sample - an inefficient representation.
- Energy compactness: the importance of each sample in forming the entire signal. If every sample is equally important, every one of them has to be included in the representation - again an inefficient representation.
An ideal transform: 1. samples are totally unrelated to each other; 2. only a few samples are necessary to represent the entire signal.

How to derive the optimal transform? Given a signal f(n), define the mean and autocorrelation as

μ = E[f(n)]  and  r(k) = E[f(n) f(n+k)]

Assume f(n) is wide-sense stationary, i.e. its statistical properties are constant with changes in time. Define

m_f = E[f]   (69)

and

R_f = E[f f^T]   (70)

How to derive the optimal transform? Since f(n) is wide-sense stationary, eqn. 69 can be rewritten as

m_f = μ [1 1 ... 1]^T   (71)

and the autocorrelation matrix is Toeplitz,

[R_f]_{i,j} = r(i - j)   (72)

The covariance of f is given by

C_f = E[(f - m_f)(f - m_f)^T] = R_f - m_f m_f^T   (73)
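A Monte-Carlo sketch of eqns. 71-73, using an illustrative WSS model with r(k) = 0.8^|k| for the zero-mean part and a constant mean (assumptions for the demo, not from the slides): drawing many realizations and averaging f f^T should reproduce R_f = C_f + m_f m_f^T.

```python
import numpy as np

N, rho, mu = 5, 0.8, 2.0
idx = np.arange(N)
C = rho ** np.abs(np.subtract.outer(idx, idx))      # Toeplitz covariance (eqns. 72-73)
mf = mu * np.ones(N)                                # eqn. 71: constant mean vector
Rf = C + np.outer(mf, mf)                           # eqn. 73 rearranged: Rf = Cf + mf mf^T

# Draw WSS vectors with this mean/covariance and estimate E[f f^T]
rng = np.random.default_rng(6)
F = mf[:, None] + np.linalg.cholesky(C) @ rng.standard_normal((N, 100_000))
Rf_hat = (F @ F.T) / F.shape[1]
assert np.allclose(Rf_hat, Rf, atol=0.1)
```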

How to derive the optimal transform? The signal is transformed to its spectral coefficients with eqn. 66. Comparing the two sequences:
- f: a. adjacent terms are related; b. every term is important.
- θ: a. adjacent terms are unrelated; b. only the first few terms are important.

How to derive the optimal transform? The signal is transformed to its spectral coefficients with eqn. 66. Similarly to f, we can define the mean, autocorrelation and covariance matrix for θ:

m_θ = Φ^T m_f,  R_θ = Φ^T R_f Φ,  C_θ = Φ^T C_f Φ

How to derive the optimal transform? Adjacent terms of f are related; adjacent terms of θ should be unrelated. Adjacent terms are uncorrelated if every term is correlated only with itself, i.e., all off-diagonal terms of the autocorrelation matrix R_θ are zero. Define a measurement of the correlation between samples:

χ(Φ) = Σ_{r ≠ s} |[R_θ]_{r,s}|   (74)

How to derive the optimal transform? We assume that the mean of the signal is zero. This can be achieved simply by subtracting the mean from f if it is non-zero. The covariance and autocorrelation matrices are the same after the mean is removed.

How to derive the optimal transform? Every term of f is important, but only the first few terms of θ are important. Note: if only the first L terms (r = 0, ..., L-1) are used to reconstruct the signal, we have

f̂ = Σ_{r=0}^{L-1} θ_r φ_r   (75)

How to derive the optimal transform? If only the first L terms are used to reconstruct the signal, the error is

Δf = f - f̂ = Σ_{r=L}^{N-1} θ_r φ_r   (76)

The energy lost is given by

ε = Δf^T Δf   (77)

but φ_r^T φ_s = δ(r - s), hence

ε = Σ_{r=L}^{N-1} θ_r^2   (78)
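Eqns. 75-78 can be verified for any orthonormal basis; a minimal sketch using the orthonormal DCT-II basis (an illustrative choice, not prescribed by the slides): the energy of the truncation error equals the energy of the discarded coefficients.

```python
import numpy as np

N, L = 8, 3
n, r = np.meshgrid(np.arange(N), np.arange(N))
Phi = np.sqrt(2 / N) * np.cos(np.pi * (2 * n + 1) * r / (2 * N))
Phi[0] /= np.sqrt(2)                               # orthonormal DCT-II basis (rows phi_r)
assert np.allclose(Phi @ Phi.T, np.eye(N))         # orthonormality check

f = np.random.default_rng(3).standard_normal(N)
theta = Phi @ f
f_hat = Phi[:L].T @ theta[:L]                      # eqn. 75: keep the first L coefficients
err = f - f_hat                                    # eqn. 76
# eqn. 78: lost energy equals the energy of the discarded coefficients
assert np.isclose(err @ err, np.sum(theta[L:] ** 2))
```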

How to derive the optimal transform? Eqn. 78 describes the approximation error of a single sequence of signal data f. A more generic description, covering a collection (ensemble) of signal sequences, is given by

E[ε] = Σ_{r=L}^{N-1} E[θ_r^2] = Σ_{r=L}^{N-1} φ_r^T R_f φ_r   (79)

An optimal transform minimizes the error term in eqn. 79. However, the solution space is enormous and a constraint is required. Since the basis functions are orthonormal, the following objective function is adopted.
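The key identity behind eqn. 79, E[θ_r^2] = φ_r^T R_f φ_r, can be checked by Monte-Carlo. A sketch using an illustrative AR(1)-style autocorrelation r(k) = 0.9^|k| and the DCT-II basis (both are assumptions for the demo):

```python
import numpy as np

N = 8
idx = np.arange(N)
Rf = 0.9 ** np.abs(np.subtract.outer(idx, idx))    # zero-mean WSS process, r(k) = 0.9^|k|
rng = np.random.default_rng(4)
F = np.linalg.cholesky(Rf) @ rng.standard_normal((N, 200_000))   # ensemble of vectors f

n, r = np.meshgrid(idx, idx)
Phi = np.sqrt(2 / N) * np.cos(np.pi * (2 * n + 1) * r / (2 * N))
Phi[0] /= np.sqrt(2)                               # orthonormal DCT-II basis (rows phi_r)

Theta = Phi @ F
emp = np.mean(Theta ** 2, axis=1)                  # sample estimate of E[theta_r^2]
pred = np.einsum('rn,nm,rm->r', Phi, Rf, Phi)      # phi_r^T Rf phi_r (terms of eqn. 79)
assert np.allclose(emp, pred, rtol=0.05)
```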

How to derive the optimal transform?

J = Σ_{r=L}^{N-1} [ φ_r^T R_f φ_r - λ_r (φ_r^T φ_r - 1) ]   (80)

The term λ_r is known as the Lagrange multiplier; the constraint φ_r^T φ_r = 1 in eqn. 80 is based on the orthonormal property of the basis functions. The optimal solution can be found by setting the gradient of J to 0 for each value of r, i.e.,

∂J/∂φ_r = 2 R_f φ_r - 2 λ_r φ_r = 0   (81)

How to derive the optimal transform? The solution for each basis function is given by

R_f φ_r = λ_r φ_r   (82)

φ_r is an eigenvector of R_f and λ_r is an eigenvalue. Grouping the N basis functions gives an overall equation

R_f Φ = Φ Λ   (83)

where Λ = diag(λ_0, ..., λ_{N-1}). Hence

R_θ = Φ^T R_f Φ = Λ   (84)

which is a diagonal matrix: the decorrelation criterion is satisfied.
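This optimal transform (the Karhunen-Loève transform) is exactly an eigendecomposition of R_f. A minimal sketch, again assuming an illustrative AR(1)-style Toeplitz autocorrelation: the eigenvector basis diagonalizes R_θ (eqn. 84) and the sorted eigenvalues show the energy compaction.

```python
import numpy as np

N = 8
idx = np.arange(N)
Rf = 0.95 ** np.abs(np.subtract.outer(idx, idx))   # Toeplitz autocorrelation, r(k) = 0.95^|k|

lam, Phi = np.linalg.eigh(Rf)                      # eqn. 82: Rf phi_r = lambda_r phi_r
lam, Phi = lam[::-1], Phi[:, ::-1]                 # sort eigenvalues in decreasing order

R_theta = Phi.T @ Rf @ Phi                         # eqn. 84
assert np.allclose(R_theta, np.diag(lam))          # off-diagonal terms vanish: decorrelated
print("coefficient variances:", np.round(lam, 3))  # energy compaction: values decay quickly
```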