The Basic Theory of Filtering


The Basic Theory of Filtering James D. Johnston Audio Architect Microsoft Corporation Steve R. Hastings Software Developer boB Gudgel Self-described Geek

Coffee and Tea Make sure your cup is full Take a deep breath Here we go!

The syllabus Morning: What is a filter Why does a filter “filter”? What is this time/frequency thing? Impulse response FIR vs. IIR Convolution? But why? What are some filter properties? Things to consider when using a filter What do I need to know to design a filter? What is a filterbank, and why do I care?

Afternoon: Steve: Where can I get some freeware to design filters? How do I install it? How do I use it? How do I figure out how to build my filters? boB: What does filtering sound like? What happens when you get it wrong? How did I implement this, anyhow?

Key Points What does a simple analog filter do? What does a simple digital filter do? Impulse response? Why impulse response? Convolution? Huh? Why does this matter? Fourier and Z-transform relationships: Convolution vs. multiplication (N.B. No actual transforms are discussed) Filters: Zeros vs. poles FIR vs. IIR Phase response vs. Symmetry What can be FIR? What can’t be FIR? Frequency response vs. length Basic implications Filter examples

More Key Points What kind of impulse response implies what kind of performance? What if we want a set of filters That’s a filterBANK That’s an entire field of study itself.

What is a filter? Simply put, it retains history of the input signal. Take this simple lowpass filter as an example: Vin drives Vout through a resistor, with a capacitor from Vout to ground. This simple filter is a kind of integrator, in which the capacitor integrates the charge provided through the resistor. As the rate of change of Vout depends on the current through the resistor, low frequencies will be filtered less than high frequencies.

In other words: The output of a filter is a function not only of the input at the present time, but also of previous events. That’s what a linear filter does. No more, no less. We are sticking to linear filters today, thank you! There are many ways to build filters.

The Time and Frequency Response of the Analog Filter: the time (impulse) response and the frequency response are plotted side by side.

How does the analog filter exhibit memory? In the case shown, the capacitor is the memory element. It retains previous history. It does so by summing the history into one value, the voltage across the capacitor. In such a way, a single component can have a long memory.

Ok, what’s the “impulse response”? Impulse response is the response of the circuit to a (mathematical) signal of infinite height (and power), and infinitely short duration. This “unit impulse” has very special characteristics: It contains all frequencies. It has energy of “1” at all frequencies. It describes the behavior of the filter at all frequencies. Completely. POINT 3 is what you need to remember. The impulse response of a filter is a complete description of what it does.

Another way to look at the impulse response. The impulse response of a system shows how a filter captures the HISTORY of the signal. In other words: The value of the impulse response at a time ‘t’ demonstrates how much of the HISTORY of the signal is added to the output at time ‘t’ later.

Remember last month? Multiple speakers created a “comb filter”? Yep, that’s a filter. Different distance means different times in the history of the signal. You can plot a time response for such a filter just like you can plot the one for the simple RC filter above. We saw some of that last month.

Getting back to the analog filter: The analog filter’s output cannot change instantly, as that would require infinite current in the resistor; that means that the output depends on the HISTORY of the signal. In fact, if you have an input signal, and you integrate the product of the time-reversed impulse response times the signal, you get the filter’s output.

DID YOU SAY INTEGRATE? Well, it can be a sum, rather than an integral, and in fact in the digital domain, it is a sum, rather than an integral. In the digital domain, rather than having all frequencies, you have all frequencies inside the digital passband, which will be ½ the sample rate wide.

I’ll say more about that later. For now, let’s get back to that analog filter again for a minute.

So, what did that filter actually do? It stored part of the history of the signal. As the time went on, it “forgot”, exponentially, the contributions to its output from previous history. Not all impulse responses are so simple. An impulse response may be positive or negative An impulse response may “ring” or not.

How about a “Digital” filter? (Block diagram: the input is added to a feedback path consisting of a one-sample delay and a multiplication by (1-1/8), producing the output.) The digital filter uses history explicitly.
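For the curious, here is a minimal sketch of that one-pole recursive filter in Octave/Matlab. The coefficient placement is my reading of the block diagram (y[n] = x[n] + (1-1/8)·y[n-1]), not a quote from the slide:

bb = 1;                      % feedforward (input) coefficient
aa = [1, -(1 - 1/8)];        % feedback: subtract (1 - 1/8) times the previous output
x  = [1, zeros(1, 63)];      % a digital unit impulse: a single '1' sample
h  = filter(bb, aa, x);      % impulse response: decays by a factor of 7/8 every sample

Plotting h shows the same exponential “forgetting” described for the RC filter.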

That digital delay In digital terms, a delay of one sample is defined as a multiplication by ‘z’. Some texts use z^-1. Either works. The rules are the same, except for how you interpret some things that I’ll leave out for now. Remember a delay of ‘z’ is one sample. If I say I multiplied a signal by z^3, it means I delayed it by 3 samples. Yes, it really is that simple.

What is its time and frequency response? The result is the same (below half the sampling rate).

What about not below half the sampling rate? Inside the ‘z’ domain, there is no such thing. The ‘z’ domain is also called the “digital domain” by many people. This does involve a potential confusion that I will ignore for the time being. Remember when we talked about the sampling theorem a long time ago? We used an anti-aliasing filter. All other “frequencies” are represented by aliases below fs/2, and you don’t let them in in the first place! All information in the sampled domain is contained inside that bandwidth. All aliases/images contain exactly the SAME information as the baseband spectrum. No more, no less.

Why is the response the same? This filter also stores the history of the signal in an exponentially decaying fashion. In the digital filter, you can see the storage more directly, as the ‘z’ element. In the analog filter, reactive components provide the history in a continuous fashion. I picked the parameters to provide the same visible response as long as we stay well below half the sampling rate. Recursive (IIR) filter outputs depend on both previous outputs and input(s). Analog filters are mostly (but not completely) filters that depend on both output and input.

Back to Impulse Response The response of either filter to an impulse is called its “impulse response”. A digital impulse is simpler (as the bandwidth is finite), and consists of one ‘1’ sample, but both have the same use. This is the “time response” plotted in the preceding diagrams. The impulse response of a filter defines its memory (history) of a signal. Remember: The impulse response of the filter contains exactly all the information about a filter. This is, you will find out, very handy.
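As an aside, you can verify the “equal energy at all frequencies in the digital passband” property of the digital impulse yourself; a tiny sketch:

d = [1, zeros(1, 63)];   % a digital unit impulse
abs(fft(d))              % flat: magnitude 1 at every frequency bin in the digital passband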

Ok, what’s the big deal, JJ? Since the impulse response of the filter defines its interaction with the signal, this means that we can either use the recursive form shown before to implement the filter, or we can simply multiply the time reversed impulse response by the signal and sum (integrate) the result, to get the filter output. The two operations are exactly the same.

Let us use a 9th order elliptical filter as an example. (Plots: the impulse response, the noise input, and the two outputs, which exactly overlap. Red: by time-reverse, multiply, and sum. Green: by direct filtering.)
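Here is a sketch of that experiment in Octave/Matlab terms. The specific design parameters (0.5 dB ripple, 60 dB stopband, cutoff 0.3) are placeholders I made up, not the ones behind the plot:

[bb, aa] = ellip(9, 0.5, 60, 0.3);        % a 9th order elliptical filter (example parameters)
x = randn(1, 2048);                        % noise input
y_green = filter(bb, aa, x);               % green: direct (recursive) filtering
h = filter(bb, aa, [1 zeros(1, 4095)]);    % the impulse response (truncated; the filter is IIR)
y_red = conv(x, h);                        % red: time-reverse, multiply, and sum
y_red = y_red(1:length(x));                % keep the part that lines up with the input
max(abs(y_green - y_red))                  % tiny; limited only by the truncation of h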

Convolution The process of multiplying the time-reversed signal by the impulse response, and summing (integrating) is called “convolution”. Think of it as a way to expressly include the history of the signal in the filter output.

I thought we multiplied transfer functions, jj???? The usual way we see transfer functions expressed (notice this is in the ω, or frequency, domain): Y(ω) = S(ω)·H(ω). What’s actually happening in the time domain (note: ⊗ is used here to denote convolution; there are other notations): y(t) = s(t) ⊗ h(t). These are two ways of saying the exact same thing!

Multiplication in the time domain is the same as convolution in the frequency domain. Multiplication in the FREQUENCY domain is the same as convolution in the TIME domain. It works either way. If you read a DSP text, you will see the word “duality”. This is duality in action.

For typical filters, convolution is what happens in the time domain. Convolution is merely another way of expressing what happens when you filter a signal. It’s the same as multiplying the signal by the transfer function. This relationship holds for the ‘s’ domain, the ‘z’ domain, and quite some other domains as well.

Convolution is important because: Convolving in the time domain (like we just saw here) is the same as multiplying the Fourier Transform of the signal by the Fourier Transform of the Impulse Response and then taking the Inverse Fourier Transform This works the other way around, too, but isn’t usually as interesting to discuss in most filtering applications except perhaps as window functions. (There are exceptions, for instance, “TNS”.)

Multiplication of Transforms of Signal and Filter (block diagram: the signal goes to a signal spectrum and the filter to a filter spectrum; the product of the spectra is taken, then the inverse transform of the product).

An example of Convolution (panels, time domain on the left, frequency domain on the right: h(t) and |H(w)|; s(t) and |S(w)|; y = s(t) ⊗ h(t) and |Y(w)| = |H(w)*S(w)|; IFFT(Y) and |FFT(y)|). In this plot, it’s easier to see the linear superposition in the time domain because the two parts of the signal s(t) do not overlap.
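In Octave/Matlab this duality is a two-liner; the signal and impulse response below are arbitrary stand-ins:

x = randn(1, 256);                          % some signal
h = [0.25, 0.5, 0.25];                      % some impulse response (a tiny lowpass)
N = length(x) + length(h) - 1;              % length of the full linear convolution
y1 = conv(x, h);                            % convolve in the time domain
y2 = real(ifft(fft(x, N) .* fft(h, N)));    % multiply the transforms, then inverse transform
max(abs(y1 - y2))                           % the two agree to round-off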

What’s my point? Filtering is Convolution. Convolution is filtering. They are the same thing expressed in different domains. There are several ways to do filtering: IIR (Infinite Impulse Response) filters, like the two shown much earlier. They are called IIR because the filter’s impulse response continues to infinity (yes, at infinitely small value for a stable filter). These filters, effectively, use a topology that implements the history inside a few (very important, sensitive) state variables. FIR (Finite Impulse Response) filters, in other words, just do the convolution using a (potentially arbitrary) impulse response.

So, they are the same? Well, no. In fact, FIR filters have zeros, and IIR filters have poles. (In reality, nearly all IIR filters have both poles and zeros, which is to say that they have both an FIR and an IIR part.) FIR and IIR filters can have quite different properties, and usually do; they are two different means to an end. Neither one nor the other is always better.

More about IIR filters IIR filters must be implemented using feedback to implement the poles, in order to be truly IIR. IIR filters are “longer” (in terms of impulse response) compared to the memory they directly use. (i.e. a 2nd order filter can have 1000’s of samples of significant energy in its impulse response.) The impulse response length is what can determine the sharpness of the filter’s frequency and phase response. This extension places substantial requirements on the implementation in terms of accuracy, both of coefficients (analog or digital), and of related processes (digital storage, multiplication, addition). The data is stored in a few variables, so the accuracy required for those variables rises accordingly.

FIR Filters FIR filters are not generally as sensitive to coefficient roundoff FIR filters often require more computation, because you must do a multiply-add for each term in the impulse response FIR filters can be constant delay, IIR filters can not. Sometimes this matters.

What are the meaningful properties of a filter? The amplitude response (plotted in terms of amplitude vs. frequency) The phase response (plotted in terms of phase vs. frequency) What does phase response mean? Linear phase (i.e. constant time delay) Minimum phase Non-minimum-phase Linear phase is an important subset of this class that has all zeros. Attention: We are talking about single filters here, not filter banks. That is another subject, and one that places more constraints on individual filters!

A bit more on phase response “Linear Phase” (constant delay): If a filter has a constant delay, the phase shift of the filter will be t*w, where t is the time delay, and w the natural frequency (2 pi f). This means that a delay can exhibit enormous phase shift. This phase shift, however, is ONLY delay. Non-linear delay: This is the part of the phase shift (in and around the filter’s passband) that is not modeled by a straight line. The part that does not correspond to a straight line constitutes non-constant-time phase shift. Phase shift of “1 million degrees” in and of itself tells you nothing!
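To see the “enormous phase shift that is only delay” point for yourself, treat a plain K-sample delay as an FIR filter and look at its phase; a small sketch using the signal package’s freqz:

K = 10;
[h, w] = freqz([zeros(1, K), 1], 1);   % an FIR filter that just delays the signal by K samples
plot(w, unwrap(angle(h)));             % a straight line with slope -K: lots of phase, but ONLY delay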

Some example plots: an IIR filter (13th order elliptical, poles and zeros) vs. an FIR filter (512 point symmetric FIR). Impulse responses: notice the similar length. Magnitude responses: notice the similar frequency response. Phase responses: DIFFERENT. Note the phase nonlinearity in the IIR passband vs. the “linear” phase of the FIR.

Properties of Impulse Responses Symmetry Antisymmetry Asymmetry DC Gain Fs/2 Gain Frequency response Phase Response

DC gain The DC gain of an impulse response is exactly the sum of all of its non-zero coefficients. For many applications, one wishes to set this to one. This is easy. Divide the entire impulse response by the sum of all values of the impulse response.
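In code, the normalization is one line; the tap values below are just a hypothetical example:

bb = [1, 2, 3, 2, 1];     % any FIR tap values (hypothetical)
bb = bb / sum(bb);        % the DC gain is the sum of the taps; this scales it to exactly 1
sum(bb)                   % now exactly 1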

Gain at FS/2 This is also easy. Sum all of the EVEN taps Sum all of the ODD taps The difference of the two is the gain of the filter at FS/2
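With the same example taps, and remembering that Octave/Matlab indexing starts at 1 (so the “even” taps n = 0, 2, 4, … are bb(1:2:end)):

bb = [1, 2, 3, 2, 1];                              % the same hypothetical taps
g_nyquist = sum(bb(1:2:end)) - sum(bb(2:2:end))    % gain at fs/2: even taps minus odd taps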

Some useful things to know (I won’t prove them here) A symmetric impulse response implies: The passband phase response (one or multiple passbands) will look like a pure delay (linear phase). “Linear phase” means phi = w*t, where w is the natural frequency and ‘t’ is the time delay. An antisymmetric impulse response has some interesting (and special) properties. They are beyond this introductory tutorial, but are worth looking into for some applications. Such filters will have “linear phase” in the passband, but the intercept of such a filter at DC must be at ±90 degrees, and the filter must have a zero at DC. An asymmetric impulse response implies: The passband phase response is not a pure delay. Practically speaking this means that the response is the sum of a symmetric and an antisymmetric response.

Implications of the previous page No IIR filter can be linear phase. If it were, it would have to extend to infinity on both sides, and have infinite delay. Some IIR filters can “come close” under some circumstances. In such cases, they have substantial “pre-ringing” (as they must). A true IIR filter with linear phase must be “non-causal”, i.e. it must be able to “look ahead” in time. (Plot: 9th order Butterworth impulse response, shown with 10^8 gain.)

FIR filters are usually designed as “type 1 linear phase”, meaning that they are symmetric, with even filters having two identical center taps, and odd filters symmetric about a single center tap. Symmetric filters with an even number of taps must have a zero at pi and can not be highpass.

There are other kinds of FIR filters, in particular antisymmetric even tap filters, which have linear phase in the passband, but do not have zero phase shift at DC. Rather, they have + or – 90 degree phase shift at DC, and must have a zero at DC. These “type 4 filters” can not be used as lowpass filters.

A completely asymmetric FIR filter is a valid filter, and in some cases (phase compensation, etc) may be used. Such filters are usually for special-purpose applications, however.

A comparison of 3 FIR filters

Even vs. odd length (we will compare 32 tap vs. 33 tap lowpass FIR’s with identical parameters except for length). Even (32 taps): zero at pi, delay of 15.5 samples. Odd (33 taps): nonzero at pi, delay of 16 samples.

So? If you need an integer delay in the filter, use an odd-length filter. (N.B. In many applications, where even filters are applied twice, you can use two even filters.) If you need a zero at pi, use an even-length filter. If you don’t want a zero at pi, you can’t use a symmetric even-length filter. You can use an antisymmetric even length filter if you want a highpass filter, but then you’ll have a zero at DC. This means that symmetric high pass filters are of odd length.

More useful things to know The longer the impulse response is at a given level, the sharper the filter cutoff will be to that level This expresses the old, familiar knowledge that df · dt >= 1 (for a two-sided Gaussian) Yes, this means that if you want 1 Hz resolution, you need a 1 second impulse response.

Frequency Response vs. Length: short (32 tap) vs. long (64 tap). The top filters have a wide transition band (.25); the bottom filters have a .05 transition band.

The filters vs. their responses, 32 tap first (small vs. big transition band). Note: The passband ripple performs in a similar fashion.

Filter vs. response 64 tap

There are other tradeoffs possible IIR filters can have: Passband ripple only Stop band ripple only Neither passband nor stop band ripple (monotonic response) Both passband and stop band ripple FIR filters as usually designed can have: Ratio of passband ripple to stop band ripple controlled via design parameters. The filter response is not defined in a “transition” band. There are other FIR types possible, they are not that common in most present-day uses.

FIR Example – Passband vs. stopband ripple. One design has passband weight 10 and stopband weight .1; the other has passband weight .1 and stopband weight 10. Both filters have 32 taps and the same edge frequency and transition bandwidth.

What’s this about “windows”? A window is just another filter, usually a lowpass filter. It is a filter that is most often used to mitigate “edge effects” or other artifacts of truncation or blocking. It is normally MULTIPLIED in the time domain, therefore it CONVOLVES in the frequency domain.

Several examples of windows: These are examples of a windowed sinc (brick wall) filter. All filters are length 8191. Black is rectangular window, red is Hann, green is Blackman, blue is Hamming, cyan is Kaiser(5), magenta is Bartlett, yellow is Nuttall.
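A hand-rolled sketch of one of those windowed-sinc filters (the cutoff here is an arbitrary choice, and the Hann window is written out explicitly so nothing beyond core Octave/Matlab is needed):

L  = 8191;  wc = 0.1;                        % length, and cutoff as a fraction of fs/2
n  = (0:L-1) - (L-1)/2;                      % symmetric time index around the center tap
h  = wc * ones(1, L);                        % value of the ideal lowpass at n = 0
k  = (n ~= 0);
h(k) = sin(pi*wc*n(k)) ./ (pi*n(k));         % the ideal "brick wall" (sinc) impulse response
w  = 0.5 - 0.5*cos(2*pi*(0:L-1)/(L-1));      % a Hann window
hw = h .* w;                                 % multiply in time = convolve (smear) in frequency

Swapping the Hann line for a rectangular, Hamming, Blackman, etc. window reproduces the family of tradeoffs in the plot.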

How are filters described? FIR filters usually are simply listed by either the tap weights (individual values) or by a function that describes the tap weights. This is the same as providing the numerator polynomial. IIR filters are described as sets of poles and zeros. More on that now:

Poles? Zeros? WHAT!? Poles and zeros are a way of expressing a transfer function as two polynomials, one in the numerator, and one in the denominator. For either numerator or denominator, a polynomial can be described as a1 + a2*z + a3*z^2 + … where a1, a2, a3 are the “tap weights”. One can also calculate the roots of the polynomial. The roots of the numerator are the ZEROS. The roots of the denominator are the POLES.

Why poles? Why Zeros? A zero shows a value for the polynomial variable that results in a ZERO output. A pole shows a value for the polynomial variable that has an INFINITE output (the response looks like a pole). The meaning of poles and zeros in terms of frequency changes depending on the kind of transfer function (i.e. ‘s’ or Laplace domain, ‘z’ domain, ‘w’ or Fourier domain, or others), but for the commonly used domains it will still be some expression of frequency.

A pole/zero plot for a 5th order Butterworth, using bilinear Z form

Expressing Poles and Zeros In the FIR filter, the zeros are expressed by directly providing an impulse response, corresponding to the polynomial that results in the zeros. In an IIR filter, both the poles and zeros are often factored. This leads to a variety of topologies, shown on the next page. Factoring depends on the fact that any real coefficient polynomial can be factored into real roots or complex pairs of roots. A complex pair of roots will always have real coefficients.

Direct form vs. cascade form (block diagrams: a direct-form filter with coefficients b1…b5 and a1…a5, and one second order section of a cascade; multiple sections are cascaded). These are not the only two possibilities.

The Direct Form The direct form creates a number of difficulties. It increases the size (in terms of bit depth) of numerical coefficients. It increases the depth required for accumulators (mantissa for floating point). It’s not, generally speaking, very common or useful for more than 3rd order. Don’t do this: it can result in instabilities due to numerical resolution, and you can get “limit cycles” and other disturbing nonlinear behavior.

Factoring into second-order sections: There are a number of ways to make second order sections. All depend on the fact that you can factor a real-valued polynomial into second-order sections with real coefficient values. How does this relate to filtering? If you convolve a set of factored polynomials, you get the original polynomial That means that if you cascade sections with the polynomials implemented, you MULTIPLY the transfer functions. This is the same old duality in another form. What you’re doing is convolving things a part at a time, and then doing more and more in cascade. A second-order section is easy to check for stability. By factoring both numerator and denominator and grouping things correctly, you can ensure the best gain structure for a given filter. Your computer does this for you!
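The stability check really is that easy; a sketch with a made-up second order denominator:

aa = [1, -1.2, 0.72];                 % a hypothetical second order denominator
stable = all(abs(roots(aa)) < 1)      % the poles must lie inside the unit circle; true here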

So we factor FIR’s as well? Generally not. There are several reasons: The coefficient bit-depth growth is not nearly as extreme. Coefficients are not generally as large (in FIR filters, coefficients are most often considerably smaller than 1). FIR’s can’t go unstable, have limit cycles, or show some other kinds of disturbing behavior. Of course, they require more calculation, and they may require a wider accumulator than you expect.

Some examples of Filter Coefficients For an IIR 3rd order bilinear Z Butterworth filter with a cutoff at .125 fs/2, the numerator is: 0.0053 0.0159 0.0159 0.0053. The denominator is: 1.0000 -2.2192 1.7151 -0.4535. For a similar FIR filter, the tap values are: 0.0003 0.0034 0.0067 -0.0031 -0.0312 -0.0397 0.0349 0.1945 0.3343 0.3343 0.1945 0.0349 -0.0397 -0.0312 -0.0031 0.0067 0.0034 0.0003. Notice the difference in the size of the tap weights. In this example, the tap weights are quite moderate for an IIR denominator; longer filters will often have a substantially larger range of values. In general, this kind of tradeoff is well beyond the scope of a beginning tutorial, but everyone must be aware of this kind of issue. As you will discover in the next part of this tutorial, most filter design packages take care of this problem.

How to write a transfer function, in the ‘z’ domain: Using the convention from earlier (where ‘z’ denotes a one-sample delay), the transfer function for the third order Butterworth above is written as: H(z) = (0.0053 + 0.0159 z + 0.0159 z^2 + 0.0053 z^3) / (1 - 2.2192 z + 1.7151 z^2 - 0.4535 z^3). In factored form, it would look like a first order section multiplied by a second order section, all with real coefficients. Doing the factoring is one of the things Matlab, Octave, and other linear algebra and/or filter design packages are for.
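For instance, a sketch of letting Octave do the factoring (butter comes from the signal package the afternoon session covers):

[bb, aa] = butter(3, 0.125);   % the third order Butterworth from the previous slide
roots(bb)                      % the zeros: all three at -1 for this design
roots(aa)                      % the poles: one real pole plus a complex-conjugate pair
% If your signal package provides tf2sos, it will group these into
% real-coefficient first/second order sections for you.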

More about poles and zeros We’ll show some pole/zero plots, along with the impulse responses, frequency responses and phase responses.

Symmetric FIR (odd length)

Allpass Filter

Designing Filters Steve will show you Octave, a freeware program that allows you to design both IIR and FIR filters. We’ll discuss a bit, here, about designing both kinds of filters.

Designing IIR Filters First, decide what kind of filter you want: Butterworth (no ripple, monotonic amplitude response, requires more poles/zeros) Chebychev 1 (passband ripple, monotonic stopband) Chebychev 2 (stopband ripple, monotonic passband) Elliptical (equiripple passband, equiripple stopband. Shortest filter for a given rejection ratio. Has issues.)

How to do that? Use the “help” function: help butter (for Butterworth), help cheby1 (for Chebychev 1), help cheby2 (for Chebychev 2), help ellip (for Cauer elliptical). Follow the directions. Time does not permit a full examination of all of the calling parameters. All have the form [bb, aa]=butter(3,.125), for example. bb is the zero (numerator) polynomial; aa is the pole (denominator) polynomial.
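For reference, the extra arguments for the ripple-controlled types look like this; the orders, ripple figures, and cutoffs below are arbitrary placeholders, so read the help text for the real story:

[bb, aa] = butter(3, 0.125);         % order 3, cutoff at 0.125 * fs/2
[bb, aa] = cheby1(3, 1, 0.125);      % 1 dB of passband ripple
[bb, aa] = cheby2(3, 40, 0.125);     % 40 dB stopband attenuation (the frequency is the stopband edge)
[bb, aa] = ellip(3, 1, 40, 0.125);   % 1 dB passband ripple and 40 dB stopband attenuation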

What does the Frequency Response look like? Use “freqz(bb,aa)” It will give you frequency and phase response.

Designing FIR filters Use “remez” This takes a bit of doing. Before you use “remez” you need to decide: Length of the filter (a single integer) The points at which frequency response changes The amplitudes at each of those points.

So you have: len=15 (NOTE: That means a 16 tap filter; the order is 1 less than the filter length). freq=[ 0 .2 .6 1] (that is a list of 4 frequencies, corresponding to DC, .2, .6 and 1 times half the sampling rate, whatever that is; 0 and 1 must be included). amp=[1 1 0 0] (that means you want the amplitude at 0 and .2 to be close to 1, and at .6 and 1 to be zero). bb=remez(len,freq,amp) will give you a filter that is optimized to be as close as possible to that response.

But I care about passband ripple! (or stopband ripple) w=[10 1] (this is half as long as freq and amp vectors, both of which must be even length) Here, the 10 means that the ERROR in the filter design between the first two frequency points is counted 10 times as much as the weight (1) between the second two points. So bb=remez(len,freq,amp,w) will give you a filter with the error weighting you specify. NOTE: you can not weight the error in a transition band. By definition, there is no error in a transition band.
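Putting the last two slides together, as a sketch:

len  = 15;                    % a 16 tap filter (order 15)
freq = [0 .2 .6 1];           % band edges, as fractions of fs/2
amp  = [1 1 0 0];             % desired amplitude at each band edge
bb   = remez(len, freq, amp);        % the unweighted design
w    = [10 1];                       % passband error counts 10x as much as stopband error
bbw  = remez(len, freq, amp, w);     % the weighted design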

Now, frequency response: freqz(bb). That’s all it takes. You will see frequency and phase response. remez always designs symmetric filters, unless you tell it to do something else. “help remez” will get you as many options as you could ever want to know about.

To Take Home: Filtering is the practice of convolving an impulse response (the time response of the filter) with the signal. FIR filters directly implement this convolution. IIR filters use a functional representation that does the convolution implicitly, with some cost in implementation issues.

More to take home: Frequency (complex spectrum) response and impulse response are duals. The relationship df · dt >= 1 matters in filter design just like it does in anything else. If you want a sharp filter, you have a long impulse response. If you want a short impulse response, you can not have a sharp filter cutoff. IIR filters do not shorten the impulse response, they simply operate in a different fashion.

To come after the break: Steve Hastings will give you a set of tips for tools that you can get off the net to help you design, plot, and understand filters (and a whole lot more things) boB Gudgel will show you what it sounds like to implement various filters, and offer tips on how to do this kind of work in the real world.

Filterbanks Ok, now what is a filter? And a filterBANK? (Diagrams: a single filter with one input and one output, vs. a filterbank with one input and many filter outputs.)

Filterbanks A filterbank is nothing but a way to implement a set of filters, generally strongly mathematically related, in one operation. A filterbank can always be decomposed into a set of individual filters. This is usually a lot more work than it’s worth, but not always.

The famous audio filterbank: This would be the “MDCT”, or “Modified Discrete Cosine Transform”. Annoyingly, it’s not a transform, it’s a FILTERBANK. It is an exact reconstruction filterbank, though, so it does obey most of the rules of transforms, except that it has overlap between blocks, and remains critically sampled. A transform either has no overlap, OR is not critically sampled.

Critically Sampled – Whaaa??? Critically sampled is a simple concept at its heart: it means that in the filtered domain, you have the same number of samples that you do in the unfiltered domain.

The theory of filterbanks is long, deep, and wide And I won’t even try to relate it in an hour. BUT, what you need to remember is that an output of a filterbank is just like running some particular kind of a filter on the signal. The filterbank just does a lot of these at once. It may also: Downsample (i.e. critical sampling) Be oversampled (more values in the filtered domain than in the input and output domains)

What are some applications? For an MDCT, the obvious one is coding: It is critically sampled. Ergo, no extra data to code It does a good job of frequency analysis, so you can relate the perceptual model well, and you can also get good signal processing gain from it. It has an efficient form for calculation, very similar to an FFT of half its length.

Some rules about critically sampled banks. If you’re not careful, very odd things happen when you modify the filtered results. The best known of these is the “pre-echo” in audio codecs. There are also other things that can go wrong Why? That critical sampling means that the filterbank creates a lot of aliasing, and then cancels it on reconstruction. Mess with the signal in the filtered domain, and the aliasing does not cancel.

How about oversampled filterbanks? You can avoid aliasing problems, so you can modify the signal. You can use it for things like equalizers. Gain compressors work well with this kind of filterbank. There are other applications that are far too complicated to bring up at present.

So - Filterbanks At their heart, nothing but a handy way of implementing a whole set of filters at the same time. There are more things to this than computational efficiency.