The Basic Theory of Filtering


1 The Basic Theory of Filtering
James D. Johnston, Audio Architect, Microsoft Corporation; Steve R. Hastings, Software Developer; boB Gudgel, Self-described Geek

2 Coffee and Tea Make sure your cup is full Take a deep breath
Here we go!

3 The syllabus Morning: What is a filter? Why does a filter "filter"? What is this time/frequency thing? Impulse response. FIR vs. IIR. Convolution? But why? What are some filter properties? Things to consider when using a filter. What do I need to know to design a filter? What is a filterbank, and why do I care?

4 Afternoon: Steve: Where can I get some freeware to design filters? How do I install it? How do I use it? How do I figure out how to build my filters? boB: What does filtering sound like? What happens when you get it wrong? How did I implement this, anyhow?

5 Key Points What does a simple analog filter do? What does a simple digital filter do? Impulse response? Why impulse response? Convolution? Huh? Why does this matter? Fourier and Z-transform relationships: convolution vs. multiplication (N.B. no actual transforms are discussed). Filters: zeros vs. poles. FIR vs. IIR. Phase response vs. symmetry. What can be FIR? What can't be FIR? Frequency response vs. length. Basic implications. Filter examples.

6 More Key Points What kind of impulse response implies what kind of performance? What if we want a set of filters? That's a filterBANK, and that's an entire field of study in itself.

7 What is a filter? Simply put, it retains a history of the input signal.
Take this simple RC lowpass filter (input Vin, output Vout) as an example. This simple filter is a kind of integrator, in which the capacitor integrates the charge provided through the resistor. Because the rate of change of Vout depends on the current through the resistor, low frequencies are filtered less than high frequencies.

8 In other words: The output of a filter is a function not only of the input at the present time, but also of previous events. That’s what a linear filter does. No more, no less. We are sticking to linear filters today, thank you! There are many ways to build filters.

9 The Time and Frequency Response of the Analog Filter
[Plots: time (impulse) response and frequency response of the RC lowpass filter.]

10 How does the analog filter exhibit memory?
In the case shown, the capacitor is the memory element. It retains previous history. It does so by summing the history into one value, the voltage across the capacitor. In such a way, a single component can have a long memory.

11 Ok, what's the "impulse response"?
Impulse response is the response of the circuit to a (mathematical) signal of infinite height (and power) and infinitely short duration. This "unit impulse" has very special characteristics: It contains all frequencies. It has energy of "1" at all frequencies. It describes the behavior of the filter at all frequencies. Completely. POINT 3 is what you need to remember: the impulse response of a filter is a complete description of what it does.

12 Another way to look at the impulse response.
The impulse response of a system shows how a filter captures the HISTORY of the signal. In other words: The value of the impulse response at a time ‘t’ demonstrates how much of the HISTORY of the signal is added to the output at time ‘t’ later.

13 Remember last month? Multiple speakers created a “comb filter”?
Yep, that's a filter. Different distance means "different times in the history of the signal." You can plot a time response for such a filter just like you can plot the one for the simple RC filter above. We saw some of that last month.

14 Getting back to the analog filter:
The analog filter's output cannot change instantly, as that would require infinite current in the resistor; that means that the output depends on the HISTORY of the signal. In fact, if you have an input signal and you integrate the product of the time-reversed impulse response times the signal, you get the filter's output.

15 DID YOU SAY INTEGRATE? Well, it can be a sum, rather than an integral, and in fact in the digital domain, it is a sum, rather than an integral. In the digital domain, rather than having all frequencies, you have all frequencies inside the digital passband, which will be ½ the sample rate wide.

16 I’ll say more about that later.
For now, let’s get back to that analog filter again for a minute.

17 So, what did that filter actually do?
It stored part of the history of the signal. As time went on, it "forgot", exponentially, the contributions to its output from previous history. Not all impulse responses are so simple: an impulse response may be positive or negative, and an impulse response may "ring" or not.

18 How about a “Digital” filter?
[Block diagram: input and feedback are summed; the feedback path runs through a gain of (1 - 1/8) and a one-sample delay back to the adder.] The digital filter uses history explicitly.
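A minimal Octave sketch of this kind of one-pole recursive filter (the input-gain scaling of 1/8 is an assumption; the slide only shows the 7/8 feedback path and the one-sample delay):

```octave
% One-pole recursive (IIR) lowpass: y[n] = (1/8)*x[n] + (7/8)*y[n-1]
b = 1/8;                   % feedforward (input) gain, assumed scaling
a = [1, -(1 - 1/8)];       % denominator: implements the 7/8 feedback through one sample of delay
x = [1, zeros(1, 63)];     % a digital unit impulse
y = filter(b, a, x);       % the impulse response: it decays exponentially
stem(0:63, y);             % plot it sample by sample
```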

19 That digital delay In digital terms, a delay of one sample is defined as a multiplication by 'z'. Some texts use z^-1 instead; either works. The rules are the same, except for how you interpret some things that I'll leave out for now. Remember, a delay of 'z' is one sample. If I say I multiplied a signal by z^3, it means I delayed it by 3 samples. Yes, it really is that simple.
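As a quick hedged illustration in Octave (using the slide's convention that one 'z' is one sample of delay), multiplying by z^3 is just convolution with a polynomial whose only nonzero tap sits three samples in:

```octave
x  = [1 2 3 4 5];        % any signal
d3 = [0 0 0 1];          % the polynomial for "z^3": a pure 3-sample delay
conv(x, d3)              % -> 0 0 0 1 2 3 4 5
```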

20 What is its time and frequency response?
The result is the same (below half the sampling rate).

21 What about not below half the sampling rate?
Inside the ‘z’ domain, there is no such thing. The ‘z’ domain is also called the “digital domain” by many people. This does involve a potential confusion that I will ignore for the time being. Remember when we talked about the sampling theorem a long time ago? We used an anti-aliasing filter. All other “frequencies” are represented by aliases below fs/2, and you don’t let them in in the first place! All information in the sampled domain is contained inside that bandwidth. All aliases/images contain exactly the SAME information as the baseband spectrum. No more, no less.

22 Why is the response the same?
This filter also stores the history of the signal in an exponentially decaying fashion. In the digital filter, you can see the storage more directly, as the ‘z’ element. In the analog filter, reactive components provide the history in a continuous fashion. I picked the parameters to provide the same visible response as long as we stay well below half the sampling rate. Recursive (IIR) filter outputs depend on both previous outputs and input(s). Analog filters are mostly (but not completely) filters that depend on both output and input.

23 Back to Impulse Response
The response of either filter to an impulse is called its "impulse response". A digital impulse is simpler (as the bandwidth is finite), and consists of a single '1' sample, but both have the same use. This is the "time response" plotted in the preceding diagrams. The impulse response of a filter defines its memory (history) of a signal. Remember: the impulse response of the filter contains exactly all the information about a filter. This is, you will find out, very handy.

24 Ok, what’s the big deal, JJ?
Since the impulse response of the filter defines its interaction with the signal, this means that we can either use the recursive form shown before to implement the filter, or we can simply multiply the time-reversed impulse response by the signal and sum (integrate) the result to get the filter output. The two operations are exactly the same.

25 Let us use a 9th order Elliptical filter as an example:
[Plots: the filter's impulse response and a noise input; the output computed by time-reverse, multiply, and sum (red) exactly overlaps the output computed by direct filtering (green).]
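A hedged Octave sketch of the same experiment (the elliptic-filter parameters below are assumptions, not necessarily those behind the plot; ellip comes from the Octave signal package):

```octave
pkg load signal
[b, a] = ellip(9, 0.5, 60, 0.3);        % 9th-order elliptic lowpass (parameters assumed)
h  = filter(b, a, [1 zeros(1, 4095)]);  % the (truncated) impulse response
x  = randn(1, 4096);                    % a noise input
y1 = filter(b, a, x);                   % direct (recursive) filtering
y2 = conv(x, h);                        % time-reverse, multiply, and sum
y2 = y2(1:length(x));                   % keep the same length for comparison
max(abs(y1 - y2))                       % tiny: only the truncation of h differs
```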

26 Convolution The process of multiplying the time-reversed signal by the impulse response, and summing (integrating) is called “convolution”. Think of it as a way to expressly include the history of the signal in the filter output.

27 I thought we multiplied transfer functions, jj????
The usual way we see transfer functions expressed (notice this is in the 'ω', or frequency, domain): S(ω) → [H(ω)] → Y(ω) = S(ω)·H(ω). What's actually happening in the time domain (note: '⊗' is used here to denote convolution; there are other notations): s(t) → [h(t)] → y(t) = s(t) ⊗ h(t). These are two ways of saying the exact same thing!

28 Multiplication in the time domain is the same as convolution in the frequency domain.
Multiplication in the FREQUENCY domain is the same as convolution in the TIME domain. It works either way. If you read a DSP text, you will see the word “duality”. This is duality in action.

29 For typical filters, convolution is what happens in the time domain.
Convolution is merely another way of expressing what happens when you filter a signal. It’s the same as multiplying the signal by the transfer function. This relationship holds for the ‘s’ domain, the ‘z’ domain, and quite some other domains as well.

30 Convolution is important because:
Convolving in the time domain (like we just saw here) is the same as multiplying the Fourier Transform of the signal by the Fourier Transform of the impulse response and then taking the Inverse Fourier Transform. This works the other way around, too, but that direction isn't usually as interesting to discuss in most filtering applications, except perhaps for window functions. (There are exceptions, for instance "TNS".)
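A small Octave sketch of that statement (the signal and impulse response here are arbitrary assumptions); zero-padding both FFTs to the full convolution length makes the circular convolution match the linear one:

```octave
s = randn(1, 256);                      % arbitrary signal
h = ones(1, 8) / 8;                     % arbitrary impulse response (a moving average)
N = length(s) + length(h) - 1;          % full linear-convolution length
y_time = conv(s, h);                    % convolve in the time domain
y_freq = ifft(fft(s, N) .* fft(h, N));  % multiply the transforms, then inverse transform
max(abs(y_time - real(y_freq)))         % agrees to rounding error
```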

31 Multiplication of Transforms of Signal and Filter
[Diagram: signal spectrum multiplied by filter spectrum gives the product of spectra, followed by the inverse transform of the product.]

32 An example of Convolution
[Plots, time domain vs. frequency domain: h(t) and |H(ω)|; s(t) and |S(ω)|; y = s(t) ⊗ h(t) and |Y(ω)| = |H(ω)·S(ω)|; IFFT(Y) and |FFT(y)|.] In this plot, it's easier to see the linear superposition in the time domain because the two parts of the signal s(t) do not overlap.

33 What's my point? Filtering is convolution. Convolution is filtering. They are the same thing expressed in different domains. There are several ways to do filtering: IIR (Infinite Impulse Response) filters, like the two shown much earlier. They are called IIR because the filter's impulse response continues to infinity (yes, at infinitely small values for a stable filter). These filters, effectively, use a topology that implements the history inside a few (very important, sensitive) state variables. FIR (Finite Impulse Response) filters, in other words, just do the convolution using a (potentially arbitrary) impulse response.

34 So, they are the same? Well, no. In fact, FIR filters have zeros, and IIR filters have poles. (In reality, nearly all IIR filters have both poles and zeros, which is to say that they have both an FIR and an IIR part.) FIR and IIR filters can have quite different properties, and usually do; they are two different means to an end. Neither one nor the other is always better.

35 More about IIR filters IIR filters must be implemented using feedback to implement the poles, in order to be truly IIR. IIR filters are "longer" (in terms of impulse response) compared to the memory they directly use (i.e. a 2nd-order filter can have thousands of samples of significant energy in its impulse response). The impulse response length is what can determine the sharpness of the filter's frequency and phase response. This extension places substantial requirements on the implementation in terms of accuracy, both of coefficients (analog or digital) and of related processes (digital storage, multiplication, addition). The data is stored in a few variables, so the accuracy required for those variables rises accordingly.

36 FIR Filters FIR filters are not generally as sensitive to coefficient roundoff. FIR filters often require more computation, because you must do a multiply-add for each term in the impulse response. FIR filters can be constant-delay; IIR filters can not. Sometimes this matters.

37 What are the meaningful properties of a filter?
The amplitude response (plotted in terms of amplitude vs. frequency). The phase response (plotted in terms of phase vs. frequency). What does phase response mean? Linear phase (i.e. constant time delay), minimum phase, or non-minimum-phase; linear phase is an important subset of the all-zero class. Attention: We are talking about single filters here, not filter banks. That is another subject, and one that places more constraints on individual filters!

38 A bit more on phase response
"Linear Phase" (constant delay): If a filter has a constant delay, the phase shift of the filter will be t·ω, where t is the time delay and ω the natural frequency (2·pi·f). This means that a delay can exhibit enormous phase shift. This phase shift, however, is ONLY delay. Non-linear delay: This is the part of the phase shift (in and around the filter's passband) that is not modeled by a straight line. The part that does not correspond to a straight line constitutes non-constant-time phase shift. Phase shift of "1 million degrees" in and of itself tells you nothing!

39 Some example plots: IIR vs. FIR
[Plots comparing a 13th-order elliptical IIR (poles and zeros) with a 512-point symmetric FIR: the impulse responses have similar length, the magnitude responses are similar, but the phase responses DIFFER: note the phase nonlinearity in the IIR passband vs. the "linear" phase of the FIR.]

40 Properties of Impulse Responses
Symmetry, antisymmetry, asymmetry; DC gain; fs/2 gain; frequency response; phase response.

41 DC gain The DC gain of an impulse response is exactly the sum of all of its coefficients. For many applications, one wishes to set this to one. This is easy: divide the entire impulse response by the sum of all values of the impulse response.
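As a one-line Octave sketch (the impulse response here is an arbitrary assumed example):

```octave
h = [1 2 3 4 3 2 1];      % some impulse response
dc_gain = sum(h);         % the gain at DC is the sum of the taps
h = h / dc_gain;          % now sum(h) == 1: unity (0 dB) gain at DC
```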

42 Gain at fs/2 This is also easy: sum all of the EVEN taps, sum all of the ODD taps; the difference of the two is the gain of the filter at fs/2.
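In Octave terms (again with an assumed example impulse response); the alternating-sign sum is just the transfer function evaluated at z = -1:

```octave
h = [1 2 3 4 3 2 1] / 16;                    % assumed example taps
g_fs2 = sum(h(1:2:end)) - sum(h(2:2:end));   % even-delay taps minus odd-delay taps
                                             % (Octave indexes from 1, so h(1) is the tap at delay 0)
abs(polyval(h, -1))                          % cross-check: |H| at z = -1, i.e. at fs/2
```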

43 Some useful things to know (I won’t prove them here)
A symmetric impulse response implies: The passband phase response (one or multiple passbands) will look like a pure delay (linear phase). "Linear phase" means φ = ω·t, where ω is the natural frequency and 't' is the time delay. An antisymmetric impulse response has some interesting (and special) properties. They are beyond this introductory tutorial, but are worth looking into for some applications. Such filters will have "linear phase" in the passband, but the intercept of such a filter at DC must be at ±90 degrees, and the filter must have a zero at DC. An asymmetric impulse response implies: The passband phase response is not a pure delay. Practically speaking, this means that the response is the sum of a symmetric and an antisymmetric response.

44 Implications of the previous page
No IIR filter can be linear phase. If it were, it would have to extend to infinity on both sides, and have infinite delay. Some IIR filters can "come close" under some circumstances; in such cases, they have substantial "pre-ringing" (as they must). A true IIR filter with linear phase must be "non-causal", i.e. it must be able to "look ahead" in time. [Plot: 9th-order Butterworth impulse response, shown with 10^8 gain.]

45 FIR filters are usually designed as "type 1 linear phase", meaning that they are symmetric, with even-length filters having two identical center taps, and odd-length filters symmetric about a single center tap. Symmetric filters with an even number of taps must have a zero at pi and can not be highpass. [Plots: even-length and odd-length examples.]

46 There are other kinds of FIR filters, in particular antisymmetric even-tap filters, which have linear phase in the passband but do not have zero phase shift at DC. Rather, they have + or - 90 degree phase shift at DC, and must have a zero at DC. These "type 4" filters can not be used as lowpass filters.

47 A completely asymmetric FIR filter is a valid filter, and in some cases (phase compensation, etc) may be used. Such filters are usually for special-purpose applications, however.

48 A comparison of 3 FIR filters

49 Even vs. odd length (comparing 32-tap vs. 33-tap lowpass FIRs with identical parameters except for length): the even-length filter has a zero at pi and a delay of 15.5 samples; the odd-length filter is nonzero at pi and has a delay of 16 samples.

50 So? If you need an integer delay in the filter, use an odd-length filter. (N.B. In many applications, where even filters are applied twice, you can use two even filters.) If you need a zero at pi, use an even-length filter. If you don’t want a zero at pi, you can’t use a symmetric even-length filter. You can use an antisymmetric even length filter if you want a highpass filter, but then you’ll have a zero at DC. This means that symmetric high pass filters are of odd length.

51 More useful things to know
The longer the impulse response is at a given level, the sharper the filter cutoff will be to that level This expresses the old, familiar knowledge that df · dt >= 1 (for a two-sided Gaussian) Yes, this means that if you want 1 Hz resolution, you need a 1 second impulse response.

52 Frequency Response vs. Length [Plots: short (32-tap) vs. long (64-tap) filters. The top filters have a wide transition band (0.25); the bottom ones a 0.05 transition band.]

53 The filters vs. their responses, 32 taps first [Plots: small vs. big transition band.] Note: the passband ripple behaves in a similar fashion.

54 Filter vs. response 64 tap

55 There are other tradeoffs possible
IIR filters can have: passband ripple only; stopband ripple only; neither passband nor stopband ripple (monotonic response); or both passband and stopband ripple. FIR filters as usually designed can have the ratio of passband ripple to stopband ripple controlled via design parameters; the filter response is not defined in a "transition" band. There are other FIR types possible, but they are not that common in most present-day uses.

56 FIR Example – Passband vs. stopband ripple.
[Plots: passband weight 10 / stopband weight 0.1 vs. passband weight 0.1 / stopband weight 10. Both filters have 32 taps and the same edge frequency and transition bandwidth.]

57 What’s this about “windows”?
A window is just another filter, usually a lowpass filter. It is a filter that is most often used to mitigate “edge effects” or other artifacts of truncation or blocking. It is normally MULTIPLIED in the time domain, therefore it CONVOLVES in the frequency domain.

58 Several examples of windows:
These are examples of a windowed sinc ("brick wall") filter. All filters are length 8191. Black is rectangular window, red is Hann, green is Blackman, blue is Hamming, cyan is Kaiser(5), magenta is Bartlett, yellow is Nuttall.
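A hedged Octave sketch of a (much shorter) windowed-sinc lowpass; the length and cutoff below are assumptions, and sinc, hanning, and freqz are available in Octave (core or the signal package):

```octave
N  = 255;  fc = 0.25;               % length and cutoff (fraction of fs/2), both assumed
n  = (0:N-1) - (N-1)/2;             % centered sample index
h  = fc * sinc(fc * n);             % ideal "brick wall" lowpass impulse response
hw = h .* hanning(N)';              % MULTIPLY by a Hann window in the time domain
freqz(hw, 1);                       % the spectra CONVOLVE: a wider transition, much lower sidelobes
```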

59 How are filters described?
FIR filters usually are simply listed by either the tap weights (individual values) or by a function that describes the tap weights. This is the same as providing the numerator polynomial. IIR filters are described as sets of poles and zeros. More on that now:

60 Poles? Zeros? WHAT!? Poles and zeros are a way of expressing a transfer function as two polynomials, one in the numerator and one in the denominator. For either numerator or denominator, the polynomial can be written as a1 + a2*z + a3*z^2 + ..., where a1, a2, a3 are the "tap weights". One can also calculate the roots of the polynomial. The roots of the numerator are the ZEROS. The roots of the denominator are the POLES.
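A small Octave sketch (the Butterworth filter here is just an assumed example; butter comes from the signal package):

```octave
pkg load signal
[bb, aa] = butter(3, 0.125);   % numerator and denominator polynomials
roots(bb)                      % roots of the numerator   -> the ZEROS
roots(aa)                      % roots of the denominator -> the POLES
```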

61 Why poles? Why zeros? A zero is a value of the polynomial variable that results in a ZERO output. A pole is a value for which the transfer function has an INFINITE output (the response looks like a pole). The meaning of poles and zeros in terms of frequency changes depending on the kind of transfer function (i.e. 's' or Laplace domain, 'z' domain, 'ω' or Fourier domain, or others), but for the commonly used domains it will still be some expression of frequency.

62 A pole/zero plot for a 5th order Butterworth, using bilinear Z form
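A hedged sketch of how such a plot can be produced in Octave (the cutoff frequency is an assumption; butter uses the bilinear z-transform internally, and zplane is in the signal package):

```octave
pkg load signal
[bb, aa] = butter(5, 0.25);    % 5th-order Butterworth, cutoff assumed at 0.25*(fs/2)
zplane(bb, aa);                % zeros plotted as 'o', poles as 'x', with the unit circle
```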

63 Expressing Poles and Zeros
In the FIR filter, the zeros are expressed by directly providing an impulse response, corresponding to the polynomial that yields the zeros. In an IIR filter, both the poles and zeros are often factored. This leads to a variety of topologies, shown on the next page. Factoring depends on the fact that any real-coefficient polynomial can be factored into real roots or complex-conjugate pairs of roots, and a complex-conjugate pair of roots multiplies out to a quadratic with real coefficients.

64 Direct form vs. cascade of second-order sections
[Block diagrams: a direct-form filter with coefficients b1..b5 and a1..a5, and one second-order section (multiple sections are cascaded).] These are not the only two possibilities.

65 The Direct Form The direct form creates a number of difficulties.
It increases the size (in terms of bit depth) of numerical coefficients. It increases the depth required for accumulators (mantissa for floating point). It's not, generally speaking, very common or useful for more than 3rd order. Don't do this: it can result in instabilities due to numerical resolution, and you can get "limit cycles" and other disturbing nonlinear behavior.

66 Factoring into second-order sections:
There are a number of ways to make second-order sections. All depend on the fact that you can factor a real-valued polynomial into second-order sections with real coefficient values. How does this relate to filtering? If you convolve a set of factored polynomials, you get the original polynomial. That means that if you cascade sections with those polynomials implemented, you MULTIPLY the transfer functions. This is the same old duality in another form: what you're doing is convolving things a part at a time, doing more and more in cascade. A second-order section is easy to check for stability. By factoring both numerator and denominator and grouping things correctly, you can ensure the best gain structure for a given filter. Your computer does this for you!
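A hedged Octave sketch of that factoring (the elliptic filter is an assumed example; tf2sos is in the signal package):

```octave
pkg load signal
[bb, aa] = ellip(6, 0.5, 60, 0.3);   % assumed 6th-order elliptic lowpass
[sos, g] = tf2sos(bb, aa);           % one row per second-order section: [b0 b1 b2 1 a1 a2]
% Cascading the sections (i.e. convolving their small polynomials) rebuilds bb and aa,
% up to the overall gain g: multiplying the transfer functions, in another form.
```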

67 So we factor FIR’s as well?
Generally not. There are several reasons: The coefficient bit-depth growth is not nearly as extreme. Coefficients are not generally as large (in FIR filters, coefficients are most often considerably smaller than 1). FIRs can't go unstable, have limit cycles, or exhibit some of the other kinds of disturbing behavior. Of course, they require more calculation, and they may require a wider accumulator than you expect.

68 Some examples of Filter Coefficients
For an IIR 3rd-order bilinear-Z Butterworth filter with a cutoff at .125 fs/2, the numerator is: [values shown on the slide]. The denominator is: [values shown on the slide]. For a similar FIR filter, the tap values are: [values shown on the slide]. Notice the difference in the size of the tap weights. In this example, the tap weights are quite moderate for an IIR denominator; longer filters will often have a substantially larger range of values. In general, this kind of tradeoff is well beyond the scope of a beginning tutorial, but everyone must be aware of this kind of issue. As you will discover in the next part of this tutorial, most filter design packages take care of this problem.

69 How to write a transfer function, in the ‘z’ domain:
The transfer function for the third-order Butterworth is written as: [equation shown on the slide]. In factored form, it would look like this: [equation shown on the slide]. Doing this factoring is one of the things Matlab, Octave, and other linear algebra and/or filter design packages are for.

70 More about poles and zeros
We’ll show some pole/zero plots, along with the impulse responses, frequency responses and phase responses.

71 Symmetric FIR (odd length)

72 Allpass Filter

73 Designing Filters Steve will show you Octave, a freeware program that allows you to design both IIR and FIR filters. We’ll discuss a bit, here, about designing both kinds of filters.

74 Designing IIR Filters First, decide what kind of filter you want:
Butterworth (no ripple, monotonic amplitude response, requires more poles/zeros) Chebychev 1 (passband ripple, monotonic stopband) Chebychev 2 (stopband ripple, monotonic passband) Elliptical (equiripple passband, equiripple stopband. Shortest filter for a given rejection ratio. Has issues.)

75 How to do that? Use the “help” function.
help butter (for Butterworth), help cheby1 (for Chebychev 1), help cheby2 (for Chebychev 2), help ellip (for Cauer elliptical). Follow the directions. Time does not permit a full examination of all of the calling parameters. All have the form [bb, aa] = butter(3, .125), for example; bb is the zero (numerator) polynomial and aa is the pole (denominator) polynomial.
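For example (a hedged sketch; the ripple and attenuation numbers are assumptions added so the calls run):

```octave
pkg load signal                       % butter, cheby1, cheby2, and ellip live here in Octave
[bb, aa] = butter(3, 0.125);          % Butterworth: 3rd order, cutoff at 0.125*(fs/2)
[bc, ac] = cheby1(3, 0.5, 0.125);     % Chebychev 1: 0.5 dB passband ripple (assumed)
[bd, ad] = cheby2(3, 60, 0.125);      % Chebychev 2: 60 dB stopband attenuation (assumed)
[be, ae] = ellip(3, 0.5, 60, 0.125);  % Elliptical: both ripple specs (assumed)
```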

76 What does the Frequency Response look like?
Use "freqz(bb,aa)". It will give you frequency and phase response.
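A small sketch, if you also want the numbers rather than just the plot:

```octave
pkg load signal
[bb, aa] = butter(3, 0.125);       % the same example filter as above
[H, w] = freqz(bb, aa, 512);       % complex response at 512 frequencies w (radians/sample)
plot(w/pi, 20*log10(abs(H)));      % magnitude in dB vs. fraction of fs/2
```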

77 Designing FIR filters Use “remez” This takes a bit of doing.
Before you use "remez" you need to decide: the length of the filter (a single integer), the points at which the frequency response changes, and the amplitudes at each of those points.

78 So you have len=15 (NOTE: that means a 16-tap filter; the order is 1 less than the filter length), freq=[0 .2 .6 1] (a list of 4 frequencies, corresponding to DC, .2, .6, and 1 times half the sampling rate, whatever that is; 0 and 1 must be included), and amp=[1 1 0 0] (meaning that at 0 and .2 you want the amplitude to be close to 1, and at .6 and 1 you want it to be zero). bb=remez(len,freq,amp) will give you a filter that is optimized to be as close as possible to that response.
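Putting the slide's example together as a runnable hedged sketch (remez is provided by the Octave signal package):

```octave
pkg load signal
len  = 15;                     % filter ORDER, which gives a 16-tap filter
freq = [0 .2 .6 1];            % band edges, as fractions of fs/2 (0 and 1 included)
amp  = [1 1 0 0];              % desired amplitude at those band edges
bb   = remez(len, freq, amp);  % equiripple lowpass: passband to 0.2, stopband from 0.6
```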

79 But I care about passband ripple! (or stopband ripple)
w=[10 1] (this is half as long as the freq and amp vectors, both of which must be of even length). Here, the 10 means that the ERROR in the filter design between the first two frequency points is weighted 10 times as heavily as the error (weight 1) between the second two points. So bb=remez(len,freq,amp,w) will give you a filter with the error weighting you specify. NOTE: you can not weight the error in a transition band; by definition, there is no error in a transition band.
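Continuing the same hedged sketch with the weights from this slide:

```octave
pkg load signal
len = 15;  freq = [0 .2 .6 1];  amp = [1 1 0 0];
w  = [10 1];                   % one weight per band: passband error counted 10x
bb = remez(len, freq, amp, w); % smaller passband ripple, larger stopband ripple
freqz(bb, 1);                  % inspect magnitude and phase
```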

80 Now, frequency response
freqz(bb). That's all it takes. You will see frequency and phase response. remez always designs symmetric filters, unless you tell it to do something else. "help remez" will get you as many options as you ever wanted to know about.

81 To Take Home: Filtering is the practice of convolving an impulse response (the time response of the filter) with the signal. FIR filters directly implement this convolution. IIR filters use a functional representation that does the convolution implicitly, with some cost in implementation issues.

82 More to take home: Frequency (complex spectrum) response and impulse response are duals. The relationship df · dt >= 1 matters in filter design just like it does in anything else. If you want a sharp filter, you have a long impulse response. If you want a short impulse response, you can not have a sharp filter cutoff. IIR filters do not shorten the impulse response; they simply operate in a different fashion.

83 To come after the break:
Steve Hastings will give you a set of tips for tools that you can get off the net to help you design, plot, and understand filters (and a whole lot more things) boB Gudgel will show you what it sounds like to implement various filters, and offer tips on how to do this kind of work in the real world.

84 Filterbanks Ok, now what is a filter? And a filterBANK?
[Diagrams: a single filter with one input and one output, vs. a filterbank with one input and many outputs.]

85 Filterbanks A filterbank is nothing but a way to implement a set of filters, generally strongly mathematically related, in one operation. A filterbank can always be decomposed into a set of individual filters. This is usually a lot more work than it's worth, but not always.

86 The famous audio filterbank:
This would be the “MDCT”, or “Modified Discrete Cosine Transform”. Annoyingly, it’s not a transform, it’s a FILTERBANK. It is an exact reconstruction filterbank, though, so it does obey most of the rules of transforms, except that it has overlap between blocks, and remains critically sampled. A transform either has no overlap, OR is not critically sampled.

87 Critically Sampled – Whaaa???
Critically sampled is a simple concept at its heart: it means that in the filtered domain, you have the same number of samples that you do in the unfiltered domain.

88 The theory of filterbanks is long, deep, and wide
And I won’t even try to relate it in an hour. BUT, what you need to remember is that an output of a filterbank is just like running some particular kind of a filter on the signal. The filterbank just does a lot of these at once. It may also: Downsample (i.e. critical sampling) Be oversampled (more values in the filtered domain than in the input and output domains)

89 What are some applications?
For an MDCT, the obvious one is coding: It is critically sampled. Ergo, no extra data to code It does a good job of frequency analysis, so you can relate the perceptual model well, and you can also get good signal processing gain from it. It has an efficient form for calculation, very similar to an FFT of half its length.

90 Some rules about critically sampled banks.
If you’re not careful, very odd things happen when you modify the filtered results. The best known of these is the “pre-echo” in audio codecs. There are also other things that can go wrong Why? That critical sampling means that the filterbank creates a lot of aliasing, and then cancels it on reconstruction. Mess with the signal in the filtered domain, and the aliasing does not cancel.

91 How about oversampled filterbanks?
You can avoid aliasing problems, so: you can modify the signal; you can use it for things like equalizers; gain compressors work well with this kind of filterbank. There are other applications that are far too complicated to bring up at present.

92 So - Filterbanks At their heart, nothing but a handy way of implementing a whole set of filters at the same time. There are more things to this than computational efficiency.

