DSP for Robotics Enthusiasts
By: HLHowell    For: RSSC
DSP for Robotics Enthusiasts
Agenda:
- What is DSP
- Basic forms of DSP
- Sampling techniques
- Fast Fourier Transform
- Criteria for best results
- Aliasing benefits & issues
- Filters in time
- Filters in frequency
- IFFT
- Resampling data
- Correlation in time
- RF I/Q uses
- Transmit sideband data
- Receiver signal integrity
- PRML
- Delay effects
- Intersymbol modulation
- Error rates in PRML
- IFT resampling

IFT resampling: given an input of complex rectangular frequency data, produces an output of complex time data with the Q element as 0. Used to reproduce a linear time signal from modified polar frequency data. Can be utilized to modify the effective sampling rate, but no new data is added.
What is DSP

DSP is an acronym for Digital Signal Processing. It encompasses transforms, correlations, reductions in time and frequency, and other forms of analysis of sampled data from any of a number of forms of Analog-to-Digital Converter (ADC). At this time, humans use oscilloscopes with our eyes, speakers with our ears, and our nervous systems to resolve issues with signals. Computers, however, have no eyes or ears, so we need to convert the information into numbers that the computer can handle, compare, validate, and modify to achieve recognition.
Sampling Techniques

Flash ADC: Directly converts the signal level to a digital representation. Uses a bank of comparators and a logical priority encoder to get the level. Very difficult to design at high resolution due to component accuracy limitations.

Delta Sigma: Oversamples at a very high speed, then decimates to obtain the signal value. Limited in bandwidth, but very high accuracy due to the single comparator.

SAR ADC (Successive Approximation Register with a Digital-to-Analog Converter): Typically preceded by a sample-and-hold. The signal is compared to the DAC output, and starting with the MSB the bits are tested one at a time: if the comparator shows the DAC output is still below the signal, the bit is kept; otherwise it is reset, and the process drops to the next bit. After N comparisons, where N is the number of bits in the DAC, the value is known. This can be implemented in software or hardware. Slow relative to the clock speed (a minimum of N+2 or N+3 cycles), but cheap, because DACs are easier to build than comparator ladders and logic.
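The SAR bit-test loop above is easy to model in software. Here is a minimal sketch in Python (the function name, reference voltage, and bit width are illustrative choices, not from the slides):

```python
def sar_convert(vin, vref=5.0, bits=8):
    """Successive approximation: test each bit from MSB down to LSB."""
    code = 0
    for b in range(bits - 1, -1, -1):
        trial = code | (1 << b)              # tentatively set this bit
        vdac = vref * trial / (1 << bits)    # ideal DAC output for the trial code
        if vin >= vdac:                      # comparator: keep the bit if the signal is higher
            code = trial
    return code

print(sar_convert(2.5))   # -> 128 (mid-scale for a 5 V reference)
```

Each loop iteration costs one comparator decision, which is why a hardware SAR needs at least one clock cycle per bit.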
5
Fast Fourier Transform
Evolved from work by the Babylonians, refined by Clairaut, Lagrange, Gauss, and others, and finally brought to the form we use today by a breakthrough in Joseph Fourier's paper. All waveforms can be reconstructed from a series of sines and/or cosines at different frequencies and amplitudes, and the Fourier transform can compute the values of that series from the input signal. The typical form used today is the Fast Fourier Transform (FFT), which exploits powers of 2 on a computer both to reduce the number of calculations and to perform the operations quickly.

The output is in polar format, with the real and imaginary parts set at the angle of the initial capture. If the signal is completely described, or in my terms circular, then the result is clean and shows up as a single spike in the output. In the real world it is never that clean and simple, so there will be a phase discontinuity between the end of the capture and the start of the capture. If you cut out a graph of the signal from a perfect capture and spaced the ends exactly one sample interval apart, the signal would show no indication of start or end: that is circular. In the real world the signal is arbitrary and generally not a single solid frequency, so there will be an error between the two end points, which shows up as FM in the spectrum. That is, the step repeats over and over throughout the waveform, and its energy gets smeared across the spectrum, giving you a slope on both sides of the signal made up of the power that "leaks" into the other frequencies. That means we need to minimize the discontinuity at the ends of the signal, and we use windowing or modulation to do that.

The resolution in amplitude is controlled by the accuracy of the ADC. The resolution in frequency and phase is primarily controlled by the number of samples and the frequency of the sample clock. Each output sample is a "bin" made up of real and imaginary values. The bin width is measured in frequency and equals fs (sample frequency) / N (number of samples).

The usable output is limited by a rule proposed by Nyquist: you need at least 2 samples per sine wave to define it, so the maximum usable output is fs/2, the Nyquist frequency. Frequencies from Nyquist to fs fold back into the spectrum analysis, those from fs to 1.5fs fold from 0 back up the spectrum, and so on. These are called alias frequencies. You can identify them by changing the sample frequency: the aliased frequencies move by the number of folds, i.e. a signal folded once from Nyquist moves by 2 times the bin change, one folded again moves by 3 times the bin change, and so on. Thus aliased frequencies can be used to extend the analysis; this is called undersampling. You control which band you see by placing filters called anti-aliasing filters before the ADC.
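A short NumPy sketch of these definitions (the sample rate, tone frequency, and N below are arbitrary illustrative choices): the bin width comes out as fs/N, and only bins up to fs/2 are kept.

```python
import numpy as np

# Sample a 1 kHz sine at fs = 8 kHz for N = 64 samples, then locate the peak.
fs, N = 8000.0, 64
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 1000.0 * t)   # 1 kHz falls exactly on bin 8: a circular capture

spectrum = np.fft.fft(x)
mag = np.abs(spectrum[:N // 2])      # keep only bins 0 .. Nyquist
bin_width = fs / N                   # 125 Hz per bin
peak_bin = int(np.argmax(mag))
print(peak_bin * bin_width)          # -> 1000.0
```

Because 1000 Hz is an integer multiple of the 125 Hz bin width, all the energy lands in a single bin, exactly the "circular" case described above.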
6
A Time Series
7
Polar Series: Real, Imaginary
Frequency Magnitude: sqrt(R^2 + I^2)
Criteria for best results
Signal must be fully represented, i.e. f/fs must be an integer, which means circular. If you cut out the graph and hold the ends separated by one sample interval, the curve should look like the curve in similar parts of the window.

Unwanted aliasing must be prevented. Use filters on the input to avoid it.

The sample window, that is, the time the data is averaged by the input capacitance, typically called tau, must be small relative to the period of fs. In classical analysis, tau is assumed to be the sample period, leading to the calculation of the sin(x)/x roll-off. The smaller the physical sample window, the smaller the actual sin(x)/x roll-off. In any event, using only the bins from 1 to .75*N will minimize the effects.

The ADC resolution will limit the available signal-to-noise ratio until it exceeds 12 bits. At 12 bits, a nominal design will begin to see system noise exceed ADC noise. The ADC noise is called quantization noise in most analyses and specifications.

Wideband analysis has more noise energy because noise is wideband: the bigger the window, the stronger the breeze. The window is (number of samples)/(sample frequency), N/fs. In other words, for a given fs, as N increases the window is open longer, and more energy gets into the analysis.
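The circularity criterion can be checked numerically. In this hedged NumPy sketch (the frequencies and sizes are arbitrary choices), a tone with an integer number of cycles per window keeps its energy in one bin, while a half-cycle mismatch smears it across the spectrum:

```python
import numpy as np

fs, N = 1000.0, 100          # bin width is fs/N = 10 Hz
t = np.arange(N) / fs

def energy_outside_peak(f):
    """Fraction of spectral energy that leaks outside the strongest bin."""
    mag = np.abs(np.fft.fft(np.sin(2 * np.pi * f * t))[:N // 2])
    return 1.0 - mag.max() ** 2 / (mag ** 2).sum()

print(energy_outside_peak(100.0))   # 10 whole cycles: circular, essentially zero leakage
print(energy_outside_peak(105.0))   # 10.5 cycles: the end-point step smears the energy
```

The second case is the "FM in the spectrum" described on the FFT slide: the same tone, captured non-circularly, spreads over half the analysis band.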
Aliasing benefits & issues
Aliasing occurs when a signal above Nyquist is present at the ADC. Signals from 0 to Nyquist are the fundamental range of the analysis and increase from left to right. Signals from Nyquist to fs are the secondary signals and fold from right to left as they increase in frequency. From fs to 1.5fs they fold again, from left to right. Check the diagram in the next slide. Amplitude is affected by the sin(x)/x relationship, where x is the equivalent bin of the signal. Generally aliased signals are unwelcome, but in some cases you can use aliasing to analyze signals beyond the normal 0-to-Nyquist range of your setup; this requires more math and closer-tolerance design, and the signal period must be 10x tau for good analysis. For most analysis, aliased signals just add noise, and they may be non-circular, producing wideband noise in your analysis. This is because phase errors are actually errors in time: as the frequency goes up, a very small variation in time produces a much larger error in phase. So a clock that is .01% out at the fundamental is .02% out at the second harmonic, and 1% out at the 100th harmonic. That is a big deal, producing lots of smearing in your analysis.
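The folding behavior above can be written as a small helper function (a sketch; the function name is mine, not from the slides):

```python
def apparent_freq(f, fs):
    """Frequency at which an input tone f appears after sampling at fs."""
    f = f % fs                            # the sampled spectrum repeats every fs
    return fs - f if f > fs / 2 else f    # the upper half folds back down

fs = 1000.0
print(apparent_freq(300.0, fs))    # -> 300.0 (below Nyquist: unchanged)
print(apparent_freq(700.0, fs))    # -> 300.0 (folds back from Nyquist)
print(apparent_freq(1300.0, fs))   # -> 300.0 (folds a second time, from fs)
```

All three inputs land on the same bin, which is why an anti-aliasing filter (or a deliberate sample-rate change to tell the folds apart) is needed.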
Filters in Time

Filters in time are used to minimize the step error between the start and finish of the time window. They should average to 1.0 to prevent amplitude errors. The window is applied by simply multiplying the input signal by the window, sample by sample. Generally the imaginary portion of a sample is 0, so there is no need to multiply it, since 0 times anything is 0. If the input sampling is complex, then both the real and imaginary parts must be multiplied by the window. Typical windows are the Bartlett, Hanning, and raised cosine. This is a link to a page that describes many of the different window functions: Windows. The spectrum will still smear, but the grouping caused by the windowing function reduces the errors by gathering the energy into a few bins around the signal's frequency bin.
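As a sketch of windowing in practice (NumPy; the tone and sizes are arbitrary illustrative choices), a deliberately non-circular tone is analyzed with and without a Hanning (Hann) window. The window sharply reduces the energy that leaks far from the peak, gathering it into a few bins around the signal:

```python
import numpy as np

fs, N = 1000.0, 100
n = np.arange(N)
x = np.sin(2 * np.pi * 105.0 * n / fs)            # 10.5 cycles: not circular
hann = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)      # Hanning (raised-cosine) window

def far_leakage(sig):
    """Fraction of spectral energy more than 3 bins away from the peak."""
    p = np.abs(np.fft.fft(sig)[:N // 2]) ** 2
    k = int(np.argmax(p))
    near = p[max(k - 3, 0):k + 4].sum()
    return 1.0 - near / p.sum()

print(far_leakage(x))          # rectangular capture: noticeable far-out smear
print(far_leakage(x * hann))   # windowed: far-out leakage drops sharply
```

The multiply is applied sample by sample, exactly as described above; only the real part needs it here because the input is real.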
Filters in Frequency

A window in frequency is applied AFTER the FFT. It is generally used when a signal is captured and then modified prior to applying the inverse FFT, to recover the signal without high-frequency errors and noise. Again the window is applied by multiplying each sample by the window function. In this case the signal is both real and imaginary, so both parts should be multiplied by the window.
IFFT

The IFFT is the Inverse Fast Fourier Transform. A transform is a mathematical function that is reversible: a given input can be transformed, then that output can be fed back in with a flag to reverse it, and the original input is retrieved. Sometimes in signal processing it is desirable to process the signal in time but without the noise. Running the signal through the FFT, filtering it, then running it back through the inverse FFT will produce the signal without the offending noise. Another use, shown on the next slide, is resampling the signal. A third use is correlation.
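A minimal sketch of the FFT-filter-IFFT round trip described above (NumPy; the tone, noise level, and cutoff bin are arbitrary illustrative choices):

```python
import numpy as np

fs, N = 1000.0, 256
t = np.arange(N) / fs
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 62.5 * t)           # wanted tone, exactly bin 16 (circular)
noisy = clean + 0.3 * rng.standard_normal(N)   # add wideband noise

spectrum = np.fft.fft(noisy)
cutoff = 30                                    # keep bins 0..30 plus their mirror image
spectrum[cutoff + 1:N - cutoff] = 0            # zero the high-frequency region symmetrically
recovered = np.fft.ifft(spectrum).real         # imaginary residue is only round-off

def rms(e):
    return float(np.sqrt(np.mean(e ** 2)))

print(rms(noisy - clean), rms(recovered - clean))   # the error drops after filtering
```

Note the spectrum of a real signal is mirrored, so both halves must be zeroed symmetrically or the IFFT output will not be purely real.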
Resampling Data

Resampling is generally used to get additional points for time study. A fast signal is captured, but it is too fast to retrieve the slope or rise time of the signal, so the captured signal is resampled mathematically. This is sometimes called oversampling. Do the FFT, then pad the spectrum with (0, 0) values out to a larger power of 2 than the original sample size, making no other changes. The assumption of the FFT is then that the signal window, that is, the sampling time, remains the same, and that the data was all "low frequency" in that window at the same frequencies as the original. Do the inverse FFT, and the signal will now have added data samples during the rise and fall times. This ignores the fact that the actual signal may have been faster or slower, because the data from the higher bins is missing. However, since the higher frequencies generally don't add much to the original waveform, the result will be pretty close. If the anti-aliasing filters were left off, the missing frequencies were aliased into the original waveform, and you could grab the harmonics and place them properly into the expanded spectrum for more accuracy. Either way, you will have points to grab to establish closer tolerances on rise and fall times.
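The zero-padding recipe can be sketched as follows (NumPy; the sizes and test tone are illustrative choices). One detail the prose glosses over: the negative-frequency half of a real signal's spectrum sits at the end of the FFT array, so it must be moved to the end of the padded array, and the result rescaled for the longer transform:

```python
import numpy as np

N, M = 32, 128                        # original and padded sizes, both powers of 2
t = np.arange(N) / N
x = np.sin(2 * np.pi * 3 * t)         # 3 whole cycles in the window: circular

X = np.fft.fft(x)
Xpad = np.zeros(M, dtype=complex)
Xpad[:N // 2] = X[:N // 2]            # positive-frequency bins stay at the front
Xpad[M - N // 2:] = X[N // 2:]        # negative-frequency bins move to the end
y = np.fft.ifft(Xpad).real * (M / N)  # rescale for the longer transform

# Every (M // N)-th resampled point reproduces an original sample.
print(np.allclose(y[::M // N], x))    # -> True
```

The window length is unchanged, so the same waveform comes back on a 4x finer time grid, with the new in-between points interpolated from the existing bins; no new information is added.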
Correlation in Time

If you need to place an unknown-phase signal into a precise time position for analysis, the FFT can help do that. Make a time block to show where the signal should appear. Do both FFTs. Find the phases of the target and the signal fundamentals. Subtract the phase of the signal from that of the target, in degrees, as an array. Calculate the desired phase of the original by adding the difference from the prior step. Convert the signal back and do the IFFT; it will be in the desired location. The calculus folks have a correlation function that does the same thing with different math.
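The correlation function the slide alludes to can be sketched with FFTs (NumPy; the pulse shape and shift are illustrative choices). The peak of the circular cross-correlation gives the shift needed to place the signal at the target position:

```python
import numpy as np

N = 128
n = np.arange(N)
template = np.exp(-0.5 * ((n - 40) / 4.0) ** 2)   # pulse where the signal *should* sit
signal = np.roll(template, 25)                    # same pulse at an unknown offset

# Circular cross-correlation via FFTs: IFFT(FFT(signal) * conj(FFT(template)))
corr = np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(template))).real
lag = int(np.argmax(corr))
aligned = np.roll(signal, -lag)                   # shift back into the target position
print(lag)                                        # -> 25
```

Multiplying by the conjugate spectrum subtracts the phases bin by bin, which is the same phase-difference idea as the step-by-step recipe above, just carried out across all frequencies at once.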
RF I/Q Uses

I means the in-phase signal; Q means the quadrature signal, one that is 90 degrees different. Achieving I and Q for voice used to require very specific filters, which were time-consuming to design and hard to keep aligned correctly. Using DSP and the Fourier transform, you can calculate the FFT of the voice signal, add 90 degrees to the phase of every bin, do the inverse FFT, and you have quadrature across the whole signal. But why, you may ask? Look at the accompanying spreadsheet and you can see the SSB signal generated by modulating the I voice by the I carrier and the Q voice by the Q carrier; when these are added together, you get only one sideband. No carrier, no opposing sideband, and better broadcast due to using 1/4 the power of AM.
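The 90-degree trick can be sketched directly (NumPy; the tone and sizes are illustrative choices, and rotating positive-frequency bins by -90 degrees while rotating negative-frequency bins by +90 degrees is one common sign convention for generating the quadrature channel):

```python
import numpy as np

fs, N = 8000.0, 256
t = np.arange(N) / fs
i_sig = np.cos(2 * np.pi * 1000.0 * t)   # the I ("in-phase") signal: a 1 kHz tone

X = np.fft.fft(i_sig)
shift = np.zeros(N, dtype=complex)
shift[1:N // 2] = -1j                    # positive-frequency bins: rotate -90 degrees
shift[N // 2 + 1:] = 1j                  # negative-frequency bins: rotate +90 degrees
q_sig = np.fft.ifft(X * shift).real      # the Q ("quadrature") signal

# A cosine delayed by 90 degrees is a sine, at every frequency at once.
print(np.allclose(q_sig, np.sin(2 * np.pi * 1000.0 * t)))  # -> True
```

The opposite-sign rotation of the two spectral halves is what keeps the output real; it is the same construction used for the analytic (Hilbert) signal.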
RF I/Q Uses (cont.)

On receive, the signal is retrieved by one antenna and then split, with one part phase-shifted by 90 degrees, and fading due to signal shift disappears: there is always a good signal. Moreover, by reversing the I/Q modulation, detecting the two signals with the I and Q of the local oscillator, the signal can be converted directly to baseband with good signal out. Single-conversion low-noise receivers are common (your cellphone is a good example).
PRML

Partial Response Maximum Likelihood is a detection process for digital signals over any medium, such as magnetic disks, long transmission lines, and so on. When a complex signal is transmitted, it is delayed in time before it arrives at the destination, and that delay affects different frequencies as different phase shifts: essentially the modulation changes in phase. This is called envelope delay or group delay, depending on the technology involved. In essence, the transmission medium has a memory lasting the duration of the transmission, but this is predictable. If you track the signal over time, the resulting intersymbol modulation becomes predictable, and you can make a good guess at the current symbol based on the history up to that point. That is what PRML does: it is in essence a neural network with phase delay that recovers the original modulation. It has been in use in modems for nearly 50 years. I don't have an example, but you can read about it here: response_maximum-likelihood. Follow the links in the article, especially the Viterbi algorithm link. Each added node reduces the error rate of the algorithm, but there is a built-in error rate in all PRML algorithms, reached when the run length exceeds two times the binary power of the nodes in the algorithm. For most useful algorithms it is about 1:10,000,000; in practice, small blocks of data are sent, never approaching the limit.
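A toy sketch of the PRML idea (Python; the channel model and code are mine, not from the slides): a channel with one sample of memory, y[n] = x[n] + x[n-1], decoded with a two-state Viterbi search that tracks the history, as described above:

```python
def viterbi_pr1(received):
    """Most likely bit sequence for y[n] = x[n] + x[n-1] + noise, x[n] in {0, 1}."""
    metrics = {0: 0.0, 1: 0.0}         # state = previous bit; either start allowed
    paths = {0: [], 1: []}
    for obs in received:
        new_metrics, new_paths = {}, {}
        for b in (0, 1):               # hypothesised current bit (the next state)
            # pick the best predecessor state s; the branch output is b + s
            m, s = min((metrics[s] + (obs - (b + s)) ** 2, s) for s in (0, 1))
            new_metrics[b] = m
            new_paths[b] = paths[s] + [b]
        metrics, paths = new_metrics, new_paths
    best = 0 if metrics[0] <= metrics[1] else 1
    return paths[best]

# bits [1, 0, 1, 1, 0] through the channel (previous bit starting at 0) give [1, 1, 1, 2, 1]
print(viterbi_pr1([1, 1, 1, 2, 1]))    # -> [1, 0, 1, 1, 0]
```

Real PRML detectors work the same way but with more states (more channel memory) and soft, noisy samples; the squared-error branch metric already handles noisy inputs here.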