Random Processes and Spectral Analysis


1 Random Processes and Spectral Analysis
Chapter 6 Random Processes and Spectral Analysis

2 Introduction (chapter objectives)
Power spectral density; matched filters. Recall from the previous chapter that random signals are used to convey information. Noise is also described in terms of statistics. Thus, knowledge of random signals and noise is fundamental to an understanding of communication systems.

3 Introduction Signals with random parameters are random signals;
Noise that cannot be predicted is called random noise, or simply noise; random signals and noise are collectively called random processes. A random process (stochastic process) is an indexed set of functions of some parameter (usually time) that has certain statistical properties. A random process may be described by an indexed set of random variables. A random variable maps events into constants, whereas a random process maps events into functions of the parameter t.

4 Introduction A random process can be classified as strictly stationary or wide-sense stationary. Definition: A random process x(t) is said to be stationary to the order N if, for any t1, t2, ..., tN,
fx(x1, ..., xN; t1, ..., tN) = fx(x1, ..., xN; t1 + t0, ..., tN + t0),
where t0 is any arbitrary real constant. Furthermore, the process is said to be strictly stationary if it is stationary to the order N → ∞. Definition: A random process is said to be wide-sense stationary if (1) its mean E[x(t)] is a constant and (2) its autocorrelation depends only on the time difference, Rx(t1, t2) = Rx(τ), where τ = t2 - t1.

5 Introduction Definition: A random process is said to be ergodic if all time averages of any sample function are equal to the corresponding ensemble averages (expectations). Note: if a process is ergodic, all time and ensemble averages are interchangeable. Because a time average cannot be a function of time, an ergodic process must be stationary; otherwise the ensemble averages would be functions of time. However, not all stationary processes are ergodic.
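Below is a minimal Python sketch (not from the slides; the filtered-white-noise process and all parameters are assumed) illustrating the point: the time average of one sample function is compared with the ensemble average taken across many sample functions at a fixed time instant.

```python
import numpy as np

rng = np.random.default_rng(0)
n_realizations, n_samples = 500, 10_000

# Each row is one sample function of the process (filtered white noise).
white = rng.standard_normal((n_realizations, n_samples))
x = np.apply_along_axis(lambda w: np.convolve(w, np.ones(5) / 5, mode="same"), 1, white)

time_avg_mean = x[0].mean()           # <x(t)> over one sample function
ensemble_avg_mean = x[:, 123].mean()  # E[x(t)] across realizations at one fixed time

print("time average of one sample function:", time_avg_mean)
print("ensemble average at a fixed time   :", ensemble_avg_mean)
# For an ergodic process both estimates converge to the true mean (0 here).
```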

6 Introduction Definition: the autocorrelation function of a real process x(t) is
Rx(t1, t2) = E[x1 x2],
where x1 = x(t1) and x2 = x(t2). If the process is second-order stationary, the autocorrelation function is a function only of the time difference τ = t2 - t1, i.e., Rx(τ) = E[x(t) x(t + τ)]. Properties of the autocorrelation function of a real wide-sense stationary process are: (1) |Rx(τ)| <= Rx(0); (2) Rx(-τ) = Rx(τ); (3) Rx(0) = E[x^2(t)], the mean-square value of the process.
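As an illustration only (again assuming an ergodic filtered-white-noise sample function), the following Python sketch forms the time-average estimate of Rx(k) for a sampled record and checks that the largest value occurs at zero lag, consistent with property (1) above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = np.convolve(rng.standard_normal(n), np.ones(8) / 8, mode="same")  # correlated noise

def autocorr(x, max_lag):
    """Biased time-average estimate R_x(k) = (1/n) * sum_t x(t) x(t+k)."""
    n = len(x)
    return np.array([np.dot(x[: n - k], x[k:]) / n for k in range(max_lag + 1)])

R = autocorr(x, max_lag=30)
print("R_x(0) (mean-square value):", R[0])
print("largest |R_x(k)| occurs at k = 0:", np.argmax(np.abs(R)) == 0)
```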

7 Introduction Definition: the cross-correlation function for two real processes x(t) and y(t) is
Rxy(t1, t2) = E[x1 y2],
where x1 = x(t1) and y2 = y(t2). If x(t) and y(t) are jointly stationary, the cross-correlation function is a function only of the time difference τ = t2 - t1. Properties of the cross-correlation function of two real jointly stationary processes are: (1) Rxy(-τ) = Ryx(τ); (2) |Rxy(τ)| <= sqrt(Rx(0) Ry(0)); (3) |Rxy(τ)| <= (1/2)[Rx(0) + Ry(0)].

8 Introduction Two random processes x(t) and y(t) are said to be uncorrelated if
Rxy(τ) = mx my
for all values of τ. Similarly, two random processes x(t) and y(t) are said to be orthogonal if
Rxy(τ) = 0
for all values of τ. If the random processes x(t) and y(t) are jointly ergodic, the time average may be used to replace the ensemble average. For correlation functions, this becomes
Rxy(τ) = <x(t) y(t + τ)> = lim(T→∞) (1/T) ∫[-T/2, T/2] x(t) y(t + τ) dt.

9 Introduction Definition: a complex random process is
g(t) = x(t) + j y(t),
where x(t) and y(t) are real random processes. Definition: the autocorrelation function for a complex random process is
Rg(t1, t2) = E[g*(t1) g(t2)],
where the asterisk denotes the complex conjugate. The autocorrelation function of a wide-sense stationary complex random process has the Hermitian symmetry property
Rg(-τ) = Rg*(τ).

10 Introduction For a Gaussian process, the one-dimensional PDF can be represented by
f(x) = (1/(σ sqrt(2π))) exp(-(x - mx)^2 / (2σ^2)).
Some properties of f(x) are: (1) f(x) is symmetric about x = mx; (2) f(x) is monotonically increasing on (-∞, mx) and monotonically decreasing on (mx, +∞); its maximum value, at x = mx, is 1/(sqrt(2π) σ).

11 Introduction The cumulative distribution function (CDF) for the Gaussian distribution is
F(x) = 1 - Q((x - mx)/σ),
where the Q function is defined by
Q(z) = (1/sqrt(2π)) ∫[z, ∞) exp(-λ^2/2) dλ,
the error function (erf) is defined as
erf(z) = (2/sqrt(π)) ∫[0, z] exp(-λ^2) dλ,
and the complementary error function (erfc) is defined as
erfc(z) = 1 - erf(z).
These are related by Q(z) = (1/2) erfc(z/sqrt(2)).
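A short Python sketch of these relations, using SciPy's complementary error function; the link between them is Q(z) = (1/2) erfc(z/sqrt(2)).

```python
import numpy as np
from scipy.special import erfc

def q_function(z):
    """Gaussian tail probability Q(z), via the complementary error function."""
    return 0.5 * erfc(z / np.sqrt(2.0))

def gaussian_cdf(x, m_x=0.0, sigma=1.0):
    """F(x) = 1 - Q((x - m_x)/sigma) for a Gaussian with mean m_x and std sigma."""
    return 1.0 - q_function((x - m_x) / sigma)

print(q_function(0.0))    # 0.5
print(q_function(3.0))    # about 1.35e-3
print(gaussian_cdf(1.0))  # about 0.8413
```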

12 6.2 Power Spectral Density (definition)
The definition of the PSD for the case of a deterministic waveform is Eq. (2-66). Definition: The power spectral density (PSD) for a random process x(t) is given by
Px(f) = lim(T→∞) E[|XT(f)|^2] / T,
where
XT(f) = ∫[-T/2, T/2] x(t) exp(-j2πft) dt.

13 6.2 Power Spectral Density (Wiener-Khintchine Theorem)
When x(t) is a wide-sense stationary process, the PSD can be obtained from the Fourier transform of the autocorrelation function:
Px(f) = F[Rx(τ)] = ∫(-∞, ∞) Rx(τ) exp(-j2πfτ) dτ.
Conversely,
Rx(τ) = F^(-1)[Px(f)] = ∫(-∞, ∞) Px(f) exp(j2πfτ) df,
provided that Rx(τ) becomes sufficiently small for large values of τ that the transform integral converges. This theorem is also valid for a nonstationary process, provided that Rx(τ) is replaced by <Rx(t, t + τ)>. Proof: (notebook p)

14 6.2 Power Spectral Density (Wiener-Khintchine Theorem)
There are two different methods that may be used to evaluate the PSD of a random process: (1) the direct method, using the definition above; (2) the indirect method, evaluating the Fourier transform of Rx(τ), where Rx(τ) has to be obtained first. Properties of the PSD: (1) Px(f) is always real; (2) Px(f) >= 0; (3) when x(t) is real, Px(-f) = Px(f); (4) when x(t) is wide-sense stationary, Rx(0) = E[x^2(t)] = ∫(-∞, ∞) Px(f) df (the total average normalized power).
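The following Python sketch (illustrative only; the moving-average-filtered noise process and every parameter are assumed) estimates the PSD both ways: directly, by averaging |XT(f)|^2/T over segments, and indirectly, by Fourier-transforming a time-average estimate of Rx(τ). Both are approximations of the same PSD.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n_seg, seg_len = 1.0, 200, 1024
x = np.convolve(rng.standard_normal(n_seg * seg_len), np.ones(4) / 4, mode="same")

# (1) direct method: averaged periodogram, E[|X_T(f)|^2] / T
segs = x.reshape(n_seg, seg_len)
psd_direct = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0) / (seg_len * fs)

# (2) indirect method: Fourier transform of the estimated autocorrelation
max_lag = 256
r = np.array([np.dot(x[: len(x) - k], x[k:]) / len(x) for k in range(max_lag)])
r_sym = np.concatenate([r[:0:-1], r])          # extend to negative lags (even in tau)
psd_indirect = np.abs(np.fft.rfft(r_sym)) / fs

print(psd_direct[:3])    # both start near the true value Px(0) = 1 for this process
print(psd_indirect[:3])
```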

15 6.2 Power Spectral Density
Example 6-3: (notebook p)

16 6.2 Power Spectral Density
In summary, the general expression for the PSD of a digital signal can be obtained by starting from
x(t) = Σ(n = -∞ to ∞) an f(t - nTs),
where f(t) is the signaling pulse shape, Ts is the duration of one symbol, and {an} is a set of random variables that represent the data. The autocorrelation of the data is
R(k) = E[an an+k].
By truncating x(t) we get
xT(t) = Σ(n = -N to N) an f(t - nTs),
where T/2 = (N + 1/2)Ts. Its Fourier transform is
XT(f) = F(f) Σ(n = -N to N) an exp(-j2πnfTs),
where F(f) = F[f(t)].

17 6.2 Power Spectral Density
According to the definition of the PSD, we get
Px(f) = lim(T→∞) E[|XT(f)|^2] / T = lim(N→∞) (|F(f)|^2 / ((2N + 1)Ts)) Σn Σm E[an am] exp(-j2π(n - m)fTs).
Thus:
Px(f) = (|F(f)|^2 / Ts) Σ(k = -∞ to ∞) R(k) exp(j2πkfTs).

18 6.2 Power Spectral Density
Furthermore, the sum can be written directly in terms of the data statistics. Thus an equivalent expression of the PSD is
Px(f) = (|F(f)|^2 / Ts) [Σ(k = -∞ to ∞) R(k) exp(j2πkfTs)],   (6-70b)
where the autocorrelation of the data is
R(k) = E[an an+k] = Σ(i = 1 to I) (an an+k)i Pi,
in which Pi is the probability of getting the i-th value of the product (an an+k), of which there are I possible values.

19 6.2 Power Spectral Density
Note that the quantity in brackets in Eq. (6-70b) is similar to the discrete Fourier transform of the data autocorrelation function R(k), except that the frequency variable ω is continuous; that the PSD of the baseband digital signal is influenced by both the "spectrum" of the data and the spectrum of the pulse shape used for the line code; and that the spectrum may contain delta functions if the mean value of the data, ma = E[an], is nonzero. For the case of uncorrelated data symbols, the data autocorrelation reduces to
R(k) = σa^2 + ma^2 for k = 0, and R(k) = ma^2 for k ≠ 0,
where σa^2 is the variance of the data.

20 6.2 Power Spectral Density
Thus
Px(f) = (|F(f)|^2 / Ts) [σa^2 + ma^2 D Σ(n = -∞ to ∞) δ(f - nD)],
where D = 1/Ts and the Poisson sum formula has been used. For the general case where there is correlation between the data, let the data autocorrelation function R(k) be expressed in terms of the normalized data autocorrelation function ρ(k); the PSD of the digital signal is then the pulse-shape factor |F(f)|^2/Ts multiplied by a spectral weight function,

21 6.2 Power Spectral Density
where the spectral weight function is obtained from the Fourier transform of the normalized autocorrelation impulse train.
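As a hedged illustration (not a slide example; the unipolar NRZ line code and all parameters below are assumed), this Python sketch simulates a rectangular-pulse data signal whose data mean is nonzero, so the estimated spectrum shows the discrete (delta-function) term at f = 0 derived two slides back on top of the continuous (A^2 Ts/4) sinc^2(f Ts) part.

```python
import numpy as np

rng = np.random.default_rng(3)
A, Ts, sps = 1.0, 1.0, 16              # amplitude, symbol time, samples per symbol (assumed)
fs = sps / Ts
n_sym = 4096
a = A * rng.integers(0, 2, n_sym)      # unipolar data in {0, A}, mean ma = A/2
x = np.repeat(a, sps)                  # rectangular pulse shaping, f(t) of width Ts

seg = 64 * sps                         # 64 symbols per segment
segs = x.reshape(-1, seg)
psd = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0) / (seg * fs)
f = np.fft.rfftfreq(seg, d=1 / fs)

print("PSD estimate at f = 0 (contains the discrete spectral line):", psd[0])
k = np.argmin(np.abs(f - 0.3 / Ts))    # a frequency away from the spectral line
print("continuous part near f = 0.3/Ts:", psd[k],
      " analytic (A^2*Ts/4)*sinc^2(f*Ts):", (A**2 * Ts / 4) * np.sinc(f[k] * Ts) ** 2)
```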

22 6.2 Power Spectral Density
White-noise processes. Definition: A random process x(t) is said to be a white-noise process if the PSD is constant over all frequencies; that is,
Px(f) = N0/2,
where N0 is a positive constant. The autocorrelation function for the white-noise process is obtained by taking the inverse Fourier transform of the equation above. The result is
Rx(τ) = (N0/2) δ(τ).

23 6.2 Power Spectral Density
White Gaussian noise: n(t) is a random process (random signal). Gaussian: it has a Gaussian PDF (probability density function). White: it has a flat PSD (power spectral density), or equivalently an impulse-like autocorrelation.

24 6.2 Power Spectral Density
Bandpass white Gaussian noise: n(t) is a (narrow) bandpass random process (random signal) occupying a bandwidth of 2B Hz, while the corresponding baseband signal occupies B Hz. Gaussian: it has a Gaussian PDF (probability density function). White: it has a flat PSD (power spectral density) within the band, or equivalently a sinc-like autocorrelation.

25 6.2 Power Spectral Density
Measurement of PSD: (1) analog techniques; (2) numerical computation of the PSD. Note: in either case the measurement can only approximate the true PSD, because the measurement is carried out over a finite time interval instead of the infinite interval required by the definition.
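A short Python sketch of the numerical route (scipy.signal.welch is one common estimator; the bandlimited-noise record and parameters are assumed), keeping in mind that a finite record gives only an approximation of the true PSD.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)
fs = 1000.0
b, a = signal.butter(4, 100, fs=fs)                     # bandlimit white noise to ~100 Hz
x = signal.lfilter(b, a, rng.standard_normal(100_000))  # finite-length noise record

f, pxx = signal.welch(x, fs=fs, nperseg=2048)           # averaged, windowed periodograms
print(f[:3])
print(pxx[:3])
```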

26 Input-Output Relationships for Linear System
Theorem: if a wide-sense stationary random process x(t) is applied to the input of a time-invariant linear network with impulse response h(t), the output autocorrelation is
Ry(τ) = h(-τ) * h(τ) * Rx(τ),
and the output PSD is
Py(f) = |H(f)|^2 Px(f),
where H(f) = F{h(t)}. (Fig. 6-6, linear system: input x(t), X(f), Rx(τ), Px(f); network h(t), H(f); output y(t), Y(f), Ry(τ), Py(f).)
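A quick numerical check of Py(f) = |H(f)|^2 Px(f) in Python (a sketch with an assumed Butterworth filter and white-noise input; all parameters are arbitrary): the ratio of the output to the input PSD estimates should follow |H(f)|^2.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(5)
fs, n = 1000.0, 200_000
x = rng.standard_normal(n)                 # white noise input (flat Px)
b, a = signal.butter(4, 150, fs=fs)        # the linear network h(t), H(f)
y = signal.lfilter(b, a, x)                # output y(t)

f, pyy = signal.welch(y, fs=fs, nperseg=4096)
_, pxx = signal.welch(x, fs=fs, nperseg=4096)
_, h = signal.freqz(b, a, worN=f, fs=fs)

print(np.round(pyy[1:6] / pxx[1:6], 3))    # measured Py/Px at a few low frequencies
print(np.round(np.abs(h[1:6]) ** 2, 3))    # |H(f)|^2 at the same frequencies
```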

27 6.8 Matched Filters Matched filtering is a technique for designing a linear filter that minimizes the effect of noise while maximizing the signal. A general representation of a matched filter is illustrated in Fig. 6-15: the input r(t) = s(t) + n(t) is applied to the matched filter h(t), H(f), giving the output r0(t) = s0(t) + n0(t). The input signal is denoted by s(t) and the output signal by s0(t); similar notation is used for the noise. The signal is assumed to be (absolutely) time limited to the interval (0, T) and zero otherwise. The PSD, Pn(f), of the additive input noise n(t) is known, and, if the signal is present, its waveform is also known.

28 6.8 Matched Filters The matched-filter design criterion:
Find an h(t) or, equivalently, an H(f) so that the ratio of the instantaneous output signal power to the average output noise power,
(S/N)out = s0^2(t0) / <n0^2(t)>,
is a maximum at the sampling time t = t0. Note: the matched filter does not preserve the input signal waveshape. Its objective is to distort the input signal waveshape and filter the noise so that, at the sampling time t0, the output signal level will be as large as possible with respect to the rms output noise level.

29 6.8 Matched Filters Theorem: the matched filter is the linear filter that maximizes (S/N)out = s0^2(t0)/<n0^2(t)> and that has the transfer function
H(f) = K S*(f) exp(-j2πft0) / Pn(f),
where S(f) = F[s(t)] is the Fourier transform of the known input signal s(t) of duration T sec, Pn(f) is the PSD of the input noise, t0 is the sampling time when (S/N)out is evaluated, and K is an arbitrary real nonzero constant. (Fig. 6-15, matched filter: input r(t) = s(t) + n(t), filter h(t), H(f), output r0(t) = s0(t) + n0(t).)

30 6.8 Matched Filters Proof: the output signal at time t0 is
s0(t0) = ∫(-∞, ∞) H(f) S(f) exp(j2πft0) df.
The average power of the output noise is
<n0^2(t)> = ∫(-∞, ∞) |H(f)|^2 Pn(f) df.
Then
(S/N)out = |∫ H(f) S(f) exp(j2πft0) df|^2 / ∫ |H(f)|^2 Pn(f) df.
With the aid of the Schwarz inequality,
|∫ A(f) B(f) df|^2 <= ∫ |A(f)|^2 df · ∫ |B(f)|^2 df,
where A(f) and B(f) may be complex functions of the real variable f; equality is obtained only when A(f) = K B*(f).

31 6.8 Matched Filters Letting
A(f) = H(f) sqrt(Pn(f)) and B(f) = S(f) exp(j2πft0) / sqrt(Pn(f)),
then
(S/N)out <= ∫(-∞, ∞) (|S(f)|^2 / Pn(f)) df.
The maximum (S/N)out is obtained when H(f) is chosen such that equality is attained. This occurs when A(f) = K B*(f), or
H(f) = K S*(f) exp(-j2πft0) / Pn(f).

32 6.8 Matched Filters Results for White Noise
For white noise, Pn(f) = N0/2; thus we get
H(f) = (2K/N0) S*(f) exp(-j2πft0).
Theorem: when the input noise is white, the impulse response of the matched filter becomes
h(t) = C s(t0 - t),   (6-160)
where C is an arbitrary real positive constant, t0 is the time of the peak signal output, and s(t) is the known input signal waveshape. The impulse response of the matched filter (white-noise case) is simply the known signal waveshape that is "played backward" and translated by an amount t0. Thus, the filter is said to be "matched" to the signal.
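A minimal Python sketch of this white-noise result, with an assumed pulse shape s(t) on (0, T), C = 1 and t0 = T: the impulse response is just the time-reversed pulse, and the filter output sampled at t0 is approximately the signal energy Es plus a small noise term.

```python
import numpy as np

rng = np.random.default_rng(6)
fs, T, N0 = 100.0, 1.0, 0.01
t = np.arange(0, T, 1 / fs)
s = np.sin(2 * np.pi * 3 * t / T) * np.hanning(len(t))  # assumed known signal on (0, T)

h = s[::-1]                                              # h(t) = C*s(t0 - t), with C = 1, t0 = T
r = s + rng.normal(0.0, np.sqrt(N0 / 2 * fs), len(s))    # r(t) = s(t) + white noise

y = np.convolve(r, h) / fs             # discrete approximation of the convolution integral
k0 = len(s) - 1                        # output index corresponding to t = t0
Es = np.sum(s * s) / fs                # signal energy
print("matched-filter output at t0:", y[k0], "  signal energy Es:", Es)
```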

33 6.8 Matched Filters An important property: the actual value of (S/N)out obtained from the matched filter is
(S/N)out = 2Es/N0,  where Es = ∫[0, T] s^2(t) dt
is the energy of the input signal. This result states that (S/N)out depends on the signal energy and the PSD level of the noise, not on the particular signal waveshape that is used. It can also be written in other terms: assume that the input noise power is measured in a band that is W hertz wide, so that the input noise power is N0W, and that the signal has a duration of T seconds, so that the input signal power is Es/T. Then, with (S/N)in = (Es/T)/(N0W),
(S/N)out = 2TW (S/N)in.
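A small Monte Carlo sketch (assumed pulse shape and noise level) that checks (S/N)out = 2Es/N0 for the white-noise matched filter by measuring the output noise power at the sampling instant:

```python
import numpy as np

rng = np.random.default_rng(7)
fs, T, N0 = 100.0, 1.0, 0.02
t = np.arange(0, T, 1 / fs)
s = np.sin(2 * np.pi * 3 * t / T)          # assumed known pulse on (0, T)
Es = np.sum(s * s) / fs                    # signal energy

trials = 20_000
noise = rng.normal(0.0, np.sqrt(N0 / 2 * fs), (trials, len(s)))
n_out = noise @ s / fs                     # matched-filter noise output n0(t0), one per trial
snr_measured = Es**2 / np.mean(n_out**2)   # s0^2(t0) / <n0^2(t0)>
print("measured (S/N)out:", snr_measured, "  predicted 2*Es/N0:", 2 * Es / N0)
```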

34 6.8 Matched Filters Example 6-11 Integrate-and-Dump (Matched) filter
(Figure: (a) input signal s(t); (b) "backwards" signal s(-t); (c) matched-filter impulse response h(t) = s(t0 - t); (d) signal output of the matched filter, sampled at t0.)
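A sketch of the integrate-and-dump idea in Python (assumed amplitude, duration, and noise level): because the matched filter for a rectangular pulse is itself rectangular, the receiver simply integrates the received waveform over (0, T) and samples ("dumps") the result at t0 = T.

```python
import numpy as np

rng = np.random.default_rng(8)
fs, T, A = 100.0, 1.0, 1.0
n_samp = int(T * fs)
s = A * np.ones(n_samp)                    # rectangular pulse of amplitude A on (0, T)
r = s + rng.normal(0.0, 2.0, n_samp)       # noisy received waveform

out = np.cumsum(r) / fs                    # running integral of r(t)
print("integrator output at t0 = T:", out[-1], "  noise-free value A*T:", A * T)
```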

35 6.8 Matched Filters Fig.6-17

36 6.8 Matched Filters correlation processing
Theorem: for the case of white noise, the matched filter may be realized by correlating the input with s(t); that is,
r0(t0) = C ∫[0, T] r(t) s(t) dt,
where s(t) is the known signal waveshape and r(t) is the processor input, as illustrated in Fig. 6-18 (matched-filter realization by correlation processing).

37 6.8 Matched Filters Proof: the output of the matched filter at time t0 is
r0(t0) = ∫(-∞, ∞) r(λ) h(t0 - λ) dλ.
Because h(t) = C s(t0 - t) (6-160), we have h(t0 - λ) = C s(λ), so
r0(t0) = C ∫[0, T] r(λ) s(λ) dλ,
which completes the proof.
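The equivalence just proved is easy to check numerically; in this Python sketch (arbitrary waveform and noise, C = 1, t0 = T), sampling the matched-filter output at t0 and correlating r(t) with s(t) over (0, T) give the same number.

```python
import numpy as np

rng = np.random.default_rng(9)
fs, T = 200.0, 1.0
t = np.arange(0, T, 1 / fs)
s = np.sign(np.sin(2 * np.pi * 5 * t))     # assumed known waveform on (0, T)
r = s + rng.normal(0.0, 1.0, len(s))       # received signal plus white noise

matched_out = np.convolve(r, s[::-1])[len(s) - 1] / fs   # filter output sampled at t0
correlator_out = np.sum(r * s) / fs                      # correlation of r(t) with s(t) over (0, T)
print(matched_out, correlator_out)                       # identical up to rounding
```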

38 6.8 Matched Filters Example 6-12 Matched Filter for Detection of a BPSK signal
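Slide 38 gives only the example title, so as a stand-in (not the book's worked example; carrier frequency, bit duration, noise level, and sample rate are all assumed) here is a Python sketch that detects BPSK bits with the white-noise matched filter, i.e., by correlating each received bit interval with the carrier and deciding on the sign.

```python
import numpy as np

rng = np.random.default_rng(10)
fs, Tb, fc = 8000.0, 0.01, 1000.0            # sample rate, bit time, carrier (assumed)
t = np.arange(0, Tb, 1 / fs)
carrier = np.cos(2 * np.pi * fc * t)

bits = rng.integers(0, 2, 2000)
errors = 0
for b in bits:
    s = (1.0 if b else -1.0) * carrier       # BPSK: carrier phase 0 or 180 degrees
    r = s + rng.normal(0.0, 3.0, len(t))     # received bit interval in white Gaussian noise
    stat = np.sum(r * carrier)               # matched-filter / correlator output at t0 = Tb
    errors += int((stat > 0) != bool(b))     # decide bit 1 if stat > 0
print("bit error rate:", errors / len(bits))
```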

39 6.8 Matched Filters (Transversal Matched Filter)
We wish to find the set of transversal filter coefficients {ai; i = 1, 2, ..., N} such that the output peak-signal to average-noise-power ratio is maximized (Fig.: transversal matched filter).

40 6.8 Matched Filters (Transversal Matched Filter)
The output signal at time t = t0 is
s0(t0) = Σ(i = 1 to N) ai s(t0 - (i - 1)T),
where T is the delay between adjacent taps. Similarly, the output noise at time t = t0 is
n0(t0) = Σ(i = 1 to N) ai n(t0 - (i - 1)T),
and the average noise power is
<n0^2(t0)> = Σi Σj ai aj Rn((i - j)T).

41 6.8 Matched Filters (Transversal Matched Filter)
Thus the output-peak-signal to average-noise-power ratio is
(S/N)out = [Σi ai s(t0 - (i - 1)T)]^2 / [Σi Σj ai aj Rn((i - j)T)].
Using Lagrange's method of maximizing the numerator while constraining the denominator to be a constant, we get
Σ(j = 1 to N) Rn((i - j)T) aj = K s(t0 - (i - 1)T),  i = 1, 2, ..., N,   (6-174)
and
Rn a = K s
is the matrix notation of (6-174),

42 6.8 Matched Filters (Transversal Matched Filter)
where the known signal vector s, the known autocorrelation matrix Rn for the input noise, and the unknown transversal matched-filter coefficient vector a are given by
s = [s(t0), s(t0 - T), ..., s(t0 - (N - 1)T)]^T,
Rn = [Rn((i - j)T)],  an N × N matrix,
a = [a1, a2, ..., aN]^T.
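A Python sketch of solving the matrix equation Rn a = K s for the coefficient vector (K = 1), with an assumed exponential noise autocorrelation Rn(kT) = rho^|k| and an assumed sampled signal vector; with this choice of a, the achieved (S/N)out equals s^T Rn^{-1} s.

```python
import numpy as np

N = 8
rho = 0.6                                            # assumed noise correlation between adjacent taps
s = np.sin(np.pi * np.arange(1, N + 1) / (N + 1))    # assumed signal samples s(t0 - (i-1)T)
Rn = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))  # Toeplitz: Rn[i, j] = rho^|i-j|

a = np.linalg.solve(Rn, s)                           # transversal-filter coefficients (K = 1)
snr = (a @ s) ** 2 / (a @ Rn @ a)                    # output peak-signal / average-noise power
print("coefficients a:", np.round(a, 3))
print("(S/N)out:", snr, "  s^T Rn^{-1} s:", s @ np.linalg.solve(Rn, s))
```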

43 6.8 Matched Filters (Transversal Matched Filter)

44 6.8 Matched Filters (Transversal Matched Filter)

45 Homework

