Communication Theory I. Frigyes 2009-10/II. (hirkelm01bEnglish)


1 Communication Theory I. Frigyes 2009-10/II.

2 Frigyes: Hírkelm2 hirkelm01bEnglish

3 Frigyes: Hírkelm3 Topics (0. Math. introduction: stochastic processes, complex envelope) 1. Basics of decision and estimation theory 2. Transmission of digital signals over analog channels: noise effects 3. Transmission of digital signals over analog channels: dispersion effects 4. Transmission of analog signals – analog modulation methods (?) 5. Channel characterization: wireless channels, optical fibers 6. Basics of digital signal processing: sampling, quantization, signal representation 7. Theoretical limits in information transmission 8. Basics of coding theory 9. Correcting transmission errors: error-correcting coding; adaptive equalization 10. Spectral efficiency – efficient digital transmission methods

4 (0. Stochastic processes, the complex envelope)

5 Frigyes: Hírkelm5 Stochastic processes Also called random waveforms. 3 different meanings: As a function of the outcome ξ: a series of infinitely many random variables ordered in time; As a function of time t: one member of a family of irregularly varying time functions; As a function of both t and ξ: one member of a family of time functions drawn at random

6 Frigyes: Hírkelm6 Stochastic processes – example (figure: three sample functions f(t,ξ1), f(t,ξ2), f(t,ξ3) plotted versus t, with the random variables f(t1,ξ), f(t2,ξ), f(t3,ξ) marked at three fixed time instants)

7 Frigyes: Hírkelm7 Stochastic processes: how to characterize them? According to the third definition, and with some probability distribution. As the number of random variables is infinite (in fact of continuum cardinality), with their joint distribution (or density). Taking these into account:

8 Frigyes: Hírkelm8 Stochastic processes: how to characterize them? (Say: density) First: prob. density of x(t); second: joint density at t1, t2; nth: n-fold joint density. The stochastic process is completely characterized if there is a rule to compose the density of any order (even for n→∞). (We'll see processes depending on 2 parameters.)

9 Frigyes: Hírkelm9 Stochastic processes: how to characterize them? Comment: although, strictly speaking, the process (a function of t and ξ) and one sample function (a function of t belonging to, say, ξ16) are distinguished, we'll not always make this distinction.

10 Frigyes: Hírkelm10 Stochastic processes: how to characterize them? Example: semi-random binary signal: values ±1 (P0 = P1 = 0.5); changes only at t = k·T. First density: px(X; t) = ½δ(X−1) + ½δ(X+1). Second (joint) density: depends on whether the two instants fall in the same time-slot (next slide).

11 Frigyes: Hírkelm11 Continuing the example: In the same time-slot the two samples are equal, so the joint density is concentrated on the 45° line X1 = X2; in two distinct time-slots the samples are independent (four equal probability masses at (±1, ±1)).

12 Frigyes: Hírkelm12 Stochastic processes: the Gaussian process A stoch. proc. is Gaussian if its nth density is that of an n-dimensional vector random variable; m is the expected value vector, K the covariance matrix. The nth density can be produced if m(ti) and K(ti, tk) are given.

13 Frigyes: Hírkelm13 Stochastic processes: the Gaussian process An interesting property of Gaussian processes (more precisely: of Gaussian variables): if they are uncorrelated, they are also independent. These can be realizations of one process at different times.

14 Frigyes: Hírkelm14 Stochastic processes: stationary processes A process is stationary if it does not change (much) as time passes. E.g. the semi-random binary signal is (almost) like that. Phone: to transmit speech a fixed frequency band is sufficient (always, for everybody). (What could we do if this didn't hold?) Etc.

15 Frigyes: Hírkelm15 Stochastic processes: stationary processes Precise definitions of what is almost unchanged: A process is stationary (in the strict sense) if its distribution function of any order is invariant to any time shift τ, at any time. It is stationary in order n if the first n distributions are stationary. E.g.: the example seen above is first order stationary. In general: if stationary in order n, it is also stationary in any lower order.

16 Frigyes: Hírkelm16 Stochastic processes: stationary processes Comment: proving strict-sense stationarity is difficult. But: if a Gaussian process is second order stationary (i.e. in this case: if K(t1,t2) does not change when time is shifted) it is strict sense (i.e. any order) stationary. As: if we know K(t1,t2), the nth density can be computed (for any n).

17 Frigyes: Hírkelm17 Stochastic processes: stationarity in the wide sense Wide sense stationary: if the correlation function (to be defined) is unchanged when time is shifted. A few definitions: a process is called a Hilbert process if E[x²(t)] < ∞. (That means: its instantaneous power is finite.)

18 Frigyes: Hírkelm18 Stochastic processes: wide sense stationary processes (Auto)correlation function of a Hilbert process: R(t1,t2) = E[x(t1)x(t2)]. The process is wide sense stationary if the expected value is time-invariant and R depends only on τ = t2 − t1, for any time and any τ.

19 Frigyes: Hírkelm19 Stochastic processes: wide sense – strict sense stationary processes If a process is strict-sense stationary then it is also wide-sense stationary. If it is at least second order stationary: then also wide sense stationary.

20 Frigyes: Hírkelm20 Stochastic processes: wide sense – strict sense stationary processes Further: a wide-sense stationary process need not be strict-sense stationary (in any order). Exception: the Gaussian process – if wide sense stationary, it is also strict sense stationary.

21 Frigyes: Hírkelm21 Stochastic processes: once again on binary transmission As seen: only first order stationary (Ex = 0). Correlation: if t1 and t2 are in the same time-slot, R(t1,t2) = 1; if in different slots, R(t1,t2) = 0.

22 Frigyes: Hírkelm22 Stochastic processes: once again on binary transmission The semi-random binary transmission can be transformed into a random one by introducing a dummy random variable e – an epoch distributed uniformly in (0, T), independent of x – that shifts the slot boundaries.

23 Frigyes: Hírkelm23 Stochastic processes: once again on binary transmission Correlation: if |t1−t2| > T, the two samples are surely in different slots (as e ≤ T), so R = 0; if |t1−t2| ≤ T, they fall in the same slot with probability 1 − |t1−t2|/T, so R(t1,t2) = 1 − |t1−t2|/T.

24 Frigyes: Hírkelm24 Stochastic processes: once again on binary transmission I.e.: R(τ) is a triangle between −T and T: R(τ) = 1 − |τ|/T for |τ| ≤ T, and 0 elsewhere.
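The triangular correlation is easy to check numerically. Below is a minimal Monte-Carlo sketch (NumPy); the slot length T, the ±1 levels and the random epoch e are from the slides, all other names and parameter values are our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0           # slot length
n_slots = 2000    # symbol slots per realization
dt = T / 10       # sample spacing

def realization():
    """One sample function: levels ±1, equiprobable, slot boundaries
    shifted by the random epoch e ~ U(0, T)."""
    bits = rng.choice([-1.0, 1.0], size=n_slots)
    e = rng.uniform(0.0, T)
    t = np.arange(0.0, (n_slots - 2) * T, dt)
    return bits[np.floor((t + e) / T).astype(int)]

# Estimate R(tau) at a few lags by averaging over many realizations
lags = np.array([0.0, 0.2, 0.5, 0.8, 1.0, 1.5]) * T
R_est = np.zeros_like(lags)
n_real = 200
for _ in range(n_real):
    x = realization()
    for i, tau in enumerate(lags):
        k = int(round(tau / dt))
        R_est[i] += np.mean(x[:x.size - k] * x[k:]) if k else np.mean(x * x)
R_est /= n_real

R_theory = np.maximum(0.0, 1.0 - lags / T)   # the triangle on (-T, T)
```

The estimated points follow the triangle closely, and vanish for |τ| > T as the slide states.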

25 Frigyes: Hírkelm25 Stochastic processes: other types of stationarity Given two processes, x and y: these are jointly stationary if their joint distributions are all invariant to any time shift τ. Thus a complex process is stationary in the strict sense if its real and imaginary parts x and y are jointly stationary. A process is periodic (or cyclostationary) if its distributions are invariant to time shifts kT.

26 Frigyes: Hírkelm26 Stochastic processes: other types of stationarity Cross-correlation: Rxy(t1,t2) = E[x(t1)y(t2)]. Two processes are jointly stationary in the wide sense if their cross-correlation is invariant to any time shift.

27 Frigyes: Hírkelm27 Stochastic processes: comment on complex processes Appropriate definition of the correlation for these: R(t1,t2) = E[x(t1)x*(t2)]. A complex process is stationary in the wide sense if both its real and imaginary parts are wide sense stationary and they are jointly wide sense stationary as well.

28 Frigyes: Hírkelm28 Stochastic processes: continuity There are various definitions. Mean square continuity: E[(x(t+ε) − x(t))²] → 0 as ε → 0. That is valid iff the correlation function is continuous.

29 Frigyes: Hírkelm29 Stochastic processes: stochastic integral Let x(t) be a stoch. proc. It may be that the Riemann integral exists for all realizations: s = ∫ x(t)dt. Then s is a random variable (RV). But if not, we can define an RV as the (e.g. mean square) limit of the integral-approximating sums.

30 Frigyes: Hírkelm30 Stochastic processes: stochastic integral For this: E[s] = ∫ E[x(t)]dt, and the variance σs² is the double integral of the covariance function over both time variables.

31 Frigyes: Hírkelm31 Stochastic processes: stochastic integral – comment In σs² the integrand is the (auto)covariance function: C(t1,t2) = R(t1,t2) − E[x(t1)]E[x(t2)]. This depends only on t1 − t2 = τ if x is stationary (at least in the wide sense).

32 Frigyes: Hírkelm32 Stochastic processes: time average The integral is needed – among others – to define the time average. The time average of a process is its DC component; the time average of its square is the mean power. Definition: ⟨x⟩ = lim T→∞ (1/2T)∫−T..T x(t)dt.

33 Frigyes: Hírkelm33 Stochastic processes: time average In general this is a random variable. It would be nice if it were equal to the statistical average; this is really the case if the variance of the time average vanishes. Similarly we can define the time-average correlation ⟨x(t)x(t+τ)⟩.

34 Frigyes: Hírkelm34 Stochastic processes: time average This is in general also an RV, but it is equal to the correlation function if its variance vanishes as well. If these equalities hold, the process is called ergodic. The process is mean square ergodic if the time averages converge to the ensemble averages in the mean square sense.
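As a sanity check on ergodicity, take the classic WSS example x(t) = A·cos(ωt + φ) with a uniformly distributed random phase (our own choice of test process, not from the slides): its ensemble mean is 0 and its mean power A²/2, and the time averages of a single realization reproduce the same values:

```python
import numpy as np

rng = np.random.default_rng(1)
A = 2.0                                   # amplitude
f = 5.0                                   # frequency, Hz
t = np.arange(0.0, 100.0, 1e-3)           # 100 s = 500 full periods

phi = rng.uniform(0.0, 2.0 * np.pi)       # one randomly drawn realization
x = A * np.cos(2.0 * np.pi * f * t + phi)

time_avg = x.mean()            # DC component; the ensemble mean is 0
mean_power = (x**2).mean()     # mean power; the ensemble value is A**2 / 2
```

Whatever φ is drawn, the time averages match the ensemble values, which is exactly the ergodicity claim of the slide.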

35 Frigyes: Hírkelm35 Stochastic processes: spectral density The spectral density of a process is, by definition, the Fourier transform of the correlation function: S(ω) = ∫ R(τ)e^(−jωτ)dτ.

36 Frigyes: Hírkelm36 Stochastic processes: spectral density A property: R(0) = (1/2π)∫S(ω)dω, the mean power of the process. Consequently this integral is ≥ 0 (we'll see: S(ω) ≥ 0 itself).

37 Frigyes: Hírkelm37 Spectral density and linear transformation As known for time functions, the output is the convolution of the input with the impulse response: y(t) = h(t) * x(t). (Block diagram: x(t) → FILTER h(t) → y(t).)

38 Frigyes: Hírkelm38 Spectral density and linear transformation Comment: h(t<0) ≡ 0 (why?); and h(t) = F⁻¹[H(ω)]. It is plausible that the same holds for stochastic processes. Based on that it can be shown that Sy(ω) = |H(ω)|²·Sx(ω) (and a corresponding relation holds for the correlation functions).
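The relation Sy(ω) = |H(ω)|²·Sx(ω) can be verified numerically with SciPy; the filter type, cutoff and all other parameters below are arbitrary choices of ours:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
fs = 1000.0                        # sampling rate, Hz
x = rng.standard_normal(1 << 18)   # white noise: S_x is (nearly) flat

b, a = signal.butter(4, 100.0, fs=fs)   # a lowpass filter, H(omega)
y = signal.lfilter(b, a, x)

f, Sx = signal.welch(x, fs=fs, nperseg=4096)   # estimated input spectrum
_, Sy = signal.welch(y, fs=fs, nperseg=4096)   # estimated output spectrum
_, H = signal.freqz(b, a, worN=f, fs=fs)       # frequency response on same grid

band = (f > 5.0) & (f < 150.0)     # avoid DC and the deep stopband
ratio = Sy[band] / (np.abs(H[band])**2 * Sx[band])
```

The ratio stays close to 1 across the band, and the estimated densities are nonnegative, in line with the S(ω) ≥ 0 argument of the next slide.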

39 Frigyes: Hírkelm39 Spectral density and linear transformation Further: S(ω) ≥ 0 (at all frequencies). For if not, there would be a domain (ω1, ω2) where S(ω) < 0; an ideal bandpass filter H(ω) passing only this domain would give an output whose power – the integral of Sy(ω) – is negative, which is impossible.

40 Frigyes: Hírkelm40 Spectral density and linear transformation S(ω) is the spectral density (per rad/s): filtering x with a narrow ideal bandpass filter H(ω) around ω shows that the power in a band Δω is S(ω)·Δω/2π.

41 Frigyes: Hírkelm41 Modulated signals – the complex envelope In previous studies we've seen that in radio and optical transmission one parameter of a sinusoidal carrier is influenced by (e.g. made proportional to) the modulating signal. A general modulated signal: x(t) = d(t)·cos(ωc·t + φ(t)).

42 Frigyes: Hírkelm42 Modulated signals – the complex envelope Here d(t) and/or φ(t) carries the information – e.g. they are in a linear relationship with the modulating signal. Another description method (quadrature form): x(t) = a(t)·cos ωc·t − q(t)·sin ωc·t; d, φ, a and q are real time functions – deterministic or realizations of a stoch. proc.

43 Frigyes: Hírkelm43 Modulated signals – the complex envelope Their relationship: a = d·cos φ, q = d·sin φ; conversely d = √(a² + q²), φ = arctan(q/a). As known, x(t) can also be written as: x(t) = Re{[a(t) + j·q(t)]·e^(jωc·t)}.

44 Frigyes: Hírkelm44 Modulated signals – the complex envelope Here a + jq is the complex envelope. Question: when and how to apply it. To begin with: the Fourier transform of a real function is conjugate symmetric: X(−ω) = X*(ω). But if so, X(ω > 0) describes the signal completely: knowing it we can form the ω < 0 part and retransform.

45 Frigyes: Hírkelm45 Modulated signals – the complex envelope Thus instead of X(ω) we can take X̊(ω) = 2·U(ω)·X(ω) (U: unit step), i.e. twice the positive-frequency part. The relevant time function is obtained through the „Hilbert” filter.

46 Frigyes: Hírkelm46 Modulated signals – the complex envelope We can write: x̊(t) = x(t) + j·x̂(t). The inverse Fourier transform involved is proportional to 1/t, so the imaginary part x̂(t) – the convolution of x(t) with 1/(πt) – is the so-called Hilbert transform of x(t).

47 Frigyes: Hírkelm47 Modulated signals – the complex envelope The function x̊(t) introduced now is the analytic signal assigned to x(t) (as it is an analytic function of the complex variable z = t + ju). An analytic signal can be assigned to any (baseband or modulated) function; the relationship between the time function and the analytic signal is x(t) = Re[x̊(t)].

48 Frigyes: Hírkelm48 Modulated signals – the complex envelope It is applicable to modulated signals: the analytic signal of cos ωc·t is e^(jωc·t); similarly, that of sin ωc·t is −j·e^(jωc·t). So if the quadrature components a(t), q(t) of the modulated signal are band limited and their band limiting frequency is < ωc/2π (narrow band signal) then x̊(t) = [a(t) + j·q(t)]·e^(jωc·t). NB: modulation is a linear operation in a, q: a frequency displacement.

49 Frigyes: Hírkelm49 Modulated signals – the complex envelope Thus the complex envelope x̃(t) = a(t) + j·q(t) determines the modulated signal uniquely; in the time domain x(t) = Re[x̃(t)·e^(jωc·t)]. Comment: in spite of its name, x̃ can be complex (X(ω) is not conjugate symmetric around ωc). Comment 2: if the bandwidth B > fc, x̊ is not analytic, and its real part does not define the modulated signal. Comment 3: a and q can be independent signals (QAM) or can be related (FM or PM).
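The recovery of a(t) and q(t) through the analytic signal can be sketched with scipy.signal.hilbert; the message waveforms below are our own example choices, not from the slides:

```python
import numpy as np
from scipy.signal import hilbert

fs = 10_000.0                                  # sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
fc = 1000.0                                    # carrier frequency
a = 1.0 + 0.5 * np.cos(2 * np.pi * 5 * t)      # in-phase component a(t)
q = 0.2 * np.sin(2 * np.pi * 3 * t)            # quadrature component q(t)
x = a * np.cos(2 * np.pi * fc * t) - q * np.sin(2 * np.pi * fc * t)

x_analytic = hilbert(x)                        # analytic signal x + j*Hilbert{x}
x_tilde = x_analytic * np.exp(-2j * np.pi * fc * t)   # complex envelope a + jq
```

Since the messages are narrow band compared with fc, the real and imaginary parts of x̃ reproduce a(t) and q(t), just as slide 48 claims.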

50 Frigyes: Hírkelm50 Modulated signals – the complex envelope In the frequency domain? For the analytic signal we saw X̊(ω) = 2U(ω)X(ω); the spectrum of the complex envelope is this shifted down to baseband: X̃(ω) = X̊(ω + ωc). (Figure: X(ω), X̊(ω) and X̃(ω).)

51 Frigyes: Hírkelm51 Modulated signals – the complex envelope A linear transformation – a bandpass filter H(ω) – acts on x̃(t) as a lowpass filter M(ω). If H(ω) is asymmetric around ωc, the output x̃(t) becomes complex – i.e. there is crosstalk between a(t) and q(t) (there was no sine component – now there is). (Figure: X(ω) = F[m(t)cos ωc·t] and the corresponding X̊(ω), X̃(ω), Y(ω), Y̊(ω), Ỹ(ω).)

52 Frigyes: Hírkelm52 Modulated signals – the complex envelope, stochastic processes Analytic signal and complex envelope are defined for deterministic signals It is possible for stochastic processes as well No detailed discussion One point:

53 Frigyes: Hírkelm53 Modulated signals – the complex envelope; stochastic processes x(t) is stationary (Rx independent of t) iff the correlation of the complex envelope depends only on τ and E[x̃(t)x̃(t+τ)] = 0.

54 Frigyes: Hírkelm54 Narrow band (white) noise White noise is of course not narrow band. Usually it is made narrow band by a (fictive) bandpass filter H(ω) applied to noise of spectral density Sn(ω) = N0/2.

55 Frigyes: Hírkelm55 Narrow band (white) noise properties: the noise can be written as n(t) = nc(t)·cos ωc·t − ns(t)·sin ωc·t, where the quadrature components nc, ns are lowpass processes of equal power.
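A numerical sketch of these properties, with a Butterworth bandpass standing in for the fictive filter (all parameter values are our own choices):

```python
import numpy as np
from scipy import signal
from scipy.signal import hilbert

rng = np.random.default_rng(8)
fs = 10_000.0
fc = 1000.0
w = rng.standard_normal(1 << 17)                      # "white" noise
b, a = signal.butter(4, [900.0, 1100.0], btype="bandpass", fs=fs)
x = signal.lfilter(b, a, w)[2000:2000 + (1 << 16)]    # drop the filter transient

t = np.arange(x.size) / fs
env = hilbert(x) * np.exp(-2j * np.pi * fc * t)
nc, ns = env.real, env.imag       # quadrature components n_c(t), n_s(t)

p_x = np.mean(x**2)               # narrowband noise power
p_c = np.mean(nc**2)              # power of n_c: should equal p_x
p_s = np.mean(ns**2)              # power of n_s: should equal p_x
rho = np.mean(nc * ns) / p_x      # same-instant cross-correlation: near 0
```

The two quadrature components come out with (approximately) the same power as the bandpass noise itself, and they are nearly uncorrelated at the same instant.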

56 1. Basics of decision theory and estimation theory

57 Frigyes: Hírkelm57 Detection-estimation problems in communications 1. Digital communication: one among signals known (by the receiver) is sent – in presence of noise. E.g.: (baseband binary communication). Decide: which was sent? DIGITAL SOURCE → Transmission channel → SINK

58 Frigyes: Hírkelm58 Detection-estimation problems in communications 2. (Otherwise) known signal has unknown parameter(s) (their statistics are known). Same block schematic; example: non-coherent FSK

59 Frigyes: Hírkelm59 Detection-estimation problems in communications Other example: non-coherent FSK over a non-selective Rayleigh fading channel

60 Frigyes: Hírkelm60 Detection-estimation problems in communications 3. Signal shape undergoes random changes. Example: antipodal signal set in very fast fading. DIGITAL SOURCE → Transmission channel T(t) → SINK

61 Frigyes: Hírkelm61 Detection-estimation problems in communications 4. Analog radio communications: one parameter of the carrier is proportional to the time-continuous modulating signal. E.g.: analog FM; estimate: m(t). Or: digital transmission over a frequency-selective fading channel; for the decision, h(t) must be known (i.e. estimated).

62 Frigyes: Hírkelm62 Basics of decision theory Simplest example: simple binary transmission; the decision is based on N independent samples. Model: SOURCE (H0, H1) → CHANNEL (only statistics are known) → OBSERVATION SPACE (OS) → DECIDER (decision rule: H0? H1?) → Ĥ. Comment: here ˆ has nothing to do with the Hilbert transform.

63 Frigyes: Hírkelm63 Basics of decision theory Two hypotheses (H0 and H1). Observation: N samples → the OS is N-dimensional; rT = (r1, r2, …, rN). Decision: which was sent. Results: 4 possibilities: 1. H0 sent & Ĥ=H0 (correct) 2. H0 sent & Ĥ=H1 (erroneous) 3. H1 sent & Ĥ=H1 (correct) 4. H1 sent & Ĥ=H0 (erroneous)

64 Frigyes: Hírkelm64 Bayes decision Bayes decision: a.) the a-priori probabilities P0, P1 of sending H0 or H1 are known; b.) each decision has some cost Cik (we decide in favor of i while k was sent); c.) it is sure that a false decision is more expensive than a correct one: C10 > C00, C01 > C11.

65 Frigyes: Hírkelm65 Bayes decision d.) decision rule: the average cost (the so-called risk, K) should be minimal. (Figure: the OS – the domain of r – partitioned into the decision regions Z1 („H1”) and Z0 („H0”); two conditional pdfs, pr|H1(R|H1) and pr|H0(R|H0), correspond to each point.)

66 Frigyes: Hírkelm66 Bayes decision Question: how to partition the OS in order to get minimal K? For that, write K in detail as the cost-weighted sum of the four outcome probabilities. As some decision is always taken, the integral of each conditional pdf over Z0 ∪ Z1 is 1, and so K can be expressed with integrals over Z1 only, plus constant terms.

67 Frigyes: Hírkelm67 Bayes decision From that: terms 1 and 2 are constant, and both integrands are > 0. Thus assign to Z1 the points where the first integrand is larger, and to Z0 those where the second is.

68 Frigyes: Hírkelm68 Bayes decision Where P1(C01 − C11)·pr|H1(R|H1) > P0(C10 − C00)·pr|H0(R|H0) we decide in favor of H1; elsewhere the decision is H0.

69 Frigyes: Hírkelm69 Bayes decision It can also be written: decide for H1 if pr|H1(R|H1)/pr|H0(R|H0) > P0(C10 − C00)/[P1(C01 − C11)]; otherwise for H0. The left-hand side is the likelihood ratio, Λ(R); the right-hand side is (from a certain aspect) the threshold, η. Comment: Λ depends only on the realization of r (on what we measured), η only on the a-priori probabilities and the costs.

70 Frigyes: Hírkelm70 Example of Bayesian decision H1: constant voltage m + Gaussian noise; H0: Gaussian noise only (notation: φ(r; mr, σ²) for the Gaussian pdf). Decision: based on N independent samples of r. At sample #i the conditional pdfs are φ(Ri; m, σ²) and φ(Ri; 0, σ²), resulting in Λ(R) = Πi φ(Ri; m, σ²)/φ(Ri; 0, σ²).

71 Frigyes: Hírkelm71 Example of Bayesian decision Its logarithm is ln Λ(R) = (m/σ²)·Σ Ri − N·m²/(2σ²), resulting in the threshold: decide H1 if Σ Ri > (σ²/m)·ln η + N·m/2.

72 Frigyes: Hírkelm72 Comments on the example 1. The threshold contains known quantities only, independent of the measured values. 2. The result depends only on the sum of the Ri – we have to know only that: the so-called sufficient statistic l(R) = Σ Ri. 2.a As in this example: whatever the dimension of the OS, l(R) is always 1-dimensional, i.e. „1 coordinate” – the other coordinates are independent of the hypothesis.

73 Frigyes: Hírkelm73 SOURCE H0H0 H1H1 CHANNEL (Only statistics known) OBSERVATION SPACE (OS) Decision rule DECISION SPACE (DS) DECIDER Ĥ Thus the decision process

74 Frigyes: Hírkelm74 Comments on the example 3. Special case: C00 = C11 = 0 and C01 = C10 = 1 (i.e. the risk is the probability of erroneous decision). If P0 = P1 = 0.5, then η = 1 and the threshold on Σ Ri is N·m/2.

75 Frigyes: Hírkelm75 Another example, for home Similar, but now the signal is not constant but Gaussian noise with variance σS². I.e. H1: Π φ(Ri; 0, σS² + σ²), H0: Π φ(Ri; 0, σ²). Questions: threshold; sufficient statistic.

76 Frigyes: Hírkelm76 Third example – discrete Given two Poisson sources with different expected values; which was sent? Remember the Poisson distribution: P(n = k) = (mᵏ/k!)·e^(−m). Hypotheses: H1: mean m1, H0: mean m0.

77 Frigyes: Hírkelm77 Third example – discrete Likelihood ratio: Λ(n) = (m1/m0)ⁿ·e^(−(m1−m0)). Decision rule (m1 > m0): decide H1 if n > (ln η + m1 − m0)/ln(m1/m0). For the sake of precision: a rule is also needed for the case of equality.
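A quick Monte-Carlo check of the Poisson rule; the means, priors and trial count below are our own choices:

```python
import numpy as np
from math import log

rng = np.random.default_rng(4)
m0, m1 = 2.0, 6.0            # Poisson means under H0 and H1
eta = 1.0                    # equal priors, error-probability costs

# Lambda(n) = (m1/m0)**n * exp(-(m1 - m0)) > eta  <=>  n > gamma
gamma = (log(eta) + m1 - m0) / log(m1 / m0)

n_trials = 50_000
h = rng.integers(0, 2, n_trials)             # true hypothesis per trial
n = rng.poisson(np.where(h == 1, m1, m0))    # observed counts
decision = (n > gamma).astype(int)
p_err = np.mean(decision != h)
```

With these numbers the threshold gamma ≈ 3.64, so the rule decides H1 for counts of 4 and above; the simulated error rate matches the value obtained from the two Poisson tail sums (≈ 0.147).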

78 Frigyes: Hírkelm78 Comment A possible situation: the a-priori probabilities are not known. A possible method then: compute the maximal K (as a function of Pi) and choose the decision rule which minimizes that (so-called minimax decision). (Note: this is not optimal at any particular Pi.) But we don't deal with this in detail.

79 Frigyes: Hírkelm79 Probability of erroneous decision For that: compute the relevant integrals. In example 1 (with N = 1): the Gaussian pdfs must be integrated over the hatched domains beyond the threshold; d0 and d1 denote the distances of the threshold from the two means.

80 Frigyes: Hírkelm80 Probability of erroneous decision Thus Pe = P0·Q(d0/σ) + P1·Q(d1/σ), where Q is the Gaussian tail integral. If ln η = 0: d0 = d1 = m/2 (the threshold is at the point of intersection) and Pe = Q(m/2σ). Comment: with N samples Pe = Q(√N·m/(2σ)).
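The whole chain – sufficient statistic, threshold and the resulting Q-function error probability – can be simulated in a few lines; the signal level, noise std and sample count below are our own choices:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(3)
m, sigma, N = 1.0, 1.0, 4       # signal level, noise std, samples per decision
eta = 1.0                       # equal priors, error-probability costs

# ln Lambda = (m/sigma^2)*sum(r) - N*m^2/(2*sigma^2)  ><  ln(eta)
# => compare the sufficient statistic sum(r) with the threshold N*m/2
thresh = (sigma**2 / m) * np.log(eta) + N * m / 2.0

n_trials = 100_000
h = rng.integers(0, 2, n_trials)                       # sent hypothesis
r = rng.normal(0.0, sigma, (n_trials, N)) + h[:, None] * m
decision = (r.sum(axis=1) > thresh).astype(int)
p_err = np.mean(decision != h)

def Q(x):
    """Gaussian tail probability."""
    return 0.5 * erfc(x / sqrt(2.0))

p_theory = Q(sqrt(N) * m / (2.0 * sigma))    # Q(sqrt(N)*m / (2*sigma))
```

The simulated error rate agrees with the Q-function formula of the slide.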

81 Frigyes: Hírkelm81 Decision with more than 2 hypotheses M possible outcomes (e.g. non-binary digital communication – we'll see why). Like before: each decision has a cost; their average is the risk; with a Bayes decision this is minimized. Like before: the decision rule is a partitioning of the Observation Space.

82 Frigyes: Hírkelm82 Decision with more than 2 hypotheses Like before, the risk is the cost-weighted sum of all M² outcome probabilities. From that (with M = 3) the decision regions follow.

83 Frigyes: Hírkelm83 Decision with more than two hypotheses Likelihood-ratio series: Λi(R) = pr|Hi(R|Hi)/pr|H0(R|H0), i = 1 … M−1. Decision rule(s): compare the appropriately weighted likelihood ratios.

84 Frigyes: Hírkelm84 Decision with more than two hypotheses (M = 3) This defines 3 straight lines in the 2D decision space spanned by Λ1(R) and Λ2(R), bounding the regions H0, H1, H2.

85 Frigyes: Hírkelm85 Example: special case – error probability The average error probability is minimized. Then we get the decision regions bounded by Λ1 = P0/P1, Λ2 = P0/P2 and Λ2 = (P1/P2)·Λ1 in the (Λ1, Λ2) plane.

86 Frigyes: Hírkelm86 The previous, detailed (figure: the decision regions H0, H1, H2 in the (Λ1, Λ2) plane, with the boundaries P0/P1, P0/P2 and Λ2 = (P1/P2)·Λ1 shown in detail).

87 Frigyes: Hírkelm87 Example – special case, error probability: a-posteriori probabilities Very important, based on the preceding: we have to compare the products Pi·pr|Hi(R|Hi). If we divide each by pr(R) and apply the Bayes theorem, we get the probabilities P(Hi|R) – these are the a-posteriori probabilities.

88 Frigyes: Hírkelm88 Example – special case, error probability: a-posteriori probabilities I.e. we have to decide on the maximal a-posteriori probability. Rather plausible: the probability of correct decision is highest if we decide on what is the most probable.

89 Frigyes: Hírkelm89 Bayes theorem (conditional probabilities) For discrete variables: P(A|B) = P(B|A)·P(A)/P(B). For continuous variables: pa|b(A|B) = pb|a(B|A)·pa(A)/pb(B). For a discrete, b continuous: P(A|b) = pb|A(b|A)·P(A)/pb(b).

90 Frigyes: Hírkelm90 Comments 1. The Observation Space is N-dimensional (N is the number of observations). The Decision Space is M−1 dimensional (M is the number of hypotheses). 2. Explicitly we dealt only with independent samples; the investigation is much more complicated if these are correlated. 3. We'll see that in digital transmission the case N > 1 is often not very important.

91 Frigyes: Hírkelm91 Basics of estimation theory: parameter estimation The task is to estimate unknown parameter(s) of an analog or digital signal. Examples: voltage measurement in noise; digital signal: phase measurement

92 Frigyes: Hírkelm92 Basics of estimation theory: parameter estimation Frequency estimation – synchronizing an inaccurate oscillator. Power estimation of interfering signals – interference cancellation (via the antenna or multiuser detection). SNR estimation. Etc.

93 Frigyes: Hírkelm93 Basics of estimation theory: parameter estimation The parameter can be: i. a random variable (its pdf pa(A) is assumed to be known) or ii. an unknown deterministic value. Model: PARAMETER SPACE → mapping to the OS → OBSERVATION SPACE (OS) → estimation rule → ESTIMATION SPACE (domain of the estimated parameter). i. means we have some a-priori knowledge; ii. means we have no a-priori knowledge about its magnitude.

94 Frigyes: Hírkelm94 Example – details (estimation 1) We want to measure a voltage a. We know that a is a realization of a Gaussian RV, and that Gaussian noise n with pdf φ(N; 0, σn²) is added; the observable parameter is r = a + n. Mapping of the parameter to the OS: pr|a(R|A) = φ(R − A; 0, σn²).

95 Frigyes: Hírkelm95 Parameter estimation – the parameter is a RV Similar principle: the estimation has some cost; its average is the risk; we want to minimize that. Realization of the parameter: a. Observation vector: R. Estimated value: â(R). The cost is in the general case a 2-variable function: C(a, â). The error of the estimation is ε = a − â(R); often the cost depends only on that: C = C(ε).

96 Frigyes: Hírkelm96 Parameter estimation – the parameter is a RV Examples of cost functions: squared error, absolute error, uniform cost. The risk is K = E[C(a, â)], a double integral over A and R of the cost weighted by the joint pdf, which can be written pa,r(A,R) = pa|r(A|R)·pr(R).

97 Frigyes: Hírkelm97 Parameter estimation – the parameter is a RV Applying this to the square cost function (subscript ms: mean square): K = min where the inner integral ∫(A − â)²·pa|r(A|R)dA is minimal (as the outer integrand is i. positive and ii. does not depend on A); this holds where its derivative with respect to â is zero.

98 Frigyes: Hírkelm98 Parameter estimation – the parameter is a RV The second integral = 1, thus âms(R) = ∫A·pa|r(A|R)dA, i.e. the a-posteriori expected value of a. (According to the previous definition, a-posteriori knowledge is what is gained from the measurement/investigation.)

99 Frigyes: Hírkelm99 Comment Coming back to the risk: the inner integral is now the conditional variance σa²(R). Thus Kms = ∫σa²(R)·pr(R)dR.

100 Frigyes: Hírkelm100 Parameter estimation – the parameter is a RV Another cost function (un: uniform): C = 0 in a band of width Δ around the true value, 1 elsewhere. The risk is then minimal if the estimate is at the maximum of the conditional pdf (if Δ is small): maximum a-posteriori – MAP estimation.

101 Frigyes: Hírkelm101 Parameter estimation – the parameter is a RV Then the derivative of the log conditional pdf = 0 (the so-called MAP equation). Applying the Bayes theorem, the log conditional pdf can be written: ln pa|r(A|R) = ln pr|a(R|A) + ln pa(A) − ln pr(R).

102 Frigyes: Hírkelm102 Parameter estimation – the parameter is a RV The first term is the statistical relationship between A and R; the second is the a-priori knowledge; the last term does not depend on A. Thus what must be maximized is ln pr|a(R|A) + ln pa(A), and the MAP equation is: ∂/∂A[ln pr|a(R|A) + ln pa(A)] = 0 at A = â.

103 Frigyes: Hírkelm103 Once again The Minimum Mean Square Error (MMSE) estimate is the average of the a-posteriori pdf. The Maximum A-Posteriori (MAP) estimate is the maximum of the a-posteriori pdf.

104 Frigyes: Hírkelm104 Example (estimation-2) Again we have Gaussian a + n, but N independent samples. What we need for (any) estimate is the a-posteriori pdf pa|r(A|R).

105 Frigyes: Hírkelm105 Example (estimation-2) Note that pr(R) is constant from the point of view of the conditional pdf, thus its form is irrelevant: pa|r(A|R) ∝ pa(A)·Πi φ(Ri − A; 0, σn²).

106 Frigyes: Hírkelm106 Example (estimation-2) This is a Gaussian distribution and we need its expected value. For that the square must be completed in the exponent; the result is on the next slide.

107 Frigyes: Hírkelm107 Example (estimation-2) In a Gaussian pdf the average = the mode, thus âms = âmap = [σa²/(σa² + σn²/N)]·(1/N)·Σ Ri.
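A short numerical check of this shrinkage formula (σa, σn, N and the trial count are our own choices): the Bayes estimate should beat the plain sample mean, and its mean square error should equal the posterior variance.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma_a, sigma_n, N = 2.0, 1.0, 5
n_trials = 50_000

a = rng.normal(0.0, sigma_a, n_trials)                    # random parameter
r = a[:, None] + rng.normal(0.0, sigma_n, (n_trials, N))  # N noisy samples
r_bar = r.mean(axis=1)

# Gaussian posterior: mean = mode, so the MMSE and MAP estimates coincide
shrink = sigma_a**2 / (sigma_a**2 + sigma_n**2 / N)
a_hat = shrink * r_bar

mse_bayes = np.mean((a_hat - a)**2)
mse_plain = np.mean((r_bar - a)**2)   # estimate ignoring the prior
var_post = sigma_a**2 * (sigma_n**2 / N) / (sigma_a**2 + sigma_n**2 / N)
```

The shrinkage toward the prior mean 0 is what the a-priori knowledge buys: a strictly smaller risk than the sample mean alone.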

108 Frigyes: Hírkelm108 Example (estimation-3) a is again Gaussian, φ(A; 0, σa²), but only s(A), a nonlinear function of it, can be observed (e.g. the phase of a carrier); noise is added: r = s(a) + n. We need the a-posteriori density function.

109 Frigyes: Hírkelm109 Example (estimation-3) Remember the MAP equation: ∂/∂A[ln pr|a(R|A) + ln pa(A)] = 0. Applying that to the preceding:

110 Frigyes: Hírkelm110 Parameter estimation – the parameter is a real constant In that case it is the measurement result that is a RV. E.g. in the case of a square cost function the risk would be minimal for â(R) = A. But this makes no sense: A is exactly the value we want to estimate. In that case – i.e. if we have no a-priori knowledge – the method can be: we search for a function of the observations – an estimator – which is „good” (has average close to A and low variance).

111 Frigyes: Hírkelm111 Interposed: the exam topics so far 1. Stochastic processes: mainly definitions of the concepts (stoch. processes; probability densities/distributions, strict-sense stationarity; correlation function, wide-sense stationarity, time average, ergodicity, spectral density, linear transformation) 2. Modulated signals – complex envelope (description of modulated signals, time functions, analytic signal (frequency-time), complex envelope, obtaining one from the other, filtering)

112 Frigyes: Hírkelm112 Interposed: the exam topics so far/2 3. Basics of decision theory (types of tasks; cost, risk, a-priori probability, Bayes decision; optimal partitioning of the observation space, threshold, sufficient statistic, M hypotheses (no details needed, only the result), a-posteriori probability) 4. Basics of estimation theory (random parameter, cost function, min. mean square, max. a-posteriori; deterministic parameter, likelihood function, ML, MVU estimate, unbiasedness; Cramér–Rao; efficiency)

113 Frigyes: Hírkelm113 Parameter estimation – the parameter is a real constant – criteria for the estimator Average of the – somehow chosen – estimator: E[â(R); A]. If this = A: unbiased estimate. If the bias B is constant, it can be subtracted from the average (e.g. in optics: background radiation); if B = f(A): biased estimate.

114 Frigyes: Hírkelm114 Parameter estimation – the parameter is a real constant – criteria for the estimator Error variance: var[â − A]. Best would be: unbiased and low variance; more precisely: unbiased and minimal variance for any A (MVU). Such an estimator may or may not exist. An often applied estimator: maximum likelihood. Likelihood function: pr|A(R|A) as a function of A. The estimator of A is then the maximum of the likelihood function.

115 Frigyes: Hírkelm115 A few details (figure: two estimators of Θ compared; the one with the lower variance of the measuring result is certainly the better one).

116 Frigyes: Hírkelm116 Max. likelihood (ML) estimation Necessary condition for the maximum of the log-likelihood: ∂ ln pr|A(R|A)/∂A = 0 at A = â. Remember: in the case of a RV parameter this was the MAP estimate; thus ML is the same without the a-priori knowledge term.

117 Frigyes: Hírkelm117 Max. likelihood (ML) estimation For any unbiased estimate the variance cannot be lower than the Cramér–Rao lower bound (CRLB): var(â) ≥ 1/E[(∂ ln pr|A(R|A)/∂A)²], or in another form: var(â) ≥ −1/E[∂² ln pr|A(R|A)/∂A²].

118 Frigyes: Hírkelm118 Max. likelihood (ML) estimation Proof of the CRLB: via the Schwarz inequality. If the variance of an estimate equals the CRLB: efficient estimate. There are examples where an MVU estimate exists but no (known) efficient estimate (var θ̂1 > CRLB).

119 Frigyes: Hírkelm119 Example 1 (estimation, nonrandom) Voltage + noise, but the voltage A is a real, nonrandom constant. Maximum likelihood estimation gives the sample mean: â = (1/N)·Σ Ri.

120 Frigyes: Hírkelm120 Example 1 (estimation, nonrandom) Is it biased? The expected value of â is the (a-priori unknown) true value A; i.e. it is unbiased.
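For this example the CRLB is σ²/N, and the sample mean attains it (it is efficient). A Monte-Carlo sketch with parameters of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(6)
A_true, sigma, N = 1.5, 1.0, 10
n_trials = 100_000

r = rng.normal(A_true, sigma, (n_trials, N))
a_hat = r.mean(axis=1)            # the ML estimator: sample mean

bias = a_hat.mean() - A_true      # should be ~0 (unbiased)
var = a_hat.var()                 # should reach the CRLB
crlb = sigma**2 / N               # CRLB for the mean of a Gaussian
```

Both the unbiasedness and the efficiency claims check out numerically.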

121 Frigyes: Hírkelm121 Example 2 (estimation, nonrandom): phase of a sinusoid What can be measured is a function of the quantity to be estimated: Ri = A·sin(ωti + θ) + ni (independent samples taken at equally spaced times). The likelihood function to be maximized: Πi φ(Ri − A·sin(ωti + θ); 0, σ²).

122 Frigyes: Hírkelm122 Example 2 (estimation, nonrandom): phase of a sinusoid This is maximal if Σ[Ri − A·sin(ωti + θ)]² is minimal; setting its derivative with respect to θ to 0, the right-hand (signal×signal) side tends to 0 for large N.

123 Frigyes: Hírkelm123 Example 2 (estimation, nonrandom): phase of a sinusoid And then finally: tan θ̂ = Σ Ri·cos ωti / Σ Ri·sin ωti.
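The closed-form phase estimator can be tried out numerically; the amplitude, noise level, sampling grid and true phase below are our own choices:

```python
import numpy as np

rng = np.random.default_rng(7)
A, sigma, N = 1.0, 0.3, 1000
theta_true = 0.7
w = 2.0 * np.pi * 3.0             # angular frequency
t = np.arange(N) * 0.01           # equally spaced sampling instants

r = A * np.sin(w * t + theta_true) + rng.normal(0.0, sigma, N)

# tan(theta_hat) = sum(r*cos(wt)) / sum(r*sin(wt)); atan2 fixes the quadrant
theta_hat = np.arctan2(np.sum(r * np.cos(w * t)), np.sum(r * np.sin(w * t)))
```

Even at this modest SNR the estimate lands close to the true phase, as the large-N approximation behind the formula promises.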

124 2. Transmission of digital signals over analog channels: effect of noise

125 Frigyes: Hírkelm125 Introductory comments Theory of digital transmission is (at least partly) application of decision theory Definition of digital signals/transmission: Finite number of signal shapes (M) Each has the same finite duration (T) The receiver knows (a priori) the signal shapes (they are stored) So the task of the receiver is hypothesis testing.

126 Frigyes: Hírkelm126 Introductory comments – degrading effects DECISION MAKER BANDPASS FILTER FADING CHANNEL + n(t)n(t) NONLINEAR AMPLIFIER s(t)s(t) INTER- FERENCE ωcωc ωcωc z0(t)z0(t) z1(t)z1(t) z2(t)z2(t) ω1ω1 ω2ω2 CCI ACI

127 Frigyes: Hírkelm127 Introductory comments Quality parameter: error probability (i.e. the costs are C00 = C11 = 0, C01 = C10 = 1). Erroneous decision may be caused by: additive noise, linear distortion, nonlinear distortion, additive interference (CCI, ACI), false knowledge of a parameter, e.g. synchronization error.

128 Frigyes: Hírkelm128 Introductory comments Often it is not one signal whose error probability is of interest but a group of signals – e.g. a frame. (A second quality parameter: erroneous recognition of T – the jitter.)

129 Frigyes: Hírkelm129 Transmission of single signals in additive Gaussian noise Among the many sources of error now we regard only this one Model to be investigated: SOURCE SIGNAL GENERATOR + DECISION MAKER SINK TIMING (T) n(t)n(t) mimi {m i }, P i si(t)si(t) r(t)= s i (t)+n(t) ˆmˆm

130 Frigyes: Hírkelm130 Transmission of single signals in additive Gaussian noise Specifications: the a-priori probabilities Pi are known; the support of the real time functions is (0,T); their energy E (the square integral of the time functions) is finite; the mapping between messages and signals is one-to-one (i.e. there is no error in the transmitter).
