1
Information Theory
EE322, Al-Sanie

2
Information theory was introduced by Claude Shannon: Shannon (1948), "A Mathematical Theory of Communication". Claude Shannon: April 30, 1916 – February 24, 2001.

3
What is the irreducible complexity below which a signal cannot be compressed? Entropy.
What is the ultimate transmission rate for reliable communication over a noisy channel? Channel capacity.
Two foci: (a) data compression and (b) reliable communication through noisy channels.

4
Amount of Information. Before the event occurs, there is an amount of uncertainty. When the event occurs, there is an amount of surprise. After the occurrence of the event, there is a gain in the amount of information, the essence of which may be viewed as the resolution of uncertainty. The amount of information is related to the inverse of the probability of occurrence.

5
The amount of information is related to the inverse of the probability of occurrence: the lower the probability of an event, the greater the uncertainty beforehand, the surprise when it occurs, and the information gained afterward. The base of the logarithm is arbitrary; it is standard practice today to use base 2, so that information is measured in bits.

6
Discrete Source. A discrete source is a source that emits symbols from a finite alphabet S = {s0, s1, …, s_{K-1}}. The source output is modeled as a discrete random variable S which takes one symbol of the alphabet with probability P(S = sk) = pk, k = 0, 1, …, K-1. Of course, this set of probabilities must satisfy the condition

p0 + p1 + … + p_{K-1} = 1

7
Example of a discrete source:

Analog source → Sampler → Quantizer → discrete symbols

8
Discrete memoryless source: a source is memoryless if the symbols emitted during successive signaling intervals are statistically independent. We define the amount of information gained after observing the event S = sk, which occurs with probability pk, as

I(sk) = log2(1/pk) = -log2(pk) bits
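This definition is easy to check numerically; a minimal sketch (the function name is mine):

```python
import math

def information(p: float) -> float:
    """Amount of information I(sk) = log2(1/pk), in bits."""
    return math.log2(1.0 / p)

# A rare symbol carries more information than a common one.
print(information(0.5))    # 1.0 bit
print(information(0.125))  # 3.0 bits
```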

9
Properties of the amount of information:
1. I(sk) = 0 for pk = 1: if we are absolutely certain of the outcome of an event, even before it occurs, there is no information gained.
2. I(sk) ≥ 0: the event yields a gain of information (or no information) but never a loss of information.
3. I(sk) > I(sl) for pk < pl: the event with the lower probability of occurrence has the higher information.
4. I(sk sl) = I(sk) + I(sl) for statistically independent events sk and sl.

10
Entropy (the average information content per source symbol). Consider a discrete memoryless source that emits symbols from a finite alphabet S = {s0, s1, …, s_{K-1}}. The amount of information I(sk) is a discrete random variable that takes on the values I(s0), I(s1), …, I(s_{K-1}) with probabilities p0, p1, …, p_{K-1}, respectively.

11
The mean of I(sk) over the source alphabet is given by

H(S) = E[I(sk)] = Σk pk log2(1/pk),  k = 0, 1, …, K-1

H is called the entropy of a discrete memoryless source. It is a measure of the average information content per source symbol.
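The entropy formula translates directly into code; a small sketch, assuming base-2 logarithms throughout (the function name is mine):

```python
import math

def entropy(probs):
    """H = sum of pk * log2(1/pk); terms with pk == 0 contribute nothing."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits/symbol
print(entropy([1.0]))                       # 0.0: a certain outcome carries no information
```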

12
Example: A source emits one of four symbols s0, s1, s2, and s3 with probabilities 1/2, 1/4, 1/8, and 1/8, respectively. The successive symbols emitted by the source are statistically independent.

p0 = 1/2 → I(s0) = 1 bit
p1 = 1/4 → I(s1) = 2 bits
p2 = 1/8 → I(s2) = 3 bits
p3 = 1/8 → I(s3) = 3 bits

H(S) = (1/2)(1) + (1/4)(2) + (1/8)(3) + (1/8)(3) = 1.75 bits/symbol

13
Properties of the entropy: 0 ≤ H ≤ log2(K), where K is the number of symbols.
- H = 0 if pk = 1 for some k and pi = 0 for all i ≠ k: no uncertainty.
- H = log2(K) if pk = 1/K for all k: maximum uncertainty when all symbols occur with the same probability.

14
Example: Entropy of a Binary Memoryless Source. Consider a binary source for which symbol 0 occurs with probability p0 and symbol 1 with probability p1 = 1 - p0. The entropy is

H(p0) = -p0 log2(p0) - (1 - p0) log2(1 - p0) bits

15
Entropy function H(p0) of the binary source:
H = 0 when p0 = 0
H = 0 when p0 = 1
H = 1 when p0 = 0.5 (equally likely symbols)
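The endpoints and the maximum of the binary entropy function can be verified with a short sketch (the function name is mine):

```python
import math

def H_binary(p0: float) -> float:
    """Entropy of a binary memoryless source with P(0) = p0."""
    if p0 in (0.0, 1.0):
        return 0.0  # a deterministic source has zero entropy
    p1 = 1.0 - p0
    return -p0 * math.log2(p0) - p1 * math.log2(p1)

print(H_binary(0.0))  # 0.0
print(H_binary(1.0))  # 0.0
print(H_binary(0.5))  # 1.0: maximum, equally likely symbols
```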

16
Extension of a Discrete Memoryless Source. It is useful to consider blocks rather than individual symbols, with each block consisting of n successive symbols. We may view each block as being produced by an extended source with K^n symbols, where K is the number of distinct symbols in the alphabet of the original source. The entropy of the extended source is H(S^n) = n H(S).

17
Example: Entropy of the extended source

18
The entropy of the extended source: H(S^n) = n H(S).
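The relation H(S^n) = n H(S) can be checked numerically. The alphabet below is a hypothetical three-symbol source chosen for illustration, not necessarily the slides' example:

```python
import itertools
import math

def entropy(probs):
    """H = sum of pk * log2(1/pk) over nonzero probabilities."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

p = [0.5, 0.25, 0.25]  # hypothetical source with K = 3 symbols

# Second-order extension: K^2 = 9 block symbols; for a memoryless
# source each block's probability is the product of its symbols'.
p2 = [a * b for a, b in itertools.product(p, p)]

print(entropy(p))   # 1.5 bits
print(entropy(p2))  # 3.0 bits = 2 * H(S)
```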

19
Source Coding. Source coding is the efficient representation of the symbols generated by a discrete source. The device that performs source coding is called a source encoder. A good source code:
- assigns short code words to frequent source symbols;
- assigns long code words to rare source symbols.

20

21
Example: source code (Huffman code) for the English alphabet.

22
The source encoder should satisfy the following:
1. The code words produced by the encoder are in binary form.
2. The source code is uniquely decodable, so that the original source symbols can be reconstructed from the encoded binary sequence.

sk → [source encoder] → binary code word → [source decoder] → sk

23
Discrete source → Source encoder → Modulator → Channel → Demodulator → Source decoder

24
Example: A source emits one of four symbols s0, s1, s2, and s3 with probabilities 1/2, 1/4, 1/8, and 1/8, respectively. The successive symbols emitted by the source are statistically independent. Consider the following source code for this source:

symbol  p    codeword
s0      1/2  0
s1      1/4  10
s2      1/8  110
s3      1/8  111

The average code-word length is L̄ = (1/2)(1) + (1/4)(2) + (1/8)(3) + (1/8)(3) = 1.75 bits/symbol.

25
Compare the two source codes (I and II) for the previous source:

symbol  p    code I  code II
s0      1/2  0       00
s1      1/4  10      01
s2      1/8  110     10
s3      1/8  111     11

If the source emits symbols at a rate of 1000 symbols/s:
- Code I: average bit rate = 1000 × 1.75 = 1750 bits/s
- Code II: average bit rate = 1000 × 2 = 2000 bits/s

26
Let the binary code word assigned to symbol sk by the encoder have length lk, measured in bits. We define the average code-word length of the source encoder (the average number of bits per symbol) as

L̄ = Σk pk lk,  k = 0, 1, …, K-1

What is the minimum value of L̄? The answer to this question is in Shannon's first theorem, the source coding theorem.
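Computing L̄ is a direct weighted sum; a sketch (the function name is mine), using the code lengths of codes I and II from the earlier comparison:

```python
def avg_length(probs, lengths):
    """Average code-word length: L = sum of pk * lk, in bits per symbol."""
    return sum(p * l for p, l in zip(probs, lengths))

p = [0.5, 0.25, 0.125, 0.125]
print(avg_length(p, [1, 2, 3, 3]))  # code I:  1.75 bits/symbol
print(avg_length(p, [2, 2, 2, 2]))  # code II: 2.0 bits/symbol
```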

27
Source Coding Theorem. Given a discrete memoryless source of entropy H, the average code-word length L̄ for any distortionless source encoding scheme is bounded as L̄ ≥ H. The minimum length is therefore L̄min = H, and the efficiency of the source encoder is

η = L̄min / L̄ = H / L̄

28
Example (the previous source): H = 1.75 bits, and for code I L̄ = 1.75 bits, so η = H/L̄ = 1. For code II, L̄ = 2 bits and η = 1.75/2 = 0.875.

29
Uniquely Decodable Source Codes. A code is said to be uniquely decodable (UD) if the original symbols can be recovered uniquely from the sequence of encoded bits. The source code should be a uniquely decodable code.

30
Example:

Code A: s0 → 00, s1 → 00, s2 → 11. This code is not UD because the symbols s0 and s1 have the same code word.

Code B: s0 → 0, s1 → 1, s2 → 11. This code is not UD: the sequence 111 can be decoded as s1 s1 s1, or as s1 s2, or as s2 s1.

Code C: s0 → 00, s1 → 01, s2 → 11. This code is UD.

31
Prefix-free Source Codes. A prefix-free code is a code in which no code word is a prefix of any other code word. Example:

Prefix-free code:     s0 → 0, s1 → 10, s2 → 110, s3 → 111
Not prefix-free code: s0 → 0, s1 → 01, s2 → 011, s3 → 0111

32
A prefix-free code has the important property that it is always uniquely decodable, but the converse is not necessarily true:
- prefix-free → UD
- UD does not imply prefix-free
Example of a code that is UD but not prefix-free: s0 → 0, s1 → 01, s2 → 011, s3 → 0111.

33
Prefix-free codes have the advantage of being instantaneously decodable, i.e., a symbol can be decoded by the time its last bit is reached. Example: with the code s0 → 0, s1 → 10, s2 → 110, s3 → 111, the sequence 1011111000… is decoded as s1 s3 s2 s0 s0 …
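Instantaneous decoding can be sketched as a left-to-right scan that emits a symbol as soon as the accumulated bits match a code word (function and symbol names are mine):

```python
def decode_prefix_free(bits: str, code: dict) -> list:
    """Decode a bit string with a prefix-free code, symbol by symbol."""
    inverse = {cw: sym for sym, cw in code.items()}
    symbols, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:           # code word complete: decode immediately
            symbols.append(inverse[buf])
            buf = ""
    return symbols

code = {"s0": "0", "s1": "10", "s2": "110", "s3": "111"}
print(decode_prefix_free("1011111000", code))  # ['s1', 's3', 's2', 's0', 's0']
```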

34
Huffman Code. The Huffman code is an important prefix-free source code. The Huffman encoding algorithm proceeds as follows:
1. The source symbols are listed in order of decreasing probability. The two source symbols of lowest probability are assigned a 0 and a 1.
2. These two source symbols are regarded as being combined into a new symbol with probability equal to the sum of the two probabilities. The probability of the new symbol is placed in the list in accordance with its value.
3. The procedure is repeated until we are left with a final list of two symbols, to which a 0 and a 1 are assigned.
4. The code word for each symbol is found by working backward and tracing the sequence of 0s and 1s.
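The repeated "combine the two least probable symbols" step maps naturally onto a priority queue; a minimal sketch, not the slides' implementation. Tie-breaking between equal probabilities is arbitrary, so individual code words may differ from a hand-built tree while the lengths agree:

```python
import heapq
import itertools

def huffman(probs: dict) -> dict:
    """Build a Huffman code for {symbol: probability}; returns {symbol: codeword}."""
    counter = itertools.count()  # unique tie-breaker so dicts are never compared
    heap = [(p, next(counter), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # two least probable groups
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}   # prepend 0 to one branch
        merged.update({s: "1" + w for s, w in c1.items()})  # and 1 to the other
        heapq.heappush(heap, (p0 + p1, next(counter), merged))
    return heap[0][2]

code = huffman({"s0": 0.5, "s1": 0.25, "s2": 0.125, "s3": 0.125})
print({s: len(w) for s, w in sorted(code.items())})  # lengths 1, 2, 3, 3
```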

35
Example: Huffman code. A source emits one of four symbols s0, s1, s2, and s3 with probabilities 1/2, 1/4, 1/8, and 1/8, respectively.

symbol  p    codeword
s0      1/2  0
s1      1/4  10
s2      1/8  110
s3      1/8  111

36
Example: The previous example.

37
Example: Huffman code

38

39
Example: Huffman code

symbol  codeword
s1      00
s2      010
s3      011
s4      100
s5      101
s6      110
s7      1110
s8      1111

40
Discrete Memoryless Channel

41
Discrete memoryless channels. A discrete memoryless channel is a statistical model with an input X and an output Y that is a noisy version of X; both X and Y are random variables. Every unit of time, the channel accepts an input symbol X selected from an input alphabet and, in response, emits an output symbol Y from an output alphabet. The channel is said to be discrete when both alphabets have finite sizes. It is said to be memoryless when the current output symbol depends only on the current input symbol and not on any of the previous ones.

42
The input alphabet: X = {x0, x1, …, x_{J-1}}.
The output alphabet: Y = {y0, y1, …, y_{K-1}}.
The transition probabilities: p(yk | xj) = P(Y = yk | X = xj), for all j and k.

43
The event that the channel input X = xj occurs has probability p(xj) = P(X = xj). The joint probability distribution of the random variables X and Y is given by

p(xj, yk) = P(X = xj, Y = yk) = p(yk | xj) p(xj)

The probabilities of the output symbols are

p(yk) = Σj p(yk | xj) p(xj),  j = 0, 1, …, J-1

44
Example: Binary Symmetric Channel (BSC). The BSC is a special case of the discrete memoryless channel with J = K = 2. The channel has two input symbols (x0 = 0, x1 = 1) and two output symbols (y0 = 0, y1 = 1). The channel is symmetric: the probability of receiving 1 if a 0 is sent is the same as the probability of receiving 0 if a 1 is sent, i.e.

P(Y = 1 | X = 0) = P(Y = 0 | X = 1) = p

45
Transition probability diagram of the binary symmetric channel: each input symbol is received correctly with probability 1 - p and flipped with probability p.
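The BSC is easy to simulate by flipping each transmitted bit independently with probability p; a sketch (names and parameter values are mine):

```python
import random

def bsc(bits, p, rng):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

rng = random.Random(1)
tx = [rng.randrange(2) for _ in range(100_000)]    # random transmitted bits
rx = bsc(tx, p=0.1, rng=random.Random(2))          # received, noisy bits
error_rate = sum(a != b for a, b in zip(tx, rx)) / len(tx)
print(error_rate)  # close to the crossover probability p = 0.1
```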

46
Information Capacity Theorem (Shannon's third theorem). The information capacity of a continuous channel of bandwidth B hertz, perturbed by additive white Gaussian noise of power spectral density N0/2 and limited in bandwidth to B, is given by

C = B log2(1 + P/(N0 B)) bits/s

where P is the average transmitted signal power and N0 B = σ² is the noise power.
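The capacity formula in code, with hypothetical numbers (a 3 kHz channel at a signal-to-noise ratio of P/(N0 B) = 1000, i.e. 30 dB; these values are mine, chosen for illustration):

```python
import math

def capacity(B, P, N0):
    """Shannon capacity C = B * log2(1 + P / (N0 * B)), in bits per second."""
    return B * math.log2(1.0 + P / (N0 * B))

# B = 3000 Hz, P = 3e6 W, N0 = 1 W/Hz gives SNR = P/(N0*B) = 1000.
print(capacity(3000, 3_000_000.0, 1.0))  # about 29,902 bits/s
```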

47
The dependence of C on B is linear, whereas its dependence on the signal-to-noise ratio P/(N0 B) is logarithmic. Accordingly, it is easier to increase the capacity of a channel by expanding the bandwidth than by increasing the transmitted power for a prescribed noise variance.

48
The channel capacity theorem implies that we can transmit information at a rate of C bits per second with arbitrarily small probability of error by employing a sufficiently complex encoding system. It is not possible to transmit at a rate higher than C by any encoding system without a definite probability of error. Hence, the channel capacity defines the fundamental limit on the rate of error-free transmission for a power-limited, band-limited Gaussian channel.

49
If Rb ≤ C, it is possible to transmit with an arbitrarily small probability of error by employing a sufficiently complex encoding system. If Rb > C, error-free transmission is not possible.

50
Implications of the Information Capacity Theorem. Consider an ideal system that transmits data at a bit rate Rb equal to the information capacity C: Rb = C. The average transmitted power may be expressed as P = Eb Rb = Eb C, where Eb is the transmitted energy per bit. Substituting into the capacity formula gives

C/B = log2(1 + (Eb/N0)(C/B))

so that

Eb/N0 = (2^(C/B) - 1) / (C/B)

51
A plot of bandwidth efficiency Rb/B versus Eb/N0 is called the bandwidth-efficiency diagram (figure in the next slide). The curve labeled "capacity boundary" corresponds to the ideal system for which Rb = C.

52
Bandwidth-efficiency diagram.

53
Based on the previous figure, we make the following observations:
1. For infinite bandwidth, the ratio Eb/N0 approaches the limiting value ln 2 = 0.693, or -1.6 dB. This value is called the Shannon limit.
2. The capacity boundary curve (Rb = C) separates two regions: Rb < C, where error-free transmission is possible, and Rb > C, where it is not.
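The Shannon limit follows from letting B grow without bound in the capacity formula; a quick numerical check:

```python
import math

# As B -> infinity with Rb = C, Eb/N0 approaches ln 2.
limit = math.log(2)
limit_db = 10 * math.log10(limit)

print(limit)     # 0.693...
print(limit_db)  # about -1.59 dB
```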

54
(a) Comparison of M-ary PSK against the ideal system for Pe = 10^-5 and increasing M. (b) Comparison of M-ary FSK against the ideal system for Pe = 10^-5 and increasing M.
