
A discrete channel has a finite input alphabet $X = \{x_0, x_1, \ldots, x_{J-1}\}$ and a finite output alphabet $Y = \{y_0, y_1, \ldots, y_{K-1}\}$. The current output symbol $y_k$ depends only on the current input symbol $x_j$ (the channel is memoryless).

Noiseless binary channel: transmitted symbols $x_0 = 0$, $x_1 = 1$; received symbols $y_0 = 0$, $y_1 = 1$, with
$P(y_0|x_0) = P(0|0) = 1$, $P(y_1|x_1) = P(1|1) = 1$, $P(y_0|x_1) = P(0|1) = 0$, $P(y_1|x_0) = P(1|0) = 0$.

The conditional probability $P(y_k|x_j)$ is the probability of receiving the symbol $y_k$ given that the symbol $x_j$ was transmitted. Example, for the noiseless channel:
- The probability of receiving a 0 given that a 0 was transmitted: $P(0|0) = 1$
- The probability of receiving a 0 given that a 1 was transmitted: $P(0|1) = 0$
- The probability of receiving a 1 given that a 0 was transmitted: $P(1|0) = 0$
- The probability of receiving a 1 given that a 1 was transmitted: $P(1|1) = 1$

Binary symmetric channel (BSC): transmitted symbols $x_0 = 0$, $x_1 = 1$; received symbols $y_0 = 0$, $y_1 = 1$, with
$P(y_0|x_0) = P(0|0) = 1 - P_e$, $P(y_1|x_1) = P(1|1) = 1 - P_e$, $P(y_0|x_1) = P(0|1) = P_e$, $P(y_1|x_0) = P(1|0) = P_e$.
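A channel like this can be summarized by its transition matrix of conditional probabilities. The short Python sketch below (the function name and the rows-as-inputs convention are illustrative choices, not from the slides) builds that matrix for the BSC and checks that each row is a valid probability distribution.

```python
import numpy as np

def bsc_matrix(p_e: float) -> np.ndarray:
    """Transition matrix of a binary symmetric channel.

    Rows are inputs x_j, columns are outputs y_k, entry [j, k] = P(y_k | x_j).
    """
    return np.array([[1.0 - p_e, p_e],
                     [p_e, 1.0 - p_e]])

P = bsc_matrix(0.1)                       # example crossover probability
print(P)                                  # [[0.9 0.1]
                                          #  [0.1 0.9]]
assert np.allclose(P.sum(axis=1), 1.0)    # each row sums to 1
```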

[Diagram: the same BSC transition probabilities, $P(0|0) = P(1|1) = 1 - P_e$ and $P(0|1) = P(1|0) = P_e$, shown once with the input symbol fixed and once with the output symbol fixed.]

[Diagram: the general channel with input alphabet $X = \{x_0, \ldots, x_{J-1}\}$ and output alphabet $Y = \{y_0, \ldots, y_{K-1}\}$, again shown with the input fixed and with the output fixed.]

The probability of each symbol emitted from the source at the transmitter side: $P(x_j) = P(X = x_j)$. The probability of receiving the symbol $y_k$ given that the symbol $x_j$ was transmitted: $P(y_k|x_j) = P(Y = y_k \mid X = x_j)$.

The probability of sending the symbol $x_j$ and receiving the symbol $y_k$ (the joint probability):
$P(x_j, y_k) = P(X = x_j, Y = y_k) = P(Y = y_k \mid X = x_j)\,P(X = x_j) = P(y_k|x_j)\,P(x_j)$.

The probability of receiving the symbol $y_k$:
$$P(y_k) = P(Y = y_k) = \sum_{j=0}^{J-1} P(y_k|x_j)\,P(x_j) = \sum_{j=0}^{J-1} P(x_j, y_k).$$

Since $P(x_j, y_k) = P(y_k, x_j)$, we have $P(y_k|x_j)\,P(x_j) = P(x_j|y_k)\,P(y_k)$, and therefore (Bayes' rule)
$$P(x_j|y_k) = \frac{P(y_k|x_j)\,P(x_j)}{P(y_k)} = \frac{P(y_k|x_j)\,P(x_j)}{\sum_{j'} P(y_k|x_{j'})\,P(x_{j'})}.$$
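These two relations, total probability for $P(y_k)$ and Bayes' rule for $P(x_j|y_k)$, are easy to check numerically. Here is a minimal sketch; the helper names `output_probs` and `posterior`, and the example numbers, are assumptions for illustration.

```python
import numpy as np

def output_probs(p_x: np.ndarray, P: np.ndarray) -> np.ndarray:
    """P(y_k) = sum_j P(y_k | x_j) P(x_j), with P[j, k] = P(y_k | x_j)."""
    return p_x @ P

def posterior(p_x: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Bayes' rule: P(x_j | y_k) = P(y_k | x_j) P(x_j) / P(y_k)."""
    joint = p_x[:, None] * P          # joint[j, k] = P(x_j, y_k)
    return joint / output_probs(p_x, P)[None, :]

# BSC with P_e = 0.1 and a non-uniform source (example values)
p_x = np.array([0.7, 0.3])
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(output_probs(p_x, P))           # [0.66 0.34]
print(posterior(p_x, P))              # column k is the distribution of X given y_k
```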

Example (BSC): with $P(0|0) = P(1|1) = 1 - P_e$ and $P(0|1) = P(1|0) = P_e$, Bayes' rule gives, for instance,
$$P(x_0|y_0) = \frac{(1 - P_e)\,P(x_0)}{(1 - P_e)\,P(x_0) + P_e\,P(x_1)}.$$

Binary erasure channel: transmitted symbols $x_0 = 0$, $x_1 = 1$; received symbols $y_0 = 0$, $y_1 = 1$, $y_2 = e$ (erasure), with
$P(y_0|x_0) = P(0|0) = 1 - q$, $P(y_2|x_0) = P(e|0) = q$, $P(y_1|x_1) = P(1|1) = 1 - q$, $P(y_2|x_1) = P(e|1) = q$.

The average information transmitted over the channel per symbol:
$$H(X) = \sum_{j=0}^{J-1} P(x_j)\,\log_2 \frac{1}{P(x_j)}.$$
The average information lost due to the channel per symbol, given that a particular symbol $y_k$ is received:
$$H(X|y_k) = \sum_{j=0}^{J-1} P(x_j|y_k)\,\log_2 \frac{1}{P(x_j|y_k)}.$$

Averaging this entropy over all the received symbols gives the conditional entropy
$$H(X|Y) = \sum_{k=0}^{K-1} P(y_k)\,H(X|y_k) = \sum_{j=0}^{J-1}\sum_{k=0}^{K-1} P(x_j, y_k)\,\log_2 \frac{1}{P(x_j|y_k)},$$
called the equivocation of X with respect to Y.

Similarly, with $H(Y|x_j) = \sum_{k=0}^{K-1} P(y_k|x_j)\,\log_2 \frac{1}{P(y_k|x_j)}$,
$$H(Y|X) = \sum_{j=0}^{J-1} P(x_j)\,H(Y|x_j) = \sum_{j=0}^{J-1}\sum_{k=0}^{K-1} P(x_j, y_k)\,\log_2 \frac{1}{P(y_k|x_j)},$$
the equivocation of Y with respect to X.
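One way to make these sums concrete is to evaluate them from the joint distribution. The sketch below computes $H(X)$ and both equivocations for a BSC; the helper name `entropy` and the example numbers are assumptions, and zero-probability terms are skipped, following the convention $0 \log 0 = 0$.

```python
import numpy as np

def entropy(p):
    """Entropy in bits, with the convention 0 * log2(0) = 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# BSC with P_e = 0.1, source P(x0) = 0.7, P(x1) = 0.3 (example values)
p_x = np.array([0.7, 0.3])
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])            # P[j, k] = P(y_k | x_j)
joint = p_x[:, None] * P              # P(x_j, y_k)
p_y = joint.sum(axis=0)               # P(y_k)

H_X = entropy(p_x)
# Equivocation of X with respect to Y: H(X|Y) = sum_k P(y_k) H(X|y_k)
H_X_given_Y = sum(p_y[k] * entropy(joint[:, k] / p_y[k]) for k in range(len(p_y)))
# Equivocation of Y with respect to X: H(Y|X) = sum_j P(x_j) H(Y|x_j)
H_Y_given_X = sum(p_x[j] * entropy(P[j, :]) for j in range(len(p_x)))
print(H_X, H_X_given_Y, H_Y_given_X)
```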

The average information the receiver obtains per symbol is the mutual information: $I(X,Y) = H(X) - H(X|Y)$.

Writing $H(X) = \sum_j P(x_j)\,\log_2 \frac{1}{P(x_j)} = \sum_j \sum_k P(x_j, y_k)\,\log_2 \frac{1}{P(x_j)}$ (using $\sum_k P(y_k|x_j) = 1$) and subtracting $H(X|Y)$ term by term gives
$$I(X,Y) = \sum_{j=0}^{J-1}\sum_{k=0}^{K-1} P(x_j, y_k)\,\log_2 \frac{P(x_j|y_k)}{P(x_j)} = \sum_{j=0}^{J-1}\sum_{k=0}^{K-1} P(x_j, y_k)\,\log_2 \frac{P(y_k|x_j)}{P(y_k)}.$$

Properties of the mutual information:
- $I(X,Y) = H(X) - H(X|Y)$
- $I(Y,X) = H(Y) - H(Y|X)$
- $I(X,Y) = I(Y,X)$
- $I(X,Y) = I(Y,X) = H(X) + H(Y) - H(X,Y)$, where the joint entropy is $H(X,Y) = \sum_{j}\sum_{k} P(x_j, y_k)\,\log_2 \frac{1}{P(x_j, y_k)}$
[Diagram: Venn-style picture relating $H(X)$, $H(Y)$, $H(X|Y)$, $H(Y|X)$, $H(X,Y)$ and $I(X,Y)$.]
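These identities can be verified numerically from the joint distribution; the following self-contained sketch (same assumed BSC example as above) checks that the three expressions for $I(X,Y)$ agree.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

p_x = np.array([0.7, 0.3])                       # example source
P = np.array([[0.9, 0.1], [0.1, 0.9]])           # BSC with P_e = 0.1
joint = p_x[:, None] * P
p_y = joint.sum(axis=0)

H_X, H_Y, H_XY = entropy(p_x), entropy(p_y), entropy(joint.ravel())
H_X_given_Y = H_XY - H_Y                         # from H(X,Y) = H(Y) + H(X|Y)
H_Y_given_X = H_XY - H_X

I1 = H_X - H_X_given_Y
I2 = H_Y - H_Y_given_X
I3 = H_X + H_Y - H_XY
assert np.isclose(I1, I2) and np.isclose(I2, I3)
print(I1)                                        # I(X,Y) in bits per symbol
```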

The channel capacity of a discrete memoryless channel is defined as the maximum rate at which information can be transmitted through the channel. It is the maximum of the mutual information over all possible input probability distributions $P(x_j)$:
$$C = \max_{\{P(x_j)\}} I(X,Y) \quad \text{bits per channel use}.$$
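Because the capacity is a maximum over input distributions, it can be estimated for any binary-input channel by sweeping the single free parameter $p_0 = P(x_0)$. The brute-force sketch below only illustrates the definition (an efficient general method would be the Blahut-Arimoto algorithm); the function names are assumptions.

```python
import numpy as np

def mutual_information(p_x, P):
    """I(X,Y) in bits for input distribution p_x and transition matrix P[j, k] = P(y_k | x_j)."""
    joint = np.asarray(p_x, dtype=float)[:, None] * np.asarray(P, dtype=float)
    p_y = joint.sum(axis=0)
    indep = np.outer(p_x, p_y)                    # P(x_j) P(y_k)
    mask = joint > 0                              # skip zero-probability terms
    return float((joint[mask] * np.log2(joint[mask] / indep[mask])).sum())

def capacity_binary_input(P, steps=10001):
    """Capacity of a binary-input DMC by grid search over p0 = P(x_0)."""
    return max(mutual_information([p0, 1.0 - p0], P)
               for p0 in np.linspace(0.0, 1.0, steps))

print(capacity_binary_input([[0.9, 0.1],
                             [0.1, 0.9]]))        # BSC with P_e = 0.1 -> about 0.531
```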

Example: mutual information of the BSC. Let $P(x_0) = p_0$, $P(x_1) = \bar{p}_0 = 1 - p_0$, and write $\bar{P}_e = 1 - P_e$. Evaluating
$$I(X,Y) = \sum_{j=0}^{1}\sum_{k=0}^{1} P(x_j, y_k)\,\log_2 \frac{P(y_k|x_j)}{P(y_k)}$$
means summing the four terms $(k,j) = (0,0), (0,1), (1,0), (1,1)$ with $P(0|0) = P(1|1) = 1 - P_e$ and $P(0|1) = P(1|0) = P_e$.

Carrying out the four terms and collecting them gives
$$I(X,Y) = H(Y) - H(Y|X) = H(Y) + P_e \log_2 P_e + (1 - P_e)\log_2 (1 - P_e),$$
where $H(Y)$ depends on the input distribution through $P(y_0) = p_0(1 - P_e) + (1 - p_0)P_e$.

$I(X,Y)$ is maximum when the transmitted symbols are equiprobable, i.e. $P(x_0) = P(x_1) = 0.5$ (so $\bar{p}_0 = 1 - p_0 = 0.5$). The capacity of the BSC is therefore
$$C = I(X,Y)\big|_{p_0 = 0.5} = 1 + P_e \log_2 P_e + (1 - P_e)\log_2 (1 - P_e),$$
which reaches $C = 1$ bit per symbol for a noiseless channel ($P_e = 0$).
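As a quick numerical check, for an assumed crossover probability $P_e = 0.1$ this formula gives
$$C = 1 + 0.1\log_2 0.1 + 0.9\log_2 0.9 \approx 1 - 0.332 - 0.137 = 0.531 \ \text{bits per symbol},$$
in agreement with the grid-search value obtained in the earlier capacity sketch.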

Example: mutual information of the binary erasure channel. Let $P(x_0) = p_0$, $P(x_1) = \bar{p}_0 = 1 - p_0$, and $\bar{q} = 1 - q$. The double sum for $I(X,Y)$ now runs over $k = 0, 1, 2$ (outputs $0$, $1$, $e$) and $j = 0, 1$, with $P(0|0) = P(1|1) = 1 - q$, $P(e|0) = P(e|1) = q$ and $P(1|0) = P(0|1) = 0$, so only the terms $(k,j) = (0,0), (1,1), (2,0), (2,1)$ contribute.

$I(X,Y)$ is again maximum when the transmitted symbols are equiprobable, i.e. $P(x_0) = P(x_1) = 0.5$. The capacity of the binary erasure channel is then
$$C = I(X,Y)\big|_{p_0 = 0.5} = 1 - q \quad \text{bits per symbol}.$$
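The same result can be checked numerically: with equiprobable inputs, the mutual information of the erasure channel works out to exactly $1 - q$. A minimal sketch, assuming an example erasure probability $q = 0.2$:

```python
import numpy as np

q = 0.2
P = np.array([[1 - q, 0.0, q],        # P(y | x = 0) over the outputs 0, 1, e
              [0.0, 1 - q, q]])       # P(y | x = 1)
p_x = np.array([0.5, 0.5])            # equiprobable inputs achieve capacity

joint = p_x[:, None] * P              # P(x_j, y_k)
p_y = joint.sum(axis=0)               # P(y_k)
mask = joint > 0                      # skip zero-probability terms
I = (joint[mask] * np.log2(joint[mask] / np.outer(p_x, p_y)[mask])).sum()
print(I, 1 - q)                       # both equal 0.8 (up to rounding)
```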

Given $J$ input symbols and $K$ output symbols, the channel capacity of a symmetric discrete memoryless channel (one whose transition-matrix rows are permutations of one another, and likewise its columns) is achieved with equiprobable inputs and is given by
$$C = \log_2 K + \sum_{k=0}^{K-1} P(y_k|x_j)\,\log_2 P(y_k|x_j) \quad \text{bits per symbol},$$
for any fixed input $x_j$.
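As a consistency check, applying this to the BSC (which is symmetric, with $K = 2$ and row $(1 - P_e,\ P_e)$) gives
$$C = \log_2 2 + (1 - P_e)\log_2(1 - P_e) + P_e\log_2 P_e = 1 + P_e\log_2 P_e + (1 - P_e)\log_2(1 - P_e),$$
the same expression obtained above for the BSC capacity.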