DCSP-10 Jianfeng Feng Department of Computer Science Warwick Univ., UK
Channel coding; Hamming distance The task of source coding is to represent the source information with the minimum number of symbols. When a code is transmitted over a channel in the presence of noise, errors will occur. The task of channel coding is to represent the source information in a manner that minimises the error probability in decoding.
It is apparent that channel coding requires the use of redundancy. If all possible outputs of the channel correspond uniquely to a source input, there is no possibility of detecting errors in the transmission. To detect, and possibly correct, errors, the channel code sequence must be longer than the source sequence. The rate R of a channel code is the average ratio of the source sequence length to the channel code length. Thus R < 1.
A good channel code is designed so that, if a few errors occur in transmission, the output can still be decoded to the correct input. This is possible because, although incorrect, the output is sufficiently similar to the input to be recognisable.
The idea of similarity is made precise by the definition of the Hamming distance. Let x and y be two binary sequences of the same length. The Hamming distance between these two codes is the number of symbols that disagree.
Two example distances: 0100 -> 1001 has distance 3; 0110 -> 1110 has distance 1.
The Hamming distance between 1011101 and 1001001 is 2. The Hamming distance between 0100 and 1001 is 3. The Hamming distance between "toned" and "roses" is 3.
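The definition above translates directly into code. A minimal sketch (the function name is our own choice):

```python
# Hamming distance: the number of positions at which two
# equal-length sequences disagree.
def hamming_distance(x, y):
    if len(x) != len(y):
        raise ValueError("sequences must have the same length")
    return sum(a != b for a, b in zip(x, y))

print(hamming_distance("toned", "roses"))  # 3
print(hamming_distance("0100", "1001"))    # 3
print(hamming_distance("0110", "1110"))    # 1
```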
Suppose the code x is transmitted over the channel. Due to errors, y is received. The decoder will assign to y the codeword x that minimises the Hamming distance between x and y.
It can be shown that to detect n bit errors, a coding scheme requires the use of codewords with a Hamming distance of at least n+1. It can also be shown that to correct n bit errors requires a coding scheme with a Hamming distance of at least 2n+1 between the codewords. By designing a good code, we try to ensure that the Hamming distance between possible codewords x is larger than the Hamming distance arising from errors.
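Minimum-distance decoding and the 2n+1 bound can be illustrated with the 3-bit repetition code {000, 111}, whose minimum Hamming distance is 3 = 2(1)+1, so it corrects a single bit error. A sketch (the code choice is illustrative, not from the lecture):

```python
# Minimum-distance decoding with the 3-bit repetition code {000, 111}.
# Minimum distance 3 = 2*1 + 1, so single bit errors are corrected.

def hamming_distance(x, y):
    return sum(a != b for a, b in zip(x, y))

CODEWORDS = ["000", "111"]

def decode(y):
    # Assign to y the codeword x that minimises the Hamming distance.
    return min(CODEWORDS, key=lambda x: hamming_distance(x, y))

print(decode("010"))  # '000' -- single error corrected
print(decode("101"))  # '111' -- single error corrected
print(decode("110"))  # '111' -- two errors overwhelm the code
```

If two errors occur, the received word lies closer to the wrong codeword, which is exactly why correcting n errors demands distance 2n+1.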
Channel Capacity One of the most famous of all results of information theory is Shannon's channel capacity theorem. For a given channel there exists a code that will permit error-free transmission across the channel at a rate R, provided R < C, where C is the channel capacity:
C = B log2(1 + S/N) b/s, where B is the channel bandwidth and S/N is the signal-to-noise ratio.
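A quick numerical check of the formula; the bandwidth and signal-to-noise values below are illustrative assumptions, not figures from the lecture:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    # C = B * log2(1 + S/N), in bits per second.
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative values: a 3 kHz channel with a 30 dB signal-to-noise ratio.
snr_db = 30.0
snr_linear = 10 ** (snr_db / 10)  # 30 dB -> S/N = 1000
print(round(shannon_capacity(3000, snr_linear)))  # ~29902 b/s
```

Note that S/N enters the formula as a linear power ratio, so a figure quoted in decibels must be converted first.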
As we have already noted, the astonishing part of the theory is the existence of a channel capacity. Shannon's theorem is both tantalizing and frustrating.
It offers error-free transmission, but it makes no statement as to what code is required. In fact, all we may deduce from the proof of the theorem is that it must be a long one. No one has yet found a code that permits the use of a channel at its capacity. However, Shannon has thrown down the gauntlet, inasmuch as he has proved that the code exists.
We shall not give a description of how the capacity is calculated. However, an example is instructive. The binary channel is a channel with a binary input and output. Associated with each output is a probability p that the output is correct, and a probability 1-p that it is not.
For such a channel, the channel capacity turns out to be: C = 1 + p log2 p + (1-p) log2 (1-p). Here, p is the bit error probability. If p=0, then C=1. If p=0.5, then C=0. Thus if there is an equal probability of receiving a 1 or a 0, irrespective of the signal sent, the channel is completely unreliable and no message can be sent across it.
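The two limiting cases can be checked directly. A minimal sketch of the binary channel capacity formula (the 0 log 0 = 0 convention is standard but made explicit here):

```python
import math

def bsc_capacity(p):
    # C = 1 + p*log2(p) + (1-p)*log2(1-p), with 0*log2(0) taken as 0.
    def plog(q):
        return q * math.log2(q) if q > 0 else 0.0
    return 1 + plog(p) + plog(1 - p)

print(bsc_capacity(0.0))            # 1.0 -- noiseless channel
print(bsc_capacity(0.5))            # 0.0 -- completely unreliable channel
print(round(bsc_capacity(0.1), 3))  # 0.531 -- 10% bit error probability
```

Note the formula is symmetric in p and 1-p, so a channel that is wrong 90% of the time is as useful as one that is wrong 10% of the time: the receiver can simply invert the output.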
So defined, the channel capacity is a non-dimensional number. We normally quote the capacity as a rate, in bits/second. To do this we relate each output to a change in the signal. For the binary channel we have C = B [1 + p log2 p + (1-p) log2 (1-p)]. We note that C ≤ B.