Modulation and Coding Schemes

1 Modulation and Coding Schemes
Assoc. Prof. Georgios Efthymoglou

2 Introduction
Bit level end-to-end system model
Modulation
  Gray encoding
Demodulation
  Hard decision
  Soft decision
Channel Coding
  Convolutional codes
  Puncturing
Spatial Modulation

3 Bit level end-to-end system model
bits 0 1 … → modulation symbols e^(j7π/4), e^(jπ/4), … → bits 0 1 …

4 Bit level end-to-end system model
bitseq = [ ]
Find modulation symbols:
for BPSK at baseband (carrier freq. = 0):
  1 → +1
  0 → -1
for QPSK at baseband (carrier freq. = 0):
  10 → e^(j7π/4)
  11 → e^(jπ/4)
  00 → e^(j5π/4)
  01 → e^(j3π/4)

5 Symbol and bit Rate
Assume the transmit duration of a BPSK symbol is Ts.
  The transmit symbol rate is Rs = 1/Ts.
  The transmit bandwidth is also BW = Rs (in Hz).
  The bit rate is Rb = (#bits/symbol)/Ts = 1/Ts.
Assume the transmit duration of a QPSK symbol is Ts.
  The bit rate is Rb = (#bits/symbol)/Ts = 2/Ts.
* QPSK has twice the bit rate of BPSK within the same transmit bandwidth.
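A quick numeric check of these relations (the symbol duration Ts = 1 µs is an assumed illustrative value, not from the slides):

```python
# Symbol rate vs bit rate for BPSK and QPSK.
Ts = 1e-6                    # symbol duration in seconds (assumed: 1 us)
Rs = 1 / Ts                  # symbol rate in symbols/s; also the bandwidth in Hz
Rb_bpsk = 1 * Rs             # BPSK carries 1 bit per symbol
Rb_qpsk = 2 * Rs             # QPSK carries 2 bits per symbol

# QPSK doubles the bit rate within the same transmit bandwidth.
print(Rs, Rb_bpsk, Rb_qpsk)
```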

6 Digital modulations at baseband
bitseq = [ ]
Find modulation symbols:

function [mod_symbols, sym_table, M] = modulator(bitseq, b)
N_bits = length(bitseq);
if b==1                          % BPSK modulation
    sym_table = exp(j*[0 -pi]);
    sym_table = sym_table([1 0]+1);
    inp = bitseq;
    mod_symbols = sym_table(inp+1);
    M = 2;

7 Digital modulations at baseband
function [mod_symbols, sym_table, M] = modulator(bitseq, b)
if b==2                          % QPSK modulation
    N_bits = length(bitseq);
    sym_table = exp(j*pi/4*[ ]);
    sym_table = sym_table([ ]+1);
    inp = reshape(bitseq, b, N_bits/b);
    mod_symbols = sym_table([2 1]*inp+1);
    M = 4;
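The slides implement the modulator in MATLAB; the same logic can be sketched in Python (`modulate` is a hypothetical equivalent of the `modulator` function, using the BPSK and Gray-coded QPSK mappings from slide 4):

```python
import numpy as np

def modulate(bitseq, b):
    """Map a bit sequence to BPSK (b=1) or Gray-coded QPSK (b=2) symbols."""
    bits = np.asarray(bitseq)
    if b == 1:                                   # BPSK: 0 -> -1, 1 -> +1
        table = np.array([-1.0, 1.0])
        return table[bits]
    elif b == 2:                                 # QPSK, Gray mapping from slide 4
        table = {(1, 0): np.exp(1j * 7 * np.pi / 4),
                 (1, 1): np.exp(1j * np.pi / 4),
                 (0, 0): np.exp(1j * 5 * np.pi / 4),
                 (0, 1): np.exp(1j * 3 * np.pi / 4)}
        pairs = bits.reshape(-1, 2)              # two bits per QPSK symbol
        return np.array([table[tuple(p)] for p in pairs])
    raise ValueError("only b=1 or b=2 supported in this sketch")
```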

8 Digital modulations at baseband
sym_table =    % four complex QPSK symbol values, of the form ±0.7071 ± 0.7071i
>> xsym = [0:3];
>> scatterplot(sym_table);
>> text(real(sym_table)+0.1, ...
        imag(sym_table), dec2bin(xsym));
>> axis([ ]);

9 Modulation and Coding Schemes (MCS)
4-Phase Shift Keying
Quadrature Amplitude Modulation

10 Modulation and Coding Schemes (MCS)
4-PSK transmits one of four symbols for each bit pair (11, 01, 00, 10). The transmit signal is written as a linear combination of 2 orthogonal basis functions; the coefficients determine the transmitted modulation symbol.

11 Modulation and Coding Schemes (MCS)
16-QAM transmits one of 16 symbols for each group of 4 bits. The transmit signals differ in phase and/or amplitude. The bit rate is Rb = 4/Ts.

12 Digital Modulation at Passband
If the modulated signal has the waveform s(t) = Re[ sb(t) e^(j2πfct) ], where fc is the carrier frequency, then a baseband simulation recognizes this structure and models only the part inside the square brackets. The modulated signal at baseband is given by the complex signal sb(t).
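This passband/baseband equivalence can be checked numerically; a small Python sketch (fc, Ts, and the QPSK symbol are assumed illustrative values):

```python
import numpy as np

# Verify that Re{ sb(t) * exp(j*2*pi*fc*t) } reproduces the passband
# waveform for one QPSK symbol.
fc, Ts = 10e3, 1e-3
t = np.linspace(0, Ts, 1000, endpoint=False)
sb = np.exp(1j * np.pi / 4)                        # baseband symbol for bits 11
passband = np.real(sb * np.exp(1j * 2 * np.pi * fc * t))
direct = np.cos(2 * np.pi * fc * t + np.pi / 4)    # same signal at passband
print(np.max(np.abs(passband - direct)))           # ~0: the two forms agree
```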

13 Digital Modulation at Passband
Example QPSK: analyze the tx signal into basis functions with coefficients
a1 ∈ {-1, +1}·0.707
b1 ∈ {-1, +1}·0.707
Tx:

14 MCS based on received SNR
Example of QPSK with SNR = 14 dB. Receiver detection obtains the Euclidean distance between the received signal z and all tx symbols. The maximum likelihood detector selects the symbol with the smallest distance.
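A minimal Python sketch of this minimum-distance (maximum-likelihood) detector; the received sample z is an assumed illustrative value:

```python
import numpy as np

# Minimum-Euclidean-distance QPSK detection.
sym_table = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))  # QPSK constellation
z = 0.9 * np.exp(1j * 0.7)            # assumed noisy received sample
d = np.abs(z - sym_table)             # Euclidean distance to every tx symbol
detected = sym_table[np.argmin(d)]    # ML detector picks the closest symbol
```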

15 MCS based on received SNR
The MCS used depends on the received SNR at the receiver side. Example for 3.5 MHz bandwidth:

ID  MCS         Received power (dBm)  SNR (dB)
1   BPSK 1/2    -91                   6.4
2   QPSK 1/2    -88                   9.4
3   QPSK 3/4    -86                   11.2
4   16-QAM 1/2  -81                   16.4
5   16-QAM 3/4  -79                   18.2
6   64-QAM 2/3  -74                   22.7
7   64-QAM 3/4  -73                   24.4

16 Convolutional channel encoder

17 Example of convolutional encoder
(n,k,L) = (2, 1, 7) Generator polynomials (octal): G = [155, 171]

18 Puncturing for adjusting coding rate
At the transmitter, data bits are encoded using the standard (155, 171) octal convolutional encoder. The coding rate can be adjusted to 1/2, 2/3, or 3/4 via puncturing! For example, the puncturing pattern [ ] means that at the encoder output, in every group of 6 output bits, bits #4 and #5 are not sent. At the input to the decoder, in every group of 4 received bits (with transmit values of +1 or -1), 2 bits with value 0 are inserted at positions 4 and 5.
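A Python sketch of this puncturing and depuncturing, assuming the pattern [1 1 1 0 0 1] implied by "bits #4 and #5 of every 6 are not sent" (`puncture` and `depuncture` are hypothetical helper names):

```python
import numpy as np

# Rate-1/2 -> rate-3/4 puncturing: over each group of 6 encoder output bits,
# positions 4 and 5 are not sent; depuncturing re-inserts zeros ("erasures")
# at those positions before the Viterbi decoder.
pattern = np.array([1, 1, 1, 0, 0, 1], dtype=bool)  # assumed pattern

def puncture(coded):
    coded = np.asarray(coded)
    return coded.reshape(-1, 6)[:, pattern].ravel()  # keep 4 of every 6 bits

def depuncture(rx):                                  # rx holds +1/-1 values
    rx = np.asarray(rx, dtype=float).reshape(-1, 4)
    out = np.zeros((rx.shape[0], 6))
    out[:, pattern] = rx                             # zeros mark punctured bits
    return out.ravel()
```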

19 Data Rates based on MCS selection
The bit rate depends on the modulation and coding scheme: Rb = (#bits/symbol × code rate)/Ts. In all cases, the transmit bandwidth is the same, BW = Rs = 1/Ts.

20 Spectral efficiency of MCS
Assume BW = 1 Hz.

ID  Modulation & Coding Scheme  Spectral efficiency (bit/sec/Hz)
1   BPSK 1/2                    1 x 1/2 = 0.5
2   QPSK 1/2                    2 x 1/2 = 1.0
3   QPSK 3/4                    2 x 3/4 = 1.5
4   16-QAM 1/2                  4 x 1/2 = 2.0
5   16-QAM 3/4                  4 x 3/4 = 3.0
6   64-QAM 2/3                  6 x 2/3 = 4.0
7   64-QAM 3/4                  6 x 3/4 = 4.5
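The table can be reproduced directly, since spectral efficiency is just bits-per-symbol times code rate:

```python
# Spectral efficiency (bit/s/Hz) = bits-per-symbol x code rate, for BW = 1 Hz.
mcs = [("BPSK 1/2", 1, 1/2), ("QPSK 1/2", 2, 1/2), ("QPSK 3/4", 2, 3/4),
       ("16-QAM 1/2", 4, 1/2), ("16-QAM 3/4", 4, 3/4),
       ("64-QAM 2/3", 6, 2/3), ("64-QAM 3/4", 6, 3/4)]
eff = {name: bps * rate for name, bps, rate in mcs}
print(eff)
```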

21 Channel coding
In the previous section we considered symbol-by-symbol detection. In this section we will consider block-by-block detection.

22 Forward Error Correction (FEC)

23 Channel Coding (Κωδικοποίηση καναλιού)
Error detection and correction. A convolutional coder with rate 1/2 sends 2 coded bits for every information bit. At the receiver, the extra bits are used to detect and correct a number of errors that occurred in the channel.

24 CONVOLUTIONAL CODE GENERATOR
(n,k,L) = (2,1,3), G = [111, 101]

Input C, current state BA → output (XY), next state BA:

CBA=000 -> output (00), next state 00
CBA=100 -> output (11), next state 10
CBA=001 -> output (11), next state 00
CBA=101 -> output (00), next state 10
CBA=010 -> output (10), next state 01
CBA=110 -> output (01), next state 11
CBA=011 -> output (01), next state 01
CBA=111 -> output (10), next state 11

25 Example: tx message 1011
Encoder registers: x(n), x(n-1), x(n-2); outputs y1(n) = x(n)+x(n-1)+x(n-2), y2(n) = x(n)+x(n-2) (mod 2).

x(n)  x(n-1)  x(n-2)  y1(n)  y2(n)
1     0       0       1      1
0     1       0       1      0
1     0       1       0      0
1     1       0       0      1

Tx codeword: 11 10 00 01
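The encoding steps above can be reproduced with a short sketch (the slides use MATLAB; `conv_encode` is a hypothetical Python helper implementing the same G = [111, 101] taps):

```python
# (2,1,3) convolutional encoder with G = [111, 101].
def conv_encode(bits):
    s1 = s2 = 0                       # shift register: x(n-1), x(n-2)
    out = []
    for x in bits:
        out += [x ^ s1 ^ s2, x ^ s2]  # y1 = x+x(n-1)+x(n-2), y2 = x+x(n-2)
        s1, s2 = x, s1                # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))      # codeword for tx message 1011
```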

26 Trellis Diagram of convolutional code
In the diagram, arcs for input symbol 1 and input symbol 0 are drawn with different line styles, and each arc is labeled with its output symbols.

27 Trellis diagram for convolutional code
Info bit / CodeWord transitions per state BA (repeated at every trellis stage):

from 00: 0/00 -> 00,  1/11 -> 10
from 01: 0/11 -> 00,  1/00 -> 10
from 10: 0/10 -> 01,  1/01 -> 11
from 11: 0/01 -> 01,  1/10 -> 11

28 Convolutional code output: continuous trellis
In the trellis diagram below, the highlighted path corresponds to the information bits and the resulting codeword. Each trellis node is a state at a time instance, and the branches that connect the nodes correspond to state transitions. Input bits: Encoded output symbols:

29 Channel coding principle
k*R info bits are mapped to n*R coded bits.
How many words (different combinations) of n*R bits exist? Ans: 2^(n*R)
How many different combinations of k*R bits exist? Ans: 2^(k*R)
How many are valid codewords, i.e., correspond to an input sequence of k*R bits? Ans: 2^(k*R)
Therefore, out of 2^(n*R) words only 2^(k*R) are valid!
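The counting argument can be checked in a couple of lines (R = 4 trellis stages and the rate-1/2 code are assumed illustrative values):

```python
# Only 2^(k*R) of the 2^(n*R) possible n*R-bit words are valid codewords.
k, n, R = 1, 2, 4                # rate-1/2 code, R = 4 stages (assumed)
total_words = 2 ** (n * R)       # all possible received words
valid_codewords = 2 ** (k * R)   # words reachable by encoding some input
print(valid_codewords, total_words)
```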

30 Channel coding principle
Therefore, out of 2^(n*R) words only 2^(k*R) are valid! Which are those? Those that correspond to a continuous trellis path! So if we receive n*R bits that, because of errors, correspond to a non-continuous (not valid) trellis path, we should find the "closest" codeword that has a continuous trellis path, and then use this corrected codeword to recover the info bits.

31 Channel coding principle
The correcting capability of a code depends on the Hamming distances between the valid codewords. In particular, it depends on the minimum Hamming distance between two codewords, known as the free distance, dfree. Q: how do we find the "closest" valid codeword to the received (non-valid) one?

32 Viterbi decoding
It is easy to see that an exhaustive search over codewords is impossible for practical codeword lengths (> 100). Luckily, Viterbi proposed a low-complexity decoding scheme with performance close to the ideal exhaustive search. This scheme was named after him and is the well-known Viterbi algorithm. The Viterbi algorithm uses the trellis representation of the code, starts with the first code symbols (n bits), and at each node of the trellis assigns a value that represents the difference between a valid codeword and the received codeword.

33 Viterbi decoding
It is found that once we traverse the trellis to a certain depth, known as the trace-back length (usually about 32 states), we can obtain the continuous trellis path with the smallest difference (Hamming or Euclidean distance) from the received codeword and decide on the transmitted codeword. Then, tracing back, we can output the information bits that correspond to the detected codeword. Two measures of "closeness" exist: Hamming distance, when the inputs to the Viterbi algorithm are bit values obtained from hard-decision demodulation; Euclidean distance, when the inputs to the Viterbi algorithm are soft bit values.

34 Example of Viterbi decoding
Assume a convolutional encoder with 4 states; 2 paths arrive at each state with different total metrics (e.g., Hamming distances). Viterbi algorithm: we keep the path with the smaller Hamming distance up to that node. There is no reason to keep the other path!

35 Example of Viterbi decoding
At the end of the trellis there is one path with the smallest cumulative error (2), that is, the Hamming distance from a valid codeword. The value (2) means there is a 2-bit difference between the received codeword and the selected valid codeword. The selected path (in blue) is traced back, and each transition outputs 1 information bit. In the diagram above, only the "survivor" paths at each trellis node are shown.

36 Viterbi Algorithm
A dynamic programming algorithm: it computes the most likely message sequence leading up to every intermediate state, together with its associated cost.
- Branch metric: BM(xmit, rcvd) for each branch of the trellis
  - proportional to the negative log likelihood, i.e., the negative log probability of receiving rcvd given that xmit was sent
  - "Hard decision": use bits; compute the Hamming distance between xmit and rcvd
  - "Soft decision": use the received voltages directly
- Path metric: PM[s,i] for each of the 2^(L-1) transmitter states s and bit time i
  - PM[s,i] = smallest sum of BM(xmit, rcvd), minimized over all message sequences that place the transmitter in state s at time i
  - PM[s,i+1] is computed from PM[s,i] and the BMs of the outgoing branches at (s, i)

37 Branch Metric (BM) for Hard Decision Decoding
BM = Hamming distance between expected coded bits and received coded bits.
Compute the BM for each transition arc in the trellis. Example: received bits = 00
  BM(00,00) = 0
  BM(01,00) = 1
  BM(10,00) = 1
  BM(11,00) = 2
These will be used in computing PM[s,i+1] from PM[s,i].
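A one-line sketch of the hard-decision branch metric (`bm` is a hypothetical helper name):

```python
# Hard-decision branch metric: Hamming distance between expected and received bits.
def bm(expected, received):
    return sum(e != r for e, r in zip(expected, received))

# With received bits 00, as in the slide:
print(bm("00", "00"), bm("01", "00"), bm("10", "00"), bm("11", "00"))  # 0 1 1 2
```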

38 Computing PM[s,i]
Starting point: all PM[s,i] are known; label them in the trellis box for each state at time i. Example: PM[00,i] = 1 means the received bits differ in 1 bit from the coded bits that would have been transmitted along the best path ending at state 00 at time i.

39 Computing PM[s,i]
Q: If the trellis is in state s at time i+1, which states could it have been in at time i? Ans: For each state s, there are two predecessor states α and β in the trellis diagram. Example: for state 01, α = 10 and β = 11. Any path that arrives at state s at time i+1 must have been in state α or state β at time i.

40 Computing PM[s,i]
e.g., which is the more likely path into state 01 at time i+1?
PM[01,i+1] = min{PM[10,i] + 1, PM[11,i] + 1} = min(3+1, 2+1) = 3

41 Computing PM[s,i] Formalizing the computation:
PM[s,i+1] = min(PM[α,i] + BM[α→s], PM[β,i] + BM[β→s])
Remember which arc was the minimum: the saved arcs will generate the path through the trellis at the end. If both arcs give the same sum, select one at random. E.g., when computing PM[10,i+1].
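This recurrence, together with survivor-path bookkeeping, gives a complete hard-decision decoder. A Python sketch for the (2,1,3) code G = [111, 101] from the earlier examples (`viterbi_decode` is a hypothetical helper):

```python
# Hard-decision Viterbi decoder for the (2,1,3) code with G = [111, 101],
# implementing PM[s,i+1] = min(PM[a,i] + BM, PM[b,i] + BM).
def viterbi_decode(rx_bits):
    def step(state, x):                        # state = (x(n-1), x(n-2))
        s1, s2 = state
        out = (x ^ s1 ^ s2, x ^ s2)            # y1, y2 from the generators
        return (x, s1), out                    # next state, output bits

    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    # (path metric, survivor path) per state; the encoder starts in state 00.
    pm = {s: (0, []) if s == (0, 0) else (float("inf"), []) for s in states}
    for i in range(0, len(rx_bits), 2):
        r = rx_bits[i:i + 2]
        new = {s: (float("inf"), []) for s in states}
        for s in states:
            metric, path = pm[s]
            for x in (0, 1):                   # extend each survivor by one bit
                ns, out = step(s, x)
                bm = (out[0] != r[0]) + (out[1] != r[1])  # Hamming branch metric
                if metric + bm < new[ns][0]:   # keep only the better arrival
                    new[ns] = (metric + bm, path + [x])
        pm = new
    return min(pm.values(), key=lambda v: v[0])[1]  # path of best final state

print(viterbi_decode([1, 1, 1, 0, 0, 0, 0, 1]))     # recovers message 1011
```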

42 Hard Viterbi decoding in action…
Received:
Complete the path metrics and find the decoded bits!

43 Viterbi decoding metrics
It is found that once we traverse the trellis to a certain depth, known as the trace-back length (usually about 32 states), we can obtain the continuous trellis path with the smallest difference (Hamming or Euclidean distance) from the received codeword and decide on the transmitted codeword. Then, tracing back, we can output the information bits that correspond to the detected codeword. Two measures of "closeness" exist: Hamming distance, when the inputs to the Viterbi algorithm are bit values obtained from hard-decision demodulation; Euclidean distance, when the inputs to the Viterbi algorithm are soft bit values.

44 Soft decision path metric
For soft-decision decoding, the j = 1,…,n received coded bits at the i-th time instant are used directly in the decoder. For example, for BPSK modulation, the received symbols are given as r_{i,j} = c_{i,j} + n_{i,j}, where the n_{i,j} are random variables due to additive Gaussian noise (with mean value 0 and variance σ²) with conditional distribution p(r_{i,j} | c_{i,j}) = (1/√(2πσ²)) exp(−(r_{i,j} − c_{i,j})² / (2σ²)).

45 Soft decision path metric
For maximum likelihood decoding, the decoder must select the codeword C for which P(R|C) is maximum. Every valid codeword C corresponds to a continuous path in the trellis. For a convolutional code of rate 1/n, the previous equation can be written as P(R|C) = ∏_i ∏_{j=1}^{n} p(r_{i,j} | c_{i,j}), where i indexes the i-th branch of the trellis. Notice that we multiply the likelihoods of the individual bits.

46 Soft decision path metric
Taking the log-likelihood, the previous relationship becomes log P(R|C) = Σ_i Σ_{j=1}^{n} log p(r_{i,j} | c_{i,j}), where the quantity B_i = Σ_{j=1}^{n} log p(r_{i,j} | c_{i,j}) gives the branch metric of the i-th branch of the trellis. In practice, the summation is not infinite but includes many received symbols (1 symbol = n bits). The number of states transitioned before a decision is made is called the trace-back length, tblen.

47 Soft decision path metric
The branch metric of the i-th branch of the trellis is given by B_i = Σ_{j=1}^{n} log p(r_{i,j} | c_{i,j}). Maximizing the log-likelihood metric B_i is equivalent to minimizing the Euclidean distance between the received values r_{i,j} and the branch's coded symbols c_{i,j}.
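Under the slides' BPSK mapping of bit 0 → +1 and bit 1 → −1 (as in `tcode = 1-2*punctcode` later), the soft branch metric reduces to a squared Euclidean distance. A Python sketch with assumed received values (`soft_bm` is a hypothetical helper):

```python
import numpy as np

# Soft-decision branch metric: minimizing sum_j (r_j - c_j)^2 between received
# values and a branch's expected BPSK levels is equivalent to maximizing B_i.
def soft_bm(rx, expected_bits):
    c = 1 - 2 * np.asarray(expected_bits)      # bit 0 -> +1, bit 1 -> -1
    return float(np.sum((np.asarray(rx) - c) ** 2))

rx = [0.8, -0.6]                               # assumed noisy received values
print(soft_bm(rx, [0, 1]), soft_bm(rx, [1, 1]))  # smaller distance is better
```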

48 Best path in the Trellis
After we transition TBLEN states in the trellis, we select the state with the "best" (smallest) metric. Then we trace back and output the information bits.

49 Best path in the Trellis
Maximizing the log-likelihood metric B_i is equivalent to minimizing the Euclidean distance between the received values and the coded symbols of a branch. The final result is obtained by ignoring scaling terms that are independent of c_{i,j}.

50 Best path in the Trellis
Therefore, maximizing the log-likelihood metric B_i is equivalent to minimizing the Euclidean distance between the received values and the branch's coded symbols, where the last result is obtained by omitting scaling terms and terms that are independent of c_{i,j}.

51 Hard/Soft decision Viterbi decoding in action…
Rx bits:
Rx values: , , , , , , -0.8
Complete the path metrics for soft-decision Viterbi decoding and find the decoded bits!

52 Example of MCS The MCS used depends on the received SNR at the receiver side. Example for 3.5MHz bandwidth: ID MCS Received power (dBm) SNR (dB) 1 BPSK 1/2 -91 6.4 2 QPSK 1/2 -88 9.4 3 QPSK 3/4 -86 11.2 4 16-QAM ½ -81 16.4 5 16-QAM 3/4 -79 18.2 6 64-QAM 2/3 -74 22.7 7 64-QAM 3/4 -73 24.4

53 Example of convolutional encoder

54 Example of convolutional encoder
(n,k,L) = (2, 1, 7) Generator polynomials (octal): G = [155, 171]

55 Puncturing for adjusting code rate
At the transmitter, data bits are encoded using the standard (155, 171) octal convolutional encoder. The coding rate can be adjusted to 1/2, 2/3, or 3/4 via puncturing! For example, the puncturing pattern [ ] means that at the encoder output, in every group of 6 output bits, bits #4 and #5 are not sent. At the input to the decoder, in every group of 4 received bits (with transmit values of +1 or -1), 2 bits with value 0 are inserted at positions 4 and 5.

56 Puncturing for adjusting code rate
This example processes a punctured convolutional code. It begins by generating 30,000 random bits and encoding them using a rate-3/4 convolutional encoder with a puncture pattern of [ ]. The resulting vector contains 40,000 bits, which are mapped to values of -1 and 1 for transmission. The punctured code, punctcode, passes through an additive white Gaussian noise channel. Then vitdec decodes the noisy vector using the 'unquant' decision type. Finally, the example computes the bit error rate and the number of bit errors.

57 Puncturing for adjusting code rate
len = 30000;
msg = randi([0 1], len, 1);          % Random data
t = poly2trellis(7, [ ]);            % Define trellis
punctcode = convenc(msg, t, [ ]);    % Length is (2*len)*2/3
tcode = 1 - 2*punctcode;             % Map "0" bit to +1 and "1" bit to -1
ncode = awgn(tcode, 3, 'measured');  % Add noise

% Decode the punctured code
decoded = vitdec(ncode, t, 96, 'trunc', 'unquant', [ ]);
[numErrP, berP] = biterr(decoded, msg);  % Bit error count and error rate

58 Concept behind Spatial modulation
Spatial Multiplexing Transmit Diversity Spatial Modulation

59 3D Constellation Diagram for 4 tx antennas
M. Di Renzo, H. Haas and P. M. Grant, "Spatial Modulation for Multiple-Antenna Wireless Systems: A Survey", IEEE Commun. Mag., 49(12), 2011

60 3D Constellation Diagram (2/4)
M. Di Renzo, H. Haas and P. M. Grant, "Spatial Modulation for Multiple-Antenna Wireless Systems: A Survey", IEEE Commun. Mag., 49(12), 2011

61 3D Constellation Diagram (3/4)
The first 4 bits are mapped to the symbol in yellow, transmitted from Tx3.
M. Di Renzo, H. Haas and P. M. Grant, "Spatial Modulation for Multiple-Antenna Wireless Systems: A Survey", IEEE Commun. Mag., 49(12), 2011

62 3D Constellation Diagram (4/4)
The second 4 bits are mapped to the symbol in yellow, transmitted from Tx0.
M. Di Renzo, H. Haas and P. M. Grant, "Spatial Modulation for Multiple-Antenna Wireless Systems: A Survey", IEEE Commun. Mag., 49(12), 2011

63 Transmitter

64 Wireless Channel

65 Receiver Detection Estimate which antenna transmitted (recover bits mapped to this antenna number) Estimate the modulation symbol from this antenna (recover bits mapped to this symbol)

66 Some issues on SM systems
Differences between SM and Transmit Antenna Selection (TAS):
  SM is open loop (spatial multiplexing); TAS is closed loop (transmit diversity).
  In TAS, antenna switching depends on the end-to-end performance; in SM, antenna switching depends on the incoming bit stream.
Spectral efficiency of multiple-antenna schemes:
  SIMO: log2(M) b/s/Hz; MIMO: Nt*log2(M) b/s/Hz; SM: log2(M) + log2(Nt) b/s/Hz. SM is suboptimal in this sense.

