
1 MPEG-4 AVC (H.264)

2 Introduction H.264 is aimed at very low bit rate, real-time, low end-to-end delay, and mobile applications such as conversational services and Internet video. It provides enhanced visual quality at very low bit rates, particularly at rates below 24 kb/s.

3 Structure of the H.264/AVC Video Coder
–VCL (Video Coding Layer): designed to efficiently represent the video content.
–NAL (Network Abstraction Layer): formats the VCL representation of the video and provides header information for conveyance by a variety of transport layers or storage media.

4

5 Video Coding Layer

6 Basic Structure of VCL
[Block diagram: the input video signal is split into 16×16-pixel macroblocks; each macroblock is coded by intra-frame prediction or motion estimation/compensation, followed by transform/scaling/quantization, scaling and inverse transform, and a de-blocking filter; the coder control produces control data, quantized transform coefficients and motion data, which are entropy coded; the decoder loop reconstructs the output video signal.]

7 Intra-frame Prediction
[VCL block diagram with the intra-frame prediction stage highlighted.]

8 Intra-frame encoding in H.264 supports Intra_4×4, Intra_16×16 and I_PCM. I_PCM allows the encoder to send the values of the encoded samples directly. Intra_4×4 and Intra_16×16 use intra prediction.

9 Intra_4×4
–9 modes
–Used in textured areas
Intra_16×16
–4 modes
–Used in flat areas

10 Four modes of Intra_16×16:
–Mode 0 (vertical): extrapolation from the upper samples (H)
–Mode 1 (horizontal): extrapolation from the left samples (V)
–Mode 2 (DC): mean of the upper and left-hand samples (H+V)
–Mode 3 (plane): a linear "plane" function is fitted to the upper and left-hand samples H and V. This works well in areas of smoothly varying luminance.
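As a rough illustration (not the normative decoding process), the first three Intra_16×16 modes can be sketched as follows. The arrays top and left are assumed to hold the 16 reconstructed neighbour samples above and to the left of the macroblock; mode 3 (plane) is omitted for brevity.

```python
import numpy as np

def intra16_predict(top, left, mode):
    """Sketch of Intra_16x16 modes 0-2 (vertical, horizontal, DC)."""
    top = np.asarray(top, dtype=np.int32)    # 16 samples above the macroblock
    left = np.asarray(left, dtype=np.int32)  # 16 samples to its left
    if mode == 0:                            # mode 0 (vertical): copy the row above downwards
        return np.tile(top, (16, 1))
    if mode == 1:                            # mode 1 (horizontal): copy the left column across
        return np.tile(left[:, None], (1, 16))
    if mode == 2:                            # mode 2 (DC): mean of the upper and left-hand samples
        dc = (int(top.sum()) + int(left.sum()) + 16) >> 5
        return np.full((16, 16), dc, dtype=np.int32)
    raise NotImplementedError("mode 3 (plane) is not shown in this sketch")
```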

11 Example: Original image

12 Nine modes of Intra_4×4:
–The prediction block P is calculated from the samples labeled A–M.
–The encoder may select, for each block, the prediction mode that minimizes the residual between P and the block to be encoded, as sketched below.
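The standard does not prescribe how the encoder chooses the mode; a simple, purely illustrative criterion is the sum of absolute differences (SAD) between the block and each candidate prediction P:

```python
import numpy as np

def best_intra4_mode(block, predictions):
    """Return the 4x4 intra mode whose prediction minimises the SAD residual.

    block:       the 4x4 block to be encoded
    predictions: dict mapping mode number -> 4x4 prediction P, built from the
                 neighbouring samples A-M for the modes available at this position
    """
    best_mode, best_sad = None, None
    for mode, pred in predictions.items():
        sad = int(np.abs(np.asarray(block, np.int32) - np.asarray(pred, np.int32)).sum())
        if best_sad is None or sad < best_sad:
            best_mode, best_sad = mode, sad
    return best_mode, best_sad
```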

13 Example: Consider a 4×4 block and its neighbors labeled as in the figure. Suppose mode 4 is used for prediction. Then a = (A + 2M + I + 2)/4.

14 Example:

15

16 Motion Estimation/Compensation
[VCL block diagram with the motion estimation/compensation stage highlighted.]

17 Features of the H.264 motion estimation:
–Various block sizes
–¼-sample accuracy: 6-tap filtering to ½-sample accuracy, simplified filtering to ¼-sample accuracy
–Multiple reference pictures
–Generalized B-frames

18 Variable Block Size Block-Matching
–In H.264, a video frame is first split into fixed-size macroblocks.
–Each macroblock may then be segmented into subblocks with different block sizes.
–A macroblock has a dimension of 16×16 pixels; the smallest subblock is 4×4.
[Diagram: macroblock types 16×16, 16×8, 8×16 and 8×8; each 8×8 subblock may be further split into the 8×8 types 8×8, 8×4, 4×8 and 4×4.]

19 Example: This example shows the effectiveness of block matching operations with smaller sizes. Frame 1

20 Frame 2

21 Difference between Frame 1 and Frame 2

22 Results of block-matching operation with size 16×16

23 Results of block-matching operation with size 8×8

24 Results of block-matching operation with size 4×4

25 To use a subblock with size less than 8×8, it is necessary to first split the macroblock into four 8×8 subblocks.

26 Example:

27 Encoding a motion vector for each subblock can cost a significant number of bits, especially if small block sizes are chosen. Motion vectors for neighboring subblocks are often highly correlated and so each motion vector is predicted from vectors of nearby, previously coded subblocks. The difference between the motion vector of the current block and its prediction is encoded and transmitted.

28 The method of forming the prediction depends on the block size and on the availability of nearby vectors. Let E be the current block, let A be the subblock immediately to the left of E, let B be the subblock immediately above E, let C be the subblock above and to the right of E, and let D be the subblock above and to the left of E. It is not necessary that A, B, C and E have the same size.

29 There are two modes for the prediction of motion vectors:
–Median prediction: used for all block sizes except 16×8 and 8×16
–Directional segmentation prediction: used for 16×8 and 8×16

30 Median prediction
–If C does not exist, then C = D.
–If B and C do not exist, then prediction = V_A.
–If A and C do not exist, then prediction = V_B.
–If A and B do not exist, then prediction = V_C.
–Otherwise, prediction = median(V_A, V_B, V_C).
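A minimal sketch of this rule, assuming motion vectors are (x, y) tuples and None marks an unavailable neighbour (the final case assumes A, B and C are all present, as on the slide):

```python
def predict_mv(mv_a, mv_b, mv_c, mv_d):
    """Median motion-vector prediction from neighbours A (left), B (above),
    C (above-right) and D (above-left)."""
    if mv_c is None:                     # C not available: substitute D
        mv_c = mv_d
    if mv_b is None and mv_c is None:    # only A usable
        return mv_a
    if mv_a is None and mv_c is None:    # only B usable
        return mv_b
    if mv_a is None and mv_b is None:    # only C usable
        return mv_c
    xs = sorted(v[0] for v in (mv_a, mv_b, mv_c))
    ys = sorted(v[1] for v in (mv_a, mv_b, mv_c))
    return (xs[1], ys[1])                # component-wise median
```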

31 Directional segmentation prediction
–Vector block size 8×16: left partition: prediction = V_A; right partition: prediction = V_C.
–Vector block size 16×8: upper partition: prediction = V_B; lower partition: prediction = V_A.

32 Fractional Motion Estimation In H.264, motion vectors between the current block and the candidate block have ¼-pel resolution. Samples at sub-pel positions do not exist in the reference frame, so it is necessary to create them by interpolation from nearby image samples.

33 Interpolation of ½-pel samples:
b = round((E − 5F + 20G + 20H − 5I + J)/32)
h = round((A − 5C + 20G + 20M − 5R + T)/32)
j = round((aa − 5bb + 20b + 20s − 5gg + hh)/32)

34 Interpolation of ¼-pel samples:
a = round((G + b)/2)
d = round((G + h)/2)
e = round((b + h)/2)
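The interpolation can be sketched as follows (illustrative only; the rounding offset +16 and the clip to the 8-bit sample range implement the round() of the formulas above):

```python
def half_pel(e, f, g, h, i, j):
    """Half-pel sample from six neighbouring full-pel samples,
    e.g. b = round((E - 5F + 20G + 20H - 5I + J) / 32)."""
    val = (e - 5 * f + 20 * g + 20 * h - 5 * i + j + 16) >> 5
    return max(0, min(255, val))         # clip to the 8-bit sample range

def quarter_pel(p, q):
    """Quarter-pel sample as the rounded average of two neighbours,
    e.g. a = round((G + b) / 2)."""
    return (p + q + 1) >> 1

# Illustrative use with made-up sample values:
b = half_pel(2, 4, 10, 12, 6, 3)   # -> 12
a = quarter_pel(10, b)             # G = 10 -> 11
```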

35 Multiple Reference Frames

36

37 Motion estimation based on multiple reference frames provides opportunities for more precise inter-prediction, and also improved robustness to lost picture data. The drawback of multiple reference frames is that both the encoder and decoder have to store the reference frames used for inter-frame prediction in a multi-frame buffer.

38 [Rate-distortion plot for Mobile & Calendar (CIF, 30 fps): luma PSNR [dB] versus bit rate R [Mbit/s] for PPP coding with 1 previous reference, PPP with 5 previous references, PBB with classic B pictures, and PBB with generalized B pictures; the highlighted gain is about 15%.]

39 Generalized B Frames Basic B-frames: The basic B-frames cannot be used as reference frames.

40 Generalized B-frames: The generalized B-frames can be used as reference frames.

41 [Same rate-distortion plot for Mobile & Calendar (CIF, 30 fps) as above; the highlighted gain is more than 25%.]

42 [Same rate-distortion plot for Mobile & Calendar (CIF, 30 fps) as above; the highlighted gain is about 40%.]

43 Transformation/Quantization
[VCL block diagram with the transform/scaling/quantization stage highlighted.]

44 Transformation
The Discrete Cosine Transform (DCT) operates on x, a block of N × N samples, and creates X, an N × N block of coefficients.
The forward DCT: X = A x A^T.
The inverse DCT: x = A^T X A.

45 The elements of A are:
A(i,j) = C_i cos( (2j+1) i π / 2N ),  i, j = 0, 1, …, N−1,
where C_i = √(1/N) for i = 0 and C_i = √(2/N) for i > 0.

46 Example: The transform matrix A for a 4×4 DCT is:
A = [ a   a   a   a ]
    [ b   c  −c  −b ]
    [ a  −a  −a   a ]
    [ c  −b   b  −c ]
where a = 1/2, b = √(1/2) cos(π/8) ≈ 0.653, and c = √(1/2) cos(3π/8) ≈ 0.271.

47 Equivalently, X = A x A^T can be written as X = (C x C^T) ⊗ E, where
C = [ 1   1   1   1 ]        E = [ a²  ab  a²  ab ]
    [ 1   d  −d  −1 ]            [ ab  b²  ab  b² ]
    [ 1  −1  −1   1 ]            [ a²  ab  a²  ab ]
    [ d  −1   1  −d ]            [ ab  b²  ab  b² ]
That is, A is factored into a core matrix C and a matrix of scaling factors E, with d = c/b ≈ 0.414.

48 The H.264 transform is based on the 4×4 DCT, but with some fundamental differences:
1. It is an integer transform.
2. The core part of the transform can be implemented using only additions and shifts.
3. A scaling multiplication is integrated into the quantizer, reducing the total number of multiplications.

49 Recall that X = A x A^T = (C x C^T) ⊗ E, with C, E and d = c/b as defined above.

50 Post-scaling
1. We call (C x C^T) the core 2D transform.
2. E is a matrix of scaling factors.
3. ⊗ indicates that each element of (C x C^T) is multiplied by the scaling factor in the same position in matrix E (i.e., ⊗ is element-by-element scalar multiplication rather than matrix multiplication), where d = c/b.

51 To simplify the implementation of the transform, d is approximated by 0.5. In order to ensure that the transform remains orthogonal, b also needs to be modified, so that a = 1/2, b = √(2/5) and d = 1/2.

52 The final forward transform becomes X = (C_f x C_f^T) ⊗ E_f, with
C_f = [ 1   1   1   1 ]        E_f = [ a²    ab/2  a²    ab/2 ]
      [ 2   1  −1  −2 ]              [ ab/2  b²/4  ab/2  b²/4 ]
      [ 1  −1  −1   1 ]              [ a²    ab/2  a²    ab/2 ]
      [ 1  −2   2  −1 ]              [ ab/2  b²/4  ab/2  b²/4 ]
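A small sketch of the forward core transform (integer arithmetic only; the scaling by E_f is folded into the quantizer, as described in the following slides):

```python
import numpy as np

Cf = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]], dtype=np.int64)

def forward_core_transform(x):
    """Unscaled coefficients W = Cf . x . Cf^T for a 4x4 block x."""
    return Cf @ np.asarray(x, dtype=np.int64) @ Cf.T

# The block used in the worked example later in this section:
x = [[5, 11, 8, 10], [9, 8, 4, 12], [1, 10, 11, 4], [19, 6, 15, 7]]
print(forward_core_transform(x))   # first row: [140, -1, -6, 7]
```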

53 Pre-scaling
The inverse transform is given by x' = C_i^T (X ⊗ E_i) C_i, with
C_i = [ 1     1     1     1   ]        E_i = [ a²  ab  a²  ab ]
      [ 1    1/2  −1/2   −1   ]              [ ab  b²  ab  b² ]
      [ 1    −1    −1     1   ]              [ a²  ab  a²  ab ]
      [ 1/2  −1     1   −1/2  ]              [ ab  b²  ab  b² ]

54 Quantization H.264 uses a scalar quantizer. The quantization should satisfy the following requirements:
(a) avoid division and/or floating-point arithmetic;
(b) incorporate the post- and pre-scaling matrices E_f and E_i.

55 The basic forward quantizer operation is Z(u,v)= round( X(u,v)/QStep ) where X(u,v) is a transform coefficient, Z(u,v) is a quantized coefficient, and QStep is a quantizer step size.

56 There are 52 quantizer values (i.e., the quantization parameter QP ranges from 0 to 51). An increase of 1 in QP means an increase of QStep by approximately 12%. An increase of 6 in QP means an increase of QStep by a factor of 2.
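For example, QStep = 1.0 at QP = 4 (the value used in the example below), so the doubling rule gives QStep ≈ 2.0 at QP = 10 and QStep ≈ 4.0 at QP = 16.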

57 Post-scaling
The post-scaling factor PF (i.e., a², ab/2 or b²/4) is incorporated into the forward quantizer in the following way:
1. The input block x is transformed to give a block of unscaled coefficients W = C_f x C_f^T.
2. Then each coefficient in W is quantized and scaled in a single operation: Z(u,v) = round( W(u,v) × PF / QStep ), where PF is a², ab/2 or b²/4 depending on the position (u,v).

58 In order to simplify the arithmetic, the factor (PF/QStep) is implemented as a multiplication by a factor MF and a right shift, avoiding any division operations: Z(u,v) = round( W(u,v) × MF / 2^qbits ), where MF/2^qbits = PF/QStep and qbits = 15 + ⌊QP/6⌋.

59 Note that the rounding operation does not have to round to the nearest integer. In the reference model software, the rounding is realized by
|Z(u,v)| = (|W(u,v)| × MF + f) >> qbits
sign(Z(u,v)) = sign(W(u,v))
where f is 2^qbits/3 for intra blocks and 2^qbits/6 for inter blocks.
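A minimal sketch of this quantizer (not the reference implementation; MF is assumed to be looked up, for the coefficient position and QP mod 6, from the table shown later):

```python
def quantize(W, MF, qp, intra=True):
    """|Z| = (|W| * MF + f) >> qbits with sign(Z) = sign(W)."""
    qbits = 15 + qp // 6
    f = (1 << qbits) // 3 if intra else (1 << qbits) // 6
    z = (abs(W) * MF + f) >> qbits
    return z if W >= 0 else -z

# For the worked example later in this section (QP = 10, intra):
print(quantize(140, 8192, 10))   # -> 17
```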

60 Example: Suppose QP = 4 and (u,v) = (0,0). Then QStep = 1.0, PF = a² = 0.25, and qbits = 15. From MF/2^qbits = PF/QStep we have MF = 2^15 × 0.25 / 1.0 = 8192.

61 The MF values for 0 ≤ QP ≤ 5 are shown below. For QP > 5, the factors MF remain unchanged, but qbits increases by 1 for each increment of six in QP; that is, qbits = 16 for 6 ≤ QP ≤ 11, qbits = 17 for 12 ≤ QP ≤ 17, and so on.
QP | Positions (0,0),(2,0),(0,2),(2,2) | Positions (1,1),(1,3),(3,1),(3,3) | Other positions
0  | 13107 | 5243 | 8066
1  | 11916 | 4660 | 7490
2  | 10082 | 4194 | 6554
3  | 9362  | 3647 | 5825
4  | 8192  | 3355 | 5243
5  | 7282  | 2893 | 4559

62 Pre-scaling
The de-quantized coefficient is given by W'(u,v) = Z(u,v) × QStep × PF × 64.
The inverse transform involving pre-scaling operations proceeds in the following way:
1. The de-quantized block W' (which already incorporates the pre-scaling factor PF and the constant 64) is the input to the core 2D inverse transform.
2. The reconstructed block is then given by x'(u,v) = round( (C_i^T W' C_i)(u,v) / 64 ).

63 The pre-scaling factor PF (i.e., a², ab or b²) is incorporated in the computation of W', together with a constant scaling factor of 64 to avoid rounding errors. The values at the output of the inverse transform are divided by 64 to remove this constant scaling factor.

64 The H.264 standard does not specify QStep or PF directly. Instead, the parameter V = QStep × PF × 64 is defined. The V values for 0 ≤ QP ≤ 5 are shown below.
QP | Positions (0,0),(2,0),(0,2),(2,2) | Positions (1,1),(1,3),(3,1),(3,3) | Other positions
0  | 10 | 16 | 13
1  | 11 | 18 | 14
2  | 13 | 20 | 16
3  | 14 | 23 | 18
4  | 16 | 25 | 20
5  | 18 | 29 | 23

65 For QP > 5, the V value increases by a factor of 2 for each increment of six in QP. That is, V(QP) = V(QP mod 6) × 2^⌊QP/6⌋.

66 The Complete Transformation, Quantization, Rescaling and Inverse Transformation
Encoding:
1. Input 4×4 block: x
2. Forward core transform: W = C_f x C_f^T
3. Post-scaling and quantization: Z(u,v) = round( W(u,v) × MF / 2^qbits )
Decoding:
1. Pre-scaling: W'(u,v) = Z(u,v) × V(u,v)
2. Inverse core transform: W'' = C_i^T W' C_i
3. Re-scaling: x'(u,v) = round( W''(u,v) / 64 )

67 Example:
1. Suppose QP = 10, and the input block is
x = [  5  11   8  10 ]
    [  9   8   4  12 ]
    [  1  10  11   4 ]
    [ 19   6  15   7 ]
2. Forward core transform:
W = [ 140   -1   -6    7 ]
    [ -19  -39    7  -92 ]
    [  22   17    8   31 ]
    [ -27  -32  -59  -21 ]

68 3. MF = 8192, 3355 or 5243 (depending on the coefficient position), qbits = 16 and f = 2^qbits/3:
Z = [ 17   0  -1   0 ]
    [ -1  -2   0  -5 ]
    [  3   1   1   2 ]
    [ -2  -1  -5  -1 ]
4. V = 32, 50 or 40 because 2^⌊QP/6⌋ = 2:
W' = [ 544     0   -32     0 ]
     [ -40  -100     0  -250 ]
     [  96    40    32    80 ]
     [ -80   -50  -200   -50 ]

69 5. The output of the inverse core transform, after division by 64, is
x' = [  4  13   8  10 ]
     [  8   8   4  12 ]
     [  1  10  10   3 ]
     [ 18   5  14   7 ]
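The whole round trip can be sketched as follows (illustrative only, not the bit-exact reference decoder, which performs the inverse transform with shift-based butterflies; the MF and V rows used here are the QP mod 6 = 4 entries of the tables above). Running it reproduces the matrices W, Z, W' and x' of the example.

```python
import numpy as np

Cf = np.array([[1, 1, 1, 1], [2, 1, -1, -2], [1, -1, -1, 1], [1, -2, 2, -1]])
# Mi = Ci^T, so Mi @ W' @ Mi.T computes the core inverse transform Ci^T W' Ci.
Mi = np.array([[1, 1, 1, 0.5], [1, 0.5, -1, -1], [1, -0.5, -1, 1], [1, -1, 1, -0.5]])

MF_ROW = (8192, 3355, 5243)   # QP mod 6 = 4 row of the MF table
V_ROW = (16, 25, 20)          # QP mod 6 = 4 row of the V table

def pos_class(u, v):
    """0: (0,0)-type positions, 1: (1,1)-type positions, 2: the rest."""
    return 0 if u % 2 == 0 and v % 2 == 0 else (1 if u % 2 and v % 2 else 2)

def round_trip(x, qp=10, intra=True):
    qbits = 15 + qp // 6
    f = (1 << qbits) // 3 if intra else (1 << qbits) // 6
    W = Cf @ np.asarray(x, dtype=np.int64) @ Cf.T          # forward core transform
    Z = np.zeros((4, 4), dtype=np.int64)
    Wr = np.zeros((4, 4), dtype=np.int64)
    for u in range(4):
        for v in range(4):
            k = pos_class(u, v)
            z = (abs(W[u, v]) * MF_ROW[k] + f) >> qbits    # quantize
            Z[u, v] = z if W[u, v] >= 0 else -z
            Wr[u, v] = Z[u, v] * (V_ROW[k] << (qp // 6))   # rescale with V
    xr = np.rint(Mi @ Wr @ Mi.T / 64).astype(np.int64)     # inverse transform and /64
    return W, Z, Wr, xr

x = [[5, 11, 8, 10], [9, 8, 4, 12], [1, 10, 11, 4], [19, 6, 15, 7]]
W, Z, Wr, xr = round_trip(x)
```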

70 Entropy Coding
[VCL block diagram with the entropy coding stage highlighted.]

71 Here we present two basic variable-length coding (VLC) techniques used by H.264: the Exp-Golomb code and context-adaptive VLC (CAVLC).
–The Exp-Golomb code is used universally for all symbols except transform coefficients.
–CAVLC is used for coding transform coefficients: there is no end-of-block code, but the number of coefficients is decoded; coefficients are scanned backward; and contexts are built depending on the transform coefficients.

72 Exp-Golomb code
Exp-Golomb codes are variable-length codes with a regular construction. The first 9 codewords are:
code_num | codeword
0 | 1
1 | 010
2 | 011
3 | 00100
4 | 00101
5 | 00110
6 | 00111
7 | 0001000
8 | 0001001

73 Each Exp-Golomb codeword is constructed as follows: [M zeros][1][INFO], where INFO is an M-bit field carrying information. The length of a codeword is therefore 2M+1 bits.

74 Given a code_num, the corresponding Exp-Golomb codeword can be obtained by the following procedure:
(a) M = ⌊log2(code_num + 1)⌋
(b) INFO = code_num + 1 − 2^M
Example: code_num = 6: M = ⌊log2(6+1)⌋ = 2, INFO = 6 + 1 − 2² = 3. The corresponding Exp-Golomb codeword is [M zeros][1][INFO] = 00111.

75 Given an Exp-Golomb codeword, its code_num can be found as follows:
(a) Read M leading zeros followed by 1.
(b) Read the M-bit INFO field.
(c) code_num = 2^M + INFO − 1
Example: Exp-Golomb codeword = 00111: (a) M = 2; (b) INFO = 3; (c) code_num = 2² + 3 − 1 = 6.
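Both procedures are easy to express in code; a small sketch (the bitstream is represented as a string of '0'/'1' characters for clarity):

```python
import math

def exp_golomb_encode(code_num):
    """Build the codeword [M zeros][1][INFO] for code_num >= 0."""
    m = int(math.floor(math.log2(code_num + 1)))
    info = code_num + 1 - (1 << m)
    return '0' * m + '1' + (format(info, '0{}b'.format(m)) if m else '')

def exp_golomb_decode(bits):
    """Read one codeword from the front of a bit string; return (code_num, remaining bits)."""
    m = bits.index('1')                            # count the leading zeros
    info = int(bits[m + 1:2 * m + 1] or '0', 2)    # M-bit INFO field
    return (1 << m) + info - 1, bits[2 * m + 1:]

assert exp_golomb_encode(6) == '00111'             # matches the example above
assert exp_golomb_decode('00111') == (6, '')
```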

76 A parameter v to be encoded is mapped to code_num in one of 3 ways:
–ue(v): unsigned direct mapping, code_num = v. (Mainly used for macroblock type and reference frame index.)
–se(v): signed mapping; v is mapped to code_num as follows: code_num = 2|v| for v ≤ 0, and code_num = 2v − 1 for v > 0. (Mainly used for motion vector differences and delta QP.)

77 me(v): Mapped symbols. Parameter v is mapped to code_num according to a table specified in the standard. This mapping is used for coded_block_pattern parameters. An example of such a mapping is shown below.

78 CAVLC This is the method used to encode residual, zig-zag ordered blocks of transform coefficients.

79 CAVLC is designed to take advantage of several characteristics of quantized 4×4 blocks:
–After prediction, transformation and quantization, blocks are typically sparse (containing mostly zeros).
–The highest non-zero coefficients after the zig-zag scan are often sequences of ±1.
–The number of non-zero coefficients in neighboring blocks is correlated.
–The level (magnitude) of non-zero coefficients tends to be higher at the start of the zig-zag scan and lower towards the high frequencies.

80 The procedure described below is based on JVT document JVT-C028: Gisle Bjøntegaard and Karl Lillevold, "Context-adaptive VLC (CVLC) coding of coefficients," Fairfax, VA, May 2002. The H.264 CAVLC is an extension of this work.

81 The CAVLC encoding of a block of transform coefficients proceeds as follows:
1. Encode the number of coefficients and trailing ones.
2. Encode the sign of each trailing one.
3. Encode the levels of the remaining non-zero coefficients.
4. Encode the total number of zeros before the last coefficient.
5. Encode each run of zeros.

82 Encode the number of coefficients and trailing ones
The first step is to encode the number of coefficients (NumCoeff) and trailing ones (T1s). NumCoeff can be anything from 0 (no coefficients in the block) to 16 (16 non-zero coefficients). T1s can be anything from 0 to 3. If there are more than 3 trailing ±1s, only the last 3 are treated as "special cases" and the others are coded as normal coefficients.

83 Example: Consider the 4×4 block shown on the slide, which contains seven non-zero coefficients whose last three values in zig-zag order are ±1. Then NumCoeff = 7 and T1s = 3.

84 Three tables can be used for the encoding of NumCoeff and T1s: Num-VLC0, Num-VLC1 and Num-VLC2.
[Table: Num-VLC0.]

85 The selection of table depends on the number of non-zero coefficients in the upper and left-hand previously coded blocks, N_U and N_L. A parameter N is calculated as follows:
–If blocks U and L are both available (i.e., in the same coded slice), N = (N_U + N_L)/2.
–If only block U is available, N = N_U.
–If only block L is available, N = N_L.
–If neither is available, N = 0.

86 The table is selected based on N in the following way:
N          | Selected table
0, 1       | Num-VLC0
2, 3       | Num-VLC1
4, 5, 6, 7 | Num-VLC2
8 or above | FLC
The FLC (fixed-length code) has the form xxxxyy (6 bits), where xxxx and yy represent NumCoeff and T1s, respectively.

87 Encode the sign of each trailing one
For each T1, a single bit encodes the sign (0 = +, 1 = −). These are encoded in reverse order, starting with the highest-frequency T1.

88 Encode the levels of the remaining non-zero coefficients
The level (sign and magnitude) of each remaining non-zero coefficient in the block is encoded in reverse order. There are 5 VLC tables to choose from, Lev_VLC0 to Lev_VLC4. Lev_VLC0 is biased towards lower magnitudes, Lev_VLC1 is biased towards slightly higher magnitudes, and so on.

89

90 This is used only when it is impossible for a coefficient to have the value ±1, which happens when T1s < 3.

91

92 To improve coding efficiency, the level VLC table is switched as coding proceeds, based on the following procedure.

93 Encode the total number of zeros before the last coefficient
The following table is used for encoding the total number of zeros before the last coefficient (TotZeros).

94 Encode each run of zeros
At this stage it is known how many zeros are left to distribute (call this ZerosLeft). When the non-zero coefficients are encoded or decoded for the first time, ZerosLeft begins at TotZeros and decreases as more non-zero coefficients are encoded or decoded. The number of zeros preceding each non-zero coefficient (called RunBefore) needs to be coded to properly locate that coefficient. Before coding the next RunBefore, ZerosLeft is updated and used to select one of 7 tables.

95 Why is the maximum number 14? (RunBefore is coded only when the block contains at least two non-zero coefficients, so at most 14 of the 16 positions can be zeros.)
[Table: RunBefore codewords indexed by ZerosLeft.]

96 Example: Consider the following inter-frame residual 4×4 block:
[ 0   3  -1   0 ]
[ 0  -1   1   0 ]
[ 1   0   0   0 ]
[ 0   0   0   0 ]
The zig-zag re-ordering of the block is: 0, 3, 0, 1, −1, −1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0.
Therefore NumCoeff = 5, TotZeros = 3, T1s = 3. Assume N = 0.
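The analysis step that produces these parameters can be sketched directly from the definitions above (the VLC table lookups themselves are not reproduced here):

```python
def cavlc_parameters(zigzag):
    """Return (NumCoeff, T1s, TotZeros, run_before list) for a zig-zag ordered block."""
    nz = [i for i, c in enumerate(zigzag) if c != 0]   # positions of non-zero coefficients
    num_coeff = len(nz)
    t1s = 0
    for i in reversed(nz):                             # trailing +/-1s, at most three
        if abs(zigzag[i]) == 1 and t1s < 3:
            t1s += 1
        else:
            break
    tot_zeros = (nz[-1] + 1 - num_coeff) if nz else 0  # zeros before the last coefficient
    runs = [nz[k] - nz[k - 1] - 1                      # run_before, scanned backwards
            for k in range(num_coeff - 1, 0, -1)]
    return num_coeff, t1s, tot_zeros, runs

zz = [0, 3, 0, 1, -1, -1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print(cavlc_parameters(zz))   # -> (5, 3, 3, [1, 0, 0, 1]), as in the tables below
```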

97 Encoding:
Value                        | Code    | Comments
NumCoeff = 5, T1s = 3        | 0001011 | Use Num-VLC0
Sign of T1 (+1)              | 0       | Starting at the highest frequency
Sign of T1 (−1)              | 1       |
Sign of T1 (−1)              | 1       |
Level = +1                   | 1       | Use Lev-VLC0
Level = +3                   | 0010    | Use Lev-VLC1
TotZeros = 3                 | 1110    | Also depends on NumCoeff
ZerosLeft = 3; RunBefore = 1 | 00      | RunBefore of the 1st coeff
ZerosLeft = 2; RunBefore = 0 | 1       | RunBefore of the 2nd coeff
ZerosLeft = 2; RunBefore = 0 | 1       | RunBefore of the 3rd coeff
ZerosLeft = 2; RunBefore = 1 | 01      | RunBefore of the 4th coeff
ZerosLeft = 1; RunBefore = 1 |         | No code required; last coefficient
The transmitted bitstream for this block is 0001011011100101110001101.

98 Decoding:
Code    | Value                 | Output array               | Comments
0001011 | NumCoeff = 5, T1s = 3 | (empty)                    |
0       | +                     | 1                          | T1 sign
1       | −                     | −1, 1                      | T1 sign
1       | −                     | −1, −1, 1                  | T1 sign
1       | +1                    | 1, −1, −1, 1               | Level value
0010    | +3                    | +3, 1, −1, −1, 1           | Level value
1110    | TotZeros = 3          | +3, 1, −1, −1, 1           |
00      | RunBefore = 1         | +3, 1, −1, −1, 0, 1        | RunBefore of the 1st coeff
1       | RunBefore = 0         | +3, 1, −1, −1, 0, 1        | RunBefore of the 2nd coeff
1       | RunBefore = 0         | +3, 1, −1, −1, 0, 1        | RunBefore of the 3rd coeff
01      | RunBefore = 1         | +3, 0, 1, −1, −1, 0, 1     | RunBefore of the 4th coeff
        | ZerosLeft = 1         | 0, +3, 0, 1, −1, −1, 0, 1  |

99 De-blocking Filter
[VCL block diagram with the de-blocking filter stage highlighted.]

100 The deblocking filter improves subjective visual quality. The filter is highly context adaptive. It operates on the boundaries of 4×4 subblocks; the samples on either side of a vertical or horizontal boundary are labeled p3, p2, p1, p0 | q0, q1, q2, q3.

101 The choice of filtering outcome depends on the boundary strength and on the gradient of image samples across the boundary. The boundary strength parameter Bs is selected according to the following rules.

102 A group of samples from the set (p2, p1, p0, q0, q1, q2) is filtered only if:
(a) Bs > 0, and
(b) |p0 − q0| < α and |p1 − p0| < β and |q1 − q0| < β,
where α and β are thresholds defined in the standard. The threshold values increase with the average quantization parameter QP of the two blocks p and q. A sketch of this decision is shown below.
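A sketch of this sample-level decision (α and β are assumed to have been looked up from the standard's threshold tables using the average QP of the two blocks):

```python
def should_filter(bs, p, q, alpha, beta):
    """p = [p0, p1, p2, p3] and q = [q0, q1, q2, q3] on either side of the edge."""
    return (bs > 0
            and abs(p[0] - q[0]) < alpha
            and abs(p[1] - p[0]) < beta
            and abs(q[1] - q[0]) < beta)
```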

103 When QP is small, anything other than a very small gradient across the boundary is likely to be due to image features that should be preserved, so the thresholds α and β are low. When QP is larger, blocking distortion is likely to be more significant, and α and β are higher so that more boundary samples are filtered.

104 [Example frames: without deblocking filtering and with deblocking filtering.]

105 Data Partitioning and Network Abstraction Layer

106 A video picture is coded as one or more slices. Each slice contains an integer number of macroblocks, from 1 to the total number of macroblocks in the picture. The number of macroblocks per slice need not be constant within a picture.

107 There are five slice modes. The three commonly used modes are:
1. I-slice: a slice in which all macroblocks are coded using intra prediction.
2. P-slice: in addition to the coding types of the I-slice, some macroblocks of the P-slice can be coded using inter prediction (predicted from one reference picture buffer only).
3. B-slice: in addition to the coding types available in a P-slice, some macroblocks of the B-slice can be predicted from two reference picture buffers.

108 Note that the coded data in a slice can be placed in three separate data partitions (A, B and C) for robust transmission. Partition A contains the slice header and header data for each macroblock in the slice. Partition B contains coded residual data for intra-coded macroblocks. Partition C contains coded residual data for inter-coded macroblocks.

109 In H.264, the VCL data are mapped into NAL units prior to transmission or storage. Each NAL unit contains a Raw Byte Sequence Payload (RBSP), a set of data corresponding to coded video data or header information. The NAL units can be delivered over a packet-based network or a bitstream transmission link, or stored in a file.
[Figure: a sequence of NAL units, each consisting of a NAL header followed by an RBSP.]

110
RBSP type                            | Description
Parameter Set                        | Global parameters for a sequence, such as picture dimensions and video format.
Supplemental Enhancement Information | Side messages that are not essential for correct decoding of the video sequence.
Picture Delimiter                    | Boundary between pictures (optional). If not present, the decoder infers the boundary based on the frame number contained in each slice header.
Coded Slice                          | Header and data for a slice; this RBSP contains actual coded video data.
Data Partition A, B or C             | Three units containing data-partitioned slice-layer data (useful for error-resilient decoding).
End of Sequence                      |
End of Stream                        |
Filler Data                          | Contains 'dummy' data.

111 Example: The following shows an example sequence of RBSP elements: Sequence parameter set, SEI, Picture parameter set, I slice (coded slice), Picture delimiter, P slice (coded slice), P slice (coded slice), ...

112 Profiles
–Baseline
–Main
–Extended
–High

113

