
1 Audio Signal Processing -- Quantization
Shyh-Kang Jeng
Department of Electrical Engineering / Graduate Institute of Communication Engineering

2 Overview
–Audio signals are typically continuous-time and continuous-amplitude in nature
–Sampling gives a discrete-time representation of audio signals
–Amplitude quantization is also needed to complete the digitization process
–Quantization determines how much distortion is present in the digital signal

3 Binary Numbers
Decimal notation
–Symbols: 0, 1, 2, 3, 4, …, 9
–e.g., 13 = 1*10^1 + 3*10^0
Binary notation
–Symbols: 0, 1
–e.g., 13 = [1101] = 1*2^3 + 1*2^2 + 0*2^1 + 1*2^0

4 Negative Numbers
Folded binary
–Use the highest-order bit as an indicator of sign
Two's complement
–Follows the highest positive number with the lowest negative
–e.g., with 3 bits, [011] = +3 is followed by [100] = -4
We use folded binary notation when we need to represent negative numbers
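
A minimal sketch (not from the slides) contrasting the two encodings for 3-bit codes; the function names are illustrative only:

```python
# Illustrative sketch: 3-bit folded binary vs. two's complement.
# Folded binary covers -3..+3 (with a +0 and a -0); two's complement covers -4..+3.

def folded_binary(value, bits=3):
    """Highest-order bit is the sign bit; the remaining bits hold the magnitude."""
    sign = 1 if value < 0 else 0
    return (sign << (bits - 1)) | abs(value)

def twos_complement(value, bits=3):
    """The highest positive number (011 = +3) is followed by the lowest negative (100 = -4)."""
    return value & ((1 << bits) - 1)

for v in range(-3, 4):
    print(f"{v:+d}  folded={folded_binary(v):03b}  two's={twos_complement(v):03b}")
```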

5 Quantization Mapping
Quantization maps continuous values onto binary codes; dequantization maps binary codes back onto continuous values

6 Quantization Mapping (cont.)
Symmetric quantizers
–Equal number of levels (codes) for positive and negative numbers
Midrise and midtread quantizers

7 Uniform Quantization
–Equally sized ranges of input amplitudes are mapped onto each code
–Midrise or midtread
–Maximum non-overload input value: x_max
–Size of input range per R-bit code:
  Midrise: Δ = 2*x_max / 2^R
  Midtread: Δ = 2*x_max / (2^R - 1)
–Let x_max = 1 in what follows
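
For example, with x_max = 1 and R = 2, the midrise step is Δ = 2/2^2 = 1/2, giving the four output levels ±1/4 and ±3/4, while the midtread step is Δ = 2/(2^2 - 1) = 2/3, giving the three output levels 0 and ±2/3; these match the 2-bit quantizers on the next two slides.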

8 2-Bit Uniform Midrise Quantizer
[Figure: staircase mapping inputs in (-1.0, 1.0) to the four output levels -3/4, -1/4, 1/4, 3/4, labeled with the codes 11, 10, 00, 01]

9 Uniform Midrise Quantizer
Quantize: code(number) = [s][|code|], where s = 0 for number >= 0, s = 1 otherwise, and |code| = min( floor(2^(R-1) * |number|), 2^(R-1) - 1 )
Dequantize: number(code) = sign * |number|, where sign = +1 for s = 0, -1 for s = 1, and |number| = (|code| + 1/2) / 2^(R-1)

10 2-Bit Uniform Midtread Quantizer
[Figure: staircase mapping inputs in (-1.0, 1.0) to the three output levels -2/3, 0, 2/3, labeled with the codes 11, 00/10, 01]

11 Uniform Midtread Quantizer
Quantize: code(number) = [s][|code|], where s = 0 for number >= 0, s = 1 otherwise, and |code| = min( floor( ((2^R - 1) * |number| + 1) / 2 ), 2^(R-1) - 1 )
Dequantize: number(code) = sign * |number|, where sign = +1 for s = 0, -1 for s = 1, and |number| = 2 * |code| / (2^R - 1)
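
A short Python sketch of the two uniform quantizers following the rules as written above, assuming x_max = 1 and folded-binary codes (sign bit s plus magnitude |code|); the function names are illustrative:

```python
# Sketch of R-bit uniform midrise and midtread quantization (x_max = 1 assumed).

def midrise_quantize(x, R):
    s = 0 if x >= 0 else 1
    mag = min(int(2 ** (R - 1) * abs(x)), 2 ** (R - 1) - 1)
    return s, mag

def midrise_dequantize(s, mag, R):
    return (1 if s == 0 else -1) * (mag + 0.5) / 2 ** (R - 1)

def midtread_quantize(x, R):
    s = 0 if x >= 0 else 1
    mag = min(int(((2 ** R - 1) * abs(x) + 1) // 2), 2 ** (R - 1) - 1)
    return s, mag

def midtread_dequantize(s, mag, R):
    return (1 if s == 0 else -1) * 2 * mag / (2 ** R - 1)

# 2-bit examples matching the staircases on the previous slides:
print(midrise_dequantize(*midrise_quantize(0.6, 2), 2))    # 0.75
print(midtread_dequantize(*midtread_quantize(0.6, 2), 2))  # 0.666...
```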

12 Two Quantization Methods
Uniform quantization
–Constant limit on absolute round-off error
–Poor SNR at low input power
Floating point quantization
–Some bits for an exponent (scale factor), the rest for a mantissa
–SNR is determined by the number of mantissa bits and remains roughly constant
–Gives up accuracy for high-level signals but gains much greater accuracy for low-level signals

13 Floating Point Quantization
Number of scale factor (exponent) bits: Rs
Number of mantissa bits: Rm
Low inputs
–Roughly equivalent to uniform quantization with R = Rm + (2^Rs - 1)
High inputs
–Roughly equivalent to uniform quantization with R = Rm + 1 (the sign, the leading 1 implied by the scale, and the Rm - 1 mantissa data bits)

14 Floating Point Quantization Example
Rs = 3, Rm = 5
–Input [s0000000abcd]: scale=[000], mant=[sabcd], dequantized [s0000000abcd]
–Input [s0000001abcd]: scale=[001], mant=[sabcd], dequantized [s0000001abcd]
–Input [s000001abcde]: scale=[010], mant=[sabcd], dequantized [s000001abcd1]
–Input [s1abcdefghij]: scale=[111], mant=[sabcd], dequantized [s1abcd100000]
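
A sketch of the mapping in this example, operating on a sign bit plus an 11-bit integer magnitude; the integer representation and helper names are assumptions for illustration:

```python
# Rs = 3 scale bits, Rm = 5 mantissa bits (sign + 4 data bits), 11-bit magnitudes.

def fp_quantize(sign, mag):
    """(sign, 11-bit magnitude) -> (sign, scale, 4 mantissa data bits)."""
    scale = max(0, mag.bit_length() - 4)      # 0 when there are 7 or more leading zeros
    if scale == 0:
        data = mag & 0b1111                   # low level: the 4 LSBs, kept exactly
    else:
        data = (mag >> (scale - 1)) & 0b1111  # the 4 bits just below the leading 1
    return sign, scale, data

def fp_dequantize(sign, scale, data):
    if scale == 0:
        mag = data
    else:
        shift = scale - 1
        mag = (1 << (shift + 4)) | (data << shift)   # restore the leading 1
        if shift > 0:
            mag |= 1 << (shift - 1)                  # set the top lost bit (centers the error)
    return sign, mag

# Last row of the example: [s1abcdefghij] -> scale [111], mant [sabcd] -> [s1abcd100000]
s, scale, data = fp_quantize(0, 0b10110111010)
print(scale, format(fp_dequantize(s, scale, data)[1], "011b"))   # 7 10110100000
```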

15 Quantization Error
–Main source of coder error
–Characterized by the average error power <q^2>, where q = output - input
–A better measure is the signal-to-noise ratio, SNR = 10*log10( <x^2> / <q^2> ) dB
–Neither measure reflects auditory perception: they cannot describe how perceivable the errors are
–A satisfactory objective error measure that reflects auditory perception does not exist

16 Quantization Error (cont.)
Round-off error and overload error
[Figure: quantizer input/output characteristic, with the overload region beyond x_max marked]

17 Round-Off Error
–Comes from mapping a range of input amplitudes onto a single code
–Worse when the range of input amplitudes mapped onto a code is wider
–Assume that the error is uniformly distributed over a quantization step of width Δ
–Average error power: <q^2> = Δ^2 / 12
–For a uniform quantizer, Δ = 2*x_max / 2^R, so <q^2> = x_max^2 / (3 * 4^R)

18 Round-Off Error (cont.)
[Figure: SNR (dB) versus input power (dB) for uniform quantizers with 4, 8, and 16 bits]
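
A sketch that estimates these curves numerically for a sine input, assuming x_max = 1 and the midrise rule reconstructed on the earlier slide; names are illustrative:

```python
import math

def quantize_midrise(x, R):
    """Round trip through an R-bit uniform midrise quantizer (x_max = 1)."""
    sign = -1.0 if x < 0 else 1.0
    mag = min(int(2 ** (R - 1) * abs(x)), 2 ** (R - 1) - 1)
    return sign * (mag + 0.5) / 2 ** (R - 1)

def snr_db(amplitude, R, n=10000):
    signal = [amplitude * math.sin(2 * math.pi * i / n) for i in range(n)]
    error = [x - quantize_midrise(x, R) for x in signal]
    return 10 * math.log10(sum(x * x for x in signal) / sum(e * e for e in error))

# SNR grows by roughly 6 dB per bit near full scale and collapses at low input power.
for R in (4, 8, 16):
    print(R, [round(snr_db(10 ** (db / 20), R), 1) for db in (-60, -40, -20, 0)])
```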

19 Overload Error
–Comes from signals where |input| > x_max
–Depends on the probability distribution of signal values
–Reduced for high x_max
–High x_max implies wide quantization levels and therefore high round-off error
–Requires a balance between the need to reduce both errors

20 Entropy
–A measure of the uncertainty about the next code to come out of a coder
–Very low when we are pretty sure which code will come out
–High when we have little idea which symbol is coming
–Shannon: this entropy equals the lowest possible average bits per sample a coder could produce for this signal

21 Entropy with 2-Code Symbols
–With symbol probabilities p and 1 - p: H = -p*log2(p) - (1-p)*log2(1-p)
–When H < 1 there exist lower bit rate ways to encode the codes than just using one bit for each code symbol
[Figure: entropy versus p, equal to 0 at p = 0 and p = 1, peaking at 1 bit at p = 1/2]

22 Entropy with N-Code Symbols
–H = -sum_i p_i * log2(p_i)
–Equals zero when one probability equals 1
–Any symbol with probability zero does not contribute to entropy
–Maximum when all probabilities are equal
–For equal-probability code symbols, H = log2(N)
–Optimal coders only allocate bits to differentiate symbols with near-equal probabilities
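
A minimal sketch of the entropy formula above; the second distribution anticipates the Huffman example a few slides below:

```python
import math

def entropy(probs):
    """H = -sum(p * log2 p) in bits; zero-probability symbols contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.25, 0.25, 0.25, 0.25]))     # 2.0 = log2(4): equal probabilities
print(entropy([0.75, 0.1, 0.075, 0.075]))    # ~1.2 bits: the Huffman example distribution
```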

23 Huffman Coding
–Create code symbols based on the probability of each symbol's occurrence
–Code length is variable: shorter codes for common symbols, longer codes for rare symbols
–Shannon: reduces bits over fixed-bit coding if the symbols are not evenly distributed

24 Huffman Coding (cont.)
–Codes depend on the probabilities of each symbol
–Created by recursively allocating bits to distinguish between the lowest-probability symbols until all symbols are accounted for
–To decode, we need to know how the bits were allocated
  –Recreate the allocation given the probabilities, or
  –Pass the allocation with the data

25 Example of Huffman Coding
A 4-symbol case:
–Symbol:      00    01    10    11
–Probability: 0.75  0.1   0.075 0.075
Results:
–Symbol: 00   01   10    11
–Code:   0    10   110   111
[Figure: the binary tree of 0/1 branch allocations that produces these codes]

26 Example (cont.)
–Normally 2 bits/sample are needed for 4 symbols
–Huffman coding requires 1.4 bits/sample on average
–Close to the minimum possible (the entropy is about 1.2 bits/sample), and 0 is a "comma code" here, so the bit stream can be parsed unambiguously
–Example: [01101011011110]
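
A sketch of the standard Huffman construction (repeatedly merging the two lowest-probability groups) applied to this example. The 0/1 labels chosen at each merge are arbitrary, so the exact bit patterns may differ from the slide, but the code lengths and the 1.4 bits/sample average match:

```python
import heapq

def huffman_codes(probs):
    """probs: dict symbol -> probability. Returns dict symbol -> code string."""
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        p0, _, group0 = heapq.heappop(heap)           # lowest-probability group
        p1, _, group1 = heapq.heappop(heap)           # next lowest
        merged = {s: "0" + c for s, c in group0.items()}
        merged.update({s: "1" + c for s, c in group1.items()})
        heapq.heappush(heap, (p0 + p1, tie, merged))  # merged group re-enters the pool
        tie += 1
    return heap[0][2]

probs = {"00": 0.75, "01": 0.1, "10": 0.075, "11": 0.075}
codes = huffman_codes(probs)
average = sum(probs[s] * len(codes[s]) for s in probs)
print(codes, average)    # code lengths 1, 2, 3, 3 -> 1.4 bits/sample on average
```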

27 Another Example
A 4-symbol case:
–Symbol:      00    01    10    11
–Probability: 0.25  0.25  0.25  0.25
Results:
–Symbol: 00   01   10   11
–Code:   00   01   10   11
Huffman coding adds nothing when symbol probabilities are roughly equal

