
1
Sying-Jyan Wang, Dept. of Computer Science and Engineering, National Chung-Hsing University

2
Outline
Fundamentals of information theory
Data compression
Digital testing: an introduction
Code-based test data compression: a case study
Maximum compression
Improving the compression rate
2014/6/6


4
Compressibility of a Data Set
Data compression (source coding): the process of encoding information using fewer bits than an unencoded representation would use, through specific encoding schemes.
How much compression can you get? It depends on both the encoding scheme and the data.
How do we quantify the compressibility of a data set?

5
Shannon Entropy (1/4)
A measure of the uncertainty associated with a random variable.
It quantifies the information contained in a message as an expected value (usually in bits), or equivalently, the average information content one is missing when one does not know the value of the random variable.
Self-information: -log p(x)
C. E. Shannon, "A Mathematical Theory of Communication", Bell System Technical Journal, vol. 27, pp. 379-423, 623-656, July and October 1948.

6
Shannon Entropy (2/4)
Self-information: how much information is carried by a particular value of a random variable?
XXXXXXXXX XXXX XXXX XX XXXXXX XXXXXXXXX.
XXXXXXXXX XXXX XeXX XX XXXXXX XXXXXXXXX.
XXXXXXXXX XXXX wXnt XX XXXXXX XXXXXXXXX.
Professor Wang wXnt to Taipei yesterday.
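The self-information idea above can be sketched numerically (a minimal illustration; the probabilities are made up for the example):

```python
import math

def self_information(p: float) -> float:
    """Self-information of an outcome with probability p, in bits."""
    return -math.log2(p)

# A rare outcome (p = 1/1024) carries more information than a common one (p = 1/2).
print(self_information(1 / 1024))  # 10.0 bits
print(self_information(1 / 2))     # 1.0 bit
```

The rarer the outcome, the more bits of information its occurrence conveys.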

7
Shannon Entropy (3/4)
Tossing a coin with known probabilities of coming up heads or tails:
A fair coin maximizes the entropy of the next toss: 1 bit of information.
A double-headed coin yields 0 bits (no information).

8
Shannon Entropy (4/4)
Shannon's entropy is an absolute limit on the best possible lossless compression of any communication.
Example: a source produces a sequence of letters chosen from among A, B, C, D with probabilities 1/2, 1/4, 1/8, 1/8, successive symbols being chosen independently.
H = 7/4 bits per symbol
Encoding: A -> 0, B -> 10, C -> 110, D -> 111
Sample message: AAAABBDC
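The source and prefix code above can be checked numerically (a small sketch; AAAABBDC is the slide's own sample message):

```python
import math

probs = {'A': 1/2, 'B': 1/4, 'C': 1/8, 'D': 1/8}
code  = {'A': '0', 'B': '10', 'C': '110', 'D': '111'}

# Shannon entropy: H = -sum p * log2(p)
H = -sum(p * math.log2(p) for p in probs.values())
print(H)  # 1.75 bits per symbol

# The code's expected length equals the entropy, so it is optimal here.
avg_len = sum(probs[s] * len(code[s]) for s in probs)
print(avg_len)  # 1.75

# Encode the sample message: 14 bits vs. 16 with a plain 2-bit code.
encoded = ''.join(code[s] for s in "AAAABBDC")
print(encoded, len(encoded))  # 00001010111110 14
```

The expected codeword length matching H is why this code cannot be beaten for this source.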

9
Entropy in Thermodynamics
A measure of the unavailability of a system's energy to do work; also a measure of disorder: the higher the entropy, the greater the disorder.
In thermodynamics, a parameter representing the state of disorder of a system at the atomic, ionic, or molecular level; the greater the disorder, the higher the entropy.
A measure of disorder in the universe, or of the availability of the energy in a system to do work.
2014/6/6

10
Eight Allotropes of Carbon (from Wikipedia)
(a) Diamond (b) Graphite (c) Lonsdaleite (d) Fullerene (C60) (e) Fullerene (C540) (f) Fullerene (C70) (g) Amorphous carbon (h) Carbon nanotube (CNT)
Allotropy is a behavior exhibited by certain chemical elements: these elements can exist in two or more different forms, known as allotropes of that element. In each allotrope, the element's atoms are bonded together in a different manner. Allotropes are different structural modifications of an element.

11
Carbon Allotropes Span a Range of Extremes
Synthetic diamond nanorods are the hardest materials known; graphite is one of the softest.
Diamond is the ultimate abrasive; graphite is a very good lubricant.
Diamond is an excellent electrical insulator; graphite is a conductor of electricity.
Diamond is the best known thermal conductor; some forms of graphite are used for thermal insulation.
Diamond is highly transparent; graphite is opaque.
Diamond crystallizes in the cubic system; graphite crystallizes in the hexagonal system.
Amorphous carbon is completely isotropic; carbon nanotubes are among the most anisotropic materials ever produced.

12
Amorphous Carbon vs. Carbon Nanotube

13
Shannon's Entropy vs. Entropy: An Analogy
Various crystalline structures of the same element exhibit different properties.
How about different structures of the same set of data?
Can we put the data into a format that is easier to compress (via a transform f and its inverse f⁻¹)?


15
Digital Testing: A Quick Overview (1/3)
All manufactured components are subject to physical defects.
Digital testing is the process of verifying whether a chip (system) meets its specification.
Requirements:
Input vectors can only be applied at the chip's input pins.
Only circuit outputs are observable.
(Diagram: test vectors drive the manufactured circuit; a comparator checks the circuit response against the correct responses and reports pass/fail.)

16
Digital Testing: A Quick Overview (2/3)
Why is testing difficult? Test application time explodes for exhaustive testing of VLSI.
For a combinational circuit with 50 inputs, we need 2^50 ≈ 1.126x10^15 test patterns. At one test per 10^-7 s, it takes about 1.13x10^8 s ≈ 3.57 years to test such a circuit.
Test generation for sequential circuits is even more difficult: lack of controllability and observability of flip-flops (latches).
How to generate input test vectors (patterns):
Functional testing
Fault-oriented test generation
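The exhaustive-testing arithmetic above can be reproduced directly:

```python
# Exhaustive test of a 50-input combinational circuit.
patterns = 2 ** 50
print(patterns)            # 1125899906842624, about 1.13e15

# At one test every 10^-7 seconds:
seconds = patterns * 1e-7
years = seconds / (365 * 24 * 3600)
print(seconds, years)      # about 1.13e8 s, about 3.57 years
```

This is why exhaustive testing is abandoned in favor of fault-oriented test generation.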

17
Digital Testing: A Quick Overview (3/3)
Fault models: model the effects of physical defects on the logic function and timing.
Common fault models: stuck-at fault, bridging fault, memory fault, etc.
Single stuck-at fault model: the most widely used fault model.
Assumptions:
Only one line is faulty.
Gates are fault-free.
The faulty line is permanently set to 0 or 1.
The fault can be at an input or output of a gate.

18
Test Pattern Generation: An Example
(Circuit diagram with signal values X, X, 1, 0, X, 0, 1.)

19
Scan-Based Design-for-Testability (DFT)
Sequential circuits are hard to test.
Scan-based design modifies all flip-flops to provide:
Parallel load
Scan shift
This provides controllability and observability of the internal state variables for testing, turning the sequential test problem into a combinational one.
(Diagram: combinational logic with inputs X, outputs Z, and state y/Y; scan flip-flops with D/DI/SI inputs form a chain from Scan In to Scan Out.)

20
Test Compaction & Compression
Why? Modern chips are large and require a lot of test data (more than 1 Gb for a chip), Automatic Test Equipment (ATE) is expensive, and test time must be reduced.
Test data contain many X (don't-care) bits: up to 97%.
Compaction: reduce the number of test patterns by merging compatible patterns.
Compression: reduce data volume through coding and architectural approaches.
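Compaction by merging compatible patterns can be sketched as follows (an illustrative toy, not a production algorithm; two test cubes are compatible when no position has conflicting specified bits):

```python
def merge(a: str, b: str):
    """Merge two test cubes if compatible ('X' matches anything), else None."""
    out = []
    for x, y in zip(a, b):
        if x == 'X':
            out.append(y)
        elif y == 'X' or x == y:
            out.append(x)
        else:
            return None  # conflicting specified bits: not compatible
    return ''.join(out)

print(merge("1X0X", "1X01"))  # '1X01': one pattern now covers both
print(merge("1X0X", "0XXX"))  # None: bit 0 conflicts (1 vs 0)
```

Each successful merge removes one pattern from the test set, which is the essence of compaction.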

22
Code-Based Test Data Compression
Partition test data into symbols, and replace symbols by codewords.
Fixed-to-fixed: lengths of both symbols and codewords are fixed (linear decompressor, dictionary code).
Fixed-to-variable: Huffman code.
Variable-to-fixed: run-length code.
Variable-to-variable: Golomb, FDR, VIHC.
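As an illustration of the variable-to-fixed category, here is a minimal run-length coder for runs of 0s (a sketch only; real schemes such as Golomb or FDR use different codeword structures). It assumes the bit stream ends with a 1:

```python
def encode_runs(bits: str, width: int = 3) -> list:
    """Emit one fixed-width count per run of 0s terminated by a 1.
    A full count (2**width - 1) means 'maximum run, continue'."""
    max_run = 2 ** width - 1
    counts, run = [], 0
    for b in bits:
        if b == '0':
            run += 1
            if run == max_run:       # run too long for one codeword
                counts.append(max_run)
                run = 0
        else:                        # a 1 terminates the run
            counts.append(run)
            run = 0
    return counts

def decode_runs(counts, width: int = 3) -> str:
    max_run = 2 ** width - 1
    out = []
    for c in counts:
        out.append('0' * c)
        if c != max_run:             # non-maximal count implies a terminating 1
            out.append('1')
    return ''.join(out)

data = "000000010000000001"          # 18 bits
codes = encode_runs(data)
print(codes)                         # [7, 0, 7, 2]: four 3-bit codewords = 12 bits
print(decode_runs(codes) == data)    # True
```

Long runs of 0s (common in 0-filled test data) are exactly what this kind of code exploits.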

23
Fixed-Symbol-Length Codes (1/2)
Notation: p0, p1, pX are the probabilities of 0, 1, and X in a test set.
Entropy analysis: assume pX = 0 and symbol length n. For independent bits, the entropy per symbol is H = n(-p0 log2 p0 - p1 log2 p1).

24
Fixed-Symbol-Length Codes (2/2)
Maximum compression rate: 1 - H/n = 1 + p0 log2 p0 + p1 log2 p1.
No compression when p0 = p1 = 0.5.
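Assuming independent bits, the entropy bound behaves as described above (a sketch using the standard binary entropy function h; the specific p0 values are illustrative):

```python
import math

def h(p: float) -> float:
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# For an n-bit symbol with i.i.d. bits, H = n * h(p0), so the best-case
# compression rate is 1 - H/n = 1 - h(p0).
for p0 in (0.5, 0.8, 0.95):
    print(p0, 1 - h(p0))
# p0 = 0.5 gives rate 0: truly random data cannot be compressed.
```

The more skewed the bit probabilities, the larger the achievable compression rate.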

25
Variable-Symbol-Length Codes (1/3)
Most variable-symbol-length codes are based on run-length encoding.
If runs of both 0s and 1s are encoded:
Theorem 1: the expected run length of 0s is 1/p1, and the expected run length of 1s is 1/p0.
If only runs of 0s are encoded:
Corollary 2: the expected run length is 1/p1 when X bits are assigned to 0s.
Why? Since a run of 0s is terminated by a 1, the expected run length of 0s is determined by the occurrence frequency of 1s. Similarly, the expected run length L1 = 1/p0 when X bits are assigned to 1s.
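Theorem 1 can be checked by simulation (a quick Monte Carlo sketch; here a run counts the 0s plus the terminating 1):

```python
import random

random.seed(0)
p1 = 0.125  # probability of a 1; expected 0-run length should be 1/p1 = 8
bits = ''.join('1' if random.random() < p1 else '0' for _ in range(200000))

# Each segment before a '1' is a run of 0s; include the terminating 1 in its length.
runs = [len(seg) + 1 for seg in bits.split('1')[:-1]]
avg = sum(runs) / len(runs)
print(avg)  # close to 8
```

The run lengths follow a geometric distribution with parameter p1, whose mean is 1/p1.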

26
Variable-Symbol-Length Codes (2/3)
Entropy analysis: assume 0-filling, and let y = p0 + pX.

27
Variable-Symbol-Length Codes (3/3)
Maximum compression rate: no compression when p0 = p1 = 0.5.

28
Implications
No compression if the data are truly random.
Higher compression rate if the probabilities of 0s and 1s are skewed.
Extremely skewed data are unlikely: they imply a low information rate.
How to improve the maximum compression rate?
X-filling: Xs are redundant information that can be compressed.
ATPG: generate patterns with the desired distribution.
Partition scan cells: rearrange data to create the desired distribution.
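X-filling can be sketched in one line (0-filling shown; assigning every X to 0 skews the bit distribution toward 0, which favors run-length codes):

```python
def zero_fill(cube: str) -> str:
    """0-filling: assign every don't-care (X) bit to 0, lengthening 0-runs."""
    return cube.replace('X', '0')

print(zero_fill("1XX0XXX1X"))  # '100000010'
```

With up to 97% of the test data unspecified, the filling strategy largely determines the final bit distribution.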

29
Scan Cells Partition (1/4)
In a given test set, the numbers of specified 1s and 0s in each scan cell may not be balanced: a not-so-random distribution within the cell.
By putting similar cells together, we can create symbols with smaller entropy:
Higher maximum compression rate
Easier-to-compress data

30
Scan Cells Partition (2/4)

31
Scan Cells Partition (3/4)
A bi-partitioned scan chain is not good enough:
It is not natural, and the two parts have to be encoded separately.
To achieve even better encoding efficiency, introduce an inversion between the two parts. From the encoder's point of view, there is then only one partition.

32
Scan Cells Partition (4/4)
Multiple scan chains

33
Routing Consideration
How does the physical order of scan cells affect compressibility?
Experiment: 1000 random moves in the initial chain, on benchmarks S9234, S15850, and S38417, comparing partitioned vs. not-partitioned chains.

34
Results (1/5)
Mintest patterns; bit-raising is used to create Xs.
Partition scan cells:
0-dominant cells: #0 > #1
1-dominant cells: #1 > #0
Balanced cells: #0 ≈ #1
Balanced cells are assigned according to geometric adjacency.
Cells in a partition are connected for minimum routing length.
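The cell classification above can be sketched as follows (a toy illustration; the balanced case is simplified to an exact tie, and the data are made up):

```python
def classify(columns):
    """Classify each scan cell by its specified bits across the test set:
    '0' if 0s dominate, '1' if 1s dominate, 'B' if balanced (tie)."""
    groups = {'0': [], '1': [], 'B': []}
    for i, col in enumerate(columns):
        n0, n1 = col.count('0'), col.count('1')  # X bits are ignored
        if n0 > n1:
            groups['0'].append(i)
        elif n1 > n0:
            groups['1'].append(i)
        else:
            groups['B'].append(i)
    return groups

# Each string is one scan cell's value across four test patterns.
cells = ["00X0", "110X", "01XX", "000X", "X1X1"]
print(classify(cells))  # {'0': [0, 3], '1': [1, 4], 'B': [2]}
```

Chaining the 0-dominant cells together (and likewise the 1-dominant ones) yields long homogeneous runs after X-filling.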

35
Results (2/5)

36
Results (3/5)
Routing lengths, S38417 (chart).

37
Results (4/5)
Shift power:
Partitioned scan chains: lower entropy implies more homogeneous symbols, hence fewer signal transitions and lower shift power for input patterns.
0-filling: more 0s in the output response, hence lower overall shift power.
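The transition-count argument can be illustrated directly (transition count is a standard proxy for shift power; the two vectors are made up for contrast):

```python
def transitions(vector: str) -> int:
    """Number of bit transitions when `vector` is shifted through a scan
    chain -- a common proxy for shift power."""
    return sum(1 for a, b in zip(vector, vector[1:]) if a != b)

# Grouping like values (lower-entropy data) reduces transitions, hence power.
print(transitions("0101010101"))  # 9
print(transitions("0000011111"))  # 1
```

Both vectors have the same 0/1 counts; only their arrangement differs, which is exactly what partitioning changes.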

38
Results (5/5)
How about the compression rate of another test set?
Assume the same partitioned scan-chain order; generate a new test set with ATLANTA.
2/3 of the Xs in chain 0 are assigned 0; 2/3 of the Xs in chain 1 are assigned 1.

39
About C. E. Shannon (1916-2001)
Bachelor's, U. Michigan, 1936: EE & Mathematics.
Master's, MIT, 1937: "A Symbolic Analysis of Relay and Switching Circuits", on designing switching circuits using Boolean algebra. "…possibly the most important, and also the most famous, master's thesis of the century." -- Howard Gardner, Harvard University.
PhD, Mathematics, MIT, 1940: "An Algebra for Theoretical Genetics".
Institute for Advanced Study, 1940-1941.
AT&T Bell Labs, 1941-1956: information theory.
Visiting Professor, MIT, 1956; tenured Professor, MIT, 1958, teaching for only several semesters.
Gambling & investing using the Kelly criterion, with Ed Thorp.

40
~ THE END ~ Thank you
