
1 Testing

2 Problems of Ideal Tests
Ideal tests detect all defects produced in the manufacturing process. Ideal tests pass all functionally good devices. Very large numbers and varieties of possible defects need to be tested. Difficult to generate tests for some real defects. Defect-oriented testing is an open problem.

3 Real Tests Based on analyzable fault models, which may not map on real defects. Incomplete coverage of modeled faults due to high complexity. Some good chips are rejected. The fraction (or percentage) of such chips is called the yield loss. Some bad chips pass tests. The fraction (or percentage) of bad chips among all passing chips is called the defect level.
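As a rough numerical illustration of these two fractions, here is a small Python sketch with made-up chip counts (all numbers are hypothetical; yield loss is computed here as the fraction of good chips that fail the test, and defect level exactly as defined above):

# Hypothetical outcome of testing one lot of fabricated chips.
good_pass, good_fail = 9300, 200   # functionally good chips that pass / fail the test
bad_pass, bad_fail = 50, 450       # functionally bad chips that pass / fail the test

yield_loss = good_fail / (good_pass + good_fail)    # good chips wrongly rejected
defect_level = bad_pass / (good_pass + bad_pass)    # bad chips among all passing chips

print(f"yield loss   = {yield_loss:.2%}")    # about 2.1%
print(f"defect level = {defect_level:.2%}")  # about 0.5%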

4 Testing as Filter Process
[Figure: testing as a filter. Fabricated chips split into good chips, Prob(good) = y, and defective chips, Prob(bad) = 1 - y; good chips have a high probability of passing the test and a low probability of failing, defective chips the opposite, so the passing stream is mostly good chips and the failing stream mostly bad chips.]

5 Costs of Testing
Design for testability (DFT): chip area overhead and yield reduction; performance overhead. Software processes of test: test generation and fault simulation; test programming and debugging. Manufacturing test: automatic test equipment (ATE) capital cost; test center operational cost.

6 Design for Testability (DFT)
DFT refers to hardware design styles or added hardware that reduces test generation complexity. Motivation: Test generation complexity increases exponentially with the size of the circuit. Example: Test hardware applies tests to blocks A and B and to internal bus; avoids test generation for combined A and B blocks. [Figure: logic blocks A and B connected by an internal bus, with added test input and output access.]

7 Cost of Manufacturing Testing in 2000AD
GHz-class ATE with analog instruments and 1,024 digital pins: ATE purchase price = $1.2M + 1,024 × $3,000 = $4.272M. Running cost (five-year linear depreciation) = depreciation + maintenance + operation = $0.854M + $0.085M + $0.5M = $1.439M/year. Test cost (24-hour ATE operation) = $1.439M / (365 × 24 × 3,600 s) ≈ 4.5 cents/second.
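The slide's arithmetic can be reproduced with a short Python sketch (all dollar figures are the slide's year-2000 assumptions):

purchase = 1.2e6 + 1024 * 3000              # base price + 1,024 digital pin channels
depreciation = purchase / 5                 # five-year linear depreciation
running = depreciation + 0.085e6 + 0.5e6    # + maintenance + operation, per year
cost_per_second = running / (365 * 24 * 3600)   # ATE assumed to run 24 hours/day

print(f"purchase price = ${purchase/1e6:.3f}M")          # $4.272M
print(f"running cost   = ${running/1e6:.3f}M per year")  # about $1.439M/year
print(f"test cost      = {cost_per_second*100:.1f} cents/second")  # ~4.6, i.e. the slide's ~4.5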

8 Testing Principle

9 Automatic Test Equipment Components
Consists of: Powerful computer Powerful 32-bit Digital Signal Processor (DSP) for analog testing Test Program (written in high-level language) running on the computer Probe Head (actually touches the bare or packaged chip to perform fault detection experiments) Probe Card or Membrane Probe (contains electronics to measure signals on chip pin or pad)

10 ADVANTEST Model T6682 ATE

11 T6682 ATE Block Diagram

12 LTX FUSION HF ATE

13 Verification Testing Ferociously expensive. May comprise:
Scanning Electron Microscope tests Bright-Lite detection of defects Electron beam testing Artificial intelligence (expert system) methods Repeated functional tests

14 Characterization Test
Worst-case test: choose a test that passes/fails chips; select a statistically significant sample of chips; repeat the test for every combination of 2+ environmental variables; plot results in a Shmoo plot; diagnose and correct design errors. Continue throughout the production life of the chips to improve the design and process and increase yield.

15 Manufacturing Test Determines whether manufactured chip meets specs
Must cover high % of modeled faults Must minimize test time (to control cost) No fault diagnosis Tests every device on chip Test at speed of application or speed guaranteed by supplier

16 Burn-in or Stress Test Process:
Subject chips to high temperature & over-voltage supply, while running production tests Catches: Infant mortality cases – these are damaged chips that will fail in the first 2 days of operation – causes bad devices to actually fail before chips are shipped to customers Freak failures – devices having same failure mechanisms as reliable devices

17 Sub-types of Tests Parametric – measures electrical properties of pin electronics – delay, voltages, currents, etc. – fast and cheap Functional – used to cover very high % of modeled faults – test every transistor and wire in digital circuits – long and expensive – main topic of tutorial

18 Fault Modeling Why model faults? Some real defects in VLSI and PCB
Common fault models Stuck-at faults Single stuck-at faults Fault equivalence Fault dominance and checkpoint theorem Classes of stuck-at faults and multiple faults Transistor faults Summary

19 Some Real Defects in Chips
Processing defects Missing contact windows Parasitic transistors Oxide breakdown . . . Material defects Bulk defects (cracks, crystal imperfections) Surface impurities (ion migration) Time-dependent failures Dielectric breakdown Electromigration Packaging failures Contact degradation Seal leaks Ref.: M. J. Howes and D. V. Morgan, Reliability and Degradation - Semiconductor Devices and Circuits, Wiley, 1981.

20 Observed PCB Defects
Defect classes and occurrence frequency (%): shorts 51, opens 1, missing components 6, wrong components 13, reversed components 8, bent leads 5; also analog specifications, digital logic, and performance (timing) defects. Ref.: J. Bateson, In-Circuit Testing, Van Nostrand Reinhold, 1985.

21 Common Fault Models Single stuck-at faults
Transistor open and short faults Memory faults PLA faults (stuck-at, cross-point, bridging) Functional faults (processors) Delay faults (transition, path) Analog faults etc.

22 Single Stuck-at Fault Three properties define a single stuck-at fault
Only one line is faulty. The faulty line is permanently set to 0 or 1. The fault can be at an input or output of a gate. Example: the XOR circuit has 12 fault sites and 24 single stuck-at faults. [Figure: XOR circuit with a test vector for the h s-a-0 fault; good and faulty circuit values are shown on the affected lines, e.g., 0(1).]

23 Fault Equivalence Number of fault sites in a Boolean gate circuit = #PI + #gates + # (fanout branches). Fault equivalence: Fault sets f1 and f2 are equivalent if all tests that detect f1 also detect f2 and vice versa. If faults f1 and f2 are equivalent then the corresponding faulty functions are identical. Fault collapsing: All single faults of a logic circuit can be divided into disjoint equivalence subsets, where all faults in a subset are mutually equivalent. A collapsed fault set contains one fault from each equivalence subset.

24 Equivalence Rules
[Figure: stuck-at fault equivalence-collapsing rules for a wire, AND, OR, NOT, NAND, NOR, and a fanout point, showing which sa0/sa1 faults on the inputs and output are equivalent.]

25 Dominance Example
Faults in red are removed by equivalence collapsing; faults in yellow are removed by dominance collapsing. Collapse ratio = 15/32 ≈ 0.47. [Figure: example circuit with its 32 single stuck-at faults marked and the 15 faults that remain after collapsing.]

26 Fault Dominance If all tests of some fault F1 detect another fault F2, then F2 is said to dominate F1. Dominance fault collapsing: If fault F2 dominates F1, then F2 is removed from the fault list. When dominance fault collapsing is used, it is sufficient to consider only the input faults of Boolean gates. See the next example. In a tree circuit (without fanouts) PI faults form a dominance collapsed fault set. If two faults dominate each other then they are equivalent.

27 Dominance Example
[Figure: AND-gate example in which fault F2 has many tests while fault F1 (an input s-a-1) has only one test; the single test of F1 also detects F2, so F2 dominates F1 and can be removed, leaving a dominance-collapsed fault set.]

28 Checkpoints Primary inputs and fanout branches of a combinational circuit are called checkpoints. Checkpoint theorem: A test set that detects all single (multiple) stuck-at faults on all checkpoints of a combinational circuit, also detects all single (multiple) stuck-at faults in that circuit. Total fault sites = 16 Checkpoints ( ) = 10

29 Transistor (Switch) Faults
MOS transistor is considered an ideal switch and two types of faults are modeled: Stuck-open -- a single transistor is permanently stuck in the open state. Stuck-short -- a single transistor is permanently shorted irrespective of its gate voltage. Detection of a stuck-open fault requires two vectors. Detection of a stuck-short fault requires the measurement of quiescent current (IDDQ).

30 Stuck-Open Example
A two-vector stuck-open test can be constructed by ordering two stuck-at tests. Vector 1, the test for A s-a-0, serves as the initialization vector; the second vector tries to switch the output through the stuck-open pMOS transistor, so the good circuit produces 1 while the faulty output floats and retains its previous value, shown as 1(Z). [Figure: CMOS gate with pMOS and nMOS FETs, one pMOS stuck open, with good and faulty circuit states for both vectors.]

31 Stuck-Short Example
The test vector for A s-a-0 activates the fault. With the pMOS transistor stuck short, both the pull-up and pull-down networks conduct, so the output is indeterminate, shown as 0 (X), and an elevated quiescent current (IDDQ) flows through the faulty path. [Figure: CMOS gate with one pMOS FET stuck short, showing good and faulty circuit states and the IDDQ path.]

32 Functional vs. Structural ATPG

33 Carry Circuit

34 Functional vs. Structural (Continued)
Functional ATPG – generate a complete set of tests for circuit input-output combinations: 129 inputs, 65 outputs, so 2^129 = 680,564,733,841,876,926,926,749,214,863,536,422,912 patterns; using a 1 GHz ATE, this would take 2.15 × 10^22 years. Structural test: no redundant adder hardware, 64 bit slices, each with 27 faults (using fault equivalence), so at most 64 × 27 = 1,728 faults (tests); takes a tiny fraction of a second on a 1 GHz ATE. The designer gives a small set of functional tests – augment with structural tests to boost coverage to 98+ %.
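For illustration, the counts above can be recomputed with a short Python sketch, assuming one pattern per cycle of a 1 GHz ATE (an idealizing assumption):

SECONDS_PER_YEAR = 365 * 24 * 3600
functional_patterns = 2 ** 129                       # 129-input exhaustive functional test
years = functional_patterns / 1e9 / SECONDS_PER_YEAR
print(f"exhaustive: {functional_patterns:.3e} patterns, about {years:.2e} years")

structural_tests = 64 * 27                           # 64 bit slices x 27 collapsed faults
print(f"structural: at most {structural_tests} tests, "
      f"about {structural_tests / 1e9 * 1e6:.2f} microseconds")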

35 Exhaustive Algorithm For an n-input circuit, generate all 2^n input patterns. Infeasible unless the circuit is partitioned into cones of logic, each with a small number of inputs; perform exhaustive ATPG for each cone. Misses faults that require specific activation patterns for multiple cones to be tested.

36 Random-Pattern Generation
[Flow chart for the method.] Use it to get tests for 60–80% of faults, then switch to the D-algorithm or other ATPG for the rest.

37 History of Algorithm Speedups
Algorithm, year, and estimated speedup over D-ALG (normalized to D-ALG time): D-ALG, 1966, 1; PODEM, 1981, 7; FAN, 1983, 23; TOPS, 1987, 292; SOCRATES, 1988 (ATPG system); Waicukauski et al., 1990 (ATPG system); EST, 1991 (ATPG system); TRAN, 1993 (ATPG system); recursive learning, 1995, 485; Tafertshofer et al., 1997, 25057.

38 Testability Measures Definition Controllability and observability
SCOAP measures Combinational circuits Sequential circuits Summary

39 What are Testability Measures?
Approximate measures of: Difficulty of setting internal circuit lines to 0 or 1 from primary inputs. Difficulty of observing internal circuit lines at primary outputs. Applications: Analysis of difficulty of testing internal circuit parts – redesign or add special test hardware. Guidance for algorithms computing test patterns – avoid using hard-to-control lines.

40 Testability Analysis Determines testability measures
Involves circuit topological analysis, but no test vectors (static analysis) and no search algorithm. Linear computational complexity; otherwise it is pointless – one might as well use automatic test-pattern generation and calculate exact fault coverage and exact test vectors.

41 SCOAP Measures
SCOAP – Sandia Controllability and Observability Analysis Program. Combinational measures: CC0 – difficulty of setting a circuit line to logic 0; CC1 – difficulty of setting a circuit line to logic 1; CO – difficulty of observing a circuit line. Sequential measures (analogous): SC0, SC1, SO. Ref.: L. H. Goldstein, "Controllability/Observability Analysis of Digital Circuits," IEEE Trans. CAS, vol. CAS-26, no. 9, pp. 685–693, Sep. 1979.

42 Range of SCOAP Measures
Controllabilities – 1 (easiest) to infinity (hardest) Observabilities – 0 (easiest) to infinity (hardest) Combinational measures: Roughly proportional to number of circuit lines that must be set to control or observe given line. Sequential measures: Roughly proportional to number of times flip-flops must be clocked to control or observe given line.

43 Combinational Controllability

44 Controllability Formulas (Continued)

45 Combinational Observability
To observe a gate input: Observe output and make other input values non-controlling.

46 Observability Formulas (Continued)
Fanout stem: Observe through branch with best observability.

47 Comb. Controllability Circled numbers give level number. (CC0, CC1)

48 Controllability Through Level 2

49 Final Combinational Controllability

50 Combinational Observability for Level 1
Number in square box is level from primary outputs (POs). (CC0, CC1) CO

51 Combinational Observabilities for Level 2

52 Final Combinational Observabilities

53 Sequential Measures (Comparison)
Combinational Increment CC0, CC1, CO whenever you pass through a gate, either forward or backward. Sequential Increment SC0, SC1, SO only when you pass through a flip-flop, either forward or backward. Both Must iterate on feedback loops until controllabilities stabilize.

54 D Flip-Flop Equations Assume a synchronous RESET line.
SC1 (Q) = SC1 (D) + SC1 (C) + SC0 (C) + SC0 (RESET) + 1
SC0 (Q) = min [SC1 (RESET) + SC1 (C) + SC0 (C), SC0 (D) + SC1 (C) + SC0 (C)] + 1
SO (D) = SO (Q) + SC1 (C) + SC0 (C) + SC0 (RESET)

55 D Flip-Flop Clock and Reset
CO (RESET) = CO (Q) + CC1 (Q) + CC1 (RESET) + CC1 (C) + CC0 (C); SO (RESET) is analogous.
Three ways to observe the clock line: set Q to 1 and clock in a 0 from D; set the flip-flop and then reset it; reset the flip-flop and clock in a 1 from D.
CO (C) = min [ CO (Q) + CC1 (Q) + CC0 (D) + CC1 (C) + CC0 (C), CO (Q) + CC1 (Q) + CC1 (RESET), CO (Q) + CC0 (Q) + CC0 (RESET) + CC1 (D) + CC1 (C) + CC0 (C) ]; SO (C) is analogous.

56 Testability Computation
For all PIs, CC0 = CC1 = 1 and SC0 = SC1 = 0. For all other nodes, CC0 = CC1 = SC0 = SC1 = ∞. Go from PIs to POs, using the CC and SC equations to get controllabilities; iterate on loops until SC stabilizes – convergence is guaranteed. Set CO = SO = 0 for POs and ∞ for all other lines. Work from POs to PIs, using CO, SO, and the controllabilities to get observabilities. For a fanout stem, (CO, SO) = min over branches of (CO, SO). If a CC or SC (CO or SO) is ∞, that node is uncontrollable (unobservable).
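A rough, runnable sketch of this procedure for a purely combinational circuit is shown below; it applies the standard SCOAP formulas for AND, OR, and NOT gates to a tiny made-up netlist (the gate names, topology, and restriction to these three gate types are assumptions for illustration):

from math import inf

GATES = {                      # net: (gate type, input nets), in topological order
    "d": ("AND", ["a", "b"]),
    "e": ("NOT", ["c"]),
    "f": ("OR",  ["d", "e"]),  # primary output
}
PIS, POS = ["a", "b", "c"], ["f"]

CC0 = {pi: 1 for pi in PIS}    # difficulty of setting a line to 0
CC1 = {pi: 1 for pi in PIS}    # difficulty of setting a line to 1
for net, (kind, ins) in GATES.items():
    if kind == "AND":
        CC1[net] = sum(CC1[i] for i in ins) + 1
        CC0[net] = min(CC0[i] for i in ins) + 1
    elif kind == "OR":
        CC0[net] = sum(CC0[i] for i in ins) + 1
        CC1[net] = min(CC1[i] for i in ins) + 1
    else:                                       # NOT
        CC0[net], CC1[net] = CC1[ins[0]] + 1, CC0[ins[0]] + 1

CO = {n: (0 if n in POS else inf) for n in PIS + list(GATES)}
for net, (kind, ins) in reversed(list(GATES.items())):   # walk back from POs to PIs
    for i in ins:
        others = [x for x in ins if x != i]
        if kind == "AND":      # other inputs must be held at non-controlling 1
            co = CO[net] + sum(CC1[x] for x in others) + 1
        elif kind == "OR":     # other inputs must be held at non-controlling 0
            co = CO[net] + sum(CC0[x] for x in others) + 1
        else:
            co = CO[net] + 1
        CO[i] = min(CO[i], co)                  # fanout stem: keep the best branch

for n in PIS + list(GATES):
    print(n, (CC0[n], CC1[n]), CO[n])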

57 Sequential Example Initialization

58 After 1 Iteration

59 After 2 Iterations

60 After 3 Iterations

61 Stable Sequential Measures

62 Final Sequential Observabilities

63 Testability Measures are Not Exact
Exact computation of measures is NP-complete and impractical. Green (italicized) measures show the correct (exact) values; SCOAP measures are in orange. [Figure: example circuit annotated with (CC0, CC1) and CO values from both methods, showing where SCOAP deviates from the exact values.]

64 Summary Testability measures are approximate measures of:
Difficulty of setting circuit lines to 0 or 1 Difficulty of observing internal circuit lines Applications: Analysis of difficulty of testing internal circuit parts Redesign circuit hardware or add special test hardware where measures show poor controllability or observability. Guidance for algorithms computing test patterns – avoid using hard-to-control lines

65 Exercise Compute (CC0, CC1) CO for all lines in the following circuit.
Questions: 1. Is observability of primary input correct? 2. Are controllabilities of primary outputs correct? 3. What do the observabilities of the input lines of the AND gate indicate?

66 Major Combinational Automatic Test-Pattern Generation Algorithms
Definitions; D-Algorithm (Roth) – 1966; PODEM (Goel) – 1981

67 Forward Implication Occurs when enough logic gate inputs are labeled that the output is uniquely determined. AND gate forward implication table: [shown in the figure].

68 Backward Implication Unique determination of all gate inputs when the gate output and some of the inputs are given

69 Implication Stack Push-down stack. Records:
Each signal set in circuit by ATPG Whether alternate signal value already tried Portion of binary search tree already searched

70 Implication Stack after Backtrack
[Figure: implication stack showing, for signals E, B, and F, the present assignment, the alternatives already searched and found infeasible, and the portion of the binary search tree still unexplored.]

71 Objectives and Backtracing of ATPG Algorithm
Objective – desired signal value goal for ATPG; guides it away from infeasible/hard solutions. Backtrace – determines which primary input and value to set to achieve the objective; uses testability measures.

72 D-Algorithm -- Roth IBM (1966)
Fundamental concepts invented: first complete ATPG algorithm; D-cube; D-calculus; implications – forward and backward; implication stack; backtrack; test search space.

73 Primitive D-Cube of Failure
Models circuit faults: stuck-at-0, stuck-at-1, bridging fault (short circuit), arbitrary change in logic function. AND output sa0: "1 1 D"; AND output sa1: "0 X D̄", "X 0 D̄"; wire sa0: "D". Propagation D-cube – models conditions under which a fault effect propagates through a gate.

74 Implication Procedure
Model fault with appropriate primitive D-cube of failure (PDF) Select propagation D-cubes to propagate fault effect to a circuit output (D-drive procedure) Select singular cover cubes to justify internal circuit signals (Consistency procedure) Put signal assignments in test cube Regrettably, cubes are selected very arbitrarily by D-ALG

75 D-Algorithm – Top Level
Number all circuit lines in increasing level order from PIs to POs;
Select a primitive D-cube of the fault to be the test cube;
Put logic gates with inputs labeled D (or D̄) onto the D-frontier;
D-drive ();
Consistency ();
return ();

76 D-Algorithm – D-drive
while (untried fault effects on D-frontier)
  select next untried D-frontier gate for propagation;
  while (untried fault effect fanouts exist)
    select next untried fault effect fanout;
    generate next untried propagation D-cube;
    D-intersect selected cube with test cube;
    if (intersection fails or is undefined) continue;
    if (all propagation D-cubes tried & failed) break;
    if (intersection succeeded)
      add propagation D-cube to test cube -- recreate D-frontier;
      find all forward & backward implications of assignment;
      save D-frontier, algorithm state, test cube, fanouts, fault;
      break;
    else if (intersection fails & D and D̄ in test cube) Backtrack ();
    else if (intersection fails) break;
if (all fault effects unpropagatable) Backtrack ();

77 D-Algorithm -- Consistency
g = coordinates of test cube with 1's & 0's;
if (g is only PIs) fault testable & stop;
for (each unjustified signal in g)
  select highest # unjustified signal z in g, not a PI;
  if (inputs to gate z are both D and D̄) break;
  while (untried singular covers of gate z)
    select next untried singular cover;
    if (no more singular covers)
      if (no more stack choices) fault untestable & stop;
      else if (untried alternatives in Consistency)
        pop implication stack -- try alternate assignment;
      else Backtrack (); D-drive ();
    if (singular cover D-intersects with z)
      delete z from g, add inputs to singular cover to g,
      find all forward and backward implications of new assignment, and break;
    if (intersection fails) mark singular cover as failed;

78 Backtrack
if (PO exists with fault effect) Consistency ();
else pop prior implication stack setting to try alternate assignment;
if (no untried choices in implication stack) fault untestable & stop;
else return;

79 Example 7.2, Fault A sa0: Step 1 – D-Drive – Set A = 1.

80 Example 7.2: Step 2 – D-Drive – Set f = 0.

81 Example 7.2: Step 3 – D-Drive – Set k = 1.

82 Example 7.2: Step 4 – Consistency – Set g = 1.

83 Example 7.2: Step 5 – Consistency – f = 0, already set.

84 Example 7.2: Step 6 – Consistency – Set c = 0, set e = 0.

85 D-Chain Dies – Example 7.2: Step 7 – Consistency – Set B = 0; the D-chain dies. Test cube over A, B, C, D, e, f, g, h, k, L.

86 Example 7.3 – Fault s sa1: primitive D-cube of failure.

87 Example 7.3 – Step 2, s sa1: propagation D-cube for v.

88 Example 7.3 – Step 2, s sa1: forward and backward implications.

89 Example 7.3 – Step 3, s sa1: propagation D-cube for Z – test found!

90 Example 7.3 – Fault u sa1: primitive D-cube of failure.

91 Example 7.3 – Step 2, u sa1: propagation D-cube for v.

92 Example 7.3 – Step 2, u sa1: forward and backward implications.

93 Inconsistent: d = 0 and m = 1 cannot justify r = 1 (equivalence). Backtrack – remove the B = 0 assignment.

94 Example 7.3 – Backtrack: need an alternate propagation D-cube for v.

95 Example 7.3 – Step 3, u sa1: propagation D-cube for v.

96 Example 7.3 – Step 4, u sa1: propagation D-cube for Z.

97 Example 7.3 – Step 4, u sa1: propagation D-cube for Z and implications.

98 PODEM -- Goel IBM (1981) New concepts introduced:
Expand the binary decision tree only around primary inputs; use X-PATH-CHECK to test whether the D-frontier is still there; objectives – bring ATPG closer to propagating a D (or D̄) to a PO; backtracing.

99 Motivation IBM introduced semiconductor DRAM memory into its mainframes – late 1970’s Memory had error correction and translation circuits – improved reliability D-ALG unable to test these circuits Search too undirected Large XOR-gate trees Must set all external inputs to define output Needed a better ATPG tool

100 PODEM High-Level Flow
1. Assign a binary value to an unassigned PI.
2. Determine implications of all PIs.
3. Test generated? If so, done.
4. Test possible with more assigned PIs? If maybe, go to Step 1.
5. Is there an untried combination of values on assigned PIs? If not, exit: untestable fault.
6. Set an untried combination of values on the assigned PIs using objectives and backtrace; then go to Step 2.

101 Example 7.3 Again: select path s – Y for fault propagation (s sa1).

102 Example Step 2, s sa1: initial objective – set r to 1 to sensitize the fault.

103 Example Step 3, s sa1: backtrace from r.

104 Example Step 4, s sa1: set A = 0 in the implication stack.

105 Example Step 5, s sa1: forward implications – d = 0, X = 1.

106 Example Step 6, s sa1: initial objective – set r to 1.

107 Example Step 7, s sa1: backtrace from r again.

108 Example Step 8, s sa1: set B to 1; implications in stack: A = 0, B = 1.

109 Example Step 9, s sa1: forward implications – k = 1, m = 0, r = 1, q = 1, Y = 1, s = D, u = D, v = D, Z = 1.

110 Backtrack – Step 10, s sa1: X-PATH-CHECK shows paths s – Y and s – u – v – Z blocked (the D-frontier has disappeared).

111 Step 11, s sa1: set B = 0 (alternate assignment).

112 Backtrack, s sa1: forward implications – d = 0, X = 1, m = 1, r = 0, s = 1, q = 0, Y = 1, v = 0, Z = 1; the fault is not sensitized.

113 Step 13, s sa1: set A = 1 (alternate assignment).

114 Step 14, s sa1: backtrace from r again.

115 Step 15, s sa1: set B = 0; implications in stack: A = 1, B = 0.

116 Backtrack, s sa1: forward implications – d = 0, X = 1, m = 1, r = 0; conflict – the fault is not sensitized, so backtrack.

117 Step 17, s sa1: set B = 1 (alternate assignment).

118 Fault Tested – Step 18, s sa1: forward implications – d = 1, m = 1, r = 1, q = 0, s = D, v = D, X = 0, Y = D.

119 Backtrace (s, vs) Pseudo-Code
v = vs;
while (s is a gate output)
  if (s is NAND or INVERTER or NOR) v = v̄ (complement v);
  if (objective requires setting all inputs)
    select unassigned input a of s with hardest controllability to value v;
  else
    select unassigned input a of s with easiest controllability to value v;
  s = a;
return (s, v); /* Gate and value to be assigned */
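A simplified, runnable Python version of the same idea is sketched below. It always follows the easiest-to-control input (dropping the hardest-controllability case and the assigned-input bookkeeping of the full procedure), and the circuit and controllability numbers are made up:

GATES = {                       # net: (gate type, input nets)
    "z": ("NAND", ["x", "y"]),
    "x": ("AND",  ["a", "b"]),
}
INVERTING = {"NAND", "NOR", "NOT"}
CC = {"a": (1, 1), "b": (1, 3), "y": (2, 1)}   # (CC0, CC1) of the primary inputs

def backtrace(net, value):
    while net in GATES:                        # stop when a primary input is reached
        kind, ins = GATES[net]
        if kind in INVERTING:
            value = 1 - value                  # complement through inverting gates
        net = min(ins, key=lambda i: CC.get(i, (1, 1))[value])   # easiest input
    return net, value                          # PI and value to assign

print(backtrace("z", 1))                       # -> ('a', 0)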

120 Objective Selection Code
if (gate g is unassigned) return (g, v);
select a gate P from the D-frontier;
select an unassigned input l of P;
if (gate g has controlling value) c = controlling input value of g;
else if (0 value easier to get at input of XOR/EQUIV gate) c = 1;
else c = 0;
return (l, c);

121 PODEM Algorithm
while (no fault effect at POs)
  if (xpathcheck (D-frontier))
    (l, vl) = Objective (fault, vfault);
    (pi, vpi) = Backtrace (l, vl);
    Imply (pi, vpi);
    if (PODEM (fault, vfault) == SUCCESS) return (SUCCESS);
    (pi, vpi) = Backtrack ();
    if (PODEM (fault, vfault) == SUCCESS) return (SUCCESS);
    Imply (pi, "X");
    return (FAILURE);
  else if (implication stack exhausted)
    return (FAILURE);
  else Backtrack ();
return (SUCCESS);

122 FAN -- Fujiwara and Shimono (1983)
New concepts: Immediate assignment of uniquely-determined signals Unique sensitization Stop Backtrace at head lines Multiple Backtrace

123 PODEM Fails to Determine Unique Signals
Backtracing operation fails to set all 3 inputs of gate L to 1 Causes unnecessary search

124 FAN -- Early Determination of Unique Signals
Determine all unique signals implied by current decisions immediately Avoids unnecessary search

125 PODEM Makes Unwise Signal Assignments
Blocks fault propagation due to assignment J = 0

126 Unique Sensitization of FAN with No Search
Path over which fault is uniquely sensitized FAN immediately sets necessary signals to propagate fault

127 Headlines Headlines H and J separate circuit into 3 parts, for which test generation can be done independently

128 Contrasting Decision Trees
FAN decision tree PODEM decision tree

129 Multiple Backtrace
FAN – breadth-first passes – 1 time. PODEM – depth-first passes – 6 times.

130 AND Gate Vote Propagation
AND gate rule: the easiest-to-control input gets # 0's = output # 0's; every input gets # 1's = output # 1's; all other inputs get # 0's = 0. In the figure, the output vote [5, 3] propagates as [5, 3] to the easiest-to-control input and [0, 3] to the remaining inputs.

131 Multiple Backtrace Fanout Stem Voting
Fanout stem rule: stem # 0's = Σ branch # 0's, stem # 1's = Σ branch # 1's. In the figure, branch votes [5, 1], [1, 1], [3, 2], [4, 1], [5, 1] give a stem vote of [18, 6].
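Both vote-propagation rules can be written as a few lines of Python; the function names and the example votes below simply mirror the two figures (a sketch, not FAN's actual code):

def and_gate_votes(output_vote, num_inputs, easiest=0):
    n0, n1 = output_vote
    # every input inherits the 1-votes; only the easiest-to-control input inherits the 0-votes
    return [[n0 if i == easiest else 0, n1] for i in range(num_inputs)]

def fanout_stem_vote(branch_votes):
    # stem votes are the component-wise sum of the branch votes
    return [sum(v[0] for v in branch_votes), sum(v[1] for v in branch_votes)]

print(and_gate_votes([5, 3], num_inputs=4))                          # [[5, 3], [0, 3], [0, 3], [0, 3]]
print(fanout_stem_vote([[5, 1], [1, 1], [3, 2], [4, 1], [5, 1]]))    # [18, 6]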

132 Multiple Backtrace Algorithm
repeat
  remove entry (s, vs) from current_objectives;
  if (s is head_objective) add (s, vs) to head_objectives;
  else if (s not fanout stem and not PI)
    vote on gate s inputs;
    if (gate s input I is fanout branch)
      vote on stem driving I;
      add stem driving I to stem_objectives;
    else add I to current_objectives;

133 Rest of Multiple Backtrace
if (stem_objectives not empty)
  (k, n0 (k), n1 (k)) = highest level stem from stem_objectives;
  if (n0 (k) > n1 (k)) vk = 0; else vk = 1;
  if ((n0 (k) != 0) && (n1 (k) != 0) && (k not in fault cone)) return (k, vk);
  add (k, vk) to current_objectives;
  return (multiple_backtrace (current_objectives));
remove one objective (k, vk) from head_objectives;

134 Logic simulation and fault simulation

135 True-Value Simulation Algorithms
Compiled-code simulation Applicable to zero-delay combinational logic Also used for cycle-accurate synchronous sequential circuits for logic verification Efficient for highly active circuits, but inefficient for low-activity circuits High-level (e.g., C language) models can be used Event-driven simulation Only gates or modules with input events are evaluated (event means a signal change) Delays can be accurately simulated for timing verification Efficient for low-activity circuits Can be extended for fault simulation

136 Compiled-Code Algorithm
Step 1: Levelize combinational logic and encode in a compilable programming language Step 2: Initialize internal state variables (flip-flops) Step 3: For each input vector Set primary input variables Repeat (until steady-state or max. iterations) Execute compiled code Report or save computed variables
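A minimal sketch of the compiled-code idea for a zero-delay combinational example follows; the circuit is made up, and a real simulator would generate this straight-line code automatically from the levelized netlist:

def evaluate(a, b, c):
    # hand-"compiled" levelized logic: one statement per gate, in level order
    d = a & b            # level 1
    e = (~c) & 1         # level 1
    f = d | e            # level 2, primary output
    return {"d": d, "e": e, "f": f}

for vector in [(0, 0, 0), (1, 1, 0), (1, 0, 1)]:
    print(vector, "->", evaluate(*vector))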

137 Event-Driven Algorithm (Example)
[Figure: example circuit simulated event by event; input changes a = 1 and b = 1 schedule events on signals c, d, e, f, and g in the time stack, and the activity list at each time step contains only the gates whose inputs changed.]

138 Time Wheel (Circular Stack)
[Figure: circular time wheel with slots for t = 0 up to max, a current-time pointer, and an event link-list attached to each slot.]

139 Efficiency of Event-driven Simulator
Simulates events (value changes) only. Speed-up over compiled-code can be ten times or more; in large logic circuits, about 0.1 to 10% of gates become active for an input change. [Figure: a 0-to-1 event at the input of a large logic block with no other activity; the rest of the block stays at steady 0 and generates no events.]

140 Fault Simulation Algorithms
Serial Parallel Deductive Concurrent Differential

141 Serial Algorithm Algorithm: Simulate fault-free circuit and save responses. Repeat following steps for each fault in the fault list: Modify netlist by injecting one fault Simulate modified netlist, vector by vector, comparing responses with saved responses If response differs, report fault detection and suspend simulation of remaining vectors Advantages: Easy to implement; needs only a true-value simulator, less memory Most faults, including analog faults, can be simulated

142 Serial Algorithm (Cont.)
Disadvantage: Much repeated computation; CPU time prohibitive for VLSI circuits Alternative: Simulate many faults together Test vectors Fault-free circuit Comparator f1 detected? Circuit with fault f1 Comparator f2 detected? Circuit with fault f2 Comparator fn detected? Circuit with fault fn

143 Parallel Fault Simulation
Compiled-code method; best with two-states (0,1) Exploits inherent bit-parallelism of logic operations on computer words Storage: one word per line for two-state simulation Multi-pass simulation: Each pass simulates w-1 new faults, where w is the machine word length Speed up over serial method ~ w-1
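A toy Python sketch of the bit-parallel idea, packing the fault-free circuit and two faulty circuits into one 3-bit word (the gate-level circuit and the two injected faults are made up for illustration):

WIDTH = 3
ALL = (1 << WIDTH) - 1                       # replicate a value across all bit positions

def inject(word, mask_sa0=0, mask_sa1=0):
    return (word & ~mask_sa0) | mask_sa1     # force stuck-at values in selected bits

def simulate(a, b, d):
    A, B, D = (ALL if a else 0), (ALL if b else 0), (ALL if d else 0)
    c = inject(A & B, mask_sa0=0b010)        # bit 1 carries the machine with c s-a-0
    f = inject(B | D, mask_sa1=0b100)        # bit 2 carries the machine with f s-a-1
    return c & f                             # primary output word (bit 0 = fault-free)

g = simulate(a=1, b=1, d=0)
good = g & 1
for bit, fault in [(1, "c s-a-0"), (2, "f s-a-1")]:
    print(fault, "detected" if ((g >> bit) & 1) != good else "not detected")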

144 Parallel Fault Sim. Example
Bit 0: fault-free circuit; bit 1: circuit with c s-a-0; bit 2: circuit with f s-a-1. [Figure: gate-level circuit with inputs a, b, d; for the vector shown, c s-a-0 is detected at the output g.]

145 Deductive Fault Simulation
One-pass simulation Each line k contains a list Lk of faults detectable on k Following true-value simulation of each vector, fault lists of all gate output lines are updated using set-theoretic rules, signal values, and gate input fault lists PO fault lists provide detection data Limitations: Set-theoretic rules difficult to derive for non-Boolean gates Gate delays are difficult to use
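For a 2-input AND gate, the set-theoretic propagation rules the slide refers to can be sketched in Python as follows (fault names are illustrative):

def and_fault_list(va, La, vb, Lb, out):
    """Output value and fault list of out = AND(a, b), given input values and lists."""
    if va == 1 and vb == 1:
        Lout = La | Lb            # either input flipping flips the output
    elif va == 0 and vb == 0:
        Lout = La & Lb            # both controlling inputs must flip
    elif va == 0:
        Lout = La - Lb            # a must flip while b stays non-controlling
    else:
        Lout = Lb - La
    value = va & vb
    return value, Lout | {f"{out} s-a-{1 - value}"}   # add the output's own fault

print(and_fault_list(1, {"a s-a-0"}, 1, {"b s-a-0", "c s-a-0"}, "e"))
# -> value 1 and the list {a s-a-0, b s-a-0, c s-a-0, e s-a-0}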

146 Deductive Fault Sim. Example
Notation: Lk is the fault list for line k; kn is the s-a-n fault on line k. Le = La ∪ Lc ∪ {e0} = {a0, b0, c0, e0}. With Lf = {b0, d0, f1}, Lg = (Le − Lf) ∪ {g0} = {a0, c0, e0, g0}; these are the faults detected by the input vector. [Figure: example circuit with the fault list attached to each line.]

147 Concurrent Fault Simulation
Event-driven simulation of fault-free circuit and only those parts of the faulty circuit that differ in signal states from the fault-free circuit. A list per gate containing copies of the gate from all faulty circuits in which this gate differs. List element contains fault ID, gate input and output values and internal states, if any. All events of fault-free and all faulty circuits are implicitly simulated. Faults can be simulated in any modeling style or detail supported in true-value simulation (offers most flexibility.) Faster than other methods, but uses most memory.

148 Conc. Fault Sim. Example
[Figure: concurrent fault simulation of the same example circuit; each gate carries a list of copies (fault ID plus that copy's gate input/output values) for faults such as a0, b0, c0, e0, d0, f1, and g0 whose values differ from the fault-free gate.]

149 Fault Coverage and Efficiency
Fault coverage = (number of detected faults) / (total number of faults)
Fault efficiency = (number of detected faults) / (total number of faults - number of undetectable faults)
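Written out as code, with hypothetical fault counts:

def fault_coverage(detected, total):
    return detected / total

def fault_efficiency(detected, total, undetectable):
    return detected / (total - undetectable)

print(fault_coverage(950, 1000))           # 0.95
print(fault_efficiency(950, 1000, 30))     # about 0.979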

150 Test Generation Systems
[Figure: test generation system built around SOCRATES with a fault simulator; inputs are the circuit description and fault list, outputs are the test patterns, backtrack distribution, aborted faults, undetected faults, and redundant faults, and a compacter reduces the pattern set.]

151 Test Compaction Example
t1 = 0 1 X, t2 = 0 X 1, t3 = 0 X 0, t4 = X 0 1. Combine t1 and t3, then t2 and t4. Obtain: t13 = 0 1 0, t24 = 0 0 1. Test length shortened from 4 to 2.

152 Test Compaction Fault simulate test patterns in reverse order of generation ATPG patterns go first Randomly-generated patterns go last (because they may have less coverage) When coverage reaches 100%, drop remaining patterns (which are the useless random ones) Significantly shortens test sequence – economic cost reduction

153 Static and Dynamic Compaction of Sequences
Static compaction: ATPG should leave unassigned inputs as X; two patterns are compatible if they have no conflicting values on any PI; combine two tests ta and tb into one test tab = ta ∩ tb using D-intersection; tab detects the union of the faults detected by ta and tb. Dynamic compaction: process every partially-done ATPG vector immediately; assign 0 or 1 to remaining PIs to test additional faults.
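A minimal sketch of the D-intersection test used for static compaction: test cubes are strings over 0/1/X, and two cubes combine only when no position conflicts (the example cubes mirror the compaction example above):

def intersect(ta, tb):
    out = []
    for x, y in zip(ta, tb):
        if x == "X":
            out.append(y)
        elif y == "X" or x == y:
            out.append(x)
        else:
            return None            # conflicting 0/1 assignment: not compatible
    return "".join(out)

print(intersect("01X", "0X0"))     # '010'
print(intersect("0X1", "X01"))     # '001'
print(intersect("01X", "X01"))     # None (conflict on the middle input)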

154 Sequential Circuits A sequential circuit has memory in addition to combinational logic. Test for a fault in a sequential circuit is a sequence of vectors, which Initializes the circuit to a known state Activates the fault, and Propagates the fault effect to a primary output Methods of sequential circuit ATPG Time-frame expansion methods Simulation-based methods

155 Example: A Serial Adder
[Figure: serial adder built from a combinational full adder (inputs An, Bn and carry-in Cn; outputs sum Sn and carry-out Cn+1) with a flip-flop feeding the carry back; an s-a-0 fault inside the adder is marked, and good/faulty (D) values are shown for one vector.]

156 Time-Frame Expansion
[Figure: the adder's combinational logic is replicated for time-frame -1 and time-frame 0, with the s-a-0 fault placed in both copies; the carry flip-flop becomes a connection between the frames, and the fault effect D reaches the output in time-frame 0.]

157 Concept of Time-Frames
If the test sequence for a single stuck-at fault contains n vectors: replicate the combinational logic block n times, place the fault in each block, and generate a test for the resulting multiple stuck-at fault using combinational ATPG with 9-valued logic. [Figure: time-frames -n+1 through 0, each a copy of the combinational block with its own primary outputs; state variables connect consecutive frames, starting from an unknown or given initial state.]

158 Example for Logic Systems
[Figure: small sequential circuit with flip-flops FF1 and FF2, inputs A and B, and an s-a-1 fault, used to compare the two logic systems on the next slides.]

159 Five-Valued Logic (Roth): 0, 1, D, D̄, X
[Figure: two time-frames of the example circuit with the s-a-1 fault in each frame; line values are drawn from the five-valued alphabet.]

160 Nine-Valued Logic (Muth): 0, 1, 1/0, 0/1, 1/X, 0/X, X/0, X/1, X
[Figure: the same two time-frames annotated with nine-valued signal values such as 0/1, 0/X, and X/1.]

