
1 Logic Synthesis 5
Outline
–Multi-Level Logic Optimization
–Recursive Learning - HANNIBAL
Goal
–Understand recursive learning
–Understand the HANNIBAL algorithms
Optional Reading
–Kunz and Menon, ICCAD94

2 Implications
–Find all value assignments in the circuit necessary for the consistency of a given set of value assignments
–Example: the assignments y = 0, f = 1 imply d = 0, a = 0, c = 0
–y = 0 is justified once a complete set of necessary assignments is found
Direct Implications
–Variable assignments that can be made in a circuit directly from gate truth tables and circuit connectivity
–The example above is direct:
»y = 0, f = 1 => d = 0 => a = 0, c = 0
(Figure: circuit with inputs a and c driving node d; assignments f = 1, y = 0)
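The direct-implication step above can be sketched as fixed-point propagation over gate truth tables. This is a minimal illustration in Python; the netlist d = OR(a, c), y = AND(d, f) follows the slide's example, but the helper names are assumptions, not HANNIBAL's actual code:

```python
# Minimal sketch of direct implication propagation (illustrative only).
GATES = [
    ("d", "OR",  ["a", "c"]),   # d = a + c
    ("y", "AND", ["d", "f"]),   # y = d * f
]

def direct_implications(assign):
    """Apply gate truth tables forward and backward until a fixed point."""
    changed = True
    while changed:
        changed = False
        for out, kind, ins in GATES:
            vals = [assign.get(i) for i in ins]
            # Forward: all inputs known => output known.
            if None not in vals and assign.get(out) is None:
                assign[out] = int(all(vals)) if kind == "AND" else int(any(vals))
                changed = True
            # Backward: AND output 0 with all other inputs 1 => last input 0.
            if kind == "AND" and assign.get(out) == 0:
                unknown = [i for i, v in zip(ins, vals) if v is None]
                if len(unknown) == 1 and all(v == 1 for v in vals if v is not None):
                    assign[unknown[0]] = 0
                    changed = True
            # Backward: OR output 0 => every input 0.
            if kind == "OR" and assign.get(out) == 0:
                for i in ins:
                    if assign.get(i) is None:
                        assign[i] = 0
                        changed = True
    return assign

result = direct_implications({"y": 0, "f": 1})
# yields d = 0, then a = 0 and c = 0, justifying y = 0
```

Starting from y = 0, f = 1, the AND gate forces d = 0, and the OR gate then forces a = 0 and c = 0, matching the implication chain on the slide.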

3 Indirect Implications
–Variable assignments that can only be made through the intersection of multiple temporary direct implications
–Example: the assignment y = 1 is unjustified
–Direct implications are blocked, since y = 1 holds for d = 1 or for e = 1
–Make both possible assignments at d and e
–Make direct implications in each case and look for the intersection: locations where the assignment is the same in either case
–f = 1 in either case, so y = 1 => f = 1
(Figure: unjustified assignment y = 1; case d = 1 gives a = 1, f = 1, and case e = 1 gives c = 1, f = 1)
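The learned implication y = 1 => f = 1 can be confirmed by brute force on the figure's circuit, y = af + fc. This is a quick exhaustive check, not the learning procedure itself:

```python
from itertools import product

def y(a, c, f):
    """The slide's circuit: d = a AND f, e = f AND c, y = d OR e."""
    return (a & f) | (f & c)

# In every input assignment where y = 1, f is also 1,
# which is exactly the indirect implication y = 1 => f = 1.
holds = all(f == 1 for a, c, f in product([0, 1], repeat=3) if y(a, c, f) == 1)
```

Recursive learning derives the same fact without enumerating inputs, by intersecting the direct implications of the two justifications d = 1 and e = 1.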

4 Recursive Learning
Apply indirect implications recursively, up to a depth limit r_max (initially r = 0):

make_all_implications(r, r_max)
  make all direct implications
  set up list U_r of all unjustified functions
  if r < r_max
    for each function f_i in U_r
      set up list C_r(f_i) of justifications for f_i
      for each justification J_i in C_r(f_i)
        make the assignments in J_i
        make_all_implications(r+1, r_max)
      if at least one variable y_i in the circuit assumes the same
      logic value V_i for all consistent justifications J_k in C_r(f_i)
        learn: y_i = V_i is uniquely determined in level r
        make direct implications for all y_i = V_i in level r
      if all justifications are inconsistent
        learn: the given situation of value assignments in level r is inconsistent

5 Logic Optimization
Indirect implications imply circuit suboptimality – they suggest good factors
–Example 1: y = af + fc
»y = 1 => f = 1 is indirect – blocked at the OR gate
»Factor: y = af + fc => (a + c)f
»In the factored circuit, y = 1 => f = 1 is direct
–Example 2: y = g(af + fc) + h(df + fe) = f(g(a + c) + h(d + e))
»y = 1 => f = 1 is indirect in the original form – blocked at the OR gates – and direct in the factored form

6 Boolean Division: Shannon's Expansion
Expansion of a Boolean function y with respect to a variable x_i:
–y(x_1,...,x_n) = x_i y(x_1,...,x_i=1,...,x_n) + ~x_i y(x_1,...,x_i=0,...,x_n)
Generalization to function variables:
–y(x) = f(x) y(x)|f(x)=1 + ~f(x) y(x)|f(x)=0
–y(x)|f(x)=V = y(x) if f(x) = V, X (don't care) otherwise
–f(x) = x_i is a special case
Example
–y = (a + f)(f + c)
–y = f(a + 1)(1 + c) + ~f(a + 0)(0 + c) = f + ~f ac = f + ac
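The example expansion can be verified exhaustively over all input combinations. The lambda names are arbitrary; this is a throwaway check, not part of the algorithm:

```python
from itertools import product

original   = lambda a, c, f: (a | f) & (f | c)   # y = (a + f)(f + c)
simplified = lambda a, c, f: f | (a & c)         # f + ac, after expanding w.r.t. f

# Shannon expansion w.r.t. f followed by constant simplification
# must preserve the function on every input assignment.
equal = all(original(a, c, f) == simplified(a, c, f)
            for a, c, f in product([0, 1], repeat=3))
```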

7 Optimization by Expansion
Expansion is a form of Boolean division
–Generates many internal don't cares
–Exploit the don't cares to optimize the circuit
–Use y itself as a cover for y(x)|f(x)=1 and y(x)|f(x)=0:
»y = f y + ~f y
»Do not substitute constants into y
»This leaves even more redundancy in the circuit
»Example: use (a + f)(f + c) as the cover for 1, and (a + f)(f + c) as the cover for ac
Optimization approach
–Find a divisor f
–Transformation: y' = f y + ~f y
–Reduction: remove redundancy
–Repeat until the cost is stable

8 Finding Divisors: Indirect Implications
Permissible function
–A function y' that can replace y without changing the circuit function
»Exploits don't cares
–Example: y = ab, b is don't care, then y' = a
Transformations
–Require both stuck-at faults at y to be testable
»Some circuit input values force y to both 0 and 1, and some output value depends on this
–Require f not in the transitive fanout of y - no loops
–y' = y|f=1 + ~f = y + ~f for the implication y = 0 => f = 1
–y' = f + y|f=0 = f + y for the implication y = 0 => f = 0
–y' = f y|f=1 = f y for the implication y = 1 => f = 1
»y = f y + ~f y, and ~f y = 0, so y = f y
–y' = ~f y|f=0 = ~f y for the implication y = 1 => f = 0
–This set is complete: the transformations can be composed to get any transformation
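For the third transformation, y' = f y under the implication y = 1 => f = 1, a brute-force check on the earlier example y = af + fc confirms both the implication and the equivalence. Illustrative code only, not taken from the paper:

```python
from itertools import product

def y(a, c, f):
    return (a & f) | (f & c)      # y = af + fc

def y_new(a, c, f):
    return f & y(a, c, f)         # transformed circuit y' = f y

inputs = list(product([0, 1], repeat=3))
# The implication y = 1 => f = 1 holds for this circuit...
implication = all(f == 1 for a, c, f in inputs if y(a, c, f) == 1)
# ...so ~f y = 0, and y' = f y is a permissible replacement for y.
equivalent = all(y(a, c, f) == y_new(a, c, f) for a, c, f in inputs)
```

The same style of check works for the other three transformations, each paired with its implication.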

9 Making Implications
Use the D-calculus to make implications
–V = { 0, 1, D, D', X }
–D - node stuck-at-1
–D' - node stuck-at-0
–X - unknown
–5-valued truth tables: 1 AND D = D, 0 OR D = D, NOT(D) = D'
D and D' act as don't cares
–The other nodes could be 0 or 1
–An implication is true regardless of the other node values
»e.g. y = 0 => f = 1 for all other node values
Circuit transformations must work for all circuit inputs
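The 5-valued operations can be sketched by encoding each definite value as a pair of bits, one per circuit copy, a common encoding in the ATPG literature; this is an assumption for illustration, not HANNIBAL's data structures:

```python
# Sketch of 5-valued D-calculus operations via paired bit values.
# 0 = (0, 0), 1 = (1, 1), D = (1, 0), D' = (0, 1): D is 1 in one
# circuit copy and 0 in the other, D' the complement; X is unknown.
ENC = {"0": (0, 0), "1": (1, 1), "D": (1, 0), "D'": (0, 1)}
DEC = {v: k for k, v in ENC.items()}

def v_and(x, y):
    if "X" in (x, y):
        return "0" if "0" in (x, y) else "X"   # 0 dominates AND even vs. X
    (gx, fx), (gy, fy) = ENC[x], ENC[y]
    return DEC[(gx & gy, fx & fy)]             # AND each copy independently

def v_or(x, y):
    if "X" in (x, y):
        return "1" if "1" in (x, y) else "X"   # 1 dominates OR even vs. X
    (gx, fx), (gy, fy) = ENC[x], ENC[y]
    return DEC[(gx | gy, fx | fy)]

def v_not(x):
    if x == "X":
        return "X"
    g, f = ENC[x]
    return DEC[(1 - g, 1 - f)]                 # complement both copies
```

This reproduces the slide's identities: 1 AND D = D, 0 OR D = D, NOT(D) = D'.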

10 Divisor Selection
Make implications at all nodes
–Results in many implications, so many circuit transformations are possible
Choose the most indirect implication
–Most likely to reduce the circuit a lot
Limit the recursion depth
–Execution time grows exponentially with r_max
–Use recursion depth 2
–All optimizations found with a higher recursion depth can be found with a sequence of optimizations at smaller depth

11 Division and Redundancy Removal
Modify the circuit according to the transformation
–y' = f y for the implication y = 1 => f = 1
»Insert an AND gate computing y' = f y
–Similar for the other three transformations
Remove redundancy using redundant faults
–Identify all redundant faults in the transformed circuit, using ATPG techniques
–Consider stuck-at-0/1 faults on the nodes touched by recursive learning when finding the current transformation
–A fault is redundant if no test can be found for it
»It does not change the circuit function
–Replace the faulty node with a constant (e.g. 0 for stuck-at-0)
–Perform constant propagation: AND(1, A) = A
(Figure: inserted AND gate with inputs f and y producing y')

12 Example
–Determine the divisor: the implication y = 1 => f = 1 in a circuit with inputs a, b, c and internal nodes u, v, w, d
–Transform the circuit: insert an AND gate computing y' = f y; a stuck-at-1 (SA1) fault in the transformed circuit is redundant
–Remove the redundancy by replacing the faulty node with a constant and propagating
–Result: the optimized circuit saves one AND gate
(Figure: three circuit snapshots - the original circuit, the transformed circuit with the SA1 fault marked, and the reduced circuit)

13 Conclusions
Among the best optimization results to date
–Up to 30% better than factorization by kernels
–Redundancy addition/removal uses a related approach
Reasonable execution time
–2 minutes to 7 hours on a SPARC 10 for the ISCAS85 circuits
–Up to 60% reduction in circuit size from the original
Conclusions
–Structural techniques work better than equation-based techniques
–ATPG-based factorization works better than kernel-based techniques
–LOT adds transforms to improve testability and optimization
–Minpower adds transforms to minimize power consumption

