
Logic Synthesis: Past, Present, and Future




1 Logic Synthesis: Past, Present, and Future
Alan Mishchenko UC Berkeley

2 Overview
Introduction
Logic synthesis
Traditional methods
Recent improvements
Formal verification
Conclusions

3 Introduction Design sizes increase
Larger logic circuits have to be handled in both design and verification
Large Boolean expressions arise in other applications: cryptography, risk analysis, computer-network verification, etc.
Development of logic synthesis methods continues
New methods are being discovered (in particular, SAT-based)
Scalability of the traditional methods is being improved
Scalability is a “moving target”

4 Logic Synthesis: Definition
Given a function, derive (synthesize) a circuit
Given a poor circuit, improve it
(Figure: an example function shown as a sum of products and as a circuit.)

5 Representations of Boolean Functions
Truth table (TT)
Sum-of-products (SOP)
Product-of-sums (POS)
Binary decision diagram (BDD)
And-inverter graph (AIG)
Logic network (LN)
(Figure: each representation shown for the example F = ab + cd = (a+c)(a+d)(b+c)(b+d).)

6 What Representation to Use?
For small functions (up to 16 inputs)
Truth tables work the best (checking properties, factoring, etc.)
For medium-sized functions (roughly 16 to 100 inputs)
BDDs are still used (reachability analysis)
Typically, AIGs work better
AIGs can be converted into CNF
A SAT solver can be used for logic manipulation
For large industrial circuits (>100 inputs, >10,000 gates)
Flat representations (truth tables, SOPs, BDDs) do not work
The traditional logic network representation is not efficient
AIGs are a better choice
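To illustrate why truth tables dominate for small functions, here is a Python sketch (not ABC code): an n-input function fits in a single 2^n-bit integer, so Boolean operations become bitwise machine operations.

```python
# Truth tables as integers: an n-input function is a 2**n-bit mask
# whose bit m holds the function's value on input minterm m.
def var(i, n):
    """Truth table of input variable x_i over n inputs."""
    tt = 0
    for m in range(2 ** n):
        if (m >> i) & 1:
            tt |= 1 << m
    return tt

n = 4
a, b, c, d = (var(i, n) for i in range(n))
mask = (1 << (1 << n)) - 1                  # all 2**n result bits

f = (a & b) | (c & d)                       # SOP:  F = ab + cd
g = (a | c) & (a | d) & (b | c) & (b | d)   # POS:  F = (a+c)(a+d)(b+c)(b+d)
not_f = mask & ~f                           # complement stays in range
print(f == g)                               # True: same function
```

Checking a property (such as the SOP/POS equivalence above) is a single integer comparison, which is why truth tables stop being practical only when 2^n bits no longer fit in memory.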

7 Historical Perspective
(Chart: tractable problem size over time, 1980–2010 — from about 16 inputs with truth tables and SOPs (Espresso, MIS, SIS), to 50–100 with BDDs (SIS, VIS, MVSIS), to 100,000 and beyond with AIGs and CNF (ABC).)

8 Traditional Logic Synthesis
SOP minimization
Logic network representation
Algebraic factoring (fast_extract algorithm)
Don’t-care-based optimization
Technology mapping

9 SOP Minimization SOP = Sum-of-Products
For example: F = ab + cd + abc
Given an SOP, minimize the number of products
For example: F = ab + cd + abc = ab + cd
SOP minimization is important because traditional logic synthesis uses SOPs for factoring and node minimization
Tools: MINI (IBM, 1974) by S. J. Hong, R. G. Cain, D. L. Ostapko; Espresso-MV (IBM / UC Berkeley, 1986) by R. Brayton, R. Rudell
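One easy step of SOP minimization is single-cube containment (absorption), which already handles the slide's example. The Python sketch below is a toy illustration of that one step, not the Espresso algorithm; the `absorb` helper and the literal-set cube encoding are assumptions made for this example.

```python
# A cube is a frozenset of literals: ab = {'a','b'}, abc = {'a','b','c'}.
def absorb(cubes):
    """Drop any cube contained in another cube (p absorbs q when p is a
    subset of q, since then q adds no minterms to the sum)."""
    cubes = list(set(cubes))                  # remove duplicate cubes first
    return [q for q in cubes
            if not any(p != q and p <= q for p in cubes)]

F = [frozenset("ab"), frozenset("cd"), frozenset("abc")]
reduced = absorb(F)                           # abc is absorbed by ab
print(sorted("".join(sorted(c)) for c in reduced))   # ['ab', 'cd']
```

With negated literals, the same code works if each literal is a distinct token such as "!c". Real minimizers like Espresso go much further (expand, irredundant cover, reduce), which is what makes them heuristic rather than a single containment pass.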

10 Logic Network Representation
In traditional logic synthesis, netlists are represented as DAGs whose nodes can have arbitrary logic functions
Similar to a standard-cell library, with any library gates
This representation is general but not well-suited for implementing fast, scalable transformations
Later we will see that AIGs are better than logic networks
(Figure: the function F = ab + cd as a logic network and as an And-Inverter Graph.)

11 Algebraic Factoring SOP is a two-level (AND-OR) circuit
In many applications, a multi-level circuit is needed
Algebraic factoring converts an SOP into a multi-level circuit
Factoring of integers and polynomials is similar:
48 = 2 * 2 * 2 * 3
a^2 - b^2 = (a + b)(a - b)
The result of factoring an SOP is not unique
For example: F = ab + ac + bc = a(b+c) + bc = b(a+c) + ac
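The non-uniqueness claim is easy to verify exhaustively for a small function. A Python sketch (illustrative only; the `equal` helper is an assumption of this example):

```python
from itertools import product

def equal(f, g):
    """Exhaustive equivalence check over all 3-bit inputs."""
    return all(f(a, b, c) == g(a, b, c)
               for a, b, c in product([0, 1], repeat=3))

F  = lambda a, b, c: a & b | a & c | b & c    # F = ab + ac + bc
F1 = lambda a, b, c: a & (b | c) | b & c      # F = a(b+c) + bc
F2 = lambda a, b, c: b & (a | c) | a & c      # F = b(a+c) + ac
print(equal(F, F1), equal(F, F2))             # True True
```

Both factored forms implement the same function but lead to different multi-level circuits, which is why factoring quality matters for the downstream netlist.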

12 fast_extract Algorithm
fast_extract (J. Vasudevamurthy and J. Rajski, ICCAD’90) is used for algebraic factoring in practice
It considers all two-cube divisors and single-cube two-literal divisors, and their complements
For example, in the 6-input SOP shown on the slide: D1 = !a + b is a two-cube divisor, D2 = a * !b is a single-cube two-literal divisor, and D1 = !D2
It uses a priority queue to find which divisor to extract
It updates the SOP and the priority queue after extracting each divisor
The resulting factored form is further synthesized by other methods
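The core of the divisor enumeration can be sketched in Python: for each cube pair, strip the shared literals and keep the two remainders as a candidate two-cube divisor. This is only the enumeration idea; real fast_extract also restricts divisor sizes, weighs divisors by savings, and manages a priority queue, none of which is shown here.

```python
from itertools import combinations

# A cube is a frozenset of literals; "!a" denotes the complement of a.
def double_cube_divisors(cubes):
    """For each cube pair, strip shared literals; the two remainders
    form a candidate two-cube divisor."""
    divisors = set()
    for c1, c2 in combinations(cubes, 2):
        common = c1 & c2
        r1, r2 = c1 - common, c2 - common
        if r1 and r2:                         # both remainders non-empty
            divisors.add(frozenset({r1, r2}))
    return divisors

# Example SOP: F = !a*c + b*c + !a*d + b*d
sop = [frozenset({"!a", "c"}), frozenset({"b", "c"}),
       frozenset({"!a", "d"}), frozenset({"b", "d"})]
divs = double_cube_divisors(sop)
print(frozenset({frozenset({"!a"}), frozenset({"b"})}) in divs)  # True
```

Here the divisor !a + b (the slide's D1) is found from the pair (!a*c, b*c) after removing the common literal c; extracting it rewrites F as (!a + b)(c + d).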

13 Don’t-Cares
Don’t-cares are input combinations for which node n in the network can produce any value
(x = 0, y = 1) is a satisfiability don’t-care for node n
(a = 1, b = 1) is an observability don’t-care for node n
(Figure: the example network with internal signals x and y feeding node n.)
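To make the observability case concrete, here is a Python sketch on a hypothetical tiny network (not the one in the slide's figure): signal x = ab feeds both node n's fanin cone and the output F = x OR n. Whenever a = b = 1, F is 1 regardless of n's value, so those inputs are observability don't-cares for n.

```python
from itertools import product

# Hypothetical network: x = ab; output F = x | n, where n is the node
# being optimized. F's insensitivity to n defines n's ODCs.
x_fn = lambda a, b: a & b
out  = lambda x, n: x | n

odcs = [(a, b, c)
        for a, b, c in product([0, 1], repeat=3)
        if out(x_fn(a, b), 0) == out(x_fn(a, b), 1)]  # F insensitive to n
print(odcs)    # [(1, 1, 0), (1, 1, 1)] -- exactly the inputs with a = b = 1
```

On these combinations the optimizer may freely change n's value, which is what enables the node simplifications on the following slides.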

14 Traditional Don’t-Care-Based Optimization
It is a complicated Boolean problem with many variations and improvements – hard to select one “classic algorithm”
Below we consider the work of H. Savoj and R. Brayton from the early 1990s:
Compute compatible subsets of observability don’t-cares (CODCs) for all nodes in one sweep
For each node, project don’t-cares into a local space and minimize the node’s representation
Don’t-cares are represented using BDDs, the node using an SOP
The resulting network has smaller local functions
Quality of mapping, in terms of area and delay, is often improved
The implementation in SIS (full_simplify) has not been widely used
Slow and unscalable in many cases

15 Example of Using Don’t-Cares
Original function: F = ab + d(a!c + bc)
With the complete don’t-cares, the optimized function is: F = b
(Figure: Karnaugh maps of the original function, the don’t-cares, and the optimized result.)

16 Technology Mapping
Input: A Boolean network (And-Inverter Graph)
Output: A netlist of K-LUTs implementing the AIG and optimizing some cost function
(Figure: technology mapping transforms the subject graph into the mapped netlist.)
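The first step of LUT mapping is enumerating K-feasible cuts for every AIG node. A Python sketch of the classic bottom-up enumeration (the dict-based AIG encoding is an assumption of this example; real mappers also prune cuts by dominance and cost):

```python
def enumerate_cuts(aig, k):
    """K-feasible cut enumeration over a topologically ordered AIG.
    `aig` maps node -> (fanin0, fanin1), or None for a primary input."""
    cuts = {}
    for node, fanins in aig.items():
        if fanins is None:                      # primary input
            cuts[node] = [frozenset({node})]
            continue
        f0, f1 = fanins
        result = [frozenset({node})]            # the trivial cut
        for c0 in cuts[f0]:                     # merge fanin cuts
            for c1 in cuts[f1]:
                merged = c0 | c1
                if len(merged) <= k and merged not in result:
                    result.append(merged)
        cuts[node] = result
    return cuts

# F = ab + cd as an AND structure (inverters live on edges in an AIG).
aig = {"a": None, "b": None, "c": None, "d": None,
       "n1": ("a", "b"), "n2": ("c", "d"), "F": ("n1", "n2")}
print(frozenset("abcd") in enumerate_cuts(aig, 4)["F"])   # True
```

The cut {a, b, c, d} means the whole cone of F fits in a single 4-LUT; the mapper then selects one cut per node to optimize the cost function.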

17 Recent Improvements
AIG: standard logic representation
Synthesis: improved factoring, improved (SAT-based) don’t-care computation, using structural choices
Mapping
Verification

18 And-Inverter Graph (AIG) Definition and Examples
An AIG is a Boolean network composed of two-input ANDs and inverters
F(a,b,c,d) = ab + d(a!c + bc): 6 nodes, 4 levels
F(a,b,c,d) = a!c(b + d) + bc(a + d): 7 nodes, 3 levels
(Figure: the Karnaugh map and the two AIG structures of the same function.)

19 Components of Efficient AIG Package
Structural hashing
Leads to a compact representation
Is applied during AIG construction
Propagates constants
Makes each node structurally unique
Complemented edges
Represent inverters as attributes on the edges
Lead to fast, uniform manipulation
Do not use memory for inverters
Increase logic sharing using DeMorgan’s rule
Memory allocation
Uses a fixed amount of memory for each node
Can be done by a custom memory manager (even dynamic fanout can be implemented this way)
Allocates memory for nodes in topological order, optimizing traversal in that order
Small static memory footprint for many applications
Fanout information is computed on demand
(Figure: the same netlist without and with structural hashing.)
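A minimal Python sketch of the first two components, structural hashing and complemented edges; the memory manager, constant propagation, and fanout machinery of a real AIG package are deliberately omitted, and the class layout here is an assumption made for illustration.

```python
class AIG:
    """Minimal AIG with structural hashing and complemented edges.
    A literal is 2*node + phase: even = direct, odd = inverted."""
    def __init__(self):
        self.strash = {}        # (lit0, lit1) -> existing AND literal
        self.nodes = []         # node id -> (lit0, lit1), None for PI

    def new_input(self):
        self.nodes.append(None)
        return 2 * (len(self.nodes) - 1)

    def AND(self, f, g):
        key = (min(f, g), max(f, g))    # canonical fanin order
        if key not in self.strash:      # hash lookup: reuse identical node
            self.nodes.append(key)
            self.strash[key] = 2 * (len(self.nodes) - 1)
        return self.strash[key]

    def NOT(self, f):
        return f ^ 1                    # flip the complement bit: free

g = AIG()
a, b = g.new_input(), g.new_input()
x = g.AND(a, b)
y = g.AND(b, a)          # structurally identical: no new node is created
print(x == y)            # True
```

Because inversion only flips a bit in the literal, inverters cost no memory, and the hash table guarantees each AND structure exists once, which is what keeps the representation compact.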

20 AIG: A Unifying Representation
An underlying data structure for various computations
Rewriting, resubstitution, simulation, SAT sweeping, etc., are based on the AIG manager
A unifying representation for the computation flow
Synthesis, mapping, and verification use the same data structure
Allows multiple structures to be stored and used for mapping
The main circuit representation in our software (ABC)

21 Improved Algebraic Factoring
Traditional factoring finds divisors by enumerating cube pairs (quadratic in the number of cubes) Recent improvement (B. Schmitt, ASP-DAC’17) computes the same divisors using a hash table The new implementation is linear in the number of cubes and scales better in practice Compare commands “fx” and “fxch” in ABC

22 SAT-based Methods Currently many hard computational problems are formulated and solved using Boolean satisfiability Resubstitution, decomposition, delay/area optimization, etc (and also technology mapping, detailed placement, routing, etc) Heuristic solutions are often based on exact minimum results computed for small problem instances For example, technology mapping for a small multi-output subject graph (20-30 nodes) is solved exactly in a short time A larger design is optimized by a sliding window approach A robust formulation is as important as a fast SAT solver Don’t-cares can be efficiently used during synthesis without computing them

23 Terminology
Logic function (e.g. F = ab+cd)
Variables (e.g. b)
Literals (e.g. b and !b)
Minterms (e.g. a!bcd)
Cubes (e.g. ab)
Logic network
Primary inputs/outputs
Logic nodes
Fanins/fanouts
Transitive fanin/fanout cones (TFI/TFO)
Cuts and windows

24 Don’t-Care-Based Optimization
Definition: A window for node n in the network is the context in which its functionality is considered
A window includes:
k levels of the TFI
m levels of the TFO
all re-convergent paths contained in this scope
(Figure: a window around node n with k = 3 and m = 3, showing window PIs and window POs.)

25 Characterizing Don’t-Cares
Construction steps:
Collect candidate divisors di of node n (they are not in the TFO of n, and their support is a subset of that of node n)
Duplicate the window of node n, using the same set of input variables but a different set of output variables
Add an inverter for node n in one copy
Create a comparator for the outputs and set the comparator to 1; this characterizes the care set of node n
Use in SAT-based synthesis:
Convert the circuit to CNF and add this CNF to the SAT solver as a constraint, characterizing don’t-cares without computing them explicitly
The SAT solver is used to derive different functions F(d1, d2, …) implementing node n in terms of divisors d1, d2, …
(Figure: the duplicated window with divisors d1, d2 and the output comparator.)

26 Synthesis with Choices
Traditional synthesis explores a sequence of netlist snapshots
Modern synthesis allows snapshots to be mixed-and-matched
This is achieved by computing structural choices
(Figure: traditional synthesis produces snapshots D1–D4 in sequence; synthesis with structural choices combines them in a HAIG.)

27 Choice Computation Input is several versions of the same netlist
For example, area- and delay-optimized versions
Output is a netlist with “choice nodes”
Fanins of a choice node are functionally equivalent and can be used interchangeably
Structural choices are given to a technology mapper, which uses them to mix-and-match different structures, resulting in improved area/delay
Computation:
Combine the netlists into one netlist (share inputs, append outputs)
Detect all pairs of equivalent nodes in the netlist (SAT sweeping)
Record node pairs proved functionally equivalent by the SAT solver
Re-hash the netlist with these equivalences and create choice nodes

28 Modern Technology Mapping
Unification of all types of mappers
Standard cells, LUTs, programmable cells, etc.
Using structural choices in all mappers
State-of-the-art heuristics use two good cuts at a node
Several flavors of SAT-based mapping are being researched
Currently SAT is limited to post-processing/optimization
Also, SAT is used to pre-compute optimal results for small functions
It may appear in mainstream mappers in the future

29 Summary of “Modern Synthesis”
Using AIGs to represent circuits in all computations Using simulation and SAT for Boolean manipulation Achieving (near-) linear complexity in most algorithms by performing iterative local computations Often using pre-computed data, hashing, truth tables, etc Scaling to 10M+ AIG nodes in many computations

30 Formal Verification Goal Formal vs. “informal” Difficulties
Goal: making sure the design performs according to the spec
Formal vs. “informal”:
Simulation is often not enough
Bounded verification is important but often not enough
Difficulties:
Dealing with real designs
Modeling sequential logic (initialization, clock domains, etc.)
Modeling the environment (need support for constraints)
Dealing with very large problems (need robust abstraction)
Industrial adoption is hindered by a lack of understanding of how to apply formal methods and by the weakness of the tools
Two aspects of formal: the front-end (modeling) and the back-end (solving)
In this presentation, we are mostly talking about the back-end

31 Sequential Miter
A miter is a sequential logic circuit: the result of modeling by the front-end, to be solved by the back-end
Equivalence checking: the miter contains two copies of the design (golden and implementation)
Model checking: the miter contains one copy of the design and a property to be verified
The goal is to transform the miter, presented as a sequential AIG, and prove that its output is constant 0
(Figure: a two-copy miter for equivalence checking and a one-copy miter for property checking.)

32 Outcomes of Verifying a Property
Success The property holds in all reachable states Failure A finite-length counter-example (CEX) is found Undecided A limit on resources (such as runtime) is reached

33 Certifying Verification
How do we know that the answer produced by a verification tool is correct? If the result is “SAT”, re-run the resulting CEX to make sure that it is valid If the result is “UNSAT”, an inductive invariant can be generated and checked by a third-party tool

34 Inductive Invariant
An inductive invariant is a Boolean function in terms of register variables, such that:
It is true for the initial state(s)
It is inductive: assuming that it holds in one (or more) time-frames allows us to prove it in the next time-frame
It does not contain “bad states” where the property fails
(Figure: the state space with the initial states, the reached states, the invariant, and the bad states.)
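The three conditions can be checked mechanically. A Python sketch on a hypothetical toy transition system (in practice the states are register valuations and the checks are SAT queries, as in the certification slide above):

```python
# Toy transition system: a counter over states {0,1,2,3} that cycles
# 0 -> 1 -> 2 -> 0; the "bad" state 3 violates the property.
init = {0}
step = lambda s: (s + 1) % 3
bad  = {3}
inv  = {0, 1, 2}        # claimed inductive invariant

ok = (init <= inv                              # holds in the initial states
      and all(step(s) in inv for s in inv)     # closed under transitions
      and not (inv & bad))                     # excludes the bad states
print(ok)    # True: inv certifies that state 3 is unreachable
```

These are exactly the checks a third-party tool performs when certifying an "UNSAT" verification result: they require only the invariant and the design, not a re-run of the original prover.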

35 Verification Engines
Provers:
K-step induction, with or without uniqueness constraints
BDDs (exact reachability)
Interpolation (over-approximate reachability)
Property directed reachability (over-approximate reachability)
Interval property checking
Bug-hunters:
Random simulation
Bounded model checking (BMC)
Hybrids of the two (“semi-formal”)
Transformers:
Combinational optimization
Retiming
etc.

36 Integrated Verification Flow
Creating a sequential miter: deriving logic for gates or RTL; modeling clocks and multi-phase clocking; representing initialization logic, etc.
Initial simplification: structural hashing, sequential cleanup, etc.
Progressive solving: applying engine sequences (concurrently); using abstraction, speculation, etc.
Last-gasp solving: trying to prove or find a bug with high resource limits

37 ABC in Design Flow
(Figure: system specification → RTL → logic synthesis → technology mapping → physical synthesis → manufacturing, with ABC performing logic synthesis, technology mapping, and verification.)

38 Summary Introduced logic synthesis
Described representations and algorithms Outlined recent improvements

39 ABC Resources
Tutorial: R. Brayton and A. Mishchenko, "ABC: An academic industrial-strength verification tool", Proc. CAV'10, LNCS 6174
Complete source code
Windows binary

40 Abstract This talk will review the development of logic synthesis from manipulating small Boolean functions using truth tables, through the discovery of efficient heuristic methods for sum-of-product minimization and algebraic factoring applicable to medium-sized circuits, to the present-day automated design flow, which can process digital designs with millions of logic gates. The talk will discuss the main computation engines and packages used in logic synthesis, such as circuit representation in the form of And-Inverter Graphs, recent improvements to the traditional minimization and factoring methods, priority-cut-based technology mapping into standard cells and lookup tables, and don't-care-based optimization using Boolean satisfiability. We will also discuss the importance of formal verification for validating the results produced by the synthesis tools, and the deep synergy between the algorithms and data structures used in synthesis and verification. In the course of this talk, the presenter will share his 14-year experience of being part of an academic research group with close connections to companies in Silicon Valley, both design houses and CAD tool vendors.

41 Bio Alan Mishchenko graduated from the Moscow Institute of Physics and Technology, Moscow, Russia, and received his Ph.D. from the Glushkov Institute of Cybernetics, Kiev, Ukraine. In 2002, Alan started at the University of California at Berkeley as an Assistant Researcher and, in 2013, he became a Full Researcher. Alan shared the D.O. Pederson TCAD Best Paper Award in 2008 and the SRC Technical Excellence Award in 2011 for work on ABC. His research interests are in developing efficient methods for logic synthesis and verification.




