The Synergy between Logic Synthesis and Equivalence Checking
R. Brayton, UC Berkeley
Thanks to SRC, NSF, the California Micro Program, and industrial sponsors: Actel, Altera, Calypto, Intel, Magma, Synplicity, Synopsys, Xilinx

Outline
- Emphasize mostly synthesis
- Look at the operations of classical logic synthesis
- Contrast these with newer methods based on ideas borrowed from verification
  - Themes will be scalability and verifiability
- Look at new approaches to sequential logic synthesis and verification

Two Kinds of Synergy
1. Algorithms and advancements in verification are used in synthesis, and vice versa
2. Verification enables synthesis
   - The ability to equivalence check enables the use and acceptance of sequential operations:
     retiming, use of unreachable states, sequential redundancy removal, etc.
3. Synthesis enables verification
   - The desire to use sequential synthesis operations spurs verification developments

Examples of the Synergy
- Similar solutions
  - e.g. retiming in synthesis / retiming in verification
- Algorithm migration
  - e.g. BDDs, SAT, induction, interpolation, rewriting
- Related complexity
  - scalable synthesis ⇔ scalable verification (approximately)
- Common data structures
  - e.g. combinational and sequential AIGs

Quick Overview of "Classical" Logic Synthesis
- Boolean network
- Network manipulation (algebraic)
  - Elimination
  - Decomposition (common kernel extraction)
- Node minimization
  - Espresso
  - Don't cares
- Resubstitution (algebraic or Boolean)

"Classical" Logic Synthesis
[Figure: a Boolean network in SIS and the equivalent AIG in ABC]
An AIG is a Boolean network composed of 2-input AND nodes and inverters (shown as dotted lines)
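
To make the AIG definition above concrete, here is a minimal Python sketch of an AIG package (illustrative only, not ABC's implementation): literals follow the AIGER-style encoding 2*var, +1 when complemented, inverters are just complemented edges, and structural hashing reuses identical AND nodes. The class and function names are made up for this example.

    # Minimal AIG sketch: literals are 2*var (+1 if complemented); inverters are edges.
    class AIG:
        def __init__(self):
            self.nands = {}        # (lit0, lit1) -> AND-node literal (structural hashing)
            self.next_var = 1      # variable 0 would be the constant node

        def new_input(self):
            lit = 2 * self.next_var
            self.next_var += 1
            return lit

        @staticmethod
        def neg(lit):
            return lit ^ 1         # an "inverter" is the complemented-edge bit

        def AND(self, a, b):
            if a > b:
                a, b = b, a        # canonical fanin order
            if (a, b) not in self.nands:       # structural hashing: reuse identical nodes
                self.nands[(a, b)] = 2 * self.next_var
                self.next_var += 1
            return self.nands[(a, b)]

        def OR(self, a, b):        # derived gate: OR(a, b) = NOT(AND(NOT a, NOT b))
            return self.neg(self.AND(self.neg(a), self.neg(b)))

    aig = AIG()
    x, y = aig.new_input(), aig.new_input()
    f = aig.OR(aig.AND(x, y), aig.AND(x, aig.neg(y)))
    print(len(aig.nands), "AND nodes")         # 3 AND nodes for this small example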

One AIG Node – Many Cuts
[Figure: a combinational AIG with several different cuts drawn for the same node]
- An AIG can be used to compute many cuts for each node
  - Each cut in the AIG represents a different SIS node
  - No a priori fixed boundaries
- This implies that AIG manipulation with cuts is equivalent to working on many Boolean networks at the same time
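
A small Python sketch of bottom-up k-cut enumeration (illustrative, not ABC's cut package): the cuts of an AND node are formed by merging the cuts of its two fanins and keeping those with at most k leaves, plus the trivial cut consisting of the node itself. The tiny AIG here is hypothetical.

    from itertools import product

    # Illustrative AND-only AIG: node -> (fanin0, fanin1); primary inputs have no entry.
    aig = {'f': ('x', 'y'), 'g': ('y', 'z'), 'h': ('f', 'g')}
    inputs = {'x', 'y', 'z'}

    def enumerate_cuts(aig, inputs, k=4):
        cuts = {i: {frozenset([i])} for i in inputs}     # trivial cut of each input
        for node, (a, b) in aig.items():                 # relies on topological order
            node_cuts = {frozenset([node])}              # the trivial cut {node}
            for ca, cb in product(cuts[a], cuts[b]):     # merge fanin cuts pairwise
                merged = ca | cb
                if len(merged) <= k:                     # keep only k-feasible cuts
                    node_cuts.add(merged)
            cuts[node] = node_cuts
        return cuts

    cuts = enumerate_cuts(aig, inputs, k=3)
    for cut in sorted(cuts['h'], key=len):
        print(sorted(cut))        # each cut is a possible "SIS node" boundary for h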

Combinational Rewriting

    iterate 10 times {
        for each AIG node {
            for each k-cut
                derive node output as a function of the cut variables
                if ( smaller AIG is in the pre-computed library )
                    rewrite using the improved AIG structure
        }
    }

Note: each AIG node has, on average, five 4-cuts, compared to a SIS node with only one cut
Rewriting at a node can be very fast – using hash-table lookups, truth-table manipulation, and disjoint decomposition
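
The key step "derive node output as a function of the cut variables" can be sketched as follows (illustrative Python, assuming an AND-only AIG without inverters): the truth table computed over the cut leaves is what would be looked up, after canonization, in the pre-computed library.

    from itertools import product

    # Illustrative AND-only AIG (inverters omitted for brevity).
    aig = {'f': ('x', 'y'), 'g': ('y', 'z'), 'h': ('f', 'g')}

    def node_function_over_cut(aig, node, cut):
        """Truth table of `node` expressed over the cut variables."""
        def evaluate(n, assignment):
            if n in assignment:                          # reached a cut leaf
                return assignment[n]
            a, b = aig[n]
            return evaluate(a, assignment) & evaluate(b, assignment)
        return tuple(evaluate(node, dict(zip(cut, values)))
                     for values in product([0, 1], repeat=len(cut)))

    # The same node seen through two different cuts gives two different local functions.
    print(node_function_over_cut(aig, 'h', ['f', 'g']))        # (0, 0, 0, 1): AND of the leaves
    print(node_function_over_cut(aig, 'h', ['x', 'y', 'z']))   # x & y & z over three leaves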

Combinational Rewriting Illustrated
[Figure: node n and its replacement n' in the Working AIG and in the History AIG]
- AIG rewriting looks at one AIG node, n, at a time
- A set of new nodes replaces the old fanin cone of n
- The History AIG contains all nodes ever created in the AIG
- The old root and the new root nodes are grouped into an equivalence class (more on this later)

Comparison of Two Syntheses
"Classical" synthesis
- Boolean network
- Network manipulation (algebraic)
  - Elimination
  - Decomposition (common kernel extraction)
- Node minimization
  - Espresso
  - Don't cares computed using BDDs
- Resubstitution
"Contemporary" synthesis
- AIG network
- DAG-aware AIG rewriting (Boolean)
  - Several related algorithms: rewriting, refactoring, balancing
- Node minimization
  - Boolean decomposition
  - Don't cares computed using simulation and SAT
- Resubstitution with don't cares

Node Minimization Comparison
[Figure: the same logic as a SIS network and as an AIG]
- SIS network: call ESPRESSO on the node function
- AIG: evaluate the gain for all k-cuts of the node and take the best result
Note: computing cuts becomes a fundamental computation

Types of Don't-Cares
- SDCs (satisfiability don't-cares)
  - Input patterns that never appear at the inputs of a node due to its transitive fanin
- ODCs (observability don't-cares)
  - Input patterns for which the output of a node is not observable
- EXDCs (external don't-cares)
  - Pre-specified or computed external don't-cares (e.g. subsets of unreachable states)

Illustration of SDCs and ODCs (combinational)
[Figure: two small example circuits, one for each kind of don't-care]
- x = 0, y = 1 is an SDC for node F (limited satisfiability)
- a = 1, b = 1 is an ODC for node F (limited observability)
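
Since the slide's figure is not reproduced here, the following Python sketch builds its own tiny circuit (all functions are made up) and computes SDCs and ODCs for a node F by exhaustive enumeration, which is only feasible at this toy scale.

    from itertools import product

    # Made-up example circuit (not the slide's figure):
    #   node F has fanins x = a & b and y = a | b; the primary output is out = F & (a ^ b).
    def x_of(a, b): return a & b
    def y_of(a, b): return a | b
    def F_of(x, y): return x | y                  # the node under analysis
    def out_of(a, b, f): return f & (a ^ b)       # how F is observed at the output

    # SDCs: (x, y) patterns that never occur for any primary-input assignment.
    occurring = {(x_of(a, b), y_of(a, b)) for a, b in product([0, 1], repeat=2)}
    sdcs = sorted(set(product([0, 1], repeat=2)) - occurring)
    print("SDC patterns (x, y):", sdcs)           # [(1, 0)] -- x = 1, y = 0 never happens

    # ODCs: primary-input patterns under which flipping F does not change the output.
    odcs = [(a, b) for a, b in product([0, 1], repeat=2)
            if out_of(a, b, F_of(x_of(a, b), y_of(a, b))) ==
               out_of(a, b, 1 - F_of(x_of(a, b), y_of(a, b)))]
    print("ODC patterns (a, b):", odcs)           # [(0, 0), (1, 1)] -- F unobservable when a == b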

SPFDs: Sets of Pairs of Functions to be Distinguished ("don't-cares beyond don't-cares")
[Figure: the onset, offset, and don't-cares of an incompletely-specified function (ISF) viewed as an SPFD]
- An SPFD can be represented as a bipartite graph

Scalability of Don't-Care Computation
- Scalability is achieved by windowing
  - A window defines the local context of a node
- Don't-cares are computed and used in
  - Post-mapping resynthesis
    - a Boolean network derived from the AIG using technology mapping
  - High-effort AIG minimization
    - an AIG with some nodes clustered

Windowing a Node in the Network
[Figure: a Boolean network with a window (n = 3 TFI levels, m = 3 TFO levels) and its window PIs and POs]
- Definition: a window for a node in the network is the context in which its don't-cares are computed
- A window includes
  - n levels of the TFI
  - m levels of the TFO
  - all re-convergent paths captured in this scope
- A window, with its PIs and POs, can be considered as a separate network
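
A rough Python sketch of window construction (illustrative only): it collects n levels of TFI and m levels of TFO around a pivot node; the real construction would also add the nodes needed to capture re-convergent paths in this scope, which this sketch omits. The small netlist is hypothetical.

    # Illustrative windowing sketch: gather n TFI levels and m TFO levels around a pivot.
    fanins = {'n': ['a', 'b'], 'p': ['n', 'c'], 'q': ['n', 'd'], 'r': ['p', 'q'],
              'a': [], 'b': [], 'c': [], 'd': []}
    fanouts = {}
    for node, fis in fanins.items():
        for fi in fis:
            fanouts.setdefault(fi, []).append(node)

    def levels(start, neighbors, depth):
        frontier, seen = {start}, {start}
        for _ in range(depth):
            frontier = {nxt for node in frontier for nxt in neighbors.get(node, [])}
            seen |= frontier
        return seen

    def window(pivot, n, m):
        return levels(pivot, fanins, n) | levels(pivot, fanouts, m)

    print(sorted(window('n', n=1, m=2)))   # ['a', 'b', 'n', 'p', 'q', 'r']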

Don't-Care Computation Framework
[Figure: a "miter" constructed for the window POs – the window containing node n, together with a copy of the same window in which n is inverted]

Implementation of Don't-Care Computation
Compute the care set:
- Simulation
  - Simulate the miter using random patterns
  - Collect the PI (X) minterms for which the output of the miter is 1
  - This is a subset of the care set
- Satisfiability
  - Derive the set of network clauses
  - Add the negation of the current care set
  - Assert the output of the miter to be 1
  - Enumerate through the SAT assignments
  - Add these assignments to the care set
This illustrates a typical use of simulation and SAT:
- Simulate to filter out possibilities
- Use SAT to check if the remainder is OK (or if a property holds)
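
A toy Python sketch of the care-set computation (the window functions are made up; exhaustive enumeration of window-PI patterns stands in for random simulation followed by SAT enumeration): a PI minterm is a care minterm exactly when complementing the node changes some window output, which is what the miter above checks.

    from itertools import product

    # Toy miter check over a 2-input window.
    def node(x1, x2): return x1 & x2                    # the node being minimized
    def window_po(x1, x2, n): return n | (x1 ^ x2)      # window output in terms of PIs and node

    care = set()
    for x in product([0, 1], repeat=2):
        n_val = node(*x)
        if window_po(*x, n_val) != window_po(*x, 1 - n_val):    # the "miter" fires
            care.add(x)
    dont_care = sorted(set(product([0, 1], repeat=2)) - care)
    print("care set:", sorted(care))                    # [(0, 0), (1, 1)]
    print("don't-care minterms:", dont_care)            # [(0, 1), (1, 0)]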

Resubstitution
[Figure: a node X re-expressed using a different set of fanins]
- Resubstitution considers a node in a Boolean network and expresses it using a different set of fanins
- The computation can be enhanced by the use of don't-cares

Resubstitution with Don't-Cares – Overview
Consider all or some nodes in the Boolean network:
- Create a window
- Select possible fanin nodes (divisors)
- For each candidate subset of divisors
  - Rule out some subsets using simulation
  - Check resubstitution feasibility using SAT
  - Compute the resubstitution function using interpolation
    - a low-cost by-product of completed SAT proofs
- Update the network if there is an improvement

Resubstitution with Don't-Cares
- Given:
  - the node function F(x) to be replaced
  - the care set C(x) for the node
  - a candidate set of divisors {g_i(x)} for re-expressing F(x)
- Find:
  - a resubstitution function h(y) such that F(x) = h(g(x)) on the care set
- SPFD Theorem: the function h exists if and only if every pair of care minterms, x1 and x2, distinguished by F(x) is also distinguished by g_i(x) for some i
[Figure: the care set C(x), function F(x), and divisors g1, g2, g3, with h(g) = F(x)]

Example of Resubstitution
- Any minterm pair distinguished by F(x) should also be distinguished by at least one of the candidates g_i(x)
- Given: F(x) = (x1 ⊕ x2)(x2 + x3)
- Two candidate sets: {g1 = x1'x2, g2 = x1x2'x3} and {g3 = x1 ⊕ x2, g4 = x2x3}
- Set {g3, g4} cannot be used for resubstitution, while set {g1, g2} can (a sketch verifying this follows below)
[Table in the original slide: x, F(x), g1(x), g2(x), g3(x), g4(x)]
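
A brute-force Python check of the SPFD condition on this example, with F and the divisors as written above (here the care set is the whole Boolean space); this is an illustration, not the SAT-based check used in practice.

    from itertools import product

    # SPFD check: h(g) exists iff every pair of minterms distinguished by F
    # is also distinguished by at least one divisor.
    F  = lambda x1, x2, x3: (x1 ^ x2) & (x2 | x3)
    g1 = lambda x1, x2, x3: (1 - x1) & x2
    g2 = lambda x1, x2, x3: x1 & (1 - x2) & x3
    g3 = lambda x1, x2, x3: x1 ^ x2
    g4 = lambda x1, x2, x3: x2 & x3

    def resub_feasible(target, divisors):
        minterms = list(product([0, 1], repeat=3))
        return not any(target(*p) != target(*q) and
                       all(g(*p) == g(*q) for g in divisors)
                       for p, q in product(minterms, repeat=2))

    print("using {g1, g2}:", resub_feasible(F, [g1, g2]))   # True  (in fact F = g1 + g2)
    print("using {g3, g4}:", resub_feasible(F, [g3, g4]))   # False (no h(g3, g4) equals F)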

Checking Resubstitution using SAT
[Figure: the miter for the resubstitution check; note the use of the care set]
- The resubstitution function exists if and only if the SAT problem is unsatisfiable

Computing Dependency Function h by Interpolation (Theory)
- Consider two sets of clauses, A(x, y) and B(y, z), such that A(x, y) ∧ B(y, z) = 0
  - y are the only variables common to A and B
- An interpolant of the pair (A(x, y), B(y, z)) is a function h(y), depending only on the common variables y, such that A(x, y) ⇒ h(y) and h(y) ∧ B(y, z) = 0
[Figure: in the Boolean space (x, y, z), h(y) contains A(x, y) and does not intersect B(y, z)]

Computing Dependency Function h by Interpolation (Implementation)
Problem:
- Find a function h(y) such that C(x) ⇒ [h(g(x)) ⇔ F(x)], i.e. F(x) is expressed in terms of {g_i} on the care set
Solution:
- Prove the corresponding SAT problem "unsatisfiable"
- Derive the unsatisfiability resolution proof [Goldberg/Novikov, DATE'03]
- Divide the clauses into A clauses and B clauses
- Derive the interpolant from the unsatisfiability proof [McMillan, CAV'03]
- Use the interpolant as the dependency function h(g)
- Replace F(x) by h(g) if the cost function improves
Notes on this solution:
- it uses don't-cares
- it does not use Espresso
- it is more scalable
[Figure: clauses partitioned into A and B over common variables y, with interpolant h]
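
The following toy Python sketch illustrates only the interpolant property, not the resolution-proof-based construction of [McMillan, CAV'03]: when A(x, y) ∧ B(y, z) is unsatisfiable, the projection h(y) = ∃x. A(x, y) is one valid interpolant. The functions A and B are made up for the example.

    from itertools import product

    # Toy A(x, y) and B(y, z) with A & B unsatisfiable.
    A = lambda x, y: x & (1 - y)        # satisfiable only with y = 0
    B = lambda y, z: y & z              # satisfiable only with y = 1
    assert not any(A(x, y) and B(y, z) for x, y, z in product([0, 1], repeat=3))

    # One valid interpolant: the projection h(y) = exists x. A(x, y); it depends on y only.
    h = {y: int(any(A(x, y) for x in (0, 1))) for y in (0, 1)}

    # The two defining properties: A(x, y) implies h(y), and h(y) & B(y, z) is unsatisfiable.
    assert all(h[y] for x, y in product([0, 1], repeat=2) if A(x, y))
    assert not any(h[y] and B(y, z) for y, z in product([0, 1], repeat=2))
    print("interpolant h(y):", h)       # {0: 1, 1: 0}, i.e. h(y) = NOT y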

Sequential Synthesis and Sequential Equivalence Checking (SEC)
- Sequential SAT sweeping
- Retiming
- Sequential equivalence checking
Theme – ensuring verifiability

SAT Sweeping (Combinational CEC)
[Figure: applying SAT to the output of a miter; proving internal equivalences (A = B, C = D) in a topological order]
- Naïve approach
  - Build the output miter and call SAT
    - works well for many easy problems
- Better approach – SAT sweeping
  - based on incremental SAT solving
  - Detects possibly equivalent nodes using simulation
    - candidate constant nodes
    - candidate equivalent nodes
  - Runs SAT on the intermediate miters in a topological order
  - Refines the candidates using counterexamples
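
A Python sketch of the SAT-sweeping flow on a tiny AND-only netlist (illustrative; exhaustive simulation stands in for both the random simulation and the per-pair miter/incremental-SAT calls of the real algorithm).

    from itertools import product

    # Simulation groups nodes into candidate equivalence classes by signature;
    # a real implementation would then prove each candidate pair with SAT.
    inputs = ['a', 'b']
    gates = {'t': ('a', 'b'),       # t = a & b
             'u': ('b', 'a'),       # u = b & a   (equivalent to t)
             'v': ('t', 'b'),       # v = t & b = a & b
             'w': ('u', 'u')}       # w = u & u = u

    def simulate(pattern):
        val = dict(zip(inputs, pattern))
        for g, (x, y) in gates.items():            # relies on topological order
            val[g] = val[x] & val[y]
        return val

    signatures = {}
    for pattern in product([0, 1], repeat=len(inputs)):
        val = simulate(pattern)
        for node in inputs + list(gates):
            signatures.setdefault(node, []).append(val[node])

    classes = {}
    for node, sig in signatures.items():
        classes.setdefault(tuple(sig), []).append(node)

    for members in classes.values():
        if len(members) > 1:
            print("equivalent nodes:", members)    # ['t', 'u', 'v', 'w']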

Sequential SAT Sweeping
- Similar to combinational SAT sweeping in that it detects node equivalences
  - But the equivalences are sequential – guaranteed to hold only in the reachable state space
- Every combinational equivalence is a sequential one, but not vice versa
  - so run combinational SAT sweeping beforehand
- Sequential equivalence is proved by k-step induction
  - Base case
  - Inductive case
- Efficient implementation of induction is key!
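
A minimal Python sketch of proving one candidate sequential equivalence by 1-step induction (k = 1); exhaustive enumeration over states and inputs stands in for the SAT calls, and the two-register design is made up.

    from itertools import product

    # Candidate sequential equivalence: r1 == r2.
    init = {'r1': 0, 'r2': 0}

    def step(state, i):
        # The next-state functions read different registers, so the signals are NOT
        # combinationally equivalent, yet they stay equal in every reachable state.
        return {'r1': i ^ state['r1'], 'r2': i ^ state['r2']}

    # Base case: the candidate equivalence holds in the initialized frame.
    assert init['r1'] == init['r2']

    # Inductive case: from ANY state satisfying the assumption (not only reachable ones),
    # one more time-frame preserves the equivalence.
    inductive = all(step({'r1': r1, 'r2': r2}, i)['r1'] ==
                    step({'r1': r1, 'r2': r2}, i)['r2']
                    for r1, r2, i in product([0, 1], repeat=3)
                    if r1 == r2)                   # the induction hypothesis
    print("r1 == r2 proved by 1-step induction:", inductive)    # True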

k-step Induction (Base Case and Inductive Case)
[Figure: candidate equivalences {A = B}, {C = D} checked with SAT in unrolled time-frames, starting from the initial state (base case) or from a symbolic state (inductive case)]
- Base case: proving the internal equivalences in initialized frames 1 through k
- Inductive case: assuming the internal equivalences in uninitialized frames 1 through k, and proving them in a topological order in frame k+1

Efficient Implementation
Two observations:
1. Both the base and inductive cases of k-step induction are runs of combinational SAT sweeping
   - Tricks and know-how from the above are applicable
   - The same integrated package can be used
     - starts with simulation
     - performs node checking in a topological order
     - benefits from counter-example simulation
2. Speculative reduction
   - Deals with how assumptions are used in the inductive case

Speculative Reduction
Given:
- a sequential circuit
- the number of frames to unroll (k)
- candidate equivalence classes
  - one node in each class is designated as the representative
Speculative reduction moves fanouts to the representatives
- Makes 80% of the constraints redundant
- Dramatically simplifies the time-frames (observed 3x reductions)
- Leads to savings in runtime during incremental SAT
[Figure: adding assumptions without and with speculative reduction]

Guaranteed Verifiability for Sequential SAT Sweeping
Theorem: the circuit resulting from sequential SAT sweeping using k-step induction can be sequentially verified by k-step induction
(using some other k-step induction prover)
[Figure: design D1 synthesized into D2, and D1 verified against D2 by k-step induction]

Experimental Synthesis Results
- Academic benchmarks
  - 25 test cases (ITC '99, ISCAS '89, IWLS '05)
- Industrial benchmarks
  - 50 test cases
- Comparing three experimental runs
  - Baseline
    - combinational synthesis and mapping
  - Register correspondence (Reg Corr)
    - structural register sweep
    - register correspondence using partitioned induction
    - combinational synthesis and mapping
  - Signal correspondence (Sig Corr)
    - structural register sweep
    - register correspondence using partitioned induction
    - signal correspondence using non-partitioned induction
    - combinational synthesis and mapping

Experimental Synthesis Results
[Tables in the original slides: academic and industrial benchmark results (single clock domain); the numbers are geometric averages and their ratios]

Sequential Synthesis and Equivalence Checking
- Sequential SAT sweeping
- Retiming
- Sequential equivalence checking

Retiming and Resynthesis
- Sequential equivalence checking after
  1) combinational synthesis, followed by
  2) retiming, followed by
  3) combinational synthesis
  … is PSPACE-complete
- How to make it simpler?

How to Make It Simpler?
Like Hansel and Gretel – leave a trail of bread crumbs

Recording Synthesis History
[Figure: the WAIG and the HAIG with the node mappings between them]
- Two AIG managers are used
  - Working AIG (WAIG)
  - History AIG (HAIG)
  - Combinational structural hashing is used in both managers
- Two node-mappings are supported
  - Every node in the WAIG points to a node in the HAIG
  - Some nodes in the HAIG point to other nodes in the HAIG that are sequentially equivalent
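
A minimal Python sketch of the two-manager bookkeeping (illustrative names, not ABC's API): every WAIG node records its HAIG counterpart, and when a transform replaces a node the HAIG keeps both versions and links them into an equivalence class.

    class HistoryAIG:
        def __init__(self):
            self.nodes = []             # nothing is ever deleted from the HAIG
            self.equiv = {}             # HAIG node -> representative of its equivalence class

        def create_node(self, payload):
            self.nodes.append(payload)
            return len(self.nodes) - 1

        def set_equivalent(self, old_id, new_id):
            self.equiv[new_id] = self.equiv.get(old_id, old_id)

    class WorkingAIG:
        def __init__(self, haig):
            self.haig = haig
            self.to_haig = {}           # WAIG node -> HAIG node (the first node mapping)

        def create_node(self, name, payload):
            self.to_haig[name] = self.haig.create_node(payload)

        def replace_node(self, old_name, new_name, payload):
            # A transform replaces old_name in the WAIG; the HAIG keeps both and links them.
            self.create_node(new_name, payload)
            self.haig.set_equivalent(self.to_haig[old_name], self.to_haig[new_name])

    haig = HistoryAIG()
    waig = WorkingAIG(haig)
    waig.create_node('n', 'original fanin cone')
    waig.replace_node('n', "n'", 'rewritten fanin cone')   # e.g. after rewriting or retiming
    print(len(haig.nodes), "HAIG nodes; equivalences:", haig.equiv)   # 2 HAIG nodes; {1: 0}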

Recording History for Retiming
(backward retiming is similar)
[Figure: the WAIG and the HAIG at each step]
- Step 1: create the retimed node
- Step 2: transfer the fanout in the WAIG and note the equivalence in the HAIG
- Step 3: recursively remove the old logic and continue building the new logic

Sequential Rewriting
[Figure: a rewriting step on the sequential cut {a, b, b1, c1, c}, and the History AIG after the step, with the old and new nodes marked as sequentially equivalent]
- The History AIG accumulates sequential equivalence classes

Recording History with Windowing and ODCs
[Figure: a multi-input, multi-output window and the corresponding HAIG; the window outputs are not necessarily sequentially equivalent]
- In window-based synthesis using ODCs, the sequential behavior at the window PIs and POs is preserved
- In the HAIG, equivalence classes of window outputs can be used independently of each other

AIG Procedures Used for Recording History

    WAIG procedure      Corresponding HAIG action
    createAigManager    createAigManager
    deleteAigManager    deleteAigManager
    createNode          createNode, setWaigToHaigMapping
    replaceNode         setEquivalentHaigMapping
    deleteNode_recur    do nothing

Using HAIG for Tech-Mapping
- The HAIG contains all AIG structures, original and derived
- The accumulated structures can be used to improve the quality of technology mapping
  - by reducing structural bias (Chatterjee et al, ICCAD'05)
  - by performing integrated mapping and retiming (Mishchenko et al, ICCAD'07)
- HAIG-based mapping is scalable and leads to delay improvements (~20-30%) with small area degradation

Using HAIG for Equivalence Checking
- The sequential depth of a window-based sequential synthesis transform is the largest number of registers on any path from an input to an output of the window
- Theorem 1: if the transforms recorded in the HAIG have sequential depth 0 or 1, the equivalence classes of HAIG nodes can be proved by simple induction (k = 1) over two time-frames
- Theorem 2: if the inductive proof of the HAIG passes without counterexamples, then the original and final designs are sequentially equivalent
[Figure: node pairs (A, A') and (B, B') checked over two time-frames; sequential depth = 1]

Experimental SEC Results
[Table in the original slide]
Notes:
1. The comparison is done before and after register/signal correspondence
2. RegCorr, SigCorr and Mapping are synthesis runtimes
3. SEC is the comparison done in the usual way, without the HAIG
4. "HAIG" is the runtime of HAIG-based SEC
   - It includes the runtime of speculative reduction and inductive proving
   - It does not include the runtime of collecting the HAIG (~1% of synthesis time)

Summary and Conclusions
- Algorithms developed for either synthesis or verification are effective in the other
- This leads to new, improved ways to
  - synthesize
  - equivalence check
- Sequential synthesis can be effective, but one must be able to equivalence check
  - limit the scope of sequential synthesis
  - leave a trail of bread crumbs

end