Presentation on theme: "Accelerated Path-Based Timing Analysis with MapReduce"— Presentation transcript:
1  Accelerated Path-Based Timing Analysis with MapReduce
Tsung-Wei Huang and Martin D. F. Wong
Department of Electrical and Computer Engineering (ECE), University of Illinois at Urbana-Champaign (UIUC), IL, USA
2015 ACM International Symposium on Physical Design (ISPD)
2  Outline
- Static timing analysis
- Path-based timing analysis (PBA)
  - Performance bottleneck
  - Problem formulation
- Speeding up the PBA
  - Distributed computing
  - MapReduce programming paradigm
- Experimental results
- Conclusion
3  Static Timing Analysis (STA)
- Verifies the expected timing characteristics of integrated circuits
- Keeps track of path slacks and identifies critical paths with negative slack
- Increasing significance of variation
  - On-chip variation such as temperature change and voltage drop
  - Perform dual-mode (min-max) conservative analysis
Speaker note: We all know STA is an important step in the design flow.
4  Timing Test and Verification of Setup/Hold Check
- Sequential timing tests
  - Setup time check: "latest" arrival time (at) vs. "earliest" required arrival time (rat)
  - Hold time check: "earliest" arrival time (at) vs. "latest" required arrival time (rat)
- Timeline on slide: hold violation (failing) | no violation (passing, positive slack) | setup violation (failing)
Speaker note: One important task of STA is the sequential timing test, which verifies timing against the setup/hold guards.
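The slack arithmetic behind these checks can be sketched as follows. This is an illustrative sign convention (negative slack means a violation), not code from the paper; the function names and the numeric values are hypothetical.

```python
def setup_slack(arrival_late, required_late):
    """Setup test: the latest data arrival must precede the required time.
    Negative slack means a setup violation (hypothetical convention)."""
    return required_late - arrival_late

def hold_slack(arrival_early, required_early):
    """Hold test: the earliest data arrival must follow the required time.
    Negative slack means a hold violation (hypothetical convention)."""
    return arrival_early - required_early

# Example values (made up): a path arriving at t=95 against a setup
# requirement of 90 fails; an early arrival at 12 against a hold bound
# of 10 passes.
print(setup_slack(95, 90))   # -5 -> setup violation
print(hold_slack(12, 10))    #  2 -> hold check passes
```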
5  Two Fundamental Solutions to STA
- Block-based timing analysis
  - Linear topological propagation
  - Keeps only the worst quantities at each point
  - Very fast, but pessimistic
- Path-based timing analysis
  - Analyzes timing path by path instead of at single points
  - Enables common path pessimism removal (CPPR), advanced on-chip variation (AOCV), etc.
  - Reduces the pessimism margin
  - Very slow (exponential number of paths), but more accurate
*Source: Cadence Tempus white paper
Speaker note: There are two fundamental solutions to STA. The first is block-based timing analysis, which performs a linear topological scan of the circuit and propagates timing in topological order. During the propagation, we keep track of only the "worst" timing quantities at each point.
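The block-based propagation described above can be sketched in a few lines: a topological sweep that keeps only the worst (latest) arrival time at each node. This is a minimal illustration on a hypothetical graph, not the paper's timer.

```python
from collections import defaultdict, deque

def block_based_arrival_times(edges, sources):
    """Propagate worst-case (latest) arrival times in topological order.
    edges: dict mapping node u -> list of (v, delay); sources start at t=0."""
    indeg = defaultdict(int)
    nodes = set(sources)
    for u, fanout in edges.items():
        nodes.add(u)
        for v, _ in fanout:
            nodes.add(v)
            indeg[v] += 1
    at = {n: float("-inf") for n in nodes}
    for s in sources:
        at[s] = 0.0
    queue = deque(n for n in nodes if indeg[n] == 0)
    while queue:
        u = queue.popleft()
        for v, delay in edges.get(u, []):
            at[v] = max(at[v], at[u] + delay)   # keep only the worst quantity
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return at

# Hypothetical circuit: two reconvergent paths from "a" to "d".
g = {"a": [("b", 2.0), ("c", 5.0)], "b": [("d", 1.0)], "c": [("d", 1.0)]}
print(block_based_arrival_times(g, ["a"])["d"])  # 6.0, via the worst path a->c->d
```

Note how the individual path a->b->d (arrival 3.0) is lost: block-based analysis retains only the merged worst value at "d", which is exactly the source of its pessimism.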
6  CPPR Example – Data Path Slack with CPPR Off
Pre-common-path-pessimism-removal (CPPR) slack (some operands are elided on the slide):
- Data path 1: ((120 + ( )) - 30) - ( ) = -15 (critical)
- Data path 2: ((120 + ( )) - 30) - ( ) = -30 (critical)
7  CPPR Example – Data Path Slack with CPPR On
Post-common-path-pessimism-removal (CPPR) slack (some operands are elided on the slide):
- Data path 1: ((120 + ( )) - 30) - ( ) + 5 = -10 (critical), with CPPR credit 1 = +5
- Data path 2: ((120 + ( )) - 30) - ( ) + 40 = +10, with CPPR credit 2 = +40
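The two slides above reduce to one operation: adding the common-path credit back to the pre-CPPR slack. A one-line sketch with the slide's numbers (the function name is illustrative):

```python
def post_cppr_slack(pre_cppr_slack, cppr_credit):
    """Remove common-path pessimism by crediting the shared clock segment."""
    return pre_cppr_slack + cppr_credit

# Values from the example slides:
print(post_cppr_slack(-15, 5))    # -10: data path 1 is still critical
print(post_cppr_slack(-30, 40))   #  10: data path 2 now passes
```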
8  Example: Impact of Common-Path-Pessimism Removal (CPPR)
9  Problem Formulation of PBA
Consider the key coding block of PBA, after block-based timing propagation has produced early/late delays on the edges.
Input:
- A given circuit G = (V, E)
- A given test set T
- A parameter k
Output:
- The top-k critical paths in the design
Goal & application:
- Speed up the PBA time
- CPPR benchmarks (with clock trees) from the TAU 2014 CAD contest
10  Key Observation of PBA
PBA is a time-consuming process, but it exposes high parallelism:
- Multiple timing tests (e.g., setup, hold, PO, etc.) are independent
- The graph-based abstraction isolates the processing of each timing test
Two parallelization options:
- Multi-threading: shared-memory architecture, a single computing node with multiple cores
- Distributed computing: distributed-memory architecture, multiple computing nodes each with multiple cores (the goal of this paper!)
11  Conventional Distributed Programming Interface
Advantages of distributed computing:
- High parallelism: multiple computing nodes with multiple cores
- Performance typically scales up as the core count grows
The MPI programming library:
- Explicitly specifies the details of message passing
- Tedious and error-prone
- Very long development time and low productivity
- Highly customized for performance tuning
Slide graphic: a long list of MPI primitives (MPI_Init, MPI_Send, MPI_Recv, MPI_Isend, MPI_Irecv, MPI_Reduce, MPI_Scatter, MPI_Gather, MPI_Allgather, MPI_Allreduce, MPI_Barrier, MPI_Finalize, MPI_Comm, ...)
Speaker note: The conventional programming interface to distributed computing is the MPI library.
12  MapReduce – A Programming Paradigm for Distributed Systems
- First introduced by Google in 2004
- Simplifies distributed computing for big-data processing
- Open-source implementations: Hadoop (Java), Spark (Scala), MR-MPI (C++), etc.
Speaker note: Because of this problem with MPI, Google introduced the concept of MapReduce, a programming paradigm that simplifies distributed computing for big-data processing.
13  Standard Form of a MapReduce Program
- Map operation: partition the data set into pieces and assign work to processors; each processor generates output data and assigns each string a "key"
- Collate operation: output data with the same key are collected to a unique processor
- Reduce operation: derive the solution from each unique data set
A traditional MPI program (>1,000 lines of MPI_Isend, MPI_Irecv, MPI_Send, MPI_Recv, MPI_Barrier, ...) collapses into a MapReduce program of fewer than 10 lines.
14  Example – Word Counting
- Count the frequency of each word across a document set
- 3288 TB data set; 10 min to finish on a Google cluster
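The word-counting flow can be simulated in a single process to make the three phases concrete. This is a paradigm sketch (the paper's implementation uses the C++ MR-MPI library, not this code); `map_phase`, `collate`, and `reduce_phase` are illustrative names.

```python
from collections import defaultdict

def map_phase(doc):
    # Emit one (key, value) pair per word occurrence.
    return [(word, 1) for word in doc.split()]

def collate(pairs):
    # Group values by key, as the shuffle would do across processors.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Derive the final count for each unique key.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the quick fox", "the lazy dog", "the fox"]
pairs = [kv for d in docs for kv in map_phase(d)]
print(reduce_phase(collate(pairs))["the"])  # 3
```

On a real cluster, each mapper runs on a different document shard and the collate step moves pairs between nodes; the program logic itself stays this small.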
15  MapReduce Solution to PBA (I)
- Map: partition the test set across the available processors; each processor generates its top-k critical paths; each path is associated with a global key (identical across all paths)
- Collate: aggregate the paths with the same key and combine them into a path string
- Reduce: sort the paths from the path string and output the top-k critical paths
Pseudocode on slide:
- Mapper(t): generate the search graph for test t; find the top-k critical paths for t; emit a key-value pair for each path
- Reducer(s): parse the paths from path string s; sort the paths; output the top-k critical paths
16  MapReduce Solution to PBA (II)
- Mapper: extract the search graph for each timing test; find the k critical paths on each search graph [Huang and Wong, ICCAD'14]
- Reducer: sort the paths according to their slacks and output the globally top-k critical paths
Slide figure: input circuit graph → extraction of graphs and paths (Map) → top-1 critical path (Reduce)
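The mapper/reducer split above can be sketched as follows, with the per-test path search replaced by a stub that returns (slack, path) tuples. The single shared key is what funnels every path to one reducer; `GLOBAL_KEY`, the test names, and the path data are all hypothetical.

```python
import heapq

K = 3
GLOBAL_KEY = "paths"  # identical key across all paths -> one reducer sees them all

def mapper(test, find_paths):
    # Per timing test: search its graph, keep the local top-k
    # (smallest slack = most critical), emit one key-value pair per path.
    local = heapq.nsmallest(K, find_paths(test))
    return [(GLOBAL_KEY, path) for path in local]

def reducer(values):
    # Sort all collected paths by slack; keep the globally top-k.
    return heapq.nsmallest(K, values)

# Hypothetical per-test path sets, as (slack, path-name) tuples:
tests = {
    "setup@FF1": [(-10, "p1"), (5, "p2"), (7, "p3"), (9, "p4")],
    "hold@FF2":  [(-30, "q1"), (2, "q2")],
}
emitted = [v for t in tests for _, v in mapper(t, lambda name: tests[name])]
print(reducer(emitted))  # [(-30, 'q1'), (-10, 'p1'), (2, 'q2')]
```

Because each mapper already trims its output to k paths, the reducer sorts at most k paths per test rather than the full, exponentially large path set.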
17  Reducing the Communication Overhead
Messaging latency to a remote node is expensive.
- Data locality: each computing node holds a replica of the circuit graph, so no graph copies pass between the master node and the slave nodes
- Hidden reduce: a reducer call on each processor before the collate step reduces the amount of path-string data passing between computing nodes
*Source: Intel clustered OpenMP white paper
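The "hidden reduce" behaves like a local combiner: each node trims its own emissions to the top-k before the collate ships anything over the network. A minimal sketch, with hypothetical names and data:

```python
import heapq

K = 2

def hidden_reduce(local_pairs):
    # Combiner run on each node before collate: only the node's own top-k
    # most-critical paths (smallest slack) cross the network.
    return heapq.nsmallest(K, local_pairs)

# Hypothetical (slack, path) pairs produced locally on one node:
node_paths = [(4, "a"), (-7, "b"), (0, "c"), (9, "d")]
shipped = hidden_reduce(node_paths)
print(shipped)                              # [(-7, 'b'), (0, 'c')]
print(len(node_paths), "->", len(shipped))  # 4 -> 2 pairs cross the network
```

Any path pruned here could never appear in the global top-k, so the final answer is unchanged while the message volume shrinks.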
18  Experimental Results
Programming environment:
- C++ with a C++-based MapReduce library (MR-MPI)
- 2.26 GHz 64-bit Linux machines
- UIUC campus cluster (up to 500 computing nodes and 5,000 cores)
Benchmarks:
- TAU 2014 CAD contest on path-based CPPR
- Million-scale circuit graphs
20  Experimental Results – Runtime (II)
Runtime breakdown across Map, Collate, and Reduce:
- Map occupies the majority of the runtime; ~10% goes to process communication
- The communication overhead grows as the path count increases
- The hidden reduce improves it by ~15%
21  Experimental Results – Comparison with Multi-threading on a Single Node
22  Conclusion
- A MapReduce-based solution to PBA: coding ease, promising speedup, and high scalability
- Analyzes million-scale graphs within a few minutes
Future work:
- Investigate more EDA applications on cluster computing (GraphX, Spark, etc.)