
1 Technology Mapping
Shubham Rai, Akash Kumar

2 Outline
Introduction to technology mapping
Different steps involved in technology mapping
Two main matching strategies: structural matching and Boolean matching
FPGA technology mapping

3 Introduction Technology mapping is the final phase of logic synthesis; the previous phases are technology independent. Technology mapping's goal is to convert the logic design into a synthesizable schematic. Flow: RTL → logic minimization and optimization → technology mapping → schematic (verify). Technology mapping is the interface between the two.

4 Technology Mapping
Implements the technology-independent network by matching pieces of the network with the logic cells available in a technology-dependent cell library: the process of binding nodes in the network to cells in the library. While performing technology mapping, the algorithm attempts to minimize area while meeting other user constraints. What kinds of other constraints?

5 Technology Mapping - General
Library-based technology mapping: standard-cell design with a limited set of pre-designed cells. FPGA technology mapping: look-up-table based, where each LUT can implement a large number of functions (e.g., all functions of 5 inputs and 1 output, as in Xilinx FPGAs), or multiplexer based, where each FPGA cell consists of a number of multiplexers (as in Actel FPGAs). Why multiplexers?

6 Technology Mapping – Library Based
Standard-cell-based designs. Library cells are limited by: library design costs; delay/power/reliability limits; impedance/capacitance limits.

7 Cell Library – example
Columns: logic gate | beta ratios available | power levels (input cap) | # of Vt options | # of tapered cells | total # of cells, for INV, NAND2, NAND3, NAND4, NOR2, NOR3, AOI21, AOI12, AOI22, OAI21, OAI12, OAI22 (e.g., the INV row reads 5, 34, 2, 340). [Remaining table entries were lost in extraction.]
Beta ratio: the relative strength of the nMOS and pMOS transistors (nfets and pfets); the ratio of the strengths is referred to as the "beta ratio".
Taper: the ratio of output to input capacitance for each stage.
Taken from the IBM Journal of R&D.

8 Cell Library – example IBM Standard-cell library used for the POWER4.
These 2132 cells are just the basic library, yet only 12 gate types! Other gates used: XOR/XNOR (mainly for muxes and comparators). There are also large buffer and latch/FF libraries. The really tough cases are handled by custom-made cells.

9 Cell Library – example Ignoring the beta ratios, dual Vt, Gate strength The designer has the following gates drawn to choose from INV AOI22 NAND2 NAND3 OAI21 NOR2 OAI22 AOI21

10 Standard cells – AOI22
AOI22 implements Y = ((A0·A1) + (B0·B1))′. [Truth table and icon/diagram garbled in extraction; inputs A0, A1, B0, B1, output Y, with X denoting don't-care entries.]

11 Technology Mapping – Flow
Translates logic equations into a network of technology cells, transforming every cell in the network. A three-step procedure: decomposition, partitioning, matching and covering. (In some books and tools, stages are merged or split.)

12 Technology Mapping – Flow
Decomposition: restructures the Boolean function into the subject graph.
Partitioning: partitions the big network into sub-networks.
Matching and covering: finds matches between patterns and regions of cells in the subject graph, and uses the patterns to cover the subject graph while minimizing the cost function.

13 Decomposition

14 Partitioning Cut-based method

15 Matching and Covering

16 Decomposition and partitioning

17 Logic Decomposition Creating a new representation of the circuit.
Decomposing the network into new primitives. Most common choice is NAND2 and INV. The library cells must be decomposed too.

18 Logic Decomposition – Example
Base functions: INV, NAND2. Pattern trees for the library cells inv1, nand2, nor2, nand3, oai21, each decomposed into the base functions (note the alternate representations of NOR2 and OAI21).

19 Logic Decomposition - Example
These two decompositions both match the NAND4 gate, but they are not identical with respect to timing. Complex gates can be matched by completely different patterns – optimization possibilities. Common solution: make all trees left- or right-oriented.

20 Partitioning Breaking the big network into smaller sub-networks
Each sub-network defined as a subject Boolean graph Reduce to many multi-input, single-output networks Reduces the size of the covering problem Decomposition and partitioning are heuristics Reduce problem difficulty Hurt the quality of the final solution They are intractable problems.

21 Partitioning Algorithm
Mark the vertices with out-degree greater than one. Edges whose tail vertices are marked define the partition boundary. These steps convert the original graph into multiple-input, single-output subject graphs. If a subject graph has too many inputs, it can be partitioned further.
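The two steps above can be sketched in Python (a minimal illustration; the netlist representation and all names are assumptions, not from any particular tool):

```python
from collections import defaultdict

def partition_at_fanouts(fanins, primary_outputs, primary_inputs):
    """Cut the subject DAG into multi-input single-output trees by
    marking every vertex with out-degree > 1 as a tree root."""
    outdeg = defaultdict(int)
    for node, ins in fanins.items():
        for i in ins:
            outdeg[i] += 1
    # Roots of the sub-networks: primary outputs and multi-fanout vertices.
    roots = set(primary_outputs) | {v for v, d in outdeg.items() if d > 1}
    trees = {}
    for r in roots:
        internal, leaves = [r], []
        stack = list(fanins.get(r, []))
        while stack:
            v = stack.pop()
            if v in roots or v in primary_inputs:
                leaves.append(v)          # boundary edge: v is a leaf here
            else:
                internal.append(v)
                stack.extend(fanins.get(v, []))
        trees[r] = (internal, leaves)
    return trees
```

Here a multi-fanout node (one feeding two outputs, say) becomes a root of its own subject tree and appears as a leaf in the trees of its fanouts.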

22 Partitioning Multiple-output networks need to be converted to single-output ones.
The red one can be further partitioned into smaller problems.

23 Pattern Matching and Covering
One of the crucial tasks in technology mapping: determining which cells in the library may be used to implement a set of nodes in the subject Boolean network [1]. Two main types of pattern matching: structural matching and Boolean matching. [1] M. Zhao and S. S. Sapatnekar, "A new structural pattern matching algorithm for technology mapping," Design Automation Conference.

24 Structural matching

25 Structural Matching Match the network with library cells recursively until the entire network is matched. Common trick: adding pairs of inverters. Main issue: graph isomorphism. There is a dependency between the decomposition method and the solution space! For every cell, all of its isomorphic representations must be tested.

26 Pattern Matching Example - Library
NAME (AREA) INV (1) AND2 (3) NAND2 (2) OR2 (3) NOR2 (2) OAI21 (3) NAND3 (3) AOI22 (4) NAND4 (4)

27 Pattern Matching Example
Find all possible patterns in the subject network

28 Pattern Matching Example
Inverter patterns:

29 Pattern Matching Example
NAND2 patterns:

30 Pattern Matching Example
AND2 patterns:

31 Pattern Matching Example
OR2 patterns:

32 Pattern Matching Example
NOR2 patterns:

33 Pattern Matching Example
NAND3 patterns:

34 Pattern Matching Example
OAI21 patterns:

35 Pattern Matching Example
AOI22 patterns:

36 Pattern Matching Example
All patterns together

37 Structural Matching Algorithm
A simple algorithm to identify whether a pattern tree is isomorphic to a subgraph of the subject tree. Isomorphic – same shape! It only works when a single type of base function is used in decomposition (here NAND2; an inverter can be seen as a 1-input NAND). Degree indicates the number of children. u is the root of the pattern graph; v is a vertex of the subject graph.

38 Convert Netlist to Graph
Leaf (input) node INV NAND

39 Example SUBJECT TREE PATTERN TREES

40 Structural Matching Algorithm
Match (u, v) {                      // matches isomorphic graphs
  if (u is a leaf) return true;     // leaf of pattern graph
  if (v is a leaf) return false;    // leaf of subject graph
  if (degree(v) != degree(u)) return false;   // different gate
  if (degree(v) == 1) {             // single-input gate (inverter)
    uc = child of u; vc = child of v;
    return Match(uc, vc);           // recursive call
  } else {                          // two-input gate: try both input orders
    ul = left-child of u;  ur = right-child of u;
    vl = left-child of v;  vr = right-child of v;
    return (Match(ul, vl) AND Match(ur, vr))
        OR (Match(ur, vl) AND Match(ul, vr));
  }
}

41 Graph Isomorphism These are not isomorphic !!

42 Structural Matching Algorithm – 2
The Match algorithm is only suitable for one type of base gate. Solution: tree-based matching using automata – something like a state machine. An automaton (with reset and failure states) is used to represent the library: trees are encoded as strings of characters, and matching is done by a string-recognition algorithm. Different versions of a pattern need to be specified separately.

43 Problems with Structural Mapping
Trees only: no matching across fanout nodes, and no XOR gates (they cannot be represented as trees). Imperfect matching. Example: f = xy + x'y' + y'z and g = xy + x'y' + xz have different structures but the same function (verify with a truth table) – not identified by structural matching. Solution: use Boolean matching.

44 Problems with Structural Mapping
g = xy + x’y’ + xz f = xy + x’y’ + y’z

45 Boolean Matching

46 Boolean Matching Relies on matching the pattern to the subject network logically – performing the same function. Decomposition independent. Patterns that match structurally will always match with Boolean matching, but the other way around is not always true: structurally matched patterns are also logically equivalent, while two logically equivalent patterns may have different structures. A Survey of Boolean Matching Techniques, Luca Benini and Giovanni De Micheli, ACM ToDAES, July 1997.

47 Boolean Matching Let us consider a cluster function (subject graph) f(X), with n input variables, that are entries of X. Let us also consider a pattern function g(Y) where the variables in Y are m cell inputs. For the sake of simplicity we assume n = m. Matching of two functions f and g involves comparing two functions for equivalence and finding an assignment of the cluster variables to pattern inputs Only consider function equivalence

48 Equivalence of Functions
Example Can the desired functionality below be achieved with the available cell? Cluster cell: desired functionality Library cell: available functionality

49 Equivalence of Functions
Permutation of input variables The ordering of variables may need to be changed to give equivalent behaviour Negation of input variables: when the polarity of inputs can be altered Negation of output: when the polarity of outputs can be altered The polarity of inputs/outputs can often be altered because I/Os originate and terminate on registers or I/O pads yielding signals and their complements

50 Permutation of Input Variables
g (X) = f ( (X) )  (rho) is a permutation of X Ex. f = x1 x3 + x2 x4 and g = x2 x4 + x1 x3  maps x x2 x x1 x x4 x x3  = rho The functions are equivalent when the variable order is allowed to change: defined as P-equivalent

51 Negation of Input Variables
Let f(X) and g(X) be two functions with X = {x1, x2, …, xn}. g(X) = f(φ(X)), where φ (phi) maps each xi to itself or its complement. Ex. f = x1 + x2 + x3 and g = x1' + x2 + x3; φ maps x1 → x1', x2 → x2, x3 → x3. The functions are equivalent when the polarity of an input variable is allowed to change: defined as N-equivalent.

52 Negation of Output
g(X) = f'(X) (or equivalently g'(X) = f(X)). Ex. f = x1 + x2 and g = x1'·x2': by De Morgan, g(X) = f'(X).

53 Equivalence of Functions
NPN-equivalent: equivalent under input Negation, input Permutation, and output Negation.
PN-equivalent: equivalent under input Permutation and input Negation.
P-equivalent: equivalent under input Permutation.
N-equivalent: equivalent under input Negation.

54 Boolean Matching
Given functions f(X) and g(Y), where X = {x1, x2, …, xn} and Y = {y1, y2, …, yn}, ψ (psi) maps each xi to a unique yj or yj'. g(Y) = f(ψ(X)) (or f'(ψ(X))). There are 2^n · n! · 2 possible mappings: 2^n for input negations, n! for variable orderings, and 2 for output negation. In order to compare two functions we need that many checks!

55 Boolean Matching Algorithms
Two main algorithms Canonical Forms Boolean Signatures Canonical: Reduced to the simplest and most significant form possible without loss of generality

56 Canonical Forms Burch and Long [1992] introduced a form for checking input-polarity equivalence Allows checking of N-equivalence in constant time Canonical form: each distinct function corresponds to a unique distinct form. Let’s define the canonical form of N-equivalence of function F as CN(F). If CN(F) = CN(G), then F is N-equivalent to G. F can be implemented using G by choosing/varying the polarity of input signals The canonical form for N-equivalence relies on ROBDD representation – Reduced ordered binary decision diagram
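As an illustration of the canonical-form idea (a truth-table stand-in for the ROBDD-based form, workable only for small n; all names here are assumptions), one can take the lexicographically smallest truth table over all input-polarity assignments:

```python
from itertools import product

def n_canonical(f, n):
    """Canonical form for N-equivalence: the lexicographically smallest
    truth table obtainable by complementing any subset of the inputs.
    Two functions are N-equivalent iff their canonical forms are equal."""
    tables = []
    for flips in product([0, 1], repeat=n):
        # Truth table of f with input i complemented whenever flips[i] == 1.
        table = tuple(int(f(*(b ^ s for b, s in zip(bits, flips))))
                      for bits in product([0, 1], repeat=n))
        tables.append(table)
    return min(tables)
```

For example, a·b and a'·b share a canonical form (they differ only in input polarity), while a·b and a+b do not.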

57 Canonical Forms – P Equivalence
Similar forms can be defined for input permutation. Semi-canonical forms are used to represent input permutations, for reasons of speed. For each pattern cell, the set of all its semi-canonical forms is generated and stored in a hash table. The cluster function is matched by constructing ONE of its semi-canonical forms and checking the library's hash table.

58 Semi-Canonical Forms
[Figure: each library cell's semi-canonical forms are stored in a hash table keyed by cell id; the subject graph hashes to "Cbd", which matches Cell 2.]

59 Boolean Signatures A signature of a Boolean function is a compact representation that characterizes some of the properties of the function itself. Each Boolean function has unique signatures; however, a signature may be related to more than one function. This problem is called aliasing. A signature match is therefore a necessary but not a sufficient condition – the difference with canonical forms.

60 Boolean Signatures Function characteristics used for signatures
Symmetries of a function. a set of variables that are pair-wise interchangeable without affecting the logic The number of unate/binate variables Variables with which a function monotonically increases/decreases are called unate variables, else binate Also reduces the number of variable permutations that need to be considered in the search of a match

61 Boolean Signatures – Example
Consider the following pattern function from a library: g = s1s2a + s1s2'b + s1's3c + s1's3'd. Function g has 4 unate variables and 3 binate variables. Consider a cluster function f with n = 7 variables. A necessary condition for f to match g is to also have 4 unate variables and 3 binate variables. If this is the case, only 3!·4! = 144 variable orders need to be considered in the worst case. (A match can be detected before all 144 variable orders are considered.) Compare this to the overall number of permutations, 7! = 5040, which is much larger.
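The unate/binate signature from this example can be computed by comparing cofactors (an illustrative sketch; the helper names are assumptions):

```python
from itertools import product

def is_unate(f, names, x):
    """A variable is unate if f is monotone in it (in either direction):
    compare the two cofactors f|x=0 and f|x=1 over all other inputs."""
    others = [v for v in names if v != x]
    pos = neg = True
    for bits in product([0, 1], repeat=len(others)):
        env = dict(zip(others, bits))
        f0 = bool(f(**{**env, x: 0}))
        f1 = bool(f(**{**env, x: 1}))
        if f0 and not f1:
            pos = False            # not monotone increasing in x
        if f1 and not f0:
            neg = False            # not monotone decreasing in x
    return pos or neg

# Pattern function from the slide: g = s1 s2 a + s1 s2' b + s1' s3 c + s1' s3' d
g = lambda a, b, c, d, s1, s2, s3: ((s1 and s2 and a) or (s1 and not s2 and b)
                                    or (not s1 and s3 and c)
                                    or (not s1 and not s3 and d))
names = ['a', 'b', 'c', 'd', 's1', 's2', 's3']
unate = [v for v in names if is_unate(g, names, v)]
```

Running this confirms the slide's count: a, b, c, d are unate and s1, s2, s3 are binate.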

62 Boolean Signatures Reduces the search space
Signatures are stored offline. At run time, the signature of the candidate function is computed and compared with the library; only if all signatures match is the equivalence checked. A signature is invariant to input permutation, input negation, and output negation.

63 Boolean Signatures

64 Covering

65 Covering The technology mapping problem is the optimization problem of finding a minimum cost covering of the subject graph by choosing from a collection of pattern graphs in the library. A covering is a collection of pattern graphs such that every node of the subject graph is contained in one (or more) of the pattern graphs.

66 Covering Example – Library
INV (1) AND2 (3) NAND2 (2) OR2 (3) NOR2 (2) OAI21 (3) NAND3 (3) AOI22 (4) NAND4 (4) NAME (AREA)

67 Covering Example Subject graph (decomposed to NAND and INV):

68 Covering example Trivial Covering Solution: AREA = 22

69 Covering example Better Covering Solution: AOI22 AND2 OR2 OR2
AREA = 17

70 Covering example Alternative Covering Solution: NAND3 AND2 OAI21 NOR2
AREA = 14

71 Covering How to determine the best – lowest area/delay cover?
Most subject graphs are trees, and so are the pattern graphs. The problem is therefore reduced to tree-covering-by- tree. Optimal algorithms exist to solve the above by using dynamic programming. However, we are not quite ready yet.

72 Covering Some networks are not trees, but DAGs – directed acyclic graphs. DAG covering is NP hard. Common solution – breaking the DAG into trees, at each of the fan out points.

73 Covering: Treeifying
The following network is a DAG: there exists a route from a leaf (input) to two roots (outputs). [Figure: DAG with its leaves and roots labelled.]

74 Treeifying The DAG is broken into trees at the fanout points. [Figure: the resulting trees, e.g. Tree 2.]

75 Covering: Dynamic Programming
Visit subject tree bottom up At each vertex Attempt to match: Locally rooted sub-tree to all library cells. Find best match and record There is always a match when the base cells are in the library Bottom-up search yields an optimum cover Caveat: Mapping into trees is a distortion for some cells Overall optimality is weakened by the overall strategy of splitting into several stages

76 Example
Pattern trees and costs: INV = 2, NAND = 3, AND = 4, OR = 5. [Figure: subject tree built from NAND and INV nodes with leaf (input) nodes, alongside the four pattern trees.]

77 Example: Lib
Walking the subject tree bottom-up (vertices s, u, t, r):
Match of s: t1, cost = 2. Match of u: t2, cost = 3.
Match of t: t1, cost = 2 + 3 = 5; match of t: t3, cost = 4 (better).
Match of r: t2, cost = 9; match of r: t4, cost = 5 + 3 = 8 (better).
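The bottom-up dynamic program can be sketched as follows, using the example costs (INV 2, NAND 3, AND 4, OR 5) and NAND2/INV pattern trees; the tuple encoding of trees is an assumption for illustration:

```python
# Subject trees: ('nand', l, r), ('inv', c), or ('leaf', name).
# Pattern trees use '*' as a wildcard leaf that binds to any subtree.
LIBRARY = [
    ('INV',  2, ('inv', '*')),
    ('NAND', 3, ('nand', '*', '*')),
    ('AND',  4, ('inv', ('nand', '*', '*'))),
    ('OR',   5, ('nand', ('inv', '*'), ('inv', '*'))),
]

def match(pattern, subject):
    """Return the subject subtrees bound to the pattern's wildcard leaves,
    or None if the pattern does not match at this root (both NAND input
    orders are tried, as in the Match algorithm)."""
    if pattern == '*':
        return [subject]
    if subject[0] == 'leaf' or pattern[0] != subject[0]:
        return None
    if pattern[0] == 'inv':
        return match(pattern[1], subject[1])
    for sl, sr in ((subject[1], subject[2]), (subject[2], subject[1])):
        left, right = match(pattern[1], sl), match(pattern[2], sr)
        if left is not None and right is not None:
            return left + right
    return None

def best_cost(subject):
    """Bottom-up dynamic programming: the best cover of a subtree is the
    cheapest match at its root plus the best covers of the match's inputs."""
    if subject[0] == 'leaf':
        return 0
    best = float('inf')
    for _name, cost, pattern in LIBRARY:
        bound = match(pattern, subject)
        if bound is not None:
            best = min(best, cost + sum(best_cost(s) for s in bound))
    return best
```

For instance, INV(NAND(a, b)) costs min(AND = 4, INV + NAND = 5) = 4, and NAND(INV(a), INV(b)) costs min(OR = 5, NAND + 2·INV = 7) = 5.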

78 Covering: Cost Functions
In the earlier example, the cost was only area: the area cost is just the sum of all the cell areas. Timing cost – the delay of the slowest path. Delay is computed with (max, +) rules: add the delay of the match to the highest cost among the sub-trees. Delay may be fanout dependent, requiring a look-ahead scheme; it requires finding the critical path, and depends on the preceding gates, sizing, and physical routing. A power cost function requires knowledge of the activity factors for every pattern tree!

79 Polarity Assignment For structural covering, the optimal polarity assignment can be achieved using a clever trick All connections between base gates are replaced by inverter pairs. Connections to or from inverters don’t need to be replaced. Inverter pairs don’t affect the graph behaviour Add inverters in input and output gates as well Transformation applied to both pattern graphs and the subject graph

80 Inverter Pair Heuristic
Also useful when the library does not have any base gate into which the cells are decomposed The dynamic programming covering algorithm can take advantage of the existence of both polarities for each signal in the subject graph Newly added inverters removed if overall cost is not reduced Fake element added to the library: inverter pair. Actual implementation: direct connection, 0 cost.

81 Inverter Pair Heuristic Example
What if only an AND gate and inverters were available? [Figure: subject graph annotated with inverter pairs – some can be matched, some cannot, and added inverters may be removed if they do not help.]

82 FPGA Technology mapping

83 FPGA Technology Mapping
Decomposition and partitioning are still done to reduce the problem to a manageable size. Matching and covering done differently. Look-up table based Multiplexer based

84 A Two-input LUT
[Figure: (a) circuit for a two-input LUT; (b) an example function; (c) storage cell contents in the LUT.] 2-input LUTs have 4 unique input combinations, i.e. output possibilities. This implies 2^4 = 16 possible functions!

85 A Three-input LUT
[Figure: circuit for a three-input LUT.] 3-input LUTs have 8 unique input combinations, i.e. output possibilities. This implies 2^8 = 256 possible functions! Message: with each extra input, the number of possible functions increases super-exponentially.

86 LUT (Look-Up Table) Functionality
Look-up tables are the primary elements for logic implementation in FPGAs. Each LUT can implement any function of 4 inputs. [Figure: 4-input LUT symbols and truth tables; garbled in extraction.] Might result in a waste of resources if the function is too simple.

87 5-Input Functions implemented using two LUTs
One CLB slice can implement any function of 5 inputs: the logic function is partitioned between two LUTs, and the F5 multiplexer selects between them. [Figure: two 4-input LUTs (configurable as LUT/ROM/RAM) feeding the F5 mux.]

88 5-Input Functions implemented using two LUTs
[Figure: schematic view of the two LUTs and the output.]

89 CLB Structure F5IN CIN CLK CE COUT D Q CK S R EC O G4 G3 G2 G1 Look-Up Table Carry & Control Logic YB Y F4 F3 F2 F1 XB X BY SR SLICE The configurable logic block (CLB) contains two slices. Each slice contains two 4-input look-up tables (LUT), carry & control logic and two registers. There are two 3-state buffers associated with each CLB, that can be accessed by all the outputs of a CLB. Xilinx is the only major FPGA vendor that provides dedicated resources for on-chip 3-state bussing. This feature can increase the performance and lower the CLB utilization for wide multiplex functions. The Xilinx internal bus can also be extended off chip. Each slice has 2 LUT-FF pairs with associated carry logic Two 3-state buffers (BUFT) associated with each CLB, accessible by all CLB outputs

90 CLB Slice Structure Each slice contains two sets of the following:
Four-input LUT: any 4-input logic function, or 16-bit x 1 sync RAM, or a 16-bit shift register. Carry & control: fast arithmetic logic, multiplier logic, multiplexer logic. Storage element: latch or flip-flop, set and reset, true or inverted inputs, sync. or async. control. Two slices form a CLB; the slices can be used independently or together for wider logic functions. Within each slice, the LUT and the flip-flop can likewise be used for the same function or for independent functions. The flip-flops do not handcuff designers into only having a set or clear, and for more ASIC-like flows the flip-flop can be used as a latch, so designers do not need to re-code the design for the device architecture.

91 Look-up Table FPGAs The virtual library of look-up table FPGAs is represented by all logic functions that can be realized by the tables. Every n-input LUT can implement functions Starting from Xilinx Virtex 5, each slice contains 4 LUTs each 6-input i.e. 264 functions in a LUT! Even for small LUTs, enumerating the library cells is not practical.

92 Look-up Table FPGAs Each n-input LUT can implement any function
Given a combinational logic network, technology mapping consists of finding an equivalent logic network, with a minimum number of vertices (or minimum critical delay) such that each vertex is associated with a function implementable by a LUT, or equivalently with a local function having at most n input variables. Implies minimum number of LUTs.

93 LUT Technology Mapping
5-input LUTs: 3. 3-input LUTs: 6. 4-input LUTs: ??

94 LUT Technology Mapping
Library-cell binding algorithms are not applicable: there are too many cells to enumerate. Typical starting point: a logic network decomposed into base functions, ANDs and ORs. When considering n-input LUTs, the base functions are required to have at most n inputs, but 2-input base functions give more flexibility. The tree covering paradigm was adapted by Francis '91.

95 Adapted Tree Covering Consider an SOP expression of a single-output function Each product term has at most n variables If the function has a total of n variables, the entire function can be implemented in ONE table Else, groups of SOP terms need to be assigned to different tables – similar to bin packing. Bin packing – packing a given set of objects (here product terms) of bounded size into bins (here tables).

96 Bin-Packing Algorithm
Imagine when you go shopping How to decide the minimum number of bags needed? Pick the biggest article first? Greedy approach? Dynamic Programming

97 Adapted Tree Covering Not exactly the same problem since partitioning alone is not sufficient. More bins needed to combine partitioned terms. Example: Let table size be n = 3, and function be f = ab + cd. Each term has 2 variables, but f in total has 4 variables. 2 LUTs needed to implement each term ab and cd. 1 more needed to combine those two terms! f1 = ab, f2 = cd and f = f1 + f2.

98 Modified Bin Packing Iterate the following steps until all product term: Select the product term with most variables. Place it into any table where it fits. If no table has enough place, add a new table. When all product terms done, iterate the following Declare the table with the fewest unused variables as final and associate a variable with it. Assign this variable to the first table that can accept. Procedure terminated when only one table is left. Heuristic algo, but optimal solution for n < 6 and for trees.

99 Modified Bin Packing – Example
Let the table size be n = 3 and the function be f = ab + cd. One table is created for the term ab. Term cd doesn't fit in table 1, so table 2 is created for cd. All terms are finished. Declare one of the tables final (both tables have exactly one unused input), say the one for ab, and associate a variable z with it. Fit z into the other table alongside cd, since it has one unused input. Final output of that table: f = z + cd.

100 Modified Bin Packing – Example
[Figure: f = ab + cd mapped onto two 3-input LUTs – one computing z = ab, the other computing f = z + cd.]

101 Look-up Table FPGAs – Packing
After matching and covering one more step needed – Packing! Most LUT-based FPGAs consist of logic blocks containing more than a single LUT. Logic block packing groups several LUTs and registers into one logic block under constraints Number of LUTs in a logic block Number of distinct input signals Optimization goal: Pack connected LUTs to minimize the signals to be routed between logic blocks Fill each logic block to its capacity – minimize logic blocks

102 Packing The problem is a form of clustering
Dividing a netlist into several pieces, under constraints e.g. maximum partition size Optimizing a goal such as minimizing the number of connections that cross partitions – routes. Most techniques use closeness metric – the desirability of putting two LUTs into the same logic block while respecting any constraints restricting which LUTs can be packed

103 Packing Shin and Kin [1993] proposed a greedy algorithm
Closeness metric is a function of how many nets two clusters share, and the size of the cluster that would result from merging them Initially, each circuit block (LUT) is a cluster Two clusters with the largest closeness value are merged Closeness value of other clusters updated Algorithm terminates when cluster count falls below a user-specified value The metric balances the number of circuits in each cluster Highly adaptable to input count, cluster size, etc.
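A simplified sketch in the spirit of this greedy clustering (the actual metric also balances cluster sizes; here only the shared-net count is used, as an assumption):

```python
def greedy_pack(block_nets, max_size, target):
    """Repeatedly merge the two clusters sharing the most nets until the
    cluster count reaches `target`, never exceeding `max_size` blocks."""
    clusters = [({b}, set(nets)) for b, nets in block_nets.items()]
    while len(clusters) > target:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if len(clusters[i][0]) + len(clusters[j][0]) > max_size:
                    continue                      # merge would be too big
                shared = len(clusters[i][1] & clusters[j][1])
                if best is None or shared > best[0]:
                    best = (shared, i, j)
        if best is None:
            break                                 # no feasible merge remains
        _, i, j = best
        merged = (clusters[i][0] | clusters[j][0],
                  clusters[i][1] | clusters[j][1])
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return [blocks for blocks, _ in clusters]
```

With four LUTs where A/B share one net and C/D another, a 2-LUT logic block yields the clusters {A, B} and {C, D}.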

104 Multiplexer-based FPGAs
The logic function is implemented using a set of multiplexers. Different multiplexer modules are used for flexibility; each module can implement a fairly large number of logic functions – 702 unique functions! [Figure: multiplexer-module configurations with data inputs a, b, c, d and select inputs s1–s4.]

105 Multiplexer-based FPGAs
What is the resulting function of each configuration? [Figure: three mux-module configurations.] Answers:
s3'(s1'a + s1b) + s3(s2'c + s2d)
(s3+s4)'(s1'a + s1b) + (s3+s4)(s2'c + s2d)
(s3+s4)'((s1+s2)'a + (s1+s2)b) + (s3+s4)((s1+s2)'c + (s1+s2)d)
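The first answer can be verified by exhaustive enumeration over the assumed mux-tree structure (s1 and s2 selecting the input muxes, s3 the output mux):

```python
from itertools import product

def mux2(s, a, b):
    """2-to-1 multiplexer: output a when s = 0, b when s = 1."""
    return b if s else a

def mux_module(a, b, c, d, s1, s2, s3):
    # Two input muxes feeding one output mux, as in the first drawing.
    return mux2(s3, mux2(s1, a, b), mux2(s2, c, d))

# Check against f = s3'(s1'a + s1 b) + s3(s2'c + s2 d) on all 2^7 points.
for a, b, c, d, s1, s2, s3 in product([0, 1], repeat=7):
    f = (1 - s3) * ((1 - s1) * a + s1 * b) + s3 * ((1 - s2) * c + s2 * d)
    assert mux_module(a, b, c, d, s1, s2, s3) == f
```

The same enumeration approach applies to the larger modules with s4 in the select path.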

106 Multiplexer-based FPGAs
Library-based approaches are feasible with smaller library sizes. For larger libraries, the least frequently used gates are removed. Existing library-based approaches can be used in such cases for binding. Other approaches: Boolean matching using canonical forms. Mapping the network into if-then-else graph.

107 Summary Technology Mapping transforms a technology independent description of a logic circuit into a technology specific representation, while optimizing the result by some metric (delay, power, area, reliability, etc.) Different techniques employed for standard cell (ASIC design) and FPGA design. Decomposition and Partitioning largely similar. Matching and Covering applied appropriately. Packing needed as a last step for FPGAs

108 EE 4218 Technology Mapping II
Akash Kumar

109 FPGA Resources Xilinx Virtex-6
Each CLB (Configurable Logic Block) has two slices; each slice has four six-input Look-Up Tables (LUTs). [Figure: CLB with switch matrix, carry chains (CIN/COUT), and LUT/RAM/SRL resources.]

110 6-Input LUT with Dual Output
6-input LUT with 1 output or… …it can be two 5-input LUTs (using common inputs) with 2 outputs One or two outputs Any function of six variables or two independent functions of five variables LUTs can perform any combinatorial function limited only by the number of inputs. LUTs are the primary combinatorial logic resource and are the industry standard. The look-up table functionality is essentially a small memory containing the desired output value for each combination of input values. The truth table for the desired function is effectively stored in a small memory, where the inputs to the function act as the address to be read from the memory. The values for the storage elements are generated by the ISE® software tools, and downloaded to all LUTs during configuration. Each 6-input LUT can be configured as two 5-input LUTs. This gives the device a great deal of flexibility to build an efficient design. Thus, the slice can be used to build any function of six variables or two independent functions of five variables.

