TCAM Ternary Content Addressable Memory

1 TCAM Ternary Content Addressable Memory
Yeim-Kuan Chang (張燕光), Dept. of Computer Science and Information Engineering, National Cheng Kung University

2 Introduction – TCAM Content-addressable memories (CAMs) complete a search operation in a single clock cycle. A TCAM additionally allows a third state, "*" (don't care), which adds flexibility to the search. The TCAM memory array stores rules in decreasing order of priority, which simplifies the priority encoder.

3 Introduction – TCAM A TCAM compares an input key against all entries in parallel. For IP lookups each entry is a prefix; in general, each entry is a ternary string. The N-bit match vector indicates which entries match, and an N-input priority encoder outputs the address of the highest-priority matching entry. That address is used as an index into an SRAM array holding the action (e.g., next hop) associated with the prefix.

4 TCAM IP lookup example
Figure: the destination IP address is compared against an N-location array of TCAM entries; the resulting N-bit match vector feeds a priority encoder, whose output (a memory location) indexes the SRAM action memory to retrieve the next hop.
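The lookup on this slide can be modeled in a few lines of Python; the top-down scan stands in for the hardware's parallel compare plus priority encoder. The prefixes and next-hop names below are made up for illustration:

```python
# Minimal software model of a TCAM lookup: entries are ternary strings
# of '0', '1', '*' stored in decreasing prefix-length order, so the
# first match is the highest-priority one (the priority encoder picks
# the lowest matching address). Entries/next-hops are illustrative.

def ternary_match(key: str, entry: str) -> bool:
    """True if every non-'*' bit of the entry equals the key bit."""
    return all(e in ('*', k) for k, e in zip(key, entry))

def tcam_lookup(key, entries, actions, default=None):
    # Hardware compares all entries in parallel; in software we scan
    # top-down and stop at the first (highest-priority) match.
    for addr, entry in enumerate(entries):
        if ternary_match(key, entry):
            return actions[addr]      # SRAM action array indexed by addr
    return default

entries = ["1100****", "110*****", "1*******"]   # decreasing length
actions = ["next-hop A", "next-hop B", "next-hop C"]

print(tcam_lookup("11001010", entries, actions))  # next-hop A
print(tcam_lookup("11010000", entries, actions))  # next-hop B
print(tcam_lookup("10000000", entries, actions))  # next-hop C
```

Because the entries are kept in decreasing length order, "first match" and "longest prefix match" coincide, which is exactly why the update schemes later in the deck work to preserve that ordering.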

5 Types of CAMs Binary CAM (BCAM, or simply CAM) stores only 0s and 1s.
Applications: MAC table lookup, Layer 2 security, VPN segregation. Ternary CAM (TCAM) stores 0, 1, and *. Applications: wherever wildcards are needed, such as Layer 3/4 classification for QoS and CoS purposes and IP routing (longest prefix matching). Available sizes: 1 Mb, 2 Mb, 4.7 Mb, 9.4 Mb, 18.8 Mb, 20 Mb, 36 Mb, and 40 Mb, with search rates of 50, 100, or 360 million searches per second. CAM entries are structured as multiples of 36 bits rather than 32 bits.

6 CAM cell circuits
Figure: a 10-T NOR-type CAM cell and a 9-T NAND-type CAM cell, shown storing D = 0 and D = 1.

7 CAM of four 3-bit words
Search operation over differential searchline (SL) pairs:
Step 1: Load the input key.
Step 2: Precharge all matchlines (MLs) high (high = match; low = miss).
Step 3: Broadcast the search word onto the SLs.
Step 4: Perform the cell comparisons.
Step 5: The encoder maps the matchline of the matching location to its encoded address.

8 TCAM cell (NOR-type) Each cell is in one of three logic states: 0, 1, or *.
Two 1-bit 6-T static random-access memory (SRAM) cells (D0/D1) store the three logic states of a TCAM cell.
Figure: NOR-type TCAM cell shown for D = 0, D = *, and D = 1.

9 TCAM state Generally, the 0, 1, and * states of a TCAM cell are set by D0 = 0 and D1 = 1, D0 = 1 and D1 = 0, and D0 = 1 and D1 = 1, respectively.

10 TCAM Each SRAM cell consists of two cross-coupled inverters (each formed by two transistors) and two access transistors that connect the cell to two bitlines (BLs) under control of one wordline (WL) for read and write operations. Each SRAM cell therefore has six transistors, and each TCAM cell consists of 16 in total. The pair of transistors M1/M3 (or M2/M4) forms a pulldown path from the matchline (ML).

11 TCAM If any pulldown path connects the ML to ground, the state of the ML becomes 0. A pulldown path connects the ML to ground when the searchline (SL) and D0 do not match; no such pulldown path exists when the SL and D0 match. When the TCAM cell is in the "don't care" state, M3 and M4 prevent SL and its complement ~SL, respectively, from being connected to ground regardless of the search bit. When the search bit is "don't care" (SL = 0 and ~SL = 0), M1 and M2 prevent SL and ~SL, respectively, from being connected to ground regardless of the value stored in the TCAM cell.

12 Truth table of a TCAM

13 Longest prefix match using TCAM
Case 1: the search key is a prefix; a global mask (0: disable, 1: enable) restricts the comparison to the prefix bits.
Case 2: the search key is an IP address.
Figure: TCAM entries at memory locations 0x000-0x101 stored in decreasing prefix-length order, a priority encoder selecting the highest-priority match, and an SRAM mapping the matched location to an output port (Port A-Port D).

14 TCAM – Priority and Update
To ensure the LPM result is correct, the prefix-length ordering in the TCAM must be maintained whenever updates take place.
Figure: TCAM locations 0..M-1 holding 32-bit prefixes at the top, down through 31-, 30-, ..., 9-, 8-bit prefixes, with free space at the bottom.

15 TCAM – Pros and Cons
Advantages
Vendors keep delivering cheaper and faster TCAM products. The TCAM architecture is easy to understand, and TCAM entries are simple to manage during updates. TCAM performance is deterministic.
Disadvantages
A TCAM is less dense than a RAM, storing fewer bits in the same chip area. TCAMs dissipate more power than RAM-based solutions. Ranges are not straightforward to store, e.g., [0, 3] = 00** but [0, 5] = 00** + 010*.

16 TCAM cross-reference
Vendor parts listed on the slide: NetLogic NL6128, NL7512, NL9512; Renesas R8A20410BG; IDT 75P52100, 75K72100, 75S10020B. Densities given: 4.5 Mb, 18 Mb, 20 Mb; interfaces: 72-bit with dual search, 80-bit with quad search.

17 CoolCAMs CoolCAMs are architectures and algorithms for making TCAM-based routing tables more power-efficient. TCAM vendors provide a mechanism to reduce power by selectively addressing smaller portions of the TCAM: the device is divided into blocks, each a contiguous, fixed-size chunk of entries (e.g., a 512K-entry TCAM could be divided into 64 blocks of 8K entries each). A search command can specify which block(s) to search; this saves power, since TCAM power consumption is proportional to the number of entries searched. The first architecture (bit selection) uses a subset of the destination-address bits to hash to a TCAM partition, allowing a very simple hardware implementation; the selected bits are fixed based on the contents of the routing table. The second architecture (trie-based) uses a small trie, implemented in a separate small TCAM, to map a prefix of the destination address to one of the TCAM partitions in the next stage.

18 CoolCAMs Observation: most prefixes in core routing tables (over 98% in the authors' datasets) are between 16 and 24 bits long. The very short (<16-bit) and very long (>24-bit) prefixes are placed in a set of TCAM blocks searched on every lookup. The remaining prefixes are partitioned into "buckets," one of which is selected by hashing on each lookup; each bucket is laid out over one or more TCAM blocks, and the hash function is restricted to merely using a selected set of input bits as an index.

19 CoolCAM – bit selection architecture
Forwarding-engine architecture using bit selection to reduce power consumption. The 3 hashing bits are selected from the 32-bit destination address by setting the appropriate 5-bit values for b0, b1, and b2.

20 CoolCAM – bit selection architecture
A route lookup then involves the following: the hash function (bit-selection logic, really) selects k hashing bits from the destination address, which identify the bucket to be searched; the blocks holding the very long and very short prefixes are searched as well. The worst-case input generally cannot be avoided; instead, designers are given a power budget. Given such a budget and a routing table, it is sufficient to find a set of hashing bits that produces a split within the budget (a satisfying split). Hashing bits are restricted to the first 16 bits of the address, to avoid replicating prefixes. The main issues are then:
how to select the k hashing bits
how to allocate the buckets among the TCAM blocks (since a bucket's size may not be an integral multiple of the TCAM block size)

21 CoolCAM – bit selection architecture
Three heuristics for choosing the hashing bits:
1. Simple: use the rightmost k of the first 16 bits. In almost all routing traces studied, this works well.
2. Brute force: check all possible subsets of k bits from the first 16; guaranteed to find a satisfying split if one exists. Since it compares C(16, k) possible sets, its running time is maximal at k = 8.
3. Greedy: falls between the other two in complexity and accuracy. It performs k iterations, selecting one bit per iteration (the number of buckets doubles each iteration); the goal in each iteration is to pick the bit that minimizes the size of the biggest bucket produced in that iteration.
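The greedy heuristic can be sketched as follows; the toy 8-bit "addresses" and the Counter-based bucket counting are assumptions for illustration, not details from the paper:

```python
# Hedged sketch of the greedy hashing-bit heuristic: in each of k
# iterations, pick the candidate bit that minimizes the size of the
# biggest bucket formed by splitting on the bits chosen so far.
from collections import Counter

def greedy_bit_selection(prefixes, k, candidate_bits=range(16)):
    chosen = []
    for _ in range(k):
        best_bit, best_max = None, None
        for b in candidate_bits:
            if b in chosen:
                continue
            trial = chosen + [b]
            # bucket id = the values of the chosen bits
            buckets = Counter(tuple(p[i] for i in trial) for p in prefixes)
            worst = max(buckets.values())
            if best_max is None or worst < best_max:
                best_bit, best_max = b, worst
        chosen.append(best_bit)
    return chosen

# 8-bit toy strings standing in for the first 16 bits of each prefix
prefixes = ["00010110", "00110110", "01010110", "01110111",
            "10010110", "10110111", "11010110", "11110111"]
bits = greedy_bit_selection(prefixes, k=2, candidate_bits=range(8))
print(bits)   # [0, 1]: bits 0 and 1 split the 8 prefixes evenly
```

Each iteration costs one pass per remaining candidate bit, so the greedy search examines O(16·k) splits instead of the brute-force C(16, k).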

24 Trie-based Table Partitioning
Partitioning is based on a binary trie data structure. This eliminates drawbacks of the bit-selection architecture:
worst-case bounds on power consumption that do not match power consumption in practice
the assumption that most prefixes are 16-24 bits long
Two trie-based schemes (subtree-split and postorder-split), both involving two steps:
construct a binary trie from the routing table
partitioning step: carve out subtrees from the trie and place them into buckets
The two schemes differ only in their partitioning step.

25 Trie-based Architecture
The trie-based forwarding engine uses an index TCAM (instead of hashing) to determine which bucket to search. This requires searching the entire index TCAM, but the index TCAM is typically very small.

26 Routing Trie Example Routing Table: Corresponding 1-bit trie:

27 Splitting into subtrees
Subtree-split algorithm:
input: b = maximum size of a TCAM bucket
output: a set of K TCAM buckets, each with size in the range [b/2, b] (the last may be smaller), and an index TCAM of size K
Partitioning step: post-order traversal of the trie, looking for carving nodes.
Carving node: a node with count <= b whose parent's count is > b.
When a carving node v is found: carve out the subtree rooted at v and place it in a separate bucket; place the prefix of v in the index TCAM, along with the covering prefix of v; decrease the counts of all ancestors of v by count(v).
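A simplified sketch of the carving idea follows; it omits the covering prefixes and the index-TCAM bookkeeping, the carve decision is folded into the parent's overflow check, and the prefix set is invented:

```python
# Simplified subtree-split sketch: build a binary trie, then, in
# post-order, carve out a child's remaining rules whenever keeping
# them would push the parent's count past the bucket size b.
# Covering prefixes and index-TCAM entries are omitted.

def build_trie(prefixes):
    trie = {}
    for p in prefixes:
        node = trie
        for bit in p:
            node = node.setdefault(bit, {})
        node["$"] = p                     # mark a stored rule here
    return trie

def subtree_split(node, b, buckets):
    """Post-order carve; returns rules not yet placed in a bucket."""
    rem = [node["$"]] if "$" in node else []
    for bit in ("0", "1"):
        if bit in node:
            child_rem = subtree_split(node[bit], b, buckets)
            if len(rem) + len(child_rem) > b:
                buckets.append(child_rem)  # carve the child subtree out
            else:
                rem += child_rem
    return rem

prefixes = ["0", "00", "01", "010", "011", "1", "10", "101", "11", "111"]
buckets = []
rest = subtree_split(build_trie(prefixes), b=4, buckets=buckets)
if rest:
    buckets.append(rest)                  # the last bucket may be smaller
print([sorted(bkt) for bkt in buckets])
```

Every rule lands in exactly one bucket and no bucket exceeds b, which is the property the real algorithm guarantees while also tracking the [b/2, b] lower bound.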

28 Subtree-split: Example

29 Subtree-split: Example

30 Subtree-split: Example

31 Subtree-split: Example

32 Subtree-split: Remarks
Subtree-split creates buckets whose sizes range from b/2 to b (except the last, which ranges from 1 to b).
At most one covering prefix is added to each bucket.
The total number of buckets created ranges from N/b to 2N/b; each bucket results in one entry in the index TCAM.
Using subtree-split in a TCAM with K buckets, any lookup searches at most K + 2N/K prefixes across the index and data TCAMs.
Total complexity of the subtree-split algorithm is O(N + NW/b).

33 Post-order splitting Partitions the table into buckets of exactly b prefixes, an improvement over subtree-split, where the smallest and largest bucket sizes can differ by a factor of 2. The cost is more entries in the index TCAM. Partitioning step: post-order traversal of the trie, looking for subtrees to carve out; but buckets are made from collections of subtrees, rather than a single subtree, because the trie may not contain N/b subtrees of exactly b prefixes each.

34 Post-order splitting postorder-split: does a post-order traversal of the trie, calling carve-exact to carve out subtree collections of size b. carve-exact does the actual carving:
at a node with count = b, simply carve out that subtree
at a node with count < b whose parent has count <= b, do nothing (there will be a later chance to carve at the parent)
at a node with count x < b whose parent has count > b: carve out the subtree of size x at this node, then recursively call carve-exact again, this time looking for a carving of size b - x (instead of b)

35 Post-order split: Example
b = 4

36 Post-order split: Example
b = 4

37 Post-order split: Example
b = 4

38 Postorder-split: Remarks
Postorder-split creates buckets of size exactly b (except the last, which ranges from 1 to b).
At most W covering prefixes are added to each bucket, where W is the length of the longest prefix in the table.
The total number of buckets created is exactly N/b; each bucket results in at most W + 1 entries in the index TCAM.
Using postorder-split in a TCAM with K buckets, any lookup searches at most (W + 1)K + N/K + W prefixes across the index and data TCAMs.
Total complexity of the postorder-split algorithm is O(N + NW/b).

39 TCAM update
PLO_OPT (prefix-length ordering)
CAO_OPT (chain-ancestor ordering)

40 TCAM update (1/7) Update Schemes
Prefix-length ordering constraint (PLO_OPT): two prefixes of the same length need not be in any specific order.
Chain-ancestor ordering constraint (CAO_OPT): there is an ordering constraint between two prefixes if and only if one is a prefix of the other.

41 TCAM update (2/7) PLO_OPT
Divide all prefixes into groups by length; two prefixes in the same group can be in any order. Keep all unused entries in the center of the TCAM. The worst-case number of memory operations per update is L/2.
Figure: prefix groups ordered from 32-bit down to 8-bit, with the free space kept near the center of the TCAM.
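A toy cost model makes the L/2 bound concrete; the group indexing below is an assumption for illustration, not the paper's exact bookkeeping:

```python
# Toy cost model for PLO_OPT: one group per prefix length, stacked in
# order, with the free space (hole) between two groups. Inserting
# into group `target` moves one boundary prefix per group between the
# target group and the hole. Keeping the hole in the middle makes
# the worst case about L/2 moves.

def plo_insert_moves(hole, target):
    """hole: the free space lies between groups hole-1 and hole.
    target: index of the length group receiving the new prefix."""
    return (hole - 1 - target) if target < hole else (target - hole)

L = 32                           # one group per prefix length 32..1
hole = L // 2                    # free space kept at the center
worst = max(plo_insert_moves(hole, g) for g in range(L))
print(worst)                     # 15, i.e. about L/2
```

Placing the hole at either end instead of the center would make the worst case close to L moves, which is the whole point of the centered layout.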

42 TCAM update (3/7) PLO_OPT
Figure: PLO_OPT deletion (example: a /20 prefix) moves a boundary prefix into the hole and advances the free-space end boundary by 1; PLO_OPT insertion (example: a /32 prefix) moves one boundary prefix per group toward the target group and advances the free-space start boundary by 1.

43 TCAM update (4/9) CAO_OPT
The PLO constraint is more restrictive than necessary; it can be relaxed so that only overlapping prefixes must be ordered: prefixes on the same chain of the trie need to be ordered.
Figure: chains Q1-Q4 arranged around the free space.

44 TCAM update (5/9) CAO_OPT
A logical inverted trie can be superimposed on the prefixes stored in the TCAM; only prefixes on the same path have an ordering constraint. The CAO_OPT algorithm also keeps the empty space in the center of the TCAM: for every prefix, the longest chain that the prefix belongs to should be split around the empty space as equally as possible. The worst-case number of memory operations per update is D/2, where D is the maximum length of any chain in the trie.
Figure: chains Q1-Q4 and the maximal chain around the free space.
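The chain length D can be computed directly from the prefix set; a small helper with invented prefixes:

```python
# D for CAO_OPT is the longest chain of prefixes ordered by the
# "is a prefix of" relation. Computed here with a simple longest-path
# dynamic program over prefixes sorted by length.

def longest_chain(prefixes):
    best = {}                              # prefix -> longest chain ending at it
    for p in sorted(prefixes, key=len):
        best[p] = 1 + max((best[q] for q in best
                           if p != q and p.startswith(q)),
                          default=0)
    return max(best.values(), default=0)

prefixes = ["1", "10", "101", "1010", "0", "011"]
print(longest_chain(prefixes))   # 4: the chain 1 -> 10 -> 101 -> 1010
```

With D = 4 here, CAO_OPT's worst case is D/2 = 2 moves per update, versus L/2 = 16 for PLO_OPT on 32-bit prefixes.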

45 TCAM update (6/9) Notation
Notation used: LC(p) and len(LC(p)), rootpath(p), the ancestors of p, the children of p, hcld(p), and HCN(p).
Figure: the free space with the ancestors of p on one side and the children of p (including hcld(p)) on the other, along the chain LC(p).

46 TCAM update (7/9) CAO_OPT Insertion
Case 1: the prefix q to be inserted lies above the free space.
Case 2: the prefix q to be inserted lies below the free space.
In either case, prefixes on q's chain (LC(q) or HCN(q)) are moved toward the free space to open a slot at the correct position for q.

47 TCAM update (8/9) CAO_OPT Deletion
Deletion works on the chain that has a prefix p adjacent to the free space (figure: deleting prefix q).

48 TCAM update (9/9) Shortcomings of the above schemes for IPv6
PLO: there are 128 different prefix lengths in IPv6, so the worst-case and average cost of shifting prefixes stored in the TCAM grows dramatically.
CAO: CAO must maintain an auxiliary trie structure in SRAM, and each update costs O(L) time to modify the data stored in the trie. To reorder a chain, the router must stall prefix updates, which increases the packet drop rate when more memory accesses are needed.

49 Introduction – Rule Table
Figure: an example classifier of rules R1-R6, each with a network-layer destination and source (address/mask), transport-layer destination and source ports, a protocol, and an action (Permit/Deny), plus example packet headers P1-P3 matched to their best-matching rules (P1: R1/Permit, P2: R4/Deny, P3: R3/Permit).

50 TCAM range encoding Direct range-to-prefix conversion is the traditional database-independent scheme. The primary advantage of database-independent schemes is their fast update operations; however, they suffer from large TCAM memory consumption.

51 TCAM range encoding Database-dependent range encoding schemes reduce the TCAM requirement by exploiting the dependency among rules. Each field value is individually converted into one or more ternary strings, and the field values in incoming packet headers must be translated into intermediate results that are then used as search keys in the TCAM.

52 TCAM range encoding The first scheme, EIGC, encodes ranges by using the BRGC identifiers of elementary intervals and converting each range into a number of ternary strings. The second scheme, perfect BRGC (P-BRGC), groups ranges into perfect BRGC range sets that can be encoded with a minimal number of ternary values.

53 Preliminaries (1/3) Buddy Code (BC)
The Buddy code Bn is defined recursively as B1 = {0, 1} and Bk = {0Bk-1, 1Bk-1} for k = 2 to n. In fact, Bn is the set of natural numbers 0 to 2^n - 1 in binary.
Figure: the buddy-code tree *** over 0**, 1** over 00*, 01*, 10*, 11* over 000, 001, 010, 011, 100, 101, 110, 111.

54 Preliminaries (2/3) Binary Reflected Gray Code (BRGC)
The Gray code Gn is defined recursively as G1 = {0, 1} and Gk = {0Gk-1, 1Gk-1^R} for k = 2 to n, where Gk-1^R is Gk-1 in reversed (reflected) order.
Figure: the BRGC tree *** over 0**, 1** over 00*, 01*, 11*, 10* over 000, 001, 011, 010, 110, 111, 101, 100.
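BRGC also has a standard closed form, b2g(x) = x XOR (x >> 1), which reproduces the 3-bit sequence on this slide:

```python
# Binary reflected Gray code in closed form: adjacent codewords differ
# in exactly one bit, which is what lets BRGC-encoded ranges be
# covered by fewer ternary strings than plain binary.

def b2g(x: int) -> int:
    """Binary to Gray."""
    return x ^ (x >> 1)

def g2b(g: int) -> int:
    """Gray back to binary via prefix XOR."""
    x = 0
    while g:
        x ^= g
        g >>= 1
    return x

seq = [format(b2g(i), "03b") for i in range(8)]
print(seq)   # ['000', '001', '011', '010', '110', '111', '101', '100']
```

The b2g/g2b pair is what the later slides mean by notations such as [b2g(1), b2g(14)] for a BRGC-encoded range.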

55 Preliminaries (3/3) Elementary intervals
Let G be a set of original ranges; it is assumed there is a default range covering the whole address space. The set of elementary intervals is constructed from the endpoints of the ranges in G.
Example: R1 = [14, 27], R2 = [2, 6], R3 = [11, 29], R4 = [9, 22] yield E0..E8 = [0, 1], [2, 6], [7, 8], [9, 10], [11, 13], [14, 22], [23, 27], [28, 29], [30, 31].
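The construction can be checked against this example with the standard boundary-point rule: each range start s cuts at s, each end e cuts at e + 1:

```python
# Elementary intervals from range endpoints: consecutive boundary
# points delimit the intervals. Reproduces E0..E8 from the slide.

def elementary_intervals(ranges, space=(0, 31)):
    lo, hi = space
    cuts = {lo, hi + 1}
    for s, e in ranges:
        cuts.add(s)          # a range start opens a boundary at s
        cuts.add(e + 1)      # a range end opens a boundary at e + 1
    pts = sorted(cuts)
    return [(a, b - 1) for a, b in zip(pts, pts[1:])]

ranges = [(14, 27), (2, 6), (11, 29), (9, 22)]   # R1..R4
print(elementary_intervals(ranges))
```

Every point of the address space falls in exactly one elementary interval, and every original range is a union of consecutive elementary intervals, which is what makes interval identifiers usable as range codes.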

56 Related Works Database-independent and database-dependent schemes
A scheme is database-independent or database-dependent according to whether the process of encoding a range is independent of the other ranges. Database-independent schemes have the advantage of fast updates; however, they tend to use more TCAM memory than database-dependent ones.

57 Direct range-to-prefix conversion
A database-independent scheme: convert each range into prefixes. In the worst case, the range [1, 2^W - 2] in a W-bit address space is split into 2W - 2 prefixes.
Example: R = [1, 14] corresponds to the prefixes 0001, 001*, 01**, 10**, 110*, 1110.
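The conversion can be sketched with the standard peel-off-aligned-blocks strategy, which reproduces the slide's [1, 14] example:

```python
# Direct range-to-prefix conversion: repeatedly peel off the largest
# power-of-two block that is aligned at lo and still fits in [lo, hi];
# each block corresponds to one prefix (fixed high bits, '*' low bits).

def range_to_prefixes(lo, hi, w):
    out = []
    while lo <= hi:
        size = lo & -lo or 1 << w          # largest alignment of lo
        while lo + size - 1 > hi:          # shrink until it fits
            size >>= 1
        bits = w - size.bit_length() + 1   # number of fixed bits
        pat = format(lo >> (w - bits), f"0{bits}b") if bits else ""
        out.append(pat + "*" * (w - bits))
        lo += size
    return out

print(range_to_prefixes(1, 14, 4))
# ['0001', '001*', '01**', '10**', '110*', '1110']
```

The [1, 14] case hits the worst bound 2W - 2 = 6 prefixes for W = 4, illustrating why database-dependent encodings are attractive for range-heavy rule sets.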

58 Ternary strings (Boolean expressions)
Buddy code: the minimized Boolean expression of the 4-bit range [1, 14] is v1·~v4 + ~v3·v4 + ~v2·v3 + ~v1·v2, which is equivalent to the ternary strings 1**0 + **01 + *01* + 01**.
BRGC: for the BRGC-encoded range [b2g(1), b2g(14)], the minimized Boolean expression is v2 + v3 + v4, which is equivalent to *1** + **1* + ***1.
(Karnaugh maps shown on the slide are omitted here.)

59 Ternary strings (Boolean expressions)
Note that Boolean expression minimization is an NP-complete problem, for which an efficient exact algorithm is difficult to find. In practice, however, Espresso-II [1][11], a fast heuristic algorithm, may be used.

60 Direct conversion using BRGC
Direct range-to-ternary-string conversion using BRGC: BRGC has the advantage over Buddy code that an h-bit block of BRGC codes can be combined with one of its symmetric h-bit blocks into an (h+1)-bit block (a ternary string).
Example: R = [1, 14]; the BRGC-encoded prefixes 0001, 001*, 01**, 11**, 101*, 1001 combine pairwise into just *001, *01*, *1**.

61 SRGE Short Range Gray Encoding (SRGE).
SRGE encodes range borders as binary reflected gray codes and represents the resulting range by a minimal set of ternary string. SRGE scheme encodes the vast majority of ranges without using extra bits and use dependent encoding only for a very small number of ranges.

62 SRGE example: range [6, 14]
Figure: the Gray-encoded borders (0101, 1011) of the range [6, 14] in the code tree, with p marking their common ancestor.

63 SRGE example: range [6, 14]
Figure: the subtrees pl and pr under p (nodes 0100, 1100).

64 SRGE example: range [6, 14]
The left part of the range is covered by Prefixes1 = {010*}.

65 SRGE example: range [6, 14]
Figure: the reflected right part S′ with nodes pl′ and pr′ (1110, 1010).

66 SRGE example: range [6, 14]
The right part is covered by Prefixes2 = {101*, 1001}.

67 SRGE example: range [6, 14]
The final cover is the union of Prefixes1 and Prefixes2.

68 BEIE Basic elementary interval encoding scheme (BEIE)
By giving each elementary interval a unique identifier (code), an original range can be encoded by the identifiers of the elementary intervals it covers.
Example: with E0..E8 coded 0000 through 1000, R1 = 0101, R2 = 0001, R3 = 01**, and R4 = 0011 + 010*.

69 PPC Parallel packet classification encoding (PPC)
The PPC encoding scheme is also based on the concept of elementary intervals. PPC divides the original primitive ranges into multiple groups (called layers). Depending on the encoding style, the code assignment in one layer may be: independent of other layers; partially dependent on other layers; or completely dependent on other layers.

70 PPC Style I
The primitive ranges at the same layer are nonoverlapping, and the total number of layers is minimized.
Example (r1 = [X2, X5], r2 = [X4, X7], r3 = [X1, X2], r4 = [X5, X6]): codes r1 = ***1, r2 = **1*, r3 = 01**, r4 = 10**.

71 PPC Style II
Two primitive ranges at the same layer can be assigned a common identifier if both are subsets of two disjoint primitive ranges.
Example (same ranges as Style I): codes r1 = **1, r2 = *1*, r3 = 10*, r4 = 11*.

72 PPC Style III
Overlapping primitive ranges at different layers are converted into a larger number of nonoverlapping primitive ranges at the same layer.
Example (same ranges as Style I): codes r1 = **01 + **10, r2 = **10 + **11, r3 = 01**, r4 = 10**.

73 PPC Parallel packet classification encoding (PPC)

74 Prefix compaction The routing table is organized as a tree structure.
If prefix P1 is an identical parent of P2 (i.e., a parent with the same forwarding behavior), and P2 is a parent of P3, then P2 is a redundant prefix that can be removed without affecting routing functionality.

75 Prefix compaction TCAM allows the use of an arbitrary mask, so the one bits (or zero bits) of the mask need not be contiguous.

76 Proposed Range Encoding Schemes (1/8)
Scheme based on elementary intervals and BRGC (EIGC): assign each elementary interval an identifier using BRGC; the default elementary intervals share the same code.
Example (intervals E0..E8 coded 0000, 0001, 0000, 0010, 0110, 0111, 0101, 0100, 0000): match conditions R1 = 01*1, R2 = 0001, R3 = 01**, R4 = 0010 and 011*.

77 Proposed Range Encoding Schemes (2/8)
Scheme based on perfect BRGC range sets (P-BRGC).
Independent range set: any range in the set must intersect at least one of the other ranges.
A perfect BRGC range set satisfies:
each range contains 2^n elementary intervals
for any two intersecting ranges A and B in the set, the number of shared elementary intervals must equal half the number of elementary intervals contained in either A or B

78 Proposed Range Encoding Schemes (3/8)
Scheme based on perfect BRGC range sets (P-BRGC): figure showing perfect BRGC range sets.

79 Proposed Range Encoding Schemes (7/8)
P-BRGC virtual endpoint insertion: virtual endpoints may be inserted to make range sets perfect, but their number should be limited.
Example: after inserting a virtual endpoint, R4's match condition improves from "0010 and 011*" to the single string 0*1* (R1 = 01*1, R2 = 0001, R3 = 01** are unchanged).

80 Proposed Range Encoding Schemes (8/8)
Insert a range into layers

81 Performance (1/5)

82 Performance (2/5)
Comparison of performance with all rules and with non-prefix rules only: P-BRGC outperforms the other schemes for tables fw1 and ipc1; however, for table acl1, EIGC performs best.

83 Performance (3/5)

84 Performance (4/5)
Comparison of performance with all rules and with non-prefix rules only: PPC and the proposed P-BRGC scheme consume less TCAM memory than the other schemes for all ipc, fw, and acl tables. Moreover, the proposed P-BRGC scheme needs only 50-77% of the TCAM memory required by PPC.

85 Performance (5/5)
Figure: results for non-prefix rules, with rule counts of 5K and 10K.

86 Conclusions TCAM introduction Power reduction Range encoding

