12K Support Training

Goals Deepen global 12k support expertise through architecture discussion and hands-on troubleshooting

Agenda 12K Product Overview System Architecture Forwarding Architecture Services and Applications Troubleshooting

Module I – Overview and System Architecture

Part I - 12K Product Overview

12K Architecture Overview Fully distributed, multi-gigabit IP Router RP provides routing and control services Line cards perform IP forwarding Advanced QoS capabilities Bandwidth scalable (OC12, OC48, OC192)

Cisco 12008 Product Highlights Crossbar switch fabric architecture 8 slot card cage (7 for interfaces) Components: Switch Fabric Cards (SFC) Clock and Scheduler Cards (CSC) Route Processor (RP) Line Cards (LCs)

Cisco 12012 Product Highlights Crossbar switch fabric architecture 12 slot card cage (11 for interfaces) Components: Switch Fabric Cards (SFC) Clock and Scheduler Cards (CSC) Route Processor (RP) Line Cards (LCs)

Cisco 12016 Product Highlights
Switching performance: 16 slot system, 2.5Gbps switching capacity/slot – can support 10Gb LCs if the fabric is upgraded; increased number of linecards
Configuration: 2 interface shelves (16 slots), 1 fabric shelf with 5 slots, 2 alarm cards – 1 top shelf, 1 bottom shelf
More details of the electro-mechanical specs are available in the Data Sheet and the Cisco 12016 GSR documentation. Key points to note:
- The 12016 has 2 interface shelves with 8 slots each
- The Route Processor has a dedicated slot in the shelves
- The pitch of the linecard slots is wider than on the 12012; to accommodate the existing linecards, a filler is provided. This is critical to maintaining the air flow and EMI characteristics of the chassis.
- The base configuration of the chassis includes: 1 Gigabit Route Processor, full fabric redundancy (3 SFC + 2 CSC), a fully redundant power configuration (4 DC or 3 AC), and redundant blower assembly and alarm cards

Cisco 12416 - Product Overview Switching performance 16 Slot System each with 10Gbps switching capacity/slot Supports 10G linecards Support for existing 12k line cards Slots are wider to accommodate 10 Gb LCs Configuration 2 interface shelves 16 slots 1 fabric shelf, with 5 slots 2 Alarm cards – 1 top shelf, 1 bottom shelf

Cisco 12410 Product Highlights 10 x OC192 capable slots 8 slots are wider to accommodate 10 Gb LCs 2 x legacy slots (narrower slots 8 and 9) 7 card fabric – 2 CSCs & 5 SFCs

Cisco 12406 Product Highlights 6 slot card cage 1 narrow slot dedicated for RP 5 for redundant RP and Line Cards Components: Switch Fabric Cards (SFC) Clock and Scheduler Cards (CSC) 1 or 2 Route Processors (RP) Up to 5 Line Cards (LCs) 1 or 2 Alarm Cards 1/3 rack height

Cisco 12404 GSR Product Highlights 4 slot card cage 1 narrow slot for RP 3 10G capable slots Components: 1 Consolidated Fabric Card : CSC-4 (CSC, SFC, Alarm built in) Route Processors (RP) Up to 3 Line Cards (LCs) FABRIC IS NOT RESILIENT

Cisco 12816 - Product Overview Switching performance 16 Slot System each with 40Gbps switching capacity/slot Supports 20G and future 40G linecards Support for existing GSR line cards Slots are wider to accommodate 10/20Gb LCs Requires PRP Configuration 2 interface shelves 16 slots 1 fabric shelf, with 5 slots 2 Alarm cards – 1 top shelf, 1 bottom shelf

Cisco 12810 Product Highlights 10 x 40Gb capable slots 8 slots are wider to accommodate 10/20 Gb LCs Requires PRP 2 x legacy slots (narrower slots 8 and 9) 7 card fabric – 2 CSCs & 5 SFCs

Part II - 12K System Architecture

12K Components

System Components Route Processor Switching fabric Line cards Power/Environmental Subsystems Maintenance BUS

12k Architecture - Components
[Diagram: multiple line cards and the route processor interconnected by the switch fabric; the maintenance bus also links the fan/blower system and power supplies.]
GRP (Gigabit Route Processor): runs IOS and routing protocols, distributes the CEF table to line cards. Has a single 10/100 Ethernet port (for management only, not for transit switched traffic). Unlike the RSP on the 7500, the GRP is not involved in the packet switching path.
Line Cards: IP/MPLS – IP/tag forwarding, ping response, fragmentation. Queuing – FIFO, MDRR. Congestion control – WRED. Features – MPLS, CAR, tunnels, ACLs, BGP policy accounting. Statistics – NetFlow, CEF accounting.
Switch Fabric
MBUS: 1 Mbps redundant CAN bus. Connects to LCs, RP, fabric, power supplies, and blowers. Controls hardware discovery, environmental monitoring, firmware upgrades.

Route Processor Boots and manages line cards Provides and coordinates routing services Builds, distributes, and maintains FIB Adjacency table, FIB table, MPLS label table Provides out-of-band console/aux ports Provides intelligence behind system monitoring and access

RP - System Monitor/Controller
[Diagram: routing protocol updates, process-level traffic, interface status messages, statistics and system health monitoring flow between the line cards and the route processor across the switch fabric; temperature, voltage and current monitoring runs over the maintenance bus, which also reaches the fan/blower system and power supplies.]
GRP (Gigabit Route Processor): runs IOS and system interface software (SNMP, etc.). Builds the master CEF table and distributes this to the individual linecards. Controls environmental and system functions. Has a single 10/100 Ethernet port (for management only, not for transit switched traffic). Unlike the RSP on the 7500, the GRP is not involved in the packet switching path. Statistics – NetFlow, CEF accounting.

Line Cards Perform all packet switching Statistics collection and reporting Run IOS Six different forwarding architectures

MBUS Maintenance Bus
Out-of-band communications channel to linecards. 1 Mbps, 2-wire serial interface based on the Controller Area Network (CAN) 2.0 spec (ISO 11898), http://www.can-cia.org/can/
A daughter card on each linecard has its own CPU with an integrated CAN controller, A/D converter and other peripherals, a dual CAN interface, SRAM, Flash and serial EEPROM. CSCs and the BusBoard can proxy and/or multiplex MBUS signals for the power supplies.
Control pins reach into the LEDs, serial ID EEPROM, DC/DC power converter, clock select FPGA, temperature sensor and voltage sensor. Very large set of functions.
[Diagram: the maintenance bus connecting the line cards, multigigabit crossbar fabric and scheduler, route processors, fan/blower system and power supply.]

MBUS Functions Power and boot LC Device Discovery RP arbitration OIR management Environmental monitoring Diagnostics download LC console access Via “attach” command Logging

Alarm Cards LED display for fabric card status External alarm connection Power conversion/supply for 5v MBUS power plane On the 12008, this functionality is on the CSC.

Switch Fabric - Overview Provides the data path connecting the LCs and the RP The active CSC card provides the master clock for the system Everything traverses the fabric as Cisco cells - data is 8B/10B encoded Two components - Clock & Scheduler Cards (CSC) - Switch Fabric Cards (SFC)

ciscoCell Packets are chopped into ciscoCells before they are sent across the switching fabric. A ciscoCell is 64 bytes of data, consisting of 48 bytes of IP payload, 8 bytes of header and 8 bytes of CRC.
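To make the cell math concrete, here is a minimal Python sketch (illustrative only, not Cisco code) of how a packet maps onto 64-byte ciscoCells given the 48-byte-payload-per-cell split described above:

    # Minimal sketch: packet-to-ciscoCell segmentation arithmetic.
    CELL_SIZE = 64      # bytes on the fabric per cell (48 payload + 8 header + 8 CRC)
    CELL_PAYLOAD = 48   # IP payload bytes carried per cell

    def cells_needed(packet_len: int) -> int:
        """Number of ciscoCells required to carry one packet across the fabric."""
        return -(-packet_len // CELL_PAYLOAD)   # ceiling division

    def fabric_bytes(packet_len: int) -> int:
        """Bytes actually transmitted on the fabric, including per-cell overhead."""
        return cells_needed(packet_len) * CELL_SIZE

    # A 40-byte packet still consumes one full 64-byte cell;
    # a 1500-byte packet becomes 32 cells (2048 bytes on the fabric).
    assert cells_needed(40) == 1
    assert fabric_bytes(1500) == 32 * 64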

Clock Scheduler Card (CSC)
Scheduler (SCA): handles scheduling requests and issues grants to access the crossbar switching fabric.
Cross-bar (XBAR): sets the fabric lines for transmissions following the scheduling decision.
The Clock and Scheduler Card contains the following functionality:
- System clock: sent to all line cards, the GRP, and the switch fabric cards. The system clock synchronizes data transfers between line cards, or between line cards and the GRP, through the switch fabric. In systems with redundant clock and scheduler cards, the two system clocks are synchronized so that if one system clock fails, the other clock takes over.
- Scheduler: handles requests from the line cards for access to the switch fabric. When the scheduler receives a request from a line card for switch fabric access, it determines when to allow the line card access to the switch fabric.
- Switch fabric: circuitry that carries the user traffic between line cards or between the GRP and a line card. The switch fabric on the clock and scheduler card is identical to the switch fabric on the switch fabric card.
MTBF of the Clock Scheduler Card = 240,078 hr.
The Switch Fabric Card contains the switch fabric only: circuitry that carries the user traffic between line cards or between the GRP and a line card, identical to the switch fabric on the clock and scheduler card. SFCs receive the scheduling information and clocking reference from the CSC cards and perform the switching functions.

Fabric Redundancy Each fabric card provides a slice of the Cisco cell data path Up to 5 data paths are available – for up to 4+1 redundancy The 5th data path carries an XOR of the other streams Used for recovery of an errored stream No 5th path = no recovery capability ‘Grants’ travel exclusively between the LC and the active CSC using separate communication lines Never traverse the SFC cards
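The 4+1 parity idea can be pictured with a small sketch; the slice sizes and helper names below are made up for illustration, but the XOR recovery is the mechanism described above:

    # Illustrative sketch of XOR parity across fabric slices: the 5th path
    # carries the XOR of the other four, so any single errored slice can be
    # rebuilt from the survivors plus the parity slice.
    from functools import reduce

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def make_parity(slices):
        return reduce(xor_bytes, slices)

    def recover(slices, parity, lost):
        """Rebuild slice index `lost` from the surviving slices and the parity."""
        survivors = [s for i, s in enumerate(slices) if i != lost]
        return reduce(xor_bytes, survivors, parity)

    data = [b"\x11" * 4, b"\x22" * 4, b"\x33" * 4, b"\x44" * 4]
    parity = make_parity(data)
    assert recover(data, parity, lost=2) == data[2]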

Scheduling Algorithm (“ESLIP”) Request Each input LC makes a request to the output for its highest-priority queued cell (unicast or multicast) Grant Each destination LC grants the highest-priority request Accept Each input LC selects and accepts the highest grant Transmit The XBAR is set and the cells are transmitted

ESLIP Illustrated
Request: each line card utilizes DRR to select a set of packets from the VoQs; a request is sent to the scheduler on the CSC to obtain a grant.
Grant: the scheduler (for each output) selects the highest-priority packet from the requests, determines whether the output can grant the request, and sends multiple grants (for multiple outputs) back to the slot.
Accept: the slot selects the highest grant and accepts the connection.
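A rough single-iteration sketch of the request/grant/accept handshake in Python; the real ESLIP scheduler uses rotating priority pointers, multiple iterations and separate unicast/multicast handling, which are simplified away here:

    # Toy request/grant/accept matching for one scheduling iteration.
    def eslip_iteration(requests):
        """requests: input slot -> list of output slots it has cells for,
        ordered highest priority first. Returns accepted input->output pairs."""
        # Grant phase: each output grants one of the requests it received.
        grants = {}                                   # input -> outputs granting it
        for out in {o for outs in requests.values() for o in outs}:
            contenders = [i for i, outs in requests.items() if out in outs]
            winner = min(contenders)                  # stand-in for a priority pointer
            grants.setdefault(winner, []).append(out)
        # Accept phase: each input accepts one of the grants it received.
        return {inp: min(outs) for inp, outs in grants.items()}

    # Inputs 0 and 1 both request output 3; only one is matched to it this round.
    print(eslip_iteration({0: [3, 5], 1: [3], 2: [7]}))   # e.g. {0: 3, 2: 7}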

Bootup process

Startup/Boot Process Initial Power On RP Boot Process Clock Scheduler Boot Process Line Card Boot Process Switch Fabric Boot Process Fabric Initialization IOS Download

Initial Power On When the chassis is powered on, the Mbus module on each card is powered on. After the Mbus module powers on, its processor boots from code stored in EEPROM on the module. Card power-up order varies depending on linecard type.

RP Boot Process Mbus module powers first Board logic starts, image begins booting and Mbus code is loaded to the Mbus module The CPU, Memory controller ASIC, cell-handler ASICs and FIA ASICs are then issued power for startup RP arbitration process is executed using the Mbus Master RP instructs Line Cards and Switch Fabric Cards to power on. RP waits for Line Cards to power and finish booting

Switch Fabric Card Startup/Boot Master RP instructs each SFC Mbus module to power on at the same time the Line Card Mbus modules are told SFC obtains clock the same way each LC does The SLI ASICs and XBAR initialize and power up SFC Mbus code is downloaded from the RP All cards are now powered on but not usable

Line Card Startup/Boot Each LC Mbus module powers up after being told to do so by the RP Clock selection takes place The Line Card CPU is powered on and boots Mbus module code is loaded The Line Card's CPU notifies the RP that it has booted Switch Fabric access is not available yet

Line Card IOS Downloads The Line Card may already have enough code in its flash to become operational on the Switch Fabric, or it may require an Mbus download. Only enough code for the Line Card to become operational on the fabric will be loaded using the Mbus. Once all cards are operational on the fabric, the fabric is initialized and the main IOS software is downloaded.

IPC Services

IPC Overview The 12k is a distributed multiprocessor system. The processors communicate via IPC … an essential architectural service IPC has a reliable (acknowledged) and unreliable mode of transport (with or without sequence number or notification). The application uses an appropriate method.

IPC Clients Applications (clients) can build their own queue structures and feed the IPC queue/cache, as well as choose whether or not to block until an ACK or embedded response is received. e.g. CEF uses a multi-priority queue and its own cache in front of the IPC queue (controlled by “ip cef linecard ipc memory”); it has its own message handling routines defined in the same registry as direct IPC interrupt or process-level message handling. Many (most) applications use the CEF packaging (XDR) message types and queues as an interface to IPC, e.g. route-map updates and ACL updates to linecards. Applications are also responsible for being “well-behaved”. Utility applications like slavelog and slavecore use IPC directly.

Module 2 – Forwarding Architecture

Route Processor

The Route Processor (RP) The RP’s control path for Line Cards uses IPC via the switch fabric or Mbus The switch fabric connection is the main data path for route table distribution The Mbus connection enables the RP to download a bootstrap image, collect or load diagnostic information, and perform general maintenance operations

RP Responsibilities Running routing protocols Builds and distributes the routing tables to Line Cards (i.e. routing table maintenance) Provides general maintenance functions (i.e. booting Line Card processors) The RP derives its name from one of its primary functions: running the routing protocols enabled on the router The master Forwarding Information Base (FIB) is built on the RP, which uses this FIB to build the entries it then sends to each 12k Line Card across the Switch Fabric. The GRP plugs into any slot of the GSR chassis, serves as the console for the router, and handles environmental monitoring. The forwarding table and the routing table differ significantly.

RP Routing Table Maintenance
Using the RIB, the RP maintains a complete forwarding table of its own (RP-FIB). Routing updates are forwarded from the RP to each Line Card (LC-FIB). Each LC-FIB entry carries a MAC encapsulation string, output interface and MTU.
The GRP is also responsible for maintaining a complete forwarding table on each Line Card. The Forwarding Information Base (FIB) differs from the Routing Information Base (RIB): the RIB contains information which is not useful on the Line Card (e.g. the time the route was installed, the metrics for the route(s), etc.). The Line Card forwarding table contains supernet prefixes which can be used by the Layer 3 forwarding engines; these prefixes are overlapping and represent a complete copy of the routing table. Associated with each prefix is a MAC encapsulation string, an output interface and an MTU. The forwarding table may have additional fields populated in the data structure for a forwarding table entry (e.g. next-hop, recursive route address). Also, multiple routes may exist in the forwarding table for a particular prefix, which may change the way that load sharing is done in the system. The GRP is responsible for detecting, resolving, and maintaining recursive entries in the forwarding table. The GRP also maintains an adjacency table, which lists all neighboring IP systems; this table calculates and holds the MAC encapsulation strings, and the forwarding table refers to the adjacency table to extract encapsulation strings. This information needs to be relayed down to the LCs: platform-specific software maps MAC encapsulation strings into board-specific MAC encapsulation pointers and lengths, and further platform-independent software formats the forwarding table entry on the RP for distribution via IPC to the LC.

RP Routing Table Maintenance FIB distribution is done through reliable IPC updates When the routing protocol triggers an update, it is placed into the FIB of the RP then sent to the Line Cards Updates are unicast across the fabric to all Line Cards

GRP

Major Components: The GRP R5000 CPU (a.k.a. P4) Mbus Module Tiger ASIC CSAR ASIC FIA ASIC SLI ASIC Power Modules Tiger – Memory controller unit CSAR – Cisco Cell Segmentation & Reassembly ASIC

GRP Components Power Units DRAM Cisco cell Segment And Reassembly (CSAR) Tiger ASIC CPU Serial Line Interface ASIC (SLI) Fabric Interface ASIC (FIA)

GRP Component Groups: Logic, I/O sub-system, Fabric, Mbus, Memory
Enhancements to the Cisco 12000 Internet Router Gigabit Route Processor (GRP):
- Increased route memory (DRAM) from 256MB to 512MB
- Increased PCMCIA flash storage from 20MB up to 128MB with the use of PCMCIA ATA flash disks
Key benefits of 512MB route memory: supports up to 750,000 prefixes in some cases, depending on the type of line cards installed in the system; supports a large number of interfaces, including subinterfaces and channelized interfaces, depending on overall system memory usage; provides more memory space for running large IOS images, which have grown in size recently as more features are added to support current and emerging services and applications.
Software requirements: new IOS and ROM Monitor software releases are required for using 512MB route memory on the GRP. IOS support is available in 12.0(19)S, 12.0(19)ST and later releases. ROM Monitor software 11.2(181) is also required to use the new memory option; it is bundled into the main IOS software packages. Flash disks are supported in 12.0(16)S, 12.0(14)ST and later releases and need corresponding boot images to function properly. 512MB DRAM is available as of September 14, 2001 and the Flash Disk as of September 21, 2001.

CSAR (ciscoCell Segmentation and Reassembly) ASIC Buffer manager ASIC for the GRP (equivalent to the Rx and Tx BMA on Engine 0 LCs) The CSAR contains two 64k buffers Messages are placed in a hold queue if these buffers are full An interrupt is sent to the CPU when the buffers are free When receiving ciscoCells from the fabric, the CSAR provides 32 reassembly areas each for unicast and for multicast, 64 areas in total Connects to the fabric at OC12

Performance RP 1 (PRP-1)

PRP-1 Architecture
[Block diagram: Voyager (PPC7450) CPU with 2MB L3 cache, main (RAM) memory and the Discovery system controller; CHOPPER and ASSEMBLER fabric interface ASICs with XCVRs on 64-bit/133MHz buses; FUSILLI I/O bus bridging to 2x10/100 Ethernet, MBUS, DUART, PCMCIA, Bootflash, BootPROM and NVRAM.]

Performance Route Processor (PRP) The PRP is fully compatible with the GRP at the hardware level One of the major differences with the PRP is the use of the V’ger processor, a Motorola PPC processor running at 655MHz The future Apollo processor running at 1GHz will replace V’ger Connects to the fabric at OC48 – requires at least 1 CSC and 3 SFCs to operate PPC = PowerPC

Performance Route Processor (PRP) The PPC CPU also has 32KB of on-chip Layer 1 cache and 256KB of on-chip Layer 2 cache, with a controller for an external 2MB Layer 3 cache. The realized performance improvement is 4 – 5 times that of the current GRP

Performance Route Processor (PRP) Default 512MB DRAM, upgradeable to 2GB 2 x 10/100 Ethernet ports RJ-45 console port 64MB Flash Disk as standard

Line card concepts

Line Card Concepts
Components:
- PLIM: physical layer optics, framer, SAR, etc.
- Layer 3 Forwarding Engine: IP/MPLS switching and services
- Fabric Interface: transmission
[Diagram: Physical Layer (Optics) -> Layer 3 Engine (CPU) -> Fabric Interface; RX to fabric, TX from fabric.]

PLIM – Physical Interfaces Handle L2 protocol encap/decap - SONET/SDH framing - ATM cell segmentation/re-assembly - Channelization Receives packet off the wire and passes it to the forwarding engine

FE - Forwarding Engine Runs IOS and maintains CEF tables Provides CEF switching services, feature capabilities Provides queuing and QoS services (through the RX and TX queue managers) NOTE – QoS will be covered in detail in the ‘Applications section’

FIM - Fabric Interface Module Provides fabric transmission services Two components: FIA – interface between forwarding engine and fabric interface SLI - does 8B/10B encoding and decoding of Cisco cells

Line Card Concepts: A Reference Architecture

Line Card Reference Architecture
[Diagram: PLIM -> Forwarding & Feature Complex (CPU, forwarding lookup tables) -> ToFab queue manager (ToFab packet memory) -> ToFab fabric interface; in the reverse direction, FrFab fabric interface -> FrFab queue manager (FrFab packet memory) -> PLIM.]

Forwarding Architecture Various routing protocols maintain individual routing databases. The routing table is built by using the best available paths from the routing protocols. From the IP routing table, we pre-resolve recursive routes and build the CEF table (a.k.a. FIB table) The CEF table is pushed down from the GRP to each linecard via IPC From the CEF table, HW-based linecards will build their own hardware forwarding tables
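As a toy illustration of the lookup that the distributed CEF/FIB tables ultimately serve (not IOS code; the interface and next-hop strings are made up), a longest-prefix match over installed prefixes looks like this:

    # Longest-prefix match over a small FIB, using the standard library.
    import ipaddress

    fib = {
        ipaddress.ip_network("0.0.0.0/0"):      "default via POS0/0",
        ipaddress.ip_network("192.0.2.0/24"):   "POS1/0 via 198.51.100.1",
        ipaddress.ip_network("192.0.2.128/25"): "POS2/0 via 198.51.100.5",
    }

    def lookup(dst: str) -> str:
        addr = ipaddress.ip_address(dst)
        best = max((net for net in fib if addr in net), key=lambda n: n.prefixlen)
        return fib[best]

    print(lookup("192.0.2.200"))   # the /25 wins over the /24 and the default
    print(lookup("203.0.113.9"))   # falls back to the default route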

Summary Multiple levels of routing/forwarding information RP provides control plane services - IP routing protocols - MPLS label exchange protocols RP maintains the RIB, FIB, LFIB LCs have a copy of the FIB and LFIB E2/3/4/4+/6 have a HW forwarding FIB and LFIB as well

Engine Architectures

Line Card - Switching Engines & ASICs Engine 0 – BMA – 622Mb Engine 1 - Salsa/BMA48 – 2.5Gb Engine 2 - PSA/TBM/RBM – 2.5Gb Engine 3 (aka ISE) – Alpha/Conga/Radar – 2.5Gb Engine 4 – RX/MCC/TX – 10Gb Engine 4+ - RX+/MCC/TX+ - 10Gb Engine 6 – Hermes/Ares/Hera – 20 Gb

Engine 0 Architecture

Engine 0 - Components
R5000 CPU + L3FE ASIC
BMA – QoS support with a performance hit
Main Memory – up to 256MB of DRAM
Packet Memory – up to 256MB SDRAM split equally between Rx and Tx

Engine 0 Architecture
[Diagram: PLIM (optics, framer, Rx/Tx POS) -> L3 Engine (L3FE, CPU, LC IOS memory, ToFab/FrFab BMA with Rx/Tx packet memory) -> Fabric Interface (ToFab/FrFab FIA, SLI) -> crossbar.]
Pure vanilla IP switching performance is about 420 kpps. All features are done on the LC CPU, or in the BMA microcode; general feature-path performance is about 200 kpps. Use compiled ACLs for best performance. Almost all software features are supported; check the GSR software roadmap.

1 port OC12 Engine 0 line card
[Board layout: optics, L3FE, CPU, RxBMA with Rx packet memory, TxBMA with Tx packet memory, FIA, SLI, Mbus agent module.]

Engine 0 – OC12 with Features CPU-based switching Provides OC-12 performance with features Extensible/flexible architecture - easy to add more features WRED/MDRR in HW with performance hit Performance: No features - ~ 420 kpps With features - ~ 250 kpps

Engine 1 Architecture

Engine 1 - Components
R5000 CPU + Salsa ASIC – Salsa = Enhanced Layer 3 Fetch Engine (L3FE); hardware IP lookup with software re-write
BMA48 – performance-enhanced BMA; no QoS support
Main Memory – up to 256MB of DRAM
Packet Memory – up to 256MB SDRAM split equally between Rx and Tx

Engine 1 Architecture
[Diagram: PLIM (optics, Giga MAC, Rx/Tx transceivers, Rx/Tx SOP) -> L3 Engine (Salsa, CPU, LC IOS memory, ToFab/FrFab BMA48 with packet memory) -> Fabric Interface (ToFab/FrFab FIA48, SLI) -> crossbar.]

1 port GigE Engine 1 line card
[Board layout: Rx BMA48, Tx BMA48, Salsa, CPU.]

Engine 1 - Salsa Hardware enhancements for IP packet validation and FIB lookup assist: Verify the packet is IPv4 with no options Identify that the packet is PPP/HDLC encapsulated Check checksum, length, TTL Update the IP header (TTL, checksum) Perform the IP lookup and cache the FIB pointer for the CPU re-write operation

E0/1 - Life of a Packet: Watching the Queues

Packet Arrives on Line Card (tofab)
Qnum   Head    Tail    #Qelem  LenThresh
----   ----    ----    ------  ---------
4 non-IPC free queues:
26626/26626 (buffers specified/carved), 50.90%, 80 byte data size
  1    101     26726   26625   65535
16184/16184 (buffers specified/carved), 30.94%, 608 byte data size
  2    26727   42910   16184   65535
7831/7831 (buffers specified/carved), 14.97%, 1568 byte data size
  3    42911   50741   7831    65535
IPC Queue:
100/100 (buffers specified/carved), 0.19%, 4112 byte data size
  30   67      66      100     65535
Raw Queue:
  31   0       0       0       65535
ToFab Queues:
Slot
  0    0       0       0       65535
  1    0       0       0       65535
  2    0       0       0       65535
  3    0       0       0       65535
  4    0       0       0       65535
  5    0       0       0       65535
  6    0       0       0       65535
  7    0       0       0       65535
Mcast  0       0       0       65535
When the packet arrives from the PLIM, we try to allocate a free queue for that packet size. When we allocate the buffer for the new packet, the #Qelem value is decremented by 1 (max value was 26626). At this point, the packet is stored in a buffer within the packet memory, with a pointer to the packet maintained inside the BMA ASIC.

Move the Buffer onto the Raw Q (tofab)
Qnum   Head    Tail    #Qelem  LenThresh
----   ----    ----    ------  ---------
4 non-IPC free queues:
26626/26626 (buffers specified/carved), 50.90%, 80 byte data size
  1    101     26726   26625   65535
16184/16184 (buffers specified/carved), 30.94%, 608 byte data size
  2    26727   42910   16184   65535
7831/7831 (buffers specified/carved), 14.97%, 1568 byte data size
  3    42911   50741   7831    65535
IPC Queue:
100/100 (buffers specified/carved), 0.19%, 4112 byte data size
  30   67      66      100     65535
Raw Queue:
  31   0       0       1       65535
ToFab Queues:
Slot
  0    0       0       0       65535
  1    0       0       0       65535
  2    0       0       0       65535
  3    0       0       0       65535
  4    0       0       0       65535
  5    0       0       0       65535
  6    0       0       0       65535
  7    0       0       0       65535
Mcast  0       0       0       65535
After buffering the packet, we send a copy of the packet header to the CPU via the raw queue. When the packet is placed on the raw queue, the #Qelem value is incremented. This triggers a CPU interrupt and begins the actual forwarding lookup.

FIB Result and ToFab Queuing (tofab)
Qnum   Head    Tail    #Qelem  LenThresh
----   ----    ----    ------  ---------
4 non-IPC free queues:
26626/26626 (buffers specified/carved), 50.90%, 80 byte data size
  1    101     26726   26625   65535
16184/16184 (buffers specified/carved), 30.94%, 608 byte data size
  2    26727   42910   16184   65535
7831/7831 (buffers specified/carved), 14.97%, 1568 byte data size
  3    42911   50741   7831    65535
IPC Queue:
100/100 (buffers specified/carved), 0.19%, 4112 byte data size
  30   67      66      100     65535
Raw Queue:
  31   0       0       0       65535
ToFab Queues:
Slot
  0    0       0       0       65535
  1    0       0       0       65535
  2    0       0       0       65535
  3    0       0       0       65535
  4    0       0       0       65535
  5    0       0       0       65535
  6    0       0       1       65535
  7    0       0       0       65535
Mcast  0       0       0       65535
When the CPU returns the buffer to the BMA with the results of the forwarding decision, we queue the packet onto the ToFab queue for the destination slot. In this example, the destination interface is located in slot 6. Decrement the #Qelem for the raw queue, and increment the #Qelem counter for the destination ToFab slot.

Return the Buffer to the Free Q (tofab)
Qnum   Head    Tail    #Qelem  LenThresh
----   ----    ----    ------  ---------
4 non-IPC free queues:
26626/26626 (buffers specified/carved), 50.90%, 80 byte data size
  1    101     26726   26626   65535
16184/16184 (buffers specified/carved), 30.94%, 608 byte data size
  2    26727   42910   16184   65535
7831/7831 (buffers specified/carved), 14.97%, 1568 byte data size
  3    42911   50741   7831    65535
IPC Queue:
100/100 (buffers specified/carved), 0.19%, 4112 byte data size
  30   67      66      100     65535
Raw Queue:
  31   0       0       0       65535
ToFab Queues:
Slot
  0    0       0       0       65535
  1    0       0       0       65535
  2    0       0       0       65535
  3    0       0       0       65535
  4    0       0       0       65535
  5    0       0       0       65535
  6    0       0       0       65535
  7    0       0       0       65535
Mcast  0       0       0       65535
After the buffer is handed off to the FIA interface, we return the packet back to the free queue, and the entire cycle begins again. Here, we decrement the ToFab queue, and increment the free queue where we originally obtained the buffer.

Egress Card Receives the Packet (frfab)
Qnum   Head    Tail    #Qelem  LenThresh
----   ----    ----    ------  ---------
4 non-IPC free queues:
26560/26560 (buffers specified/carved), 50.90%, 80 byte data size
  1    101     26660   26559   65535
16144/16144 (buffers specified/carved), 30.94%, 608 byte data size
  2    26661   42804   16144   65535
7811/7811 (buffers specified/carved), 14.97%, 1568 byte data size
  3    42805   50615   7811    65535
1562/1562 (buffers specified/carved), 2.99%, 4544 byte data size
  4    50616   52177   1562    65535
IPC Queue:
100/100 (buffers specified/carved), 0.19%, 4112 byte data size
  30   78      77      100     65535
Raw Queue:
  31   0       83      0       65535
Interface Queues:
  0    0       0       0       65535
  1    0       0       0       65535
  2    0       0       0       65535
  3    0       0       0       65535
On the from-fabric side, the process is very similar to the ToFab side, with the steps essentially reversed. First, we receive the packet from the FrFab FIA. Again, we allocate a buffer from the free queue (decrement #Qelem for that buffer pool).

Queuing for Transmission (frfab)
Qnum   Head    Tail    #Qelem  LenThresh
----   ----    ----    ------  ---------
4 non-IPC free queues:
26560/26560 (buffers specified/carved), 50.90%, 80 byte data size
  1    101     26660   26559   65535
16144/16144 (buffers specified/carved), 30.94%, 608 byte data size
  2    26661   42804   16144   65535
7811/7811 (buffers specified/carved), 14.97%, 1568 byte data size
  3    42805   50615   7811    65535
1562/1562 (buffers specified/carved), 2.99%, 4544 byte data size
  4    50616   52177   1562    65535
IPC Queue:
100/100 (buffers specified/carved), 0.19%, 4112 byte data size
  30   78      77      100     65535
Raw Queue:
  31   0       83      0       65535
Interface Queues:
  0    0       0       1       65535
  1    0       0       0       65535
  2    0       0       0       65535
  3    0       0       0       65535
After storing the packet, the BMA uses the information from the buffer header to determine which interface queue to place the packet in. The packet buffer is transferred to the appropriate transmit queue for the destination interface.

Return the Buffer to the Free Q (frfab)
Qnum   Head    Tail    #Qelem  LenThresh
----   ----    ----    ------  ---------
4 non-IPC free queues:
26560/26560 (buffers specified/carved), 50.90%, 80 byte data size
  1    101     26660   26560   65535
16144/16144 (buffers specified/carved), 30.94%, 608 byte data size
  2    26661   42804   16144   65535
7811/7811 (buffers specified/carved), 14.97%, 1568 byte data size
  3    42805   50615   7811    65535
1562/1562 (buffers specified/carved), 2.99%, 4544 byte data size
  4    50616   52177   1562    65535
IPC Queue:
100/100 (buffers specified/carved), 0.19%, 4112 byte data size
  30   78      77      100     65535
Raw Queue:
  31   0       83      0       65535
Interface Queues:
  0    0       0       0       65535
  1    0       0       0       65535
  2    0       0       0       65535
  3    0       0       0       65535
Once the packet has been handed to the PLIM and sent out the interface, the buffer is returned to the free queue it was originally allocated from: the interface queue #Qelem is decremented and the free queue #Qelem is incremented, completing the cycle.
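As a compact, purely illustrative recap of the ToFab counter movement shown in the dumps above (queue names shortened for readability):

    # Follow one packet through the ingress (ToFab) queues of an Engine 0/1 card.
    counters = {"freeq_608B": 16184, "rawq": 0, "tofab_slot6": 0}

    def step(event, **delta):
        for q, d in delta.items():
            counters[q] += d
        print(f"{event:30s} {counters}")

    step("packet arrives from PLIM",   freeq_608B=-1)                  # buffer allocated
    step("header queued to CPU",       rawq=+1)                        # RawQ interrupt
    step("FIB result, queue to slot",  rawq=-1, tofab_slot6=+1)        # ToFab slot queue
    step("cells handed to the FIA",    tofab_slot6=-1, freeq_608B=+1)  # buffer returned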

Engine 2 Architecture

Engine 2 Overview First programmable, hardware-based forwarding engine Multi-million PPS with some features Up to 4Mpps performance (no features)

Engine 2 Architecture
[Diagram: PLIM (optics, framer, Rx/Tx SOP) -> L3 Engine (PSA with PSA memory, RBM/TBM with packet memory, CPU, Salsa, LC IOS memory) -> Fabric Interface (ToFab/FrFab FIA48, SLI) -> crossbar.]
The PSA runs various microcode bundles to perform feature checking: PIRC (a limited subset of iCAR features), PSA ACLs (lots of restrictions, but OK for short ACLs), sampled NetFlow, BGP-PA, etc. Microcode features are mutually exclusive with few exceptions (see the software roadmap).

1 port OC48 POS Engine 2 line card
[Board layout: PSA, RBM, TBM.]

Engine 2 - Components
R5000 CPU -> slow path (CPU computed): CEF tables, ICMPs, IP options, etc.
PSA (Packet Switching ASIC) -> fast path: microcoded IP/MPLS lookup and feature processing
RBM/TBM (Receive/Transmit Buffer Manager): hardware WRED, MDRR
Packet Memory: 256MB SDRAM, can be upgraded to 512MB SDRAM
PSA Memory: PSA copy of the FIB table

Engine 2 – Rx Packet flow
PLIM: the SONET/SDH framer extracts packets from the SONET/SDH payload; an indication of the input interface and the packet header are passed to the PSA, and the payload is passed to the RBM.
PSA: packet validation, IP/MPLS lookup, feature processing (ACLs, CAR, NetFlow, etc.), append the buffer header, determine the loq, oq and freeq for the packet.
RBM: ToFab queueing, WRED, MDRR; the packet is segmented into ciscoCells.
FIA/SLI: add the CRC to each ciscoCell, send a transmission request to the SCA, 8B/10B encode and send the cells to the fabric.

Engine 2 – PSA Forwarding The Packet Switching ASIC is an IP and TAG forwarding engine The ASIC contains a 6 stage pipeline, Pointer and Table Lookup memory As packets move through the PSA pipeline, the forwarding decision and feature processing is completed

PSA Architecture
Each stage has a 25 clock budget @ 100MHz = 250ns, i.e. 4Mpps.
External SSRAM holds the FIB tree (256K) and the leaves/adjacencies/statistics (256K).
Pipeline stages:
- Fetch: MAC header checking, protocol ID checking, IP header checking, extraction of the IP/MPLS address fields
- PreP: microcode engine which performs checks on the packet (protocol, length, TTL, IP checksum) and extracts the appropriate address(es) for the main lookup; some feature processing
- PLU: IP/MPLS lookup machine
- TLU: adjacency lookup, per-adjacency counters
- PoP: microcode engine which applies the results of the PLU/TLU lookup to the packet; tasks include CoS handling, MTU check, special case tests, setup of the gather stage, feature processing, etc.
- Gather: modifications to the packet header (e.g. pushing MPLS labels); prepares the packet for transmission to the RBM
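A quick sanity check of the per-stage budget quoted above:

    # 25 clocks per stage at 100 MHz = 250 ns per packet slot = 4 Mpps.
    clock_hz = 100e6
    clocks_per_stage = 25
    stage_time_ns = clocks_per_stage / clock_hz * 1e9   # 250.0
    packets_per_second = clock_hz / clocks_per_stage    # 4,000,000.0
    print(stage_time_ns, packets_per_second)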

RBM: Rx Queue Manager The RBM manages the linecard’s receive packet memory buffers and queues There are two major types of queues in the RBM: LowQs (16 FreeQs, 1 RawQ, an IPC FreeQ and spare queues) 2048 unicast Output Queues and 8 multicast queues 16 slots per chassis, 16 ports per slot, 8 queues per port = 2048 queues One hpr (high priority) queue is allocated per destination slot/port.

TBM: Tx Queue Manager The TBM manages the linecard’s transmit packet memory buffers and queues Three types of queues: Non-IPC freeQs, 1 CPU RawQ, IPC FreeQ 128 Output Queues Multicast RawQ 8 CoS queues per output port, 16 ports = 128 queues

Engine 2 – Tx Packet flow
FIA/SLI: remove the 8B/10B encoding, verify and remove the CRC from each ciscoCell, send the cells to the TBM.
TBM: re-assemble the packet from ciscoCells, FrFab queueing, WRED, MDRR, multicast duplication; append the L2 header and send the packet to the PLIM.
PLIM: the SONET/SDH framer puts the packets into the SONET/SDH payload.

E2 Feature Support Designed as the forwarding ASIC for a “backbone” card, i.e. it does not natively support any features Features like ACLs, SNF and BGP PA were added later, but take a performance hit Most new features require a separate ucode load and are mutually exclusive Performance varies with features (e.g. ACLs): 128 line iACLs – 800 kpps 128 line oACLs – 675 kpps 448 line iACLs – 690 kpps 448 line oACLs – 460 kpps

Engine 3 - IP Services Engine (ISE)

ISE Overview Programmable, hardware-based forwarding engine Up to 4Mpps performance (with features) Uses TCAMs for advanced feature processing Traffic shaping and advanced QoS support Flexible mapping of queues

ISE – Architecture
[Diagram: PLIM (optics, FUSCILLI) -> L3 Engine (Rx ALPHA with TCAM and FIB table memory, RADAR with packet memory, SPECTRA, GULF, CPU with LC IOS memory, PICANTE, Tx ALPHA with TCAM, CONGA with packet memory) -> Fabric Interface (FIA, SLI) -> crossbar.]

4xOC12 POS ISE Linecard
[Board layout: FUSCILLI, RX TCAM, RX ALPHA, RADAR, R7K CPU, PICANTE, CONGA, SPECTRA, TX ALPHA, GULF, TX TCAM.]

ISE – Rx Packet flow
PLIM: the SONET/SDH framer handles channelization and extracts packets from the SONET/SDH payload; an indication of the input interface and the packet header are passed to the Rx ALPHA, and the payload is passed to RADAR.
Rx ALPHA: packet validation, IP/MPLS lookup, feature processing (ACLs, CAR, NetFlow, etc.), append the buffer header, determine the loq, oq and freeq for the packet.
RADAR: ToFab queueing, input rate shaping, WRED, MDRR; the packet is segmented into ciscoCells.
FIA/SLI: add the CRC to each ciscoCell, send a transmission request to the SCA, 8B/10B encode and send the cells to the fabric.

ISE - ALPHA ALPHA Advanced Layer 3 Packet Handling ASIC Performs forwarding, classification, policing and accounting Two ALPHA chips, one in the receive path, one in the transmit path. This allows features to be implemented in both the ingress (RX) and egress (TX) paths 11 pipeline stages 3 micro-code stages for future expandability Utilizes TCAMs to perform high-speed feature processing. Each ALPHA has its own TCAM

11 stages of the ALPHA Pipeline
External SSRAM holds the FIB tree and leaves/adjacencies/statistics; an external CAM + SSRAM backs the CAM processor stages.
- Fetch: MAC header checking, protocol ID checking, IP header checking, extraction of the IP/MPLS address fields
- PreP: microcoded stage capable of any general-purpose activity on the packet
- PLU: MTRIE lookup machine
- PCM: TCAM access for altering PLU results (PBR, MPLS)
- TLU: adjacency lookup, per-adjacency counters
- CAMP (3 stages): CAM processor – lookups for xACL (permit/deny), CAR token bucket maintenance, NetFlow counter updates
- MIP: microcoded stage capable of any general-purpose activity on the packet
- PoP: microcoded stage – performs feature actions, handling exception packets
- Gather: processing of the packet structure, including stripping the old input encapsulation, stripping old MPLS labels if necessary, pushing new labels and computation of the new IP checksum

RADAR: Rx Queue Manager The RADAR manages the linecard’s receive packet memory buffers and queues There are three major types of queues in RADAR: LowQs (16 FreeQs, 3 RAWQs, an IPC FreeQ and spare queues) 2048 Input Shape Queues (rate-shaping) 2048 unicast Output Queues (16 unicast high priority queues) and 8 multicast queues One local output-queue is allocated per destination interface One hpr (high priority) queue is allocated per destination slot

RADAR: Input Shape Queues There are 2048 queues dedicated to ingress traffic shaping each with an independent ‘leaky bucket’ circuit. Each flow can be shaped in increments of 64kbps (from 64kbps up to line rate)
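The per-queue shaping can be pictured with a generic token/leaky-bucket sketch; this is a software analogue for illustration only (not the RADAR implementation), and the rate and burst values are arbitrary:

    # Generic shaper: credit accrues at the configured rate, capped at the burst.
    class Shaper:
        def __init__(self, rate_bps: int, burst_bytes: int):
            self.rate = rate_bps / 8.0       # credit in bytes per second
            self.burst = burst_bytes
            self.credit = burst_bytes
            self.last = 0.0

        def allow(self, pkt_bytes: int, now: float) -> bool:
            self.credit = min(self.burst, self.credit + (now - self.last) * self.rate)
            self.last = now
            if pkt_bytes <= self.credit:
                self.credit -= pkt_bytes
                return True      # conforms to the shaped rate, send it
            return False         # exceeds the rate, hold it in the shape queue

    q = Shaper(rate_bps=10 * 64_000, burst_bytes=1500)        # a 640 kbps shape
    print(q.allow(1500, now=0.0), q.allow(1500, now=0.001))   # True False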

RADAR: Rx Queue Manager The Rx ALPHA decides which type of queue will be used for each packet.

ISE – TX packet flow
FIA/SLI: remove the 8B/10B encoding, verify and remove the CRC from each ciscoCell, send the cells to the Tx ALPHA.
Tx ALPHA: adjacency lookup, feature processing (ACLs, CAR, MQC, etc.), update the output_info field of the buffer header with information from the adjacency.
CONGA: re-assemble the packet from ciscoCells, FrFab queueing, output rate shaping, WRED, MDRR, multicast duplication; append the L2 header and send the packet to the PLIM.
PLIM: the SONET/SDH framer handles channelization and puts the packets into the SONET/SDH payload.

CONGA: Tx Queue Manager The CONGA manages the linecard’s transmit packet memory buffers and queues Three types of queues: Non-IPC freeQs, 3 CPU RawQs, IPC FreeQ 2048 Output Queues (2 leaky buckets per queue for rate-shaping) Multicast RawQ Output queues divided equally among output ports Support for 512 logical interfaces Max Bandwidth shaping per port, Min and Max Bandwidth shaping per queue

CONGA Each ‘Shaped Output’ queue has a built in dual-leaky bucket mechanism with a programmable maximum and minimum rate (I.e. a DRR bandwidth guarantee) A second level of shaping is available per ‘Port’.

Engine 4+ - 10G Edge Services

Engine4+ “Edge Services”
[Diagram: PLIM (optics, PHAD optical interface ASIC, framer) -> L3 Engine (RX+ with lookup MTRIE, MCC with packet memory, CPU with LC IOS memory, Picante, TX+ with packet memory) -> Fabric Interface (10G FIAs, SerDes) -> backplane.]

Engine 4+ - Components R5000 CPU -> Slow Path Slow path (CPU computed) CEF tables, ICMPs RX+ ASIC -> Fast Path Hardware IP/MPLS lookup inc. multicast, CAR, ACLs, MPLS-PE MCC ASIC Hardware WRED, MDRR Packet Memory 256MB SDRAM, upgradeable to 512MB SDRAM (and 1024MB in the future) TX+ ASIC -> Fast Path Hardware outbound traffic shaping, ACLs

10x1 GigE Engine 4 linecard
[Board layout: 10x1GE PLIM, RX, MCC, TX, Picante.]

RX+ - Packet processing ASIC Non-programmable high-speed ASIC providing 25 Mpps switching capacity Virtual CAM (vCAM) for features ACLs, CAR and PBR Line-rate for 40 byte packets at /32 FIB lookup

MCC – ToFab Queueing ASIC Manages receive packet memory and queues Three types of queues: Non-IPC freeQs, 8 CPU RawQs, IPC FreeQ 2048 Unicast VOQs, 8 multicast VOQs One high priority queue per destination slot/port

Engine4+ - RX Packet Flow
PLIM: the SONET/SDH framer extracts packets from the SONET/SDH payload; protocol identification, packet length verification, PLIM header appended.
RX+: IP unicast and multicast lookup, MPLS lookup, CAR/ACL feature processing; append the buffer header and update the loq, oq, and ideal freeq values.
MCC: manages the packet buffers (packet SDRAM), performs WRED and MDRR.
FIA: packets are segmented into cells, a packet transmission request is made, and the ciscoCell CRC is appended.
SerDes: 8B/10B encoding, cells transmitted over the fabric.

Engine4+ - TX Packet Flow
FIA/SerDes: cells are re-assembled into packets, the CRC is checked, the packet header is reconstructed, and packets are scheduled to TX.
TX+: multicast packets duplicated, header re-written for output (MAC re-write), ACL/CAR performed, RED/WRED performed, packets queued for output (16 ports, 8 queues/port), MDRR scheduling and output shaping performed.
PLIM: PLIM header removed, packet segments queued to SONET channels, packets sent within SONET payloads (POS).

TX+ - TX Queueing ASIC Manages transmit packet memory and queues Four types of queues: Non-IPC freeQs, 8 CPU RawQs, IPC FreeQ, Multicast Raw Queue 128 Unicast OQs, 8 multicast OQs Per-destination port LLQ Performs output CAR, rate-shaping

Engine 6 – 20Gb Edge Services

Engine 6
[Diagram: PLIM (optics, framer & PHAD, Zeus, EROS) -> L3 Engine (Hermes with lookup MTRIE and TCAM, Ares with packet memory, CPU with LC IOS memory, Picante, Hera with TCAM and packet memory) -> Fabric Interface (TFIA, FFIA, SERDES) -> backplane.]

Engine 6 - Components RM7000 CPU -> Slow Path Slow path (CPU computed) CEF tables, ICMPs Hermes ASIC -> Fast Path – 50Mpps @ 40 bytes Hardware IP/MPLS lookup inc. multicast, CAR, ACLs, MPLS-PE, 6PE, PBR, loose & strict uRPF Ares ASIC Hardware WRED, MDRR Hera ASIC -> Fast Path Hardware outbound traffic shaping, ACLs, SNF, Mcast 512MB DRAM route memory 512MB RLDRAM packet memory TCAM4 ASICs attached to Hermes and Hera for feature processing

Engine6 2xOC192 layout
[Board layout: power supplies, optics, MBUS, CPU with CPU memory, PICANTE, Hermes + Ares, Hera with TCAM, Zeus, Eros with TCAM (components duplicated per port).]

Engine6 8xOC48 layout
[Board layout: SFP pluggable optics, surface-mounted RLDRAMs.]

Engine 6 – Packet flow
PLIM (framer + PHAD integrated): Layer-1 processing, alarms, CRC check, APS; packet buffering; verify packet length; append the PLIM header.
Hermes: IP/MPLS lookup, TCAM-based feature processing (ACL/CAR/PBR/VRFs), packet modification (TTL adjust, ToS adjust, IP checksum adjust); append the buffer header and update loq, oq, etc.
Ares (queueing ASIC): manages the packet buffers, performs WRED and MDRR.
TFIA/FFIA: packets are segmented into cells, a packet transmission request is made, and the ciscoCell CRC is appended; in the transmit direction, cells are re-assembled into packets, the CRC is checked, the packet header is reconstructed, and packets are scheduled to TX.
Hera: multicast packet duplication, MAC rewrite for output, TCAM-based output features (ACL/CAR), output packet queuing, RED/WRED, MDRR scheduling, output traffic shaping.
PLIM (egress): Layer-1 processing; PLIM header removed, packet segments queued to SONET channels, packets sent within SONET payloads (POS).

TCAM based feature TCAM used to implement key features 32000 ingress entries, 32000 egress entries shared between… ACL CAR – 32 car rules per port PBR VRF Selection Security ACLs are not merged

TCAM Basics
TCAM - Ternary Content Addressable Memory – match on 0, 1, X (don’t care)
ACL/CAR/PBR/VRF rules from the CLI are converted into Value Mask Result (VMR) format to be inserted into the TCAM:
- Value cells – key values (the ACL/CAR/PBR/VRF values)
- Mask cells – the significant value bits to be matched
- Result – the action taken on a (Value && Mask) match: Security ACL – permit/deny; CAR – pointer to CAR buckets; PBR – adjacency; VRF selection – VRF root
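A toy Value/Mask/Result match in Python, mirroring the ternary behaviour described above (the field width and example entries are illustrative only):

    # First-match VMR lookup: a key matches when (key & mask) == (value & mask).
    def tcam_lookup(key, entries):
        """entries: ordered list of (value, mask, result) tuples."""
        for value, mask, result in entries:
            if key & mask == value & mask:
                return result
        return None

    entries = [
        (0x10, 0xF0, "permit"),   # matches 0001xxxx (lower nibble is don't-care)
        (0x00, 0x00, "deny"),     # mask 0 matches everything: the catch-all entry
    ]
    print(tcam_lookup(0x17, entries))   # -> permit
    print(tcam_lookup(0x27, entries))   # -> deny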

QoS flow
[Diagram: ingress (Hermes/Ares, tofab) – ACL, iCAR, WRED into CoS queues served by MDRR, with drops possible at each stage; egress (Hera, frfab) – ACL, oCAR, WRED into CoS queues served by MDRR with output shaping, with drops possible at each stage.]

QoS support
2064 tofab queues: 16x16x8 = 2048 unicast queues, 8 local CPU queues, 8 multicast queues; one priority queue per destination port
136 frfab queues: 16x8 = 128 unicast queues; one priority queue per port

Engine 4+ - Line Card Family OC192 POS 4 x OC48 POS 1 x 10 GE 10 x 1 GE - EOS Modular GE 2 x OC48 DPT* OC192 DPT*