
CRS-1 System Architecture Introduction



Presentation on theme: "CRS-1 System Architecture Introduction"— Presentation transcript:

1 CRS-1 System Architecture Introduction
Sudhir Kumar, CCIE (35219), HTTS, Cisco Systems
Raj Pathak, CCIE (38760), HTTS, Cisco Systems

2 CRS-1 System Architecture - Agenda
What is CRS?
CRS-1 Router Chassis Options
CRS-1 System Configuration
CRS-1 System Closure View
CRS-1 16-Slot Hardware Overview
CRS-1 8-Slot Hardware Overview
CRS-1 4-Slot Hardware Overview
CRS-1 Forwarding and Chassis Common Elements
Switch Fabric
CRS-1 Packet Flow

3 What is the CRS Platform?
High-performance core, peering, and multi-service edge router
Industry-leading scalability and reliability
Wide range of high-speed interface options

Presenter note: You should normally have a log message that can be correlated with a behavioral description. Even without the log message, we should be able to deduce which process is involved in the behavior described. We will cover traces in our tools section and stack decoding later in the presentation. Restarting processes is not always the workaround for process-related problems and can lead to unwanted side effects if you don't know what you're doing; see examples in later slides.

4 CRS-1 Router Chassis Options
Diagram: front and back views of the CRS/16, CRS/8, and CRS/4 chassis, showing the locations of the RPs, MSCs, PLIMs, and fabric cards.

5 Cisco CRS-1 System Configurations
Single-shelf system:
4, 8, or 16 MSC and PLIM slots
No fabric chassis required
4 or 8 fabric cards in the line card chassis
Multishelf system (1.2T to 92T):
2 or more 16-slot line card chassis
1 to 8 fabric chassis containing fabric cards
Configuration examples: 2+1, 3+1, 4+2, 4+4, etc.
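The "1.2T to 92T" scaling range can be checked with back-of-the-envelope arithmetic. This sketch assumes 40 Gbps per slot in each direction and a maximum of 72 line card chassis; those figures are assumptions, not stated on the slide.

```python
# Back-of-the-envelope CRS-1 multishelf capacity check.
# Assumptions (not from the slide): 40 Gbps per MSC/PLIM slot per
# direction, and a maximum of 72 line card chassis in a multishelf system.
SLOT_RATE_GBPS = 40
SLOTS_PER_16_SLOT_CHASSIS = 16
MAX_LINE_CARD_CHASSIS = 72  # assumed upper bound

# Both directions counted, converted to Tbps.
per_chassis_tbps = SLOT_RATE_GBPS * SLOTS_PER_16_SLOT_CHASSIS * 2 / 1000
print(per_chassis_tbps)  # 1.28, i.e. roughly the "1.2T" single-shelf figure

max_system_tbps = per_chassis_tbps * MAX_LINE_CARD_CHASSIS
print(round(max_system_tbps))  # 92, matching the "92T" upper bound
```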

6 CRS-1 Closure View
Diagram: chassis cutaway showing the PLIM, RP, MSC, and fabric card locations.

7 CRS-1 System Components

8 CRS 16 Slot Linecard Chassis
Midplane design with front and rear access
Front: 16 PLIM slots (same on all chassis); 2 RP slots + 2 fan controllers (16-slot specific)
Back: 16 MSC slots (same on all chassis); 8 fabric cards (16-slot specific)
Power: ~13.2 kW (AC or DC)

9 CRS 8 Slot Linecard Chassis
Midplane design:
Front: 8 PLIM slots; 2 RP slots (same as CRS-4)
Back: 8 MSC slots; 4 fabric cards (8-slot specific)
Power: 7.5 kW DC

10 CRS 4 Slot Linecard Chassis
Midplane design:
Front: 4 PLIM slots; 2 RP slots (same as CRS-8)
Back: 4 LC (MSC) slots; 4 fabric cards (4-slot specific)
Power: 4.7 kW DC

11 CRS-1 Forwarding and Chassis Common Elements
The Cisco CRS-1 Modular Services Card (MSC) is a high-performance Layer 3 forwarding engine. The card is responsible for all packet processing, including quality of service (QoS), classification, policing, and shaping.
The interface module provides the physical connections to the network, including Layer 1 and Layer 2 functions. Interface modules for the Cisco CRS-1 include: 1-port OC-768c/STM-256c PoS, 4-port OC-192c/STM-64c PoS, 16-port OC-48c/STM-16c PoS, 8-port 10 Gigabit Ethernet, 1-port OC-768c/STM-256c tunable WDMPOS, and 4-port 10 Gigabit Ethernet tunable WDMPHY.
MSC – Modular Services Card; PLIM – Physical Layer Interface Module

12 PLIM / MSC Line Card Architecture
Diagram: across the midplane, the ingress packet flow runs PLIM (OC-192 framer and optics, PLA) → PSE L3 engine → ingressQ (RX) → to fabric; the egress packet flow runs from fabric → fabricQ (TX) → PSE L3 engine → egressQ → PLIM (OC-192 framer and optics). A CPU with the cpuctrl ASIC sits alongside for control.

13 PLIM / MSC Line Card Architecture
CPU — Handles control-plane functions: MSC configuration, management, protocol control, and exception packet handling.
SP — Controls card power-up, environmental monitoring, and Ethernet communication with the chassis' RP boards.
Packet Switch Engine (PSE) — Primary L3 feature-processing ASIC: ACL checking, uRPF, and BGP-PA, as well as QoS functions such as policing and WRED.
IngressQ/EgressQ — IngressQ is the RX queuing ASIC; EgressQ is the TX queuing ASIC. They support features such as P2MDRR, low-latency queuing, and shaping. In addition, the IngressQ contains the fabric queues and the packet-to-fabric-cell conversion functionality.
FabricQ — Reassembles fabric cells back into full packets and provides low-latency and precedence-based queuing.
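The WRED function the PSE performs can be illustrated with the classic drop-probability rule: no drops below a minimum threshold, forced drops above a maximum threshold, and a linearly increasing drop probability in between. This is a generic sketch of the algorithm; the thresholds and maximum probability are illustrative values, not CRS-1 parameters.

```python
import random

def wred_drop(avg_qlen, min_th, max_th, max_p):
    """Classic WRED drop decision. avg_qlen is the (averaged) queue
    depth; min_th/max_th are the thresholds; max_p is the drop
    probability reached at max_th. Values here are illustrative."""
    if avg_qlen < min_th:
        return False          # below min threshold: never drop
    if avg_qlen >= max_th:
        return True           # above max threshold: always drop
    # Between thresholds: drop probability grows linearly with depth.
    drop_p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < drop_p

print(wred_drop(10, 20, 40, 0.1))  # False: queue is below min_th
print(wred_drop(50, 20, 40, 0.1))  # True: queue is above max_th
```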

14 Route Processor - Functional Overview
System master — leads system initialization and operation
Elects the primary RP (all systems have redundant RPs)
Coordinates fabric, MSC, and PLIM boot
Provides physical console access, management Ethernet, and boot media
Onboard disk for logging
The RP controls all shelf-management functions and runs the routing protocols: BGP, OSPF, IS-IS, EIGRP, RIP, LDP, MPLS-TE, ...
Builds forwarding tables and sends them to the MSCs
2 GE ports for multi-chassis control
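The forwarding tables the RP builds and distributes to the MSCs are consulted by longest-prefix match. A minimal sketch of that lookup, with made-up prefixes and next-hop labels, looks like this:

```python
import ipaddress

# Toy longest-prefix-match table standing in for the FIB the RP
# distributes to each MSC. Prefixes and next-hop labels are made up.
FIB = {
    ipaddress.ip_network("10.0.0.0/8"): "MSC slot 3",
    ipaddress.ip_network("10.1.0.0/16"): "MSC slot 5",
    ipaddress.ip_network("0.0.0.0/0"): "MSC slot 0",  # default route
}

def lookup(dst):
    """Return the next hop for dst: the longest (most specific)
    matching prefix wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in FIB if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return FIB[best]

print(lookup("10.1.2.3"))   # "MSC slot 5": the /16 beats the /8
print(lookup("10.9.9.9"))   # "MSC slot 3"
print(lookup("192.0.2.1"))  # "MSC slot 0": falls through to default
```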

15 Route Processor - Architecture
Diagram: the RP connects to the midplane via LC FE links through an FE/GE switch, with CPU, CPU interface, memory controller, IDE, PCMCIA, SFP module, aux and console ports, and a fabric connection.
A single Broadcom 5618 switch is used.
The Squirt complex replaces the IngressQ and CPUCtrl ASICs; the IngressQ functionality is scaled down and renamed Spiller.
The Squirt ASIC reduces cost, power, and board space; the IngressQ and CPUCtrl functions remain independent.
Squirt is connected to the fabric with 8 links (1 link per plane) for a total of 16 Gbps (the effective data rate is somewhat lower).
An FPGA handles card presence detect, RP mastership, and PROM presence; a 10/100/1GE link serves ingress management.

16 Fan Controllers Overview
Fan controllers are present in both the 16-slot and 8-slot systems; they provide monitoring and control for the fan trays.
CRS-16: two fan controller boards, situated on the front of the chassis above the RP slots; these boards also provide the BITS signal input interfaces.
The boards operate in tandem; there is no primary/backup relationship.
At least one fan controller must be operational for the system to operate, otherwise the system will shut down.
CRS-8: fan controller functionality is integrated into the fan tray; BITS input is on the RP.
Based on the reported state, fan speeds are increased or decreased as needed.

17 Alarm Modules on CRS-1 16 slot
The alarm module is available only on the CRS-1 16-slot chassis.
It sits on the right side of the power shelf and draws power from the power shelf directly.
Major, minor, and critical LEDs.
Monitors power shelf status and indicates any system alarms.
Has its own service processor.
If the system is operating properly, "IOS-XR" appears on the LED display.

18 Switch Fabric
The switch fabric is implemented through switch fabric cards installed in the chassis. It uses a cell-switched, buffered, three-stage (S123) Benes switch fabric architecture.
The MSC divides packets into cells and forwards the cells toward the appropriate egress (destination) MSC.
Fabric cards:
S123 — implements all three fabric stages; used in a standalone single-shelf system
S13 — implements the 1st and 3rd stages; installed in the line card chassis of a multishelf system
S2 — implements the 2nd stage; installed in the fabric card chassis of a multishelf system
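The division of labor among the three stages can be sketched as follows. This is an illustrative model of a three-stage fabric, not the actual CRS-1 cell-arbitration logic; the function name, spray rule, and field labels are all hypothetical.

```python
def route_cell(cell_seq, dst_slot, n_middle=8):
    """Illustrative 3-stage path choice: stage 1 load-balances
    ("sprays") cells across middle-stage elements, while stages 2
    and 3 steer purely by destination. Hypothetical sketch only."""
    middle = cell_seq % n_middle  # simple round-robin spray at S1
    return {
        "S1": f"spray to middle element {middle}",
        "S2": f"switch toward destination slot {dst_slot}",
        "S3": f"deliver to destination slot {dst_slot}",
    }

# Consecutive cells take different middle-stage elements, but all
# converge on the same destination slot.
print(route_cell(10, 5)["S1"])
print(route_cell(11, 5)["S1"])
print(route_cell(10, 5)["S3"])
```

The key property this illustrates is that only stage 1 makes a free choice; once a middle element is picked, the path to any destination is determined, which is what lets the fabric spread load without misordering the per-destination steering.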

19 Cell Packing
Packets are segmented into cells for efficient switching by the switch fabric hardware.
Each 136-byte cell consists of a 12-byte header, a 120-byte payload (four 30-byte units), and a 4-byte CRC.
1 or 2 packets per cell; packets must start on a 30-byte boundary, and packets sharing a cell must have the same priority and cast.
An entire cell travels over a single fabric plane.
The fabric cell header includes (main fields):
Data flag — is this cell data or control
Cast — unicast or multicast
Priority — high or low
Vital bit — do not discard within the fabric (set if either packet in the cell is vital)
Fabric group ID — for a multicast packet, tells S2 and S3 how to replicate
The payload can hold one or two packets; the ability to put two packets into a cell improves efficiency with small packets. Multiple packets within a cell must start on 30-byte boundaries and have the same cast and priority.
A common question is "why a 120-byte payload?" The answer is that it works well with real packet sizes and with two packets per cell, and, less obviously, the underlying memory has a 30-byte unit of access.
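The cell arithmetic above can be sketched directly. The constants come from the slide; the two helper functions are illustrative (they ignore the same-priority/same-cast checks and the exact packing policy the hardware uses):

```python
CELL, HDR, PAYLOAD, CRC = 136, 12, 120, 4
UNIT = 30  # packets must start on a 30-byte boundary in the payload

def cells_needed(pkt_len):
    """Cells required to carry one packet on its own (ceiling of
    pkt_len / 120); ignores the two-packets-per-cell optimization."""
    return -(-pkt_len // PAYLOAD)

def fits_in_one_cell(pkt_a, pkt_b):
    """Could two packets share a cell? Packet B must start on a
    30-byte boundary, so packet A is padded up to a multiple of 30.
    Same-priority and same-cast checks are omitted here."""
    padded_a = -(-pkt_a // UNIT) * UNIT
    return padded_a + pkt_b <= PAYLOAD

assert HDR + PAYLOAD + CRC == CELL  # 12 + 120 + 4 = 136

print(cells_needed(64))          # 1: a 64-byte packet fits one cell
print(cells_needed(1500))        # 13: ceil(1500 / 120)
print(fits_in_one_cell(40, 60))  # True: 40 pads to 60, 60 + 60 <= 120
print(fits_in_one_cell(64, 64))  # False: 64 pads to 90, 90 + 64 > 120
```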

20 Fabric access – 3 Stage Switching Fabric
Diagram: each of the 16 line cards connects through 8 fabric planes, each plane a three-stage S1 → S2 → S3 path. (1) Actually 64 Gb total; ~50 Gb remains for data. (2) Actually 128 Gb total; ~100 Gb remains for data.

21 Packet Flow Summary – Logical View
Diagram: an IP packet enters through the ingress PLIM and MSC, is segmented into cells, crosses the three-stage switch fabric (S1/S2/S3), and is reassembled into an IP packet by the egress MSC and PLIM, with the midplane on either side of the fabric.

22 CRS-1 Packet Flow
Ingress path (steps 1-4, across the midplane to the fabric):
1. PLIM — SONET/SDH/Ethernet deframing and decapsulation; the Layer 2 packet-processing ASIC attaches a PLIM buffer header carrying control information such as ingress physical and logical port, frame type, and packet type.
2. PSE (Packet Switching Engine) L3 engine — hardware forwarding engine: prefix lookup for IPv4/IPv6/MPLS/multicast, ACL check, NetFlow, policing, and WRED.
3. IngressQ (RX) — shaping and P2MDRR queuing (priority, LLQ); slices the packet into 136-byte cells.
4. Cells are sent to the fabric.
Egress path (steps 5-8, from the fabric back across the midplane):
5. FabricQ (TX) — resequences and reassembles cells back into packets; queuing (priority, LLQ).
6. PSE L3 engine — hardware forwarding engine: L3 lookup for IPv4/IPv6/MPLS/multicast, ACL check, NetFlow, policing and WRED, L2 encapsulation rewrite string.
7. EgressQ — shaping and P2MDRR queuing (priority, LLQ).
8. PLIM — framing and transmission through the OC-192 framer and optics.
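The packet flow above can be summarized as a hypothetical trace through the ASICs in order. The stage names mirror the slide; the dict-based "packet" and the action labels are made up for illustration.

```python
# Hypothetical trace of one packet through the line card ASICs.
# Stage names follow the slide; everything else is illustrative.
STAGES = [
    ("PLIM ingress", "deframe, attach buffer header"),
    ("PSE ingress", "L3 lookup, ACL, NetFlow, police/WRED"),
    ("ingressQ", "shape/queue, slice into 136-byte cells, send to fabric"),
    ("fabricQ", "resequence and reassemble cells into packets"),
    ("PSE egress", "L3 lookup, ACL, police/WRED, L2 rewrite"),
    ("egressQ + PLIM", "shape/queue, frame, transmit"),
]

def trace_packet():
    """Annotate a toy packet with the action each stage performs."""
    packet = {"trail": []}
    for asic, action in STAGES:
        packet["trail"].append(f"{asic}: {action}")
    return packet

for line in trace_packet()["trail"]:
    print(line)
```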


