Presentation on theme: "Progress and Challenges toward 100Gbps Ethernet"— Presentation transcript:
1 Progress and Challenges toward 100Gbps Ethernet
Joel Goergen, VP of Technology / Chief Scientist
Abstract: This technical presentation will focus on the progress and challenges for development of technology and standards for 100 GbE. Joel is an active contributor to the IEEE 802.3 and Optical Internetworking Forum (OIF) standards processes. Joel will discuss design methodology, enabling technologies, emerging specifications, and crucial considerations for performance and reliability for this next iteration of LAN/WAN technology.
2 Overview
Network Standards Today
Available Technology Today
Feasible Technology for 2009
The Push for Standards within IEEE and OIF
Anatomy of a 100Gbps or 160Gbps Solution
Summary
Backup Slides
3 Network Standards Today: The Basic Evolution
Look at history: Ethernet has gone in steps of 10 — 10Mb (1983), 100Mb (1994), 1 GbE (1996), 10 GbE (2002), 100 GbE (2010???).
100 GbE is the next logical Ethernet step.
BUT you must emphasize: Force10 is a standards-based company. We have voting rights and participation in both the IETF and IEEE, and we will follow ANY standard that is developed.
5 Network Standards Today: The Desktop
1Gbps Ethernet: 10/100/1000 copper ports have been shipping with most desktop and laptop machines for a few years.
Fiber SMF/MMF.
IEEE 802.11a/b/g wireless.
Average useable bandwidth reaching 50Mbps.
6 Network Standards Today: Clusters and Servers
1Gbps Ethernet: copper.
10Gbps Ethernet: fiber, CX-4.
7 Network Standards Today: Coming Soon
10Gbps LRM: multi-mode fiber to 220 meters.
10Gbps Base-T: 100 meters at more than 10 Watts per port??? 30 meters short reach at 3 Watts per port???
10Gbps Back Plane: 1Gbps, 4x3.125Gbps, or 1x10Gbps over 1 meter of improved FR-4 material.
8 Available Technology Today: System Implementation A+B
[Block diagram: 1G and 10G line cards with front ends and SPIx interfaces, connected through a passive copper backplane to redundant switch fabrics A and B]
9 Available Technology Today: System Implementation N+1
[Block diagram: line cards L1 through Ln+1 with front ends and SPIx interfaces, connected through a passive copper backplane to switch fabrics 1 through N plus an N+1 switch fabric]
10 Available Technology Today: Zoom to Front-end
[Same N+1 system block diagram with the line-card front ends highlighted]
11 Available Technology Today: Front-end
Copper: RJ45; RJ21 (mini … carries 6 ports).
Fiber: XFP and variants (10Gbps); SFP and variants (1Gbps); XENPAK; LC/SC bulkhead for WAN modules.
12 Available Technology Today: Front-end System Interfaces
TBI: 10-bit interface. Max speed 3.125Gbps.
SPI-4 / SXI: System Protocol Interface. 16-bit interface. Max speed 11Gbps.
SPI-5: System Protocol Interface. 16-bit interface. Max speed 50Gbps.
XFI: 10Gbps serial interface.
13 Available Technology Today: Front-end Pipe Diameter
1Gbps: 1Gbps doesn't handle a lot of data anymore. Non-standard parallel options are also available based on OIF VSR.
10Gbps LAN/WAN or OC-192: as port density increases, using 10Gbps as an upstream pipe will no longer be effective.
40Gbps OC-768: not effective port density in an asynchronous system. Optics cost close to 30 times that of 10Gbps Ethernet.
14 Available Technology Today: Front-end Distance Requirements
x00 m (MMF): SONET/SDH (parallel): OIF VSR-4, VSR-5. Ethernet: 10GBASE-SR, 10GBASE-LX4, 10GBASE-LRM.
2-10 km: SONET/SDH: OC-192/STM-64 SR-1/I-64.1, OC-768/STM-256 VSR2000-3R2/etc. Ethernet: 10GBASE-LR.
~40 km: SONET/SDH: OC-192/STM-64 IR-2/S-64.2, OC-768/STM-256. Ethernet: 10GBASE-ER.
~100 km: SONET/SDH: OC-192/STM-64 LR-2/L-64.2, OC-768/STM-256. Ethernet: 10GBASE-ZR.
DWDM: OTN: ITU G.709 OTU-2, OTU-3.
Assertion: each of these applications must be solved for ultra high data rate interfaces.
15 Available Technology Today: Increasing Pipe Diameter
1Gbps LAN by 10 links in parallel.
10Gbps LAN by x links via WDM.
10Gbps LAN by x physical links.
Multiple OC-192 or OC-768 channels.
16 Available Technology Today: Zoom to Back Plane
[Same N+1 system block diagram with the passive copper backplane and switch fabrics highlighted]
17 Available Technology Today: Back Plane
[Diagram: data packet path across line cards (GbE / 10 GbE), RPMs, SFMs, power supplies, SERDES, and backplane traces]
18 Available Technology Today: Making a Back Plane Simple! It’s just multiple sheets of glass with copper traces and copper planes added for electrical connections.Reference: Isola
19 Available Technology Today: Back Plane Pipe Diameter
1.25Gbps: used in systems with five to ten year old technology.
2.5Gbps/3.125Gbps: used in systems with five year old or newer technology.
5Gbps/6.25Gbps: used within the last 12 months.
20 Available Technology Today: Increasing Pipe Diameter
Can't WDM copper.
10.3Gbps/12.5Gbps: not largely deployed at this time.
Increasing the pipe diameter on a back plane with assigned slot pins can only be done by changing the glass construction.
21 Available Technology Today: Pipe Diameter is NOT Flexible Once the pipe is designed and built to a certain pipe speed, making the pipe faster is extremely difficult, if not impossible.
22 Available Technology Today: Gbits Density per Slot with Front End and Back Plane Interfaces Combined
Year System Introduced | Slot Density
2000 | 40Gbps
2004 | 60Gbps
2006/7 (in design now) | 120Gbps
Based on a max back plane thickness of 300 mils, 20 TX and 20 RX differential pipes.
23 Feasible Technology for 2009: Defining the Next Generation
The overall network architecture for next generation ultra high (100, 120 and 160Gbps) data rate interfaces should be similar in concept to the successful network architecture deployed today using 10Gbps and 40Gbps interfaces.
The internal node architectures for ultra high data rate interfaces should follow similar concepts in use for 10Gbps and 40Gbps interfaces.
All new concepts need to be examined, but there are major advantages to scaling current methods with new technology.
24 Feasible Technology for 2009: Front-end Pipe Diameter
80Gbps … not enough Return On Investment. 100Gbps. 120Gbps. 160Gbps.
Reasonable channel widths: 10λ by 10-16Gbps; 8λ by 12.5-20Gbps; 4λ by 25-40Gbps; 1λ by 100-160Gbps.
Suggest starting at an achievable channel width while pursuing a timeline to optimize the width in terms of density, power, feasibility, and cost, depending on optical interface application/reach.
25 Feasible Technology for 2009: Front-end Distance Requirements
x00 m (MMF): SONET/SDH: OC-3072/STM-1024 VSR. Ethernet: 100GBASE-S.
2-10 km: SONET/SDH: OC-3072/STM-1024 SR. Ethernet: 100GBASE-L.
~40 km: SONET/SDH: OC-3072/STM-1024 IR-2. Ethernet: 100GBASE-E.
~100 km: SONET/SDH: OC-3072/STM-1024 LR-2. Ethernet: 100GBASE-Z.
DWDM (OTN): SONET/SDH: mapping of OC-3072/STM-1024. Ethernet: mapping of 100GBASE.
Assertion: these optical interfaces are defined today at the lower speeds. It is highly likely that industry will want these same interface specifications for the ultra high speeds. Optical interfaces, with the exception of VSR, are not typically defined in OIF. In order to specify the system level electrical interfaces, some idea of what industry will do with the optical interface has to be discussed. It is not the intent of this presentation to launch these optical interface efforts within OIF.
26 Feasible Technology for 2009: Front-end System Interfaces
Reasonable channel widths (SPI-?): 16 lane by 6.25-10Gbps; 10 lane by 10-16Gbps; 8 lane by 12.5-20Gbps; 5 lane by 20-32Gbps; 4 lane by 25-40Gbps.
Port density is impacted by channel width. Fewer lanes translate to higher port density and less power.
27 Feasible Technology for 2009: Back Plane Pipe Diameter
Reasonable channel widths: 16 lane by 6.25-10Gbps; 10 lane by 10-16Gbps; 8 lane by 12.5-20Gbps; 5 lane by 20-32Gbps; 4 lane by 25-40Gbps.
Port density is impacted by channel width. Fewer lanes translate to higher port density and less power.
28 Feasible Technology for 2009: Pipe Diameter is NOT Flexible New Back Plane designs will have to have pipes that can handle 20Gbps to 25Gbps.
29 Feasible Technology for 2009: Gbits Density per Slot with Front End and Back Plane Interfaces Combined
Year System Introduced | Slot Density
2000 | 40Gbps
2004 | 60Gbps
2006/7 (in design now) | 120Gbps
2009 | 500Gbps
Based on a max back plane thickness of 300 mils, 20 TX and 20 RX differential pipes.
30 Feasible Technology for 2009: 100Gbps Options
Bit rate shown above is based on 100Gbps. Scale the bit rate accordingly to achieve 160Gbps.
31 The Push for Standards: Interplay Between the OIF & IEEE
OIF defines multi-source agreements within the telecom industry: optics and EDC for LAN/WAN; SERDES definition; channel models and simulation tools.
IEEE 802 covers LAN/MAN Ethernet: 802.1 and 802.3 define Ethernet over copper cables, fiber cables, and back planes. 802.3 leverages efforts from OIF.
Membership in both bodies is important for developing next generation standards.
32 The Push for Standards: OIF
Force10 Labs introduced three efforts within OIF to drive 100Gbps to 160Gbps connectivity: two interfaces for interconnecting optics, ASICs, and backplanes; a 25Gbps SERDES; updates of design criteria to the Systems User Group.
33 Case Study: Standards Process P802.3ah – Nov 2000 / Sept 2004
Let's look at the process:
Call for Interest: by a member of 802.3; 50% WG vote.
Study Group: open participation; 75% WG PAR vote, 50% EC & Stds Bd.
Task Force: open participation; 75% WG vote.
Working Group Ballot: members of 802.3; 75% WG ballot, EC approval.
Sponsor Ballot: public ballot group; 75% of ballot group.
Standards Board Approval: RevCom & Stds Board; 50% vote.
Publication: IEEE staff, project leaders.
KEY TAKEAWAYS: It takes three years for the IEEE to finalize a standard. As we will see in later slides, it takes over 3 years for a technology to have sufficient selling volume for pricing to be seen as attractive by the mass market.
Conclusion! Either 40 or 100 GbE is 6 to 10 years away from being adopted by the mainstream market.
34 Case Study: Standards Process 10GBASE_LRM: 2003 / 2006
Optical power budget (OMA): launch power (min) -4.5 dBm; 0.5 dB transmitter implementation; 0.4 dB fiber attenuation; 0.3 dB RIN; 0.2 dB modal noise; 4.4 dB TP3 TWDP and connector at 99% confidence level; 0.9 dB unallocated power; remainder is the required effective receiver sensitivity (dBm).
10GBASE-LRM innovations: TWDP software reference equalizer determines the EDC penalty of the transmitter. Dual launch (centre and MCP) gives maximum coverage for minimum EDC penalty. Stress channels (precursor, split and post-cursor) serve as canonical tests for EDC.
Time line: Nov03 CFI; Jan04 Study Group; May04 Task Force; TF Ballot; Mar05 WG Ballot; Dec05 Sponsor Ballot; mid-06 Standard.
Reference: David Cunningham – Avago Technologies
35 Case Study: Standards Process 10GBASE_LRM
Specified optical power levels (OMA); optical input to receiver (TP3) compliance test allocation; power budget starting at TP2.
Launch power minimum: -4.5dBm.
Transmit implementation allowance: 0.5 dB.
Attenuation (2 dB): connector losses 1.5dB; fiber attenuation 0.4dB; interaction penalty 0.1dB.
Noise (0.5 dB): RIN 0.3dB; modal noise 0.2dB.
Stressed receiver sensitivity: -6.5 dBm.
Dispersion (4.2 dB): ideal EDC power penalty PIE_D = 4.2dB; TWDP and connector loss at 99th percentile (4.4 dB).
Unallocated margin: 0.9 dB.
Remainder: effective maximum unstressed 10GBASE-LRM receiver sensitivity (dBm).
Reference: David Cunningham – Avago Technologies
36 Case Study: Standards Process 10GBASE_T: 2002 / 2006
Techno-babble: 64B/65B encoding (similar to 10GBASE-R); LDPC(1723,2048) framing; DSQ128 constellation mapping (PAM16 with ½ the code points removed); Tomlinson-Harashima precoder.
Reach: Cat 6 up to 55 m, with the caveat of meeting TIA TSB-155; Cat 6A up to 100 m; Cat 7 up to 100 m; Cat 5 and 5e are not specified.
Power: estimates for worst case range from 10 to 15 W; short reach mode (30 m) has a target of sub-4 W.
37 Case Study: Standards Process 10GBASE_T
Noise and EMI: alien crosstalk has the biggest impact on UTP cabling; screened and/or shielded cabling has better performance.
Power: strong preference for copper technologies, even though higher power; short reach and better performing cable reduce the power requirement.
Timeline: the standard is coming… products in the market end of '06, early '07.
[Timeline chart: Tutorial & CFI (Nov 2002), PAR (Mar 2003), 1st technical presentation, Task Force review through drafts D1.0/D2.0/D3.0, 802.3 Ballot, Sponsor Ballot, STD (Jul 2006)]
38 Birth of A Standard: It Takes About 5 Years
Ideas from industry.
Feasibility and research.
Call for Interest (CFI) – 100 GbE EFFORT IS HERE. Marketing / sales potential, technical feasibility.
Study Group.
Work Group.
Drafts.
Final member vote.
39 The Push for Standards: IEEE
Force10 introduces a Call for Interest (CFI) to IEEE 802 in July 2006, together with Tyco Electronics.
Meetings will be held in the coming months to determine the CFI and the efforts required.
We target July 2006 because of resources within IEEE.
Joel Goergen and John D'Ambrosia will chair the CFI effort. The anchor team is composed of key contributors from Force10, Tyco, Intel, Quake, and Cisco. It has since broadened to include over 30 companies.
40 The Ethernet Alliance: Promoting All Ethernet IEEE Work
Key IEEE 802 Ethernet projects include: 100 GbE; backplane; 10 GbE LRM / MMF; 10GBase-T.
Force10 is on the BoD, a principal member.
20 companies at launch: Sun, Intel, Foundry, Broadcom…
Now approaching 40 companies.
Launched January 10, 2006.
Opportunity for customers to speak on behalf of 100 GbE.
41 Anatomy of a 100Gbps Solution: Architectural Disclaimers
There are many ways to implement a system; this section covers two basic types.
Issues facing 100Gbps ports are addressed in basic form.
Channel performance or 'pipe capacity' is difficult to measure.
Two popular chassis heights: 24in to 34in (2 or 3 per rack); 10in to 14in (5 to 8 per rack).
42 Anatomy of a 100Gbps Solution: What is a SERDES?
A device that attaches to the 'channel' or 'pipe'.
Transmitter: parallel to serial; tap values; pre-emphasis.
Receiver: serial to parallel; clock and data recovery; DFE.
Circuits are very sensitive to power noise and low Signal to Noise Ratio (SNR).
Reference: Altera
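The transmit and receive halves described above can be sketched in a few lines. This is only an illustration of the parallel-to-serial data path; the analog functions (tap values, pre-emphasis, clock and data recovery, DFE) are omitted, and the 10-bit word width is an arbitrary choice for the example.

```python
# Minimal SERDES data-path sketch: parallel-to-serial on transmit,
# serial-to-parallel on receive. Word width and data are illustrative;
# equalization, pre-emphasis, and clock recovery are not modeled.

def serialize(words, width=10):
    """Transmitter side: flatten parallel words into a serial bit stream, MSB first."""
    bits = []
    for w in words:
        bits.extend((w >> i) & 1 for i in range(width - 1, -1, -1))
    return bits

def deserialize(bits, width=10):
    """Receiver side: regroup the recovered bit stream back into parallel words."""
    words = []
    for i in range(0, len(bits), width):
        words.append(sum(b << (width - 1 - j) for j, b in enumerate(bits[i:i + width])))
    return words

tx_words = [0x2AA, 0x155, 0x3C3]  # three 10-bit words
assert deserialize(serialize(tx_words)) == tx_words
```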
43 Anatomy of a 100Gbps Solution: Interfaces that use SERDES
TBI: 10-bit interface; max speed 3.125Gbps across all 10 lanes. This is a parallel interface that does not use SERDES technology.
SPI-4 / SXI: System Protocol Interface; 16-bit interface; max speed 11Gbps. This is a parallel interface that does not use SERDES technology.
SPI-5: System Protocol Interface; 16-bit interface; max speed 50Gbps. This uses 16 SERDES interfaces at speeds up to 3.125Gbps.
XFI: 10Gbps serial interface. This uses 1 SERDES at 10Gbps.
XAUI: 10Gbps 4-lane interface. This uses 4 SERDES devices at 3.125Gbps each.
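As a cross-check of the lane counts above, aggregate line rate is simply lanes times per-lane SERDES rate. Note the XAUI and SPI-5 per-lane figures include 8B10B coding overhead, so XAUI's 12.5Gbps line rate carries 10Gbps of data; the XFI line rate shown is the 64B66B-coded 10GbE rate.

```python
# Aggregate line rate = lanes x per-lane SERDES rate (Gbps).
interfaces = {
    "SPI-5": (16, 3.125),   # 16 SERDES lanes at up to 3.125Gbps -> 50Gbps
    "XAUI":  (4, 3.125),    # 4 lanes; 12.5Gbps line rate carries 10Gbps after 8B10B
    "XFI":   (1, 10.3125),  # single serial lane, 64B66B coded 10GbE
}
for name, (lanes, rate) in interfaces.items():
    print(f"{name}: {lanes} x {rate}Gbps = {lanes * rate:g}Gbps")
```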
44 Anatomy of a 100Gbps Solution: Power Noise thought …
Line card SERDES noise limits: analog target 60mVpp ripple; digital target 150mVpp ripple.
Fabric SERDES noise limits: analog target 30mVpp ripple; digital target 100mVpp ripple.
100Gbps interfaces won't operate well if these limits cannot be met.
45 Anatomy of a 100Gbps Solution: Memory Selection
Advanced Content-Addressable Memory (CAM) goals: less power per search; 4 times more performance; enhanced flexible table management schemes.
Memories: replacing SRAMs with DRAMs, when performance allows, to conserve cost; Quad Data Rate III SRAMs for speed; SERDES-based DRAMs for buffer memory.
Need to drive JEDEC for serial memories that can be easily implemented in a communication system.
The industry is going to have to work harder to get high speed memories for network processing in order to reduce latency.
Memory chips are usually the last thought! This will need to change for 100Gbps sustained performance.
46 Anatomy of a 100Gbps Solution: ASIC Selection
High speed interfaces: interfaces to MACs, backplane, and buffer memory are all SERDES based. SERDES all the way. Higher gate counts with internal memories target 6.25Gbps SERDES; higher speeds are difficult to design in this environment. SERDES is used to replace parallel busing for reduced pin and gate count.
Smaller process geometry: definitely 0.09 micron or lower. More gates (100% more than the 0.13 micron process). Better performance (25% better). Lower power (1/2 the 0.13 micron process power). Use power optimized libraries.
Hierarchical placement and layout of the chips: flat placement is no longer a viable option.
To achieve cost control, ASIC SERDES speed is limited to 6.25Gbps in high density applications.
47 Anatomy of a 100Gbps Solution: N+1 Redundant Fabric - BP
[Block diagram: line cards L1..Ln+1 with front ends and SPIx interfaces on a passive copper backplane, connecting to switch fabrics 1..N plus an N+1 switch fabric]
48 Anatomy of a 100Gbps Solution: N+1 Redundant Fabric – MP
[Block diagram: the same N+1 fabric arrangement on a passive copper midplane, with the line-card front ends on the opposite side from the fabrics]
49 Anatomy of a 100Gbps Solution: N+1 High Speed Channel Routing
[Diagram: high speed channel routing between four line cards and switch fabrics 1, 2, N, and N+1]
50 Anatomy of a 100Gbps Solution: A/B Redundant Fabric - BP
[Block diagram: line cards with A and B SPIx paths on a passive copper backplane, connecting to fabric A and fabric B]
51 Anatomy of a 100Gbps Solution: A/B Redundant Fabric – MP
[Block diagram: the same A/B fabric arrangement on a passive copper midplane]
52 Anatomy of a 100Gbps Solution: A/B High Speed Channel Routing
[Diagram: high speed channel routing between four line cards and switch fabrics A and B]
53 Anatomy of a 100Gbps Solution: A Quick Thought ….. Looking at both Routing and Connector Complexity designed into the differential signaling ….Best Case: N+1 Fabric in a Back Plane.Worst Case: A/B Fabric in a Mid Plane.All implementations need to be examined for best possible performance over all deployed network interfaces. Manufacturability and channel (Pipe) noise are two of the bigger factors.
54 Anatomy of a 100Gbps Solution: Determine Trace Lengths
After careful review of possible line card, switch fabric, and back plane architectural blocks, determine the range of trace lengths that exist between a SERDES transmitter and a SERDES receiver. 30 inches or 0.75 meters total should do it.
Several factors stem from trace length: bandwidth; reflections from vias and/or thru-holes; circuit board material; BER; coding.
Keep in mind that the goal is to target one or both basic chassis dimensions.
55 Anatomy of a 100Gbps Solution: Channel Model Description
A "channel" or "pipe" is a high speed single-ended or differential signal connecting the SERDES transmitter to the SERDES receiver. The context of "channel" or "pipe" from this point on is differential.
Develop a channel model based on the implications of architectural choices and trace lengths. It identifies a clean launch route to a BGA device, identifies design constraints and concerns, includes practical recommendations, and identifies channel bandwidth.
56 Anatomy of a 100Gbps Solution: Channel Simulation Model
[Diagram: transmitter → line-card FR4 trace → connector plug/jack → back plane FR4 → connector jack/plug → line-card FR4 → DC block (equivalent cap circuit) → receive filter → receive slicer; test points TP1, TP4, and TP5 (informative) marked; TP2 and TP3 not used]
57 Anatomy of a 100Gbps Solution: Channel: Back Plane Shows signal trace connecting pins on separate connectors across a back plane.
58 Anatomy of a 100Gbps Solution: Channel: Line Card Receiver Shows a signal trace connecting the back plane to a SERDES in a Ball Grid Array (BGA) package.
59 Channel Model Definition: Back Plane Bandwidth
How do we evaluate the signal speed that can be placed on a channel?
2GHz to 3GHz bandwidth: supports 2.5Gbps NRZ – 8B10B.
2GHz to 4GHz bandwidth: supports 3.125Gbps NRZ – 8B10B.
2GHz to 5GHz bandwidth (4GHz low FEXT): supports 6.25Gbps PAM4; supports 3.125Gbps NRZ – 8B10B or scrambling.
2GHz to 6.5GHz: supports 6.25Gbps NRZ – 8B10B; limited scrambling algorithms.
2GHz to 7.5GHz: supports 12Gbps.
2GHz to 9GHz: supports 25Gbps multi-level.
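The NRZ rates above are line rates that include coding overhead. A quick sketch of how payload rate maps to on-the-wire rate: 8B10B expands every 8 data bits to 10 line bits (25% overhead), while 64B66B expands 64 to 66 (about 3.1%).

```python
def line_rate(data_gbps, data_bits, coded_bits):
    """On-the-wire rate after block coding overhead."""
    return data_gbps * coded_bits / data_bits

print(line_rate(2.5, 8, 10))    # 8B10B: 2.5Gbps payload -> 3.125Gbps line rate
print(line_rate(10.0, 64, 66))  # 64B66B: 10Gbps payload -> 10.3125Gbps line rate
```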
62 Anatomy of a 100Gbps Solution: Channel Model Limit Lines
63 Anatomy of a 100Gbps Solution: Comments on Limit Lines
IEEE 802.3ae XAUI is a 5-year-old channel model limit line.
The IEEE P802.3ap channel model limit is based on a mathematical representation of improved FR-4 material properties and closely matches "real life" channels. This type of modeling will be essential for 100Gbps interfaces.
A real channel is shown with typical design violations common in the days of XAUI. Attention to specific design techniques in the channel launch conditions can eliminate the violation of the defined channel limits.
64 Anatomy of a 100Gbps Solution: Receiver Conditions – Case 1
[Cross-section: route from the back plane through three 13mil drills to a 621mil BGA, using 12mil and 24mil traces, 24x32mil pads, a dogbone breakout, and 34mil anti-pads; TP4 and TP5 (informative) marked]
65 Anatomy of a 100Gbps Solution: Constraints & Concerns – Case 1
Poor signal integrity – SDD11/22/21.
Standard CAD approach. Easiest / lowest cost to implement.
Approach will not have the required performance for SERDES implementations used in 100Gbps interfaces.
66 Anatomy of a 100Gbps Solution: Receiver Conditions – Case 4
[Cross-section: route from the back plane through a single 13mil drill to the 621mil BGA, with the dogbone breakout and 34mil anti-pad but two fewer vias than Case 1; TP4 and TP5 (informative) marked]
67 Anatomy of a 100Gbps Solution: Constraints & Concerns – Case 4
Ideal signal integrity. Eliminates two vias. Increases pad impedance to reduce SDD11/22.
High speed BGA pins must reside on the outer pin rows.
Crosstalk to traces routed under the open ground pad is an issue for both the BGA and the capacitor footprint.
Requires 50mil pitch BGA packaging to avoid ground plane isolation on the ground layer under the BGA pads.
Potential to require an additional routing layer.
68 Available Technology Today: Remember this Slide ? Circuit board material is just multiple sheets of glass with copper traces and copper planes added for electrical connections.Reference: Isola
69 Anatomy of a 100Gbps Solution: Channel Design Considerations
Circuit board material selection is based on the following:
Temperature and humidity effects on Df (dissipation factor) and Dk (dielectric constant).
Required mounting holes for mother-card mounting, shock and vibration.
Required number of times a chip or connector can be replaced.
Required number of times a pin can be replaced on a back plane.
Aspect ratio (drilled hole size to board thickness).
Power plane copper weight.
Coding / signaling scheme.
70 Anatomy of a 100Gbps Solution: Materials in Perspective Graph provided by Zhi Wong
72 Anatomy of a 100Gbps Solution: Channel or Pipe Considerations
Channel constraints include the following:
Return loss.
Thru-hole reflections.
Routing reflections.
Insertion loss based on material category.
Insertion loss based on length to first reflection point.
Define coding and baud rate based on material category.
Connector hole crosstalk.
Trace to trace crosstalk.
DC blocking capacitor at the SERDES to avoid shorting DC between cards.
Temperature and humidity losses/expectations based on material category.
73 Channel Model Starting Point: Materials in Perspective Target AreaGraph provided by Zhi Wong
76 Anatomy of a 100Gbps Solution: Channel Design Considerations
Channel BER: data transmitted across the back plane channel is usually framed with a header and payload. The frame size can be anywhere from a few hundred bytes to a typical maximum of 16Kbytes. A typical frame contains many PHY-layer packets. A BER of 10E-12 will result in a frame error ratio of 10E-7 or less, depending on distribution. That is a lot of frame loss.
77 Anatomy of a 100Gbps Solution: Channel Design Considerations
Channel BER: customers want to see a frame loss of zero. Systems architects want to see a frame loss of zero. Zero error is difficult to test and verify … none of us will live that long.
The BER goal should be 10E-15. It can be tested and verified at the system design level; simulate to 10E-17. Any frame loss beyond that will have minimal effect on current packet handling/processing algorithms.
Current SERDES do not support this; an effective 10E-15 is obtained by both power noise control and channel model loss.
This will be tough to get through, but without this tight requirement, 100Gbps interfaces will need to run faster by 3% to 7%. Or worse, pay a latency penalty for using FEC or DFE.
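The BER-to-frame-loss arithmetic above can be made concrete. Assuming independent bit errors, the frame error ratio is 1 - (1 - BER)^bits, which for small BER is roughly BER times the frame length in bits; the 1500-byte frame size used here is illustrative.

```python
def frame_error_ratio(ber, frame_bytes):
    """Probability that a frame contains at least one bit error (independent errors assumed)."""
    return 1.0 - (1.0 - ber) ** (frame_bytes * 8)

# A 1500-byte frame: at BER 1e-12 about 1.2e-8 of frames are lost;
# at the 1e-15 goal above, about 1.2e-11.
print(frame_error_ratio(1e-12, 1500))
print(frame_error_ratio(1e-15, 1500))
```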
78 Anatomy of a 100Gbps Solution: Remember Interface Speeds …
Reasonable channel widths for 100Gbps: 16 lane by 6.25Gbps *BEST; 10 lane by 10Gbps; 8 lane by 12.5Gbps; 5 lane by 20Gbps; 4 lane by 25Gbps *BEST.
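Each channel-width option above multiplies out to the same 100Gbps aggregate, which is a quick sanity check:

```python
# Every listed channel-width option yields the same 100Gbps aggregate.
for lanes, gbps in [(16, 6.25), (10, 10.0), (8, 12.5), (5, 20.0), (4, 25.0)]:
    assert lanes * gbps == 100.0
    print(f"{lanes} lanes x {gbps}Gbps = 100Gbps")
```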
79 Anatomy of a 100Gbps Solution: Channel Signaling Thoughts
NRZ: in general, breaks down after 12.5Gbps. 8B10B is not going to work at 25Gbps. 64B66B is not going to work at 25Gbps. Scrambling is not going to work at 25Gbps.
Duo-binary: demonstrated to 33Gbps.
PAM4 or PAMx.
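The appeal of multi-level signaling such as PAM4 is that symbol rate, and hence the required channel bandwidth, scales down with bits per symbol (log2 of the number of levels):

```python
import math

def symbol_rate_gbd(bit_rate_gbps, levels):
    """Baud rate needed for a given bit rate and number of signal levels."""
    return bit_rate_gbps / math.log2(levels)

print(symbol_rate_gbd(25, 2))  # NRZ:  25Gbps needs 25.0GBd
print(symbol_rate_gbd(25, 4))  # PAM4: 25Gbps needs only 12.5GBd
```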
80 Anatomy of a 100Gbps Solution: Designing for EMI Compatibility
Treat each slot as a unique chamber.
Shielding effectiveness determines the maximum number of 1GigE, 10GigE, or 100GigE ports before saturating emissions requirements.
Requires top and bottom seal using honeycomb.
Seal the back plane / mid plane.
Cross-hatch chassis ground.
Chassis ground edge guard, not edge plate.
Digital ground sandwich for all signal layers.
Provide carrier mating surface.
EMI follows wave equations. Signaling spectrum must be considered.
81 Anatomy of a 100Gbps Solution: Power Design
Power routing architecture from inputs to all cards: bus bar; power board; cabling harness; distribution through the back plane / mid plane using copper foil.
Design the input filter for maximum insertion loss and return loss: protects your own equipment; protects all equipment on the power circuit.
Design current flow paths for 15DegC max rise, 5DegC typical.
Design all distribution thru-holes to support 200% loading at 60DegC: provides for the case when the incorrect drill size is selected in the drilling machine and escapes computer comparison. An unlikely case, but required in carrier applications.
Power follows Ohm's Law. It cannot be increased without major changes or serious thermal concerns.
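The Ohm's-law point above can be illustrated with a toy sizing calculation. The card power, distribution voltage, and per-hole current rating below are assumed values for illustration only, not figures from this presentation.

```python
import math

CARD_POWER_W = 400.0   # assumed line-card load
BUS_VOLTAGE_V = 48.0   # assumed -48V telecom distribution
HOLE_RATING_A = 2.0    # assumed per-thru-hole current rating at 60DegC

current_a = CARD_POWER_W / BUS_VOLTAGE_V  # I = P / V
# Size the thru-hole count for 200% loading, per the design rule above:
holes = math.ceil(2.0 * current_a / HOLE_RATING_A)
print(f"{current_a:.2f}A per card -> {holes} thru-holes at 200% loading")
```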
77 Summary
Industry has been successful scaling speed since 10Mbps in 1983.
The efforts in 1GigE and 10GigE have taught us many aspects of interfaces and interface technology.
100Gbps and 160Gbps success will depend on useable chip and optics interfaces.
Significant effort is underway in both IEEE and OIF to define and invent interfaces to support the next generation speeds.
Systems designers will need to address many new issues to support 100Gbps port densities of 56 or more per box.
84 Backup Slides
The following slides provide additional detail to support information provided within the base presentation.
85 Acronym Cheat Sheet
CDR – Clock and Data Recovery
CEI – Common Electrical Interface
CGND / DGND – Chassis Ground / Digital Ground
EDC – Electronic Dispersion Compensation
MAC – Media Access Control
MDNEXT / MDFEXT – Multi Disturber Near / Far End Cross Talk
MSA – Multi Source Agreement
NEXT / FEXT – Near / Far End Cross Talk
OIF – Optical Internetworking Forum
PLL – Phase-Locked Loop
SERDES – Serializer / De-serializer
SFI – SerDes Framer Interface
SMF / MMF – Single Mode Fiber / Multi Mode Fiber
XAUI – 10Gig Attachment Unit Interface
86 Anatomy of a 100Gbps Solution: Basic Line Card Architecture
[Block diagram: PMD → PHY/Framer → Network Processor → Fabric Interface data path, plus a non "wire speed" µP running protocol stacks and applications (APPs)]
87 Anatomy of a 100Gbps Solution: Basic Line Card Architecture 1
[Layout: media (optical or copper) → forwarding engine → network processor → SERDES → backplane, with an area reserved for power]
Architecture: long trace lengths. Poor power noise control means worse than the analog target of 60mVpp ripple and digital target of 150mVpp ripple. Poor SERDES to connector signal flow will maximize ground noise. This layout is not a good choice for 100Gbps.
88 Anatomy of a 100Gbps Solution: Basic Line Card Architecture 2
[Layout: media (optical or copper) → forwarding engine → network processor → SERDES → backplane, with an area reserved for power]
Architecture: clean trace routing. Good power noise control means better than the analog target of 60mVpp ripple and digital target of 150mVpp ripple. Excellent SERDES to connector signal flow to minimize ground noise. Best choice for 100Gbps systems.
89 Anatomy of a 100Gbps Solution: Basic Line Card Architecture 3
[Layout: midplane version of the same line card blocks, with an area reserved for power]
Architecture: clean trace routing. Good power noise control means better than the analog target of 60mVpp ripple and digital target of 150mVpp ripple. Difficult SERDES to connector signal flow because of the mid plane. This layout is not a good choice for 100Gbps.
90 Anatomy of a 100Gbps Solution: Basic Switch Fabric Architecture
[Block diagram: digital or analog crossbar with line card interfaces and a non "wire speed" µP]
91 Anatomy of a 100Gbps Solution: Basic Switch Fabric Architecture 1
[Layout: digital cross bar with SERDES, area reserved for power]
Architecture: long trace lengths. Poor power noise control means worse than the analog target of 30mVpp ripple and digital target of 100mVpp ripple. Poor SERDES to connector signal flow will maximize ground noise. This layout is not a good choice for 100Gbps.
92 Anatomy of a 100Gbps Solution: Basic Switch Fabric Architecture 2
[Layout: digital cross bar with SERDES, area reserved for power]
Architecture: clean trace routing. Good power noise control means better than the analog target of 30mVpp ripple and digital target of 100mVpp ripple. Excellent SERDES to connector signal flow to minimize ground noise.
93 Anatomy of a 100Gbps Solution: Basic Switch Fabric Architecture 3
[Layout: analog cross bar, area reserved for power]
Architecture: clean trace routing. Good power noise control means better than the analog target of 30mVpp ripple and digital target of 100mVpp ripple. Excellent SERDES to connector signal flow to minimize ground noise. True analog fabric is not used anymore.
94 Anatomy of a 100Gbps Solution: Back Plane or Mid Plane
[Table comparing connections for N+1 fabric vs A/B fabric redundancy on a back plane vs a mid plane]
95 Anatomy of a 100Gbps Solution: Trace Length Combinations - Max 24in to 34in height (2 or 3 per rack)
96 Anatomy of a 100Gbps Solution: Trace Length Combinations - Min 24in to 34in height (2 or 3 per rack)
97 Anatomy of a 100Gbps Solution: Trace Length Combinations - Max 10in to 14in height (5 to 8 per rack)
98 Anatomy of a 100Gbps Solution: Trace Length Combinations - Min 10in to 14in height (5 to 8 per rack)
99 Anatomy of a 100Gbps Solution: Receiver Conditions – Case 2
[Cross-section: as in Case 1, a back plane route through three 13mil drills to the 621mil BGA with 12mil/24mil traces, dogbone breakout, and 34mil anti-pads; TP4 and TP5 (informative) marked]
100 Anatomy of a 100Gbps Solution: Constraints & Concerns – Case 2
Crosstalk to traces routed under the open ground pad is an issue.
Allows good pin escape from the BGA.
Poor signal integrity – has high SDD11/22/21 at the BGA.
Potential to require an additional routing layer.
Approach will not have the required performance for SERDES implementations used in 100Gbps interfaces.
101 Anatomy of a 100Gbps Solution: Receiver Conditions – Case 3
[Cross-section: as in Case 1, a back plane route through three 13mil drills to the 621mil BGA with 12mil/24mil traces, dogbone breakout, and 34mil anti-pads; TP4 and TP5 (informative) marked]
102 Anatomy of a 100Gbps Solution: Constraints & Concerns – Case 3
Allows for inner high speed pad usage within the BGA.
Medium-poor signal integrity – SDD11/22/21; has the extra vias to contend with in the break-out.
Increases pad impedance to reduce SDD11/22.
Crosstalk to traces routed under the open ground pad is an issue for both the BGA and the capacitor footprint.
Allows good pin escape from the BGA.
Potential to require an additional routing layer.
Requires 50mil pitch BGA packaging to avoid ground plane isolation on the ground layer under the BGA pads.
103 Anatomy of a 100Gbps Solution: Stack-up Detail
104 Requirements to Consider when Increasing Channel Speed
Signaling scheme vs available bandwidth.
NEXT/FEXT margins.
Average power noise as seen by the receive slicing circuit and the PLL.
Insertion loss (SDD21) limits.