
1 Design Challenges for Next Generation, High Speed Ethernet: 40 and 100 GbE
Sponsored by: Ethernet Alliance®
Panel Organizer: John D’Ambrosia, Sr. Scientist, Force10 Networks; Chair, IEEE 802.3ba Task Force

2 Ethernet Alliance University Program (DesignCon 2009)
Purpose:
– Facilitate collaboration between academia and the Ethernet industry
– Help students acquire practical perspectives on academic theories
– Encourage academia in engineering and computer science programs to become more involved in developing new Ethernet concepts
Who benefits:
– Faculty: speaking and press opportunities, publication of technical papers, connections with industry peers, potential research funding
– Students: networking with industry and standards leaders, contributing research studies on Ethernet technologies, a white paper challenge, and an internship program

3 Ethernet Alliance Academic Members – January 2009

4 Panel Overview
– Ilango Ganga, Intel Corporation: high-speed server adoption
– Joel Goergen, Force10 Networks: anatomy of a high-speed chassis
– Adam Healey, LSI Corporation: electrical interfaces for 40 and 100 Gigabit Ethernet
– David Stauffer, IBM Corporation: challenges surrounding higher-bandwidth serdes, channels, and backplane technologies

5 Installed Base of 10 GE Port Shipments by Major Platform Type (© 2009 Dell’Oro Group)
[Chart: port shipments in millions, by platform: servers, optical, switches, routers]

6 Potential 10 GE Ports for Higher-Speed Aggregation (© 2009 Dell’Oro Group)
[Chart: port shipments in millions]

7 10 GE Server Connectivity – All Server Types (© 2009 Dell’Oro Group)
[Chart: port shipments in millions, split between add-in cards and ports directly on the motherboard]

8 Design Challenges for Next Gen Ethernet – Server End Station Perspective
Ilango Ganga, Communications Architect, Intel Corporation
Editor-in-Chief, IEEE P802.3ba Task Force

9 40GbE and 100GbE: Computing and Networking
– 40G optimized for server/compute bandwidth and server traffic aggregation needs
– 100G optimized for network core and network aggregation needs
Source: An Overview: Next Generation of Ethernet – IEEE 802 HSSG_Tutorial_1107

10 Server I/O BW Drivers
– Higher system processing capability: multi-core processors; higher-speed memory, system buses, and next-generation process technologies
– Server virtualization: consolidation of multiple logical servers onto a single physical server
– Converged networking and storage: multiple I/O connections converging to a single connection with fabric virtualization
– Clustered servers: scientific, financial, oil/gas exploration, and engineering workloads
– Internet applications: IPTV, Web 2.0
The transition to 10GbE and multiple 10GbE will drive the future transition to 40GbE.

11 System Capabilities & Design Constraints
System & I/O capabilities:
– Today’s server systems are capable of 10GbE, and I/O convergence is happening at 10GbE
– Systems will be capable of handling multiple 10GbE from 2009 onward
– Next-generation I/O bus upgrades (e.g., PCIe Gen3)
– Blade backplanes/midplanes are capable of multiple 10G lanes; 4-lane backplanes scale to 40G (KR → KR4), as the sketch below illustrates
Design constraints: performance, cost, power, density (form factor/size)
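To make the KR → KR4 scaling concrete: a minimal Python sketch, assuming the usual 10GBASE-KR per-lane signaling rate and 64b/66b line coding (the constant and function names are ours, not from the slides).

```python
# Sketch: aggregate bandwidth from bonding backplane lanes (KR -> KR4).
# Assumes 10.3125 GBd per lane with 64b/66b encoding, as in 10GBASE-KR;
# 40GBASE-KR4 bonds four such lanes.

LANE_BAUD_GBD = 10.3125          # per-lane signaling rate (GBd)
LINE_CODE_EFFICIENCY = 64 / 66   # 64b/66b encoding overhead

def payload_gbps(lanes: int) -> float:
    """Usable payload bandwidth for a bundle of bonded 10G lanes."""
    return lanes * LANE_BAUD_GBD * LINE_CODE_EFFICIENCY

print(f"KR  (1 lane):  {payload_gbps(1):.1f} Gb/s")   # ~10.0 Gb/s
print(f"KR4 (4 lanes): {payload_gbps(4):.1f} Gb/s")   # ~40.0 Gb/s
```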

12 High-Speed LAN Controllers
Today’s 10G LAN controllers handle more and more advanced packet processing in hardware, for example:
– Packet classification
– I/O virtualization
– Protocol offloads
– MAC/serdes that handle dual ports
Design challenges for packet processing at 40G speeds:
– Fixed power constraints for PCI adapters and blade adapters
– Advanced packet processing at multiple 10G (e.g., 4x10G) and 40G
– Integration of 40G MAC and serdes technologies (can leverage multiple-10G serdes technology)
– Host bus upgrades to next-generation system I/O speeds
– Convergence of NIC/HBA/virtualization models in a single controller
– Software challenges to scaling

13 Summary
– Server consolidation, storage and network convergence, clustering, and video applications will drive the need for higher I/O bandwidth
– Consolidation moves to 10G and multiple 10G, and then to 40G
– Multi-core processors, next-generation system buses, and blade backplane/midplane systems are expected to be capable of 40G I/O speeds within a three-year time frame
– Performance, cost, and power constraints will drive the design choices for 40G network controllers
– Implementations are expected to leverage 10G technologies for faster time to market

14 The Call for Industry Research on Next-Generation Electrical Signaling
Joel Goergen, Vice President of Technology, Chief Scientist, Force10 Networks
Copyright © 2008 Force10 Networks, Inc. All rights reserved.

15 Anatomy of a 100 Gbps Solution: Chassis
Chassis design issues to consider:
– Backplane and channel signaling for higher internal speeds
– Lower system BER
– Connectors
– N+1 switch fabric
– Reduced EMI
– Clean power routing architecture
– Thermal and cooling
– Cable management
All design aspects must also meet local regulatory standards.

16 Anatomy of a 100 Gbps Solution: Interface / Connectors
[Block diagram: a 100G NPU with ingress/egress packet parsing, lookup, and edit stages; CAM lookup at 200 MSPS; lookup-database SRAM (400 MHz DDRII+); link-list SRAM (400 MHz QDRII+); ingress/egress buffer SDRAM (1 GHz DDR); Interlaken/SPI-S interfaces over 10x CEI-11G-SR lanes to the 100G MAC/PHY and fibre; 16x CEI-11G-LR lanes toward the TM and backplane; clock, reset, PCI Express, and test pins. Design callouts: power, memory interface, package & die size, connector.]

17 Anatomy of a 100 Gbps Solution: Signal Integrity
Backplane SERDES: the building block of an interface
[Diagram: backplane channel traces #1, #2, #3]

18 Design Challenges for Next-Generation, High-Speed Ethernet: 40 and 100 GbE
Adam Healey, LSI Corporation

19 Electrical Interfaces for 40 and 100 Gb/s Ethernet
Interfaces consist of an aggregation of 10 Gb/s serial lanes (tabulated in the sketch below):
– Chip-to-chip and chip-to-module (retimed): XLAUI (40 Gb/s) and CAUI (100 Gb/s)
– Backplane Ethernet: 40GBASE-KR4 (40 Gb/s), up to 1 m and 2 connectors
– Copper cable assembly: 40GBASE-CR4 (40 Gb/s) and 100GBASE-CR10 (100 Gb/s), up to 10 m
– Chip-to-module (limiting): 40 and 100 Gb/s Parallel Physical Interface (PPI)
[Diagram: ASIC-to-ASIC and ASIC-to-module lane bundles (n = 4 or 10); a 10:4/4:10 gearbox feeding a 4-wavelength WDM module; and limiting modules with lasers, drivers, and detectors]
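As a companion to the interface list above, a small Python tabulation (our own illustration, not part of the presentation; the dictionary layout is an assumption) records each interface’s lane count and derives the per-lane rate:

```python
# Sketch: lane counts behind the 40/100 GbE electrical interfaces above.
# Each aggregates 10 Gb/s serial lanes; values follow the slide.

INTERFACES = {
    "XLAUI":         {"rate_gbps": 40,  "lanes": 4},   # chip-to-chip/module
    "CAUI":          {"rate_gbps": 100, "lanes": 10},  # chip-to-chip/module
    "40GBASE-KR4":   {"rate_gbps": 40,  "lanes": 4},   # backplane, <= 1 m
    "40GBASE-CR4":   {"rate_gbps": 40,  "lanes": 4},   # copper cable, <= 10 m
    "100GBASE-CR10": {"rate_gbps": 100, "lanes": 10},  # copper cable, <= 10 m
}

for name, spec in INTERFACES.items():
    per_lane = spec["rate_gbps"] / spec["lanes"]
    print(f"{name:13s} {spec['lanes']:2d} lanes x {per_lane:.0f} Gb/s per lane")
```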

20 Design Considerations
Expand the scope of 10 Gb/s Backplane Ethernet (10GBASE-KR):
– 10GBASE-KR is the basis of the specifications for backplane and copper cable assemblies
– The loss and noise profiles of cable assemblies and their associated host card wiring are distinct from backplanes; interoperability must be confirmed
Define the superset serdes:
– A single serdes core that supports multiple interface standards
– A flexible interface that can face either the backplane or the faceplate
– A common receptacle for optical modules and copper cable assemblies
Signal integrity challenges:
– Increased density of 10 Gb/s channels
– Increased trace routing distance to satisfy routing constraints (more loss)
– Increased crosstalk
Testability (quantified in the sketch below):
– Test each lane of the multi-lane interface in isolation (multiplies test time)
– Test the interface as an aggregate (multiplies test equipment)
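The testability trade-off in the last two bullets can be put in rough numbers. The sketch below is our illustration with an assumed per-lane test time; it only shows that test time and test equipment scale in opposite directions with lane count.

```python
# Sketch: per-lane vs. aggregate test strategies for an n-lane interface.
# The 5-second per-lane test time is an illustrative assumption.

def per_lane_test(n_lanes: int, secs_per_lane: float):
    """Serial testing: one tester, total time scales with lane count."""
    return n_lanes * secs_per_lane, 1

def aggregate_test(n_lanes: int, secs_per_lane: float):
    """Parallel testing: one lane's time, testers scale with lane count."""
    return secs_per_lane, n_lanes

for n in (4, 10):  # 4-lane (XLAUI/KR4/CR4) vs. 10-lane (CAUI/CR10)
    t_ser, eq_ser = per_lane_test(n, 5.0)
    t_par, eq_par = aggregate_test(n, 5.0)
    print(f"{n:2d} lanes: serial {t_ser:4.0f} s on {eq_ser} tester; "
          f"parallel {t_par:3.0f} s on {eq_par} testers")
```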

21 An Eye to the Future
Future demand for higher density implies the need for a narrower interface, as the sketch after the diagram summary makes explicit.
[Diagram: first-generation vs. next-generation 100G optical modules. First generation: CAUI (10 x 10 Gb/s) crosses the pluggable module boundary into a 10:4 gearbox, then 4-wavelength WDM (colors 0–3, detectors 0–3) for 100GBASE-LR4 or 100GBASE-ER4 (4 x 25 Gb/s). Next generation: a 4 x 25 Gb/s host interface (OIF CEI-28-SR?) drives the module through a 4:4 stage.]
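The 10:4 gearbox in the first-generation module is, at the rate level, simple bookkeeping: ten electrical lanes carry the same aggregate stream as four wavelengths. A short sketch, assuming the standard 802.3ba CAUI lane rate of 10.3125 GBd:

```python
# Sketch: rate bookkeeping for the 10:4 gearbox shown above.
# A first-generation module accepts CAUI (10 electrical lanes) and
# retransmits the same aggregate stream on 4 WDM wavelengths.

CAUI_LANES, CAUI_BAUD_GBD = 10, 10.3125   # electrical lanes, GBd each
WDM_LAMBDAS = 4                           # optical wavelengths (LR4/ER4)

aggregate_gbd = CAUI_LANES * CAUI_BAUD_GBD    # 103.125 GBd total
per_lambda_gbd = aggregate_gbd / WDM_LAMBDAS  # 25.78125 GBd each

print(f"aggregate: {aggregate_gbd} GBd; per wavelength: {per_lambda_gbd} GBd")
```

A next-generation 4 x 25 Gb/s host interface removes the gearbox entirely, which is the density argument the slide makes.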

22 Design Challenges for Next-Generation, High-Speed Ethernet: 40 and 100 GbE
DesignCon 2009, February 4, 2009
David R. Stauffer, Senior Technical Staff Member, IBM ASIC Design Center
OIF Physical & Link Layer Working Group Chair
IBM Server and Technology Group, © 2009 IBM Corporation

23 Bandwidth Density Projections
– Bandwidth growth forecasts historically show networking applications doubling bandwidth every 18 months. This is the motivation for 40/100 GbE (802.3ba) standards development.
– Although early 40/100 GbE systems will depend on 10 Gb/s backplane serdes technology (802.3ap), this leads to an unmanageable number of differential pairs to meet system bandwidth (see the sketch below).
– Conclusion: higher-bandwidth serdes technology will be required; ~25 Gb/s is optimal.
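To see why the pair count becomes unmanageable, the sketch below projects the 18-month doubling forward and counts differential pairs at 10 Gb/s versus 25 Gb/s per lane. The 1 Tb/s baseline and the time horizon are our illustrative assumptions, not figures from the slide.

```python
# Sketch: differential-pair count under 18-month bandwidth doubling.
# Baseline system bandwidth (1 Tb/s) is an assumed starting point.
import math

BASE_TBPS, DOUBLING_MONTHS = 1.0, 18

def system_bw_tbps(months_out: float) -> float:
    return BASE_TBPS * 2 ** (months_out / DOUBLING_MONTHS)

def diff_pairs(bw_tbps: float, gbps_per_lane: float) -> int:
    # one differential pair per direction per lane
    return 2 * math.ceil(bw_tbps * 1000 / gbps_per_lane)

for years in (0, 3, 6):
    bw = system_bw_tbps(12 * years)
    print(f"year {years}: {bw:5.1f} Tb/s -> "
          f"{diff_pairs(bw, 10):5d} pairs @ 10G, "
          f"{diff_pairs(bw, 25):4d} pairs @ 25G")
```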

24 Serdes & Channel Evolution
Achieving 25 Gb/s serial data on backplanes requires evolutionary advances in both serdes and backplane technology for a cost-effective solution.
Backplane advances need to address:
– Sdd21 loss targets (see the proposed CEI-25-LR Sdd21 limit in the figure)
– Crosstalk minimization (better connectors?)
Serdes advances need to address:
– Improved performance in the presence of crosstalk
– Power per 25 Gb/s link less than 1.5x the power per 10 Gb/s link (expressed as energy per bit in the sketch below)
[Figure: channel Sdd21 insertion loss vs. frequency with the proposed CEI-25-LR limit]
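The 1.5x power ceiling translates into an energy-per-bit requirement, worked out in the sketch below; the absolute 10 Gb/s link power is an assumed placeholder, and only the 1.5x ratio comes from the slide.

```python
# Sketch: the slide's power target expressed as energy per bit.
# P10_MW is an illustrative assumption; only the 1.5x cap is from the slide.

P10_MW = 300.0                  # assumed 10 Gb/s link power (mW)
P25_MAX_MW = 1.5 * P10_MW       # ceiling for a 25 Gb/s link

pj_per_bit_10 = P10_MW / 10.0        # mW / (Gb/s) == pJ/bit
pj_per_bit_25 = P25_MAX_MW / 25.0

print(f"10G: {pj_per_bit_10:.1f} pJ/bit")                           # 30.0
print(f"25G: {pj_per_bit_25:.1f} pJ/bit "
      f"({100 * (1 - pj_per_bit_25 / pj_per_bit_10):.0f}% lower)")  # 18.0, 40%
```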

25 Significant Issues
Backplane technology:
– Sdd21 insertion loss targets must be achieved without significant impact to manufacturing yield or cost.
– Advanced materials may be required, but only if acceptable manufacturing yield is achievable.
– Advanced design techniques (e.g., broadside coupling) may be required.
– Better connectors are needed to minimize crosstalk, reflections, etc.
Serdes technology:
– The signaling solution must be evolutionary to meet power targets and allow current levels of integration on ASIC chips.
– Crosstalk is a significant concern at higher baud rates. Current crosstalk cancellation schemes do not work generically in backplane environments.
– FEC schemes can achieve the required performance, but at a cost in power and latency. So far this cost has not found market acceptance.
– Multi-level signaling schemes have not shown promise.

26 Questions?

