1
OptIPuter Backplane: Architecture, Research Plan, Implementation Plan
Joe Mambretti, Director (j-mambretti@northwestern.edu)
International Center for Advanced Internet Research (www.icair.org)
Director, Metropolitan Research and Education Network (www.mren.org)
Partner, StarLight/STAR TAP; PI, OMNInet (www.icair.org/omninet)
OptIPuter Backplane Workshop, OptIPuter AHM, Calit2
January 17, 2006
2
LambdaGrid Control Plane Paradigm Shift
Traditional provider services: invisible, static resources, centralized management
–Invisible nodes and elements; hierarchical, centrally controlled, fairly static
–Limited functionality and flexibility
LambdaGrid services: distributed devices, dynamic services, visible and accessible resources, integrated as required by applications
–Unlimited functionality and flexibility
Ref: OptIPuter Backplane Project, UCLP
3
OptIPuter Architecture (joint project with UCSD, EVL, UIC)
ODIN Signaling, Control, and Management Techniques
Source: Andrew Chien, UCSD, OptIPuter Software Architect
4
Optical Control Plane (diagram): client devices and controllers spanning a client-layer control plane and an optical-layer control plane, above a client-layer traffic plane and an optical-layer switched traffic plane; interfaces include UNI, I-UNI, and CI.
5
Network Side Interface
WS/BPEL APIs
GMPLS as a uniform control plane
SNMP, with extensions, as the basis of a management plane
Extended MIB capabilities
–L3
–IEEE L2 MIB developments
–MIB integration with higher-layer functionality
Monitoring, Analysis, Reporting
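To make the SNMP/MIB-based management plane above concrete, here is a minimal polling sketch using pysnmp to read a standard IF-MIB counter from a switch. The target hostname, community string, and interface index are placeholder assumptions, not values from the OMNInet deployment.

```python
# Sketch: poll a 64-bit interface byte counter (IF-MIB::ifHCInOctets) over SNMPv2c.
# The target address, community string, and ifIndex below are illustrative only.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def read_if_octets(host: str, if_index: int, community: str = "public") -> int:
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),           # mpModel=1 -> SNMPv2c
        UdpTransportTarget((host, 161), timeout=2.0),
        ContextData(),
        ObjectType(ObjectIdentity("IF-MIB", "ifHCInOctets", if_index))))
    if error_indication or error_status:
        raise RuntimeError(f"SNMP error: {error_indication or error_status.prettyPrint()}")
    return int(var_binds[0][1])

if __name__ == "__main__":
    # Hypothetical switch name; substitute a real management address.
    print(read_if_octets("l2-switch.example.net", if_index=1))
```

Repeated reads of counters like this one are the raw material for the monitoring, analysis, and reporting functions listed above.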
6
L2: 10 GE
10 GE node compute clusters, APIs, 10 GE NICs
10 Gbps switch-on-a-chip: currently low-cost devices, low per-port cost, 240 GE
SNMP
Standard services
–Spanning Tree
–vLANs
–Priority Queuing
IEEE enhancing scalability
7
IEEE L2 Scaling Enhancements
Current lack of hierarchy; IEEE developing a hierarchical architecture
–Network partitioning (802.1Q, vLAN tagging)
–Multiple spanning trees (802.1s)
–Segmentation (802.1ad, “Provider Bridges”)
–Enables subnets to be characterized differently than the core
IETF – architecture for closer integration with Ethernet
–GMPLS as uniform control plane
–Generalized UNI for subnets
–Link-state routing in control plane
–TTL capability in data plane
–Pseudo-wire capabilities
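As a concrete illustration of the 802.1ad “Provider Bridges” segmentation listed above, the sketch below uses scapy to build a doubly tagged (Q-in-Q) frame: an outer provider S-tag with EtherType 0x88a8 wrapping a customer C-tag. The MAC addresses, VLAN IDs, and IP addresses are arbitrary examples.

```python
# Sketch: an 802.1ad (Q-in-Q) frame -- an outer provider S-tag (EtherType 0x88a8)
# carrying a customer frame that still has its own inner C-tag (EtherType 0x8100).
# Addresses and VLAN IDs are illustrative placeholders.
from scapy.layers.l2 import Ether, Dot1Q
from scapy.layers.inet import IP, UDP

frame = (
    Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb", type=0x88A8)  # S-tag follows
    / Dot1Q(vlan=100, type=0x8100)   # provider S-VLAN 100; next header is the C-tag
    / Dot1Q(vlan=42)                 # customer C-VLAN 42
    / IP(src="10.0.0.1", dst="10.0.0.2")
    / UDP(sport=5000, dport=5001)
)

frame.show()              # dump the layered headers
raw_bytes = bytes(frame)  # bytes as they would appear on the wire (minus FCS)
```

The outer tag lets the provider core partition and scale independently of customer VLAN numbering, which is exactly the “subnets characterized differently than core” point above.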
8
L2 Services Enhancements: Metro Ethernet Forum
Three primary technical specifications/standards
–Ethernet Services Model (ESM)
  Ethernet service attributes (core “building blocks”)
  Architectural framework for creating an Ethernet service
  No specific service – any potential service
–Ethernet Services Definitions (ESD)
  Guidelines for using ESM components for service development
  Provides example service types and variations of types, e.g. Ethernet Line (E-Line) and Ethernet LAN (E-LAN)
–Ethernet Traffic Management (ETM)
Implications for operations, traffic management, performance, e.g. managing services vs. pipes
Quality of service agreements, guarantees
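A minimal sketch of how ESM-style “building block” attributes might be captured in data when defining a service such as an E-Line. The field names, UNI identifiers, and bandwidth-profile values are simplified illustrations, not the normative MEF attribute set.

```python
# Sketch: representing a MEF-style Ethernet service as plain data.
# Field names are simplified stand-ins for ESM service attributes.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class ServiceType(Enum):
    E_LINE = "E-Line"   # point-to-point EVC
    E_LAN = "E-LAN"     # multipoint-to-multipoint EVC

@dataclass
class BandwidthProfile:
    cir_mbps: int        # committed information rate
    cbs_kbytes: int      # committed burst size
    eir_mbps: int = 0    # excess information rate
    ebs_kbytes: int = 0  # excess burst size

@dataclass
class EthernetService:
    name: str
    service_type: ServiceType
    uni_endpoints: List[str]                       # UNI identifiers (placeholders)
    ce_vlan_ids: List[int] = field(default_factory=list)
    ingress_profile: BandwidthProfile = field(
        default_factory=lambda: BandwidthProfile(cir_mbps=1000, cbs_kbytes=512))

# Example: a point-to-point 1 Gbps service between two hypothetical UNIs.
svc = EthernetService(
    name="optiputer-demo",
    service_type=ServiceType.E_LINE,
    uni_endpoints=["uni-starlight-01", "uni-ucsd-07"],
    ce_vlan_ids=[42],
)
```

Managing objects like this one, rather than raw pipes, is the operational shift the slide points to: the service definition, not the physical link, becomes the unit of provisioning and assurance.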
9
L1: 10 Gbps
10 GE node compute clusters, APIs, automated switch panels
GMPLS: IETF GMPLS UNI (vs. ONI UNI; implications for restoration reliability)
10 G ports, MEMS based
–Services: lightpaths with attributes, uni-directional, bi-directional
–Highly secure paths
–OVPN
–Optical multicast
–Protected through associated groups
ITU-T SG: Generic VPN Architecture (Y.1311), Service Requirements (Y.1312), L1 VPN Architecture (Y.1313)
10
ODIN Architecture (diagram)
Applications: HP-PPFS, HP-APP2, HP-APP3, HP-APP4, VS
Resource/physical processing, monitoring and adjustment
ODIN Server: creates/deletes LPs, status inquiry (TCP); access policy (AAA); process registration; discovery/resource manager, incl. link groups and addresses; previously OGSA/OGSI, soon OGSA/OASIS WSRF; process instantiation monitoring; ConfDB
Lambda routing: topology discovery, DB of physical links; create new path, optimize path selection; traffic engineering; constraint-based routing; O-UNI interworking and control integration; path selection, protection/restoration tool – GMPLS
System Manager: discovery, config, communicate, interlink, stop/start module, resource balance, interface adjustments
GMPLS tools: LP signaling for I-NNI; attribute designation, e.g. uni-/bi-directional; LP labeling; link group designations; control channel monitoring, physical fault detection, isolation, adjustment, connection validation, etc.
Data plane elements: OSM, UNI-N
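A minimal sketch of the constraint-based path selection step listed under lambda routing above: choose a shortest path over a database of physical links, skipping links that lack the requested capacity. The topology, site names, and capacities are invented placeholders, not ODIN's actual algorithm or data.

```python
# Sketch: constraint-based path selection over a physical-link database.
# Links below (site pairs, available Gbps) are illustrative placeholders.
import heapq
from typing import Dict, List, Tuple

Link = Tuple[str, float]  # (neighbor, available capacity in Gbps)

topology: Dict[str, List[Link]] = {
    "StarLight":    [("Northwestern", 10.0), ("UIC-EVL", 10.0), ("NetherLight", 10.0)],
    "Northwestern": [("StarLight", 10.0), ("UIC-EVL", 1.0)],
    "UIC-EVL":      [("StarLight", 10.0), ("Northwestern", 1.0)],
    "NetherLight":  [("StarLight", 10.0)],
}

def select_path(src: str, dst: str, min_gbps: float) -> List[str]:
    """Shortest path by hop count, ignoring links below the requested capacity."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        hops, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, capacity in topology.get(node, []):
            if capacity >= min_gbps and neighbor not in visited:
                heapq.heappush(queue, (hops + 1, neighbor, path + [neighbor]))
    raise ValueError(f"no path from {src} to {dst} with {min_gbps} Gbps available")

print(select_path("Northwestern", "NetherLight", min_gbps=10.0))
# -> ['Northwestern', 'StarLight', 'NetherLight']
```

In a full system the selected path would then be handed to the GMPLS signaling tools to instantiate the lightpath and to the monitoring modules for fault detection and restoration.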
11
The OptIPuter LambdaGrid (map): Amsterdam, Chicago, Seattle, San Diego
Sites: StarLight, Northwestern, UIC, UoA, CERN, NASA Goddard, CENIC San Diego GigaPOP, UCSD, CENIC LA GigaPOP, NASA Ames, NASA JPL, ISI, UCI
12
OMNInet Network Configuration 2006 (diagram)
Photonic nodes: 710 Lake Shore, 600 S. Federal, W Taylor, 750 North Lake Shore
Each photonic node: Optera 5200 10 Gb/s TSPR, 10 GE and 10/100/GigE interfaces, PP 8600 switches
StarLight: interconnect with other research networks; 10 GE to Ca*net 4; 10 GE LAN PHY (Dec 03); 1310 nm 10 GbE WAN PHY interfaces
8x8x8 scalable photonic switch; trunk side – 10 G WDM; OFA on all trunks; Optera Metro 5200 OFAs
Campus fiber: EVL/UIC OM5200 (16 fibers), LAC/UIC OM5200 (4 fibers), TECH/NU-E OM5200 (4 fibers); initial config: 10 lambdas (all GigE)
Network links NWUEN-1 through NWUEN-9 (fiber in use / fiber not in use); DOT clusters
13
OMNInet 2005 (diagram): 750 North Lake Shore Drive, 710 North Lake Shore Drive, 1890 W. Taylor, 600 N. Federal
Optical switch plus four high-performance L2 switches; 1 x 10G WAN trib; Ge (x2) links
Trib content: OC-192 with TFEC: 16; OC-192 without TFEC: 12; Ge: 8; OC-48: 0
TFEC link vs. non-TFEC link: only the TFEC link can support OC-192c (10G WAN) operation; the non-TFEC link is used to transport Ge traffic
Default configuration: tribs can be moved as needed; could have 2 facing L2 switches
TFEC = out-of-band error correction
14
Extensions to Other Sites via Illinois’ I-WIRE (diagram)
Sites: StarLight, UIC/EVL, NCSA, Argonne, UIUC CS
Research areas: displays/VR, collaboration, rendering, applications, data mining; latency-tolerant algorithms, interaction of SAN/LAN/WAN technologies, clusters
Source: Charlie Catlett
16
Summary – Optical Services: Baseline + 5 Years (each row: 2005 baseline capability and its evolution through 2010)
Dedicated Lightpaths → Enhanced Direct Addressing → Additional LPs, National, Global → Site Expansion: Multiple Labs → Site Expansion: Multiple Labs
Dynamic Lightpath Allocation → Increased Number of Nodes on LPs → Increased Allocation Capacity, US → Increased Allocation Capacity, Global → Increased Allocation Capacity: Sites
Highly Distributed Control Plane → Persistent Inter-Domain Signaling, National, Global → Multi-Domain Distributed Control → Extension to Additional Net Elements → Persistence: Common Facilities → Additional Facility Implementations
Deterministic Paths → Close Integration w/ App Signaling → Increased Attribute Parameters → Increased Adjustment Parameters → Performance Metrics and Methods → Enhanced Recovery/Restoration
Autonomous Dynamic Lightpath Peering → Multi-Domain ADLP → Integration with Management Sys → Extensions of ADLP Peering → E2E ADLP → Recovery/Restoration
Multi-Service Layer Integration → 5-10 MSI Facilities → 10-20 MSI Facilities → 20-40 MSI Facilities → Additional US, Global Facilities
Optical Multicast → Enhanced Control of OM → OM/App Integration → Expansion to Addtn’l Objects → Expansion to Addtn’l Apps → Expansion to Addtn’l Apps
App/Optical Integration → App API–Op Ser Validation → Integration with Optical Services → Monitoring Techniques → Analysis Techniques → Recovery, Restoration
Wavelength Routing → Persistent Wavelength Routing → Multi-Domain Wavelength Routing → Multi-layer Integration → Multi-Services Integration → Enhanced Recovery/Restoration
17
Summary – Optical Technologies: Baseline + 5 Years (each row: 2005 baseline and its evolution through 2010)
O-APIs → Additional Experiments w/ Architecture → App-Specific APIs → Variable APIs Integrated with Common Services → Enhancement of Architecture → Additional Deployment
Distributed Control Systems, Multi-Domain → Additional Experiments w/ Architecture → Integration with ROADMs → Expansion to Edge Sites → Enhancement of Architecture → Additional Deployment
OOO Switches → At Selected Core Sites → At Selected Core, Edge Sites + Experimental Solid-State OSWs → Solid-State OSW Deployment → Solid State At Core, Edge
O-UNIs → At Selected Core Sites → At Selected Core, Edge Sites → Deployment At All Key Sites → Additional Deployments → Wide Deployment
Service Abstraction – GMPLS Integration → Additional Signaling Integration → Increased Transparency & Layer Elimination → Increased Integration with ID/Obj. Discovery → Prototype Arch for App-Specific Serv Abstractions → Formalization of Enhanced Architecture
Policy-Based Access Control → Additional Experiments w/ Architecture → Formalization of Architecture, e.g. Via WS → Expansion to Additional Resources → L1 Security Enhancements → Formalization of Enhanced Architecture
New Id, Object and Discovery Mechanisms → Integration of New Id, Obj, Dis w/ New Arch. → Integration With Multiple Integrated Serv. → Integration w/ New Management Sys → Extensions to Various TE Functions → Persistent at Core, Edge Facilities
DWDM, CWDM → Integration with Edge Optics → Integration with BP Optics → Additional MUX/DMUX → Increased Stream Granularity
2D MEMS → 3D LP Switches → Experimental Opt Packet SWs → Prototype Deployed OPSW At Core Sites → At Edge and Core Sites
18
Summary – Optical Interoperability Issues: Baseline + 5 Years (each row: 2005 baseline and its evolution through 2010)
Common Open Services Definitions → Common Services R&D → Common Services Experimentation → Initial Standards Formalization → Establish CSD Enhancement Process → On-Going
COS Architecture → Initial Implementations → Expansion of Functionality → Initial Standards Formalization → Enhancement Process → On-Going
Open Protocols And Standards → Initial Implementations → Expansion of Functionality → Initial Standards Formalization → Enhancement Process → On-Going
Distributed Control → V2 with WS Integration → Multi-Service Integration → New Services Integration → Extensions, Horizontal, Vertical → Integration with New Optical Core
Multi-Domain Interoperability → Enhancement of Signaling Functionality → Access Policy Services → Expansion to Additional Domains → Increasing US, Global Extensions
Implementation At GLIF Open Exchanges: 4 → 5-10 OE Sites → 10-20 OE Sites → 20-30 OE Sites → 30-40 OE Sites → 40-50 OE Sites
Basic Services at Key US, Global Research Sites: 15-30 Sites → 30-60 Sites → 60-90 Sites → 90-120 Sites → 120-150 Sites
Basic Services at Key US Science Sites: 7-15 Sites → 15-30 Sites → 30-45 Sites → 45-60 Sites → 60-75 Sites
Service Establishment At Selected Labs: 30-60 Labs → 60-120 Labs → 120-180 Labs → 180-240 Labs → 240-300 Labs
19
Overall Networking Plan (diagram)
Sites: Seattle, Chicago (StarLight), San Diego (iGRID, UCSD), University of Amsterdam (NetherLight)
Networks: NLR, Pacific Wave, CENIC, PW/CENIC
Dedicated lightpaths; 4 dedicated paths; Route B via NetherLight
4 x 1 Gbps paths + one control channel
20
AMROEBA Network Topology (diagram)
L2 switches and L3 (GbE) interconnecting: the iGRID conference floor (demonstration control, visualization), the UvA VanGogh grid clusters via SURFnet/University of Amsterdam and an OME, and the iCAIR DOT grid clusters via StarLight
23
www.startap.net/starlight