An Architecture for Data Intensive Services Enabled by Next Generation Optical Networks. Nortel Networks, International Center for Advanced Internet Research.


An Architecture for Data Intensive Services Enabled by Next Generation Optical Networks (DWDM-RAM)
Tal Lavian, Nortel Networks Labs
Nortel Networks; International Center for Advanced Internet Research (iCAIR), Northwestern University, Chicago; Santa Clara University, California; University of Technology, Sydney
NTONC

Agenda
–Challenges: Growth of Data-Intensive Applications
–Architecture: Lambda Data Grid
–Lambda Scheduling
–Results: Demos and Experiments
–Summary

Radical Mismatch: L1 vs. L3
–Radical mismatch between the optical transmission world and the electrical forwarding/routing world.
–Currently, a single strand of optical fiber can transmit more bandwidth than the entire Internet core.
–Current L3 architectures cannot effectively transmit petabytes, or even hundreds of terabytes.
–Current L0–L1 limitation: manual allocation, taking 6–12 months; static.
–Static means: not dynamic; no end-point connection, no service architecture, no glue layers, no application-driven underlay routing.

Growth of Data-Intensive Applications
–IP data transfer: 1.5 TB (10^12 bytes) in 1.5 KB packets
–Routing decisions: one billion (10^9) times, at every hop
–Web, Telnet, email: small files
–Fundamental limitations with data-intensive applications:
–multi-terabytes or petabytes of data
–Moving 10 KB and 10 GB (or 10 TB) are different problems (factors of 10^6 and 10^9)
–1 Mb/s and 10 Gb/s are different regimes (a factor of 10^4)
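The slide's per-hop packet count can be checked with a quick back-of-envelope calculation; the sketch below (illustrative only, not part of the DWDM-RAM code) reproduces the 10^9 routing-decisions figure from 1.5 TB in 1.5 KB packets.

```java
// Back-of-envelope check of the slide's packet-count claim
// (assumptions: 1.5 TB = 1.5e12 bytes, 1500-byte packets).
public class RoutingDecisions {
    // Number of packets, i.e. per-hop routing decisions, for a transfer.
    public static long packets(long transferBytes, long packetBytes) {
        return transferBytes / packetBytes;
    }

    public static void main(String[] args) {
        long perHop = packets(1_500_000_000_000L, 1_500L);
        System.out.println(perHop + " routing decisions per hop"); // 10^9
    }
}
```

Every hop along the path repeats this work, which is the motivation for bypassing L3 entirely for bulk flows.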

Lambda Hourglass
–Data-intensive application requirements: HEP, astrophysics/astronomy, bioinformatics, computational chemistry (CERN: 1 PB)
–Inexpensive disk: 1 TB < $1,000
–DWDM: abundant optical bandwidth; one fiber strand carries 280 λs at OC-192, i.e. 2.8 Tb/s on a single fiber strand
–Hourglass layers: Data-Intensive Applications, over the Lambda Data Grid, over Abundant Optical Bandwidth

Challenge: emerging data-intensive applications require:
–Extremely high-performance, long-duration data flows
–Scalability in data volume and global reach
–Adjustability to unpredictable traffic behavior
–Integration with multiple Grid resources
Response: DWDM-RAM, an architecture for data-intensive Grids enabled by next-generation dynamic optical networks, incorporating new methods for lightpath provisioning.

DWDM-RAM: an architecture designed to meet the networking challenges of extremely large-scale Grid applications. Traditional network infrastructure cannot meet these demands, especially the requirements for intensive data flows.
DWDM-RAM components include:
–Data management services
–Intelligent middleware
–Dynamic lightpath provisioning
–State-of-the-art photonic technologies
–Wide-area photonic testbed implementation

Agenda
–Challenges: Growth of Data-Intensive Applications
–Architecture: Lambda Data Grid
–Lambda Scheduling
–Results: Demos and Experiments
–Summary

OMNInet testbed (diagram)
–A four-node, multi-site optical metro testbed network in Chicago; the first 10 GE service trial
–A testbed for all-optical switching and advanced high-speed services
–Core nodes: Northwestern U, StarLight, UIC, and a fourth closed-loop node, each with an optical switching platform, OPTera Metro 5200, Passport 8600, and an application cluster, interconnected by 4x10GE and 8x1GE links; CA*net3 connection in Chicago
–Partners: SBC, Nortel, iCAIR at Northwestern, EVL, CANARIE, ANL

What is Lambda Data Grid? A service architecture that:
–complies with OGSA
–offers Lambda as an OGSI service
–supports on-demand and scheduled Lambdas
–is implemented on GT3
Demos in booth 1722.
Layering: Grid computing applications / Grid middleware / Data Grid service plane / Network service plane / centralized optical network control / Lambda service.

DWDM-RAM Service Control Architecture (diagram)
–Data Grid service plane: Grid service requests; service control
–Network service plane: network service requests; service control
–OMNInet control plane: ODIN, UNI-N, connection control, data-path control
–Data transmission plane: data centers (1..n), L3 routers, L2 switches, data-storage switches, data paths

Service composition (diagram)
–External services and applications sit above a data transfer service and a network transfer service
–Middleware services: data transfer scheduler, network resource scheduler, basic data transfer service, basic network resource service, plus storage, processing, and other resource services
–Below: the optical control plane and data-path control drive the dynamic optical network and the data transmission plane (data centers, L3 routers, data-storage switches, data paths)

Data Management Services
–OGSA/OGSI compliant
–Capable of receiving and understanding application requests
–Has complete knowledge of network resources
–Transmits signals to intelligent middleware
–Understands communications from the Grid infrastructure
–Adjusts to changing requirements
–Understands edge resources
–On-demand or scheduled processing
–Supports various models for scheduling, priority setting, and event synchronization

Intelligent Middleware for Adaptive Optical Networking
–OGSA/OGSI compliant
–Integrated with Globus
–Receives requests from data services
–Knowledgeable about Grid resources
–Has complete understanding of dynamic lightpath provisioning
–Communicates with the optical network services layer
–Can be integrated with GRAM for co-management
–Flexible and extensible architecture

Dynamic Lightpath Provisioning Services: Optical Dynamic Intelligent Networking (ODIN)
–OGSA/OGSI compliant
–Receives requests from middleware services
–Knowledgeable about optical network resources
–Provides dynamic lightpath provisioning
–Communicates with the optical network protocol layer
–Precise wavelength control
–Intradomain as well as interdomain
–Contains mechanisms for extending lightpaths through E-Paths (electronic paths)

Agenda
–Challenges: Growth of Data-Intensive Applications
–Architecture: Lambda Data Grid
–Lambda Scheduling
–Results: Demos and Experiments
–Summary

Design for Scheduling
–Network and data transfers are scheduled
–The data management schedule coordinates network, retrieval, and sourcing services (using their schedulers)
–Network management has its own schedule
–Variety of request models:
–Fixed: at a specific time, for a specific duration
–Under-constrained: e.g. ASAP, or anywhere within a window
–Auto-rescheduling for optimization:
–facilitated by under-constrained requests
–data management reschedules for its own requests or at the request of network management

Example 1: Time Shift
–A request for 1/2 hour on segment D between 4:00 and 5:30 is granted to user W at 4:00
–A new request arrives from user X for the same segment: 1 hour between 3:30 and 5:00
–Reschedule user W to 4:30 and grant user X 3:30–4:30; everyone is happy
–Pattern: a route is allocated for a time slot; a new request comes in; the first route can be rescheduled to a later slot within its window to accommodate the new request
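The time-shift move above can be sketched as code. This is a hypothetical illustration, not the DWDM-RAM scheduler itself: the `Request`/`Slot` types and the minutes-since-3:00 time unit are invented for the example.

```java
// Sketch of Example 1's time-shift rescheduling (all names hypothetical;
// times are minutes after 3:00, so 4:00 = 60, 5:30 = 150).
public class TimeShift {
    // An under-constrained request: may start anywhere in [earliest, latest - duration].
    record Request(String user, int earliest, int latest, int duration) {}
    record Slot(String user, int start, int end) {}

    // Try to satisfy 'incoming' by shifting 'existing' later within its window.
    static Slot[] reschedule(Request existing, Request incoming) {
        int shiftedStart = incoming.earliest() + incoming.duration();
        if (shiftedStart + existing.duration() <= existing.latest()) {
            return new Slot[] {
                new Slot(incoming.user(), incoming.earliest(), shiftedStart),
                new Slot(existing.user(), shiftedStart, shiftedStart + existing.duration())
            };
        }
        return null; // windows too tight: cannot accommodate both
    }

    public static void main(String[] args) {
        Request w = new Request("W", 60, 150, 30);  // 30 min within 4:00-5:30
        Request x = new Request("X", 30, 120, 60);  // 60 min within 3:30-5:00
        for (Slot s : reschedule(w, x)) System.out.println(s);
        // X gets 3:30-4:30 (30..90); W is shifted to 4:30-5:00 (90..120)
    }
}
```

The key property is that only under-constrained requests leave the scheduler room to shift earlier grants.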

Example 2: Reroute
–A request for 1 hour between nodes A and B, between 7:00 and 8:30, is granted for 7:00 using segment X (and other segments)
–A new request arrives for 2 hours between nodes C and D, between 7:00 and 9:30; this route needs segment X to be satisfied
–Reroute the first request over another path through the topology (e.g. via segment Y) to free segment X for the second request; everyone is happy
–Pattern: a route is allocated; a new request comes in for a segment in use; the first route can be altered to use a different path so the second can also be serviced within its time window

Agenda
–Challenges: Growth of Data-Intensive Applications
–Architecture: Lambda Data Grid
–Lambda Scheduling
–Results: Demos and Experiments
–Summary

Path Allocation Overhead as a % of the Total Transfer Time (plot)
–The knee point shows the file size beyond which the allocation overhead is insignificant
–(measured for file sizes from 1 GB and 5 GB up to 500 GB)
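The shape of that curve follows from amortizing a fixed path-setup cost over the transfer. The sketch below assumes an illustrative model (a 10-second lightpath setup over a 10 Gb/s lambda; both numbers are assumptions, not measurements from the slide):

```java
// Assumed model behind the "knee": overhead fraction = setup / (setup + transfer).
public class PathOverhead {
    // Fraction of total time spent on path allocation for a given file size.
    static double overheadFraction(double setupSec, double fileGB, double rateGbps) {
        double transferSec = fileGB * 8.0 / rateGbps; // GB -> Gb, then Gb / (Gb/s)
        return setupSec / (setupSec + transferSec);
    }

    public static void main(String[] args) {
        // Illustrative: 10 s setup, 10 Gb/s lambda.
        for (double gb : new double[] {1, 5, 50, 500}) {
            System.out.printf("%4.0f GB -> %4.1f%% overhead%n",
                              gb, 100 * overheadFraction(10.0, gb, 10.0));
        }
    }
}
```

Under these assumptions a 1 GB file spends most of its total time on setup, while at 500 GB the overhead drops to a few percent, which is the knee the slide refers to.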

Agenda
–Challenges: Growth of Data-Intensive Applications
–Architecture: Lambda Data Grid
–Lambda Scheduling
–Results: Demos and Experiments
–Summary

–Next-generation optical networking provides significant new capabilities for Grid applications and services, especially for high-performance, data-intensive processes
–The DWDM-RAM architecture provides a framework for exploiting these new capabilities
–These conclusions are not only conceptual: they are being proven and demonstrated on OMNInet, a wide-area metro advanced photonic testbed

Thank you !

NRM OGSA Compliance
–OGSI interface: a GridService portType with two application-oriented methods:
–allocatePath(fromHost, toHost,...)
–deallocatePath(allocationID)
–Usable by a variety of Grid applications
–Java-oriented SOAP implementation using the Globus Toolkit 3.0

Network Resource Manager
–Presents application-oriented OGSI / Web Services interfaces for network resource (lightpath) allocation
–Hides network details from applications
–Implemented in Java
–Items in blue are planned

Scheduling: Extending Grid Services
–OGSI interfaces
–Web Service implemented using SOAP and JAX-RPC
–Non-OGSI clients also supported
–GARA and GRAM extensions
–Network scheduling is a new dimension
–Under-constrained (conditional) requests
–Elective rescheduling/renegotiation
–Scheduled data-resource reservation service ("Provide 2 TB storage between 14:00 and 18:00 tomorrow")
(DWDM-RAM Architecture, October 2003, page 6)

Enabling High-Performance Support for Data-Intensive Services with On-Demand Lightpaths Created by Dynamic Lambda Provisioning, Supported by Advanced Photonic Technologies
–OGSA/OGSI-compliant service
–Optical service layer: Optical Dynamic Intelligent Network (ODIN) services
–Incorporates specialized signaling
–Utilizes provisioning tool: IETF GMPLS
–New photonic protocols
–Lightpath services

OMNInet physical topology (diagram)
–Photonic nodes: Lake Shore, S. Federal, W. Taylor, and Sheridan, each with an 8x8x8 scalable photonic switch, Optera 5200 10 Gb/s TSPRs, OFAs, and Passport 8600s
–Trunk side: 10 G WDM, with OFAs on all trunks; 1310 nm 10 GbE WAN PHY interfaces
–Campus fiber (4 to 16 strands) to EVL/UIC, LAC/UIC, and TECH/NU-E OM5200s; Grid clusters attached at 10/100/1000 and 10 GE
–StarLight interconnect with other research networks; 10GE LAN PHY (Dec 03); connection to CA*net 4
–Initial configuration: 10 lambdas (all GigE); fiber segments NWUEN-1 through NWUEN-9 (some in use, some not)

Physical-Layer Optical Monitoring and Adjustment (diagram)
–DSP algorithms and measurement: power measurement and correction, connection verification, path-ID correlation, fault isolation, loss-of-signal detection, gain control, leveling, and transient compensation
–Photonic hardware: fiber taps, photodetectors, OFAs, VOAs, an AWG with set-point temperature control, splitter, OSC circuitry, 100FX PHY/MAC, and A/D-D/A converters with drivers/data translation
–Control middleware and management via OSC routing; photonics database; rapid detect (FLIP); PPS control

Summary (I)
–Allow applications/services to be deployed over the Lambda Data Grid
–Expand OGSA for integration with the optical network
–Extend OGSI to interface with optical control infrastructure and mechanisms
–Extend GRAM and GARA to provide a framework for network resource optimization
–Provide a generalized framework for multi-party data scheduling

Summary (II)
–Treating the network as a Grid resource
–A circuit-switching paradigm: moving large amounts of data over the optical network, quickly and efficiently
–Demonstration of on-demand and advance-scheduled use of the optical network
–Demonstration of under-constrained scheduling requests
–The optical network as a shared resource: it may be temporarily dedicated to serving individual tasks, yielding high overall throughput, utilization, and service ratio
–Potential applications include support of e-Science, massive off-site backups, disaster recovery, and commercial data replication (security, data mining, etc.)

Extension of Under-Constrained Concepts
–Initially, we use simple time windows
–More complex extensions:
–any time after 7:30
–within 3 hours after event B happens
–a cost function over time
–numerical priorities for job requests
–Eventually, extend the under-constrained concept to user-specified utility functions for costing and priorities, with callbacks to request that scheduled jobs be rerouted/rescheduled (the client can say yea or nay)
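The constraint forms listed above can be expressed as start-time predicates plus a cost function, which a scheduler then minimizes over feasible starts. The sketch below is illustrative (names, time units, and the brute-force search are all invented for the example; a real scheduler would not scan minute by minute):

```java
// Under-constrained requests as predicates over start times (minutes since
// midnight) plus a user-supplied cost function; all names hypothetical.
import java.util.function.LongPredicate;
import java.util.function.LongUnaryOperator;

public class UnderConstrained {
    // "Any time after 7:30."
    static LongPredicate after(long t) {
        return start -> start >= t;
    }

    // "Within 'windowMin' minutes after event B happens."
    static LongPredicate within(long eventTime, long windowMin) {
        return start -> start >= eventTime && start <= eventTime + windowMin;
    }

    // Pick the feasible start time with the lowest cost (brute-force scan,
    // purely for illustration).
    static long cheapestStart(LongPredicate feasible, LongUnaryOperator cost,
                              long from, long to) {
        long best = -1, bestCost = Long.MAX_VALUE;
        for (long t = from; t <= to; t++) {
            if (feasible.test(t) && cost.applyAsLong(t) < bestCost) {
                bestCost = cost.applyAsLong(t);
                best = t;
            }
        }
        return best; // -1 if nothing feasible
    }

    public static void main(String[] args) {
        // Event B at 7:00 (minute 420), window 3 hours; later starts cheaper.
        long start = cheapestStart(within(420, 180), t -> 600 - t, 0, 1440);
        System.out.println(start); // 600, i.e. 10:00, the latest feasible start
    }
}
```

Numerical priorities and reroute callbacks would layer on top of this: the cost function encodes preference, while the predicate encodes hard feasibility.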