Presentation transcript:

Requirements Mechanisms + Policies API

[Figure: multi-domain setup spanning Domains 1-3 between endpoints A (Chicago) and B (Amsterdam); GNSA and Lambda Data Grid (LDG) instances in each domain, with the LDG service plane layered over per-domain control planes.]

Application Scenario: Point-to-point data transfer of multi-TB data sets
– Current MOP: copy from remote DB takes ~10 days (unpredictable); store, then copy/analyze
– Network issues: request to receive data from 1 hour up to 1 day; innovation for new bio-science; architecture forces optimization of BW utilization; tradeoff between BW and storage
Application Scenario: Access multiple remote DBs
– Current MOP: N x the previous scenario
– Network issues: simultaneous connectivity to multiple sites; multi-domain; dynamic connectivity hard to manage; unknown sequence of connections
Application Scenario: Remote instrument access (radio telescope)
– Current MOP: personnel required at remote location
– Network issues: require fat unidirectional pipes; tight QoS requirements (jitter, delay, data loss)

Estimated 2008 Data Generation
– Computational fluid dynamics: 2 PB/year
– Magnetic fusion: 1 PB/year
– Plasma physics: 5 PB/year
– Nuclear physics: 3 PB/year
– SLAC (BaBar experiments): 1 PB/year
– CERN LHC (Higgs boson search): 10 PB/year
– RHIC (quark-gluon plasma experiments): 5 PB/year
– CEBAF (hadron structure experiments): <10 PB/year

[Figure: growth trends: network traffic doubling every 12 months versus processor performance doubling every 18 months.]

[Figure: plane view of the architecture: scientific workflow and apps over middleware and resource managers (NMI); a Data Grid Service Plane (DTS) and a Network Service Plane (NRS) driving an optical control network; the data transmission plane interconnects compute, storage and DB resources (1..n) over optical links.]

[Figure: reservation scheduling call flow: PRA: CreateReservationFromRequest; SRA1/SRA2/SRA3 (ConstructProposals): GetWindowsForRequest, ConstructProposalForWindow, ProposeReschedule, etc.]
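A minimal sketch of how these scheduling calls might fit together, assuming half-hour granularity and simple interval bookkeeping; only the call names come from the slide, everything else (data structures, parameters, the first-fit choice) is an illustrative assumption:

```python
from dataclasses import dataclass

@dataclass
class Request:
    duration_h: float        # how long the lightpath is needed, in hours
    window: tuple            # (earliest_start, latest_end), in hours

@dataclass
class Proposal:
    start: float
    end: float

def get_windows_for_request(req: Request, booked: list) -> list:
    """Candidate start times inside the request window that avoid
    already-booked (start, end) intervals on the segment."""
    starts, t = [], req.window[0]
    while t + req.duration_h <= req.window[1]:
        if all(t + req.duration_h <= b[0] or t >= b[1] for b in booked):
            starts.append(t)
        t += 0.5                       # half-hour granularity (assumption)
    return starts

def construct_proposal_for_window(req: Request, start: float) -> Proposal:
    return Proposal(start, start + req.duration_h)

def create_reservation_from_request(req: Request, booked: list):
    for start in get_windows_for_request(req, booked):
        return construct_proposal_for_window(req, start)   # first fit
    return None    # a real scheduler would fall back to ProposeReschedule

# 1-hour request in a 3:30-5:30 window, with 4:00-4:30 already booked
print(create_reservation_from_request(Request(1.0, (3.5, 5.5)), [(4.0, 4.5)]))
```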

[Figure: Grid layered architecture (Application, Collaborative, Resource, Connectivity, Fabric) mapped onto the Lambda Data Grid: BIRN mouse apps, BIRN workflow and BIRN toolkit at the top; NMI and NRS middleware; lambda, DB, storage and computation resource managers with optical control and WSRF/OGSA; IP connectivity, TCP/HTTP, UDP and optical protocols; GridFTP, ODIN, optical hardware and other resources at the fabric.]

Lambda Data Grid: matching an availability (abundant optical bandwidth) with a requirement (data-intensive e-Science applications).

[Figure: two Applications / Middleware / Network stacks contrasting where the effort goes: existing Grid work's emphasis versus our emphasis.]

[Figure: throughput scale from 1 Gb/s through 10 Gb/s and 100 Gb/s to Terabit/s: fiber transmission capacity versus edge-computer limitations.]

DWDM-RAM Service Control Architecture. [Figure: Grid service requests enter the Data Grid Service Plane; network service requests go to the Network Service Plane, whose service control drives the OmniNet control plane (ODIN, UNI-N, connection control); data-path control configures L3 routers, L2 switches and data-storage switches on the data transmission plane, carrying the data path between data centers.]
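As a rough illustration of the two-plane flow (not the actual DWDM-RAM code), the hypothetical classes below show a Grid data request asking the network service plane for a lightpath before moving the data; class and method names are invented stand-ins:

```python
class NetworkServicePlane:
    def provision_lightpath(self, src: str, dst: str, gbps: int) -> str:
        # In DWDM-RAM this step goes through ODIN / UNI-N connection control.
        print(f"optical control: {gbps} Gb/s lightpath {src} -> {dst} up")
        return "lightpath-1"

    def release(self, path_id: str) -> None:
        print(f"optical control: {path_id} torn down")

class DataGridServicePlane:
    def __init__(self, nsp: NetworkServicePlane):
        self.nsp = nsp

    def transfer(self, src: str, dst: str, terabytes: float) -> None:
        # Provision first, then move the data, then give the lambda back.
        path = self.nsp.provision_lightpath(src, dst, gbps=10)
        print(f"moving {terabytes} TB over {path}")    # e.g. via GridFTP
        self.nsp.release(path)

DataGridServicePlane(NetworkServicePlane()).transfer("Chicago", "Amsterdam", 5)
```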

[Figure: the same DWDM-RAM service control architecture with a generic Optical Control Plane (optical control, UNI-N) in place of ODIN/OmniNet.]

From 100 Days to 100 Seconds

[Figure: Lambda Data Grid on Globus services, layer by layer: e-Science applications and multidisciplinary simulation; DTS, NRS and a storage service alongside GRAM, GSI and GARA; SOAP, TCP/HTTP, UDP/SABUL and IP connectivity; GridFTP, ODIN, OMNInet and storage bricks at the fabric.]

[Figure: layered Grid services architecture:
– Applications and supporting tools, problem-solving environments and application development support, entering via Grid access (proxy authentication, authorization, initiation) and Grid task initiation.
– Collective Grid services: brokering, global queuing, global event services, co-scheduling, data cataloguing, uniform data access, data replication, communication services.
– Common Grid services: Grid information service, uniform resource access, authorization, Grid security infrastructure (authentication, proxy, secure transport), auditing, fault management, monitoring.
– Local resources behind resource managers: CPUs, tertiary storage, on-line storage, scientific instruments, monitors, high-speed data transport, network QoS.
– Legend: high-performance computing and processor/memory co-allocation, security and generic AAA, and optical networking are researched in other program lines; the rest is imported from the Globus toolkit.]

[Figure: request value as a function of time: window, step, increasing, decreasing, peak, level, and asymptotic-increasing profiles.]
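A hedged sketch of a few of the named profiles, written as scoring functions a scheduler could evaluate at candidate start times; the shapes and parameters are illustrative assumptions, the slide only names the profile types:

```python
import math

def window(t, t0, t1, v=1.0):
    """Full value only if the transfer runs inside [t0, t1]."""
    return v if t0 <= t <= t1 else 0.0

def step(t, deadline, v=1.0):
    """Full value up to a deadline, nothing after it."""
    return v if t <= deadline else 0.0

def decreasing(t, t0, t1, v=1.0):
    """Value decays linearly from v at t0 to 0 at t1."""
    if t <= t0:
        return v
    if t >= t1:
        return 0.0
    return v * (t1 - t) / (t1 - t0)

def asymptotic_increasing(t, rate=0.5, v=1.0):
    """Value approaches v the longer the scheduler can wait."""
    return v * (1.0 - math.exp(-rate * t))

for t in (0, 2, 4, 6):
    print(t, window(t, 1, 5), step(t, 4),
          round(decreasing(t, 1, 5), 2), round(asymptotic_increasing(t), 2))
```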

[Figure: application services with data and control channels between Chicago and Amsterdam; AAA and LDG at each end, over OMNInet/ODIN, StarLight, NetherLight and UvA, with ASTN and SNMP control.]

Multi-Endpoint Communication
– Network transfers faster than individual machines: a Terabit flow? a 100 Gbit flow? a 10 Gbps flow with 1 Gbps NICs? Uh-oh!
– Clusters are a cost-effective means to terminate fast transfers
– Support flexible, robust, general N-to-M communication
– Manage heterogeneity, multiple transfers, data accessibility
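The cluster-termination point can be made concrete with a small back-of-the-envelope helper; the numbers and the even striping are illustrative assumptions:

```python
import math

def stripe_transfer(total_gb: float, flow_gbps: float, nic_gbps: float = 1.0):
    nodes_needed = math.ceil(flow_gbps / nic_gbps)   # NICs needed to absorb the flow
    per_node_gb = total_gb / nodes_needed            # even striping across nodes
    seconds = total_gb * 8 / flow_gbps               # wall-clock transfer time
    return nodes_needed, per_node_gb, seconds

nodes, chunk, secs = stripe_transfer(total_gb=1000, flow_gbps=10)
print(f"{nodes} nodes x {chunk:.0f} GB each, ~{secs:.0f} s at 10 Gb/s")
```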

[Figure: Data Management Service (DMS): a client application asks the DMS to move data; the DMS coordinates the NRM (lambda setup) and an FTP client/server pair between the data source and the data receiver over the provisioned lightpath.]

[Figure: four-node topology A-B-C-D with segment X reserved 7:00-8:00 and an alternative segment Y (used in the reroute example later).]

[Figure: DTS and NRS internals. DTS: data service, scheduling logic, replica service, data calculation and proposal evaluation, with interfaces to applications/middleware, NMI, GT4 and the NRS. NRS: scheduling service, scheduling algorithm, topology map, proposal constructor, proposal evaluator, network calculation and network allocation, with interfaces to NMI, DTS, GT4 and optical control.]

Visualization x 1,000, Storage x 400, Computation x 500: few-to-few 10 Gbps connectivity with C=500, S=400 and V=1,000 would require a budget of 100 trillion dollars a year.
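A back-of-the-envelope reading of that figure, under loudly stated assumptions: it treats the cost as that of statically meshing every endpoint pairwise with dedicated 10 Gb/s links; only C, S and V come from the slide, and the per-link annual cost is a made-up placeholder, not a figure from the talk:

```python
C, S, V = 500, 400, 1000                # counts from the slide
n = C + S + V                           # 1,900 endpoints in total
links = n * (n - 1) // 2                # links for a static full mesh
cost_per_link_per_year = 50_000_000     # hypothetical $/link/year placeholder
print(f"{links:,} dedicated 10 Gb/s links")              # ~1.8 million links
print(f"~${links * cost_per_link_per_year / 1e12:.0f} trillion per year")
```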

Enabling new degrees of App/Net coupling
Hybrid optical/packet (see the classifier sketch below)
– Use ephemeral optical circuits to steer the herd of elephants (few-to-few)
– Mice or individual elephants go through packet technologies (many-to-many)
– Either application-driven or network-sensed; hands-free in either case
– Other hybrid networks being explored (e.g., wireless + wireline)
Application-engaged networks
– The application makes itself known to the network
– The network recognizes its footprints (via tokens, deep packet inspection)
– E.g., storage management applications
Workflow-engaged networks
– Through workflow languages, the network is privy to the overall "flight plan"
– Failure handling is cognizant of the same
– Network services can anticipate the next step, or what-ifs
– E.g., healthcare workflows over a distributed hospital enterprise
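A minimal sketch of the elephant/mice split, assuming a simple volume threshold and invented flow records; real deployments would detect elephants via tokens, DPI or application signalling as described above:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    src: str
    dst: str
    expected_gb: float   # declared by the app, or estimated by the network

ELEPHANT_GB = 100        # hypothetical cut-off for an "elephant" flow

def steer(flow: Flow) -> str:
    """Elephants get an ephemeral optical circuit; mice stay on packet paths."""
    if flow.expected_gb >= ELEPHANT_GB:
        return f"optical circuit {flow.src}->{flow.dst} (ephemeral lightpath)"
    return f"packet path {flow.src}->{flow.dst} (routed IP)"

print(steer(Flow("StarLight", "NetherLight", 5000)))   # elephant
print(steer(Flow("lab-pc", "mail-server", 0.02)))      # mouse
```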

BIRN: with the OptIPuter we are addressing the challenges of large and distributed data. Each brain is big data (~5 µm, 512 x 512 x 100,000) and comparisons must be made between many!

OptIPuter Paradigm Shift
– Traditional provider services: invisible nodes and elements, hierarchical, centrally controlled, fairly static; invisible, static resources with centralized management; limited functionality and flexibility
– OptIPuter model: distributed devices and dynamic services; visible and accessible resources, integrated as required by apps; unlimited functionality and flexibility

Parallelism has come to optical networking (WDM): parallel lambdas will drive this decade the way parallel processors drove the 1990s. (Source: Steve Wallach, Chiaro Networks)

From: Smarr Talk “The Beginning of the Access Grid” April 15,

Bird's-eye view of the service stack. [Figure: DRAC built-in services (sampler): end-to-end policy, AAA, service discovery, topology, workflow-language interpreter, smart bandwidth management, Layer x / L1 interworking, alternate-site failover, SLA monitoring and verification; value-add and 3rd-party services; a Grid community scheduler; P-CSCF proxies for session convergence and nexus establishment; sources/sinks across access, metro and core; legacy sessions (management and control planes: control plane A/B, OAM/OAMP) underneath.]

How It Works: A Notional View. [Figure: applications express demand and negotiate with DRAC (portable software); DRAC detects supply events from the agile network(s) and from peering DRACs, applies admin policy and AAA across the connectivity, virtualization and dynamic-provisioning planes, and can alert, adapt, route and accelerate accordingly, configuring NEs as needed.]

DRAC-driven bypass in action. [Figure: customer A and customer B networks attach via GE to PP 8600 switches at the edge of a routed IP network; flows are differentiated on IP subnet or port, so ordinary traffic stays on the routed IP VLAN (VLAN X) while a high-capacity user is steered by the DRAC control plane (via UNI) onto a Layer 1 bandwidth bypass (VLAN Y) around the routed cloud.]

Going multi-domain: the SC2004 demonstrator. Finesse the control of bandwidth across multiple domains, while exploiting scalability and intra-/inter-domain fault recovery, through layering a novel SOA upon legacy control planes and NEs. [Figure: application services with data and control channels between Chicago and Amsterdam; AAA and DRAC at each end, over OMNInet/ODIN, StarLight, NetherLight and UvA, with ASTN and SNMP control.]

2nd Case Study: Data-Center CPU + DATA + NET Orchestration; the impact of virtualization on the end-to-end session.

Horizontal IT Integration. [Figure: what Joe Smith thinks he is getting in exchange for a monthly check to a provider (his own mainframe and good apps) versus the way a provider gainfully operates for many paying customers over a geographical footprint: …,000 blades across Sites 1-3, with Joe Smith's virtual machines, apps and licenses running wherever capacity allows (SAN not shown), with right-sized bandwidth, 24x7, at a 100% disaster-free zip code.] "Horizontal IT integration" has multiple facets: virtualization, SOA, Grid computing, e-Utilities, service grids.

[Figure: user plane on top; a multi-resource coordination plane (policies, meta-scheduler); a multi-domain virtual network, security and services plane built from a secure router (Nortel VR5000), L3 switch (Nortel ERS8600), L2 switch (Nortel ES5500), metro OE gateway (Nortel OM35/5/65) and application switch (Nortel AS2424SSL); and a virtual compute plane of virtualized data centers with integrated resource control running application VMs.]

In focus: the multi-resource coordination plane (WS = web services, RM = resource manager). [Figure: a meta-scheduler and DRAC coordinate WS computing, storage and device resource managers (instruments, sensors, SCADA, RFID infrastructure) together with WS execution engines.]
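A hedged sketch of what the coordination plane might do: a meta-scheduler reserving against each web-service resource manager and committing only if all succeed. The two-phase reserve/commit pattern and all names are assumptions; the slide only names the planes and managers:

```python
class ResourceManager:
    def __init__(self, name):
        self.name = name

    def reserve(self, req) -> bool:
        print(f"{self.name}: reserving {req}")
        return True

    def commit(self):
        print(f"{self.name}: committed")

    def release(self):
        print(f"{self.name}: released")

def meta_schedule(request, managers) -> bool:
    """All-or-nothing co-allocation across the resource managers."""
    reserved = []
    for rm in managers:
        if rm.reserve(request):
            reserved.append(rm)
        else:                          # any failure rolls back the others
            for held in reserved:
                held.release()
            return False
    for rm in reserved:
        rm.commit()
    return True

rms = [ResourceManager("WS Computing RM"),
       ResourceManager("WS Storage RM"),
       ResourceManager("DRAC (network)")]
print(meta_schedule({"vms": 4, "tb": 2, "gbps": 10}, rms))
```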

The SC05 "VM Turntable" demonstrator. Computation at the right place and time: we migrate live Xen VMs, unbeknownst to applications and clients, with dynamic CPU + data + network orchestration. [Figure: dynamic lightpaths and hitless remote rendering spanning Seattle, SC|2005, NYC, Toronto, Chicago/StarLight and Amsterdam/NetherLight/UvA.]
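An illustrative orchestration sketch for the VM Turntable idea (set up a lightpath, migrate the live VM over it, tear the lightpath down); the functions are hypothetical stand-ins, not the SC05 code:

```python
import time

def provision_lightpath(src, dst, gbps=10):
    print(f"lightpath up: {src} -> {dst} at {gbps} Gb/s")
    return (src, dst)

def live_migrate(vm, src, dst):
    # Xen-style pre-copy: iteratively copy memory, then a brief stop-and-copy.
    print(f"pre-copying {vm} memory from {src} to {dst} ...")
    time.sleep(0.1)                    # placeholder for the copy rounds
    print(f"{vm} switched over to {dst}; downtime kept to a blip")

def teardown_lightpath(path):
    print(f"lightpath down: {path[0]} -> {path[1]}")

def orchestrate(vm, src, dst):
    path = provision_lightpath(src, dst)
    try:
        live_migrate(vm, src, dst)
    finally:
        teardown_lightpath(path)

orchestrate("render-vm", "StarLight (Chicago)", "NetherLight (Amsterdam)")
```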

The whole IT industry is on a journey (© GGF):
– Old world: static, silo, physical, manual, application
– New world: dynamic, shared, virtual, automated, service

WHAT ARE WEB SERVICES?
Web services are simple XML-based messages for machine-to-machine messaging (a minimal example message follows below)
– Web services don't necessarily involve web browsers
– Think of web services as XML-based APIs
Web services use standard internet technologies to interact dynamically with one another
– Well-understood security model
– Loosely coupled
– Can be combined to form complex services
– Open, agreed standards connect disparate platforms
Middleware based on web services has enjoyed tremendous success in the past five years
– eBay/PayPal, Amazon and Google are all big users of web services
– Google's web service offerings: search of Google's eight-billion-page database; dictionary lookup
– eBay's usage of web services: 1 billion web service transactions per month; 40% of listings now generated via web services
Web services are rapidly becoming an essential part of many IT services in both B2B and B2C market categories
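A minimal illustration of "XML messages as an API": building a SOAP-style request for a hypothetical lightpath-reservation service. The service name, namespace and fields are invented; no real endpoint is implied:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.org/lambda-grid/reservation"   # invented namespace

def build_reservation_request(src, dst, gbps, start, hours):
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    req = ET.SubElement(body, f"{{{SVC_NS}}}ReserveLightpath")
    for tag, val in [("src", src), ("dst", dst), ("gbps", gbps),
                     ("start", start), ("hours", hours)]:
        ET.SubElement(req, f"{{{SVC_NS}}}{tag}").text = str(val)
    return ET.tostring(env, encoding="unicode")

print(build_reservation_request("Chicago", "Amsterdam", 10,
                                "2005-06-30T16:00", 1))
```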

Example: Lightpath Scheduling
A request for 1/2 hour between 4:00 and 5:30 on Segment D is granted to User W at 4:00. A new request arrives from User X for the same segment, for 1 hour between 3:30 and 5:00. Reschedule User W to 4:30 and User X to 3:30: everyone is happy. In general, a route is allocated for a time slot; when a new request comes in, the first route can be rescheduled to a later slot within its window to accommodate the new request. [Figure: timelines from 3:30 to 5:30 showing W at 4:00 and X unplaced, then W at 4:30 and X at 3:30.]
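A small sketch of that rescheduling step, using the same times as the example (durations and windows in hours, so 4.5 means 4:30); the greedy half-hour shift is an illustrative assumption, not the scheduler's actual algorithm:

```python
def fits(start, dur, other):
    """True if [start, start+dur) does not overlap the (start, end) tuple."""
    return start + dur <= other[0] or start >= other[1]

def schedule_with_reschedule(existing, new):
    """existing/new: dicts with 'dur' and 'window' = (earliest_start, latest_end)."""
    new_start = new["window"][0]      # place the new request as early as possible
    t = existing["window"][0]         # then slide the existing one until both fit
    while t + existing["dur"] <= existing["window"][1]:
        if fits(t, existing["dur"], (new_start, new_start + new["dur"])):
            return {"existing_start": t, "new_start": new_start}
        t += 0.5
    return None                       # no feasible reshuffle: reject the new request

W = {"dur": 0.5, "window": (4.0, 5.5)}     # originally granted at 4:00
X = {"dur": 1.0, "window": (3.5, 5.0)}     # the newly arriving request
print(schedule_with_reschedule(W, X))      # -> existing at 4:30, new at 3:30
```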

Scheduling Example - Reroute
A request for 1 hour between nodes A and B, between 7:00 and 8:30, is granted for 7:00 using Segment X (and other segments). A new request arrives for 2 hours between nodes C and D, between 7:00 and 9:30; its route needs Segment E to be satisfied. Reroute the first request over another path through the topology to free up Segment E for the second request: everyone is happy. In general, a route is allocated; a new request comes in for a segment in use; the first route can be altered to use a different path so that the second can also be serviced within its time window. [Figure: topology A-B-C-D with Segment X reserved 7:00-8:00 and an alternative path Y.]
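A sketch of the reroute step on a toy topology: when a new reservation needs a segment held by an existing path, find an alternative path for the existing reservation and move it. The topology, segment names and brute-force path search are invented for illustration:

```python
import itertools

SEGMENTS = {"X": ("A", "B"),               # segment name -> its two endpoints
            "Y1": ("A", "C"), "Y2": ("C", "B"),
            "E": ("C", "D")}

def paths(src, dst, max_hops=3):
    """Enumerate simple segment paths between src and dst (brute force)."""
    for n in range(1, max_hops + 1):
        for combo in itertools.permutations(SEGMENTS, n):
            node, ok = src, True
            for seg in combo:
                a, b = SEGMENTS[seg]
                if node == a:
                    node = b
                elif node == b:
                    node = a
                else:
                    ok = False
                    break
            if ok and node == dst:
                yield list(combo)

def reroute(existing_path, contested_segment, src, dst):
    if contested_segment not in existing_path:
        return existing_path               # nothing to free up
    for alt in paths(src, dst):
        if contested_segment not in alt:
            return alt                     # move the existing traffic here
    return None                            # no alternative: reject the new request

print(reroute(existing_path=["X"], contested_segment="X", src="A", dst="B"))
# -> ['Y1', 'Y2'], freeing the contested segment for the new reservation
```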

[Figure: optical cut-through between x.x.x.1 and y.y.y.1.]

e-Science example
Application Scenario: Point-to-point data transfer of multi-TB data sets
– Current MOP: copy from remote DB takes ~10 days (unpredictable); store, then copy/analyze
– Network issues: want << 1 day, << 1 hour; innovation for new bio-science; architecture forced to optimize BW utilization at the cost of storage
Application Scenario: Access multiple remote DBs
– Current MOP: N x the previous scenario
– Network issues: simultaneous connectivity to multiple sites; multi-domain; dynamic connectivity hard to manage; don't know the next connection needs
Application Scenario: Remote instrument access (radio telescope)
– Current MOP: can't be done from the home research institute
– Network issues: need fat unidirectional pipes; tight QoS requirements (jitter, delay, data loss)
Other observations:
– Not feasible to port computation to data
– Delays preclude interactive research: copy, then analyze
– Uncertain transport times force a sequential process: schedule processing after data has arrived
– No cooperation/interaction among storage, computation and network middlewares
– Dynamic network allocation as part of Grid workflow allows new scientific experiments that are not possible with today's static allocation