The UltraLight Program

Presentation transcript:

The UltraLight Program
UltraLight: An Overview and Update
Shawn McKee, University of Michigan

UltraLight Topics
- Introduction: What is the UltraLight Program?
- History
- Program Goals and Details
- Current Status and Summary

What is UltraLight?
UltraLight is a program to explore the integration of cutting-edge network technology with the grid computing and data infrastructure of HEP and astronomy. The program intends to explore network configurations ranging from common shared infrastructure (current IP networks) through dedicated point-to-point optical paths. A critical aspect of UltraLight is its integration with two driving application domains in support of their national and international eScience collaborations: LHC (HEP) and eVLBI (astronomy).
The collaboration includes: Caltech, Florida Int. Univ., MIT, Univ. of Florida, Univ. of Michigan, UC Riverside, BNL, FNAL, SLAC, and UCAID/Internet2.

Some History…
- The UltraLight Collaboration was originally formed in Spring 2003 in response to an NSF Experimental Infrastructure in Networking (EIN) RFP in ANIR.
- After not being selected, the program was refocused on LHC/HEP and eVLBI/Astronomy and submitted to “Physics at the Information Frontier” (PIF) in MPS at NSF.
- The collaboration was notified at the end of 2003 that the PIF program was being postponed one year, and it was suggested that proposals be redirected to the NSF ITR program. The ITR deadline is February 25th, 2004.

HENP Network Roadmap
LHC Physics will require large bandwidth capability over a globally distributed network. The HENP Bandwidth Roadmap is shown in the accompanying table.

eVLBI and UltraLight
e-VLBI is a major thrust of UltraLight and can directly complement LHC-HEP’s mode of using the network, allowing us to explore new strategies for network conditioning and bandwidth management. The e-VLBI work under this proposal will be multi-pronged, in an effort to leverage the many new capabilities provided by the UltraLight network and to provide the national and international VLBI community with advanced tools and services tailored to the e-VLBI application.
e-VLBI stands to benefit from an UltraLight infrastructure in numerous ways:
- Higher sensitivity
- Faster turnaround
- Lower costs
- Quick diagnostics and tests
- New correlation methods
e-VLBI will also provide a different eScience perspective and validate the operation and efficiency of network bandwidth sharing between disparate scientific groups.

UltraLight Architecture
UltraLight envisions extending and augmenting the existing grid computing infrastructure (currently focused on CPU and storage) to include the network as an integral component. A second aspect is strengthening and extending “end-to-end” monitoring and planning.
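
To make “the network as an integral grid component” concrete, here is a minimal illustrative sketch of a resource broker that ranks candidate sites by combining CPU and storage availability with measured end-to-end bandwidth, so that data-movement cost enters scheduling decisions directly. This is not taken from the UltraLight design; all names, weights, and numbers are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    free_cpus: int           # available worker slots
    free_storage_tb: float   # available disk (TB)
    bandwidth_gbps: float    # measured end-to-end throughput from the data source

def rank_sites(sites, job_cpus, job_storage_tb, dataset_tb):
    """Rank feasible sites for a data-intensive job, treating the network
    as a first-class resource: estimated dataset transfer time enters the
    cost alongside CPU availability. Weights are illustrative only."""
    feasible = [s for s in sites
                if s.free_cpus >= job_cpus and s.free_storage_tb >= job_storage_tb]

    def cost(site):
        # hours to stage the dataset over the measured path (1 TB ~ 8000 Gb)
        transfer_hours = (dataset_tb * 8000) / (site.bandwidth_gbps * 3600)
        load_penalty = job_cpus / site.free_cpus  # crude queue-pressure proxy
        return transfer_hours + load_penalty

    return sorted(feasible, key=cost)

if __name__ == "__main__":
    sites = [
        Site("site-a", free_cpus=64,  free_storage_tb=20, bandwidth_gbps=10.0),
        Site("site-b", free_cpus=256, free_storage_tb=50, bandwidth_gbps=2.5),
    ]
    ranked = rank_sites(sites, job_cpus=32, job_storage_tb=5, dataset_tb=2)
    print([s.name for s in ranked])  # the network-rich site ranks first here
```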

UltraLight Proposal Outline

Workplan and Phased Deployment
UltraLight envisions a 4-year program to deliver a new, high-performance, network-integrated infrastructure:
- Phase I will last 12 months and focus on deploying the initial network infrastructure and bringing up first services.
- Phase II will last 18 months and concentrate on implementing all the needed services and extending the infrastructure to additional sites.
- Phase III will complete UltraLight and last 18 months. The focus will be on a transition to production in support of LHC Physics and eVLBI Astronomy.

UltraLight Network: PHASE I
- Implementation via “sharing” with HOPI/NLR
- MIT not yet “optically” coupled

UltraLight Network: PHASE II
- Move toward multiple “lambdas”
- Bring in BNL and MIT

UltraLight Network: PHASE III
- Move into production
- Optical switching fully enabled amongst primary sites
- Integrated international infrastructure

Equipment and Interconnects
The UltraLight optical switching topology is shown in the accompanying figure. UltraLight plans to integrate data caches and CPU resources to provide integration testing and optimization.

UltraLight Network
UltraLight is a hybrid packet- and circuit-switched network infrastructure, employing ultrascale protocols and dynamic building of optical paths to provide efficient fair sharing on long-range networks up to the 10 Gbps range, while protecting the performance of real-time streams and enabling them to coexist with massive data transfers.
- Circuit switched: “Intelligent photonics” (using wavelengths dynamically to construct and tear down wavelength paths rapidly and on demand through cost-effective wavelength routing) are a natural match for the peer-to-peer interactions required to meet the needs of leading-edge, data-intensive science.
- Packet switched: Many applications can effectively utilize the existing, cost-effective networks provided by shared packet-switched infrastructure. A subset of applications require more stringent guarantees than a best-effort network can provide, so we plan to utilize MPLS as an intermediate option.
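
As a rough sketch of how a scheduler might map flows onto the three service tiers named on this slide (best-effort IP, MPLS, dedicated optical path): the thresholds, class names, and numbers below are invented for illustration, not taken from the UltraLight proposal.

```python
from dataclasses import dataclass

@dataclass
class FlowRequest:
    volume_gb: float   # data to move (GB)
    deadline_s: float  # time budget (seconds)
    realtime: bool     # latency/jitter-sensitive, e.g. a live eVLBI stream

# Invented thresholds; a real policy would come from measurement and from
# what the provisioning system can actually allocate.
MPLS_MIN_GBPS = 1.0    # above this, request an MPLS LSP with reserved bandwidth
LAMBDA_MIN_GBPS = 5.0  # above this, request a dedicated wavelength path

def choose_transport(flow: FlowRequest) -> str:
    """Map a flow's requirements onto one of three service classes:
    best-effort IP, MPLS with guarantees, or a dynamic optical path."""
    required_gbps = flow.volume_gb * 8 / flow.deadline_s
    if required_gbps >= LAMBDA_MIN_GBPS:
        return "dedicated-lambda"   # circuit-switched optical path
    if flow.realtime or required_gbps >= MPLS_MIN_GBPS:
        return "mpls-lsp"           # intermediate guaranteed-bandwidth tier
    return "best-effort"            # shared packet-switched IP

if __name__ == "__main__":
    bulk = FlowRequest(volume_gb=3000, deadline_s=3600, realtime=False)
    stream = FlowRequest(volume_gb=50, deadline_s=600, realtime=True)
    print(choose_transport(bulk))    # dedicated-lambda (~6.7 Gbps required)
    print(choose_transport(stream))  # mpls-lsp (real-time protection)
```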

MPLS Topology
Current network engineering knowledge is insufficient to predict what combination of “best-effort” packet switching, QoS-enabled packet switching, MPLS, and dedicated circuits will be most effective in supporting these applications. We will use MPLS and other modes of bandwidth management, along with dynamic adjustments of optical paths and their provisioning, to develop the means to optimize end-to-end performance among a set of virtualized disk servers, a variety of real-time processes, and other traffic flows.
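
To illustrate the kind of bandwidth-management policy this implies, here is a toy allocation function that protects real-time streams first and then shares the remaining capacity proportionally among bulk transfers. It sketches the policy idea only; no actual MPLS or provisioning API is assumed, and all flow names and figures are hypothetical.

```python
def allocate(link_gbps, realtime, bulk):
    """Grant real-time flows their full request first (they are protected),
    then split remaining capacity among bulk transfers in proportion to
    demand. A toy policy standing in for whatever MPLS/TE machinery would
    actually enforce the reservations."""
    grants = dict(realtime)                    # real-time streams come first
    remaining = link_gbps - sum(realtime.values())
    total_bulk = sum(bulk.values())
    if remaining <= 0 or total_bulk <= 0:
        grants.update({f: 0.0 for f in bulk})  # no headroom left for bulk
        return grants
    scale = min(1.0, remaining / total_bulk)   # never grant more than asked
    grants.update({f: d * scale for f, d in bulk.items()})
    return grants

if __name__ == "__main__":
    print(allocate(10.0,
                   realtime={"evlbi-stream": 2.0},
                   bulk={"lhc-transfer-1": 6.0, "lhc-transfer-2": 6.0}))
    # {'evlbi-stream': 2.0, 'lhc-transfer-1': 4.0, 'lhc-transfer-2': 4.0}
```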

Logical Diagram of UltraLight Grid Enabled Analysis
An “UltraLight” user’s perspective of the system. It is important to note that the system helps interpret and optimize itself while “summarizing” the details for ease of use.

Summary and Status
- UltraLight promises to deliver the critical missing component for future eScience: the integrated, managed network.
- We have a strong team in place, as well as a detailed plan, to provide the needed infrastructure and services for production use by LHC turn-on at the end of 2007.
- Currently we are preparing the proposal for ITR submission by February 25, 2004.
- We will need to augment the proposal with additional grants to reach our goal of making UltraLight a pervasive and effective infrastructure for LHC physics and eVLBI Astronomy.

Questions? (or Answers?)