UltraLight Status Report

UltraLight Status Report
Shawn McKee / University of Michigan
USATLAS Tier2 Meeting, August 17, 2006, Harvard

Reminder: The UltraLight Project
UltraLight is:
- A four-year, $2M NSF ITR project funded by MPS
- Application-driven network R&D
- A collaboration of BNL, Buffalo, Caltech, CERN, Florida, FIU, FNAL, Internet2, Michigan, MIT, SLAC, Vanderbilt
- Significant international participation: Brazil, Japan, Korea, among many others
Goal: Enable the network as a managed resource.
Meta-goal: Enable physics analysis and discoveries which could not otherwise be achieved.

Status Update
There are three areas I want to make note of for the Tier-2s:
- Work on the new UltraLight kernel
- Development of the VINCI/LISA/endhost agents (US ATLAS test of this in the fall…)
- Work on FTS (either with the FTS developers or as an equivalent project)
…and one addendum on US LHCNet.

UltraLight Kernel Development
Having a standard tuned kernel is very important for a number of UltraLight activities:
- Breaking the 1 GB/sec disk-to-disk barrier
- Exploring TCP congestion control protocols
- Optimizing our capability for demos and performance
The planned kernel incorporates the latest FAST and Web100 patches over a 2.6.17-7 kernel and includes the latest RAID and 10GE NIC drivers.
The UltraLight web page (http://www.ultralight.org) has a Kernel page, reachable from the Workgroup->Network page, which provides the details.
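
A minimal sketch of how host tuning can be sanity-checked before a high-throughput transfer test, assuming only a standard Linux /proc sysctl layout (the tunables listed are common TCP settings, not the specific UltraLight kernel configuration, and FAST/Web100 specifics are not probed):

```python
# Sketch: read common TCP tunables via /proc to verify host tuning
# before a high-throughput (e.g. 10GE, disk-to-disk) transfer test.
# Assumption: standard Linux sysctl layout under /proc/sys.

def read_sysctl(name: str) -> str:
    """Read a sysctl value, e.g. net.ipv4.tcp_congestion_control."""
    path = "/proc/sys/" + name.replace(".", "/")
    with open(path) as f:
        return f.read().strip()

TUNABLES = [
    "net.ipv4.tcp_congestion_control",  # which congestion control is active
    "net.core.rmem_max",                # max receive socket buffer (bytes)
    "net.core.wmem_max",                # max send socket buffer (bytes)
    "net.ipv4.tcp_rmem",                # min/default/max TCP receive buffers
    "net.ipv4.tcp_wmem",                # min/default/max TCP send buffers
]

if __name__ == "__main__":
    for name in TUNABLES:
        try:
            print(f"{name} = {read_sysctl(name)}")
        except OSError:
            print(f"{name}: not available on this kernel")
```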

Optical Path Plans
Emerging "light path" technologies are becoming popular in the Grid community:
- They can extend and augment existing grid computing infrastructures, currently focused on CPU/storage, to include the network as an integral Grid component.
- These technologies appear to be the most effective way to offer network resource provisioning on demand between end-systems.
A major capability we are developing in UltraLight is the ability to dynamically switch optical paths across the node, bypassing electronic equipment via a fiber cross-connect. The ability to switch dynamically provides additional functionality and also models the more abstract case where switching is done between colors (ITU grid lambdas).

VINCI: A Multi-Agent System
- VINCI and the underlying MonALISA framework use a system of autonomous agents to support a wide range of dynamic services.
- Agents in the MonALISA servers self-organize and collaborate with each other to manage access to distributed resources, to make effective decisions in planning workflow, to respond to problems that affect multiple sites, or to carry out other globally distributed tasks.
- Agents running on end-users' desktops or clusters detect and adapt to their local environment so they can function properly. They locate and receive real-time information from a variety of MonALISA services, aggregate and present results to users, or feed information to higher-level services.
- Agents with built-in "intelligence" are required to engage in negotiations (for network resources, for example) and to make pro-active run-time decisions while responding to changes in the environment.
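
To make the agent idea concrete, here is an illustrative sketch of the basic loop such an agent follows (gather monitoring values, decide, act). The names MonitorFeed, request_alternate_path and the 85% load threshold are hypothetical; the real VINCI/MonALISA framework is Java-based and far richer than this:

```python
# Illustrative sketch of an autonomous-agent loop in the spirit of the
# VINCI/MonALISA description above. All names and thresholds are hypothetical.
import random
import time

class MonitorFeed:
    """Stand-in for a stream of monitoring values (link load, errors, ...)."""
    def latest_link_load(self, link: str) -> float:
        # In reality this would come from MonALISA services; here it is random.
        return random.uniform(0.0, 1.0)

def request_alternate_path(link: str) -> None:
    """Placeholder for asking the provisioning services for another path."""
    print(f"[agent] {link} is congested; requesting an alternate path")

def agent_loop(links, feed, threshold=0.85, period_s=1.0, rounds=3):
    """Periodically inspect monitored links and react to congestion."""
    for _ in range(rounds):
        for link in links:
            if feed.latest_link_load(link) > threshold:
                request_alternate_path(link)
        time.sleep(period_s)

if __name__ == "__main__":
    agent_loop(["CERN-FNAL primary EPL", "CERN-FNAL secondary EPL"], MonitorFeed())
```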

The Main VINCI Services
[Diagram of the VINCI service stack. The main elements: application and end-user agents; topology discovery (GMPLS, MPLS, OS, SNMP); scheduling and dynamic path allocation; control-path provisioning; failure detection; authentication, authorization and accounting; learning and prediction; system evaluation and optimization; with monitoring underlying all services.]

Agents to Create on Demand an Optical Path or Tree
[Diagram: ML agents and ML daemons at each site perform discovery and secure connection, and control and monitor the optical switches; MonALISA proxy services are used for agent communication. Example user command: >ml_path IP1 IP4 "copy file IP4"]
The time to create a path on demand is less than 1 s, independent of the location and the number of connections.
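
The sequencing implied by the example command above can be sketched as follows. Only the ml_path command itself comes from the slide; the Python wrapper, the placeholder addresses, and the timing report are assumptions:

```python
# Sketch: ask the agents to build the optical path and run the copy at the
# far end, via the `ml_path` command shown on the slide. The wrapper itself,
# argument handling and error policy are hypothetical.
import subprocess
import time

def copy_over_dynamic_path(src_ip: str, dst_ip: str, copy_cmd: str) -> None:
    start = time.time()
    # ml_path sets up the end-to-end path and issues the copy command.
    subprocess.run(["ml_path", src_ip, dst_ip, copy_cmd], check=True)
    print(f"path setup + transfer issued in {time.time() - start:.1f} s")

if __name__ == "__main__":
    # Placeholder addresses; on the slide these are simply IP1 and IP4.
    copy_over_dynamic_path("192.0.2.1", "192.0.2.4", "copy file 192.0.2.4")
```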

DEMO: MonALISA and Path-Building
An example of optical path building using MonALISA is shown at:
http://ultralight.caltech.edu/web-site/gae/movies/ml_optical_path/ml_os.htm
One of the focus areas for UltraLight is being able to dynamically construct point-to-point light-paths where supported.
We still have a pending proposal (PLaNetS) focused on creating a managed dynamic network infrastructure…

LISA, EVO and Endhosts
Many of you are familiar with VRVS. Its successor is called EVO (Enabling Virtual Organizations). It improves on VRVS in a number of ways:
- Support for H.263 (capture and send your desktop as another video source for a conference)
- IM-like capability (presence/chat)
- Better device / OS / language support
- Significantly improved reliability and scalability
Related to this last point is the "merger" of MonALISA and VRVS in EVO. Endhost agents (LISA) are now an integral part of EVO:
- Endhost agents monitor the user's hosts and react to changing conditions.
- Something like this is envisioned as a component of deploying a "managed network".
- Prototype testing of the network agent this fall?
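
For a flavor of what a LISA-style endhost agent measures, here is a small sketch that samples interface byte counters from /proc/net/dev and derives throughput. This is not LISA itself (which is Java-based); the interface name and the print-based reporting are assumptions, and a real agent would feed a MonALISA-style service instead:

```python
# Illustrative endhost-monitoring sketch (not LISA): sample /proc/net/dev
# byte counters and report per-interface throughput.
import time

def read_bytes(interface: str):
    """Return (rx_bytes, tx_bytes) for one interface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(interface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])
    raise ValueError(f"interface {interface!r} not found")

def sample_throughput(interface: str = "eth0", interval_s: float = 5.0) -> None:
    rx0, tx0 = read_bytes(interface)
    time.sleep(interval_s)
    rx1, tx1 = read_bytes(interface)
    to_mbps = lambda delta: 8 * delta / interval_s / 1e6
    print(f"{interface}: in {to_mbps(rx1 - rx0):.1f} Mb/s, "
          f"out {to_mbps(tx1 - tx0):.1f} Mb/s")

if __name__ == "__main__":
    sample_throughput()
```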

FTS and UltraLight…
- To date there has been little interaction between people working on the network and those working on data transport for ATLAS (or the LHC in general).
- A significant amount of work is going into architecting, developing and hardening the data management (and transport) systems for ATLAS… leaving little time (or understanding of the possibilities) for the network.
- A dynamically managed network introduces new possibilities. Research efforts in networking need to be fed into the data transport architecture.
- UltraLight is planning to engage the FTS developers and try to determine their understanding of (and plans for) the network.
GOAL: Account for the network and improve the robustness and performance of data transport and the overall infrastructure.

Aside: US LHCNet Status and Plans
The following seven slides (from Harvey Newman) provide some details about US LHCNet and its plans to support LHC-scale physics requirements. The details are provided for reference; I won't cover them in my limited time.

Next Generation LHCNet: Add Optical Circuit-Oriented Services
Based on CIENA "Core Director" optical multiplexers:
- Highly reliable in production environments
- Robust fallback at the optical layer
- Circuit-oriented services: guaranteed-bandwidth Ethernet Private Line (EPL)
- Sophisticated standards-based software: VCAT/LCAS
  - VCAT logical channels: highly granular bandwidth management
  - LCAS: dynamically adjust channels
- Highly scalable and cost-effective, especially for many OC-192 links
[Diagram: CERN-FNAL primary and secondary EPLs.]
This is consistent with the directions of the other major R&E networks such as Internet2/Abilene, GEANT (pan-European), and ESnet SDN.

Next Generation LHCNet: Add Optical Circuit-Oriented Services
[Diagram: CERN-FNAL primary and secondary EPLs; Force10 switches provide the Layer 3 services.]

Topology, January 2007
[Network topology map.]

BNL and Long Island MAN Ring - February 2006 to 2008
[Diagram: the Long Island MAN DWDM ring (KeySpan Communications) provides a diverse dual-core connection for Brookhaven National Lab via the 32 AoA (NYC) and MAN LAN points of presence, with a 10GE circuit in 2006 and a second in 2007. It interconnects the ESnet IP core, the ESnet SDN core (Syracuse - Cleveland - Chicago and Washington), the USLHCnet circuits to CERN via Chicago, the BNL IP gateway, and peer networks at MAN LAN (Abilene, NYSERNet, SINet (Japan), CANARIE (Canada), HEANet (Ireland), Qatar, GEANT). A second MAN switch is planned for 2007 or 2008. Legend: DWDM 10 Gb/s circuits, international USLHCnet circuits (proposed), production IP core, SDN/provisioned virtual circuits, 2007 circuits/facilities.]

FNAL and Chicago MAN Ring
[Diagram: FNAL's connectivity via the Chicago metro ring, including the ESnet IP and SDN cores, the IWire/Starlight switch, the Qwest hub (NBC Building), LHCNet to CERN (via MANLAN), and an OC192 to ORNL / NRL / UltraScienceNet. All circuits are 10 Gb/s; fiber is shared between FNAL, IWire and ESnet, with FNAL and ESnet site gateway equipment at the ends.]

Next Generation LHCNet
Circuit-oriented services:
- Bandwidth guarantees at flexible rates
- Provide for data-transfer deadlines for remote data analysis
- Traffic isolation for unfriendly data transport protocols
- Security
CIENA platforms:
- OSRP (Optical Signaling and Routing Protocol): a distributed signaling and routing protocol which abstracts physical network resources
  - Based on G.ASON
  - Advertises topology information and capacity availability
  - Connection management (provisioning/restoration)
  - Resource discovery and maintenance
- Ethernet Private Line (EPL), point-to-point
  - Dedicated bandwidth tunnels: guaranteed end-to-end performance
  - VCAT/LCAS/GFP-F allows for resilient, right-sized tunnels
  - Automated end-to-end provisioning
Technology and bandwidth roadmap in line with ESnet (SDN), Internet2 (HOPI/NEWNET) and GEANT plans.
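
To make "bandwidth guarantees with transfer deadlines" concrete, here is an illustrative sketch of what a circuit request carrying a guaranteed rate and a deadline might look like. The CircuitRequest class and its feasibility check are hypothetical; they are not OSRP, EPL, or any CIENA / US LHCNet interface:

```python
# Sketch: a guaranteed-bandwidth circuit request with a transfer deadline,
# plus a simple ideal-case feasibility check. Entirely hypothetical API.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CircuitRequest:
    src: str                 # e.g. "CERN"
    dst: str                 # e.g. "FNAL"
    rate_gbps: float         # guaranteed rate for the circuit
    dataset_tb: float        # amount of data to move
    deadline: datetime       # when the remote analysis needs the data

    def transfer_time(self) -> timedelta:
        """Time to move the dataset at the guaranteed rate (ideal case)."""
        seconds = self.dataset_tb * 8e12 / (self.rate_gbps * 1e9)
        return timedelta(seconds=seconds)

    def meets_deadline(self, start: datetime) -> bool:
        return start + self.transfer_time() <= self.deadline

if __name__ == "__main__":
    req = CircuitRequest("CERN", "FNAL", rate_gbps=5.0, dataset_tb=20.0,
                         deadline=datetime(2007, 7, 1, 12, 0))
    # ~8.9 hours at 5 Gb/s, so a start 24 hours before the deadline is fine.
    print(req.transfer_time(), req.meets_deadline(datetime(2007, 6, 30, 12, 0)))
```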

Milestones: 2006-2007
- May to September 2006: Service Challenge 4
- August 2006: Selection of telecom provider(s) from among those responding to the call for tender
- October 2006: Provisioning of new transatlantic circuits
- Fall 2006: Evaluation of CIENA platforms (try-and-buy agreement)
- End of 2006: First deployment of next-generation US LHCNet
  - Transition to the new circuit-oriented backbone, based on optical multiplexers
  - Maintain full switched and routed IP service for a controlled portion of the bandwidth
- Summer 2007: Start of LHC operations

Conclusion
- A number of developments are in progress. Tier-2s should be able to benefit, and can hopefully help drive these developments via testing feedback.
- Networking developments need to be fed into existing and planned software for the LHC.
- US LHCNet is planning to support LHC-scale requirements for connectivity and manageability.