TeraPaths: A QoS Collaborative Data Sharing Infrastructure for Petascale Computing Research
USATLAS Tier 1 & Tier 2 Network Planning Meeting
December 14, 2005
Brookhaven National Laboratory

2 Outline
- Introduction to TeraPaths.
- Proposed primitive infrastructure.
- Last year's accomplishments:
  - End-to-end QoS testing, deployment, and verification.
  - Software development activities.
- Current status summary.
- Future work: near-term and long-term plans.
- Related work.

3 What Is TeraPaths?
- This project investigates the integration and use of MPLS- and LAN QoS-based differentiated network services in the ATLAS data-intensive distributed computing environment as a way to manage the network as a critical resource.
- The collaboration includes BNL and the University of Michigan, with additional collaborators from OSCARS (ESnet), UltraLight (UMich), and DWMI (SLAC).
- Team: Dimitrios Katramatos, David Stampf, Shawn McKee, Frank Burstein, Scott Bradley, Dantong Yu, Razvan Popescu, Bruce Gibbard.

4 Proposed Infrastructure
[Architecture diagram: data management and data transfer tools (GridFTP & SRM, SE) send bandwidth requests and releases to the TeraPaths resource manager, which applies grid AA and network usage policy, identifies traffic (addresses, port numbers, DSCP bits), and issues MPLS and remote LAN QoS requests; LAN QoS is applied in the local LAN up to the border (M10) router, OSCARS provisions the ESnet MPLS path and in(e)gress, the remote TeraPaths instance handles the far-end LAN/MPLS (second/third year), and monitoring spans the path.]

5 Progress
- Enabled LAN QoS inside the BNL campus network along the path from the GridFTP servers and SRM to the border routers.
- Tested and verified MPLS paths between BNL and LBL, SLAC (network monitoring project), and FNAL; set up an MPLS/QoS path between BNL and UM for Supercomputing 2005.
- Set up the network monitoring tools provided by DWMI; installed other monitoring tools where necessary.
- Integrated LAN QoS with the MPLS paths offered by OSCARS.
- Verified LAN QoS capability by injecting regular and prioritized traffic in the LAN (BNL ATLAS and ITD).
- Verified WAN MPLS/LAN QoS bandwidth provisioning (BNL ATLAS and UM).
- Performed integration tests and documented the experience (BNL ATLAS and UM).
- Verified the interaction between prioritized and best-effort traffic; learned the effectiveness and efficiency of MPLS/LAN QoS and its impact on overall network performance.

6 Prioritized Traffic vs. Best Effort Traffic

7 SC2005 Traffic From BNL to UMich.

8 Software Development: Automate the MPLS/LAN QoS Setup
- Implement an automatic/manual QoS request system:
  - Web interface for manually entering QoS requests.
  - Programmatic access (C, C++, Java, Perl, shell scripts, etc.).
  - Works with a variety of networking components.
  - Works with the network providers (WAN, remote LAN).
  - Provides access control and accounting.
  - Provides monitoring of the system.
- Design goals: permit the reservation of end-to-end network resources in order to assure a specified "quality of service".
  - Users can request a minimum bandwidth, start time, and duration (sketched as a data object below).
  - Users either receive what they requested or a "counter offer".
  - Must work end to end with a single user request.
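The reservation parameters named under the design goals (endpoints, minimum bandwidth, start time, duration) can be pictured as a small data object. The Java sketch below is illustrative only; the Reservation class, its field names, and the units are assumptions rather than the actual TeraPaths code.

    import java.util.Date;

    /**
     * Minimal sketch of a QoS reservation request, assuming the parameters named
     * on this slide: endpoints (IP address and port), minimum bandwidth, start
     * time, and duration. Class and field names are illustrative.
     */
    public class Reservation {
        private final String sourceIp;
        private final int sourcePort;
        private final String destIp;
        private final int destPort;
        private final int minBandwidthMbps;   // requested minimum bandwidth
        private final Date startTime;         // when the path should become active
        private final long durationSeconds;   // how long the reservation lasts

        public Reservation(String sourceIp, int sourcePort, String destIp, int destPort,
                           int minBandwidthMbps, Date startTime, long durationSeconds) {
            this.sourceIp = sourceIp;
            this.sourcePort = sourcePort;
            this.destIp = destIp;
            this.destPort = destPort;
            this.minBandwidthMbps = minBandwidthMbps;
            this.startTime = startTime;
            this.durationSeconds = durationSeconds;
        }

        public int getMinBandwidthMbps() { return minBandwidthMbps; }
        public Date getStartTime()       { return startTime; }
        public long getDurationSeconds() { return durationSeconds; }
    }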

9 TeraPaths System Architecture
[Diagram: QoS requests enter through a web page, APIs, or the command line at Site A (the initiator). Each site runs a route planner, scheduler, user manager, site monitor, and router manager with hardware drivers; Site A coordinates with the remote Site B's services and with WAN web services and WAN monitoring.]

Typical Usage
- The user (or a user agent) requests a specified bandwidth between two endpoints (IP and port) for a specified time and duration.
- Functionality (a client-side sketch of this flow follows):
  - Determines the path and negotiates with the routers along the path for agreeable criteria.
  - Reports back to the user.
  - The user accepts or rejects (or the request times out).
  - The system commits the reservation end to end.
  - The user indicates the start of the connection and proceeds.
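As an illustration of this flow (not the actual TeraPaths client), the sketch below walks a request through reserve, counter offer, commit, start, and finish. The abridged TeraPathsService interface is hypothetical; its operation names anticipate the "Web Services Provided" slide later in the deck, and the Reservation class is the sketch shown earlier.

    /** Abridged, hypothetical view of the reservation service used by the flow below. */
    interface TeraPathsService {
        Reservation reserve(Reservation requested);  // returns the request or a counter offer
        boolean commit(Reservation offer);
        boolean start(Reservation reservation);
        boolean finish(Reservation reservation);
    }

    public class ReservationClient {
        /** Walks one reservation through request, counter offer, commit, start, and finish. */
        static void negotiate(TeraPathsService service, Reservation request, int acceptableMbps)
                throws InterruptedException {
            Reservation offer = service.reserve(request);        // may be a counter offer
            if (offer == null || offer.getMinBandwidthMbps() < acceptableMbps) {
                return;                                          // reject: let the offer time out
            }
            service.commit(offer);                               // accept: commit end to end
            long wait = offer.getStartTime().getTime() - System.currentTimeMillis();
            if (wait > 0) {
                Thread.sleep(wait);                              // wait for the reserved start time
            }
            service.start(offer);     // "pick up the ticket": routers are reconfigured now
            // ... perform the data transfer over the provisioned path ...
            service.finish(offer);    // politely release the resources when done
        }
    }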

Software Solution
- The software components are being realized as web services.
  - Each network hardware component (router/switch) in the path has a web service responsible for maintaining its schedule and for programming the component as needed.
  - Example: at BNL, a web service makes a telnet connection to a Cisco router and programs the QoS ACLs (see the sketch below).
  - Other sites may use SNMP or other means.
  - For network provisioning services that already have their own means of doing this, a web service provides a front end.
- Web services benefits:
  - The services are typically written in Java and are completely portable.
  - Accessible from the web or from a program.
  - They provide an interface as well as an implementation; sites with different equipment can plug and play by programming to the interface.
  - Highly compatible with grid services; straightforward to port to grid services and the Web Services Resource Framework (WSRF in GT4).
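The router-programming step described above might look roughly like the sketch below. This is a simplified illustration rather than the BNL implementation: it opens a plain socket to the router's telnet port and pushes IOS-style commands, whereas a production driver would handle telnet option negotiation, login prompts, and the site's actual QoS design; the host, password handling, and ACL number are assumptions.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    /** Simplified sketch of a hardware driver that programs a QoS ACL over telnet. */
    public class CiscoAclDriver {
        private final String routerHost;
        private final String password;

        public CiscoAclDriver(String routerHost, String password) {
            this.routerHost = routerHost;
            this.password = password;
        }

        /** Adds an ACL entry so the flow's packets can be classified for priority treatment. */
        public void addFlowAcl(String srcIp, String dstIp, int dstPort) throws Exception {
            try (Socket s = new Socket(routerHost, 23);
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
                out.println(password);                 // a real driver waits for the login prompt first
                out.println("enable");
                out.println(password);
                out.println("configure terminal");
                // Illustrative ACL line; the ACL number and match criteria are site-specific.
                out.printf("access-list 110 permit tcp host %s host %s eq %d%n", srcIp, dstIp, dstPort);
                out.println("end");
                out.println("write memory");
                out.println("exit");
                while (in.readLine() != null) { /* drain router output until the session closes */ }
            }
        }
    }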

The Web Services Provided (collected as a Java interface in the sketch below)
- Reservation reserve(reservation)
  - The argument is the requested reservation; the return value is the counter offer.
- boolean commit(reservation)
  - Commit to the reservation, or let it time out.
- boolean start(reservation)
  - At start time, the user must "pick up the ticket"; this is when the routers are actually reconfigured.
- boolean finish(reservation)
  - To be polite and possibly release the resources early.
- Reservation[] getReservations(time, time)
  - Gets the existing reservations for a particular time interval.
- int[] getBandwidths()
  - Provides the list of bandwidths for the managed component.
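Collected as a Java interface, the six operations above might look like the following sketch. The method signatures mirror the slide; the interface name, the use of java.util.Date for the time arguments, and the Reservation type (sketched earlier) are assumptions.

    import java.util.Date;

    /** Sketch of the reservation web-service operations listed above; names follow the slide. */
    public interface TeraPathsReservationService {
        /** Returns the requested reservation, or a counter offer the caller may accept. */
        Reservation reserve(Reservation requested);

        /** Commits a previously offered reservation; uncommitted offers time out. */
        boolean commit(Reservation offer);

        /** "Picks up the ticket" at start time; the routers are reconfigured at this point. */
        boolean start(Reservation reservation);

        /** Releases the resources, possibly earlier than the reserved end time. */
        boolean finish(Reservation reservation);

        /** Existing reservations overlapping the given time interval. */
        Reservation[] getReservations(Date from, Date to);

        /** Bandwidth levels (e.g., in Mb/s) available on the managed component. */
        int[] getBandwidths();
    }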

13 Summary of Progress
- A full-featured LAN QoS simulation testbed was created in a private network environment, consisting of four hosts and two Cisco switches (the same models as the production infrastructure).
- Implemented mechanisms to classify data transfers into multiple classes.
- Software development is well underway, and a demo version is running on our testbed.
- Collaborating effectively with other DOE projects.

14 Recent Goals, Milestones, and Work Items
- Implement the bandwidth scheduling system.
  - Current: first come, first served for LAN bandwidth.
- Software system integration with the OSCARS MPLS scheduler.
- Information collector for DWMI (BNL ATLAS and SLAC).
- Integrate TeraPaths/QoS into data transfer tools such as File Transfer Service (FTS) and Reliable File Transfer.
- Service Challenge 4 data transfers between BNL and Tier 2 sites, utilizing WAN MPLS and LAN QoS:
  ATLAS data management (DMS) → data transfer tools (FTS) → data channel manager → TeraPaths bandwidth reservation → monitoring and accounting.

15 Extend Scope and Increase Functionality of the Prototype Service
- Inter-network-domain QoS establishment: dynamically creating, adjusting, and sub-partitioning QoS-enabled paths to meet time-constrained network requirements.
  - Create a site-level network resource manager for multiple VOs vying for limited WAN resources.
  - Provide dynamic bandwidth re-adjustment based on resource usage policy and path utilization status collected from network monitoring (DWMI).
  - Research network scheduling and improve the efficiency of network resource use.
  - Integrate with software products from other network projects: OSCARS, Lambda Station, and DWMI.
  - Improve user authentication, authorization, and accounting.
- Goal: broaden deployment and capability to Tier 1 and Tier 2 sites, and create services to be honored/adopted by the CERN ATLAS/LHC Tier 0.

16 Related Work
- Network monitoring projects (SLAC).
- BRUW (Bandwidth Reservation for User Work): Internet2.
- OSCARS project: ESnet.
- UltraLight: NSF funded.
- Lambda Station.