Path Monitoring Tools Deployment Planning for U.S. T123

Path Monitoring Tools Deployment Planning for U.S. T123
Jeff Boote, Internet2/R&D
June 17, 2008
LHCOPN Meeting @ CERN

Implementation Considerations
Constraints:
- Different LHC participants are interested in performance over different paths
- Sites must be able to monitor from their site to other sites of interest (T2s want to actively probe upstream T1s and downstream T3s)
- Must support monitoring as well as diagnostic interactions
- Must gracefully degrade given lower levels of participation from sites
- Diagnostics should aid applications in making performance choices
- Different analyses should be made available to different users
- LHC sites have different influence over their 'local area' network (a T3's administrative boundary is likely the physics department)
Implications:
- A single monitoring appliance or service is impractical; scaling issues become intractable beyond the T0/T1 paths (mostly due to support issues, but data handling is more difficult as well)
- A locally configurable system per site, with prescribed minimal support (don't get me wrong: an appliance at the 'core' makes deploying out to the edges much easier!)
- The solution needs to integrate on-demand tests with scheduled probes
- The solution needs the ability to determine what diagnostics or tools are available from remote sites
- Application interfaces must be open, available, and usable
- Analysis (GUI) applications must be available and usable (and new analysis tools must be easy to create)
- Measurement host placement can be at varying topological distances from the LHC compute servers

System Architecture
perfSONAR:
- Autonomous (federated) measurement deployments
- Global discovery
- Web services: an SOA solution exposing existing diagnostic and monitoring tools
- NMWG schema

What is perfSONAR?
To address our goals and vision, we have started (along with others) the perfSONAR project. It is:
- A collaboration: production network operators focused on designing and building tools that they will deploy and use on their networks to provide monitoring and diagnostic capabilities to themselves and their user communities
- An architecture and a set of protocols: a web services architecture, with protocols based on the Open Grid Forum Network Measurement Working Group schemata
- Several interoperable software implementations: Java, Perl, Python…
- A deployed measurement infrastructure

perfSONAR Architecture
Interoperable network measurement middleware (SOA):
- Modular
- Web services-based
- Decentralized
- Locally controlled
Integrates:
- Network measurement tools and archives
- Data manipulation
- Information services: discovery, topology, authentication and authorization
Based on:
- The Open Grid Forum Network Measurement Working Group schema
- Currently attempting to formalize the specification of the perfSONAR protocols in a new OGF working group (NMC)
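
As a rough, non-normative illustration of the NMWG-style messages these services exchange, the sketch below builds a minimal metadata/data request for link utilization. The namespace and event-type URIs follow the NMWG conventions, but the subject element is deliberately simplified; real messages carry richer interface and topology subjects.

```python
# Illustrative only: a simplified NMWG-style metadata/data pairing.
# The subject structure is a placeholder, not the normative schema.
import xml.etree.ElementTree as ET

NMWG = "http://ggf.org/ns/nmwg/base/2.0/"   # NMWG base namespace
ET.register_namespace("nmwg", NMWG)

def utilization_request(interface_address):
    """Build a metadata/data pair asking an MA for link utilization."""
    msg = ET.Element(f"{{{NMWG}}}message", {"type": "SetupDataRequest"})

    meta = ET.SubElement(msg, f"{{{NMWG}}}metadata", {"id": "meta1"})
    subj = ET.SubElement(meta, f"{{{NMWG}}}subject", {"id": "subj1"})
    ET.SubElement(subj, "interface").text = interface_address   # simplified subject
    ET.SubElement(meta, f"{{{NMWG}}}eventType").text = (
        "http://ggf.org/ns/nmwg/characteristic/utilization/2.0"
    )

    # The data element refers back to the metadata it belongs to.
    ET.SubElement(msg, f"{{{NMWG}}}data", {"id": "data1", "metadataIdRef": "meta1"})
    return ET.tostring(msg, encoding="unicode")

print(utilization_request("198.51.100.1"))
```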

Decouple 3 phases of a Measurement Infrastructure

perfSONAR Services
- Measurement Point (MP) Service: enables the initiation of performance tests
- Measurement Archive (MA) Service: stores and publishes performance monitoring results
- Transformation Service: transforms the data (aggregation, concatenation, correlation, translation, etc.)
These services are specifically concerned with the job of network performance measurement and analysis. They register themselves with the LS and advertise their capabilities; they can also leave, or be removed, if a service goes down (keepalives).
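
To make the Transformation Service concrete, here is a small hypothetical example of the kind of aggregation such a service might perform: collapsing raw utilization samples into five-minute averages. The data layout is invented for illustration and is not a perfSONAR data format.

```python
# Hypothetical illustration of a Transformation Service aggregation step:
# collapse raw (timestamp, bits-per-second) samples into 5-minute averages.
from collections import defaultdict

def aggregate_utilization(samples, bucket_seconds=300):
    """samples: iterable of (unix_timestamp, bps). Returns {bucket_start: avg_bps}."""
    buckets = defaultdict(list)
    for ts, bps in samples:
        bucket_start = ts - (ts % bucket_seconds)
        buckets[bucket_start].append(bps)
    return {start: sum(vals) / len(vals) for start, vals in sorted(buckets.items())}

raw = [(1213718400 + i * 30, 4.0e8 + i * 1.0e6) for i in range(20)]  # 30-second samples
for start, avg in aggregate_utilization(raw).items():
    print(start, round(avg / 1e6, 1), "Mbps")
```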

Information Services
- Lookup Service (LS): allows the client to discover the existing services and other LS instances. Dynamic: services register themselves with the LS and advertise their capabilities; they can also leave, or be removed, if a service goes down (keepalives).
- Topology Service: makes the network topology information available to the framework; used to find the closest MP and to provide topology information for visualisation tools.
- Authentication Service: based on existing efforts (Internet2 MAT, GN2-JRA5). Provides authentication and authorization functionality for the framework; users can have several roles, and authorization is done based on the user role. Builds a trust relationship between networks.
These services form the infrastructure of the architecture, concerned with the job of federating the available network measurement and diagnostic tools.
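
The registration and keepalive behaviour described above can be pictured with the sketch below. The HTTP endpoint, JSON payload, and field names are invented placeholders; a real LS registration uses the XML-based perfSONAR protocol.

```python
# Sketch of LS registration with periodic keepalives. The endpoint URL,
# payload shape, and JSON encoding are invented for illustration only.
import json
import time
import urllib.request

LS_URL = "http://ls.example.net/register"          # hypothetical LS endpoint
SERVICE = {
    "name": "site-MA",
    "type": "MeasurementArchive",
    "accessPoint": "http://ma.site.example.edu:8080/",
    "capabilities": ["utilization", "owamp"],
}
KEEPALIVE_INTERVAL = 300   # seconds; if keepalives stop, the LS ages the entry out

def register(service):
    req = urllib.request.Request(
        LS_URL,
        data=json.dumps(service).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200

while True:
    ok = register(SERVICE)
    print("keepalive sent" if ok else "registration failed")
    time.sleep(KEEPALIVE_INTERVAL)
```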

Example perfSONAR client interaction (between a client, the gLS, lookup services LS A and LS B, and measurement archives MA A and MA B for Network A and Network B):
1. Client asks the gLS: "Where can I learn more about Domain A (IPs a,b,c) and Domain B (IPs d,e,f)?" The gLS answers: LS A, LS B.
2. Client asks LS A: "Where is link utilization for IPs a,b,c?" LS A answers: "a,b,c: Network A, MA A."
3. Client asks LS B: "Where is link utilization for IPs d,e,f?" LS B answers: "d,e,f: Network B, MA B."
4. Client requests link utilization for a,b,c from MA A and for d,e,f from MA B; each archive returns its data ("Here you go").
5. Client combines the results into a useful graph.
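
The same two-step lookup, sketched as client-side logic. The helper functions are stand-ins for the real gLS/LS/MA exchanges (they return canned data so the flow is runnable); they are not an actual perfSONAR client API.

```python
# Hypothetical client-side logic for the interaction above. discover_ls(),
# find_archive(), and fetch_utilization() are stand-ins for the real
# gLS/LS/MA exchanges; here they return canned data so the flow runs.

def discover_ls(gls, ips):                  # steps 1: gLS says which LS covers these IPs
    return f"{gls}/ls-for-{ips[0]}"

def find_archive(ls, ips, event):           # steps 2-3: LS says which MA holds the data
    return f"{ls}/ma?event={event}"

def fetch_utilization(ma, ips):             # step 4: MA returns the measurements
    return {ip: "<time series>" for ip in ips}

def lookup_utilization(gls, ip_groups):
    results = {}
    for domain, ips in ip_groups.items():
        home_ls = discover_ls(gls, ips)
        archive = find_archive(home_ls, ips, event="utilization")
        results[domain] = fetch_utilization(archive, ips)
    return results                          # step 5: hand the result to a graphing front end

print(lookup_utilization("http://gls.example.net",
                         {"Network A": ["a", "b", "c"], "Network B": ["d", "e", "f"]}))
```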

LHC Deployments
- A specific set of available services at each participant site (Network Measurement System, NMS)
- A small number of 'support' services run by backbone network organizations
- Complementary diagnostic services at backbone network locations
- Analysis clients: some directly available to end researchers, some specifically designed for NOC personnel
- The specific services are determined as prescribed in Joe's previous presentation

Service Deployment
Global lookup services (gLS) are run by Internet2, ESnet, and DANTE; any of them can be used to find any NMS deployment. Each LHC T1/T2/T3 NMS registers with *any* gLS. (NMS = Network Measurement System)

LHC Site NMS Deployment
Components: analysis/user interfaces, scheduled probes and archives, diagnostic daemons, and an ICMP target. Some components are required and some optional (detailed on the next two slides). This is likely to take one to two hosts to implement.

Required Deployment Functionality
- Host with ICMP access: need to be able to 'ping' and 'traceroute' to somewhere on the site
- Diagnostic daemons: NDT, OWAMPD, BWCTLD
- Registration of availability: must run a daemon that registers tool availability with a gLS
Resources required:
- Accessible host (firewall modifications likely)
- Modest Linux systems (two)
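
A small sketch of how a site might sanity-check these requirements: confirm ICMP reachability and that the diagnostic daemons are listening on their commonly used default TCP ports. The hostname and port numbers are assumptions to verify against the local installation.

```python
# Sanity-check sketch: ICMP reachability plus diagnostic daemons listening.
# Port numbers are commonly used defaults (OWAMP control 861, BWCTL control
# 4823, NDT web client 7123) and may differ in a given installation.
import socket
import subprocess

def ping_ok(host, count=3):
    """Return True if the host answers ICMP echo (uses the system ping)."""
    return subprocess.call(
        ["ping", "-c", str(count), host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ) == 0

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

measurement_host = "nms.site.example.edu"   # hypothetical site measurement host
print("ICMP:", ping_ok(measurement_host))
for name, port in [("owampd", 861), ("bwctld", 4823), ("ndt", 7123)]:
    print(name, port_open(measurement_host, port))
```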

Optional Deployment Functionality
- Data archiving
- Regularly scheduled probes: PingER/owamp/bwctl
- Circuit status/utilization
Resources required:
- Linux system with a reasonable amount of disk space (possibly two)
- Configuration to interact with existing infrastructure
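
One way a site might drive the regularly scheduled probes is a simple cron-invoked wrapper around the command-line tools, as sketched below for owping. The peer hostname and archive path are hypothetical, and the exact owping invocation should be checked against the installed version.

```python
# Sketch of a cron-driven scheduled probe: run owping against a peer site and
# append the raw output to a local archive. The peer hostname and archive path
# are hypothetical; verify the owping flags against the installed version.
import datetime
import subprocess

PEER = "nms.t2-peer.example.edu"            # hypothetical remote measurement host
ARCHIVE = "/var/lib/nms/owamp-results.log"  # hypothetical local archive file

def run_probe():
    result = subprocess.run(
        ["owping", "-c", "100", PEER],      # -c: number of test packets
        capture_output=True, text=True, timeout=120,
    )
    stamp = datetime.datetime.utcnow().isoformat()
    with open(ARCHIVE, "a") as log:
        log.write(f"=== {stamp} {PEER} rc={result.returncode}\n")
        log.write(result.stdout)
        log.write(result.stderr)

if __name__ == "__main__":                  # e.g. invoked every 10 minutes from cron
    run_probe()
```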

Hardware Requirements
2 dedicated hosts (~$500/host):
- Differentiate network tests from LHC application tests
- Gives 'local' servers to test the LHC servers against
1 host for latency-related tests (and pS infrastructure tasks; these will move if they cause problems):
- Minimal CPU/memory/disk requirements; recommend a minimum of 2.0 GHz / 1 GB
- Best if power management is disabled and the host is in a temperature-controlled environment
- Nearly any NIC is OK; recommend a 10/100/1000 Mbps NIC (on-board is fine)
1 host for throughput-related tests:
- CPU/NIC needs to be 'right-sized' for throughput intensities
- Recommend a 10/100/1000 Mbps NIC (on-board is fine)

Software Deployment Issues
Difficult software prerequisites:
- Web100 kernel (for NDT)
- Oracle XMLDB (for archives and info services); this is the free, open-source XML DB formerly known as SleepyCat
- OWAMP's stable-clock requirements (NTP is not accurate in a VM…)
Deployment options:
- NPT (Network Performance Toolkit) Knoppix disk: the current version has most 'required' functionality and only lacks 'registration' with the lookup service; we intend to have the LHC-related perfSONAR services on the disk by the July Joint Techs conference
- RPM installs: the pS-PS development team can support 32-bit RHEL5 RPMs directly; looking for community involvement to support additional OS/hardware architectures
- Source installs
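
Because OWAMP's one-way measurements depend on a well-disciplined clock, a deployment might verify NTP synchronization before enabling scheduled tests. The sketch below assumes the classic ntpd toolchain and simply looks for the selected peer (the '*' marker) in ntpq -pn output.

```python
# Sketch: verify the host clock is NTP-synchronized before enabling OWAMP
# tests. Assumes the classic ntpd toolchain; in 'ntpq -pn' output the peer
# the daemon has selected for synchronization is marked with '*'.
import subprocess
import sys

def ntp_synchronized():
    try:
        out = subprocess.run(
            ["ntpq", "-pn"], capture_output=True, text=True, timeout=10
        ).stdout
    except (OSError, subprocess.TimeoutExpired):
        return False
    return any(line.startswith("*") for line in out.splitlines())

if not ntp_synchronized():
    sys.exit("clock not synchronized; refusing to start one-way delay tests")
print("NTP sync OK")
```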

Before the demo… Summary
- A scalable infrastructure for providing network performance information to interested LHC participants (humans as well as applications)
- Open source licenses and development model
- Multiple deployment options
- Interfaces for any application to consume the data
- Internet2, ESnet, and partners are committed to supporting these tools
More information:
- http://www.internet2.edu/performance/pS-PS
- http://e2epi.internet2.edu/bwctl/
- http://e2epi.internet2.edu/ndt/
- http://e2epi.internet2.edu/owamp/
- http://www-iepm.slac.stanford.edu/pinger/
- Internet2 Community Performance WG: https://mail.internet2.edu/wws/info/performance-announce

Jt Techs Deployment
Deployment locations: UNL, LBL, USLHC (Caltech), CERN, UM, Internet2 (office)
Diagnostic services: NDT, NPAD, BWCTL, OWAMP
Services: SNMP MA, PingER MA, perfSONAR-BUOY (BWCTL/OWAMP scheduled)

Measurement System Deployment