Monitoring and Discovery in a Web Services Framework: Functionality and Performance of Globus Toolkit MDS4 Jennifer M. Schopf Argonne National Laboratory.

Monitoring and Discovery in a Web Services Framework: Functionality and Performance of Globus Toolkit MDS4 Jennifer M. Schopf Argonne National Laboratory UK National eScience Centre (NeSC) Sept 11, 2006

2 What is a Grid? l Resource sharing u Computers, storage, sensors, networks, … u Sharing always conditional: issues of trust, policy, negotiation, payment, … l Coordinated problem solving u Beyond client-server: distributed data analysis, computation, collaboration, … l Dynamic, multi-institutional virtual orgs u Community overlays on classic org structures u Large or small, static or dynamic

3 Why is this hard/different? l Lack of central control u Where things run u When they run l Shared resources u Contention, variability l Communication u Different sites imply different sys admins, users, institutional goals, and often strong personalities

4 So why do it? l Computations that need to be done within a time limit l Data that can't fit on one site l Data owned by multiple sites l Applications that need to be run bigger, faster, more

5 What Is Grid Monitoring? l Sharing of community data between sites using a standard interface for querying and notification u Data of interest to more than one site u Data of interest to more than one person u Summary data is possible to help scalability l Must deal with failures u Both of information sources and servers l Data likely to be inaccurate u It generally needs to be acceptable for data to be somewhat out of date

6 Common Use Cases l Decide what resource to submit a job to, or to transfer a file from l Keep track of services and be warned of failures l Run common actions to track performance behavior l Validate sites meet a (configuration) guideline

7 OUTLINE l Grid Monitoring and Use Cases l MDS4 u Information Providers u Higher level services u WebMDS l Deployments u Metascheduling data for TeraGrid u Service failure warning for ESG l Performance Numbers l MDS For You!

8 What is MDS4? l Grid-level monitoring system used most often for resource selection and error notification u Aid user/agent to identify host(s) on which to run an application u Make sure that they are up and running correctly l Uses standard interfaces to provide publishing of data, discovery, and data access, including subscription/notification u WS-ResourceProperties, WS-BaseNotification, WS-ServiceGroup l Functions as an hourglass to provide a common interface to lower-level monitoring tools

9 (Architecture diagram: information users such as schedulers, portals, and warning systems access MDS4 through WS standard interfaces for subscription, registration, and notification; standard schemas (e.g. the GLUE schema) describe data collected from cluster monitors (Ganglia, Hawkeye, Clumon, and Nagios), services (GRAM, RFT, RLS), and queuing systems (PBS, LSF, Torque).)

10 Web Service Resource Framework (WS-RF) l Defines standard interfaces and behaviors for distributed system integration, especially (for us): u Standard XML-based service information model u Standard interfaces for push and pull mode access to service data l Notification and subscription

11 MDS4 Uses Web Service Standards l WS-ResourceProperties u Defines a mechanism by which Web Services can describe and publish resource properties, or sets of information about a resource u Resource property types defined in the service's WSDL u Resource properties can be retrieved using WS-ResourceProperties query operations l WS-BaseNotification u Defines a subscription/notification interface for accessing resource property information l WS-ServiceGroup u Defines a mechanism for grouping related resources and/or services together as service groups
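To make the pull-mode access concrete, the sketch below builds the body of a WS-ResourceProperties GetResourceProperty request: a single element whose text is the QName of the property being asked for. This is illustrative Python, not MDS4 code; the property namespace URI is invented, and GT4 actually shipped against draft versions of the WSRF namespaces, so the exact URIs differ by release.

```python
import xml.etree.ElementTree as ET

# WS-ResourceProperties 1.2 namespace; GT4 used draft versions of this URI.
WSRP_NS = "http://docs.oasis-open.org/wsrf/rp-2"

def build_get_resource_property(prop_name: str, prop_ns: str) -> str:
    """Build the SOAP body of a pull-mode GetResourceProperty request:
    one element whose text content is the QName of the wanted property."""
    ET.register_namespace("wsrp", WSRP_NS)
    req = ET.Element("{%s}GetResourceProperty" % WSRP_NS)
    req.set("xmlns:tns", prop_ns)   # bind the property's namespace to a prefix
    req.text = "tns:" + prop_name   # reference the resource property by QName
    return ET.tostring(req, encoding="unicode")

# Hypothetical property namespace -- the real GT4 metadata URI differs.
body = build_get_resource_property(
    "ServiceMetaDataInfo", "http://example.org/mds/metadata")
print(body)
```

A subscription in WS-BaseNotification works the same way in spirit: the client sends a Subscribe message naming the resource property topic, and the service later pushes Notify messages carrying new values.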

12 MDS4 Components l Information providers u Monitoring is a part of every WSRF service u Non-WS services can also be used l Higher level services u Index Service – a way to aggregate data u Trigger Service – a way to be notified of changes u Both built on common aggregator framework l Clients u WebMDS l All of the tools are schema-agnostic, but interoperability needs a well-understood common language

13 Information Providers l Data sources for the higher-level services l Some are built into services u Any WSRF-compliant service publishes some data automatically u WS-RF gives us standard Query/Subscribe/Notify interfaces u GT4 services: ServiceMetaDataInfo element includes start time, version, and service type name u Most of them also publish additional useful information as resource properties

14 Information Providers: GT4 Services l Reliable File Transfer Service (RFT) u Service status data, number of active transfers, transfer status, information about the resource running the service l Community Authorization Service (CAS) u Identifies the VO served by the service instance l Replica Location Service (RLS) u Note: not a WS u Location of replicas on physical storage systems (based on user registrations) for later queries

15 Information Providers (2) l Other sources of data u Any executables u Other (non-WS) services u Interface to another archive or data store u File scraping l Just need to produce a valid XML document
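Since the only contract is "produce a valid XML document", an external information provider can be a very small program. The sketch below is a hypothetical stand-alone provider that reports free disk space as XML; the element names are invented for illustration and are not from any MDS4 schema.

```python
import shutil
import xml.etree.ElementTree as ET

def disk_info_provider(path="/"):
    """Hypothetical external information provider: report free disk space
    as a small XML document that the aggregator framework's execution
    mechanism could collect by running this script."""
    usage = shutil.disk_usage(path)
    root = ET.Element("DiskInfo", {"mountPoint": path})
    ET.SubElement(root, "TotalBytes").text = str(usage.total)
    ET.SubElement(root, "FreeBytes").text = str(usage.free)
    return ET.tostring(root, encoding="unicode")

xml_doc = disk_info_provider()
print(xml_doc)
```

The same pattern covers the other data sources on this slide: wrap the executable, non-WS service, archive query, or file scrape in a small program whose stdout is well-formed XML.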

16 Information Providers: Cluster and Queue Data l Interfaces to Hawkeye, Ganglia, CluMon, Nagios u Basic host data (name, ID), processor information, memory size, OS name and version, file system data, processor load data u Some condor/cluster specific data u This can also be done for sub-clusters, not just at the host level l Interfaces to PBS, Torque, LSF u Queue information, number of CPUs available and free, job count information, some memory statistics and host info for head node of cluster

17 Other Information Providers l File Scraping u Mostly used for data you can't find programmatically u System downtime, contact info for sys admins, online help web pages, etc. l Others as contributed by the community!

18 Higher-Level Services l Index Service u Caching registry l Trigger Service u Warn on error conditions l All of these have common needs, and are built on a common framework

19 MDS4 Index Service l Index Service is both registry and cache u Datatype and data provider info, like a registry (UDDI) u Last value of data, like a cache l Subscribes to information providers l In-memory default approach u DB backing store currently being discussed to allow for very large indexes l Can be set up for a site or set of sites, a specific set of project data, or for user-specific data only l Can be a multi-rooted hierarchy u No *global* index

20 MDS4 Trigger Service l Subscribe to a set of resource properties l Evaluate that data against a set of pre-configured conditions (triggers) l When a condition matches, action occurs u Email is sent to a pre-defined address u Website updated
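The condition/action loop can be sketched in a few lines. This is an illustration of the idea, not MDS4 code: the trigger fires its action only after the condition has held for a set number of consecutive polls, the kind of damping the ESG deployment uses to avoid false positives from vacillating values.

```python
class Trigger:
    """Sketch of a trigger: fire an action only after a condition has
    held for `required` consecutive polls, damping out flapping values."""
    def __init__(self, condition, action, required=3):
        self.condition = condition
        self.action = action
        self.required = required
        self.matches = 0   # consecutive polls for which the condition held

    def poll(self, value):
        if self.condition(value):
            self.matches += 1
            if self.matches == self.required:
                self.action(value)   # fire exactly once per outage
        else:
            self.matches = 0         # condition cleared; reset the count

fired = []
t = Trigger(lambda status: status == "DOWN", fired.append, required=3)
for status in ["UP", "DOWN", "UP", "DOWN", "DOWN", "DOWN", "DOWN"]:
    t.poll(status)
# The action runs once, on the third consecutive "DOWN".
```

In the real service the action would send email or update a web page rather than append to a list.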

21 Common Aspects 1) Collect information from information providers u Java class that implements an interface to collect XML-formatted data u Query uses WS-ResourceProperty mechanisms to poll a WSRF service u Subscription uses WS-Notification subscription/notification u Execution executes an administrator-supplied program to collect information 2) Common interfaces to external services u These should all have the standard WS-RF service interfaces

22 Common Aspects (2) 3) Common configuration mechanism u Maintain information about which information providers to use and their associated parameters u Specify what data to get, and from where 4) Services are self-cleaning u Each registration has a lifetime u If a registration expires without being refreshed, it and its associated data are removed from the server 5) Soft consistency model u Flexible update rates from different information providers u Published information is recent, but not guaranteed to be the absolute latest u Load caused by information updates is reduced at the expense of having slightly older information u E.g. free disk space on a system 5 minutes ago rather than 2 seconds ago
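The self-cleaning behavior in point 4 is classic soft state, and can be sketched as a tiny registry in which every entry carries a lifetime and unrefreshed entries are purged. This is illustrative Python under invented names, not the MDS4 implementation (which keeps this state inside the Java WS core).

```python
import time

class SoftStateRegistry:
    """Sketch of a self-cleaning registry: each registration has a
    lifetime, and entries not refreshed before expiry are removed."""
    def __init__(self):
        self.entries = {}   # name -> (data, expiry timestamp)

    def register(self, name, data, lifetime_s):
        """Add or refresh a registration with a new lifetime."""
        self.entries[name] = (data, time.time() + lifetime_s)

    def sweep(self):
        """Drop every registration whose lifetime has expired."""
        now = time.time()
        self.entries = {n: (d, exp) for n, (d, exp) in self.entries.items()
                        if exp > now}

    def lookup(self, name):
        self.sweep()
        entry = self.entries.get(name)
        return entry[0] if entry else None

reg = SoftStateRegistry()
reg.register("rsc1.a/ganglia", "<Host name='rsc1.a'/>", lifetime_s=0.05)
assert reg.lookup("rsc1.a/ganglia") is not None
time.sleep(0.1)                                  # lifetime passes, no refresh
assert reg.lookup("rsc1.a/ganglia") is None      # entry self-cleaned
```

A provider that is alive simply re-registers on a period shorter than its lifetime; a dead provider silently disappears, which is exactly the soft consistency trade-off described above.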

23 Aggregator Framework

24 Aggregator Framework is a General Service l This can be used for other higher-level services that want to u Subscribe to Information Provider u Do some action u Present standard interfaces l Archive Service u Subscribe to data, put it in a database, query to retrieve, currently in discussion for development l Prediction Service u Subscribe to data, run a predictor on it, publish results l Compliance Service u Subscribe to data, verify a software stack match to definition, publish yes or no

25 WebMDS User Interface l Web-based interface to WSRF resource property information l User-friendly front-end to Index Service l Uses standard resource property requests to query resource property data l XSLT transforms to format and display them l Customized pages are created simply by using HTML form options and your own XSLT transforms l Sample page: u ds?info=indexinfo&xsl=servicegroupxsl
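WebMDS itself applies XSLT stylesheets to the query results; as a rough stand-in, the sketch below shows the same idea (resource-property XML in, HTML table rows out) in plain Python. The XML snippet and its element names are invented for illustration, not the real index service-group format.

```python
import xml.etree.ElementTree as ET

# Hypothetical snippet of index content: one entry per monitored service.
INDEX_XML = """
<ServiceGroup>
  <Entry name="GridFTP" site="NCAR" status="OK"/>
  <Entry name="RLS" site="LLNL" status="FAILED"/>
</ServiceGroup>
"""

def render_table(xml_text):
    """Mimic the WebMDS XSLT step: turn resource-property XML into
    HTML table rows, one row per service entry."""
    rows = []
    for entry in ET.fromstring(xml_text).iter("Entry"):
        rows.append("<tr><td>{}</td><td>{}</td><td>{}</td></tr>".format(
            entry.get("name"), entry.get("site"), entry.get("status")))
    return "<table>" + "".join(rows) + "</table>"

html = render_table(INDEX_XML)
print(html)
```

Customizing a WebMDS page amounts to swapping in a different stylesheet for this transform step, which is why the HTML form options on the sample page select an `xsl` parameter.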

26 WebMDS Service

27

28

29

30

31 (Deployment diagram: a VO Index aggregates Site 1, Site 2, and Site 3 indexes, a West Coast Index, and an App B Index; the site indexes in turn collect from resource-level services and monitors such as GRAM, RFT, RLS, Ganglia/PBS, Ganglia/LSF, and Hawkeye; WebMDS queries the VO Index, and a Trigger Service subscribed to it fires trigger actions.)

32 (Same deployment diagram, with a legend for index, container, service, and registration symbols, highlighting one registration path: the RFT service on Rsc1.d registers with the Site 1 Index, which in turn feeds the VO Index.)

33

34

35 (Deployment diagram repeated, showing the fully assembled hierarchy from slide 31.)

36 Any questions before I walk through two current deployments? l Grid Monitoring and Use Cases l MDS4 u Information Providers u Higher-level services u WebMDS l Deployments u Metascheduling Data for TeraGrid u Service Failure warning for ESG l Performance Numbers l MDS for You!

37 Working with TeraGrid l Large US project across 9 different sites u Different hardware, queuing systems and lower level monitoring packages l Starting to explore MetaScheduling approaches u Currently evaluating almost 20 approaches l Need a common source of data with a standard interface for basic scheduling info

38 Cluster Data l Provide data at the subcluster level u Sys admin defines a subcluster, we query one node of it to dynamically retrieve relevant data l Can also list per-host details l Interfaces to Ganglia, Hawkeye, CluMon, and Nagios available now u Other cluster monitoring systems can write into a .html file that we then scrape

39 Cluster Info l UniqueID l Benchmark/Clock speed l Processor l MainMemory l OperatingSystem l Architecture l Number of nodes in a cluster/subcluster l StorageDevice u Disk names, mount point, space available l TG-specific node properties

40 Data to collect: Queue info l LRMSType l LRMSVersion l DefaultGRAMVersion, port, and host l TotalCPUs l Status (up/down) l TotalJobs (in the queue) l RunningJobs l WaitingJobs l FreeCPUs l MaxWallClockTime l MaxCPUTime l MaxTotalJobs l MaxRunningJobs l Interfaces to PBS (Pro, Open, Torque) and LSF
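A queue information provider mostly boils down to parsing the local scheduler's status output into attributes like those above. The sketch below uses an invented flat input format; real providers parse the actual `qstat`/`bqueues` output of PBS, Torque, or LSF, and the derived-attribute rule (TotalJobs as running plus waiting) is an assumption for illustration.

```python
# Hypothetical one-line-per-queue scheduler output:
# name  total_cpus  running  waiting  free_cpus  status
RAW = """\
dque 64 12 40 8 up
debug 16 2 1 13 up
"""

FIELDS = ["Name", "TotalCPUs", "RunningJobs", "WaitingJobs",
          "FreeCPUs", "Status"]

def parse_queues(text):
    """Turn flat scheduler output into per-queue attribute dictionaries
    resembling the queue info list above."""
    records = []
    for line in text.strip().splitlines():
        rec = dict(zip(FIELDS, line.split()))
        for key in ("TotalCPUs", "RunningJobs", "WaitingJobs", "FreeCPUs"):
            rec[key] = int(rec[key])
        # Derived attribute: jobs in the queue = running + waiting.
        rec["TotalJobs"] = rec["RunningJobs"] + rec["WaitingJobs"]
        records.append(rec)
    return records

queues = parse_queues(RAW)
```

Each record would then be serialized to XML against the agreed schema before the index service picks it up.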

41 How will the data be accessed? l Java and command line APIs to a common TG-wide Index server u Alternatively each site can be queried directly l One common web page for TG l Query page is next!

42

43 Status l Demo system running since Autumn 05 u Queuing data from SDSC and NCSA u Cluster data using CluMon interface l All sites in process of deployment u Queue data from 7 sites reporting in u Cluster data still coming online

44 Earth Systems Grid Deployment l Supports the next generation of climate modeling research l Provides the infrastructure and services that allow climate scientists to publish and access key data sets generated from climate simulation models l Datasets include simulations generated using the Community Climate System Model (CCSM) and the Parallel Climate Model (PCM) l Accessed by scientists throughout the world.

45 Who uses ESG? l In 2005 u ESG web portal issued 37,285 requests to download terabytes of data l By the fourth quarter of 2005 u Approximately two terabytes of data downloaded per month l 1881 registered users in 2005 l Currently adding users at a rate of more than 150 per month

46 What are the ESG resources? l Resources at seven sites u Argonne National Laboratory (ANL) u Lawrence Berkeley National Laboratory (LBNL) u Lawrence Livermore National Laboratory (LLNL) u Los Alamos National Laboratory (LANL) u National Center for Atmospheric Research (NCAR) u Oak Ridge National Laboratory (ORNL) u USC Information Sciences Institute (ISI) l Resources include u Web portal u HTTP data servers u Hierarchical mass storage systems u OPeNDAP system u Storage Resource Manager (SRM) u GridFTP data transfer service u Metadata and replica management catalogs

47

48 The Problem: l Users are 24/7 l Administrative support was not! l Any failure of ESG components or services can severely disrupt the work of many scientists The Solution l Detect failures quickly and minimize infrastructure downtime by deploying MDS4 for error notification

49 ESG Services Being Monitored
Service Being Monitored (ESG Location):
GridFTP server: NCAR
OPeNDAP server: NCAR
Web Portal: NCAR
HTTP Dataserver: LANL, NCAR
Replica Location Service (RLS) servers: LANL, LBNL, LLNL, NCAR, ORNL
Storage Resource Managers: LANL, LBNL, NCAR, ORNL
Hierarchical Mass Storage Systems: LBNL, NCAR, ORNL

50 Index Service l Site-wide index service is queried by the ESG web portal u Generate an overall picture of the state of ESG resources displayed on the Web

51

52 Trigger Service l Site-wide trigger service collects data and sends email upon errors u Information providers are polled at pre-defined intervals u Value must be matched for a set number of intervals for a trigger to occur, to avoid false positives u Trigger has a delay associated for vacillating values u Used for offline debugging as well

53 1 Month of Error Messages
Total error messages for May: 47
Messages related to certificate and configuration problems at LANL: 38
Failure messages due to brief interruption in network service at ORNL on 5/13: 2
HTTP data server failure at NCAR 5/17: 1
RLS failure at LLNL 5/22: 1
Simultaneous error messages for SRM services at NCAR, ORNL, LBNL on 5/23: 3
RLS failure at ORNL 5/24: 1
RLS failure at LBNL 5/31: 1


55 l For the month of May 2006, ESG's deployment of MDS4 generated 47 failure messages that were sent to ESG system administrators. These are summarized in Table 3. The majority of these failure messages were caused by downtime throughout the month of services at LANL due to certificate expiration and service configuration problems. During this period, LANL staff members were not available to address these issues. Additional LANL failure messages would have been generated, except that we disabled the triggers from LANL during a two-week period when the responsible staff person was on vacation. l The remaining error messages indicate short-term service failures. Two failure messages were generated due to a network outage at ORNL on May 13th. Three error messages for Storage Resource Managers (SRMs) at different sites were generated on May 23rd. Since it is unlikely that all three of these services failed simultaneously, these messages were more likely due to a problem with the network, the monitoring services, or the client that checks the status of SRM services.

56 Benefits l Overview of current system state for users and system administrators u At-a-glance info on resource and service availability u Uniform interface to monitoring data l Failure notification u System admins can identify and quickly address failed components and services u Before this deployment, services would fail and might not be detected until a user tried to access an ESG dataset l Validation of new deployments u Verify the correctness of the service configurations and deployment with the common trigger tests l Failure deduction u A failure examined in isolation may not accurately reflect the state of the system or the actual cause of a failure u System-wide monitoring data can show a pattern of failure messages occurring close together in time, which can be used to deduce a problem at a different level of the system u E.g. the three simultaneous SRM failures u E.g. use of MDS4 to evaluate a file descriptor leak

57 OUTLINE l Grid Monitoring and Use Cases l MDS4 u Index Service u Trigger Service u Information Providers l Deployments u Metascheduling Data for TeraGrid u Service Failure warning for ESG l Performance Numbers l MDS for You!

58 Scalability Experiments l MDS index u Dual 2.4GHz Xeon processors, 3.5 GB RAM u Sizes: 1, 10, 25, 50, 100 l Clients u 20 nodes, dual 2.6 GHz Xeon, 3.5 GB RAM u 1, 2, 3, 4, 5, 6, 7, 8, 16, 32, 64, 128, 256, 384, 512, 640, 768, 800 l Nodes connected via 1Gb/s network l Each data point is the average over 8 minutes u Runs lasted 10 mins, but the first 2 were spent getting clients up and running u Error bars are SD over the 8 mins l Experiments by Ioan Raicu, U of Chicago, using DiPerf

59 Size Comparison l In our current TeraGrid demo u 17 attributes from 10 queues at SDSC and NCSA u Host data: 3 attributes for approx 900 nodes u 12 attributes of sub-cluster data for 7 subclusters u ~3,000 attributes, ~1,900 XML elements, ~192KB l Tests here used 50 sample entries u Element count of 1,113 u ~94KB in size

60

61

62 MDS4 Stability (Table: version, index size, time up in days, queries processed, queries per second, and round-trip time in ms for several long-running indexes; the numeric values were garbled in this transcript.)

63 Index Maximum Size (Table: heap size in MB, approximate maximum index entries, and index size in MB; the numeric values were garbled in this transcript.)

64 Performance l Is this enough? u We don't know! u Currently gathering usage statistics to find out what people need l Bottleneck examination u In the process of doing in-depth performance analysis of what happens during a query u MDS code, implementation of WS-N, WS-RP, etc.

65 MDS For You l Grid Monitoring and Use Cases l MDS4 u Information Providers u Higher-level services u WebMDS l Deployments u Metascheduling Data for TeraGrid u Service Failure warning for ESG l Performance Numbers l MDS for You!

66 How Should You Deploy MDS4? l Ask: Do you need a Grid monitoring system? l Sharing of community data between sites using a standard interface for querying and notification u Data of interest to more than one site u Data of interest to more than one person u Summary data is possible to help scalability

67 What does your project mean by monitoring? l Display site data to make resource selection decisions l Job tracking l Error notification l Site validation l Utilization statistics l Accounting data

68 What does your project mean by monitoring? l Display site data to make resource selection decisions l Job tracking l Error notification l Site validation l Utilization statistics l Accounting data MDS4 a Good Choice!

69 What does your project mean by monitoring? l Display site data to make resource selection decisions l Job tracking – generally application specific l Error notification l Site validation l Utilization statistics – use local info l Accounting data- use local info and reliable messaging – AMIE from TG is one option Think about other tools

70 What data do you need? l There is no generally agreed-upon list of data every site should collect l Two possible examples u What TG is deploying u What GIN-Info is collecting (GINInfoWiki) l Make sure the data you want is actually theoretically possible to collect! l Worry about the schema later

71 Building your own info providers l See the developer session! l Some pointers… l List of new providers u -drafts/info/providers/index.html l How to write info providers: u /rpprovider-overview.html u provider-documentation.html u DS_Index_HOWTO_Execution_Aggregator.html

72 How many Index Servers? l Generally one at each site, one for full project l Can be cross referenced and duplicated l Can also set them up for an application group or any subset

73 What Triggers? l What are your critical services?

74 What Interfaces? l Command line, Java, C, and Python come for free l WebMDS gives you the simple one out of the box l Can stylize, as TG and ESG did; very straightforward

75 What will you be able to do? l Decide what resource to submit a job to, or to transfer a file from l Keep track of services and be warned of failures l Run common actions to track performance behavior l Validate sites meet a (configuration) guideline

76 Summary l MDS4 is a WS-based Grid monitoring system that uses current standards for interfaces and mechanisms l Available as part of the GT4 release u Currently in use for resource selection and fault notification l Initial performance results aren't awful – we need to do more work to determine bottlenecks

77 Where do we go next? l Extend MDS4 information providers u More data from GT4 services u Interface to other data sources l Inca, GRASP, PinGER Archive, NetLogger l Additional deployments l Additional scalability testing and development u Database backend to Index service to allow for very large indexes u Performance improvements to queries – partial result return

78 Other Possible Higher Level Services l Archiving service u The next high-level service we'll build u Currently a design document internally, should be made external shortly l Site Validation Service (ala Inca) l Prediction service (ala NWS) l What else do you think we need? Contribute to the roadmap!

79 Other Ways To Contribute l Join the mailing lists and offer your thoughts! l Offer to contribute your information providers, higher level service, or visualization system l If you've got a complementary monitoring system – think about being an Incubator project (contact us or come to the talk on Thursday)

80 Thanks l MDS4 Core Team: Mike D'Arcy (ISI), Laura Pearlman (ISI), Neill Miller (UC), Jennifer Schopf (ANL) l MDS4 Additional Development help: Eric Blau, John Bresnahan, Mike Link, Ioan Raicu, Xuehai Zhang l This work was supported in part by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Advanced Scientific Computing Research, U.S. Department of Energy, under contract W Eng-38, and NSF NMI Award SCI. ESG work was supported by the U.S. Department of Energy under the Scientific Discovery Through Advanced Computation (SciDAC) Program Grant DE-FC02-01ER. This work was also supported by a DOE SciDAC Grant, iVDGL from NSF, and others.

81

82 For More Information l Jennifer Schopf l Globus Toolkit MDS4 l MDS-related events at GridWorld u MDS for Developers: Monday 4:00-5:30, 149 A/B u MDS Meet the Developers session: Tuesday 12:30-1:30, Globus Booth