
1 The Agile Infrastructure Project: Monitoring. Markus Schulz, Pedro Andrade. CERN IT Department, CH-1211 Genève 23, Switzerland, www.cern.ch/it

2 Outline
– Monitoring WG and AI
– Today's Monitoring in IT
– Architecture Vision
– Implementation Plan
– Conclusions

3 Monitoring WG and AI (Markus Schulz)

4 Introduction
Motivation:
– Several independent monitoring activities in IT: similar overall approach, different tool chains, similar limitations
– High-level services are interdependent: combining data from different groups is necessary, but difficult
– Understanding performance has become more important: it requires more combined data and complex analysis
– The move to a virtualized, dynamic infrastructure comes with complex new requirements on monitoring
Challenge:
– Find a shared architecture and tool-chain components while preserving our investment in monitoring: the task of the IT Monitoring Working Group

5 Timeline
– Q1 2011: Creation of the Monitoring WG and mandate definition; presentations of monitoring status per IT group
– Q3 2011: Presentations of monitoring plans per IT group; initial discussion on a shared monitoring architecture
– Q4 2011: Definition of common tools and core user stories; agreement on a shared monitoring architecture
– Q1 2012: Preparation of the MWG summary report; definition of implementation plans in the context of AI
– Q2 2012: Setup of infrastructure and prototype work; import of data from several sources into the Analysis Facility; exercising messaging at expected rates and feeding the storage system

6 Today's Monitoring in IT (Pedro Andrade)

7 Monitoring Applications
Group: Applications
– CF: Lemon, LAS, SLS
– CIS: CDS, Indico
– CS: Spectrum CA, Events, Polling Value, Alarm History, Performance Analysis, Sflow/Nflow, Syslog, Wireless Monitoring
– DB: Database monitoring, Web applications monitoring, Infrastructure monitoring
– DI: Central Security Logging All, Central Security Logging Logins, IP connections log, Deep Packet Inspection, DNS Logs
– DSS: TSM, AFS, CASTOR Tape, CASTOR Stager
– ES: Job Monitoring, Site Status Board, DDM Monitoring, Data Popularity, Hammer Cloud, Frontier, Coral
– GT: SAM-Nagios
– OIS: SCOM
– PES: Job Accounting, Fairshare, Job Monitoring, Real-time Job Status, Process Accounting

8 Monitoring Applications (overview figure)

9 Monitoring Data
– Producers: 40538
– Input volume: 283 GB per day
– Input rate: 697 M entries per min; 2.4 M entries per min without PES/process accounting
– Query rate: 52 M queries per day; 3.3 M queries per day without PES/process accounting

10 Analysis
– Monitoring in IT covers a wide range of resources: hardware, OS, applications, files, jobs, etc.
– Many application-specific monitoring solutions: some are commercial solutions, based on different technologies
– Limited sharing of monitoring data: at worst no sharing at all, just duplication of monitoring data
– All monitoring applications have similar needs: publish metric results, aggregate results, raise alarms, etc.

11 Architecture Vision (Pedro Andrade)

12 Constraints (Data)
– A large data store aggregating all monitoring data for storage and combined analysis tasks
– Make monitoring data easy to access by everyone (without forgetting possible security constraints)
– Select a simple and well supported data format; the monitoring payload is to be schema-free
– Rely on centralized metadata service(s) to discover computer centre resource information: which physical node is running virtual machine A, which virtual machine is running service B, which network link is used by node C, ... this is becoming more dynamic in the AI

13 Constraints (Technology)
– Focus on providing well established solutions for each layer of the monitoring architecture: transport, storage, analysis
– Flexible architecture in which a particular technology can easily be replaced by a better one
– Adopt existing tools whenever possible and avoid home-grown solutions
– Follow a tool-chain approach
– Allow a phased transition in which existing applications are gradually integrated

14 User Stories
– User stories were collected from all IT groups and commonalities between them were identified
– To guarantee that different types of user stories were provided, three categories were established:
  – Fast and Furious (FF): get metric values for hardware and selected services; raise alarms according to appropriate thresholds
  – Digging Deep (DD): curation of hardware and network historical data; analysis and statistics on batch job and network data
  – Correlate and Combine (CC): correlation between usage, hardware, and services; correlation between job status and grid status

15 Architecture Overview (diagram: sensors and publishers feed application-specific and custom data through a messaging layer into storage, analysis, and alarm components, with portals and reports on top; technologies shown include Apollo, Lemon, Hadoop, Oracle, and Splunk)

16 Architecture Overview
– All components can be changed easily, including the messaging system (standard protocol)
– Messaging and storage are the central components; tools connect either to messaging or to storage
– Publishers should be kept as simple as possible: data is produced either directly on the sensor or after a first level of aggregation
– Scalability can be addressed either by scaling horizontally or by adding additional layers (pre-aggregation, pre-processing): a “fractal approach”

17 Data Format
– The selected message format is JSON
– A simple common schema must be defined to guarantee cross-referencing between the data: timestamp, hardware and node, service and applications, payload
– These base elements (tags) require the availability of the metadata service(s) mentioned before; this is still under discussion
A sketch of such a message is shown below.
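A minimal sketch of what one such message could look like, in Python; every field name and value here is an illustrative assumption, since the actual common schema is still under discussion:

    # Hypothetical monitoring message following the base elements above.
    import json
    import time

    message = {
        "timestamp": int(time.time()),          # when the metric was taken
        "hardware": {"node": "vm042.cern.ch"},  # assumed node naming
        "service": {"name": "lemon-agent", "application": "lemon"},
        "payload": {"metric": "cpu_load_1m",    # schema-free payload
                    "value": 0.42},
    }

    print(json.dumps(message))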

18 Messaging Broker
– Two technologies have been identified as the best candidates: Apollo and RabbitMQ
  – Apollo is the successor of ActiveMQ
  – Prior positive experience in IT and the experiments
– Only realistic testing environments can produce reliable performance numbers; the use case of each application must be clearly defined: total number of producers and consumers, size of the monitoring messages, rate of the monitoring messages
– The trailblazer applications already have very demanding use cases
A publisher sketch is shown below.
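A minimal publisher sketch, assuming the broker speaks STOMP (Apollo does natively, RabbitMQ via a plugin) and using the stomp.py client; host, port, credentials, and destination name are all assumptions:

    # Hypothetical publisher: send a JSON monitoring message over STOMP.
    import json
    import time

    import stomp  # pip install stomp.py

    conn = stomp.Connection([("broker.example.cern.ch", 61613)])  # assumed host/port
    conn.connect("monitoring", "secret", wait=True)               # assumed credentials

    message = {"timestamp": int(time.time()),
               "hardware": {"node": "vm042.cern.ch"},
               "payload": {"metric": "cpu_load_1m", "value": 0.42}}

    # The destination is an assumption; a topic gives fan-out to many consumers.
    conn.send(destination="/topic/monitoring.metrics", body=json.dumps(message))
    conn.disconnect()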

19 Central Storage and Analysis
– All data is stored in a common location: this makes sharing of monitoring data easy, promotes sharing of analysis tools, and allows feeding already-processed data into the system
– NoSQL technologies are the most suitable solutions: focus on column/tabular and document-based solutions, with Hadoop (from the Cloudera distribution) as a first step

20 Central Storage and Analysis
– Hadoop is a good candidate to start with:
  – Prior positive experience in IT and the experiments
  – The map-reduce paradigm is a good match for the use cases
  – It has been used successfully at scale
  – Many different NoSQL solutions use Hadoop as a backend
  – Many tools provide export and import interfaces
  – Several related modules are available (Hive, HBase)
– Document-based stores are also considered: CouchDB/MongoDB are good candidates
– For some use cases a parallel relational database solution (based on Oracle) could be considered
A map-reduce sketch over such data follows below.
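To illustrate why map-reduce fits this kind of data, here is a minimal Hadoop Streaming sketch in Python that counts monitoring entries per node from JSON-lines input; the field names follow the hypothetical message format above and are assumptions:

    # mapper.py: emit "node<TAB>1" for every JSON record read from stdin.
    import json
    import sys

    for line in sys.stdin:
        try:
            record = json.loads(line)
        except ValueError:
            continue  # skip malformed lines
        node = record.get("hardware", {}).get("node", "unknown")
        print("%s\t1" % node)

    # reducer.py: sum the counts per node (Hadoop sorts input by key).
    import sys

    current, count = None, 0
    for line in sys.stdin:
        node, _, n = line.rstrip("\n").partition("\t")
        if node != current and current is not None:
            print("%s\t%d" % (current, count))
            count = 0
        current = node
        count += int(n)
    if current is not None:
        print("%s\t%d" % (current, count))

    # Run with, e.g. (paths are assumptions):
    #   hadoop jar hadoop-streaming.jar -input /monitoring/json \
    #     -output /monitoring/counts -mapper mapper.py -reducer reducer.py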

21 Integrating Closed Solutions
– External (commercial) monitoring: Windows SCOM, Oracle EM Grid Control, Spectrum CA
– These data sources must be integrated, either by injecting final results into the messaging layer or by exporting relevant data at an intermediate stage (a bridge sketch follows below)
(diagram: an integrated product covering sensor, transport, storage, analysis, and visualization/reports, with an export interface feeding the messaging layer)
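As an illustration of the first integration path, a small bridge could read whatever the closed product exports and re-publish it; a sketch assuming a CSV export file and the same assumed broker settings as before:

    # Hypothetical bridge: re-publish rows exported by a closed monitoring
    # product (assumed CSV layout: timestamp,node,metric,value).
    import csv
    import json

    import stomp  # pip install stomp.py

    conn = stomp.Connection([("broker.example.cern.ch", 61613)])  # assumed host/port
    conn.connect("monitoring", "secret", wait=True)               # assumed credentials

    with open("scom_export.csv") as f:  # assumed export file name
        for ts, node, metric, value in csv.reader(f):
            msg = {"timestamp": int(ts),
                   "hardware": {"node": node},
                   "payload": {"metric": metric, "value": float(value)}}
            conn.send(destination="/topic/monitoring.metrics", body=json.dumps(msg))

    conn.disconnect()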

22 Implementation Plan (Pedro Andrade)

23 Transition Plan
– Moving the existing production monitoring services to a new base architecture is a complex task, as these services must keep running continuously
– A transition plan was therefore defined; it foresees a staged approach in which the existing applications gradually incorporate elements of the new architecture

24 Transition Plan (diagram contrasting the OLD application-specific chain with the NEW one: publishers, aggregation, storage/analysis/alarm feeds, portals, and reports)

25 Milestones
– Monitoring.v1 (Q1 2012): AI nodes monitored with Lemon (dependency on Quattor); deployment of messaging broker and Hadoop cluster; testing of other technologies (Splunk)
– Monitoring.v2 (Q2 2012): AI nodes monitored with Lemon (no dependency on Quattor); Lemon data starts to be published via messaging
– Monitoring.v3 (Q4 2012): several clients exploiting the messaging infrastructure; messaging consumers for real-time alarms and notifications; initial data store/analysis for selected use cases
– Monitoring.v4 (Q4 2013): monitoring data published to the messaging infrastructure; large-scale data store/analysis on the Hadoop cluster

26 Monitoring v1
– Several meetings organized: https://twiki.cern.ch/twiki/bin/view/AgileInfrastructure/AgileInfraDocsMinutes
– Short-term tasks identified and tickets created: https://agileinf.cern.ch/jira/secure/TaskBoard.jspa
– Work ongoing in four main areas: messaging broker deployment, Hadoop cluster deployment, testing of Splunk with Lemon data, Lemon agents running on Puppet

27 Monitoring v1
– Deployment of the messaging broker, based on Apollo and RabbitMQ
– Three SL6 nodes have been provided: 2 nodes for production, 1 node for development; each node will run Apollo and RabbitMQ
– Three applications have been identified to start using/testing the messaging infrastructure: OpenStack, MCollective, Lemon
A consumer sketch is shown below.
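On the consuming side, a client such as an alarm feed would subscribe to the same destination. A minimal sketch with stomp.py (version 7+ listener signature); host, port, credentials, destination, metric name, and threshold are all assumptions:

    # Hypothetical consumer: print an alarm when a metric crosses a threshold.
    import json
    import time

    import stomp  # pip install stomp.py

    class AlarmListener(stomp.ConnectionListener):
        def on_message(self, frame):
            record = json.loads(frame.body)
            payload = record.get("payload", {})
            # Metric name and threshold are illustrative assumptions.
            if payload.get("metric") == "cpu_load_1m" and payload.get("value", 0) > 0.9:
                print("ALARM: high load on %s" % record["hardware"]["node"])

    conn = stomp.Connection([("broker.example.cern.ch", 61613)])  # assumed host/port
    conn.set_listener("alarms", AlarmListener())
    conn.connect("monitoring", "secret", wait=True)               # assumed credentials
    conn.subscribe(destination="/topic/monitoring.metrics", id="1", ack="auto")

    while True:        # keep the process alive while messages arrive
        time.sleep(1)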

28 Monitoring v1
– Testing Splunk with Lemon data: Lemon data to be exported from the DB (1 day, 1 metric) into a JSON file and stored in AFS; this data will then be imported into Splunk, and Splunk functionality and scalability will be tested (an export sketch is shown below)
– Started the deployment of a Hadoop cluster: taking the Cloudera distribution; other tools may also be deployed (HBase, Hive, etc.); Hadoop testing using Lemon data (as above) is planned
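A minimal sketch of such an export, writing newline-delimited JSON (a format Splunk indexes well); the sample rows and field names are illustrative assumptions, not the actual Lemon DB schema:

    # Hypothetical export: turn (timestamp, node, value) rows for one metric
    # into one JSON object per line, ready for Splunk to index.
    import json

    # Stand-in for one day of one Lemon metric fetched from the database.
    rows = [
        (1333238400, "lxplus001.cern.ch", 0.31),
        (1333238460, "lxplus001.cern.ch", 0.58),
    ]

    with open("lemon_metric_1day.json", "w") as out:  # assumed AFS-visible path
        for ts, node, value in rows:
            out.write(json.dumps({"timestamp": ts,
                                  "node": node,
                                  "metric": "cpu_load_1m",  # assumed metric name
                                  "value": value}) + "\n")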

29 Monitoring v1/v2
AI nodes monitored with existing Lemon metrics:
– First step: current Lemon sensors/metrics are used for AI nodes; Lemon metadata will still be taken from Quattor; a solution is defined to get CDB-equivalent data
– Second step: current Lemon sensors/metrics are used for AI nodes; Lemon metadata is no longer taken from Quattor; Lemon agents start using the messaging infrastructure

30 Conclusions (Pedro Andrade)

31 Conclusions
– A monitoring architecture has been defined: it promotes sharing of monitoring data between applications, is based on a few core components (transport, storage, etc.), and relies on several existing external technologies that have been identified
– A concrete implementation plan has been identified: it assures a smooth transition for today's applications, enables the new AI nodes to be monitored quickly, and allows moving towards a common system

32 Links
– Monitoring WG Twiki (new location!): https://twiki.cern.ch/twiki/bin/view/MonitoringWG/
– Monitoring WG Report (ongoing): https://twiki.cern.ch/twiki/bin/view/MonitoringWG/MonitoringReport
– Agile Infrastructure TWiki: https://twiki.cern.ch/twiki/bin/view/AgileInfrastructure/
– Agile Infrastructure JIRA: https://agileinf.cern.ch/jira/browse/AI

33 Questions? Thanks!

