IT Monitoring Service Status and Progress


1 IT Monitoring Service Status and Progress
Alberto AIMAR, IT-CM-MM

2 Outline
Monitoring: Data Centres, Experiment Dashboards
Architecture and Technologies
Status and Plans

3 Monitoring
Data Centre Monitoring
- Monitoring of the DCs at CERN and Wigner: hardware, operating systems, and services; Data Centre equipment (PDUs, temperature sensors, etc.)
- Used by service providers in IT and by the experiments
Experiment Dashboards
- Site availability, data transfers, job information, reports
- Used by WLCG, experiments, sites, and users
Both hosted by CERN IT, in different teams

4 Mandate
Focus for 2016:
- Regroup the monitoring activities hosted by CERN IT (Data Centres, Experiment Dashboards, ETF, HammerCloud, etc.)
- Continue the existing services
- Align with CERN IT practices for service management, communication, and tools (e.g. GGUS and SNOW tickets)
Starting with:
- Merging the Data Centres and Experiment Dashboards monitoring technologies
- Reviewing existing monitoring usage and needs (DC, WLCG, etc.)
- Investigating new technologies
Support remains unchanged while feedback is collected and the work proceeds

5 Data Centres Monitoring

6 Data Centres Monitoring

7 Experiment Dashboards
The Experiment Dashboard covers the full range of the experiments' computing activities: job monitoring (analysis and production, real-time and accounting views), data management and transfers, data access, infrastructure monitoring, Site Status Board, SAM3, and the Google Earth dashboard.
It provides information to different categories of users: operation teams, sites, individual users, and the general public (outreach).

8 Experiment Dashboards
Job monitoring, site availability, data management and transfers
Used by experiment operation teams, sites, users, and WLCG

9 WLCG Transfer Dashboard

10 ATLAS Distributed Computing

11 Higgs Seminar

12 Architecture and Technologies

13 Unified Monitoring Architecture
Pipeline: Data Sources (Data Centres, WLCG) → Transport (Kafka) → Processing → Storage/Search → Data Access

14 Processing & Aggregation
Current Monitoring Data Sources z Transport Storage &Search Display Access Data Centres Monitoring Metrics Manager Flume HDFS Kibana Lemon Agent AMQ ElasticSearch Jupyter XSLS Kafka Oracle Zeppelin ATLAS Rucio ElasticSearch Data mgmt and transfers Flume FTS Servers HDFS AMQ DPM Servers Oracle GLED Dashboards (ED) XROOTD Servers ElasticSearch Kibana CRAB2 Oracle Zeppelin Monitoring Job CRAB3 ElasticSearch HTTP Collector WM Agent Processing & Aggregation SQL Collector Farmout Grid Control MonaLISA Collector Spark Real Time (ED) CMS Connect Hadoop Jobs Accounting (ED) PANDA WMS GNI API (ED) ProdSys Oracle PL/SQL Nagios ESPER Infrastructure Monitoring AMQ VOFeed Spark SSB (ED) HTTP GET OIM Oracle PL/SQL SAM3 (ED) HTTP PUT ES Queries GOCDB API (ED) ESPER REBUS

15 Processing & Aggregation
Unified Monitoring Data Sources Transport z Storage &Search Data Access Metrics Manager Lemon Agent XSLS ATLAS Rucio FTS Servers Hadoop HDFS DPM Servers ElasticSearch XROOTD Servers Other Kibana CRAB2 Flume Grafana CRAB3 AMQ Jupyter WM Agent Kafka Processing & Aggregation Zeppelin Farmout Grid Control Other CMS Connect PANDA WMS Spark ProdSys Hadoop Jobs Nagios GNI VOFeed Other OIM GOCDB REBUS

16 Unified Data Sources
Each source type has a dedicated Flume gateway feeding a Flume Kafka sink: AMQ sources → Flume AMQ; databases (FTS, Rucio) → Flume DB; HTTP feeds (XRootD) → Flume HTTP; job and application logs → Flume Log GW; Lemon metrics and syslog → Flume Metric GW
Decouples producers from consumers and jobs from stores
Data is channelled via Flume, validated and modified if necessary
Adding new data sources is documented and fairly simple; a minimal publish sketch follows
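The Flume gateways are configured Java components rather than code, but the publish step they perform can be illustrated. A minimal Python sketch using the kafka-python client; the broker address, topic name, and record fields are all hypothetical:

```python
# Minimal sketch of publishing a validated metric record to the Kafka
# transport layer with kafka-python. Broker, topic, and fields are
# hypothetical; in production this step is done by Flume agents.
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka.example.cern.ch:9092",  # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

record = {
    "producer": "lemon",            # originating data source
    "timestamp": int(time.time()),  # epoch seconds
    "host": "node001.cern.ch",
    "metric": "cpu.load",
    "value": 0.42,
}

# Basic validation before publishing: required fields must be present.
required = {"producer", "timestamp", "metric", "value"}
if required.issubset(record):
    producer.send("monit.metrics", record)
    producer.flush()
```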

17 Unified Processing
Flume Kafka sinks feed a Kafka cluster (used for buffering), which feeds the processing jobs and the Flume sinks towards storage
Processing enriches the data, e.g. FTS transfer metrics are joined with the WLCG topology from AGIS/GOCDB (see the sketch below)
Decouples producers from consumers and jobs from stores
Data volume: 100 GB/day now, 500 GB/day at scale
Retention period: 12 h now, 24 h at scale
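For illustration, a minimal sketch of the enrichment idea in plain Python: attach WLCG site names, as they would come from AGIS/GOCDB topology data, to an FTS transfer record. All field names and topology entries are hypothetical:

```python
# Illustrative enrichment step: resolve source/destination hosts of an
# FTS transfer record to WLCG site names. The topology mapping would in
# reality be fetched from AGIS/GOCDB; entries here are hypothetical.
topology = {
    "eosatlas.cern.ch": "CERN-PROD",
    "dcache.example.gridpp.ac.uk": "UKI-EXAMPLE",
}

def enrich(transfer: dict) -> dict:
    """Return a copy of the transfer record with site names attached."""
    enriched = dict(transfer)
    enriched["src_site"] = topology.get(transfer["src_host"], "UNKNOWN")
    enriched["dst_site"] = topology.get(transfer["dst_host"], "UNKNOWN")
    return enriched

transfer = {"src_host": "eosatlas.cern.ch",
            "dst_host": "dcache.example.gridpp.ac.uk",
            "bytes": 1073741824}
print(enrich(transfer))
```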

18 Data Processing
Stream processing:
- Data enrichment: join information from several sources (e.g. WLCG topology)
- Data aggregation: over time (e.g. summary statistics for a time bin) and over other dimensions (e.g. a cumulative metric for a set of machines hosting the same service); see the Spark sketch below
- Data correlation: advanced alarming, detecting anomalies and failures by correlating data from multiple sources (e.g. data centre topology-aware alarms)
Batch processing: reprocessing, data compression, reports
Technologies: reliable and scalable job execution (Spark), job orchestration and scheduling (Marathon/Chronos), lightweight and isolated deployment (Docker)
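A minimal PySpark sketch of the time-bin aggregation described above: average transfer rate per source site per 5-minute window. The HDFS path, schema, and column names are hypothetical:

```python
# Sketch of Spark time-bin aggregation over records landed in HDFS by
# the Flume sinks. Path and column names (timestamp in epoch seconds,
# src_site, rate_mbps) are assumptions for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("monit-aggregation").getOrCreate()

df = spark.read.json("hdfs:///monit/fts/2016/07/21/*.json")

summary = (
    df.withColumn("ts", F.col("timestamp").cast("timestamp"))
      .groupBy(F.window("ts", "5 minutes"), "src_site")   # 5-minute time bins
      .agg(F.avg("rate_mbps").alias("avg_rate"),
           F.count("*").alias("n_transfers"))
)
summary.write.mode("overwrite").json("hdfs:///monit/fts/aggregated")
```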

19 Unified Access
Flume sinks write to Storage & Search (HDFS, ElasticSearch, others); data is accessed as plots, reports, and scripts, plus CLI and API
Decouples producers from consumers and jobs from stores
Multiple data access methods (dashboards, notebooks, CLI); a query sketch follows
Mainstream and evolving technology
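A small sketch of script/API access, using the official elasticsearch-py client to compute an average over a metric. The endpoint, index name, and field names are hypothetical:

```python
# Sketch of programmatic access to the monitoring data in ElasticSearch.
# Endpoint, index pattern, and fields are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch(["https://es-monit.example.cern.ch:9203"])

query = {
    "query": {"term": {"metric": "cpu.load"}},          # filter one metric
    "aggs": {"avg_load": {"avg": {"field": "value"}}},  # server-side average
    "size": 0,                                          # aggregation only, no hits
}
result = es.search(index="monit-metrics-*", body=query)
print(result["aggregations"]["avg_load"]["value"])
```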

20 Status and Plans

21 WLCG Monitoring
Data Sources and Transport: moving all data via the new transport (Flume, AMQ, Kafka)
Storage and Search: data in ES and Hadoop
Processing: aggregation and processing via Spark
Display and reports: using only standard features of ES, Kibana, Spark, and Hadoop; introducing notebooks (e.g. Zeppelin) and data discovery
General: selecting technologies, learning on the job, looking for expertise; evolving the interfaces (dashboards for users, shifters, experts, managers)

22 WLCG Monitoring

23 Data Centres Monitoring (metrics)
Replacement of the Lemon agents: mainstream technologies (e.g. collectd, sketched below), support for legacy sensors, starting in 2016Q4
Update meter.cern.ch: Kibana and Grafana dashboards, move to the new central ES service, move to the Unified Monitoring
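A sketch of what a collectd read plugin replacing a legacy Lemon sensor might look like, written against collectd's Python plugin API. It runs inside collectd (the Python plugin must be loaded); the plugin and metric names are hypothetical:

```python
# Sketch of a collectd Python read plugin standing in for a legacy
# sensor. Only works inside collectd's embedded interpreter, with the
# Python plugin enabled; plugin/metric names are hypothetical.
import collectd

def read_callback():
    # Example metric: 1-minute load average from the OS.
    with open("/proc/loadavg") as f:
        load1 = float(f.read().split()[0])
    v = collectd.Values(type="gauge",
                        plugin="legacy_sensor",
                        type_instance="load1")
    v.dispatch(values=[load1])

collectd.register_read(read_callback)
```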

24 Data Centres Monitoring (logs)
Currently collecting syslog plus several application logs (e.g. EOS, Squid, HC)
More requests coming for storing further logs (e.g. Castor, Tapes, FTS)
Update the Logs Service to use the Unified Monitoring: archiving in HDFS, processing in Spark (see the sketch below), visualization in ES and Kibana
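A minimal PySpark sketch of the kind of batch log processing this enables: counting ERROR lines per host over syslog files archived in HDFS. The path and log layout are assumptions:

```python
# Sketch of batch log processing in Spark: ERROR lines per host from
# syslog files in HDFS. Path and the assumption that the hostname is
# the 4th whitespace-separated field are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("monit-logs").getOrCreate()

lines = spark.sparkContext.textFile("hdfs:///monit/logs/syslog/2016/07/21/*")

errors_per_host = (
    lines.filter(lambda l: " ERROR " in l)
         .map(lambda l: (l.split()[3], 1))   # field 4: hostname in this layout
         .reduceByKey(lambda a, b: a + b)
)
for host, n in errors_per_host.collect():
    print(host, n)
```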

25 Participating in ES Benchmarking
The ES Service is separate from the Monitoring Service
Looking to use "standard" CERN solutions: VMs, remote storage, etc.
Data flow started a few days ago; the benchmark will take some weeks
Identical ES settings and version on both setups: 3 master nodes (m2_medium VMs), 1 search node (m2_large), 3 data nodes serving the same amount of data
bench01 data nodes: 3 full-node m2 VMs (each VM takes a full hypervisor), 600 GB io2 volumes attached as storage backend (Ceph), loopback device in /var as fast storage (SSD)
bench02 data nodes: 3 full-node VMs running on bcache-enabled hypervisors

26 Conclusions / Services Proposed
Monitor, collect, visualize, process, aggregate, and alarm on metrics and logs
Operate the infrastructure and keep it at scale
Help and support users: interfacing new data sources, developing custom processing, aggregations, and alarms, building dashboards and reports

27 monit.cern.ch

28 Reference and Contact
Dashboard prototypes: monit.cern.ch
Feedback/requests (SNOW): cern.ch/monit-support
Early-stage documentation: cern.ch/monitdocs

29 Examples

30-36 Screenshot-only example slides (no text content)
37 Data Centre Overview (Grafana)
Ildar Nurgaliev, CERN openlab, 11/08/2016

38 FTS Monitoring (Grafana): Transfer Sites Dashboard

39 Sample Plots (Zeppelin)
PyPlot for report-quality plots: 0-access bins, old vs. new datasets, most accessed datasets
Interactive built-in plots for data discovery

40 Zeppelin Notebooks: developer's view and user's view

