Experience in CMS with the Common Analysis Framework
Ian Fisk & Maria Girone

The Objective
- To develop a common system for submitting analysis jobs to the distributed infrastructure
- Leveraging the similarities of the experiments' analysis workflows
- Using components from the PanDA workload management system, integrated into the CMS analysis environment
- Building on the proven scalability and stability of the PanDA system

The Project
- A three-phase project with a defined timeline:
  - Feasibility Study – presented in May 2012 at CHEP, New York (1)
  - Proof of Concept phase – June–December 2012
  - Prototype phase – January–September 2013
- Contributions from CERN IT, ATLAS and CMS
- An opportunity for collaborative work

(1) M. Girone et al., "The Common Solutions Strategy of the Experiment Support Group at CERN for the LHC Experiments", Journal of Physics: Conference Series, 396(3):032048, 2012.

Components Overview
(architecture diagram)

Status of the Test-Bed
- We have integrated the CMS-specific components and exposed them to power users for validation
- We have deployed the CMS instance:
  - Oracle DB cloned from the PanDA production database
  - 10 machines installed for all services
  - 47 CMS sites added to the AutoPyFactory (APF) configuration (a sketch of a per-site entry follows this slide)
  - Configuration of APF done through the ATLAS Grid Information System, which was not easy to separate
- The next step is to demonstrate the compatibility of the experiment-specific components with the currently used CMS production scheduler (based on Condor and GlideinWMS)
- This completes the original test-bed plan for CMS needs
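For illustration, here is a minimal sketch of what a per-site pilot-factory queue entry might look like, written in Python with configparser. The section name, keys, values and wrapper path are assumptions made for the sketch, not the actual APF configuration schema used in the test-bed.

# Hypothetical sketch of one per-site pilot-factory queue entry; the keys
# and values are illustrative assumptions, not the real APF schema.
from configparser import ConfigParser

queues = ConfigParser()
queues["CMS_T2_EXAMPLE_SITE"] = {                # hypothetical CMS site name
    "wmsqueue": "CMS_T2_EXAMPLE_SITE",           # logical queue known to the scheduler
    "batchqueue": "ce.example.org:9619",         # CE endpoint that receives the pilots
    "maxpilots": "200",                          # cap on concurrently submitted pilots
    "pilotwrapper": "/opt/caf/cms-pilot.sh",     # experiment-specific pilot wrapper
}

with open("queues.conf", "w") as f:
    queues.write(f)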

Components Overview
(architecture diagram showing the components: Client UI, Client API, Task Manager, Job Output Manager, Scheduler (PanDA), Resource Provisioning (APF), Monitoring, and the Resources)
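To make the split between generic and experiment-specific layers concrete, the following is a minimal Python sketch of the component boundaries shown in the diagram and described on the next slides. All class and method names are illustrative assumptions, not the actual CRAB3 or PanDA interfaces.

# Illustrative sketch of the component boundaries; names are assumptions.
from abc import ABC, abstractmethod

class Scheduler(ABC):
    """Generic scheduling layer (PanDA in the test-bed, Condor/GlideinWMS later)."""
    @abstractmethod
    def submit_jobs(self, jobs: list[dict]) -> list[str]: ...

class TaskManager:
    """Splits a user task into jobs and hands them to whichever scheduler is configured."""
    def __init__(self, scheduler: Scheduler):
        self.scheduler = scheduler

    def run_task(self, task: dict) -> list[str]:
        jobs = [{"task": task["name"], "job_id": i} for i in range(task["njobs"])]
        return self.scheduler.submit_jobs(jobs)

class JobOutputManager:
    """CMS-specific output handling, kept outside the scheduling layer."""
    def collect(self, job_ids: list[str]) -> None:
        for jid in job_ids:
            print(f"staging out output of {jid}")

class PrintScheduler(Scheduler):
    """Trivial stand-in scheduler used only to make the sketch runnable."""
    def submit_jobs(self, jobs: list[dict]) -> list[str]:
        return [f"job-{j['job_id']}" for j in jobs]

if __name__ == "__main__":
    ids = TaskManager(PrintScheduler()).run_task({"name": "demo", "njobs": 3})
    JobOutputManager().collect(ids)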

Experiment Components
- The experiment-specific elements are in these layers: Client UI, Client API, Task Manager
  - This allows interfacing to other scheduling services
  - The Task Manager is potentially a generic service, e.g. "run A before 10 Bs and then run a C" (see the sketch after this slide)
- The Job Output Manager was written for CMS to avoid integrating the entire ATLAS data management solution into the workflow
  - It can be seen as a general component and be used by other experiments
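The "run A before 10 Bs and then run a C" pattern is just a small dependency graph. A minimal sketch of how a task manager could order it, using Python's standard graphlib (the job names are of course illustrative):

# Sketch of the "run A before 10 Bs and then run a C" workflow as a
# dependency graph; job names are illustrative.
from graphlib import TopologicalSorter

deps = {"A": set()}
deps.update({f"B{i}": {"A"} for i in range(10)})   # each B runs after A
deps["C"] = {f"B{i}" for i in range(10)}           # C runs after all Bs

for job in TopologicalSorter(deps).static_order():
    print("submit", job)   # A first, then B0..B9, then C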

Scheduler (PanDA)
- The scheduling element enforces experiment policy and will therefore have experiment-specific elements
- In the PanDA approach, changes to configuration or scheduling policy involve a new release (illustrated below)
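As a generic illustration of that point (not PanDA code), compare a scheduling limit baked into the code, which can only change with a new release, with one read from configuration at run time:

# Generic illustration only, not PanDA code: hard-coded policy versus
# policy read from an external configuration file.
import json

MAX_JOBS_PER_USER = 500   # hard-coded: changing it requires a new release

def max_jobs_per_user(config_path: str = "policy.json") -> int:
    """Externalized alternative: the limit can change without a release."""
    try:
        with open(config_path) as f:
            return json.load(f).get("max_jobs_per_user", MAX_JOBS_PER_USER)
    except FileNotFoundError:
        return MAX_JOBS_PER_USER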

Resource Provisioning and Monitoring
- Resource provisioning (APF) should be entirely independent of the experiment: a generic component for specific resources
  - The most obvious candidate for a common component or service (a sketch of the idea follows this slide)
- Monitoring is potentially a passive service; it may have experiment-specific views and projections, but does not obviously require experiment-specific services
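The idea of a generic provisioning layer can be illustrated with a minimal pilot-factory loop: look at the demand the scheduler reports per queue and top up pilots accordingly. This is a sketch of the concept only, not AutoPyFactory's actual algorithm, and both helper functions are hypothetical.

# Minimal sketch of a generic pilot-factory cycle; not APF's real logic.
import time

def pending_jobs_per_queue() -> dict[str, int]:
    """Hypothetical: ask the scheduler how many jobs are waiting per queue."""
    return {"CMS_T2_EXAMPLE_SITE": 120, "CMS_T1_EXAMPLE_SITE": 0}

def submit_pilots(queue: str, n: int) -> None:
    """Hypothetical: hand n pilot submissions to the batch/grid layer."""
    print(f"submitting {n} pilots to {queue}")

def factory_cycle(max_per_cycle: int = 50) -> None:
    for queue, pending in pending_jobs_per_queue().items():
        if pending > 0:
            submit_pilots(queue, min(pending, max_per_cycle))

if __name__ == "__main__":
    while True:
        factory_cycle()
        time.sleep(300)   # one provisioning cycle every five minutes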

Next Steps and Plans
- CMS wants to capitalize on the architecture of the CRAB3-PanDA test-bed
- The next important step is to validate the test-bed architecture with the CMS production scheduler (Condor/GlideinWMS); a sketch of such a submission follows this list
  - This will demonstrate that we fully understand the separation of the interface between the experiment-specific services and the scheduling services
- Today CMS uses the same scheduler for production and analysis
  - Moving to PanDA for analysis would add scalability, but also complexity (two independent schedulers)
  - Before CMS embarks on such a system, we need to understand the concrete benefits and the improvements to sustainability
- A direct comparison is only possible after demonstrating the current test-bed with the production scheduler; this work is ongoing, with first results expected in November
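As an illustration of what handing an analysis task to the Condor-based production scheduler could look like from the experiment layer, here is a minimal sketch that writes an HTCondor submit description and calls condor_submit. The wrapper path and job attributes are assumptions; the real CRAB3/GlideinWMS integration is considerably more involved.

# Minimal sketch of handing an analysis task to a Condor-based scheduler.
# The wrapper path and job attributes are illustrative assumptions.
import subprocess
import tempfile

SUBMIT_TEMPLATE = """\
universe   = vanilla
executable = /opt/caf/cms-analysis-wrapper.sh
arguments  = --task {task} --job $(Process)
output     = {task}.$(Process).out
error      = {task}.$(Process).err
log        = {task}.log
queue {njobs}
"""

def submit_task(task: str, njobs: int) -> None:
    with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
        f.write(SUBMIT_TEMPLATE.format(task=task, njobs=njobs))
        subfile = f.name
    # condor_submit is the standard HTCondor submission command
    subprocess.run(["condor_submit", subfile], check=True)

if __name__ == "__main__":
    submit_task("example_analysis_task", njobs=10)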