Ian M. Fisk, Fermilab
February 23, 2006

Global Schedule

External Items
➨ gLite 3.0 is released for pre-production in mid-April
➨ gLite 3.0 is rolled onto production by June 1

CMS Items
➨ CMS is rolling out a new framework, and deploying a new data management system and a new simulation production infrastructure
  - CMS will have 10TB of new event data at the beginning of April
  - CMS will be able to produce 10M events with the new production infrastructure in May
➨ Beginning of June, CMS would like a two-week functionality rerun of the goals of SC3
  - Demonstration and preparation for the 2006 data challenge, CSA06
➨ In July and August CMS will produce 25M events per month (roughly 1TB per day of data) for CSA06
➨ September-October is CSA06
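As a rough cross-check of the figures above, the quoted monthly event count and daily data volume imply an average event size around a megabyte; a sketch in Python (the 30-day month and the derived per-event size are assumptions, not numbers from the slides):

    # Cross-check: 25M events/month vs. "roughly 1 TB per day" for CSA06 preparation.
    events_per_month = 25e6
    data_per_day_tb = 1.0
    days_per_month = 30                    # assumed

    implied_mb_per_event = data_per_day_tb * days_per_month * 1e6 / events_per_month
    print(f"~{implied_mb_per_event:.1f} MB per event on average")   # ~1.2 MB/event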

Tier-1 Transfers

In 2008 CMS expects ~300MB/s to be transferred from CERN to the Tier-1 centers on average
➨ Assume peaks of a factor of two (600MB/s) to recover from downtime and problems
➨ Network provisioning would suggest roughly twice that again for raw capacity

Demonstrate an aggregate transfer rate of 300MB/s, sustained on experiment data to tape at the Tier-1s, by the end of the year
➨ 150MB/s in the spring
➨ 300MB/s after the October service challenge

At the start of the experiment, individual streams will be sent to each Tier-1 center
➨ During the transfer tests, the data sent to each Tier-1 will likely have a lot of overlap
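A minimal sketch of the provisioning arithmetic behind these numbers, using only the factors quoted above:

    # Provisioning arithmetic for the CERN -> Tier-1 export in 2008.
    average_rate_mb_s = 300            # expected average rate from CERN to all Tier-1s
    recovery_factor = 2                # headroom to recover from downtime and problems
    provisioning_factor = 2            # raw network capacity relative to the required peak

    peak_rate_mb_s = average_rate_mb_s * recovery_factor           # 600 MB/s
    raw_capacity_mb_s = peak_rate_mb_s * provisioning_factor       # ~1200 MB/s of raw capacity
    print(peak_rate_mb_s, raw_capacity_mb_s)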

Pieces Needed for CMS

PhEDEx integration with FTS
➨ Expected in the middle of March

CMS Data Management Prototype
➨ Version 0 is released; Version 1 is expected in the middle of March
➨ Currently has the ability to define and track datasets and locations
  - A new version is needed for new Event Data Model data

New Event Data Model data
➨ Expect our first 10TB sample roughly on April 1
➨ The ability to transfer individual files is one component, but the experiment needs to transfer defined groups of files, validate their integrity, and make them accessible

The SC3 rerun clearly demonstrated the capability to transfer at this rate to disk
➨ CMS would like to replicate multiple copies of the 10TB sample from Tier-0 to Tier-1 tape at a rate of 150MB/s
  - This should coincide with the tape throughput tests planned in SC4
  - We would also like to exercise getting the data back for applications
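Transferring "defined groups of files" and validating their integrity is, at its simplest, checking every file of the group against a known size and checksum before declaring the group usable. A minimal illustrative sketch in Python (not the actual PhEDEx or data-management implementation; the group structure is an assumption):

    import hashlib
    import os

    def file_checksum(path, algo="md5", chunk_size=1 << 20):
        """Checksum a local file in fixed-size chunks."""
        digest = hashlib.new(algo)
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                digest.update(chunk)
        return digest.hexdigest()

    def validate_group(expected):
        """Check that every file of a transferred group arrived intact.
        `expected` is an assumed structure: {logical_name: (local_path, size_bytes, md5)}.
        Returns the logical names that failed, so the group is only made
        accessible (or retried) as a unit."""
        failed = []
        for lfn, (path, size, md5) in expected.items():
            ok = (os.path.exists(path)
                  and os.path.getsize(path) == size
                  and file_checksum(path) == md5)
            if not ok:
                failed.append(lfn)
        return failed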

Rates to Tape

Rate goals based on Tier-1 pledges: tape rates during the first half of the year with experiment datasets
➨ ASGC: 10MB/s
➨ CNAF: 25MB/s
➨ FNAL: 50MB/s
➨ GridKa: 15MB/s
➨ IN2P3: 15MB/s
➨ PIC: 30MB/s
➨ RAL: 10MB/s

By the end of the year, we should double the achieved rates to tape
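Summing the per-site pledges reproduces the aggregate targets quoted on the Tier-1 transfer slide; a quick check:

    # Aggregate of the per-site tape-rate goals for the first half of the year.
    tape_rate_mb_s = {"ASGC": 10, "CNAF": 25, "FNAL": 50, "GridKa": 15,
                      "IN2P3": 15, "PIC": 30, "RAL": 10}
    spring_total = sum(tape_rate_mb_s.values())     # 155 MB/s, i.e. the ~150 MB/s spring goal
    end_of_year = 2 * spring_total                  # ~310 MB/s after doubling, i.e. the ~300 MB/s goal
    print(spring_total, end_of_year)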

CMS Pieces

CMS Tier-1 to Tier-2 transfers in the computing model are likely to be very bursty and driven by analysis demands
➨ Networks to the Tier-2s are expected to be between 1Gb/s and 10Gb/s; assuming 50% provisioning and a 25% scale this spring, the desire is to reach ~10MB/s for the worst-connected Tier-2s and ~100MB/s for the best-connected Tier-2s in the spring of 2006
  - Roughly 1TB per day at all Tier-2 centers; up to 10TB per day for well-connected centers

The Tier-2 to Tier-1 transfers are almost entirely fairly continuous simulation transfers
➨ The aggregate input rate into the Tier-1 centers is comparable to the rate from the Tier-0
➨ The goal should be to demonstrate 10MB/s (1TB per day) from the Tier-2s to the Tier-1 centers
➨ The production infrastructure will be incapable of driving the system at this scale, so some pre-created data will need to be used
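The ~10MB/s to ~100MB/s spring targets follow directly from the 1-10Gb/s link range and the quoted 50% provisioning and 25% scale factors; a sketch of the arithmetic:

    def spring_target(link_gb_s, provisioning=0.5, scale=0.25):
        """Usable rate (MB/s) on a Tier-2 link at 50% provisioning and a 25% scale test."""
        return link_gb_s * 1000 / 8 * provisioning * scale

    for link in (1, 10):                             # worst- and best-connected Tier-2 links
        rate = spring_target(link)
        print(f"{link} Gb/s -> ~{rate:.0f} MB/s, ~{rate * 86400 / 1e6:.1f} TB/day")
    # 1 Gb/s  -> ~16 MB/s  (~1.4 TB/day): the ~10 MB/s / ~1 TB per day end of the range
    # 10 Gb/s -> ~156 MB/s (~13.5 TB/day): the ~100 MB/s / ~10 TB per day end of the range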

Schedule and Rates

Schedule Items
➨ March 15: FTS-driven transfers from PhEDEx
➨ Starting in April, CMS would like to drive continuous low-level transfers between the sites that support CMS
  - On average 20MB/s (2TB per day)
  - There is a PhEDEx heartbeat system which is almost complete
  - Additionally, CMS has identified 3 groups in opposing time zones to monitor transfers
  - Use the new EDM data sample; in April we only expect to have 5 days' worth of unique new EDM data
➨ In addition to the low-level transfers, CMS would like to demonstrate the bursting nature of Tier-1 to Tier-2 transfers
  - Demonstrate Tier-2 centers at 50% of their available networking for hour-long bursts
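A quick look at the data volumes behind these goals (the burst example reuses the 1-10Gb/s Tier-2 link range assumed on the earlier slide):

    # Volumes implied by the continuous and burst transfer goals.
    sustained_mb_s = 20
    daily_tb = sustained_mb_s * 86400 / 1e6          # ~1.7 TB/day, i.e. the "2 TB per day" quoted above

    # Hour-long bursts at 50% of a Tier-2's available networking.
    for link_gb_s in (1, 10):
        burst_gb = link_gb_s * 1000 / 8 * 0.5 * 3600 / 1000
        print(f"{link_gb_s} Gb/s link: ~{burst_gb:.0f} GB moved per one-hour burst")
    print(f"continuous transfers: ~{daily_tb:.1f} TB/day")
    # 1 Gb/s -> ~225 GB per burst; 10 Gb/s -> ~2250 GB per burst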

Job Submission

CMS calculates roughly 200k job submissions per day in 2008
➨ The calculation makes a lot of assumptions about the number of active users and the kind of data access

Aim for 50k jobs per day during 2006
➨ CMS will begin transitioning to gLite 3.0 in the pre-production system
  - For EGEE sites, CMS will use the gLite RB for submission (2/3 of resources, 33k jobs per day)
  - For OSG sites, CMS will test the gLite RB for interoperability; we will maintain the direct Condor-G submission capability for scaling and speed if necessary

More application failures come from data publishing and data access problems than from problems with grid submission
➨ CMS needs the new event data model and data management infrastructure to have a reasonable test; currently CMS has only basic data access applications
➨ Data skimming and selection applications will be available by June for the new EDM
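The 33k figure is simply the EGEE share of the 50k-per-day goal; a sketch:

    # Split of the 2006 job-submission goal between submission routes.
    daily_goal = 50_000
    egee_share = 2 / 3                           # fraction of resources reached via the gLite RB
    glite_rb_jobs = daily_goal * egee_share      # ~33k jobs/day to EGEE sites via the gLite RB
    other_jobs = daily_goal - glite_rb_jobs      # remainder to OSG (gLite RB interoperability or Condor-G)
    print(round(glite_rb_jobs), round(other_jobs), f"{daily_goal / 86400:.2f} jobs/s on average")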

Job Submission Schedule

By the beginning of March we expect an analysis grid job submitter capable of submitting new event data model jobs
➨ This is a version of the CMS Remote Analysis Builder (CRAB)

CMS will attempt the job submission targets in the June exercise and again during the CSA06 preparation period in September
➨ During the first two weeks of June, during the CMS-focused section, we would like to hit 25k jobs per day
  - First CMS-scale test of the production release of gLite 3.0
➨ During September we should hit 50k jobs per day

In July and August we will be performing simulation at a high rate
➨ Simulation jobs are slow, so the total number of jobs will not stress the submission infrastructure

Exercising Local Mass Storage

The CMS data model involves access to data that is stored in large disk pools and in disk caches in front of tape
➨ Demonstrating a reasonable rate from mass storage is important to the success of analysis
➨ The goal in 2008 is 800MB/s at a Tier-1 and 200MB/s at a Tier-2
➨ The goal for 2006 is 200MB/s at a Tier-1, or 1MB/s per batch slot, for analysis

During the scale testing with the new EDM in April, CMS will aim to measure the local mass storage serving rate to analysis applications
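The per-slot number is just another way of stating how many concurrent analysis jobs the mass storage must feed; a sketch:

    # The 2006 serving goal expressed as concurrently fed analysis slots.
    per_slot_mb_s = 1                    # 1 MB/s per analysis batch slot, as quoted above
    tier1_goal_mb_s = 200                # 2006 Tier-1 mass-storage serving goal
    concurrent_slots = tier1_goal_mb_s / per_slot_mb_s    # ~200 batch slots fed from mass storage
    print(concurrent_slots)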

Local Catalog

The current baseline solution for SC4 for CMS is the trivial file catalogue
➨ In this mode the incoming application has logical file names, and the existing mass storage name space is used to resolve the physical file names
  - Applications can bootstrap with information in $VO_CMS_SW_DIR
  - A small amount of setup is required by the site, mainly implementing the technique for discovering the top of the name space; currently a small script
➨ CMS has been evaluating LFC as an alternative; we will implement it for larger-scale testing only at specific sites
  - We do need a catalog at CERN to serve as the basis for the Dataset Location Service
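A minimal illustrative sketch of the trivial file catalogue idea. The $VO_CMS_SW_DIR variable comes from the slide, but the prefix file name, its location, and the example logical file name are assumptions for illustration, not the actual CMS site conventions:

    import os

    def storage_prefix():
        """Discover the top of the local mass-storage name space.
        Reads a site-provided file under $VO_CMS_SW_DIR; the file name
        'storage_prefix.txt' stands in for whatever small script or
        configuration the site actually installs."""
        sw_dir = os.environ["VO_CMS_SW_DIR"]
        with open(os.path.join(sw_dir, "storage_prefix.txt")) as f:
            return f.read().strip()              # e.g. "/pnfs/example.org/data/cms"

    def lfn_to_pfn(lfn):
        """Trivial catalogue: physical file name = site prefix + logical file name."""
        return storage_prefix() + lfn

    # Hypothetical example:
    #   lfn_to_pfn("/store/data/sample/file0.root")
    #   -> "/pnfs/example.org/data/cms/store/data/sample/file0.root"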