INFSO-RI-508833 Enabling Grids for E-sciencE www.eu-egee.org
The Grid Challenges in LHC Service Deployment
Patricia Méndez Lorenzo, CERN (IT-GD), Linköping


INFSO-RI-508833 Enabling Grids for E-sciencE
The Grid Challenges in LHC Service Deployment
Patricia Méndez Lorenzo, CERN (IT-GD)
Linköping, 18th October 2005
Annual Workshop on Linux Cluster for Super Computing & 2nd User Meeting for Swedish National Infrastructure for Computing

Outlook
◘ CERN and LHC
◘ LCG: the LHC Computing Model
◘ Service Challenges: phases
◘ Current phase: SC3
◘ Support to the services
◘ Support to the experiments
◘ Experiences of the experiments
◘ Next step: SC4
◘ Summary

CERN
The European Organization for Nuclear Research, the European Laboratory for Particle Physics
◘ Fundamental research in particle physics
◘ Designs, builds & operates large accelerators
◘ Financed by 20 European countries (member states) + others (US, Canada, Russia, India, etc.)
➸ 2000 staff, plus users from all over the world
◘ Next huge challenge: LHC (starts in 2007)
➸ Each experiment: 2000 physicists from 150 universities, with an operational lifetime greater than 10 years

LHC: Physics Goals
◘ Higgs particle
➸ Key particle in the Standard Model that could explain the elementary particle masses
◘ Search for super-symmetric particles and possible extra dimensions
➸ Their discovery would be a serious push for supersymmetric theories or "string theories" aiming at the unification of the fundamental forces in the Universe
◘ Anti-matter issues
➸ Why the Universe is made of matter instead of an equal quantity of matter and antimatter
◘ Understand the early Universe ( – seconds)
➸ A soup of quarks and gluons stabilized into nucleons and then nuclei and atoms

The LHC Experiment
The LHC: generation of 40 million particle collisions (events) per second at the centre of each of the four experiments
➸ Reduced by online computers that filter out a few hundred good events per second
➸ Recorded on disk and magnetic tape at a few hundred MB/sec: about 15 PB/year (see the quick estimate below)
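As a rough consistency check (my own arithmetic, not a figure from the slides), the quoted 15 PB/year implies an average recording rate of a few hundred MB/s:

```python
# Back-of-the-envelope check: what sustained rate does 15 PB/year imply?
PB = 1e15                              # bytes in a petabyte (decimal convention)
seconds_per_year = 365.25 * 24 * 3600

yearly_volume = 15 * PB                # figure quoted on the slide
avg_rate_MBps = yearly_volume / seconds_per_year / 1e6

print(f"Average recording rate: {avg_rate_MBps:.0f} MB/s")   # about 475 MB/s
```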

The LHC Computing Environment
LCG (the LHC Computing Grid) has been developed to build and maintain a storage and analysis infrastructure for the entire high-energy physics community.
◘ LHC begins data taking in summer 2007
➸ Enormous volume of data
● A few PB/year at the beginning of machine operation
● Several hundred PB produced yearly for all experiments by 2012
➸ Large amount of processing power required
◘ As a solution, the world-wide LCG Grid is proposed
➸ Established using a world-wide distributed, federated Grid
➸ Many components, services, software, etc. to coordinate
◘ Takes place at an unprecedented scale
➸ Many institutes, experiments and people working closely together
◘ LCG must be ready at full production capacity, functionality and reliability in less than 1 year!
LCG is an essential part of the chain allowing the physicists to perform their analyses
➸ It has to be a stable, reliable and easy-to-use service

LCG: The LHC Computing Grid
Tier-0 – the accelerator centre
➸ Data acquisition and initial processing of raw data
➸ Distribution of data to the different Tiers
Tier-1 – "online" to the data acquisition process, i.e. high availability
➸ Managed Mass Storage: grid-enabled data service
➸ Data-heavy analysis
➸ National and regional support
● Canada – TRIUMF (Vancouver)
● France – IN2P3 (Lyon)
● Germany – Forschungszentrum Karlsruhe
● Italy – CNAF (Bologna)
● Netherlands – NIKHEF/SARA (Amsterdam)
● Nordic countries – distributed Tier-1
● Spain – PIC (Barcelona)
● Taiwan – Academia Sinica (Taipei)
● UK – CLRC (Oxford)
● US – FermiLab (Illinois), Brookhaven (NY)
Tier-2 – ~100 centres in ~40 countries
➸ Simulation
➸ End-user analysis – batch and interactive

Tier-1 Centres
    Site           Location                      ALICE  ATLAS  CMS  LHCb
 1  GridKa         Karlsruhe, Germany              X      X     X     X
 2  CC-IN2P3       Lyon, France                    X      X     X     X
 3  CNAF           Bologna, Italy                  X      X     X     X
 4  NIKHEF/SARA    Amsterdam, Netherlands          X      X           X
 5  NDGF           Distributed (Dk, No, Fi, Se)    X      X
 6  PIC            Barcelona, Spain                       X     X     X
 7  RAL            Didcot, UK                      X      X     X     X
 8  TRIUMF         Vancouver, Canada                      X
 9  BNL            Brookhaven, US                         X
10  FNAL           Batavia, US                                  X
11  ASCC           Taipei, Taiwan                         X     X
A US Tier-1 for ALICE is also expected.

LCG in the World (May 2005)
◘ More than 140 Grid sites in 34 countries, with 8 PetaBytes of storage
◘ Interoperating infrastructures: one with 30 sites and 3200 CPUs, and Grid3 with 25 universities, 4 national labs and 2800 CPUs

Current Activities
Experiments and Grid teams are ramping up.
Service Challenges (SC): brief introduction
◘ Testing the required services to the necessary level of functionality, reliability and scale
◘ Preparing, hardening and delivering the production LCG environment
◘ Running in an environment as realistic as possible
Data Challenges (DC): general aspects
◘ Experiments test their LCG-based production chains and the performance of the Grid fabric
➸ Processing data from simulated events
➸ Emulating step by step the scale they will have to face during real data taking
◘ From 2005, experiments include SC testing as part of their DC

The Service Challenges
◘ The main purposes
➸ Understand what it means to operate a real grid service – run for days/weeks outside the Data Challenges of the experiments
➸ Trigger/encourage the Tier-1 & Tier-2 planning – move towards real resource planning
➸ Get the essential grid services ramped up to target levels of reliability, availability, scalability and end-to-end performance
➸ Set out the milestones needed to achieve the goals during the service challenges
◘ The focus is on Tier-0 – Tier-1 and large Tier-2 sites

Service Challenge Phases
◘ SC1 (December 2004) and SC2 (March 2005)
➸ Focused on the basic infrastructure
➸ Neither experiments nor Tier-2s were involved
➸ Extremely useful feedback to build up the services and to understand the issues involved in offering stable services
◘ Until end 2005: SC3
➸ Includes increasing levels of complexity and functionality
➸ All experiments and all Tier-1s are now involved!
➸ Including experiment-specific solutions
◘ Beyond 2005: SC4
➸ Designed to demonstrate that all of the offline use cases of LHC can be handled at an appropriate scale
➸ The service that results becomes the production service of LHC

Enabling Grids for E-sciencE INFSO-RI Lönkoping, 18 th October 2005 Patricia Méndez Lorenzo LCG Deployment Schedule

The Current Challenge: SC3
Principal challenge of this phase: prototype the data movement service needed by LHC.
◘ Generalities (1): in terms of architecture
➸ The elements used in this service already existed but had never been interconnected
➸ Using the same architecture and design as in previous challenges

The Current Challenge: SC3
◘ Generalities (2): in terms of setup and service
➸ Setup phase
● Started 1st of July 2005
● Includes a throughput test: maintain for one week an average throughput of 400 MB/s from disk (CERN) to tape (Tier-1) (see the estimate below)
▪ 150 MB/s disk (CERN) -> disk (Tier-1)
▪ 60 MB/s disk (CERN) -> tape (Tier-1)
➸ Service phase
● Starting September and running until the end of 2005
● Including real experiment data
▪ CMS and ALICE at the beginning
▪ ATLAS and LHCb joining in October/November
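To put the setup-phase target in perspective (my own back-of-the-envelope figure, not from the slides), one week at 400 MB/s corresponds to roughly a quarter of a petabyte moved:

```python
# Rough volume implied by the SC3 throughput target
rate_MBps = 400                        # sustained average, CERN disk -> Tier-1
seconds_per_week = 7 * 24 * 3600

volume_TB = rate_MBps * 1e6 * seconds_per_week / 1e12
print(f"One week at {rate_MBps} MB/s ≈ {volume_TB:.0f} TB")   # about 242 TB
```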

Description of the Services Provided in SC3
◘ SRM (Storage Resource Manager, v1.1)
➸ A general interface to manage the different storage systems deployed at the sites in a homogeneous way
➸ v2.1 foreseen for SC4
◘ FTS (File Transfer Service, provided by gLite)
➸ The lowest-level data movement service
➸ Responsible for moving sets of files from one site to another in a reliable way, and that is all
➸ Designed for point-to-point movement of physical files (a minimal usage sketch is shown below)
➸ New versions (to be deployed end of October) dealing with Logical File Names and Catalogs
◘ LFC (LCG File Catalog)
➸ A grid catalog capable of providing local and global cataloguing functionality, including file replication across different sites
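For illustration only (not part of the original slides), a minimal sketch of driving an FTS transfer through the gLite command-line clients from Python. The service endpoint, the SURLs and the terminal-state names are placeholders/assumptions and depend on the FTS version and site configuration:

```python
import subprocess
import time

# Hypothetical endpoint and SURLs; real values are published per site/VO.
FTS_ENDPOINT = "https://fts.example.org:8443/glite-data-transfer-fts/services/FileTransfer"
SRC = "srm://srm-source.example.org/dpm/example.org/home/dteam/testfile"
DST = "srm://srm-dest.example.org/dpm/example.org/home/dteam/testfile"

def run(cmd):
    """Run a command-line tool and return its stdout, raising if it fails."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

# Submit a single source -> destination copy; the client prints a job identifier.
job_id = run(["glite-transfer-submit", "-s", FTS_ENDPOINT, SRC, DST])

# Poll the job until it reaches a terminal state (state names vary between FTS releases).
while True:
    state = run(["glite-transfer-status", "-s", FTS_ENDPOINT, job_id])
    print(job_id, state)
    if state in ("Done", "Finished", "FinishedDirty", "Failed", "Canceled"):
        break
    time.sleep(30)
```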

Description of the Services Provided by SC3
Summary of the services requested by each LHC experiment during the SC3 phase:
    Service                          VOs
    SRM (1.1 in SC3, 2.1 in SC4)     all VOs
    LFC                              ALICE, ATLAS
    FTS                              ALICE, ATLAS, LHCb
    CE                               not required
    Site BDII                        not required
    R-GMA                            not required

Results of SC3 in Terms of Transfers
◘ The target data rates were 50% higher than during SC2
◘ All Tier-1s have participated in this challenge
◘ An important step to gain experience with the services before SC4
Best daily throughputs sustained per site:
    Site                 Daily average (MB/s)
    ASCC, Taiwan           10
    BNL, US               107
    FNAL, US              185
    GridKa, Germany        42
    CC-IN2P3, France       40
    CNAF, Italy            50
    NDGF, Nordic          129
    PIC, Spain             54
    RAL, UK                52
    SARA/NIKHEF, NL       111
    TRIUMF, Canada         34

Support for the Services
◘ During the throughput phase the experiments had the possibility to test the services at CERN
➸ To get confident with their software
➸ To gain familiarity with the services
◘ NOT intended for production use
◘ For the FTS
➸ A set of pilot servers set up at CERN for each experiment
➸ Access restricted to a small set of persons
➸ A set of channels defined between T0 and the T1s
➸ An FTS UI set up at CERN for the tests
◘ For the LFC
➸ Each experiment had a pilot LFC server placed at CERN

Support for the Experiments: VO Boxes
◘ The experiments require a mechanism to run long-lived agents at each site
◘ These agents perform activities on behalf of the experiment
➸ Job submission, monitoring, software installation, etc.
◘ Provided at all SC3 participant sites for all the experiments which require it
◘ The Baseline Services Working Group identified the need for the experiments to run specific services at the sites
Service provided to the experiment:
◘ Dedicated nodes to run such agents
◘ Delivered at each site for each experiment
➸ Direct gsissh login allowed for a certain set of persons per experiment (see the access sketch below)
➸ Registration of a proxy for automatic renewal
➸ Also delivered for the experiment software installation
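As an illustration (not from the slides), access to such a node typically goes through GSI-enabled SSH with a valid grid proxy. The hostname, port number and renewal mechanism below are assumptions and vary per site and VO:

```python
import subprocess

VOBOX_HOST = "vobox.example-site.org"   # hypothetical VO box hostname
VOBOX_PORT = "1975"                     # gsissh port often used for VO boxes (site-dependent)

# 1. Create a VOMS proxy for the experiment's VO (voms-proxy-init is part of the VOMS clients).
subprocess.run(["voms-proxy-init", "--voms", "alice"], check=True)

# 2. Log in to the VO box with GSI authentication (gsissh ships with GSI-OpenSSH).
#    Only the registered set of persons per experiment is authorised to do this.
subprocess.run(["gsissh", "-p", VOBOX_PORT, VOBOX_HOST], check=True)

# 3. On the VO box itself, a proxy is then registered for automatic renewal;
#    the exact command is provided by the site/VO box service and is not shown here.
```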

Experiences with the Experiments
Main goals of the experiments using SC3:
◘ Test that the data transfer and storage services work under realistic use
◘ Evaluation of the services provided
◘ Data Challenge productions using SC3 services
➸ Test of the experiment computing models
➸ Analysis of reconstructed data
◘ Testing the Workload Management components

Experiences with the Experiments: ALICE
◘ ALICE is coordinating with SC3 to run their DC05 in the SC3 framework
◘ Already started in September
Requirements and status:
◘ VO-managed node at each site
➸ Available at all T1 and T2 sites participating in SC3
➸ ALICE performs the job submissions from this node
◘ LFC required as local catalog at all sites
➸ Central catalog (metadata catalog) provided by ALICE
➸ LFC APIs implemented in the ALICE framework (AliEn); see the registration sketch below
➸ A large number of entries already registered (LFC as the unique catalog)
◘ FTS available at all sites
➸ Tests among T0–T1 performed this summer
➸ FTS APIs implemented in the ALICE framework
➸ First bunches of jobs already submitted at CERN
Full production mode
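For illustration, a minimal sketch of registering a produced file into an LFC-backed catalog with the standard LCG data-management command-line tools (lcg-cr copies a file to a storage element and registers it in the catalog, lcg-lr lists its replicas). The LFC host, storage element and LFN path are placeholders, not ALICE's actual configuration; AliEn drives these operations through its own APIs rather than this CLI:

```python
import os
import subprocess

# Placeholders; a real setup would use the LFC host and SE published for the VO.
os.environ["LFC_HOST"] = "lfc.example-site.org"
VO = "alice"
SE = "se.example-site.org"
LFN = "lfn:/grid/alice/sc3-test/run001/output.root"   # hypothetical logical file name
LOCAL_FILE = "file:///tmp/output.root"

# Copy the local file to the storage element and register it in the catalog.
subprocess.run(["lcg-cr", "--vo", VO, "-d", SE, "-l", LFN, LOCAL_FILE], check=True)

# List the replicas now known to the catalog for this logical file name.
replicas = subprocess.run(["lcg-lr", "--vo", VO, LFN],
                          check=True, capture_output=True, text=True).stdout
print(replicas)
```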

Experiences with the Experiments: ATLAS
◘ ATLAS is joining the production in October
Requirements and plans:
◘ VO box deployed at all sites
➸ Required at all sites
◘ Deployment of FTS (throughput phase)
➸ Comparison of FTS performance with the ATLAS framework (Don Quijote)
➸ Following the throughput phase and investigating the integration of FTS into their own system
◘ For the moment, RLS (the previous catalog) entries are being migrated to a global LFC (throughput phase)
➸ Using their pilot as a global copy of the RLS to run some analysis
➸ Cleaning the entries in the catalog before splitting it into many local catalogs
◘ The 2nd phase (from October) foresees tests of their new Data Management infrastructure

Experiences with the Experiments: CMS
◘ CMS has been in production since mid-September
Requirements and status:
◘ VO box not required
◘ Transfers: done with PhEDEx
➸ FTS is not currently used
➸ Integration with PhEDEx likely to happen later
➸ Full production already going on
◘ Catalog: a local file catalog is needed by PhEDEx
➸ Testing the different POOL backends
➸ Functionality and scalability tests

Experiences with the Experiments: LHCb
◘ LHCb is already in production from October
Requirements and status:
◘ VO box not required right now
◘ LFC central catalog
➸ Implementation of the different catalogs in the general data management system
➸ Catalogs centralized at CERN
➸ Very good experience with LFC
◘ Data transfer: FTS
➸ Integration of the FTS client into the LHCb Data Management system already done
➸ LHCb is in full production mode

Service Support Activities
◘ Service Challenge Wiki page
➸ Contact persons
➸ Experiment requirements
➸ Configuration of the services
➸ T0, T1 and T2 information
➸ Daily log files and tracking of the status
◘ Multi-level support structure
➸ First-level support
▪ Box-level monitoring and interventions
➸ Second-level support
▪ Basic knowledge of the applications
➸ Third-level support
▪ Experts
◘ Daily contacts with the experiments
◘ One person providing full support for each experiment
◘ Regular meetings with the experiments and the sites
◘ Regular visits and workshops at each site

Next Step: SC4
The aim of SC4 is to demonstrate that the data processing requirements of the experiments can be fully handled by the Grid at the full nominal data rate of LHC.
◘ All T1s and most T2s involved
◘ A triple "challenge":
➸ Services: successfully completed 6 months before data taking; they have to be ready at a high level of stability
➸ Sites: ramp up their capacity to twice the nominal data rates expected for the production phase
➸ Experiments: their own services will be included in this next phase → their feedback is fundamental
▪ The involvement of the experiments should enable a sizeable fraction of users to migrate to the grid and use it for their daily activity
◘ The service resulting from SC4 becomes the production model
➸ The full LHC communities will now be supported

SC4: Target Data Rates for CERN and the Tier-1 Centres
    Centre          ALICE  ATLAS  CMS  LHCb   Target data rate (MB/s)
    TRIUMF                   X                        50
    CC-IN2P3          X      X     X     X           200
    GridKa            X      X     X     X           200
    CNAF              X      X     X     X           200
    NIKHEF/SARA       X      X           X           150
    NDGF              X      X     X                  50
    PIC                      X     X     X           100
    ASGC                     X     X                 100
    UK, RAL           X      X     X     X           150
    US, BNL                  X                       200
    US, FNAL                       X                 200
    At CERN                                         1600
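As a quick cross-check (my own arithmetic, not a figure from the slides), the per-Tier-1 targets sum exactly to the 1600 MB/s quoted for CERN, which suggests the CERN figure is the aggregate export rate to all Tier-1s:

```python
# Sum of the SC4 per-Tier-1 target rates (MB/s) from the table above
t1_targets = {
    "TRIUMF": 50, "CC-IN2P3": 200, "GridKa": 200, "CNAF": 200,
    "NIKHEF/SARA": 150, "NDGF": 50, "PIC": 100, "ASGC": 100,
    "RAL": 150, "BNL": 200, "FNAL": 200,
}
total = sum(t1_targets.values())
print(total)   # 1600, matching the "At CERN" figure
```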

Summary
◘ LHC, the world's largest collider, will begin to take data in 2007
➸ Computing infrastructure challenges
➸ Data management challenges
◘ The LHC Computing Grid (LCG) has been developed to build and maintain a storage and analysis infrastructure for the entire high-energy physics community
➸ More than 140 sites distributed over 34 countries in a Tier-model structure
◘ The purpose of the Service Challenges is to understand what it means to operate a real grid service and to deal in advance with all the situations the experiments will have to face during real data taking
➸ It is not an exercise, it is the real thing
➸ The current phase (SC3) is being used and tested by the HEP experiments
➸ Next step: SC4 will include all Grid communities and its results will be deployed for the real data taking