
1 The Worldwide LHC Computing Grid. Dr Ian Bird, LCG Project Leader. 1st March 2011. Visit of Dr Manuel Eduardo Baldeón, National Secretary of Higher Education, Science, Technology and Innovation, Republic of Ecuador.

2 The LHC Computing Challenge
- Signal/Noise: 10⁻¹³ (10⁻⁹ offline)
- Data volume: high rate × large number of channels × 4 experiments → ~15 petabytes of new data each year (a rough estimate follows after this list)
- Compute power: event complexity × number of events × thousands of users → ~100 k of (today's) fastest CPUs and ~45 PB of disk storage
- Worldwide analysis & funding: computing is funded locally in major regions and countries; efficient analysis everywhere → GRID technology
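To make the data-volume figure concrete, here is a back-of-the-envelope sketch (not from the slides): the recorded-event rate, event size and running time below are illustrative assumptions, chosen only to show how a number of this order arises.

```python
# Back-of-the-envelope estimate of annual LHC data volume.
# The per-experiment rate and event size are illustrative assumptions,
# not values taken from the slides.

recorded_event_rate_hz = 300        # assumed events written out per second, per experiment
event_size_mb = 1.5                 # assumed raw event size in megabytes
experiments = 4                     # ALICE, ATLAS, CMS, LHCb
seconds_of_running_per_year = 1e7   # a commonly used "accelerator year" approximation

bytes_per_year = (recorded_event_rate_hz * event_size_mb * 1e6
                  * experiments * seconds_of_running_per_year)

print(f"~{bytes_per_year / 1e15:.0f} PB of new data per year")
# With these assumptions: ~18 PB/year, the same order as the ~15 PB quoted on the slide.
```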

3 150 million sensors deliver data … 40 million times per second

4 A collision at LHC

5 The Data Acquisition

6 Tier 0 at CERN: Acquisition, First-pass reconstruction, Storage & Distribution. Data rate: 1.25 GB/s (ions).

7 WLCG data processing model (a toy sketch follows below)
- Tier-0 (CERN): data recording, initial data reconstruction, data distribution
- Tier-1 (11 centres): permanent storage, re-processing, analysis
- Tier-2 (~130 centres): simulation, end-user analysis
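As a rough illustration of the division of labour listed above, the toy structure below maps each tier to its responsibilities and lets you look up where a given task runs. It is an assumption made for illustration only, not WLCG middleware.

```python
# Toy model of the WLCG tier responsibilities listed above.
# Illustrative data structure only, not actual WLCG software.

TIER_ROLES = {
    "Tier-0 (CERN)":         ["data recording", "initial reconstruction", "data distribution"],
    "Tier-1 (11 centres)":   ["permanent storage", "re-processing", "analysis"],
    "Tier-2 (~130 centres)": ["simulation", "end-user analysis"],
}

def tiers_for(task: str) -> list[str]:
    """Return the tiers whose role list includes the given task."""
    return [tier for tier, roles in TIER_ROLES.items() if task in roles]

print(tiers_for("analysis"))           # ['Tier-1 (11 centres)']
print(tiers_for("end-user analysis"))  # ['Tier-2 (~130 centres)']
```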

8 WLCG Collaboration status: Tier 0 plus 11 Tier 1s and 64 Tier 2 federations. Tier-0: CERN. Tier-1 centres: Lyon/CCIN2P3, Barcelona/PIC, De-FZK, US-FNAL, Ca-TRIUMF, NDGF, US-BNL, UK-RAL, Taipei/ASGC, Amsterdam/NIKHEF-SARA, Bologna/CNAF. Today we have 49 MoU signatories, representing 34 countries: Australia, Austria, Belgium, Brazil, Canada, China, Czech Rep, Denmark, Estonia, Finland, France, Germany, Hungary, Italy, India, Israel, Japan, Rep. Korea, Netherlands, Norway, Pakistan, Poland, Portugal, Romania, Russia, Slovenia, Spain, Sweden, Switzerland, Taipei, Turkey, UK, Ukraine, USA.

9 Worldwide resources. Today: >140 sites, ~250k CPU cores, ~100 PB of disk. We said we would have: 15 PB of new data per year, 100 (or 200) k of "today's fastest" CPUs, 45 PB of disk.

10 Successes:
- We have a working grid infrastructure
- Experiments have truly distributed computing models
- The grid has enabled physics output in a very short time
- Network traffic is close to that planned, and the network is extremely reliable
- Significant numbers of people are doing analysis (at Tier 2s)

11 1st year of LHC data
- Writing up to 70 TB/day to tape (~70 tapes per day)
- Tier 0 storage: accepts data at an average of 2.6 GB/s, with peaks >7 GB/s; serves data at an average of 7 GB/s, with peaks >18 GB/s; the CERN Tier 0 moves ~1 PB of data per day (a quick consistency check follows after this list)
- Stored ~15 PB in 2010: ~2 PB/month to tape in proton-proton running, ~4 PB to tape in the heavy-ion (HI) run, >5 GB/s to tape during HI
(Slide charts: data written to tape, GB/day; disk server throughput, GB/s)
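The tape and disk figures above hang together, as the quick arithmetic below shows; the ~1 TB tape-cartridge capacity is an assumption used only to reproduce the "~70 tapes per day" figure.

```python
# Quick consistency check of the Tier-0 rates quoted on the slide.

seconds_per_day = 86_400

# Average acceptance rate of 2.6 GB/s sustained over a day:
accepted_tb_per_day = 2.6e9 * seconds_per_day / 1e12
print(f"accepted: ~{accepted_tb_per_day:.0f} TB/day")   # ~225 TB/day into Tier-0 storage

# 70 TB/day written to tape and ~70 tapes/day imply roughly 1 TB per cartridge
# (an assumed capacity, typical of tape media of that era).
print(f"~{70 / 70:.0f} TB per tape cartridge")

# Serving data at an average of 7 GB/s:
served_pb_per_day = 7e9 * seconds_per_day / 1e15
print(f"served: ~{served_pb_per_day:.1f} PB/day")       # ~0.6 PB/day read back out

# Data accepted plus data served is of order 1 PB/day,
# consistent with "CERN Tier 0 moves ~1 PB of data per day".
```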

12 Significant use of Tier 2s for analysis: the frequently expressed concern that too much analysis would be done at CERN is not borne out. Tier 0 capacity is underused in general, but this is expected to change as luminosity increases. (Slide chart: CPU usage, July.)

13 Data transfer: the capability today is able to manage much higher bandwidths than expected/feared/planned.
- Software: GridFTP, FTS (interacts with endpoints, handles recovery; an illustrative retry sketch follows after this list), experiment-specific layers
- Hardware: light paths, routing, coupling to storage, plus the academic/research networks for Tier 1s and Tier 2s
- Operational: monitoring
- Fibre cut during STEP'09: redundancy meant no interruption
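The recovery role attributed to FTS above can be pictured with the minimal sketch below. It is not the real FTS or GridFTP API; the transfer function and endpoints are hypothetical placeholders, used only to illustrate a transfer queue that retries failed transfers a bounded number of times.

```python
# Minimal sketch of the kind of recovery an FTS-like transfer service provides.
# 'do_transfer' and the endpoint URLs are hypothetical, not real GridFTP/FTS calls.
import random

def do_transfer(src: str, dst: str) -> bool:
    """Placeholder for a real file transfer; fails randomly to simulate glitches."""
    return random.random() > 0.3

def transfer_with_retries(src: str, dst: str, max_attempts: int = 3) -> bool:
    for attempt in range(1, max_attempts + 1):
        if do_transfer(src, dst):
            print(f"{src} -> {dst}: OK on attempt {attempt}")
            return True
        print(f"{src} -> {dst}: failed attempt {attempt}, retrying")
    return False

# Example: queue a few files from Tier-0 to a Tier-1 (hypothetical endpoints).
for f in ["run001.raw", "run002.raw"]:
    transfer_with_retries(f"srm://cern.ch/castor/{f}", f"srm://tier1.example.org/tape/{f}")
```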

14 Data transfers: nearly 1 petabyte per week during LHC running (April to September 2010). The slide chart spans 2009 (the STEP'09 final readiness test plus preparation for data taking) through the start of LHC physics data; traffic on the OPN reached up to 70 Gb/s during the ATLAS early reprocessing campaigns.
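To put these numbers in perspective, the short calculation below (mine, not from the slide) converts "~1 PB/week" into an average line rate and shows how long a petabyte takes at the quoted 70 Gb/s peak.

```python
# Converting the transfer figures on the slide into comparable rates.

PB = 1e15                 # bytes
week_s = 7 * 24 * 3600    # seconds in a week

# ~1 PB per week expressed as a sustained rate:
avg_gbps = 1 * PB * 8 / week_s / 1e9
print(f"1 PB/week is a sustained ~{avg_gbps:.0f} Gb/s")      # ~13 Gb/s average

# Time to move 1 PB at the 70 Gb/s OPN peak:
hours_at_peak = 1 * PB * 8 / 70e9 / 3600
print(f"1 PB at 70 Gb/s takes ~{hours_at_peak:.0f} hours")   # ~32 hours
```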

15 Impact of the LHC Computing Grid: WLCG has been leveraged on both sides of the Atlantic to benefit the wider scientific community.
- Europe: Enabling Grids for E-sciencE (EGEE) 2004-2010; European Grid Infrastructure (EGI) 2010-
- USA: Open Science Grid (OSG) 2006-2012 (+ extension?)
Many scientific applications: archeology, astronomy, astrophysics, civil protection, computational chemistry, earth sciences, finance, fusion, geophysics, high energy physics, life sciences, multimedia, material sciences, …

16 Grid in Latin America: the ROC-LA (Regional Operations Center – Latin America)
- Started September 2009 as a joint initiative between the Brazilian Center for Research in Physics (CBPF, Brazil), the Institute of Nuclear Science of UNAM (ICN-UNAM, Mexico) and the Universidad de los Andes (Uniandes, Colombia)
- Latin American groups in ALICE, ATLAS, CMS and LHCb have grid sites supported by the ROC-LA
- Previously, EC-funded projects (such as EELA) had collaborated with Latin American groups; Ecuador participated in the 2nd phase of EELA

