Slide 1: Preparing the computing solutions for the Large Hadron Collider (LHC) at CERN. Sverre Jarp, openlab CTO, IT Department, CERN. May 2004.

Slide 2: Short overview of CERN

Slide 3: Accelerators and detectors in underground tunnels and caverns, at up to 150 m depth.

Slide 4: CERN in numbers
- Financed by 20 European member states; special contributions also from other countries: USA, Canada, China, Japan, Russia, etc.
- CHF budget (650 M€) to cover operation + new accelerators
- 2,200 staff (and diminishing)
- 6,000 users (researchers) from all over the world
- Broad visitor and fellowship programme

Slide 5: CERN user community
- Europe: 267 institutes, 4,603 users
- Elsewhere: 208 institutes, 1,632 users

Slide 6: Computing at CERN

Slide 7: Data Management and Computing for Physics Analysis
Diagram of the processing chain: the detector feeds an event filter (selection & reconstruction), which produces raw data; reconstruction (and later event reprocessing) turns raw data into event summary data (ESD); event simulation produces equivalent simulated data; batch physics analysis of the processed data extracts analysis objects by physics topic, which are then used in interactive physics analysis.

Slide 8: High Energy Physics computing characteristics
- Independent events (collisions of particles): trivially (read: pleasantly) parallel processing
- Bulk of the data is read-only: new versions rather than updates
- Meta-data in databases linking to "flat" files
- Compute power measured in SPECint (not SPECfp), but good floating-point performance is still important
- Very large aggregate requirements: computation, data, input/output
- Chaotic workload: a research environment in which physics is extracted by iterative analysis by collaborating groups of physicists; unpredictable, effectively unlimited demand
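To make the "pleasantly parallel" point concrete, here is a minimal, self-contained sketch (not CERN software; the Event structure and reconstruct() function are invented for illustration) showing how independent events can simply be partitioned across worker threads, with no synchronisation on the event data:

```cpp
// Minimal sketch of "pleasantly parallel" event processing (illustrative
// only). Because collisions are independent, events can be partitioned
// across workers with no communication between them.
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

struct Event {
    int id;
    double rawEnergy;   // stand-in for the real detector payload
};

double reconstruct(const Event& e) {
    return e.rawEnergy * 0.98;   // pretend calibration/reconstruction step
}

int main() {
    std::vector<Event> events;
    for (int i = 0; i < 100000; ++i) events.push_back({i, 100.0 + i % 50});

    const unsigned nWorkers = 4;
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < nWorkers; ++w) {
        pool.emplace_back([&events, w, nWorkers] {
            // Each worker takes every nWorkers-th event: read-only input,
            // no locks, no dependencies between events.
            for (std::size_t i = w; i < events.size(); i += nWorkers)
                reconstruct(events[i]);
        });
    }
    for (auto& t : pool) t.join();
    std::printf("processed %zu events with %u workers\n", events.size(), nWorkers);
    return 0;
}
```

In practice HEP farms parallelise at the level of whole jobs and files on batch clusters rather than threads, but the reason it scales is the same: events do not depend on one another.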

Slide 9: SHIFT architecture (Scalable Heterogeneous Integrated Facility)
Diagram: interactive servers, batch servers, disk servers, tape servers, and combined batch-and-disk SMPs, interconnected over an Ethernet network, with AFS for shared files.
In 2001 SHIFT won the 21st Century Achievement Award issued by Computerworld.

Slide 10: CERN's computing environment (today)
- High-throughput computing, based on reliable "commodity" technology
- More than 1,500 (dual Xeon processor) PCs running Red Hat Linux
- About 3 Petabytes of data (on disk and tape)

Slide 11: IDE disk servers
Cost-effective disk storage: < 10 CHF/GB (mirrored)
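For scale (an illustrative calculation, not on the slide): at 10 CHF/GB mirrored, one petabyte of usable disk (10^6 GB) is of the order of 10 MCHF, which is why commodity IDE-based disk servers matter for the multi-petabyte volumes discussed in the following slides.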

Slide 12: The LHC challenge

Slide 13: Large Hadron Collider
- A completely new particle accelerator; the largest superconductor installation in the world
- Same tunnel as before: 27 km of magnets
- Super-fluid helium cooled to 1.9 K
- Two counter-circulating proton beams in a field of 8.4 Tesla
- Collision energy: 14 TeV
- Simulation tool: SixTrack

Slide 14: The Large Hadron Collider (LHC) has 4 detectors: ALICE, ATLAS, CMS, and LHCb
- Accumulating data at 10 Petabytes/year (plus replicated copies)
- Storage requirements: raw recording rate up to 1 GB/s per experiment; 2 Petabytes of disk per year
- Total HEP processing needs: 50,000 (100,000) of today's fastest processors
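A rough consistency check (assuming the commonly used figure of roughly 10^7 seconds of effective data taking per year, which is not stated on the slide): 1 GB/s × 10^7 s ≈ 10^7 GB ≈ 10 Petabytes, so one experiment recording near its peak rate over a nominal year already reaches the 10 PB/year scale quoted above.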

Slide 15: The Large Hadron Collider (LHC) goal
- Event display: all charged tracks with pT > 2 GeV; reconstructed tracks with pT > 25 GeV (plus ~30 minimum-bias events overlaid)
- Find new physics, such as the Higgs particle, and get the Nobel prize!
- Selectivity: roughly 1 in 10^13, like picking out one person from a thousand world populations

Slide 16: LHC Computing Grid (LCG) project
- Goal of the project: to prepare, deploy, and operate the computing environment for the experiments to analyse the data from the LHC detectors
- Phase 1: development of common applications, libraries, and frameworks; prototyping of the environment; operation of a pilot computing service
- Phase 2: acquire, build, and operate the LHC computing service
- The Grid is just a tool towards achieving this goal

Slide 17: Computing model (simplified!!)
- Tier-0 (the accelerator centre): filter the raw data; reconstruction into event summary data (ESD); record raw data and ESD; distribute raw data and ESD to the Tier-1s.
- Tier-1: permanent storage and management of raw data, ESD, calibration data, meta-data, analysis data and databases (a grid-enabled data service); data-heavy analysis; re-processing of raw data into ESD; national and regional support; "online" to the data acquisition process; high availability, long-term commitment, managed mass storage.
- Tier-2: well-managed, grid-enabled disk storage; simulation; end-user analysis, both batch and interactive; high-performance parallel analysis (PROOF).
- Sites shown in the diagram: Tier-1 centres RAL, IN2P3, BNL, FZK, CNAF, PIC, ICEPP, FNAL; plus USC, NIKHEF, Krakow, CIEMAT, Rome, Taipei, TRIUMF, CSCS, Legnaro, UB, IFCA, IC, MSU, Prague, Budapest, Cambridge; down to small centres, desktops and portables.

Slide 18: Data distribution, ~70 Gbit/s aggregate
Diagram: data flowing from CERN to the tiered sites (RAL, IN2P3, BNL, FZK, CNAF, USC, PIC, ICEPP, FNAL, NIKHEF, Krakow, Taipei, CIEMAT, TRIUMF, Rome, CSCS, Legnaro, UB, IFCA, IC, MSU, Prague, Budapest, Cambridge).
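To get a feel for what an aggregate of ~70 Gbit/s means per site, here is an illustrative sketch (not LCG software; the even split across the Tier-1 centres is a simplifying assumption, real shares vary by experiment and site):

```cpp
// Illustrative only: divide the slide's ~70 Gbit/s aggregate evenly over
// the Tier-1 centres to estimate the per-site sustained rate.
#include <cstdio>
#include <string>
#include <vector>

int main() {
    const std::vector<std::string> tier1 = {
        "RAL", "IN2P3", "BNL", "FZK", "CNAF", "PIC", "ICEPP", "FNAL"};

    const double aggregateGbps = 70.0;                        // figure from the slide
    const double perSiteGbps   = aggregateGbps / tier1.size();
    const double perSiteMBps   = perSiteGbps * 1000.0 / 8.0;  // Gbit/s -> MB/s

    for (const auto& site : tier1)
        std::printf("CERN -> %-6s : ~%.1f Gbit/s (~%.0f MB/s) sustained\n",
                    site.c_str(), perSiteGbps, perSiteMBps);
    return 0;
}
```

Even under this crude even-split assumption, each Tier-1 has to sustain a multi-Gbit/s class connection, which is why wide-area networking appears as one of the basic technologies on the next slide.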

Slide 19: LCG basics
- Getting the data from the detector to the grid requires sustained data collection and distribution, keeping up with the accelerator
- To achieve the required levels of performance, reliability and resilience at minimal cost (people, equipment), we also have to work on the scalability and performance of some of the basic computing technologies: cluster management, mass storage management, high-performance networking
- (Workshop pointers on the slide: Workshop 3, Workshop 1, Workshop 2, Workshop 5)

Slide 20: Fabric automation at CERN
Diagram of the fabric-management tool chain, covering configuration, installation, monitoring, and fault & hardware management: SWRep (software repository) and CDB (configuration database) feed per-node software and configuration caches through SPMA and NCM; LEMON (with MSA and OraMon) provides monitoring; LEAF (with HMS and SMS) covers fault and hardware management. Includes technology developed by DataGrid.

Slide 21: WAN connectivity
- Itanium-2, single stream: 5.44 Gbps; 1.1 TB transferred in 30 minutes
- We now have to get from an R&D project (DataTAG) to a sustained, reliable service: GEANT, ESNET, ...
- Microsoft enters the stage: multiple streams at 6.25 Gbps (20 April 2004)
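A quick cross-check of the single-stream figure (simple arithmetic, not on the slide): 1.1 TB in 30 minutes is 1.1 × 10^12 bytes × 8 bits / 1800 s ≈ 4.9 Gbit/s sustained, consistent with the quoted 5.44 Gbps peak.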

Slide 22: Preparing for 2007
- The LCG installation is on a tight schedule, due to the need for deployment and development in parallel
- Milestones on the timeline: decisions on final core middleware; demonstrate core data handling and batch analysis; initial service in operation; installation and commissioning; first data

Slide 23: CERN openlab

Slide 24: openlab, the technology focus of CERN/IT
- Industrial collaboration: Enterasys, HP, IBM, Intel, and Oracle are our partners; Voltaire (with InfiniBand switches) has just joined
- Technology aimed at the LHC era: network switches at 10 Gigabit/s; ~100 rack-mounted HP servers; 64-bit computing with Itanium-2 processors; StorageTank storage system with 28 TB and ~1 GB/s throughput

Slide 25: 64-bit porting status
Ported:
- CASTOR (data management subsystem): GPL; certified by authors
- ROOT (C++ data analysis framework): own licence; binaries built with both gcc and ecc; certified by authors
- CLHEP (class library for HEP): GPL; certified by maintainers
- GEANT4 (C++ detector simulation toolkit): own licence; certified by authors
- CERNLIB (all of CERN's FORTRAN software): GPL; in test; ZEBRA memory banks are INTEGER*4
- AliRoot (entire ALICE software framework)
- LCG-2 software from VDT/EDG: GPL-like licence
Being ported:
- CMS ORCA (part of the CMS framework)
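The CERNLIB note ("ZEBRA memory banks are INTEGER*4") points at the classic 32-bit-to-64-bit porting pitfall. As a generic illustration (not CERNLIB/ZEBRA code), the sketch below shows why a 32-bit integer word can no longer hold an address on an LP64 platform such as Itanium-2, and a portable alternative in C/C++ terms:

```cpp
// Generic illustration of the ILP32 -> LP64 porting pitfall: code that
// stores addresses in 32-bit integer words works on IA-32 but would
// truncate pointers on 64-bit Itanium-2.
#include <cstdint>
#include <cstdio>

int main() {
    std::printf("sizeof(int)   = %zu\n", sizeof(int));    // 4 on both ILP32 and LP64
    std::printf("sizeof(long)  = %zu\n", sizeof(long));   // 4 on ILP32, 8 on LP64
    std::printf("sizeof(void*) = %zu\n", sizeof(void*));  // 4 on ILP32, 8 on LP64

    double bank = 3.14;
    // Portable way to carry an address in an integer: uintptr_t is
    // guaranteed to be wide enough to hold a pointer.
    std::uintptr_t handle = reinterpret_cast<std::uintptr_t>(&bank);
    std::printf("address kept losslessly in uintptr_t: %p\n",
                reinterpret_cast<void*>(handle));
    return 0;
}
```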

Slide 26: CERN, "Where the Web was born..."
- CERN is busily preparing for the first arrival of LHC data in 2007
- New and exciting technologies are needed to manage the data seamlessly, around the globe
- Together with our partners (EU, industry, other physics labs, other sciences) we expect to come up with interesting proofs-of-concept and technological spin-offs!
- High Throughput Computing is "on the move"!
(Keywords: People, Motivation, Technology, Science, Innovation)