LHC Computing re-costing for 2006-2010 for the CERN T0/T1 center
Bernd Panzer-Steindel, CERN/IT, 26 June 2003


Slide 1: LHC Computing re-costing for 2006-2010 for the CERN T0/T1 center (26 June 2003, Bernd Panzer-Steindel, CERN/IT)

Slide 2: Introduction

The costing exercise focuses on the implementation of the LHC computing fabric for the CERN T0 + T1 center. This covers phase 2 of the LCG project (2006-2008) and the following 2 years (2009-2010) of operation. The following 6 areas are covered:
 CPU resources
 Disk storage
 Tape storage
 Local area network (LAN)
 Wide area network (WAN)
 System administration
More details:

Slide 3: History

 October 1999: PASTA II
 February 2001: Report of the steering group of the LHC Computing Review ('Hoffmann' report)
 February 2002: Task Force 1 report
 October 2002: PASTA III
 March 2003: Re-costing exercise

Slide 4: Ingredients

To estimate the cost and its evolution over the years, several input parameters need to be taken into account (a minimal sketch of the extrapolation follows this list):
 Technology evolution (PASTA)
 Today's cost reference points for key components (e.g. a 2.4 GHz processor, a 120 GB disk)
 The prediction of the future price development (slope) of the components
 The experiments' estimates of the capacity and resources needed in 2006 and onwards
 The computing architecture and model
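
As a rough illustration of how such an extrapolation can be assembled, here is a minimal Python sketch. The 2003 reference point (50 CHF/SI2000), the slope (0.6 per year) and the purchased capacity are hypothetical placeholders, not the values used in the exercise.

```python
# Minimal cost-extrapolation sketch (illustrative only); the 2003
# reference point and the annual slope are hypothetical placeholders.

def unit_cost(year, ref_year, ref_cost_per_unit, slope):
    """Cost per capacity unit (e.g. CHF per SI2000), assuming
    price/performance improves by a constant factor per year."""
    return ref_cost_per_unit * slope ** (year - ref_year)

def purchase_cost(capacity, year, ref_year, ref_cost_per_unit, slope):
    """Cost of buying 'capacity' new units in a given year."""
    return capacity * unit_cost(year, ref_year, ref_cost_per_unit, slope)

# Example: 5 MSI2000 bought each year, hypothetical 50 CHF/SI2000 in
# 2003, price/performance improving by ~40% per year (slope 0.6).
for year in range(2006, 2011):
    cost = purchase_cost(5e6, year, 2003, 50.0, 0.6)
    print(year, f"{cost / 1e6:.1f} MCHF")
```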

Slide 5: Requirements

 The requirements are defined by key experiment parameters such as trigger rates and event sizes
 Some changes and refinements are due to:
  Much more experience with data challenges and productions
  Better estimates and models
  Different strategies, e.g. ATLAS will keep one copy of the raw data at CERN while CMS will export its copy to the Tier 1 centers
  An optimized model for the 'staging' of the equipment

Slide 6: Architecture (I): Level of complexity

[Diagram: coupling of commodity building blocks at increasing levels of complexity, shown for hardware and software.
 PC level: CPU and disk are physically and logically coupled via the motherboard, backplane, bus and integrating devices (memory, power supply, controller); software: operating system and drivers.
 Cluster level: PCs and storage elements (storage trays, NAS servers, SAN elements) are coupled via the network (Ethernet, Fibre Channel, Myrinet) with hubs, switches and routers; software: batch system, load balancing, control software, hierarchical storage systems.
 World-wide cluster level: coupling via the wide area network; software: Grid middleware.]

Slide 7: Architecture (II)

[Diagram: generic model of a fabric. Application servers, disk servers and tape servers are interconnected by the internal network, which also provides the connection to the external network.]

Slide 8: CPU Resources (I)

Market
 According to Gartner/Dataquest, notebooks had a 33% share of the PC market in Q1 2003. Intel claims that notebook sales will reach 40 million units this year.

Power
 The power consumption per delivered SI2000 is still constant. There are plans (Intel TeraHertz) to reduce this in the future, but they are not convincing yet.
  consequences for the upgrade to 2 MW of power and cooling in the center (a back-of-the-envelope sketch follows)

 Still focusing on the Intel 'deskside' PC
 Additional costs have to be considered:
  infrastructure (racks, cables, consoles, etc.)
  efficiency (batch system, I/O wait, etc.)
  market developments, i.e. the price difference between simple boxes and servers
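
Because the power per SI2000 stays roughly constant, the 2 MW envelope directly caps the installable CPU capacity. A back-of-the-envelope sketch; the watts-per-SI2000 figure is a hypothetical placeholder.

```python
# Back-of-the-envelope capacity check against the 2 MW envelope.
# WATTS_PER_SI2000 is a hypothetical placeholder; the slide only says
# the power per SI2000 stays roughly constant.

WATTS_PER_SI2000 = 0.02     # assumed power per unit of CPU capacity
POWER_BUDGET_W = 2_000_000  # planned 2 MW power/cooling upgrade

max_capacity = POWER_BUDGET_W / WATTS_PER_SI2000
print(f"max installable capacity: {max_capacity / 1e6:.0f} MSI2000")
```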

Slide 9: CPU Resources (II): Processor price/performance evolution

[Plot: processor price/performance evolution over time.]

 Processor reference points and slopes from 'street' prices, the actual purchases in IT during the last 3 years, and PASTA III

Slide 10: CPU Resources (III)

[Table: CPU resource (million SI2000) and cost (million CHF) per year; the values were not preserved in this transcript.]

Slide 11: Disk Storage (I)

 Focusing on the commodity mass market: IDE and SATA disks
 Extra costs:
  only a few disks can be attached to each server, so server costs must be added
  infrastructure (racks, consoles, etc.)
  efficiency of space usage (OS and file system overheads, etc.)
 About 10% high-end storage is needed for databases; it is roughly 4 times more expensive (a blended-cost sketch follows)
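
A minimal sketch of how the extra costs and the high-end fraction fold into a blended price per usable terabyte. Only the 10% high-end fraction and the 4x multiplier come from the slide; every other number is a hypothetical placeholder.

```python
# Blended disk cost per usable TB (illustrative). All numbers except
# the 10% high-end fraction and the 4x multiplier are hypothetical.

BASE_CHF_PER_TB = 3000.0     # assumed commodity IDE/SATA price point
SERVER_OVERHEAD = 1.5        # servers, racks, consoles on top of raw disks
USABLE_FRACTION = 0.8        # OS/file-system overheads reduce usable space
HIGH_END_FRACTION = 0.10     # ~10% high-end storage for databases
HIGH_END_MULTIPLIER = 4.0    # high-end storage is ~4x more expensive

commodity = BASE_CHF_PER_TB * SERVER_OVERHEAD / USABLE_FRACTION
blended = ((1 - HIGH_END_FRACTION) * commodity
           + HIGH_END_FRACTION * commodity * HIGH_END_MULTIPLIER)
print(f"blended cost: {blended:.0f} CHF per usable TB")
```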

Slide 12: Disk Storage (II): Disk price/performance evolution

[Plot: disk price/performance evolution over time.]

 Disk reference points and slopes from 'street' prices, the actual purchases in IT during the last 3 years, and PASTA III

Slide 13: Disk Storage (III)

[Table: disk resource (PB) and cost (million CHF) per year; the values were not preserved in this transcript.]

Slide 14: Tape Storage (I)

 Tape access performance is averaged over the year
  dedicated, guaranteed resources are needed during the central data recording (CDR) of the heavy-ion period
 Tape storage infrastructure:
  number of silos for the tapes
  a new building is needed when the number of silos exceeds 16 (a toy sketch of this rule follows)
  maintenance costs per year
 Technology lifetime is about 5 years
  replacement of the equipment and re-copying of the tapes (a considerable expense)
  the timing of the technology change is crucial
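
A toy sketch of the silo-count rule, assuming hypothetical values for slots per silo and cartridge capacity; only the 16-silo threshold comes from the slide.

```python
# Toy version of the silo rule on the slide: a new building is needed
# once more than 16 silos are required. Slots per silo and cartridge
# capacity are hypothetical placeholders.

import math

SLOTS_PER_SILO = 6000   # assumed cartridge slots per silo
TB_PER_TAPE = 0.2       # assumed 200 GB per cartridge (2003-era)

def silos_needed(total_pb):
    """Silos needed to hold 'total_pb' petabytes on tape."""
    tapes = total_pb * 1000 / TB_PER_TAPE
    return math.ceil(tapes / SLOTS_PER_SILO)

for pb in (10, 20, 30, 40):
    n = silos_needed(pb)
    flag = "  -> new building needed" if n > 16 else ""
    print(f"{pb} PB: {n} silos{flag}")
```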

Slide 15: Tape Storage (II)

[Table: tape storage performance infrastructure (tape drives, tape servers, etc.): resource (GB/s) and cost (million CHF) per year; the values were not preserved in this transcript.]

Slide 16: Tape Storage (III)

[Table: tape storage (tape media, silos, building, replacement): total available tape capacity (PB) and cost (million CHF) per year; the values were not preserved in this transcript.]

Slide 17: Network Infrastructure (I)

[Diagram: hierarchical Ethernet network. CPU servers, disk servers and tape servers attach via Gigabit Ethernet (1000 Mbit/s); the backbone uses multiple 10 Gigabit Ethernet links; the WAN connection is 10 Gigabit Ethernet.]

 Network architecture based on several Ethernet levels in a hierarchical structure
 Implementation staged between 2005 and 2007: 20% - 50% - 100% (a staging sketch follows)
 Designed for an 8000-node fabric
 Completely new backbone based on 10 GE equipment; there are still some uncertainties in this market for high-end routers
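
A small sketch of the staged deployment, spreading an assumed total backbone cost over the 20%-50%-100% installation profile from the slide.

```python
# Staged backbone deployment from the slide: 20% in 2005, 50% in
# 2006, 100% in 2007. The total backbone cost is a hypothetical
# placeholder.

TOTAL_BACKBONE_COST_MCHF = 10.0                  # assumed full cost
STAGING = {2005: 0.20, 2006: 0.50, 2007: 1.00}   # cumulative fractions

installed = 0.0
for year in sorted(STAGING):
    target = STAGING[year]
    spend = (target - installed) * TOTAL_BACKBONE_COST_MCHF
    print(f"{year}: install up to {target:.0%}, spend {spend:.1f} MCHF")
    installed = target
```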

Slide 18: Network Infrastructure (II)

[Table: network resource (GB/s) and cost (million CHF) per year; the values were not preserved in this transcript.]

Slide 19: System Administration (I)

 The previous approach was based on a model of outsourcing the system administration part, costed at about 1000 CHF per node.
 The new model is in-sourced: in the first years, 7 administrators and 2 managers are responsible for system administration, increasing to 12 administrators later.
 This new model seems more appropriate given the experience of the last years and the fact that the anticipated number of nodes is 'only' in the range of ~4000. It reduces the cost to about 400 CHF per node (a per-node cost sketch follows).
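
A per-node cost sketch of the in-sourced model. The staffing numbers and the ~4000 nodes come from the slide; the fully-loaded cost per FTE is a hypothetical placeholder chosen so the result lands near the quoted 400 CHF per node.

```python
# Per-node sysadmin cost under the in-sourced model. The staffing
# numbers and ~4000 nodes are from the slide; the fully-loaded cost
# per FTE is a hypothetical placeholder.

COST_PER_FTE_KCHF = 130.0   # assumed annual cost per person, kCHF
NODES = 4000                # anticipated fabric size

def cost_per_node_chf(staff):
    return staff * COST_PER_FTE_KCHF * 1000 / NODES

print(f"first years (7 admins + 2 managers): "
      f"{cost_per_node_chf(9):.0f} CHF/node")
print(f"later (12 admins): {cost_per_node_chf(12):.0f} CHF/node")
```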

Slide 20: System Administration (II)

[Table: system administration resource (FTE) and cost (million CHF) per year; the values were not preserved in this transcript.]

Slide 21: Wide Area Network (I)

 Optimistically, we should have access to 10 Gbit/s WAN connections as early as 2004
 The move to 40 Gbit/s is much less clear:
  The network providers are still undergoing frequent 'changes' (mergers, Chapter 11)
  Large over-capacity is available
  Today's 40 Gbit/s equipment is very expensive

Slide 22: Wide Area Network (II)

[Table: WAN resource (Gbit/s) and cost (million CHF) per year; the values were not preserved in this transcript.]

Slide 23: Comparison

[Table: comparison of old and new costs, all units in million CHF. Rows: CPU+LAN, Disk, Tape, WAN, Sysadmin, SUM, Budget; two column groups, each with Old, New and New - Old. The numeric values were not preserved in this transcript.]

* A bug in the original paper is corrected here.

Slide 24: Resource changes

Resource      value  change   value  change   (column year labels not preserved)
CPU [MSI2K]    19     12%      34     19%
Disk [PB]      (values not preserved in this transcript)
Tape [PB]      25     -1%      48     -1%

We are in the process of providing a detailed description and explanation of the resource requirement changes. This will be appended to the note describing the calculations and the parameters used in the re-costing Excel sheets.

Slide 25: Summary

 Better than anticipated price developments for the components
 Not technology changes, but market changes are the most worrying factors
 The exercise needs to be repeated regularly (at least once per year)
 Very fruitful and constructive collaboration between IT and the experiments