October 23rd, 2009 Visit of CMS Computing Management at CC-IN2P3
Dominique Boutigny (CC-IN2P3)

The "extra" time allowed by LHC delay has been used in order to strengthen the infrastructure and the critical computing services Special emphasis has been put on Storage  The price to pay was a temporary degradation of the service quality and stability  All the necessary stability is being restored now October 23rd, 2009Dominique Boutigny2

Main improvements
- HPSS migration
- TReqS development and deployment
- pNFS → Chimera migration
- AFS
Still to be done:
- A new batch system will be deployed in 2010
- The selection phase for the new product is currently ongoing
- This should be transparent for Grid applications

We had a big, unexpected problem with CMS-dedicated support during the summer
- Farida Fassi and David Bouvet put in a lot of effort to overcome the difficulties, but the manpower was not enough
- A new CMS support person has been hired, but he will only start in December

Electrical Power
- Slope: 500 W / day
- 3 transformers: 1550 kW, recently upgraded to 3000 kW
- An 800 kW rental transformer was installed during the summer
Work in 2007 / 2008:
- New main patch panel
- New UPS
- Improvement of the battery system
- 880 kW diesel generator
- Electricity distribution reshuffling
- New power factor corrector
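As a rough illustration of what these figures imply, here is a back-of-the-envelope sketch. It assumes the 500 W/day slope applies to the total load and takes a hypothetical starting load of about 1 MW (the power of the existing computer room quoted later in the talk); whether the 800 kW rental unit adds to the upgraded 3000 kW is not stated, so it is ignored. None of this is a calculation from the slides.

```python
# Back-of-the-envelope headroom estimate; the starting load is a hypothetical
# placeholder and none of the logic comes from the slides.

GROWTH_W_PER_DAY = 500.0   # consumption slope quoted on the slide
CAPACITY_KW = 3000.0       # transformer capacity after the recent upgrade
CURRENT_LOAD_KW = 1000.0   # hypothetical starting load (~1 MW existing room)

headroom_kw = CAPACITY_KW - CURRENT_LOAD_KW
days_to_limit = headroom_kw * 1000.0 / GROWTH_W_PER_DAY
print(f"~{days_to_limit:.0f} days (~{days_to_limit / 365:.1f} years) "
      f"before the {CAPACITY_KW:.0f} kW limit at a constant 500 W/day")
```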

Cooling System
- We have added a 3rd chiller (600 kW theoretical cooling power), plus some work on the second chiller to reduce the noise level (currently above the environmental limit)
- A week from now the whole cooling capacity will be installed
- We will then be at the maximum allowed cooling capacity → we will have to stay within this cooling budget until we get the new computer room
- Infrastructure work in 2008 / 2009 (electricity + cooling): 500 k€

New building
[Site plan: existing computer room, footprint (emprise) of a future extension, and the new 32 m x 28 m building]

Needs
Build a simple computing model with the following ingredients:
- Serve ~40 groups with "standard needs"
- Fulfill LHC computing commitments and provide first-class analysis capability
- Expect very significant growth of the astroparticle community's needs (LSST for instance)
- Deduce the required computing capacity
- Add some capacity for network, services, etc.
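A toy sketch of what such a "simple computing model" could look like; every number below is a made-up placeholder, and only the structure of the sum (standard group needs, plus LHC commitments, plus astroparticle growth, plus an overhead for network and services) follows the slide.

```python
# Toy capacity model; all figures are hypothetical placeholders.

N_GROUPS = 40                   # groups with "standard needs"
PER_GROUP_KHS06 = 50.0          # hypothetical capacity per standard group
LHC_COMMITMENTS_KHS06 = 3000.0  # hypothetical WLCG pledges
ASTRO_GROWTH_KHS06 = 1000.0     # hypothetical astroparticle (e.g. LSST) demand
SERVICE_OVERHEAD = 0.10         # extra capacity for network, services, etc.

total_khs06 = (N_GROUPS * PER_GROUP_KHS06
               + LHC_COMMITMENTS_KHS06
               + ASTRO_GROWTH_KHS06) * (1.0 + SERVICE_OVERHEAD)
print(f"Deduced capacity: {total_khs06:.0f} kHS06 (illustrative only)")
```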

Needs (2)
- Assume Moore's law is still valid
- Extrapolate up to 2019
- This is on top of the existing computer room (1 MW); the indicated power is for computing only, power for cooling has to be added
- Due to budget constraints the new computer room will have to start with limited power → modular design, easily scalable
- Chilled water and electricity distribution should be designed for the 2019 target value; equipment (transformers, chillers, UPS, etc.) will be added later
- End up with: racks at 600 kW, then 1.5 MW, then 3.2 MW
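The slide gives the end points of the extrapolation (a 600 kW first stage and a 3.2 MW target for 2019) but not the intermediate arithmetic. The sketch below backs out the implied annual growth in computing demand, assuming purely for illustration that "Moore's law" means performance per watt doubles every two years and that the extrapolation covers a ten-year horizon; neither assumption is a figure from the talk.

```python
# Given the 600 kW first stage and the 3.2 MW 2019 target quoted on the slide,
# back out the annual growth in demanded capacity consistent with them,
# assuming performance per watt doubles every 2 years over a 10-year horizon
# (both assumptions for illustration, not figures from the talk).

def implied_demand_growth(start_kw: float, target_kw: float, years: float,
                          perf_per_watt_doubling_years: float = 2.0) -> float:
    """Annual multiplicative growth in capacity demand such that power rises
    from start_kw to target_kw while hardware efficiency keeps improving."""
    efficiency_gain = 2.0 ** (years / perf_per_watt_doubling_years)
    return (target_kw / start_kw * efficiency_gain) ** (1.0 / years)

if __name__ == "__main__":
    growth = implied_demand_growth(600.0, 3200.0, 10.0)
    print(f"Implied capacity growth: ~{(growth - 1.0) * 100:.0f}% per year")
```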

Design considerations
- The 2-storey building will be devoted to computing equipment (no offices)
  - First storey: services (UPS, chillers, etc.)
  - Second storey: computer room
- We decided to give up the raised floor
- All fluids will come from the top: chilled water – power – network

Schedule
- 5 out of 15 "design and build" teams have been retained after a first selection phase
- Preliminary design documents were received last Monday
- The final choice will be made by Friday, October 30
- We hope to begin construction work in the spring
- New building delivery early 2011
- The main risk is related to French environmental regulations, which may delay the project (6 – 12 months)
- If the delay is too large, we may have to host some computing equipment outside CC-IN2P3 → expensive!