Power and Cooling Challenges at CERN
Tony Cass, CERN IT Department, IHEPCCC Meeting, April 24th 2007

Presentation transcript:

CERN - IT Department, CH-1211 Genève 23, Switzerland, www.cern.ch/it
Power and Cooling Challenges at CERN
IHEPCCC Meeting, April 24th 2007
Tony Cass

Basic Issues

Computing equipment is not becoming more energy efficient
– Or, rather, not as rapidly as performance improves
Rack power density is increasing
– From 1.5kW to 8kW now, with 15+kW foreseen
Power demand will grow with the increasing requirements of LHC computing (see the growth-rate sketch below)
– Conservative assumptions lead to ~20MW by 2020
– "Moore's law" growth in capacity, as seen for LEP, leads to a prediction of ~100MW
"Critical" IT loads are at the planned capacity limit of 250kW now, and demand is growing
– "Critical" => with infinite diesel backup in the event of a severe power outage
– The "physics" load loses power after <10 minutes if both French and Swiss supplies are unavailable
(Housing Future Computing Equipment - 2)
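To put the two growth figures in perspective, here is a minimal sketch, not taken from the slides, that back-solves the constant annual growth rates they imply. It assumes a simple exponential growth model and a starting load of roughly 2.5MW in 2007 (the cooling figure quoted later in this talk); the starting point, the model and the helper implied_cagr are illustrative assumptions, not CERN planning inputs.

# Illustrative only: back-solve the compound annual growth rate implied by
# the power projections on the "Basic Issues" slide. ASSUMPTIONS: ~2.5 MW
# load in 2007 and simple exponential growth; neither is from the slides.

def implied_cagr(p_start_mw, p_end_mw, years):
    """Constant annual growth rate taking p_start_mw to p_end_mw over `years` years."""
    return (p_end_mw / p_start_mw) ** (1.0 / years) - 1.0

start_mw, span = 2.5, 2020 - 2007   # assumed 2007 load, projected to 2020

for label, target_mw in [("conservative (~20 MW)", 20.0),
                         ("'Moore's law' growth (~100 MW)", 100.0)]:
    rate = implied_cagr(start_mw, target_mw, span)
    print(f"{label}: implied growth of ~{rate:.0%} per year")

With these assumptions the conservative scenario corresponds to roughly 17% compound growth per year, and the Moore's-law-like scenario to roughly 33% per year.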

Evolution of power and performance
[Chart: INTEL and AMD processors installed at CERN – 2 cores @ 1 GHz, 2 cores @ 2.4 GHz, 2 cores @ 2.8 GHz, 4 cores @ 2.2 GHz, 4 cores @ 3 GHz, 8 cores @ 2.33 GHz]

Project Power Evolution
(Housing Future Computing Equipment - 5)

Follow on issues

The Meyrin site (with the Computer Centre) is at ~maximum consumption
– 66MVA, limited by the autotransfer system (between the French and Swiss supplies) and by the feed from Prévessin
Diesel capacity is limited to 350kW for the Computer Centre
– We will couple the two 300kVA UPS modules to gain headroom at the expense of redundancy; no clear growth path thereafter – (redundant) local diesel capacity??
B513 is very poorly designed from a modern HVAC standpoint
– Cooling 2.5MW will be a struggle, although there are a number of optimisations still to be made
– CFD simulations are interesting, but hampered by a lack of real data on server air flow rates (a rough airflow estimate follows below)
(Housing Future Computing Equipment - 7)
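The comment about missing server air-flow data can be made concrete with the standard sensible-heat relation Q = P / (rho * cp * dT): the volumetric air flow needed to remove an IT load P with an air temperature rise dT. The sketch below applies it to the rack densities quoted in this talk; the assumed 12 K temperature rise is illustrative, not a measured B513 figure.

# Rough airflow estimate for air-cooled racks: Q = P / (rho * cp * dT).
# Standard air properties; the 12 K temperature rise is an illustrative
# assumption, not a measured value from B513.

RHO_AIR = 1.2        # kg/m^3, air density at ~20 C
CP_AIR = 1005.0      # J/(kg*K), specific heat capacity of air
CFM_PER_M3S = 2119   # 1 m^3/s is roughly 2119 cubic feet per minute

def airflow_m3_per_s(power_w, delta_t_k):
    """Volumetric airflow needed to remove power_w with a temperature rise of delta_t_k."""
    return power_w / (RHO_AIR * CP_AIR * delta_t_k)

for rack_kw in (1.5, 8.0, 20.0):    # rack densities quoted in the talk
    q = airflow_m3_per_s(rack_kw * 1000, delta_t_k=12.0)
    print(f"{rack_kw:4.1f} kW rack: {q:.2f} m^3/s (~{q * CFM_PER_M3S:.0f} CFM)")

At these assumptions a 20kW rack needs on the order of 1.4 m³/s of air, which is why accurate per-server flow rates matter so much for any CFD model of the room.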

What are we doing?

Convincing ourselves air cooling is OK
– Mostly done; power densities of up to ~20kW/rack look achievable within an optimally designed building (long and thin, not square; unobstructed airflows; rigorous hot/cold aisle separation) – a rough sizing sketch follows below
– As an alternative, we prefer "open" racks with a heat exchanger on the back to "closed" racks with internal air flow: better able to cope with failures or, more likely, door openings
Studying future options:
– using the existing computer centre, for example by installing equipment at a higher power/m² density;
– using the "barn" [adjacent to the current machine room];
– using alternative buildings on one of the CERN sites, e.g. the former water tank (B226), the B186 assembly hall and B927 on the Prévessin site;
– renting or purchasing space in a computing centre in the Geneva area;
– purchasing the full computing service from a service provider (e.g. Amazon's computing cloud);
– shipping container options (a stop-gap for 2010/11 if we don't have a definitive solution by then???)
(Housing Future Computing Equipment - 8)
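As a feel for what the ~20kW/rack figure buys in floor space, here is a back-of-the-envelope sketch under purely illustrative assumptions: a nominal 2.5 m² of gross machine-room area per rack (including aisle share) and a 2.5MW IT load as the example. None of these numbers come from the actual B513 or "barn" studies.

# Back-of-the-envelope floor-space check for a target IT load.
# ASSUMPTION: ~2.5 m^2 of gross floor area per rack including aisles;
# this is illustrative, not a figure from the CERN building studies.
import math

AREA_PER_RACK_M2 = 2.5   # assumed gross area per rack

def racks_and_area(target_mw, kw_per_rack):
    """Number of racks and gross floor area needed for target_mw of IT load."""
    racks = math.ceil(target_mw * 1000 / kw_per_rack)
    return racks, racks * AREA_PER_RACK_M2

for kw_per_rack in (8, 15, 20):                      # densities mentioned in the talk
    racks, area = racks_and_area(2.5, kw_per_rack)   # e.g. a 2.5 MW IT load
    print(f"{kw_per_rack:2d} kW/rack: {racks:4d} racks, ~{area:.0f} m^2")

Under these assumptions, moving from 8kW to 20kW per rack cuts the footprint for a 2.5MW load from roughly 780 m² to about 310 m², which is the main attraction of installing equipment at higher density in the existing centre.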

Summary

Power demand will exceed capacity by 2010 at the latest.
Considering options to deliver increased capacity
– but we are behind schedule to meet the 2010 crossover, so stop-gap solutions may be necessary.
Money will be needed (a worked example follows below)
– Intel consider the construction cost of a new centre to be $6/W – roughly $120M for a 20MW facility? But a modular design would spread costs.
– Operation is 350k€ per compute-MW per year, assuming an HVAC overhead of 30%
(Housing Future Computing Equipment - 9)
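The cost figures on this slide combine into a simple estimate. The sketch below uses the quoted $6/W construction cost and 350k€ per compute-MW per year operating cost; whether the $6/W applies to IT watts or to total facility watts is not stated, so treating it as per IT watt is an assumption, as is the choice of a 20MW example facility.

# Illustrative cost arithmetic from the Summary slide's figures.
# ASSUMPTION: the $6/W construction cost is taken per watt of IT (compute)
# load, and the 30% HVAC overhead is used only to estimate total site power.

CONSTRUCTION_USD_PER_W = 6.0       # quoted Intel figure
OPERATION_KEUR_PER_MW_YR = 350.0   # quoted operating cost per compute-MW per year
HVAC_OVERHEAD = 0.30               # cooling overhead assumed in that figure

def costs(compute_mw):
    """Construction cost (M$), yearly operating cost (M EUR) and total site load (MW)."""
    construction_musd = CONSTRUCTION_USD_PER_W * compute_mw   # $/W times MW gives M$
    operation_meur_per_year = OPERATION_KEUR_PER_MW_YR * compute_mw / 1000.0
    facility_mw = compute_mw * (1.0 + HVAC_OVERHEAD)
    return construction_musd, operation_meur_per_year, facility_mw

for mw in (2.5, 20.0):
    build, run, site = costs(mw)
    print(f"{mw:4.1f} MW compute: ~${build:.0f}M to build, "
          f"~{run:.2f} M EUR/year to run, ~{site:.1f} MW total site load")

For a 20MW compute load this gives roughly $120M of construction and about 7M€ per year of operation, with a total site load of around 26MW once the 30% cooling overhead is included.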