Where Does the Power go in DCs & How to get it Back (Foo Camp 2008, 2008-07-12), James Hamilton, web: mvdirona.com/jrh/work, blog: perspectives.mvdirona.com

Presentation transcript:

Where Does the Power go in DCs & How to get it Back
Foo Camp 2008
James Hamilton
web: mvdirona.com/jrh/work
blog: perspectives.mvdirona.com

Agenda
- Power is the important measure
  - Power drives costs: roughly 80% of data center cost goes to providing power and cooling infrastructure
  - Increasing concern about DC power consumption
  - Work done/watt
- Power In: Power Distribution & Optimizations
- Servers: Critical Load & Optimizations
- Heat Out: Mechanical Systems & Optimization

Power Distribution: Utility to CPU
- Power conversions from utility to server (each roughly 98% efficient):
  - High voltage (115kVAC) to medium voltage (13.2kVAC) [differs by geo]
  - Uninterruptible Power Supply (UPS) & generators, running at 13.2kVAC
    - UPS can be rotary or battery; good ones are in the 97% range, but 93% to 94% is much more common
    - Common design: rectify to DC, trickle-charge the batteries, then invert back to AC (~93%)
    - No loss at the generators (please don't start them: ~130 gallons/hour, times 10 or so)
  - 13.2kVAC to 480VAC
  - 480VAC to 208VAC
- Conversions inside the server to CPU & memory:
  - Power supply: 208VAC to 12VDC (80% common, ~95% affordable)
  - VRM: 12VDC to ~1.5VDC (80% common, 90% affordable)
(How these stages compound end to end is worked out in the sketch below.)
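As a rough illustration of how these per-stage efficiencies compound, here is a minimal Python sketch using the "common" numbers from this slide. The stage labels are informal, and the ordering only affects readability, not the product.

```python
# Minimal sketch of the utility-to-CPU conversion chain described above.
# Efficiencies are the "common" values quoted on the slide.

stages = [
    ("115kVAC -> 13.2kVAC (substation)", 0.98),
    ("UPS (double conversion)",          0.93),
    ("13.2kVAC -> 480VAC",               0.98),
    ("480VAC -> 208VAC (PDU)",           0.98),
    ("208VAC -> 12VDC (server PSU)",     0.80),
    ("12VDC -> ~1.5VDC (VRM)",           0.80),
]

remaining = 1.0
for name, eff in stages:
    remaining *= eff
    print(f"{name:34s} stage {eff:.0%}, cumulative {remaining:.0%}")

# Ends around 56% cumulative: nearly half the utility power never reaches
# the CPU and memory rails in this common design.
```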

Power Redundancy at the Geo Level
- Over 20% of the entire DC cost is in power redundancy
  - Batteries able to supply up to 15 minutes at some facilities
  - N+2 generation (2.5MW units) at over $2M each
- Instead, use more, smaller, cheaper data centers
  - Eliminate redundant power & the bulk of the shell costs
- Average UPS is in the 93% range
  - Over 1MW wasted in a 15MW facility (see the arithmetic sketch below)
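A quick sanity check of that last figure, assuming the full 15MW critical load flows through a double-conversion UPS in the 93% range:

```python
# Rough check of the UPS loss quoted above: at ~93% efficiency, the other
# ~7% of a 15MW critical load is dissipated as heat in the UPS.
facility_mw = 15
ups_efficiency = 0.93
print(f"~{facility_mw * (1 - ups_efficiency):.2f}MW lost in the UPS")  # ~1.05MW
```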

Power Distribution Optimization
Rules to minimize power distribution losses:
1. Avoid conversions (fewer transformer steps & an efficient UPS, or no UPS)
2. Increase the efficiency of conversions
3. Keep voltage as high as possible, as close to the load as possible
4. Size voltage regulators (VRM/VRDs) to the load & use efficient parts
5. DC distribution is potentially a small win (with regulatory issues)
Two interesting approaches:
- 480VAC (or higher) to the rack & 48VDC (or 12VDC) within the rack
- 480VAC to the PDU and 277VAC to the load (one leg of 480VAC 3-phase distribution: 480V/√3 ≈ 277V)
Common design: 44% lost in distribution
- 1 * .98 * .98 * .93 * .98 * .8 * .8 => 56% delivered (~4.4MW lost on 10MW total)
Affordable technology:
- 1 * .99 * .99 * .95 * .95 => 88% delivered (~1.2MW lost on 10MW total)
(The two designs are compared in the sketch below.)
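The same arithmetic, written out as a small comparison of the two designs quoted above; the 10MW facility load and the per-stage efficiencies are the ones on the slide.

```python
# Back-of-envelope comparison of the two distribution designs above.
# Assumes a 10MW total facility load.

common_design = [0.98, 0.98, 0.93, 0.98, 0.80, 0.80]  # today's typical chain
affordable    = [0.99, 0.99, 0.95, 0.95]              # fewer, better conversions

def delivered_fraction(stage_efficiencies):
    frac = 1.0
    for eff in stage_efficiencies:
        frac *= eff
    return frac

for name, stages in (("common", common_design), ("affordable", affordable)):
    frac = delivered_fraction(stages)
    print(f"{name:10s}: {frac:.0%} delivered, ~{10 * (1 - frac):.1f}MW lost of 10MW")

# common    : 56% delivered, ~4.4MW lost of 10MW
# affordable: 88% delivered, ~1.2MW lost of 10MW
```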

Critical Load Optimization
- Power proportionality is great, but "off" is even better
  - Today: an idle server consumes ~60% of its full-load power
  - Industry secret: "good" data center server utilization is around ~30%
  - "Off" requires changing workload location
- What limits 100% dynamic workload distribution?
  - Networking constraints: VIPs can't span L2 nets, ACLs are static, manual configuration, etc.
  - Data locality: hard to efficiently move several TB, and the workload needs to be close to its data
  - Workload management: scheduling work over resources, optimizing for power under SLA constraints
- Server power management
  - Most workloads don't fully utilize all resources on the server
  - Need the ability to shut off or de-clock unused server resources
  - Want very low power states that recover quickly
- Move from 30% utilization to 80% (a toy model of the gain is sketched below)
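To make the "off is better" point concrete, here is a toy consolidation model. The linear power-vs-utilization curve between a 60%-of-peak idle floor and full-load power is my simplifying assumption, not something stated on the slide; with it, packing work from 30% onto servers at 80% utilization and powering off the freed servers roughly doubles work done per watt, broadly in line with the ~1.9x used in the summary slide.

```python
# Toy consolidation model: "off" beats per-server power management.
# Assumption (mine): power scales linearly with utilization between a
# 60%-of-peak idle floor and 100% of peak at full load.

IDLE_FRACTION = 0.60

def server_power(utilization, peak_watts=1.0):
    """Power draw of one server at the given utilization (0.0 - 1.0)."""
    return peak_watts * (IDLE_FRACTION + (1 - IDLE_FRACTION) * utilization)

# Before: 8 servers running the fleet at ~30% average utilization.
servers_before, util_before = 8, 0.30
work = servers_before * util_before                  # 2.4 "server-equivalents" of work
power_before = servers_before * server_power(util_before)

# After: pack the same work onto servers at 80% utilization, turn the rest off.
util_after = 0.80
servers_after = work / util_after                    # 3 servers
power_after = servers_after * server_power(util_after)

print(f"power: {power_before:.2f} -> {power_after:.2f} "
      f"({power_after / power_before:.0%} of before)")
print(f"work/watt gain: {(work / power_after) / (work / power_before):.2f}x")
```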

CEMS: Thin-Slice Computing
Cooperative, Expendable, Micro-Slice Servers
- Correct the system-balance problem with a less-capable CPU
  - Too many cores, running too fast for the memory, bus, disk, ...
  - Power consumption scales with the cube of clock frequency (see the sketch below)
- Goal: ¼ the price & much less than ½ the power
  - Utilize high-volume client parts in a server environment
  - Goal: 20 to 50W at under $500
  - 1U form factor or less, with a service-free design
- Longer-term goals
  - High density, shared power supply & boot disk
  - Eliminate components not required in a server
  - Establish the viability of service-free designs
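The cube-law point above is the classic dynamic-power argument: P ≈ C·V²·f, and since supply voltage scales roughly with frequency, P scales roughly with f³. A tiny sketch of what that means for slower, cheaper parts; the 3.0GHz baseline is illustrative, and leakage/static power is ignored.

```python
# Idealized version of the cube law: dynamic power ~ C * V^2 * f, and supply
# voltage scales roughly with frequency, so P ~ f^3.

def relative_dynamic_power(freq_ghz, base_ghz=3.0):
    return (freq_ghz / base_ghz) ** 3

for f in (3.0, 2.0, 1.5):
    print(f"{f:.1f}GHz: ~{relative_dynamic_power(f):.0%} dynamic power, "
          f"{f / 3.0:.0%} throughput vs. the 3.0GHz baseline")

# 1.5GHz comes in at ~13% of the dynamic power for 50% of the throughput,
# which is why slower, cheaper parts can win on work done per watt.
```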

Conventional Mechanical Design
- Server fans (from components to air)
- CRACs (from air to chilled water)
  - Moving air over long distances is expensive
  - Air control is often poor, with hot/cold mixing
- Secondary water circuit (variable flow)
- Primary water circuit (fixed flow)
  - Water-side economizer & A/C evaporator
- Condensate circuit
  - A/C condenser
  - Water-side economizer
  - Cooling tower

Mechanical Optimization
Simple rules to minimize cooling costs:
1. Raise data center temperatures
2. Tight control of airflow with short paths
3. Cooling towers rather than A/C
4. Air-side economization (open the window)
5. Low-grade waste-heat energy reclamation
- Best current designs bring water close to the load but don't use direct water cooling
  - Lower heat densities could be 100% air cooled, but density trends suggest this won't happen
- Common mechanical designs: 24% of total power lost in cooling
  - Assume a reduction to 1/3 of current: 24% down to 8%, for 16% savings (arithmetic sketched below)
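The cooling-savings arithmetic in that last bullet, written out; it assumes cooling is 24% of total facility power, as stated on the slide.

```python
# If cooling is 24% of total facility power and economization cuts cooling
# energy to roughly a third of current, the facility saves about 16% overall.
cooling_share = 0.24
reduced_share = cooling_share / 3                       # ~8% of facility power
print(f"savings: {cooling_share - reduced_share:.0%} of total facility power")  # ~16%
```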

Summary
- Some low-scale facilities are incredibly bad; the estimates below assume a current high-scale installation
- Power distribution savings: ~32%
  - Save 8% in power distribution to the server
  - Save a further 24% of power distribution losses inside the server
- Cooling savings: ~16%
  - Conservatively estimate 1/3 the cooling power using air-side economization
  - 24% loss down to 8%, for a 16% power savings
- Server utilization: ~90%
  - Move from 30% to 80% utilization through DC-wide workload scheduling
  - 30% utilization at ~60% of full-load power to 80% utilization at ~100% of full-load power
  - 2.6x the work at 1.7x more power, for a gain of 90%
- Cooperative, Expendable, Micro-slice Servers (CEMS): ~12%
  - ½ the power but a less capable server (most workloads are memory or disk I/O bound)
  - Conservatively assume 0.8x work done at 0.5x power => 30% savings
- 4.0x gains in work done/watt look attainable:
  - 1 * 1.32 * 1.16 * 1.90 * 1.30 => 3.8x (some overlap between CEMS & power distribution savings; multiplied out in the sketch below)
- Power is the #3 expense in a DC, behind server h/w and power distribution & cooling
  - Data center capital expense savings are nearly 100% driven by power
  - Reductions in power reduce capex and opex, and are good for the environment
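Multiplying out the component gains listed above reproduces the ~3.8x figure; the slide rounds to ~4x as "attainable" while noting some overlap between the CEMS and power-distribution savings.

```python
# Roll-up of the individual work-done/watt gains from the summary above.
gains = {
    "power distribution": 1.32,
    "cooling":            1.16,
    "server utilization": 1.90,
    "CEMS servers":       1.30,
}
total = 1.0
for component, gain in gains.items():
    total *= gain
print(f"combined work done/watt gain: ~{total:.1f}x")  # ~3.8x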

Slides
- These slides: …werSavingsFooCamp08.ppt
- Perspectives blog: http://perspectives.mvdirona.com