
Where Does the Power go in DCs & How to get it Back


1 Where Does the Power go in DCs & How to get it Back
James Hamilton, Foo Camp 2008

2 Agenda
- Power is the important measure
  - Power drives costs: roughly 80% of data center cost goes to providing power and cooling infrastructure
  - Increasing concern about DC power consumption
  - The metric that matters is work done/watt
- Power In: Power Distribution & Optimizations
- Servers: Critical Load & Optimizations
- Heat Out: Mechanical Systems & Optimization

3 Power Distribution: Utility to CPU
- Power conversions to the server (each roughly 98% efficient; the chain is walked through in code below):
  - High voltage (115kVAC) to medium voltage (13.2kVAC) [differs by geo]
  - Uninterruptible Power Supply (UPS) & generators, running at 13.2kVAC
    - UPS can be rotary or battery; good ones are in the 97% range, but 93-94% is much more common
    - Common design: rectify to DC, trickle-charge the batteries, then invert back to AC (~93%)
    - No loss at the generators (please don't start them: ~130 gallons/hour, times 10 or so)
  - 13.2kVAC to 480VAC
  - 480VAC to 208VAC
- Conversions in the server to CPU & memory:
  - Power supply: 208VAC to 12VDC (80% common, ~95% affordable)
  - VRM: 12VDC to ~1.5VDC (80% common, 90% affordable)
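To make the chain concrete, here is a minimal sketch (not from the deck) that walks a hypothetical 10MW utility feed through the conversion stages above, using the "common" efficiency figures quoted on this slide:

```python
# Model the utility-to-CPU chain as (stage, efficiency) pairs and
# compute how much of the feed survives each conversion.
stages = [
    ("115kVAC -> 13.2kVAC", 0.98),
    ("UPS (double-conversion)", 0.93),
    ("13.2kVAC -> 480VAC", 0.98),
    ("480VAC -> 208VAC", 0.98),
    ("PSU 208VAC -> 12VDC", 0.80),
    ("VRM 12VDC -> ~1.5VDC", 0.80),
]

power_mw = 10.0  # hypothetical utility feed
for name, eff in stages:
    power_mw *= eff
    print(f"{name:24s} eff={eff:.2f} -> {power_mw:.2f} MW")
# Roughly 5.6 MW of the original 10 MW reaches CPU and memory.
```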

4 Power Redundancy at Geo-Level
- Over 20% of entire DC cost is in power redundancy
  - Batteries able to supply up to 15 min at some facilities
  - N+2 generation (2.5MW each) at over $2M apiece
- Instead, use more, smaller, cheaper data centers
  - Eliminate redundant power & the bulk of shell costs
- Average UPS is in the 93% range: over 1MW wasted in a 15MW facility (checked below)
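The "over 1MW wasted" figure follows directly from the UPS efficiency; a quick back-of-envelope check:

```python
# Back-of-envelope for the loss figure above (values from the slide).
facility_mw = 15.0
ups_efficiency = 0.93  # "average UPS in the 93% range"
wasted_mw = facility_mw * (1 - ups_efficiency)
print(f"{wasted_mw:.2f} MW lost in the UPS alone")  # ~1.05 MW, i.e. "over 1MW"
```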

5 Power Distribution Optimization
- Rules to minimize power distribution losses:
  - Avoid conversions (fewer transformer steps & an efficient UPS, or none)
  - Increase the efficiency of each conversion
  - Bring high voltage as close to the load as possible
  - Size voltage regulators (VRM/VRDs) to the load & use efficient parts
  - DC distribution is potentially a small win, though with regulatory issues
- Two interesting approaches:
  - 480VAC (or higher) to the rack & 48VDC (or 12VDC) within
  - 480VAC to the PDU and 277VAC to the load (1 leg of 480VAC 3-phase distribution)
- Common design: 44% lost in distribution (both chains multiplied out below)
  - 1*.98*.98*.93*.98*.8*.8 => 56% delivered (~4.4MW lost on 10MW total)
- Affordable technology: 1*.99*.99*.95*.95 => 88% delivered (~1.2MW lost on 10MW total)
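A small sketch multiplying out both scenarios (per-stage efficiencies are the slide's own; `math.prod` requires Python 3.8+):

```python
from math import prod

# Per-stage efficiencies from the slide.
common     = [0.98, 0.98, 0.93, 0.98, 0.80, 0.80]  # => ~56% delivered
affordable = [0.99, 0.99, 0.95, 0.95]              # => ~88% delivered

for name, chain in [("common", common), ("affordable", affordable)]:
    delivered = prod(chain)
    lost_mw = 10.0 * (1 - delivered)  # on a 10MW facility
    print(f"{name:10s} delivered={delivered:.0%}  lost={lost_mw:.1f} MW of 10 MW")
```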

6 Critical Load Optimization
- Power proportionality is great but "off" is even better
  - Today: an idle server consumes ~60% of its full-load power
  - Industry secret: "good" data center server utilization is around ~30%
  - "Off" requires the ability to change workload location
- What limits 100% dynamic workload distribution?
  - Networking constraints: VIPs can't span L2 nets, ACLs are static, manual configuration, etc.
  - Data locality: hard to efficiently move several TB, & the workload needs to be close to its data
  - Workload management: scheduling work over resources, optimizing for power under SLA constraints
  - Server power management:
    - Most workloads don't fully utilize all resources on a server
    - Need the ability to shut off or de-clock unused server resources
    - Very low-power states that recover quickly are needed
- Move from 30% utilization to 80% (see the toy model below)
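A toy model of the utilization argument, assuming (as the idle figure above suggests) that server power rises linearly from 60% of peak at idle to 100% at full load:

```python
# Toy model: power as a linear function of utilization, anchored at
# ~60% of peak when idle (assumption, per the slide's idle figure).
def power_fraction(util: float) -> float:
    return 0.60 + 0.40 * util

for util in (0.30, 0.80):
    print(f"util={util:.0%}  power={power_fraction(util):.0%}  "
          f"work/watt={util / power_fraction(util):.2f}")
# util=30% -> work/watt ~0.42; util=80% -> ~0.87: roughly 2x more work
# per watt, before counting servers that consolidation lets you turn off.
```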

7 CEMS: Thin Slice Computing
- Cooperating Expendable Micro-Slice Servers
- Correct the system balance problem with a less-capable CPU
  - Too many cores, running too fast for memory, bus, disk, …
  - Power consumption scales with roughly the cube of clock frequency (illustrated below)
- Goal: ¼ the price & much less than ½ the power
  - Utilize high-volume client parts in a server environment
  - Goal: 20 to 50W at under $500
  - 1U form factor or less, with a service-free design
- Longer-term goals:
  - High-density, shared power supply & boot disk
  - Eliminate components not required for server use
  - Establish the viability of service-free designs
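The cube rule of thumb comes from dynamic power scaling roughly as C·V²·f, with supply voltage V tracking frequency, so P ~ f³. A quick illustration (the specific clock ratios are hypothetical):

```python
# P ~ f^3 rule of thumb: dynamic power ~ C * V^2 * f, with V tracking f.
def relative_power(freq_ratio: float) -> float:
    return freq_ratio ** 3

for ratio in (1.0, 0.8, 0.5):
    print(f"clock at {ratio:.0%} of peak -> ~{relative_power(ratio):.0%} "
          f"of peak dynamic power")
# Halving the clock cuts dynamic power ~8x, which is why modest client
# CPUs can win on work done per watt.
```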

8 Conventional Mechanical Design
The heat path from server components to outside air:
- Server fans (from components to air)
- CRACs (from air to chilled water)
  - Moving air over long distances is expensive
  - Air control is often poor, with hot/cold mixing
- Secondary water circuit (variable flow)
- Primary water circuit (fixed flow)
- Water-side economizer & A/C evaporator
- Condensate circuit
- A/C condenser
- Water-side economizer
- Cooling tower

9 Mechanical Optimization
- Simple rules to minimize cooling costs:
  - Raise data center temperatures
  - Tight control of airflow with short paths
  - Cooling towers rather than A/C
  - Air-side economization (open the window)
  - Low-grade waste-heat energy reclamation
- The best current designs bring water close to the load but don't use direct water cooling
- Lower heat densities could be 100% air cooled, but density trends suggest this won't happen
- Common mechanical designs lose 24% to cooling
  - Assume a reduction to 1/3 of current: 24% down to 8%, for a 16% savings (worked below)
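The cooling estimate spelled out, with figures from the slide:

```python
# Cooling savings arithmetic from the slide.
mechanical_fraction = 0.24           # share of total power spent on cooling
optimized = mechanical_fraction / 3  # assume economization cuts it to 1/3
savings = mechanical_fraction - optimized
print(f"cooling drops from {mechanical_fraction:.0%} to {optimized:.0%} "
      f"of total power, a {savings:.0%} saving")  # 24% -> 8%, saving 16%
```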

10 Summary
- Some low-scale facilities are incredibly bad
- Assuming a current high-scale installation:
  - Power distribution savings: ~32%
    - Save 8% in power distribution to the server
    - Save a further 24% of power distribution losses inside the server
  - Cooling savings: ~16%
    - Conservatively estimate 1/3 the cooling power using air-side economization
    - 24% loss down to 8%, for a 16% power savings
  - Server utilization: ~90%
    - Move from 30% to 80% utilization through DC-wide workload scheduling
    - From 30% utilization at 60% of full-load power to 80% utilization at 100% of full-load power
    - 2.6x the work at 1.7x the power, for a gain of 90%
  - Cooperative, Expendable, Micro-slice Servers: ~12%
    - Half the power but a less-capable server (most workloads are memory or disk I/O bound)
    - Conservatively assume .8x the work done at .5x the power => 30% savings
  - 4.0x gains in work done/watt look attainable
    - 1 * 1.32 * 1.16 * 1.90 * 1.30 => 3.8x (some overlap between CEMS & power distribution savings; multiplied out below)
- Power is the #3 expense in a DC, behind server h/w and power distribution & cooling
- Data center capital expense savings are nearly 100% driven by power
- Reductions in power reduce capex and opex & are good for the environment
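The 4x headline, multiplied out from the slide's own factors:

```python
# Combine the per-area gains quoted on this slide.
factors = {
    "power distribution": 1.32,
    "cooling":            1.16,
    "server utilization": 1.90,
    "CEMS servers":       1.30,
}
total = 1.0
for name, gain in factors.items():
    total *= gain
print(f"combined gain in work done/watt: ~{total:.1f}x")  # ~3.8x
```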

11 Slides
- These slides: werSavingsFooCamp08.ppt
- Perspectives blog:

