Fred Chong 290N Green Computing


Datacenter Basics

Figure: Storage hierarchy of a warehouse-scale computer

Performance Variations
Figure: Latency, bandwidth, and capacity of a warehouse-scale computer

Server Comparison: HP Integrity Superdome (Itanium2) vs. HP ProLiant ML350 G5
- Processor: 64 sockets, 128 dual-threaded 1.6 GHz Itanium2 cores, 12 MB last-level cache vs. 1 socket, quad-core 2.66 GHz X5355, 8 MB last-level cache
- Memory: 2048 GB vs. 24 GB
- Disk storage: 320,974 GB (7056 drives) vs. 3961 GB (105 drives)
- TPC-C price/performance: $2.93/tpmC vs. $0.73/tpmC
- Price/performance, server HW only: $1.28 vs. $0.10 per transaction/minute
- Price/performance, server HW only, no discounts: $2.39 vs. $0.12 per transaction/minute

Cost Proportional to Power
- Datacenter cost is proportional to the power delivered, typically $10-20/W
- Power delivery chain: 60-400 kV transmission lines, stepped down to 10-20 kV medium voltage, then 110-480 V low voltage

UPS: Uninterruptible Power Supply
- Batteries or a flywheel
- AC-DC-AC conversion
- Conditions the power feed: removes spikes, sags, and harmonic distortion
- Housed in a separate UPS room
- Sizes range from hundreds of kW to 2 MW

PDUs: Power Distribution Units
- Breaker panels: input 200-480 V, output many 110 V or 220 V circuits
- 75-225 kW per PDU, divided into 20-30 A circuits (max ~6 kW each)
- Redundancy from two independent power sources
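The per-circuit limit above can be sanity-checked with a small sketch. The 208 V feed, the 80% continuous-load derating, and the 520 W example server are assumptions for illustration, not figures from the slides:

```python
# Sketch: usable power on one PDU branch circuit. The 208 V feed and the
# 80% continuous-load derating are assumptions, not from the slides.

def circuit_capacity_w(amps: float, volts: float = 208.0, derate: float = 0.8) -> float:
    """Usable watts on one branch circuit."""
    return amps * volts * derate

def servers_per_circuit(server_w: float, amps: float) -> int:
    """How many servers of a given nameplate power fit on one circuit."""
    return int(circuit_capacity_w(amps) // server_w)

# A 30 A circuit at full rating is 30 x 208 = 6240 W, near the slide's ~6 kW
# maximum; with the derating, ~5 kW is usable.
print(round(circuit_capacity_w(30)))      # 4992
print(servers_per_circuit(520, 30))       # 9
```

This also foreshadows the discretization problem on a later slide: capacity is consumed in whole-server increments, so part of each circuit goes unused.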

Paralleling
- Multiple generators or UPSs feed a shared bus
- N+1: tolerates one failure
- N+2: one unit in maintenance plus one failure
- 2N: redundant pairs

Cooling

Cooling Steps
- 12-14 °C coolant leaves the chiller
- 16-20 °C air at the CRAC (Computer Room Air Conditioner)
- 18-22 °C air at the server intake
- Warmed coolant returns to the chiller

"Free Cooling": pre-cool the coolant before it reaches the chiller
- Water-based cooling towers use evaporation; they work in moderate climates but freeze if it gets too cold
- Glycol-based radiators outside the building work in cold climates

Cooling is Critical
- A datacenter would fail within minutes without cooling
- Cooling is therefore backed up by generators and UPSs
- This adds more than 40% to the critical electrical load

Airflow
- Roughly 100 cfm (cubic feet per minute) per server
- 10 servers therefore require 1000 cfm from the perforated tiles
- Typically no more than 150-200 W/sq ft power density
- Recirculation: one server's hot exhaust is drawn into a neighbor's intake
- Some designs avoid this with overhead ducts
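The airflow budget above can be sketched directly. The ~500 cfm delivered per perforated tile is an assumed typical value, not a figure from the slides:

```python
import math

# Sketch: airflow budgeting from the slide's ~100 cfm-per-server rule of
# thumb. The ~500 cfm per perforated tile is an assumed typical value.

CFM_PER_SERVER = 100
CFM_PER_TILE = 500   # assumption: a reasonably well-fed perforated tile

def airflow_needed(n_servers: int) -> int:
    return n_servers * CFM_PER_SERVER

def tiles_needed(n_servers: int) -> int:
    return math.ceil(airflow_needed(n_servers) / CFM_PER_TILE)

print(airflow_needed(10))   # 1000 cfm, matching the slide
print(tiles_needed(10))     # 2 tiles under the assumed per-tile flow
```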

Variations
In-rack cooling:
- Water-cooled coils next to the servers
- Costs: plumbing, and potential damage from leaks (earthquake zones!)
Container-based datacenters:
- A shipping container, 8' x 8.5' x 40'
- Similar to in-rack cooling, but applied to the whole container
- Supports higher power densities

Power Efficiency
PUE (power usage effectiveness): total facility power divided by IT equipment power; it measures the overhead of the datacenter power and cooling infrastructure.

Poor PUEs
- 85% of datacenters have a PUE > 3; only 5% reach PUE = 2.0
- Chillers add 30-50% overhead, CRACs 10-30%, UPSs 7-12% (AC-DC-AC conversion)
- Humidifiers, PDUs, and lighting add the rest
- The EPA calls a PUE of 1.4 "achievable" by 2011
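The overhead figures above compose into a PUE estimate. The mid-range picks below are illustrative, not measurements:

```python
# Sketch: PUE built up from per-subsystem overheads, each expressed as a
# fraction of the IT (critical) load. The mid-range picks are illustrative.

def pue(overheads) -> float:
    """PUE = total facility power / IT power = 1 + sum of overhead fractions."""
    return 1.0 + sum(overheads.values())

example = {
    "chillers": 0.40,   # slide: 30-50%
    "crac": 0.20,       # slide: 10-30%
    "ups": 0.10,        # slide: 7-12%
    "misc": 0.05,       # humidifiers, PDUs, lighting (assumed)
}
print(round(pue(example), 2))   # 1.75
```

Even with mid-range subsystem overheads the facility delivers barely over half of its input power to IT equipment, which is why most real datacenters land well above PUE 2.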

Improvements
- Evaporative cooling
- More efficient air movement
- Eliminating power conversion losses
- Google reports PUE = 1.21; several other companies report PUE = 1.3

A More Comprehensive Metric
Factor overall efficiency into (a) facility PUE, (b) SPUE (server power usage effectiveness), and (c) the energy efficiency of the computation itself.

SPUE
- Ratio of total server input power to the power delivered to components directly involved in computation: motherboard, disks, CPUs, DRAM, I/O cards
- Losses come from power supplies, fans, and voltage regulators
- SPUE of 1.6-1.8 is common: power supplies are less than 80% efficient, voltage regulators less than 70%
- The EPA considers SPUE < 1.2 feasible by 2011
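The slide's efficiency figures already explain the common SPUE range. A lower bound from conversion losses alone (ignoring fan power) can be sketched as:

```python
# Sketch: a lower bound on SPUE from conversion losses alone (fan power is
# ignored), using the slide's component efficiency figures.

def spue_lower_bound(*efficiencies: float) -> float:
    """1 / (product of conversion efficiencies along the delivery chain)."""
    product = 1.0
    for e in efficiencies:
        product *= e
    return 1.0 / product

# An 80%-efficient power supply feeding a 70%-efficient voltage regulator:
print(round(spue_lower_bound(0.80, 0.70), 2))   # 1.79, within the common 1.6-1.8 range
```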

TPUE
- Total PUE: TPUE = PUE × SPUE
- Averages about 3.2 today: 2.2 W are wasted for every watt used for computation
- PUE 1.2 and SPUE 1.2 (TPUE ≈ 1.44) would give roughly a 2x benefit
- A TPUE of 1.25 is probably the limit of what is economically feasible
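A worked version of the slide's TPUE arithmetic:

```python
# Worked version of the slide's TPUE arithmetic.

def tpue(pue: float, spue: float) -> float:
    """Total PUE: facility overhead multiplied by server overhead."""
    return pue * spue

target = tpue(1.2, 1.2)
print(round(target, 2))        # 1.44
print(round(3.2 / target, 1))  # 2.2: roughly the "2x benefit" over today's 3.2 average
```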

Computing Efficiency
- The area of greatest potential, but the hardest to measure
- Benchmarks: SPECpower, JouleSort, and the Storage Networking Industry Association's metrics

SPECPower Example

Server Load

Load vs Efficiency

Human Dynamic Range

Component Efficiency

CPU Voltage Scaling

Disks
- As much as 70% of disk power goes to keeping the platters spinning
- Roughly a 1000x latency penalty to spin a drive back up and access it
- One proposal: multiple-head, low-RPM drives [Gurumurthi]

Server Power Supplies

Power Provisioning
- Provisioning costs $10-22 per deployed IT watt
- Over a 10-year depreciation cycle, that is $1-2.20 per watt per year
- Energy, for comparison: at $0.07 per kWh, PUE 2.0, and 8766 hours per year, (8766 / 1000) × $0.07 × 2.0 = $1.22724 per watt per year
- Underutilization can double the effective provisioning cost, e.g., a 50%-full datacenter pays 2x per deployed watt
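The slide's point is that provisioning a watt costs about as much per year as powering it. Its arithmetic, worked through:

```python
# Worked version of the slide's provisioning-vs-energy arithmetic.

HOURS_PER_YEAR = 8766

def energy_cost_per_watt_year(price_per_kwh: float, pue: float) -> float:
    """Yearly electricity cost for one IT watt, including facility overhead."""
    return (HOURS_PER_YEAR / 1000.0) * price_per_kwh * pue

energy = energy_cost_per_watt_year(0.07, 2.0)
print(round(energy, 5))   # 1.22724 dollars per watt per year

# Amortized provisioning at $10-22/W over 10 years brackets that energy cost:
print(10 / 10.0, 22 / 10.0)   # 1.0 to 2.2 dollars per watt per year
```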

Time at Power Level
Figure: fraction of time spent at each power level for groups of 80, 800, and 8000 servers

Oversubscription Opportunity
- Gap between observed peak and provisioned power: 7% for racks (80 servers), 22% for PDUs (800), 28% for clusters (8000)
- At cluster level, the facility could have hosted almost 40% more machines
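The jump from a 28% power gap to "almost 40% more machines" follows from scaling capacity by the reciprocal of the used fraction, sketched below:

```python
# Sketch: translating unused power headroom into extra hostable machines.
# If observed peak uses only (1 - unused) of provisioned power, the same
# provisioning can host 1 / (1 - unused) times as many machines.

def extra_machines_fraction(unused: float) -> float:
    return 1.0 / (1.0 - unused) - 1.0

for scope, unused in [("rack (80)", 0.07), ("PDU (800)", 0.22), ("cluster (8000)", 0.28)]:
    print(scope, round(extra_machines_fraction(unused) * 100), "% more machines")
# The cluster row comes out near 39%: the slide's "almost 40% more machines".
```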

Underdeployment
- New facilities plan for growth, so they start partly empty
- Capacity is also discretized: e.g., a 2.5 kW circuit holding four 520 W servers is 17% underutilized, yet cannot accept one more server
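The discretization example above, worked through:

```python
# Worked version of the slide's discretization example: 520 W servers on a
# 2.5 kW branch circuit.

CIRCUIT_W = 2500
SERVER_W = 520

servers_that_fit = CIRCUIT_W // SERVER_W     # 4: a fifth server would exceed 2.5 kW
watts_used = servers_that_fit * SERVER_W     # 2080
stranded = 1 - watts_used / CIRCUIT_W        # ~0.17: 17% of the circuit is stranded
print(servers_that_fit, watts_used, round(stranded * 100))
```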