Fred Chong 290N Green Computing

1 Datacenter Basics
Fred Chong, 290N Green Computing

2 Figure: Storage hierarchy of a warehouse-scale computer

3 Performance Variations
Figure: Latency, bandwidth, and capacity of a warehouse-scale computer

4 Server Comparison: HP Integrity Superdome (Itanium2) vs. HP ProLiant ML350 G5
Processor: 64 sockets, 128 cores (dual-threaded), 1.6GHz Itanium2, 12MB last-level cache vs. 1 socket, quad-core, 2.66GHz X5355, 8MB last-level cache
Memory: 2,048GB vs. 24GB
Disk storage: 320,974GB (7,056 drives) vs. 3,961GB (105 drives)
TPC-C price/performance: $2.93/tpmC vs. $0.73/tpmC
Price/performance (server HW only): $1.28 vs. $0.10 per transaction per minute
Price/performance (server HW only, no discounts): $2.39 vs. $0.12 per transaction per minute

5

6 Cost Proportional to Power
Infrastructure cost is proportional to the power delivered: typically $10-20 per Watt (see the sketch below)
Power delivery chain:
  60-400kV transmission lines
  10-20kV medium-voltage distribution
  Low-voltage feeds into the building
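
A minimal sketch of what "$10-20 per Watt" implies at facility scale. The dollar-per-Watt range is from the slide; the 8 MW facility size is an illustrative assumption, not from the source.

```python
# Power-infrastructure capital cost scales with provisioned watts.
def infrastructure_cost(provisioned_watts, dollars_per_watt):
    return provisioned_watts * dollars_per_watt

facility_watts = 8_000_000  # hypothetical 8 MW critical load (assumption)
low = infrastructure_cost(facility_watts, 10)
high = infrastructure_cost(facility_watts, 20)
print(f"Power infrastructure: ${low / 1e6:.0f}M to ${high / 1e6:.0f}M")
```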

7 UPS: Uninterruptible Power Supply
Batteries or a flywheel
AC-DC-AC conversion
Conditions the power feed: removes spikes and sags, removes harmonic distortion
Housed in a separate UPS room
Sizes range from hundreds of kW to 2MW

8 PDUs: Power Distribution Units
Resemble breaker panels
Take a low-voltage input (typically 200-480V) and break it into many 110V or 220V circuits
75-225kW total, split into 20-30A circuits (at most ~6kW each; see the sketch below)
Redundancy from two independent power sources
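
A rough sketch of the circuit-count arithmetic this implies. The 20-30A / ~6 kW circuit figures are from the slide; the 150 kW PDU size is an assumed example within the slide's 75-225 kW range.

```python
# How many branch circuits a PDU of a given size works out to.
def circuit_kw(amps, volts):
    return amps * volts / 1000.0

pdu_kw = 150                        # assumed example PDU size
per_circuit = circuit_kw(30, 220)   # ~6.6 kW, near the slide's ~6 kW ceiling
print(f"~{pdu_kw / per_circuit:.0f} circuits to carry {pdu_kw} kW")
```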

9 Paralleling
Multiple generators or UPSs feed a shared bus
N+1: tolerates one failure
N+2: tolerates one unit down for maintenance plus one failure
2N: fully redundant pairs

10 Cooling

11 Cooling Steps
12-14°C coolant arrives from the chiller
CRAC (Computer Room Air Conditioner) units supply 16-20°C air
18-22°C air at the server intake
Warm return air carries the heat back to the chiller

12 “Free Cooling”
Pre-cool the coolant before it reaches the chiller
Water-based cooling towers use evaporation: work in moderate climates, but freeze if it gets too cold
Glycol-based radiators outside the building: work in cold climates

13 Cooling is Critical
A datacenter would fail within minutes without cooling
Cooling is therefore backed up by generators and UPSs
This adds more than 40% to the critical electrical load

14 Airflow
100 cfm (cubic feet per minute) per server
10 servers would need 1,000 cfm rising from the perforated tiles in front of them (worked through below)
Tile airflow typically caps the achievable power density (W/sq ft)
Recirculation: one server's hot exhaust gets drawn into a neighbor's intake
Some designs avoid this with overhead ducts
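
A trivial sketch of the slide's airflow arithmetic, using only the figures it gives (100 cfm per server, 10 servers per rack position).

```python
# Aggregate tile airflow needed for a group of servers.
def required_cfm(num_servers, cfm_per_server=100):
    return num_servers * cfm_per_server

print(required_cfm(10))   # 1000 cfm must come up through the perforated tiles
```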

15 Variations
In-rack cooling: water-cooled coils next to the servers
  Cost of plumbing
  Damage from leaks (a worry in earthquake zones!)
Container-based datacenters: a shipping container (8' x 8.5' x 40')
  Similar to in-rack cooling, but for the whole container
  Supports higher power densities

16 Power Efficiency
PUE: Power Usage Effectiveness
Ratio of total facility power to power delivered to the IT equipment; captures the overhead of the datacenter power and cooling infrastructure (see the sketch below)
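
A minimal sketch of the PUE ratio as defined above. The example wattages are assumptions chosen to make the ratio obvious.

```python
# PUE = total facility power / power delivered to IT equipment.
def pue(total_facility_watts, it_watts):
    return total_facility_watts / it_watts

print(pue(2_000_000, 1_000_000))  # 2.0: one watt of overhead per watt of IT load
```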

17 Poor PUEs
85% of datacenters have PUE > 3; only 5% reach PUE = 2.0
Chillers: 30-50% overhead
CRACs: 10-30% overhead
UPS: 7-12% overhead (AC-DC-AC conversion losses)
Plus humidifiers, PDUs, and lighting
EPA "achievable" target: PUE of 1.4 by 2011 (the overheads are added up in the sketch below)
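
A sketch of how the slide's overhead ranges compound into a facility PUE. Treating each overhead as an independent fraction of IT load that simply adds is my simplifying assumption; the mid-range percentage picks are also mine.

```python
# Add up overheads (as fractions of IT load) to estimate PUE.
it_load = 1.0
overheads = {"chiller": 0.40, "CRAC": 0.20, "UPS": 0.10, "other": 0.05}  # assumed mid-range values
total_power = it_load * (1.0 + sum(overheads.values()))
print(f"PUE ~ {total_power / it_load:.2f}")   # ~1.75 even with mid-range overheads
```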

18

19 Improvements
Evaporative cooling
Efficient air movement
Eliminate power conversion losses
Google reports PUE = 1.21; several other companies report PUE = 1.3

20 A more comprehensive metric
(a) PUE: facility-level power overhead
(b) SPUE: server power usage effectiveness
(c) Computation energy efficiency

21 SPUE
Ratio of total server input power to power delivered to the components directly involved in computation: motherboard, disks, CPUs, DRAM, I/O cards
Losses come from power supplies, fans, and voltage regulators
SPUEs of 1.6-1.8 are common: power supplies are less than 80% efficient, voltage regulators less than 70% (see the sketch below)
EPA considers SPUE < 1.2 feasible by 2011
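
A sketch of the SPUE implied by the slide's conversion efficiencies, assuming (my simplification) that power-supply and voltage-regulator losses dominate and simply multiply.

```python
# SPUE lower bound from two cascaded conversion stages.
psu_efficiency = 0.80   # power supplies: less than 80% efficient
vrm_efficiency = 0.70   # voltage regulators: less than 70% efficient

spue = 1.0 / (psu_efficiency * vrm_efficiency)
print(f"SPUE ~ {spue:.2f}")   # ~1.79, consistent with 1.6-1.8 being common
```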

22 TPUE
Total PUE: TPUE = PUE × SPUE
Averages 3.2 today: 2.2 Watts wasted for every Watt used for computation
PUE of 1.2 and SPUE of 1.2 (TPUE = 1.44) would give roughly a 2X benefit (worked through below)
A TPUE of 1.25 is probably the limit of what is economically feasible
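
A minimal sketch of the slide's TPUE arithmetic; all figures are taken directly from the slide.

```python
# TPUE = PUE * SPUE.
def tpue(pue, spue):
    return pue * spue

improved = tpue(1.2, 1.2)                  # 1.44
today = 3.2                                # average TPUE today, per the slide
print(today - 1.0)                         # 2.2 W wasted per watt of computation
print(f"{today / improved:.1f}X better")   # ~2.2X, i.e. roughly the claimed 2X benefit
```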

23 Computing Efficiency
The area of greatest potential, but the hardest to measure
Benchmarks: SPECpower, JouleSort, Storage Networking Industry Association (SNIA) metrics

24 SPECpower Example

25 Server Load

26 Load vs Efficiency

27 Human Dynamic Range

28 Component Efficiency

29 CPU Voltage Scaling

30 Disks
As much as 70% of disk power goes just to keeping the platters spinning
Roughly a 1,000X energy penalty to spin a stopped drive up and access it
Proposed fix: multiple-head, low-RPM drives [Gurumurthi]

31 Server Power Supplies

32 Power Provisioning
$10-22 per deployed IT Watt
Over a 10-year depreciation cycle: $1.00-2.20 per Watt per year
Compare the energy bill: assume $0.07 per kilowatt-hour and PUE 2.0, with 8,766 hours in a year
(8766 / 1000) × $0.07 × 2.0 ≈ $1.23 per Watt per year
So provisioning can cost up to 2X as much as the energy itself; e.g., a 50% full datacenter doubles the provisioning cost per used Watt (see the sketch below)
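
A sketch of the provisioning-vs-energy arithmetic above, using only the slide's figures.

```python
# Provisioning: $10-22 per deployed watt, depreciated over 10 years.
provision_low = 10 / 10    # $1.00 per W per year
provision_high = 22 / 10   # $2.20 per W per year

# Energy: $0.07/kWh at PUE 2.0, over 8,766 hours per year.
energy = (8766 / 1000) * 0.07 * 2.0   # ~$1.23 per W per year
print(provision_low, provision_high, round(energy, 2))

# A half-full datacenter doubles the provisioning cost per *used* watt.
print(f"50% full: up to ${provision_high / 0.5:.2f} per used W per year")
```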

33 Time at Power Level
Figure: distribution of time spent at each power level for groups of 80, 800, and 8,000 servers

34 Oversubscription Opportunity
Gap between provisioned peak power and power actually drawn:
7% for racks (80 servers)
22% for PDUs (800 servers)
28% for clusters (8,000 servers)
At the cluster level, almost 40% more machines could have been hosted (see the sketch below)
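
A sketch translating the cluster-level headroom into the extra machines the slide mentions; the 28% figure is from the slide.

```python
# Extra machines hostable under the same power budget.
headroom = 0.28                                # unused fraction at the cluster level
extra_machines = 1.0 / (1.0 - headroom) - 1.0
print(f"{extra_machines:.0%} more machines")   # ~39%, i.e. "almost 40% more"
```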

35 Underdeployment
New facilities plan for growth, so they start partly empty
Capacity is also discretized
E.g., a 2.5kW circuit may host four 520W servers (2,080W): 17% of the circuit goes unused, yet it cannot take one more server (see the sketch below)
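
A sketch of the slide's discretization example, using its own numbers (2.5 kW circuit, 520 W servers).

```python
# Discretization: whole servers only, so some circuit capacity is stranded.
circuit_w, server_w = 2500, 520
n = circuit_w // server_w                          # 4 servers fit
unused = (circuit_w - n * server_w) / circuit_w
print(f"{unused:.0%} of the circuit unused")       # ~17%
print((n + 1) * server_w > circuit_w)              # True: a fifth server would overload it
```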

