Warehouse-Scale Computers (WSC)
- A WSC is a type of datacenter, but one that belongs to a single organization, uses relatively homogeneous hardware, and runs a small number of very large applications on shared resources.
- Traditional datacenters, in contrast, typically host a large number of relatively small- or medium-sized applications, each running on a dedicated hardware infrastructure that is decoupled and protected from other systems in the same facility; they often host hardware and software for multiple organizational units or even different companies.
Energy Usage in WSC
- Peak power usage of one generation of WSCs deployed at Google in 2007
- The CPU is not the sole energy consumer; no single subsystem dominates
Data Center Cooling
- CRAC unit: computer room air conditioning
- Datacenter raised floor with hot–cold aisle setup
Data Center Energy Efficiency
- DCPE (Data Center Performance per Energy): useful work done per unit of total facility energy
- Defined by the Green Grid
Data Center Energy Efficiency
- PUE (Power Usage Effectiveness): relates to the facility; the ratio of total building power to IT power, i.e., the power consumed by the actual computing equipment (servers, network equipment, etc.)
- In 2006, 85% of datacenters were estimated to have a PUE greater than 3.0, meaning the building's mechanical and electrical systems consume twice as much power as the actual computing load; only 5% had a PUE of 2.0
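The PUE arithmetic can be sketched in a few lines (the PUE of 3.0 is the figure from the slide; the kW breakdown is illustrative):

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total building power over IT power."""
    return total_facility_kw / it_kw

# Illustrative facility: 1,000 kW of IT load plus 2,000 kW of
# mechanical/electrical overhead yields the PUE of 3.0 cited above,
# i.e., overhead alone is twice the computing load.
it_load = 1000.0
overhead = 2000.0
print(pue(it_load + overhead, it_load))  # -> 3.0
print(overhead / it_load)                # -> 2.0 (overhead per watt of IT power)
```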
Improving PUE (Google, 2008)
- Careful air flow handling: the hot air exhausted by servers is not allowed to mix with cold air, and the path to the cooling coil is very short, so little energy is spent moving cold or hot air over long distances.
- Elevated cold aisle temperatures: the cold aisle of the containers is kept at about 27°C rather than 18–20°C.
- Use of free cooling: several cooling towers dissipate heat by evaporating water, greatly reducing the need to run chillers. In most moderate climates, cooling towers can eliminate the majority of chiller runtime. Google's datacenter in Belgium even eliminates chillers altogether, running on "free" cooling 100% of the time.
- Per-server 12-V DC UPS: each server contains a mini-UPS, essentially a battery that floats on the DC side of the server's power supply and is 99.99% efficient. These per-server UPSs eliminate the need for a facility-wide UPS, raising the efficiency of the overall power infrastructure from around 90% to near 99%.
Data Center Energy Efficiency
- SPUE (Server Power Usage Effectiveness): measures a server's energy conversion; the ratio of total server input power to its useful power.
- Useful power: the power consumed by the electronic components directly involved in the computation (motherboard, disks, CPUs, DRAM, I/O cards), excluding all losses in power supplies, VRMs, and fans.
- SPUE ratios of 1.6–1.8 are common in today's servers: many power supplies are less than 80% efficient, and many motherboards use voltage regulator modules (VRMs) that are similarly inefficient, losing more than 30% of input power in electrical conversion losses.
- A state-of-the-art SPUE should be less than 1.2.
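The conversion-loss arithmetic can be sketched as follows; the specific PSU and VRM efficiencies are illustrative values chosen to be consistent with the losses quoted above, and the sketch assumes these are the only conversion losses:

```python
def spue(psu_efficiency: float, vrm_efficiency: float) -> float:
    """Server PUE: input power over useful power, assuming (for this
    sketch) that PSU and VRM losses are the only conversion losses."""
    return 1.0 / (psu_efficiency * vrm_efficiency)

# A <80%-efficient supply feeding ~70%-efficient VRMs lands in the
# commonly observed 1.6-1.8 SPUE range.
print(round(spue(0.80, 0.70), 2))   # -> 1.79
# A high-quality power chain can approach the <1.2 target.
print(round(spue(0.95, 0.90), 2))   # -> 1.17
```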
Data Center Energy Efficiency
- Efficiency of the computation itself is the hardest factor to measure
- Use of benchmarks:
  - In HPC, LINPACK is used to rank supercomputers (Green500)
  - No similarly broad initiative exists for internet services
  - JouleSort: measures the total system energy to perform an out-of-core sort
  - SPECpower_ssj2008: for server-class systems, finds the performance-to-power ratio of a typical business application on an enterprise Java platform
SPECpower Results (2008)
- The performance-to-power ratio drops as the target load decreases: system power decreases much more slowly than performance does
- E.g., energy efficiency at 30% load is less than half the efficiency at 100% load
- At idle, the server still consumes 175 W, over half of its peak power consumption
- System under test: quad-core Xeon, 4 GB RAM, 7,200 RPM hard drive
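A minimal linear power model reproduces this behavior. The 175 W idle figure is from the slide; the 330 W peak is an assumed value (the slide only says idle is over half of peak):

```python
IDLE_W, PEAK_W = 175.0, 330.0   # PEAK_W is an assumption for this sketch

def power_w(utilization: float) -> float:
    """Linear interpolation between idle and peak power."""
    return IDLE_W + (PEAK_W - IDLE_W) * utilization

def efficiency(utilization: float) -> float:
    """Performance per watt, with performance proportional to load."""
    return utilization / power_w(utilization)

ratio = efficiency(0.30) / efficiency(1.0)
print(round(ratio, 2))   # -> 0.45: under half the efficiency of full load
```

Because the 175 W idle floor never goes away, dividing a shrinking performance number by a nearly flat power number drives efficiency down as load drops.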
Load vs. Efficiency
- Relative efficiency of the top 10 entries in the SPECpower benchmark
- Efficiency when running at 30% load vs. efficiency at 100% load
- Ratio of idle power vs. peak power
Activity Profile
- Average CPU utilization of 5,000 Google servers during a 6-month period
- Servers spend relatively little aggregate time at high load levels
- Most of the time is spent within the 10–50% CPU utilization range
- This is a perfect mismatch with the energy-efficiency profile of modern servers: they operate most of the time in their least efficient region
Absence of Idle Servers
- Results from high-performance, robust distributed systems software design
- Efficient load distribution: when load is lighter, it is spread as a lower load across many servers
- Alternative: migrate workloads and their corresponding state to fewer machines during periods of low activity
  - Can be easy with simple replication models, when servers are mostly stateless (i.e., serving data that resides on a shared NAS or SAN storage system)
  - Comes at a cost in software complexity and energy for more complex data distribution models, or those with significant state and aggressive exploitation of data locality
- Need for resilient distributed storage: GFS and HDFS achieve higher resilience by distributing data chunk replicas across the entire cluster
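Migrating work onto fewer machines during low activity is essentially a bin-packing problem. A first-fit-decreasing sketch, where the per-server capacity cap and the workload sizes are hypothetical:

```python
def consolidate(workloads: list[float], capacity: float = 0.8) -> list[list[float]]:
    """First-fit-decreasing packing of workload utilizations onto few servers.
    capacity < 1.0 leaves headroom for load spikes (hypothetical policy)."""
    servers: list[list[float]] = []
    for w in sorted(workloads, reverse=True):
        for s in servers:
            if sum(s) + w <= capacity:   # fits on an already-open server
                s.append(w)
                break
        else:
            servers.append([w])          # open a new server
    return servers

# Eight lightly loaded servers at 10-18% utilization each...
loads = [0.15, 0.12, 0.18, 0.10, 0.14, 0.16, 0.11, 0.13]
packed = consolidate(loads)
print(len(packed))   # -> 2: the remaining 6 machines could be powered down
```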
Poor Energy Proportionality in Server Subsystems
- The CPU's contribution to system power is about 50% at peak but drops to less than 30% at low activity levels, making it the most energy-proportional component (dynamic range of 3.5x)
- The dynamic range of memory systems, disk drives, and networking equipment is much lower: about 2x for memory, 1.3x for disks, and less than 1.2x for networking switches
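The dynamic ranges above can be turned into a simple system-power model. The dynamic-range figures are from the slide; the per-subsystem peak wattages are assumptions, idle power is taken as peak divided by the dynamic range, and power is interpolated linearly in between:

```python
# (peak watts, dynamic range) per subsystem -- peak values are assumptions
SUBSYSTEMS = {
    "cpu":     (150.0, 3.5),
    "dram":    (60.0,  2.0),
    "disk":    (30.0,  1.3),
    "network": (20.0,  1.2),
}

def system_power(utilization: float) -> float:
    """Sum per-subsystem power, interpolating linearly from idle to peak."""
    total = 0.0
    for peak, dyn_range in SUBSYSTEMS.values():
        idle = peak / dyn_range
        total += idle + (peak - idle) * utilization
    return total

idle_fraction = system_power(0.0) / system_power(1.0)
print(round(idle_fraction, 2))   # an idle server still draws ~40% of peak power
```

Even with a fairly proportional CPU, the low dynamic range of the other subsystems keeps total idle power high, which is the core of the energy-proportionality problem.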
Poor Energy Proportionality in Data Center
- Efficiency curves derived from tests of several server power supplies
- Lack of proportionality at less than 30% of the peak-rated power
- Power supplies have significantly greater peak capacity than their corresponding computing equipment, so it is usual to operate well below peak
Oversubscribing
- Can increase the overall utilization and efficiency of the datacenter
- Opportunities at the rack, container (PDU), and datacenter levels
- Especially at the container and cluster levels
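Oversubscription works because independent servers rarely peak simultaneously, so aggregate draw stays well below the sum of nameplate peaks. A toy simulation (server count, power figures, and utilization distribution are all hypothetical, with utilizations drawn from the 10–50% range seen in the activity profile):

```python
import random

random.seed(42)

NAMEPLATE_W = 330.0      # per-server peak rating (assumed)
IDLE_W = 175.0           # per-server idle power (assumed)
N_SERVERS = 1000

def server_power(utilization: float) -> float:
    """Linear power model between idle and nameplate peak."""
    return IDLE_W + (NAMEPLATE_W - IDLE_W) * utilization

aggregate = sum(server_power(random.uniform(0.1, 0.5)) for _ in range(N_SERVERS))
provisioned = NAMEPLATE_W * N_SERVERS
# Aggregate draw is only ~2/3 of the naively provisioned power, so the same
# power budget could safely host more machines.
print(round(aggregate / provisioned, 2))
```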
Energy from the Mobile Perspective
- Cloud computing can save energy for mobile clients
- E.g., a game of chess: the state of the game fits in a few bytes, but computing the next move requires much more computation
- Offloading pays off when transmitting the small state costs less energy than computing the result locally
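The chess example can be sketched as an energy comparison between computing a move locally and shipping the few bytes of game state to a server. All power, timing, and bandwidth figures below are hypothetical:

```python
def local_energy_j(cpu_power_w: float, compute_s: float) -> float:
    """Energy to compute the move on the handset's CPU."""
    return cpu_power_w * compute_s

def offload_energy_j(radio_power_w: float, payload_bytes: int,
                     bandwidth_bps: float) -> float:
    """Energy to transmit the game state (reply is similarly tiny)."""
    return radio_power_w * (payload_bytes * 8 / bandwidth_bps)

# Chess state fits in a few dozen bytes, but a deep search takes seconds.
local = local_energy_j(cpu_power_w=2.0, compute_s=10.0)          # 20 J
remote = offload_energy_j(radio_power_w=1.0, payload_bytes=64,
                          bandwidth_bps=1e6)                     # ~0.0005 J
print(remote < local)   # -> True: offloading wins when state is small
```

The asymmetry (tiny state, heavy computation) is exactly what makes a workload a good offloading candidate; streaming large inputs to the cloud can reverse the result.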