
Energy in Cloud Computing and Renewable Energy Reza Farivar.


1 Energy in Cloud Computing and Renewable Energy Reza Farivar

2 Energy in Cloud Computing
– The Datacenter as a Computer, e-book by Google's Luiz André Barroso and Urs Hölzle
– Cloud Computing – The IT Solution for the 21st Century, Carbon Disclosure Project study, 2011

3 Introduction
– Energy consumption of data centers is considerable: about 3% of total annual energy consumption in the US
– Double the 2006 level (1.5%), and still growing
– Estimated by the Environmental Protection Agency (EPA)

4 Cloud Computing Adoption in IT

5 Carbon Emission Model

6 Energy Savings due to Clouds
– CO₂ savings of cloud computing compared to no cloud computing

7 Energy Savings due to Clouds
– Net energy savings due to cloud computing adoption

8 Energy infrastructure of a data center

9 Warehouse Scale Computers
– A WSC is a type of datacenter
– Unlike traditional datacenters, which host hardware and software for multiple organizational units or even different companies on shared resources, a WSC typically belongs to a single organization and runs a small number of very large applications

10 Storage in WSC

11

12 Energy usage in WSC
– Peak power usage of one generation of WSCs deployed at Google in 2007
– The CPU is NOT the sole energy consumer; no one subsystem dominates

13 Data Center Cooling
– CRAC unit: Computer Room Air Conditioning
– Datacenter raised floor with hot–cold aisle setup

14 Data Center Energy Efficiency
– DCPE (Data Center Performance Efficiency), defined by the Green Grid

15 Data Center Energy Efficiency
– PUE (Power Usage Effectiveness) relates to the facility: the ratio of total building power to IT power, i.e. the power consumed by the actual computing equipment (servers, network equipment, etc.)
– In 2006, 85% of datacenters were estimated to have a PUE greater than 3.0, meaning the building's mechanical and electrical systems consume twice as much power as the actual computing load; only 5% have a PUE of 2.0
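As a small sketch, PUE is just a ratio of two power measurements; the facility and IT power figures below are illustrative, not taken from the slides:

```python
def pue(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    """Power Usage Effectiveness: total building power / IT power.
    A PUE of 3.0 means the facility overhead (cooling, power
    distribution, lighting) consumes twice as much power as the
    computing load itself."""
    return total_facility_power_kw / it_equipment_power_kw

# Illustrative numbers: a 3 MW facility feeding a 1 MW IT load.
print(pue(3000.0, 1000.0))  # 3.0
```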

16 PUE of 24 Data Centers (2007)

17 Improving PUE (Google, 2008)
– Careful air flow handling: the hot air exhausted by servers is not allowed to mix with cold air, and the path to the cooling coil is very short, so little energy is spent moving cold or hot air long distances.
– Elevated cold aisle temperatures: the cold aisle of the containers is kept at about 27°C rather than 18–20°C.
– Use of free cooling: several cooling towers dissipate heat by evaporating water, greatly reducing the need to run chillers. In most moderate climates, cooling towers can eliminate the majority of chiller runtime; Google's datacenter in Belgium even eliminates chillers altogether, running on "free" cooling 100% of the time.
– Per-server 12 V DC UPS: each server contains a mini-UPS, essentially a battery that floats on the DC side of the server's power supply and is 99.99% efficient. These per-server UPSs eliminate the need for a facility-wide UPS, increasing the efficiency of the overall power infrastructure from around 90% to near 99%.

18 Data Center Energy Efficiency
– SPUE (Server Power Usage Effectiveness) measures a server's energy conversion: the ratio of total server input power to its useful power
– Useful power: the power consumed by the electronic components directly involved in the computation (motherboard, disks, CPUs, DRAM, I/O cards), excluding all losses in power supplies, VRMs, and fans
– SPUE ratios of 1.6–1.8 are common in today's servers; many power supplies are less than 80% efficient, and many motherboards use voltage regulator modules (VRMs) that are similarly inefficient, losing more than 30% of input power in electrical conversion losses
– SPUE should be less than 1.2
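The facility-level and server-level ratios compose: a sketch (with illustrative numbers) of the fraction of utility power that actually reaches the components doing computation:

```python
def useful_power_fraction(pue: float, spue: float) -> float:
    """Fraction of utility power reaching the electronic components
    directly involved in computation: 1 / (PUE * SPUE)."""
    return 1.0 / (pue * spue)

# A PUE of 2.0 combined with an SPUE of 1.6 means only ~31% of
# utility power reaches the motherboard, CPUs, DRAM, and disks.
print(round(useful_power_fraction(2.0, 1.6), 2))  # 0.31
```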

19 Data Center Energy Efficiency
– Efficiency of the computation itself is the hardest factor to measure
– Benchmarks: in HPC, LINPACK is used to rank supercomputers (Green500); there is no similarly broad initiative for internet services
– JouleSort measures the total system energy needed to perform an out-of-core sort
– SPECpower_ssj2008 measures, for server-class systems, the performance-to-power ratio of a typical business application on an enterprise Java platform

20 SPEC Power results (2008)
– The performance-to-power ratio drops as the target load decreases: system power decreases much more slowly than performance does
– E.g. energy efficiency at 30% load is less than half the efficiency at 100%
– At idle the server still consumes 175 W, over half of its peak power consumption
– Test system: quad-core Xeon, 4 GB RAM, 7,200 RPM hard drive
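A minimal sketch of why efficiency collapses at low load, assuming a simple linear power model with 175 W idle and (per the slide's "over half of peak") roughly 350 W peak, and performance scaling linearly with utilization; all numbers are illustrative:

```python
def power_w(load: float, idle_w: float = 175.0, peak_w: float = 350.0) -> float:
    """Linear power model: power rises from idle to peak with load."""
    return idle_w + (peak_w - idle_w) * load

def efficiency(load: float) -> float:
    """Performance-to-power ratio, normalized so efficiency(1.0) == 1.0."""
    perf = load  # assume performance scales linearly with utilization
    return (perf / power_w(load)) / (1.0 / power_w(1.0))

# Matches the slide: efficiency at 30% load is under half of peak efficiency.
print(round(efficiency(0.3), 2))  # 0.46
```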

21 Load vs. Efficiency
– Relative efficiency of the top 10 entries in the SPECpower benchmark: efficiency at 30% load vs. at 100% load, and the ratio of idle to peak power

22 Activity Profile
– Average CPU utilization of 5,000 Google servers during a 6-month period
– Servers spend relatively little aggregate time at high load levels
– Most of the time is spent in the 10–50% CPU utilization range: a perfect mismatch with the energy efficiency profile of modern servers, which are least efficient exactly where they spend most of their time

23 Absence of Idle Servers
– A result of high-performance, robust distributed systems software design
– Efficient load distribution: when total load is lighter, it shows up as lower load spread across many servers rather than as idle machines
– Alternatively, migrate workloads and their corresponding state to fewer machines during periods of low activity; this can be easy with simple replication models where servers are mostly stateless (i.e., serving data that resides on a shared NAS or SAN storage system), but it comes at a cost in software complexity and energy for more complex data distribution models or those with significant state and aggressive exploitation of data locality
– Requires resilient distributed storage: GFS and HDFS achieve higher resilience by distributing data chunk replicas across the entire cluster
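The consolidation idea above can be sketched as a first-fit-decreasing packing of (mostly stateless) workloads onto as few servers as possible during low activity, so the rest can be powered down. This is a toy illustration, not the scheduling policy of any real cluster:

```python
def consolidate(loads, capacity=1.0):
    """First-fit decreasing: pack workloads (each a fraction of one
    server's capacity) onto as few servers as possible."""
    servers = []  # each entry is the total load placed on that server
    for load in sorted(loads, reverse=True):
        for i, used in enumerate(servers):
            if used + load <= capacity:
                servers[i] += load
                break
        else:
            servers.append(load)  # no existing server fits; open a new one
    return servers

# Ten workloads idling at 10-50% utilization fit on just three machines.
print(len(consolidate([0.1, 0.2, 0.3, 0.4, 0.5, 0.1, 0.2, 0.3, 0.4, 0.5])))  # 3
```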

24 Poor Energy Proportionality in SPUE
– The CPU contribution to system power is about 50% at peak but drops to less than 30% at low activity levels, making it the most energy-proportional component, with a dynamic range of 3.5x
– The dynamic range of memory systems, disk drives, and networking equipment is lower: 2x for memory, 1.3x for disks, and less than 1.2x for networking switches
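The per-subsystem dynamic ranges can be combined into a rough whole-system estimate. The component peak-power shares below are illustrative assumptions (the dynamic ranges follow the slide's figures):

```python
def system_idle_fraction(components):
    """components: list of (share_of_peak_power, dynamic_range) pairs.
    Each component idles at peak/dynamic_range; returns the system's
    idle power as a fraction of its peak power."""
    return sum(share / dyn_range for share, dyn_range in components)

# Assumed shares: CPU 50% of peak (3.5x range), memory 20% (2x),
# disks 10% (1.3x), network 5% (1.2x), other 15% (flat, 1x).
parts = [(0.50, 3.5), (0.20, 2.0), (0.10, 1.3), (0.05, 1.2), (0.15, 1.0)]
print(round(system_idle_fraction(parts), 2))  # 0.51
```

Under these assumptions the system still draws about half its peak power at idle, consistent with the 175 W idle figure quoted earlier.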

25 Poor Energy Proportionality in Data Center
– Efficiency curves derived from tests of several server power supplies show a lack of proportionality below 30% of peak-rated power
– Power supplies have significantly greater peak capacity than their corresponding computing equipment, so they usually operate well below peak

26 Oversubscribing
– Oversubscribing the power infrastructure can increase the overall utilization and efficiency of the datacenter
– Opportunities exist at the rack, container (PDU), and datacenter levels, especially at the container and cluster levels
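Oversubscription can be sketched as provisioning for the observed simultaneous peak rather than the sum of server nameplate ratings, since servers rarely all peak at once. All numbers here are illustrative assumptions:

```python
def max_servers(provisioned_kw: float, nameplate_kw: float,
                observed_peak_fraction: float) -> int:
    """How many servers fit under a power budget when provisioning
    for the observed simultaneous peak instead of the nameplate."""
    per_server_kw = nameplate_kw * observed_peak_fraction
    return int(provisioned_kw / per_server_kw)

# A 100 kW PDU with 0.5 kW nameplate servers that never exceed 70%
# of nameplate simultaneously hosts 285 servers instead of 200.
print(max_servers(100.0, 0.5, 0.7))  # 285
```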

27 Energy from the Mobile Aspect
– Cloud computing can save energy for mobile clients
– E.g. a game of chess: the state of the game fits in a few bytes, but computing the next move requires much more energy than transmitting that state
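The offload decision reduces to comparing local compute energy against the radio energy needed to ship the tiny game state to the cloud. The per-byte radio cost and compute cost below are illustrative assumptions:

```python
def offload_saves_energy(compute_joules_local: float,
                         state_bytes: int,
                         radio_joules_per_byte: float = 1e-5) -> bool:
    """Offloading wins when transmitting the state costs less energy
    than computing the result locally on the mobile device."""
    transmit_joules = state_bytes * radio_joules_per_byte
    return transmit_joules < compute_joules_local

# A chess position fits in ~64 bytes, while searching it locally may
# cost whole joules of CPU energy, so offloading to the cloud wins.
print(offload_saves_energy(compute_joules_local=5.0, state_bytes=64))  # True
```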

