
1 Green Computing Omer Rana o.f.rana@cs.cardiff.ac.uk

2 The Need Bill St. Arnaud (CANARIE, Inc.)

3 Impact of ICT Industry Bill St. Arnaud (CANARIE, Inc.)

4 Virtualization Techniques Bill St. Arnaud (CANARIE, Inc.)

5 Virtual Machine Monitors (IBM, 1960s): a thin software layer that sits between the hardware and the operating system, virtualizing and managing all hardware resources. [Slide diagram: IBM VM/370 running on an IBM mainframe, hosting CMS and MVS guest environments and their applications.] Ed Bugnion, VMWare

6 An old idea from the 1960s
IBM VM/370 – a VMM for the IBM mainframe:
– Multiple OS environments on expensive hardware
– Desirable when few machines were around
Popular research idea in the 1960s and 1970s:
– Entire conferences on virtual machine monitors
– Hardware/VMM/OS designed together
Interest died out in the 1980s and 1990s:
– Hardware got cheap
– Operating systems got more powerful (e.g. multi-user)
Ed Bugnion, VMWare

7 A return to virtual machines
Disco: Stanford research project (1996–):
– Run commodity OSes on scalable multiprocessors
– Focus on the high end: NUMA, MIPS, IRIX
Hardware has changed:
– Cheap, diverse, graphical user interfaces
– Designed without virtualization in mind
System software has changed:
– Extremely complex
– Advanced networking protocols
– But even today not always multi-user, with limitations, incompatibilities, …
Ed Bugnion, VMWare

8 The Problem Today [Slide diagram: a single operating system running directly on the Intel architecture.] Ed Bugnion, VMWare

9 The VMware Solution [Slide diagram: the operating system runs on a virtualized Intel architecture layered above the physical Intel architecture.] Ed Bugnion, VMWare

10 VMware™ MultipleWorlds™ Technology: a thin software layer that sits between the Intel hardware and the operating system, virtualizing and managing all hardware resources. [Slide diagram: VMware MultipleWorlds on the Intel architecture, hosting Windows 2000, Windows NT, and Linux virtual machines and their applications.] Ed Bugnion, VMWare

11 MultipleWorlds Technology: a "world" is an application execution environment with its own operating system. [Slide diagram: several worlds (Windows 2000, Windows NT, Linux) running on VMware MultipleWorlds over the Intel architecture.] Ed Bugnion, VMWare

12 Virtual Hardware [Slide diagram: devices presented by the virtual machine monitor (VMM): floppy disks, parallel ports, serial/COM ports, Ethernet, keyboard, mouse, monitor, IDE controller, SCSI controller, sound card.] Ed Bugnion, VMWare

13 Attributes of MultipleWorlds Technology
– Software compatibility: runs pretty much all software
– Low overhead / high performance: near "raw" machine performance
– Complete isolation: total data isolation between virtual machines
– Encapsulation: virtual machines are not tied to physical machines
– Resource management
Ed Bugnion, VMWare

14 Hosted VMware Architecture
VMware achieves both near-native execution speed and broad device support by transparently switching between Host Mode and VMM Mode (typically about 1000 mode switches per second).
– In VMM Mode, the VMware virtual machine monitor lets each guest OS access the processor directly (direct execution).
– In Host Mode, VMware acts as an ordinary application (the VMware application and driver) and uses the host OS to access other devices such as the hard disk, floppy, or network card.
[Slide diagram: guest applications and guest OS inside the virtual machine; the host OS and its applications alongside the VMware application and driver; PC hardware with CPU, memory, disks, and NIC underneath.]
Ed Bugnion, VMWare
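A minimal sketch of the mode-switching idea described above, not VMware's actual implementation; the event names and handlers below are hypothetical.

```python
# Toy model of the hosted-VMM world switch described on this slide.
# GuestEvent and run_guest are hypothetical illustrations only.

from enum import Enum, auto

class GuestEvent(Enum):
    COMPUTE = auto()    # ordinary instructions: safe for direct execution
    IO_ACCESS = auto()  # device access: must go through the host OS

def run_guest(events):
    for ev in events:
        if ev is GuestEvent.IO_ACCESS:
            # World switch: VMM Mode -> Host Mode.
            # The VMware application issues the request via normal host OS calls.
            print("switch to Host Mode: forward I/O to the host OS")
        else:
            # Stay in VMM Mode: the guest runs directly on the CPU.
            print("VMM Mode: direct execution on the processor")

if __name__ == "__main__":
    run_guest([GuestEvent.COMPUTE, GuestEvent.IO_ACCESS, GuestEvent.COMPUTE])
```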

15 Hosted VMM Architecture
Advantages:
– Installs and runs like an application
– Portable: the host OS does the I/O access
– Coexists with applications running on the host
Limits:
– Subject to the host OS: resource-management decisions, OS failures
USENIX 2001 paper: J. Sugerman, G. Venkitachalam and B.-H. Lim, "Virtualizing I/O on VMware Workstation's Hosted Architecture".
Ed Bugnion, VMWare

16 Virtualizing a Network Interface [Slide diagram: the guest OS and VMM connect through the VMDriver and VMApp to a virtual network hub and virtual bridge on the host OS, which reaches the physical Ethernet NIC through the host's NIC driver on the PC hardware.] Ed Bugnion, VMWare

17 The rise of data centers
– A single place for hosting servers and data
– ISPs now take machines hosted at data centers
– Run by large companies, such as BT
They manage:
– Power
– Computation + data
– Cooling systems
– Systems admin + network admin

18 Data Centre in Tokyo From: Satoshi Matsuoka

19 http://www.attokyo.co.jp/eng/facility.html

20 Martin J. Levy (Tier1 Research) and Josh Snowhorn (Terremark)

21

22

23 Requirements
Power is an important design constraint:
– Electricity costs
– Heat dissipation
Two key options in clusters enable scaling of (see the sketch below):
– Operating frequency (square relation)
– Supply voltage (cubic relation)
Balance QoS requirements, e.g. the fraction of workload to process locally, with power management.
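A minimal numerical sketch of why frequency and voltage scaling matter for power, using the standard dynamic-power model P ≈ C·V²·f with the gear 0 and gear 4 settings listed on slide 30; the capacitance constant is an arbitrary placeholder, not a value from the slides.

```python
# Dynamic CPU power model: P ~ C * V^2 * f (C is an effective capacitance constant).
# The constant is arbitrary; voltages/frequencies are the gear 0 and gear 4
# settings from slide 30.

def dynamic_power(c: float, volts: float, freq_hz: float) -> float:
    return c * volts ** 2 * freq_hz

C = 1e-9                                   # arbitrary effective capacitance
full = dynamic_power(C, 1.5, 2.0e9)        # gear 0: 2.0 GHz @ 1.5 V
scaled = dynamic_power(C, 1.1, 1.2e9)      # gear 4: 1.2 GHz @ 1.1 V

print(f"power at the lower gear: {scaled / full:.0%} of full power")
# Frequency drops to 60% of full, but power drops to roughly 32%,
# because voltage enters the model squared.
```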

24 From: Salim Hariri, Mazin Yousif

25 From: Justin Moore, Ratnesh Sharma, Rocky Shih, Jeff Chase, Chandrakant Patel, Partha Ranganathan (HP Labs)

26 Martin J. Levy (Tier1 Research) and Josh Snowhorn (Terremark)

27 The case for power management in HPC
Power/energy consumption is a critical issue:
– Energy = heat; heat dissipation is costly
– Limited power supply
– Non-trivial amount of money
Consequence:
– Performance is limited by the available power
– Fewer nodes can operate concurrently
Opportunity: bottlenecks
– A bottleneck component limits the performance of the other components
– Reduce the power of some components without reducing overall performance
Today, the CPU is:
– A major power consumer (~100 W),
– Rarely the bottleneck, and
– Scalable in power/performance (frequency and voltage), via power/performance "gears"

28 Is CPU scaling a win? Two reasons:
1. Frequency and voltage scaling: the performance reduction is less than the power reduction (CPU dynamic power P = ½·C·V²·f).
2. Application throughput: the throughput reduction is less than the performance reduction.
Assumptions:
– The CPU is a large power consumer
– The CPU is the driver
– Diminishing throughput gains
[Slide plots: (1) power vs. performance (frequency); (2) application throughput vs. performance (frequency).]
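A small sketch of the argument above, assuming a workload in which only part of the runtime scales with CPU frequency; the 70% CPU-bound fraction is an illustrative assumption, not a measurement from the slides.

```python
# Compare power reduction with throughput reduction when dropping a gear.
# Model: dynamic power P ~ 0.5 * C * V^2 * f; runtime has a CPU-bound fraction
# that scales with frequency and a memory/IO-bound fraction that does not
# (this is what produces "diminishing throughput gains").

def power(volts, ghz, c=1.0):
    return 0.5 * c * volts ** 2 * ghz

def relative_throughput(ghz, full_ghz=2.0, cpu_bound=0.7):
    # Amdahl-style model: only the cpu_bound fraction slows with frequency.
    slowdown = cpu_bound * (full_ghz / ghz) + (1.0 - cpu_bound)
    return 1.0 / slowdown

full_power = power(1.5, 2.0)     # gear 0: 2.0 GHz @ 1.5 V (slide 30)
low_power = power(1.3, 1.6)      # gear 2: 1.6 GHz @ 1.3 V (slide 30)

print(f"power:      {low_power / full_power:.0%} of full")
print(f"throughput: {relative_throughput(1.6):.0%} of full")
# Power drops to ~60% while throughput stays near 85%: the throughput
# reduction is smaller than the power reduction, so scaling is a win here.
```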

29 AMD Athlon-64
– x86 ISA, 64-bit technology
– HyperTransport technology: fast memory bus
Performance:
– Slower clock frequency
– Shorter pipeline (12 vs. 20 stages)
– SPEC2K results: a 2 GHz AMD-64 is comparable to a 2.8 GHz P4; the P4 is better on average by 10% (INT) and 30% (FP)
Frequency and voltage scaling:
– 2000–800 MHz
– 1.5–1.1 Volts
From: Vincent W. Freeh (NCSU)

30 LMBench results
LMBench is a benchmarking suite producing low-level micro-benchmark data; each "gear" was tested.

Gear | Frequency (MHz) | Voltage (V)
0    | 2000            | 1.5
1    | 1800            | 1.4
2    | 1600            | 1.3
3    | 1400            | 1.2
4    | 1200            | 1.1
6    | 800             | 0.9

From: Vincent W. Freeh (NCSU)

31 Operating system functions From: Vincent W. Freeh (NCSU)

32 Communication From: Vincent W. Freeh (NCSU)

33 The problem
Peak power limit, P:
– Rack power
– Room/utility power
– Heat dissipation
Static solution: the number of servers is N = P / P_max, where P_max is the maximum power of an individual node.
Problems (see the sketch below):
– Peak power > average power (P_max > P_average)
– Does not use all the power: N * (P_max - P_average) goes unused
– Under-performs: performance is proportional to N
– Power consumption is not predictable
From: Vincent W. Freeh (NCSU)
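A minimal sketch of the static sizing rule and the power it strands; the 40 kW budget and per-node figures are illustrative assumptions, not values from the slides.

```python
# Static provisioning: size the cluster by worst-case node power.
# All numbers are illustrative assumptions.

P_BUDGET = 40_000.0   # total power limit P, in watts (hypothetical rack/room budget)
P_MAX = 250.0         # worst-case power of one node, watts
P_AVERAGE = 180.0     # typical (average) power of one node, watts

N = int(P_BUDGET // P_MAX)            # static solution: N = P / P_max
unused = N * (P_MAX - P_AVERAGE)      # power the static plan never uses on average

print(f"N = {N} nodes")
print(f"unused power at typical load: {unused:.0f} W "
      f"({unused / P_BUDGET:.0%} of the budget)")
```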

34 Safe over-provisioning in a cluster
Allocate and manage power among M > N nodes:
– Pick M > N, e.g. M = P / P_average
– Then M * P_max > P, so each node is capped at P_limit = P / M
Goal:
– Use more of the power, safely under the limit
– Reduce the power (and peak CPU performance) of individual nodes
– Increase overall application performance
[Slide plots: power over time, showing P(t) and P_average against P_max (static plan) and against P_limit (over-provisioned plan).]
From: Vincent W. Freeh (NCSU)
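Continuing the hypothetical numbers from the previous sketch, the over-provisioned plan picks M from the average power and caps each node at P_limit.

```python
# Over-provisioning: size the cluster by average node power,
# then cap every node below its hardware maximum.
# Same illustrative numbers as the previous sketch.

P_BUDGET = 40_000.0
P_MAX = 250.0
P_AVERAGE = 180.0

N = int(P_BUDGET // P_MAX)        # static plan (slide 33)
M = int(P_BUDGET // P_AVERAGE)    # over-provisioned plan: M = P / P_average
P_LIMIT = P_BUDGET / M            # per-node cap so that M * P_limit <= P

print(f"static plan:           N = {N} nodes")
print(f"over-provisioned plan: M = {M} nodes, each capped at {P_LIMIT:.0f} W")
# M * P_max would exceed the budget (222 * 250 W = 55.5 kW > 40 kW),
# which is why each node must be held under P_limit, e.g. by running
# a lower frequency/voltage gear.
```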

35 Safe over-provisioning in a cluster
Benefits:
– Less "unused" power/energy
– More efficient power use: more performance under the same power limitation
Let perf be the full-speed per-node performance and perf* the per-node performance under the cap (written as P and P* on the original slide). Then more overall performance means M * perf* > N * perf, i.e. perf*/perf > N/M = P_limit/P_max.
[Slide plots: power over time under the static plan (P(t) and P_average against P_max, with the unused energy shaded) and under the over-provisioned plan (against P_limit).]
From: Vincent W. Freeh (NCSU)

36 When is this a win?
When perf*/perf > N/M, i.e. perf*/perf > P_limit/P_max; in words, when the power reduction is larger than the performance reduction.
Two reasons (see the check below):
1. Frequency and voltage scaling
2. Application throughput
[Slide plots: (1) power vs. performance (frequency) and (2) application throughput vs. performance (frequency), annotated with the regions where the ratio is below or above P_average/P_max.]
From: Vincent W. Freeh (NCSU)
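A quick check of the win condition using the hypothetical figures from the earlier sketches and the ~85% relative throughput estimated after slide 28.

```python
# Win condition: per-node relative performance under the cap must exceed N/M.
# All inputs are illustrative assumptions carried over from the earlier sketches.

N, M = 160, 222          # static vs. over-provisioned node counts (earlier sketches)
rel_perf_capped = 0.85   # perf*/perf at the lower gear (throughput model after slide 28)

threshold = N / M        # equals P_limit / P_max in this model
print(f"need perf*/perf > {threshold:.2f}; have {rel_perf_capped:.2f}: "
      f"{'win' if rel_perf_capped > threshold else 'no win'}")
# 0.85 > 0.72, so the over-provisioned cluster delivers more total throughput.
```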

37 Feedback-directed, adaptive power control
Uses feedback to control power/energy consumption:
– Given a power goal
– Monitor energy consumption
– Adjust the power/performance of the CPU
Several policies (see the sketch below):
– Average power
– Maximum power
– Energy efficiency: select the slowest gear g such that [condition given as an equation on the slide]
From: Vincent W. Freeh (NCSU)
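A minimal sketch of a feedback loop in the spirit of the slide, targeting an average-power goal by stepping through the gears from slide 30. The measurement and gear-setting functions are hypothetical placeholders; a real implementation would read hardware power sensors and set frequencies through an OS interface such as cpufreq.

```python
# Feedback-directed power control (sketch): keep measured CPU power near a goal
# by stepping between frequency/voltage gears. The gear table follows slide 30;
# read_power_watts() and set_gear() are hypothetical placeholders.

import time

GEARS = {0: (2000, 1.5), 1: (1800, 1.4), 2: (1600, 1.3),
         3: (1400, 1.2), 4: (1200, 1.1), 6: (800, 0.9)}  # gear: (MHz, V)
GEAR_ORDER = sorted(GEARS)   # fastest (gear 0) ... slowest (gear 6)

def read_power_watts() -> float:
    raise NotImplementedError("read a power meter or energy counter here")

def set_gear(gear: int) -> None:
    raise NotImplementedError("apply the chosen frequency/voltage gear here")

def control_loop(power_goal_w: float, period_s: float = 1.0) -> None:
    idx = 0                            # start at the fastest gear
    set_gear(GEAR_ORDER[idx])
    while True:
        measured = read_power_watts()
        if measured > power_goal_w and idx < len(GEAR_ORDER) - 1:
            idx += 1                   # over the goal: shift to a slower gear
        elif measured < power_goal_w and idx > 0:
            idx -= 1                   # headroom available: shift to a faster gear
        set_gear(GEAR_ORDER[idx])
        time.sleep(period_s)
```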

38 A more holistic approach: Managing a Data Center From: Justin Moore, Ratnesh Sharma, Rocky Shih, Jeff Chase, Chandrakant Patel, Partha Ranganathan (HP Labs) CRAC: Computer Room Air Conditioning units

39 From: Justin Moore, Ratnesh Sharma, Rocky Shih, Jeff Chase, Chandrakant Patel, Partha Ranganathan (HP Labs)

40 Location of Cooling Units Six CRAC units serve 1000 servers, which consume 270 kW of power out of a total capacity of 600 kW. http://blogs.zdnet.com/BTL/?p=4022
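A quick back-of-the-envelope reading of those figures, assuming the 270 kW is the server load measured against the 600 kW facility capacity.

```python
# Rough reading of the slide's numbers.
servers = 1000
load_kw = 270.0
capacity_kw = 600.0
crac_units = 6

print(f"utilisation of capacity:  {load_kw / capacity_kw:.0%}")       # 45%
print(f"average draw per server:  {load_kw * 1000 / servers:.0f} W")  # 270 W
print(f"load handled per CRAC:    {load_kw / crac_units:.0f} kW")     # 45 kW
```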

41 From: Justin Moore, Ratnesh Sharma, Rocky Shih, Jeff Chase, Chandrakant Patel, Partha Ranganathan (HP Labs)

42

43

44

45

46 From: Satoshi Matsuoka

47 From: Satoshi Matsuoka

48 From: Satoshi Matsuoka

49 From: Satoshi Matsuoka

50 From: Satoshi Matsuoka

51 From: Satoshi Matsuoka

