Conserving Disk Energy in Network Servers
ACM 17th Annual International Conference on Supercomputing (ICS 2003)
Presented by Hsu Hao Chen



Outline
- Introduction
- Conserving disk energy: Idle, Replace, Combined, Multi-speed
- Evaluation: Combined, Multi-speed
- Conclusions

Introduction (1/2)
- The Google search engine runs on roughly 15K servers
- Such large clusters consume a significant amount of energy; energy costs can reach 60% of the operational cost of a data center
- This paper evaluates four approaches to the problem: Idle, Replace, Combined, and Multi-speed

Introduction (2/2)
[figure: the Multi-speed and Combined disk configurations]

Idle
- Most prior techniques power disks down during periods of idleness
- Break-even threshold: the idle time at which the energy saved equals the cost of powering the disk down and back up (on the next access)
- Test setup: load peaks assumed to reach only 50% of maximum throughput; a large memory cache with a miss rate below 0.03%
- Result: an average idle time of only 15.2 seconds
- In summary, not appropriate for network servers
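The break-even calculation above can be sketched as follows. This is a minimal illustration in Python; all power, energy, and timing figures are assumed values for a hypothetical server-class SCSI disk, not measurements from the paper.

```python
# Break-even threshold: the shortest idle period for which powering the
# disk down saves energy versus leaving it idle.  Solve
#   P_idle * T = E_transition + P_sleep * (T - t_transition)
# for T, where E_transition and t_transition cover both the spin-down
# and the subsequent spin-up.

def break_even(p_idle, p_sleep, e_trans, t_trans):
    """Return the break-even idle time in seconds."""
    return (e_trans - p_sleep * t_trans) / (p_idle - p_sleep)

# hypothetical figures: watts, watts, joules, seconds
threshold = break_even(p_idle=10.2, p_sleep=2.5, e_trans=120.0, t_trans=10.0)
# any idle period shorter than `threshold` makes spin-down a net energy loss
```

With figures of this magnitude the threshold lands in the tens of seconds, which is why an average idle time of 15.2 seconds leaves spin-down little room to pay off.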

Replace (1/2)
- Replace each high-performance disk with one or more lower-power disks
- 1-to-1 replacement with a laptop disk: problems with storage capacity and performance (access latency)
- 1-to-2 replacement with low-power SCSI disks: problem with energy consumption; two low-power disks consume more energy than one high-performance disk

Replace (2/2)
- 1-to-n replacement with laptop disks: reliability problem
- We would need at least four laptop disks (in a RAID) for each high-performance disk

Combined (1/4)
- The idea is to associate each high-performance disk with a lower-power disk, called the secondary disk
- The disks should have the same size and mirror each other
- Coherence actions apply the updates that were made while the disk now coming up was powered off

Combined (2/4): Implementation
[figure: implementation overview]

Combined (3/4)
- Implemented as a Linux kernel module that allows the creation of multiple virtual devices
- Each virtual device is mapped to a pair of disks
- The module is inserted at a low level, so all disk traffic (including metadata accesses) is visible to it
- The module intercepts all calls to the ll_rw_block() kernel routine

Combined (4/4): the module has three key components
- A translation table per virtual device, specifying which physical disk to use on each ll_rw_block() access
- A load monitor that measures the offered load on the disks with an EWMA (α = 0.875) and selects which disk to use depending on the load on the disk subsystem
- A bitmap per disk, recording all blocks written since the disks of the corresponding virtual device were last made coherent; a bit is set whenever an intercepted ll_rw_block() call produces a disk write
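The three components can be sketched as follows. The real implementation is a C kernel module intercepting ll_rw_block(); this Python sketch only models its logic, and the class name, disk labels, block count, and the 50 requests/s switching threshold are illustrative assumptions.

```python
ALPHA = 0.875  # EWMA smoothing factor used by the module

class VirtualDevice:
    """Models one virtual device mapped to a high-performance /
    low-power disk pair."""

    def __init__(self, n_blocks, load_threshold=50.0):
        self.disk = "high_performance"   # translation-table entry
        self.load = 0.0                  # EWMA of the offered load
        self.threshold = load_threshold  # requests/s (assumed value)
        self.dirty = [False] * n_blocks  # bitmap: blocks written since
                                         # the pair was last made coherent

    def observe_load(self, sample):
        # EWMA update: new = alpha * old + (1 - alpha) * sample
        self.load = ALPHA * self.load + (1 - ALPHA) * sample
        # select which disk to use based on the smoothed load
        self.disk = ("high_performance" if self.load > self.threshold
                     else "low_power")

    def rw_block(self, block, write=False):
        # models an intercepted ll_rw_block() call: record writes in the
        # bitmap so coherence actions can replay them later
        if write:
            self.dirty[block] = True
        return self.disk  # physical disk to use for this access
```

Under heavy load the smoothed value exceeds the threshold and accesses go to the high-performance disk; under light load they go to the low-power disk, and the bitmap tells the coherence code which blocks to copy when the idle disk powers back up.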

Multi-speed
- This approach does not require multiple disks, coherence actions, bitmaps, etc.
- Switching threshold: decides when to change speeds
- Emulation: because multi-speed disks are not available on the market, the emulation keeps our two SCSI disks powered on all the time, with all write accesses immediately directed to both disks
- The emulation also assigns performance and energy costs to the speed transitions
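The switching-threshold policy can be sketched as follows. The up/down threshold values and the use of hysteresis (a lower threshold for switching back down) are illustrative assumptions; the source only says a threshold decides when to change speeds.

```python
class TwoSpeedDisk:
    """Models the speed-switching decision for an emulated two-speed disk."""

    def __init__(self, up_threshold, down_threshold, alpha=0.875):
        # hysteresis: switch down at a lower load than we switch up,
        # so the disk does not oscillate around a single threshold
        assert down_threshold < up_threshold
        self.up, self.down = up_threshold, down_threshold
        self.alpha = alpha
        self.load = 0.0        # EWMA of the offered load
        self.speed = "low"
        self.transitions = 0   # each transition has a time/energy cost

    def tick(self, sample):
        self.load = self.alpha * self.load + (1 - self.alpha) * sample
        if self.speed == "low" and self.load > self.up:
            self.speed, self.transitions = "high", self.transitions + 1
        elif self.speed == "high" and self.load < self.down:
            self.speed, self.transitions = "low", self.transitions + 1
        return self.speed
```

Because every transition carries a performance and energy cost, the smoothed load and the threshold gap keep the disk from switching on short bursts.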

Evaluation (1/7)
- Network server hardware: P4 1.9 GHz, 512 MB RAM, Linux, a Gigabit Ethernet network interface
- Storage disks: the SCSI Ultrastar 36Z15; the SCSI Ultrastar 73LZX (when evaluating Multi-speed); the laptop Travelstar 40GNX (when evaluating Combined)
- The Web server can service a maximum of 2520 requests/second for the ClarkNet trace (two weeks' worth of all HTTP requests to the ClarkNet WWW server)
- The proxy server can service up to 335 requests/second for the Hummingbird trace

Evaluation (2/7): Combined
- Power savings: 1%

Evaluation (3/7): Combined
- Power savings: 41%
- In summary, we do not consider this configuration very realistic for network servers

Evaluation (4/7): Multi-speed for the Web server
- Power savings: 16% and 22%

Evaluation (5/7): Multi-speed for the proxy server
- Power savings: 15% and 17%

Evaluation (6/7): Multi-speed
- Throughput degradation: 1% for the Web server and 3% for the proxy server

Evaluation (7/7): Multi-speed
- In summary, the two-speed disk should perform well in a wide range of scenarios

Conclusions
- Two-speed disk techniques provide energy savings without performance degradation in network servers
- Our results suggest that this technique should be carefully considered by disk manufacturers
- The other techniques we studied cannot provide any disk energy savings