Performance Evaluation of Load Sharing Policies on a Beowulf Cluster James Nichols Marc Lemaire Advisor: Mark Claypool.


Outline Introduction Methodology Results Conclusions

Introduction
What is a Beowulf cluster?
- A cluster of computers networked together via Ethernet
Load distribution
- Shares load, decreasing response times and increasing overall throughput
- Typically requires expertise in a particular load distribution mechanism
Load
- Historically, CPU has been used as the load metric. What about disk and memory load? Or system events like interrupts and context switches?
PANTS: Application Node Transparency System
- Removes the need for expertise required by other load distribution mechanisms

PANTS: Application Node Transparency System
- Developed in a previous MQP: CS-DXF
- Enhanced the following year to use DIPC in: CS-DXF
- Intercepts execve() system calls
- Uses the /proc file system to calculate CPU load and classify a node as "busy" or "free"
- Any workload that does not generate CPU load will not be distributed
- Near-linear speedup for computationally intensive applications
→ New load metrics and policies!
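The busy/free classification above rests on a CPU-load percentage computed from /proc/stat. A minimal Python sketch of that calculation, assuming hypothetical helper names (`parse_cpu_jiffies`, `cpu_load_percent`) and the standard /proc/stat "cpu" line layout, not PANTS's actual implementation:

```python
def parse_cpu_jiffies(stat_text):
    """Pull the aggregate 'cpu' counters (user, nice, system, idle, ...)
    from the text of /proc/stat."""
    for line in stat_text.splitlines():
        if line.startswith("cpu "):
            return [int(field) for field in line.split()[1:]]
    raise ValueError("no aggregate 'cpu' line in /proc/stat text")

def cpu_load_percent(prev, curr):
    """CPU load between two samples: jiffies spent in user + nice + system
    as a percentage of all jiffies elapsed (including idle)."""
    busy = sum(curr[:3]) - sum(prev[:3])
    total = sum(curr) - sum(prev)
    return 100.0 * busy / total if total else 0.0
```

A node could then be marked "free" whenever this percentage stays under the policy threshold between samples.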

Outline Introduction Methodology Results Conclusions

Methodology
- Identified load parameters
- Implemented ways to measure those parameters
- Improved the PANTS implementation
- Built micro benchmarks that stressed each load metric
- Built a macro benchmark that stressed a more realistic mix of metrics
- Selected a real-world benchmark

Methodology: Load Metrics
Acquired via /proc/stat:
- CPU – Total jiffies (1/100ths of a second) the processor spent on user, nice, and system processes, taken as a percentage of total jiffies.
- I/O – Blocks read/written to disk per second.
- Memory – Page operations per second. Example: a virtual memory page fault requiring a page to be loaded into memory from disk.
- Interrupts – System interrupts per second. Example: an incoming Ethernet packet.
- Context switches – How many times per second the processor switched between processes.
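The I/O, memory, interrupt, and context-switch counters in /proc/stat are cumulative since boot, so each per-second metric above is the difference between two samples divided by the sampling interval. A small sketch of that conversion, with hypothetical counter names standing in for the parsed /proc/stat fields:

```python
def rates_per_second(prev, curr, interval_s):
    """Turn two samples of cumulative counters (dicts of name -> count)
    into per-second rates over an interval of `interval_s` seconds."""
    return {name: (curr[name] - prev[name]) / interval_s for name in curr}

# Example with made-up counter names and values:
prev = {"disk_blocks": 1000, "page_ops": 500, "intr": 2000, "ctxt": 3000}
curr = {"disk_blocks": 3000, "page_ops": 900, "intr": 2600, "ctxt": 4200}
rates = rates_per_second(prev, curr, 2.0)  # sampled 2 seconds apart
```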

Methodology: Micro & Macro Benchmarks
- Implemented micro and macro benchmarks
- Helped refine our understanding of how the system was performing, tested our load metrics, etc.
- (Not enough time to present)

Real-world benchmark: Linux kernel compile
- Distributed compilation of the Linux kernel, executed by the standard Linux program make
- Loads I/O and memory resources
- Details:
  - Kernel version files
  - Mean source file size: 19 KB
  - Needed to expand relative path names to full paths
- Thresholds:
  - PANTS default: CPU 95%
  - New policy: CPU 95%, I/O 1000 blocks/sec, memory 4000 page faults/sec, IRQ interrupts/sec, context switches 6000 switches/sec
→ Compare the PANTS default policy to our new load metrics and policy
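The new policy above can be sketched as a per-metric threshold check. This is a simplified illustration, not the PANTS code: the metric names are invented, the IRQ threshold is omitted because its value is elided on the slide, and the any-metric-exceeds combination rule is an assumption (the slide does not state how the metrics are combined):

```python
# Thresholds from the slide; the IRQ value is elided there, so it is
# left out here rather than guessed.
NEW_POLICY = {"cpu_pct": 95.0, "io_blocks": 1000, "page_faults": 4000, "ctxt": 6000}

def is_busy(load, thresholds=NEW_POLICY):
    """Classify a node as 'busy' if any measured metric exceeds its
    threshold. Metrics absent from `load` are skipped."""
    return any(load[name] > limit
               for name, limit in thresholds.items() if name in load)
```

The PANTS default policy is then the special case where `thresholds` contains only `{"cpu_pct": 95.0}`.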

Outline Introduction Methodology Results Conclusions

Results: CPU

Results: Memory

Results: Context Switches

Results: Summary

Conclusions
- Achieved better throughput and more balanced load distribution when metrics include I/O, memory, interrupts, and context switches.
Future work:
- Use preemptive migration?
- Include a network usage load metric.
For more information visit:
Questions?