Load Balancing: The Goal Given a collection of tasks comprising a computation and a set of computers on which these tasks may be executed, find the mapping of tasks to computers that results in each computer having an approximately equal amount of work.

Static vs. Dynamic Static: If the computation time and behavior of each task can be determined a priori, the task mapping can be computed before the computation begins. Dynamic: If the workload of a task can change over the course of the computation and cannot be estimated beforehand, the task mapping must change during the computation.
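
A minimal sketch (not from the slides) of the static case: when task times are known a priori, a greedy "longest task to the least-loaded computer" pass already gives a reasonable mapping. The name `static_map` and the sample task times are illustrative.

```python
import heapq

def static_map(task_times, n_computers):
    """Greedy static mapping: assign each task, largest first,
    to the currently least-loaded computer (LPT heuristic)."""
    # Min-heap of (current load, computer id).
    heap = [(0.0, c) for c in range(n_computers)]
    heapq.heapify(heap)
    mapping = {}
    for task, t in sorted(task_times.items(), key=lambda kv: -kv[1]):
        load, c = heapq.heappop(heap)
        mapping[task] = c
        heapq.heappush(heap, (load + t, c))
    return mapping

tasks = {"t0": 4.0, "t1": 3.0, "t2": 3.0, "t3": 2.0}
print(static_map(tasks, 2))  # -> {'t0': 0, 't1': 1, 't2': 1, 't3': 0}
```

Here both computers end up with 6.0 units of work; the heuristic does not guarantee optimality, but it is a common baseline for a priori mapping.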

Approach to Dynamic Load Balancing –Load Evaluation –Profitability Determination –Work Transfer Vector Calculation –Task Selection –Task Migration

Load Evaluation –Application-based approach (estimate load from application-level knowledge of the tasks) –System-based approach (measure load from system-level metrics such as CPU utilization) –Hybrid approach (combine both)

Profitability Determination The load balance of a computation is the ratio of the average computer load to the maximum computer load. The load of a computation can be measured locally or globally. Balancing is profitable only when its expected benefit exceeds the cost of performing it.
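
A small sketch of this ratio and of a simple profitability check. The gain-versus-cost rule below is a generic heuristic, not a method named in the slides, and the function names are illustrative.

```python
def load_balance(loads):
    """Load balance = average load / maximum load (1.0 = perfectly balanced)."""
    return sum(loads) / len(loads) / max(loads)

def profitable(loads, balance_cost, unit_time_cost=1.0):
    # Balancing is worthwhile only if the time saved by bringing the
    # maximum load down toward the average exceeds the balancing cost.
    gain = (max(loads) - sum(loads) / len(loads)) * unit_time_cost
    return gain > balance_cost

loads = [10.0, 6.0, 2.0]
print(load_balance(loads))     # -> 0.6
print(profitable(loads, 1.0))  # -> True (expected gain 4.0 > cost 1.0)
```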

Work Transfer Calculation Calculate how much work should be transferred from one computer to another –Hierarchical Balancing Method –Generalized Dimensional Exchange –Diffusive Techniques Work should be transferred in a way that preserves communication locality

Hierarchical Balancing (HB) A global and recursive approach –The set of computers is divided in half –The load is calculated for each partition –A transfer vector is derived so that the two partitions have roughly the same load per computer –Each partition is then itself divided and balanced recursively, taking into account the transfers calculated at higher levels
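
A toy sketch of this recursion, assuming for simplicity that each inter-partition transfer moves load across the single boundary pair of computers; the function name `hb_transfers` and that simplification are my own, not from the slides.

```python
def hb_transfers(loads, lo=0, hi=None, out=None):
    """Recursive hierarchical balancing: split the computers in half,
    compute one transfer that equalizes per-computer load between the
    halves, then balance each half recursively."""
    if out is None:
        loads, out, hi = list(loads), [], len(loads)
    n = hi - lo
    if n <= 1:
        return out
    mid = lo + n // 2
    total = sum(loads[lo:hi])
    # Load the left half should hold so both halves are equal per computer.
    target_left = total * (mid - lo) / n
    delta = sum(loads[lo:mid]) - target_left
    # Move delta across the partition boundary (sign gives direction),
    # then recurse with that transfer already accounted for.
    loads[mid - 1] -= delta
    loads[mid] += delta
    out.append((mid - 1, mid, delta))
    hb_transfers(loads, lo, mid, out)
    hb_transfers(loads, mid, hi, out)
    return out

print(hb_transfers([8.0, 4.0, 2.0, 2.0]))
# -> [(1, 2, 4.0), (0, 1, 4.0), (2, 3, 2.0)]  (all loads end at 4.0)
```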

Dimensional Exchange (DE) For a network of maximum degree N_max, each computer exchanges load with its neighbor in each dimension, transferring a fixed fraction λ of their load difference (λ = 1/2 for a hypercube). The process is repeated until a balanced state is reached: for each neighbor j in N_i, l_i ← l_i − λ(l_i − l_j).
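
A runnable sketch of the exchange on a hypercube (number of computers must be a power of two); the function name and sample loads are illustrative.

```python
def dimensional_exchange(loads, lam=0.5, sweeps=10):
    """Dimensional exchange on a hypercube: in each sweep, every computer
    pairs with its neighbor along each dimension and exchanges lam times
    their load difference (lam = 0.5 for a hypercube)."""
    n = len(loads)               # assumed to be a power of two
    dims = n.bit_length() - 1
    loads = list(loads)
    for _ in range(sweeps):
        for d in range(dims):
            for i in range(n):
                j = i ^ (1 << d)  # neighbor along dimension d
                if i < j:         # handle each pair once
                    diff = loads[i] - loads[j]
                    loads[i] -= lam * diff
                    loads[j] += lam * diff
    return loads

print(dimensional_exchange([8.0, 0.0, 4.0, 4.0]))  # -> [4.0, 4.0, 4.0, 4.0]
```

With λ = 1/2, one sweep over all dimensions of a hypercube already reaches the balanced state.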

Diffusive Techniques Based on the solution of the diffusion equation Uses an implicit differencing scheme to solve the heat equation on a hypercube
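
The slide's method uses an implicit scheme; as a simpler illustration of the same idea, an explicit first-order diffusion sweep moves a fraction α of each load difference toward every neighbor per step. The function name, the choice of α, and the ring topology are my own examples.

```python
def diffuse(loads, neighbors, alpha=0.25, steps=50):
    """Explicit first-order diffusion: each step, computer i gains
    alpha * (loads[j] - loads[i]) from every neighbor j, analogous to
    time-stepping the heat equation on the interconnect graph."""
    loads = list(loads)
    for _ in range(steps):
        new = loads[:]
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:
                new[i] += alpha * (loads[j] - loads[i])
        loads = new
    return loads

# 4-computer ring: each node is connected to its two neighbors.
ring = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(diffuse([12.0, 0.0, 0.0, 0.0], ring))  # converges toward [3, 3, 3, 3]
```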

Task Selection (1) Determine which tasks should be moved to meet the calculated transfer vectors –Move tasks from one computer to another –Exchange tasks between two computers The basic idea is to find a subset of tasks whose loads sum to the required transfer (a 0-1 knapsack / subset-sum problem) Rules: –Avoid moving large numbers of tasks or large quantities of data –Try to preserve communication locality
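
A sketch of that subset selection as a 0-1 knapsack-style dynamic program, assuming integer loads; it picks the minimum-cost set of tasks whose loads sum exactly to the transfer amount. The function name and the sample tasks are illustrative.

```python
def select_tasks(tasks, transfer):
    """Given (name, load, cost) triples, find a minimum-cost subset whose
    loads sum exactly to `transfer` (0-1 knapsack DP over load sums).
    Returns (total_cost, [task names]) or None if no subset fits."""
    INF = float("inf")
    # best[s] = (min cost, chosen task names) to reach total load s
    best = {0: (0, [])}
    for name, load, cost in tasks:
        # Iterate sums in descending order so each task is used at most once.
        for s in sorted(best, reverse=True):
            c, chosen = best[s]
            t = s + load
            if t <= transfer and c + cost < best.get(t, (INF,))[0]:
                best[t] = (c + cost, chosen + [name])
    return best.get(transfer)

tasks = [("a", 3, 5), ("b", 2, 1), ("c", 2, 1), ("d", 1, 4)]
print(select_tasks(tasks, 4))  # -> (2, ['b', 'c'])
```

Moving tasks b and c satisfies the 4-unit transfer at cost 2, whereas a and d together would cost 9: the DP encodes the "avoid moving large quantities of data" rule via the cost term.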

Task Selection (2) Each task i has a load L_i and a cost C_i. For a given transfer, we wish to find the minimum-cost set of tasks whose exchange achieves that transfer. The selection algorithm may not satisfy a transfer vector in a single attempt; multiple passes may be required. To avoid a store-and-forward situation, only tokens containing load/cost information (rather than the tasks themselves) should be transferred until task selection is complete.

Task Migration (1) In addition to task selection, a load balancing system must also provide mechanisms to actually move tasks from one computer to another. Task movement must preserve the task’s state and pending communication.

Task Migration (2) Studies show that reducing the size of each task transfer is more important than reducing the number of tasks transferred. Task transfer should not disrupt communication locality. If communication costs are significantly increased, it may be better not to load balance at all.

Load Balancing System (SCPLib) [Architecture diagram: on the user side, application routines and task state; on the library side, library routines and a communication list.]

Conclusion Dynamic load balancing is more flexible than static load balancing and can support irregular or unpredictable problems. Approaches to dynamic load balancing include load evaluation, profitability determination, work transfer calculation, task selection, and task migration. Load balancing should be applied only if it does not disrupt communication locality.