Scheduling in Distributed Systems


Scheduling in Distributed Systems By Arlington Wilson

Presentation content
- Problem definition
- What is distributed-system scheduling, and under what conditions does it apply?
- Ousterhout's three co-scheduling algorithms:
  - Matrix algorithm
  - Continuous algorithm
  - Undivided algorithm
- Co-scheduling effectiveness analysis
- Implicit co-scheduling
- References

Problem definition
Normally each processor does its own local scheduling. For example, processes A and B run on processor 0 while processes C and D run on processor 1, and each processor time-slices between its own processes independently.
[Figure: per-processor time slots, with A and B alternating on processor 0 and C and D alternating on processor 1.]

Problem definition
How can performance be improved when a group of heavily interacting processes runs on different processors? In 1982, John Ousterhout proposed three scheduling techniques based on a concept he called co-scheduling.

Co-Scheduling Assumptions
- Related processes are started together.
- In general, intra-group communication is more likely than inter-group communication.
- Sufficiently many processors are available to handle the largest group.
- Each processor is multiprogrammed with N process slots (N-way multiprogramming).

Algorithm 1 - The Matrix Method
Processes are arranged in a matrix with one column per processor and one row per time slot. Let P = number of processors and Q = number of processes in the group.
Allocation: if row 0 has at least Q empty slots, assign the group to row 0; if not, check row 1, and so on.
Scheduling: a round-robin mechanism runs the rows in turn, one row per time slice.
[Figure: a matrix of time-slot rows over 8 processors, with X marking an occupied slot.]

Algorithm 1 - The Matrix Method
Alternate selection: if the currently executing row has an empty or blocked process slot on some processor, that processor scans down its column for the next runnable process to execute instead.
[Figure: the same matrix, with a processor selecting a process from another row in its own column.]
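The column scan above can be sketched as follows; this is a minimal illustration (the function and parameter names are not from the paper), assuming a matrix of process IDs with `None` marking an empty slot and a caller-supplied `runnable` predicate standing in for "not blocked":

```python
def alternate_select(matrix, row, col, runnable):
    """The slot matrix[row][col] is empty or blocked: scan down
    column col (wrapping around) for the next runnable process."""
    n_rows = len(matrix)
    for step in range(1, n_rows + 1):
        cand = matrix[(row + step) % n_rows][col]
        if cand is not None and runnable(cand):
            return cand
    return None  # nothing runnable in this column: processor idles

# Example: 3 rows x 2 processors; slot (0, 1) is empty,
# so processor 1 falls through to "B" in the next row.
m = [["A", None], [None, "B"], ["C", "D"]]
alternate_select(m, 0, 1, runnable=lambda p: True)  # -> "B"
```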

Algorithm 1 - The Matrix Method
Advantages: simple allocation and scheduling algorithms.
Disadvantages: a group cannot be spread over multiple rows, so many processor slots are left empty; alternate selection causes opportunities for co-scheduling to be missed.
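The matrix method's allocation and round-robin scheduling can be sketched in a few lines. This is an illustrative sketch, not the paper's code; it assumes a list-of-lists matrix with `None` for empty slots:

```python
EMPTY = None

def allocate(matrix, group_id, q):
    """First fit: place a group of q processes in the first row
    that has at least q empty slots; return the row index, or
    None if no single row can hold the whole group."""
    for r, row in enumerate(matrix):
        free = [c for c, slot in enumerate(row) if slot is EMPTY]
        if len(free) >= q:
            for c in free[:q]:
                row[c] = group_id
            return r
    return None

def schedule(matrix, t):
    """Round robin over rows: in time slice t, row t % N runs."""
    return t % len(matrix)

# Example: 2 time-slot rows x 4 processors.
m = [[EMPTY] * 4 for _ in range(2)]
allocate(m, "A", 3)  # fits in row 0
allocate(m, "B", 2)  # row 0 has only 1 free slot, so row 1
```

Note the weakness the slide names: if every row has some free slots but none has Q of them, `allocate` fails even though the machine as a whole has room.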

The Continuous Algorithm
Allocation: start a window of size P at the left end of the linear process sequence. If the number of empty slots in the window is >= Q, allocate the group there. Otherwise, slide the window to the right until its leftmost process slot is empty, and try again.
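A minimal sketch of this window scan (illustrative names; the sequence is a flat list of slots with `None` for empty):

```python
def allocate_continuous(seq, group_id, q, p):
    """Slide a window of size p rightward over seq; allocate the
    group into the first window holding >= q empty slots."""
    start = 0
    while start + p <= len(seq):
        empties = [i for i in range(start, start + p) if seq[i] is None]
        if len(empties) >= q:
            for i in empties[:q]:
                seq[i] = group_id
            return start
        # slide right until the window's leftmost slot is empty
        start += 1
        while start + p <= len(seq) and seq[start] is not None:
            start += 1
    return None  # no window can hold the group

# Example: window size 4; the group of 3 lands at window start 2,
# split across slots 2, 3, and 5 (this split is the fragmentation
# the next slide complains about).
seq = ["A", "A", None, None, "B", None, None, None]
allocate_continuous(seq, "C", 3, 4)  # -> 2
```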

The Continuous Algorithm
Scheduling: start a window of size P at the left end of the process sequence. At the beginning of each time slice, slide the window to the right until its leftmost slot holds the start of a new process group, and run the processes inside the window. When the window contains empty process slots or processes that aren't runnable, use the alternate-selection mechanism.

The Continuous Algorithm
Advantages: better utilization of processor slots.
Disadvantages: as the process sequence becomes populated with non-contiguous empty slots, groups become fragmented during allocation, and fragmentation leads to poor performance.

The Undivided Algorithm
Same as the continuous algorithm, except that during allocation every new group must occupy contiguous slots in the linear process sequence (no dividing a group across scattered slots). It is less efficient at packing than the continuous algorithm, but gives substantially better system behavior under heavy loads due to reduced fragmentation.
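The only change from the continuous sketch is the contiguity requirement: instead of counting empty slots anywhere in the window, the allocator looks for a run of Q consecutive empty slots. A sketch under the same assumptions (illustrative names, `None` for empty):

```python
def allocate_undivided(seq, group_id, q, p):
    """Continuous-style window scan, but the q slots must be
    consecutive: the group is never divided between slots."""
    start = 0
    while start + p <= len(seq):
        run = 0
        for i in range(start, start + p):
            run = run + 1 if seq[i] is None else 0
            if run == q:  # found q consecutive empty slots
                for j in range(i - q + 1, i + 1):
                    seq[j] = group_id
                return i - q + 1
        # slide right until the window's leftmost slot is empty
        start += 1
        while start + p <= len(seq) and seq[start] is not None:
            start += 1
    return None

# Example: the group of 3 must take slots 2-4; the lone empty
# slot 0 is skipped (the packing cost of staying undivided).
seq = [None, "A", None, None, None]
allocate_undivided(seq, "B", 3, 5)  # -> 2
```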

Analysis of the Algorithms
Results are presented in terms of co-scheduling effectiveness: the ability to co-schedule grouped processes, not raw performance. Co-scheduling effectiveness is the mean, over time slices, of the ratio of the number of processors executing co-scheduled processes to the number of processors with runnable processes. (The ideal is an effectiveness of 1.)
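The metric is easy to state as code. A sketch, assuming each time slice is summarized as a pair (processors running co-scheduled processes, processors with runnable processes):

```python
def effectiveness(snapshots):
    """Mean over time slices of
    (# processors running co-scheduled processes) /
    (# processors with runnable processes).
    Slices with no runnable processes are skipped."""
    ratios = [co / runnable for co, runnable in snapshots if runnable]
    return sum(ratios) / len(ratios)

# Three slices: perfectly co-scheduled, half co-scheduled, perfect.
effectiveness([(4, 4), (2, 4), (3, 3)])  # -> (1.0 + 0.5 + 1.0) / 3
```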

What lowers co-scheduling effectiveness?
Straddling: with the continuous and undivided algorithms, a group may straddle the right end of the scheduling window, causing the processes inside the window to execute as a fragment and lowering co-scheduling effectiveness.
Alternate selection: creates fragments and degrades the system's co-scheduling effectiveness.

Analysis of the Algorithms
For average system loads (< 1.0), all three algorithms perform about the same, because straddling and alternate selection are rare. For system loads above 1.0, both straddling and alternate selection occur, causing effectiveness to degrade. Under heavy loads, the undivided algorithm performs 5-10% better than the continuous algorithm, which in turn performs 5-10% better than the matrix algorithm.

Implicit Co-Scheduling [1]
If a process sends a message and then waits for a period of time with no response, it assumes the other processes in its group are not executing and relinquishes the processor. If a process receives a message, it can infer that other processes in the application are executing and that it should remain scheduled. The scheme uses no global coordinator.
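The sender's side of this is a two-phase wait: spin briefly hoping the peer is co-scheduled, then give up the CPU. A minimal sketch (the function name, `poll_reply` callback, and spin limit are illustrative assumptions, not the paper's interface):

```python
import time

def wait_for_reply(poll_reply, spin_limit_s=0.001):
    """After sending a message, spin for up to spin_limit_s polling
    for a reply. A reply means the peer is running (keep the CPU:
    return True); a timeout means it probably isn't co-scheduled,
    so relinquish the processor (here just return False; a real
    system would block or yield)."""
    deadline = time.monotonic() + spin_limit_s
    while time.monotonic() < deadline:
        if poll_reply():
            return True
    return False
```

The local timeout is what lets each node act on its own observations, which is why no global coordinator is needed.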

References
[1] Andrea C. Arpaci-Dusseau, David E. Culler, and Alan Mainwaring. Scheduling with Implicit Information in Distributed Systems. SIGMETRICS '98.
[2] John K. Ousterhout. Scheduling Techniques for Concurrent Systems. 1982.
[3] Andrew S. Tanenbaum. Distributed Operating Systems.