2 Challenges of Process Allocation in Distributed Systems Presentation 1 Group A4: Syeda Taib, Sean Hudson, Manasi Kapadia

3 OUTLINE
Definitions Related to Distributed Systems
Problem Definition
Allocation Strategies
Classifications of Process Allocation
Process Allocation Algorithms
Summary
References

4 Definitions Related to Distributed Systems
Multiprocessor Systems: A collection of tightly-coupled processors that share common memory.
Distributed Systems (the hardware perspective): Several different, conflicting definitions exist in the literature. Broadly, a collection of loosely-coupled, independent processors that do not share common memory and are linked by some type of communication network. The processors can be geographically dispersed (e.g. across a LAN) or tightly clustered. Sometimes called multicomputers.

5 Definitions Related to Distributed Systems (cont.)
Distributed Systems (the software perspective): The software for these systems is called a Distributed Operating System (DOS). A DOS presents a single, unified view of all system resources regardless of their physical location. The services provided by a DOS are completely transparent to users.

6 Problem Definition
In a collection of independent processor nodes, how do we decide on which processor to run a particular process?
Possible optimization goals:
Maximize overall CPU utilization.
Minimize average response time.
Minimize the response ratio (roughly, the time a process takes on this system divided by the time it would take on an unloaded reference processor).
Assumptions:
All processors have the same architecture (clock speeds may differ).
All processors are fully connected to each other.

7 Characteristics of Process Allocation Algorithms
Static vs. Dynamic.
Non-migratory vs. Migratory.
Deterministic vs. Heuristic.
Centralized vs. Distributed.
Optimal vs. Sub-optimal.
Local vs. Global.
Sender-initiated vs. Receiver-initiated.

8 Static vs. Dynamic Allocation
Static Allocation: The initial processor assignment is fixed at compile time.
Dynamic Allocation: The initial processor assignment is determined at run time.

9 Non-migratory vs. Migratory Allocation
Non-migratory Allocation: Once a process is assigned to a processor, it cannot move until it terminates. Simple to implement, but can lead to unfair load balancing.
Migratory Allocation: Processes are allowed to move to other processors during execution. Can be used to achieve better load balancing, but greatly adds complexity to the system design.

10 Deterministic vs. Heuristic Allocations
Deterministic Allocation: Information about process behavior is known in advance. This can include the list of processes and their computing requirements, file requirements, communication requirements, etc. Allows an "optimal" assignment of processes.
Heuristic Allocation: Realistically, process behavior is not known in advance because loads on the system are unpredictable. Uses an approximation of the real load. Also known as an ad hoc technique.

11 Centralized vs. Distributed Allocations
Centralized Allocation: All necessary information is collected in one central location, which allows better decisions to be made. Less robust due to the single point of failure, and the central node becomes a performance bottleneck under heavy loads.
Distributed Allocation: All necessary information is collected locally, and the process assignment is made locally. Increases communication overhead and complexity.

12 Optimal vs. Sub-optimal Allocations
Optimal Allocation: Finds the best possible assignment for the chosen goal, but is typically expensive to compute.
Sub-optimal Allocation: An acceptable, approximate solution.

13 Local vs. Global Allocations
Relates to the transfer policy: the transfer policy determines when to transfer a process.
Local Allocation: The transfer decision is based on local load information. If the machine's load is below some threshold, keep the process; otherwise, transfer it somewhere else (see the sketch below). Simple, but not an optimal solution.
Global Allocation: The transfer decision is based on global load information. Provides slightly better results, but is expensive to implement.
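A minimal sketch of a local, threshold-based transfer policy, assuming the load metric is a run-queue length and the threshold value is an arbitrary illustrative choice (neither is specified on the slides):

```python
# Hypothetical illustration of a local, threshold-based transfer policy.
THRESHOLD = 4  # assumed maximum acceptable local run-queue length

def place_process(local_load, pick_remote_node):
    """Keep a newly created process locally if the local load is below the
    threshold; otherwise hand it off to some other node."""
    if local_load < THRESHOLD:
        return "local"
    # Choosing *where* to send the process is the location policy
    # (covered on later slides); here we delegate to a caller-supplied function.
    return pick_remote_node()

# Example: an overloaded node ships the process to a fixed neighbour.
print(place_process(local_load=6, pick_remote_node=lambda: "node-7"))
```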

14 Sender-Initiated vs. Receiver-Initiated Allocations
Relates to the location policy: the location policy determines where to transfer a process.
Sender-Initiated Allocation: An overloaded (sender) machine sends requests for help to other machines in order to offload its new processes. The sender looks for spare CPU cycles on other machines.

15 Sender-Initiated vs. Receiver-Initiated Allocations (cont.)
Receiver-Initiated Allocation: An under-loaded (receiver) machine announces to other machines that it can take on extra work.
Sender-initiated: "I'm overloaded!" Receiver-initiated: "I'm free tonight!"

16 Some Design Issues
How do we measure load?
Number of processes on each machine: simple, but misleading unless background processes, daemons, etc. are excluded from the count.
Number of running or ready processes.
CPU utilization, measured using timer interrupts (see the sketch below).
Is the communication cost justified? Transferring the process may not improve performance.
Is complex better? The performance of an algorithm does not necessarily increase with its complexity.
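A minimal sketch of two load measures on a Unix-like system: the ready-process count via the load average, and CPU utilization approximated from the kernel's tick counters in /proc/stat (Linux-specific; the slides do not prescribe this mechanism):

```python
# Hypothetical load measurement on a Unix-like system.
import os
import time

def ready_process_load():
    """1-minute average number of running/ready processes (Unix load average)."""
    return os.getloadavg()[0]

def cpu_utilization(interval=1.0):
    """Approximate CPU utilization by sampling /proc/stat twice and comparing
    idle time against total time over the interval (Linux only)."""
    def snapshot():
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + fields[4]      # idle + iowait ticks
        return idle, sum(fields)
    idle1, total1 = snapshot()
    time.sleep(interval)
    idle2, total2 = snapshot()
    busy = (total2 - total1) - (idle2 - idle1)
    return busy / (total2 - total1)

print(ready_process_load(), cpu_utilization())
```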

17 Allocation Algorithms
Graph-Theoretic Deterministic Algorithm
Up-Down Algorithm
Sender-Initiated Distributed Heuristic Algorithm
Receiver-Initiated Distributed Heuristic Algorithm

18 Graph-Theoretic Deterministic Algorithm
Given: the CPU and memory requirements of each process, and the average amount of traffic between each pair of processes.
The system is represented as a weighted graph: each node represents a process, and each arc represents the flow of messages between a pair of processes.
Partition the graph into k disjoint sub-graphs, subject to constraints (e.g. limits on the total CPU and memory requirements of each sub-graph).

19 Graph-Theoretic Deterministic Algorithm (cont.)
Arcs that go from one sub-graph to another represent network traffic.
Total network traffic is the sum of the weights of the arcs cut by the partition (e.g. 30 units in the example figure).
The goal is to find the partitioning that minimizes the network traffic while meeting all the constraints.
[Figure: nine processes A through I joined by weighted arcs, partitioned across CPU 1, CPU 2, and CPU 3 by dotted cut lines.]

20 Graph-Theoretic Deterministic Algorithm (cont.)
Look for clusters of processes that are tightly coupled (high intra-cluster traffic flow) but which interact little with other clusters (low inter-cluster traffic flow).
Disadvantage: limited applicability, because all of this information about the system must be known in advance.
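A minimal sketch of the underlying idea, not the deterministic algorithm itself: brute-force every assignment of processes to CPUs, discard those that violate a (simplified) per-CPU capacity constraint, and keep the one with the least cut traffic. The traffic figures, CPU count, and capacity limit are hypothetical:

```python
# Hypothetical sketch: exhaustive search for the process-to-CPU partition that
# minimizes inter-CPU message traffic. Only practical for very small graphs.
from itertools import product

# traffic[(a, b)] = average message traffic between processes a and b (assumed values)
traffic = {("A", "B"): 3, ("B", "C"): 2, ("A", "D"): 5, ("C", "D"): 4, ("D", "E"): 1}
processes = sorted({p for pair in traffic for p in pair})
NUM_CPUS = 2
MAX_PER_CPU = 3  # stand-in for the CPU/memory constraints on each sub-graph

def cut_traffic(assignment):
    """Sum the weights of arcs whose endpoints sit on different CPUs."""
    return sum(w for (a, b), w in traffic.items() if assignment[a] != assignment[b])

best = None
for cpus in product(range(NUM_CPUS), repeat=len(processes)):
    assignment = dict(zip(processes, cpus))
    # Enforce the simplified per-CPU capacity constraint.
    if any(list(assignment.values()).count(c) > MAX_PER_CPU for c in range(NUM_CPUS)):
        continue
    cost = cut_traffic(assignment)
    if best is None or cost < best[0]:
        best = (cost, assignment)

print("minimum cut traffic:", best[0], "assignment:", best[1])
```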

21 Up-Down Algorithm
A centralized usage table, maintained by a coordinator:
Each workstation w has an entry U(w) in the table, initially 0.
The table is updated whenever a significant event occurs.
While workstation w is running processes on another workstation, U(w) accumulates positive penalty points over time.
While workstation w has unsatisfied requests pending, U(w) accumulates negative penalty points over time.
When no requests are pending and its processors are idle, U(w) is moved a fixed amount closer to zero, until it reaches zero.
Hence, the score goes up and down.

22 Up-Down Algorithm (cont.)
Centralized usage table (cont.):
A positive score indicates the workstation is a net user of system resources; a negative score means it needs resources; zero is neutral.
Allocation decisions are made from the usage table whenever a scheduling event occurs.
When a process is created, its machine asks the usage table coordinator to allocate it a processor. If none is available, the request is temporarily denied and a note is made of the request.
When a processor becomes free, the pending request whose owner has the lowest score wins (see the sketch below). This allocates capacity fairly.
Disadvantage: the central coordinator soon becomes a bottleneck.
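A minimal sketch of an up-down coordinator, assuming per-tick point values, a per-workstation decay rule, and an event interface that are illustrative choices rather than the slides' exact scheme:

```python
# Hypothetical sketch of an up-down usage-table coordinator.
class UpDownCoordinator:
    USAGE_PENALTY = 1    # added per tick while a workstation uses a remote processor
    WAIT_CREDIT = 1      # subtracted per tick while a request waits unsatisfied
    DECAY = 1            # step toward zero per otherwise-idle tick

    def __init__(self, workstations):
        self.score = {w: 0 for w in workstations}   # U(w) for each workstation
        self.pending = []                           # workstations with pending requests
        self.using_remote = set()                   # workstations running work elsewhere

    def tick(self):
        """Apply the up/down scoring rules once per timekeeping interval."""
        for w in self.score:
            if w in self.using_remote:
                self.score[w] += self.USAGE_PENALTY
            elif w in self.pending:
                self.score[w] -= self.WAIT_CREDIT
            elif self.score[w] > 0:
                self.score[w] = max(0, self.score[w] - self.DECAY)
            elif self.score[w] < 0:
                self.score[w] = min(0, self.score[w] + self.DECAY)

    def request_processor(self, w, processor_free):
        """Grant a free processor immediately, otherwise queue the request."""
        if processor_free:
            self.using_remote.add(w)
            return True
        self.pending.append(w)
        return False

    def processor_freed(self):
        """Give the freed processor to the pending owner with the lowest score."""
        if not self.pending:
            return None
        winner = min(self.pending, key=lambda w: self.score[w])
        self.pending.remove(winner)
        self.using_remote.add(winner)
        return winner

coord = UpDownCoordinator(["ws1", "ws2", "ws3"])
coord.request_processor("ws1", processor_free=True)
coord.request_processor("ws2", processor_free=False)
coord.tick()
print(coord.score)  # ws1 penalized, ws2 credited, ws3 unchanged
```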

23 Sender-Initiated Distributed Heuristic Algorithm
The process creator polls other nodes, chosen at random.
The process is transferred if the load on the polled node is below some threshold.
If no suitable node is found within a limited number of probes, the process runs on the originating node.
The random polling policy avoids thrashing by imposing a limit on the number of probes per transfer.
Drawback: under high system loads, the constant polling consumes CPU cycles precisely when they are scarce (see the sketch below).
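A minimal sketch of the sender-initiated probing step, with assumed node loads, threshold, and probe limit (the real system would exchange messages rather than read a shared table):

```python
# Hypothetical sketch of sender-initiated probing.
import random

THRESHOLD = 3     # a polled node accepts the process if its load is below this
PROBE_LIMIT = 3   # maximum number of random probes per new process

def place_new_process(my_node, loads):
    """Called on the node that just created a process."""
    candidates = [n for n in loads if n != my_node]
    for target in random.sample(candidates, min(PROBE_LIMIT, len(candidates))):
        if loads[target] < THRESHOLD:   # "poll" the target and check its load
            return target               # transfer the new process there
    return my_node                      # give up and run it locally

loads = {"node0": 5, "node1": 2, "node2": 6, "node3": 1}
print(place_new_process("node0", loads))
```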

24 Receiver-Initiated Distributed Heuristic Algorithm
When one of its processes completes, a node polls other machines for more work.
Transfer decisions are made at process completion time, based on CPU queue length.
The node randomly polls a limited number of other nodes until it finds one loaded enough to hand over work.
If the search fails, the node waits for some predetermined time interval before polling again.
Advantage: it does not put extra load on the system at critical (heavily loaded) times (see the sketch below).
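A minimal sketch of the receiver-initiated probing step, mirroring the previous sketch with assumed queue lengths, threshold, probe limit, and retry delay:

```python
# Hypothetical sketch of receiver-initiated probing (work seeking).
import random

OVERLOAD_THRESHOLD = 4   # a polled node gives up work if its CPU queue is longer than this
PROBE_LIMIT = 3          # maximum number of random probes per attempt
RETRY_DELAY = 5.0        # seconds to wait before polling again after a failed search

def find_work(my_node, queue_lengths):
    """Called on an under-loaded node after one of its processes completes."""
    candidates = [n for n in queue_lengths if n != my_node]
    for target in random.sample(candidates, min(PROBE_LIMIT, len(candidates))):
        if queue_lengths[target] > OVERLOAD_THRESHOLD:
            return target        # ask this overloaded node for a process
    return None                  # nothing found; caller sleeps RETRY_DELAY and retries

queue_lengths = {"node0": 0, "node1": 6, "node2": 2, "node3": 5}
print(find_work("node0", queue_lengths))
```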

25 Summary

26 References
Casavant, Thomas and Singhal, Mukesh. Distributed Computing Systems. IEEE Press.
Mullender, Sape (ed.). Distributed Systems. ACM Press.
Stallings, William. Operating Systems.
Tanenbaum, Andrew. Distributed Operating Systems.

