
1 Divisible Load Scheduling: A Tutorial. Thomas Robertazzi, University at Stony Brook

2 What is a Divisible Load?
• A computational & networkable load that is arbitrarily partitionable (divisible) amongst processors and links.
• There are no precedence relations.

3 Simple Application Example
• Problem: Sum 1000 trillion numbers
• Approach: Partition the numbers among 100 processors
• But how?

4 Simple Application Example
• To optimize solution time (maximize speedup) one needs to take into account heterogeneous link and processor speeds, computation and communication intensities, interconnection topology, and scheduling policy.
• Divisible Load Scheduling Theory Can Do This!
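As an illustration of the kind of split such a theory produces, here is a minimal sketch (added here, not part of the original slides) that divides a load among heterogeneous workers so that they all finish at the same time. It assumes concurrent distribution from a root that does no computing itself and children without front ends; all parameter values are hypothetical.

```python
# c_i = z_i*Tcm + w_i*Tcp is the time for worker i to receive and process
# one unit of load; equal finish times => alpha_i proportional to 1/c_i.

def divisible_split(z, w, Tcm=1.0, Tcp=1.0):
    """Return load fractions alpha_i that equalize the workers' finish times."""
    unit_cost = [zi * Tcm + wi * Tcp for zi, wi in zip(z, w)]
    inv = [1.0 / c for c in unit_cost]
    total = sum(inv)
    return [x / total for x in inv]

# Example: three heterogeneous workers (hypothetical parameters).
z = [0.2, 0.5, 1.0]   # inverse link speeds
w = [1.0, 2.0, 4.0]   # inverse processor speeds
alpha = divisible_split(z, w)
finish = [a * (zi + wi) for a, zi, wi in zip(alpha, z, w)]
print(alpha)   # fractions sum to 1
print(finish)  # all (nearly) equal finish times
```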

5 Applications (Generic)
• Grid Computing/Meta-computing
• Data Intensive Computing
• Sensor Processing
• Image Processing
• Scientific/Engineering Computing
• Financial Computing

6 Applications (Specific)
• Pattern Searching
• Database Computation
• Matrix-Vector Computation
• E&M Field Calculation (CAD)
• Edge Detection

7 DLT Modeling Advantages
• Linear and Deterministic Modeling
• Tractable Recursive/Linear Equation Solution
• Schematic Language
• Equivalent Elements
• Many Applications

8 Interconnection Topologies
• Linear Daisy Chain
• Bus
• Single Level and Multilevel Trees
• Mesh
• Hypercube

9 Directions: Scalability
• Sequential Distribution (Saturation)
• Simultaneous Distribution (Scalable)
[Tree diagrams comparing the two distribution strategies omitted]
(Cheng & Robertazzi; Hung & Robertazzi)

10 An Example
• Model Specifications:
  • A star network (single-level tree network), and a multi-level tree.
  • Computation and transmission time is a linear function of the size of the load.
  • Level to level: store-and-forward switching.
  • Same level: concurrent load distribution.
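The linear cost model referred to here is usually written with the standard DLT notation. The following summary is added for reference; the symbol conventions are the common ones from the DLT literature, not copied from the slide itself.

```latex
% Standard DLT notation (assumed; not reproduced from the slides):
%   \alpha_i : fraction of the total load assigned to processor i, \sum_i \alpha_i = 1
%   w_i     : inverse computing speed of processor i
%   z_i     : inverse speed of the link to processor i
%   T_{cp}  : time to process the entire load on a standard processor
%   T_{cm}  : time to transmit the entire load over a standard link
\begin{align*}
  \text{computation time of processor } i &= \alpha_i\, w_i\, T_{cp} \\
  \text{communication time to processor } i &= \alpha_i\, z_i\, T_{cm}
\end{align*}
```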

11 Children without Front End:
• After receiving the assigned data, each child proceeds to process the data.
• (A processor without a front end cannot communicate and compute at the same time.)

12 Timing Diagram (single-level tree): Children without Front End
[Timing diagram omitted]

13 m+1 Unknowns vs. m+1 Equations
• Recursive equations
• Normalization equation
(Both are reconstructed in the sketch below.)
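The equations themselves were figures in the original slides and are not in this transcript. A plausible reconstruction, assuming the slide-10 model (concurrent distribution to m children without front ends, plus a computing root p_0) and the standard optimality condition that all processors stop computing at the same instant:

```latex
% Hedged reconstruction (assumes a computing root p_0 and children p_1..p_m).
% Equal finish times give m equations; conservation of load gives one more.
\begin{align*}
  \alpha_0\, w_0\, T_{cp} &= \alpha_i\, z_i\, T_{cm} + \alpha_i\, w_i\, T_{cp},
      \qquad i = 1,\dots,m \\
  \sum_{i=0}^{m} \alpha_i &= 1
\end{align*}
% m+1 linear equations in the m+1 unknowns \alpha_0, \dots, \alpha_m.
```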

14 Distribution Solution
(The closed-form expressions are reconstructed in the sketch below.)
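Solving the reconstructed equations above for the load fractions gives the following closed form. This is an inference from the assumed model, not the original slide content:

```latex
% Follows directly from the equal-finish-time equations sketched above.
\begin{align*}
  \alpha_i &= \alpha_0 \,\frac{w_0\, T_{cp}}{z_i\, T_{cm} + w_i\, T_{cp}},
      \qquad i = 1,\dots,m \\
  \alpha_0 &= \left( 1 + \sum_{i=1}^{m} \frac{w_0\, T_{cp}}{z_i\, T_{cm} + w_i\, T_{cp}} \right)^{-1}
\end{align*}
```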

15 The load distribution solution is similar in form to the solution of the state-dependent M/M/1 queueing system.
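As a reminder of the analogy (added here, not part of the slides), the state-dependent M/M/1 queue likewise expresses every state probability in terms of a reference state plus a normalization condition, just as each fraction above is α_0 times a ratio of speed parameters with Σ α_i = 1:

```latex
% State-dependent M/M/1 (birth-death) solution, for comparison:
\begin{align*}
  p_n &= p_0 \prod_{k=1}^{n} \frac{\lambda_{k-1}}{\mu_k},
  \qquad \sum_{n=0}^{\infty} p_n = 1
\end{align*}
```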

16 Similarities to Queueing Theory
• Linear model and tractable solutions
• Schematic Language
• Equivalent Elements
• Infinite Size Networks

17 Speedup Analysis

18 Speedup Analysis (continued)
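The speedup derivation is not reproduced in the transcript. Under the reconstructed single-level-tree solution above, speedup is the one-processor solution time divided by the parallel finish time, which works out to 1/α_0. The short sketch below is an added numerical illustration under those same assumptions (hypothetical parameters), not the original analysis:

```python
# Load fractions and speedup for a single-level tree with a computing
# root p0 and children without front ends.

def star_fractions(w0, w, z, Tcp=1.0, Tcm=1.0):
    """alpha_0..alpha_m from equal finish times: a0*w0*Tcp = ai*(zi*Tcm + wi*Tcp)."""
    ratios = [w0 * Tcp / (zi * Tcm + wi * Tcp) for wi, zi in zip(w, z)]
    a0 = 1.0 / (1.0 + sum(ratios))
    return [a0] + [a0 * r for r in ratios]

w0 = 1.0                 # root's inverse computing speed
w = [1.0, 1.0, 2.0]      # children's inverse computing speeds
z = [0.2, 0.2, 0.5]      # inverse link speeds
alpha = star_fractions(w0, w, z)

finish_time = alpha[0] * w0 * 1.0    # root finishes at a0*w0*Tcp
speedup = (w0 * 1.0) / finish_time   # equals 1 / alpha[0]
print(alpha, sum(alpha))             # fractions sum to 1
print(speedup)
```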

19 Tree Network (Children without Front Ends)

20 Collapsing Single-Level Trees

21 Bandwidth of Fat Tree
• Definition: The bandwidth of level j in a fat tree can be defined as p^(j-1) z.

22 Directions: Sequencing and Installments
• Daisy Chain Surprise
• Efficiency Rule
(Ghose, Mani & Bharadwaj)

23 Directions: Sequencing and Installments
• Multi-installment for Sequential Distribution
[Multi-installment timing diagram omitted]
(Ghose, Mani & Bharadwaj)

24 Directions: Sequencing and Installments
• Diminishing returns in using multi-installment distribution.
(Ghose, Mani & Bharadwaj)

25 Directions: Sequencing and Installments
(Drozdowski)

26 Directions: Time-Varying Modeling
• Can be solved with integral calculus.
(Sohn & Robertazzi)

27 Directions: Monetary Cost Optimization
• Minimize the total monetary cost of computing on a bus network of processors: $\min C_{Total} = \sum_{n=1}^{N} \alpha_n c_n w_n T_{cp}$
• Optimal sequential distribution if $c_{n-1} w_{n-1} < c_n w_n$ for all n.
(Sohn, Luryi & Robertazzi)
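A toy sketch of this cost view, under the cost model reconstructed above (c_n is a hypothetical cost per second of processor n; all values are made up for illustration):

```python
# Total cost and cost-ordered distribution sequence (toy example).

def total_cost(alpha, c, w, Tcp=1.0):
    """C_total = sum_n alpha_n * c_n * w_n * Tcp."""
    return sum(a * cn * wn * Tcp for a, cn, wn in zip(alpha, c, w))

# Ordering processors so that c*w is non-decreasing mirrors the slide's
# optimality condition c_{n-1} w_{n-1} < c_n w_n.
c = [3.0, 1.0, 2.0]   # cost per second of each processor
w = [1.0, 2.0, 1.5]   # inverse computing speeds
order = sorted(range(len(c)), key=lambda n: c[n] * w[n])
print(order)                               # distribution sequence by increasing c_n * w_n
print(total_cost([0.5, 0.3, 0.2], c, w))   # cost of one particular load split
```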

28 Directions: Monetary Cost Optimization
• 2 US Patents:
  • Patent 5,889,989 (1999): Processor Cost
  • Patent 6,370,560 (2001): Processor and Link Cost
• Enabling technology for an open e-commerce market in leased proprietary computing.
(Sohn, Charcranoon, Luryi & Robertazzi)

29 Directions: Database Modeling
• Expected time to find multiple records in a flat-file database.
(Ko & Robertazzi)

30 Directions: Realism
• Finite Buffers (Bharadwaj)
• Job Granularity (Bharadwaj)
• Queueing Model Integration

31 Directions: Experimental Work
• Database Join (Drozdowski)

32 Directions: Future Research
• Operating Systems: Incorporate divisible load scheduling into (distributed) operating systems.
• Measurement Process Modeling: Integrate measurement process modeling into divisible scheduling.

33 Directions: Future Research
• Pipelining (Dutot)
  • Concept: Distribute load to further processors first for speedup improvement.
  • Improvement reported for daisy chains.

34 Directions: Future Research
• System Parameter Estimation (Ghose):
  • Concept: Send small “probing” loads across links and to processors to estimate available effort.
  • Challenge: Rapid change in link & processor state.

35 DLT has a Good Future
• Many applications, including wireless sensor networks
• Tractable (Modeling & Computation)
• Rich Theoretical Basis

36 Thank you! Questions???

