GHS: A Performance Prediction and Task Scheduling System for Grid Computing
Xian-He Sun
Department of Computer Science, Illinois Institute of Technology
SC/APART, Nov. 22, 2002
Outline
Introduction
Concept and challenge
The Grid Harvest Service (GHS) System
–Design methodology
–Measurement system
–Scheduling algorithms
–Experimental testing
Conclusion
Scalable Computing Software Laboratory
Introduction
Parallel Processing
–Two or more working entities work together toward a common goal for better performance
Grid Computing
–Use distributed resources as a unified computing platform for better performance
New Challenges of Grid Computing
–Heterogeneous systems, non-dedicated environments, relatively large data-access delays
Degradations of Parallel Processing
Unbalanced Workload
Communication Delay
Overhead Increases with the Ensemble Size
Degradations of Grid Computing
Unbalanced Computing Power and Workload
Shared Computing and Communication Resources
Uncertainty, Heterogeneity, and Overhead Increase with the Ensemble Size
Performance Evaluation (improving performance is the goal)
Performance Measurement
–Metrics, parameters
Performance Prediction
–Models, application-resource mapping, scheduling
Performance Diagnosis/Optimization
–Post-execution analysis, algorithm improvement, architecture improvement, state-of-the-art
Parallel Performance Metrics (run-time is the dominant metric)
Run-Time (Execution Time)
Speed: mflops, mips, cpi
Efficiency: throughput
Speedup
Parallel Efficiency
Scalability: the ability to maintain performance gain when system and problem size increase
Others: portability, programmability, etc.
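Speedup and parallel efficiency, listed above, can be stated as a tiny worked example (the function names are my own, not from the slides):

```python
def speedup(t_serial, t_parallel):
    """Speedup S(p) = T(1) / T(p): how much faster p processors are."""
    return t_serial / t_parallel

def parallel_efficiency(t_serial, t_parallel, p):
    """Parallel efficiency E(p) = S(p) / p; 1.0 means ideal scaling."""
    return speedup(t_serial, t_parallel) / p
```

For instance, a job taking 100 s serially and 25 s on 8 processors has speedup 4 but efficiency only 0.5.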
Parallel Performance Models (predicting run-time is the dominant goal)
PRAM (parallel random-access machine)
–EREW, CREW, CRCW
BSP (bulk synchronous parallel) Model
–Supersteps, phase parallel model
Alpha and Beta Model
–α: communication startup time, β: data transfer time per byte
Scalable Computing Model
–Scalable speedup, scalability
LogP Model
–L: latency, o: overhead, g: gap, P: the number of processors
Others
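Under the alpha-beta model above, the time to send an m-byte message is t = α + β·m; a minimal sketch (the parameter values in the usage note are made up for illustration):

```python
def message_time(m_bytes, alpha, beta):
    """Alpha-beta communication model: startup latency alpha plus
    beta seconds per byte of payload."""
    return alpha + beta * m_bytes
```

For example, `message_time(1000, 1e-5, 1e-9)` models a 1000-byte message on a link with 10 µs startup and 1 ns/byte transfer time.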
Research Projects and Tools
Parallel Processing
–Paradyn, W3 (why, when, and where)
–TAU (tuning and analysis utilities)
–Pablo, Prophesy, SCALEA, SCALA, etc.
–For dedicated systems
–Instrumentation, post-execution analysis, visualization, prediction, application performance, I/O performance
Research Projects and Tools
Grid Computing
–NWS (Network Weather Service): monitors and forecasts resource performance
–RPS (Resource Prediction System): predicts CPU availability of a Unix system
–AppLeS (Application-Level Scheduler): an application-level scheduler extended to non-dedicated environments based on NWS
–Short-term, system-level prediction
Do We Need ...
New Metrics for the Computational Grid?
–????
New Models for the Computational Grid?
–Yes
–Application-level performance prediction
New Models for Other Technical Advances?
–Yes
–Data access in hierarchical memory systems
The Grid Harvest Service (GHS) System
A long-term, application-level performance prediction and scheduling system for non-dedicated (Grid) environments
A new prediction model derived by probability analysis and simulation
Non-intrusive measurement and scheduling algorithms
Implementation and testing
Sun/Wu 02
Performance Model (Gong, Sun, Watson, 02)
The remote job has low priority
Local job arrival and service times are modeled based on extensive monitoring and observation
Prediction Formula
U_k(S) | S_k > 0: approximated by a Gamma distribution
Arrival of local jobs follows a Poisson process
Execution time of the owner's jobs follows a general distribution with given mean and standard deviation
Simulation shows the distribution of the local service rate approaches a known distribution
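The model behind this formula can be exercised directly by Monte Carlo simulation: a low-priority remote task accumulates CPU time only when no local job is present, while local jobs arrive as a Poisson process. This sketch assumes exponential local service times for simplicity (the slides allow a general distribution), and all names are mine:

```python
import random

def remote_completion_time(work, lam, mean_service, rng):
    """Simulate the completion time of a low-priority remote task needing
    `work` seconds of CPU on a machine where local jobs arrive as a
    Poisson process with rate `lam` and exponential service times."""
    t = 0.0            # current time
    remaining = work   # remote CPU work still to do
    busy_until = 0.0   # time until which queued local jobs hold the CPU
    next_arrival = rng.expovariate(lam)
    while True:
        free_at = max(t, busy_until)          # CPU becomes free for remote task
        if next_arrival >= free_at + remaining:
            return free_at + remaining        # finishes inside the idle gap
        if next_arrival > free_at:
            remaining -= next_arrival - free_at   # progress before preemption
        t = next_arrival
        busy_until = max(busy_until, t) + rng.expovariate(1.0 / mean_service)
        next_arrival = t + rng.expovariate(lam)
```

With local utilization ρ = λ · mean_service, the remote task progresses at roughly rate 1 − ρ in the long run, so its mean completion time approaches work / (1 − ρ).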
Prediction Formula
Parallel task completion time
Homogeneous parallel task completion time
Mean-time balancing partition
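Mean-time balancing partition, named above, splits the work so that every machine's expected completion time is equal. A sketch assuming expected time w_i / (s_i · (1 − ρ_i)) for speed s_i and local utilization ρ_i; the exact GHS expression is not reproduced on the slide:

```python
def mean_time_balancing_partition(total_work, speeds, utilizations):
    """Split total_work so each machine's expected completion time
    w_i / (s_i * (1 - rho_i)) is the same constant (illustrative model)."""
    eff = [s * (1.0 - u) for s, u in zip(speeds, utilizations)]
    total_eff = sum(eff)
    return [total_work * e / total_eff for e in eff]
```

Each machine then finishes, in expectation, at the same time total_work / Σ eff_i.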
Measurement Methodology
For a parameter whose population has a mean and a standard deviation, a confidence interval for the population mean can be computed from a sample
The smallest sample size n achieving a desired confidence level and a required accuracy r can then be derived
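The standard textbook sample-size formula n = (100 · z · s / (r · x̄))², with z taken from the confidence level and r the accuracy in percent, fits this description; whether GHS uses exactly this form is my assumption:

```python
from math import ceil

# z-values for common two-sided confidence levels
Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def required_sample_size(sample_mean, sample_std, confidence=0.95, accuracy_pct=5.0):
    """Smallest n such that the confidence interval is within
    accuracy_pct percent of the mean: n = (100*z*s / (r*xbar))**2.
    (Standard formula; the slide's exact expression is not shown.)"""
    z = Z[confidence]
    n = (100.0 * z * sample_std / (accuracy_pct * sample_mean)) ** 2
    return ceil(n)
```

For example, a parameter with mean 10 and standard deviation 2 needs 62 samples for a 95% confidence interval within 5% of the mean.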
Measurement and Prediction of Parameters
Utilization
Job Arrival
Standard Deviation of Service Rate
Least-Intrusive Measurement
Select the previous days in the system measurement history;
For each day, average the parameter values measured during the corresponding time interval of that day;
End For
Select a continuous time interval immediately before the prediction point and average the values measured during it;
Output the combined estimate while the accuracy and confidence requirements hold
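One way to read the procedure above (the precise symbols were in the slide's formulas and are not recoverable): combine same-time-of-day samples from previous days with samples from the window just before the prediction point. A sketch with a hypothetical `history` layout of (timestamp, value) pairs:

```python
def least_intrusive_estimate(history, day_len, now, window, n_days):
    """Estimate a parameter (e.g. utilization) from existing measurements
    only: average values observed in the same time-of-day window on each
    of the previous n_days, plus values in the window just before `now`.
    `history` is a list of (timestamp, value) pairs (hypothetical layout)."""
    samples = []
    for d in range(1, n_days + 1):
        start = now - d * day_len        # same wall-clock time, d days ago
        samples += [v for t, v in history if start <= t < start + window]
    samples += [v for t, v in history if now - window <= t < now]
    return sum(samples) / len(samples) if samples else None
```

Because it reuses history already collected by the monitor, no extra probing load is placed on the machines, which is the "least-intrusive" property.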
Scheduling Algorithm: Scheduling with a Given Number of Sub-tasks
List a set of lightly loaded machines;
List all possible machine sets of the given size;
For each machine set:
–Use mean-time balancing partition to partition the task
–Use the prediction formula to calculate the mean and coefficient of variation of the completion time
–If this set improves on the current best, record it
End For
Assign the parallel task to the best machine set
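The enumeration above can be sketched as follows, with the simplified cost model w / (s · (1 − ρ)) standing in for the GHS prediction formula; the machine tuples and all names are hypothetical:

```python
from itertools import combinations

def schedule_fixed_k(total_work, machines, k):
    """Try every k-machine subset, partition by mean-time balancing so
    every machine has the same expected completion time, and keep the
    subset minimizing that time.  `machines` is a list of
    (name, speed, utilization) tuples (illustrative, not the GHS model)."""
    best_set, best_time = None, float("inf")
    for subset in combinations(machines, k):
        eff = sum(s * (1.0 - u) for _, s, u in subset)
        t = total_work / eff     # equal expected time on every machine
        if t < best_time:
            best_set, best_time = subset, t
    return best_set, best_time
```

Enumerating all subsets is exponential in the number of machines, which is why the heuristic algorithm later on the slides matters in practice.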
Optimal Scheduling Algorithm
List a set of lightly loaded machines;
While untried numbers of sub-tasks remain:
–Run Scheduling with the current number of sub-tasks
–If the result improves on the current best, record it
End While
Assign the parallel task to the best machine set
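The optimal algorithm above wraps the fixed-size search in a loop over the sub-task count. In this sketch a fixed per-machine overhead stands in for the variance term that makes using fewer machines sometimes preferable; the cost model is illustrative, not the GHS predictor, and all names are mine:

```python
from itertools import combinations

def schedule_optimal(total_work, machines, overhead):
    """Try every sub-task count k and every k-machine subset, keeping
    the choice with the smallest predicted completion time.
    Cost: total_work / sum(effective power) + overhead * k, where the
    overhead term penalizes spreading work too thin (illustrative).
    `machines`: list of (name, speed, utilization) tuples (hypothetical)."""
    best_set, best_time = None, float("inf")
    for k in range(1, len(machines) + 1):
        for subset in combinations(machines, k):
            eff = sum(s * (1.0 - u) for _, s, u in subset)
            t = total_work / eff + overhead * k
            if t < best_time:
                best_set, best_time = subset, t
    return best_set, best_time
```

Note that the very slow machine "c" in the test below is correctly left out: its contribution does not pay for its overhead.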
Heuristic Scheduling Algorithm
List a set of lightly loaded machines;
Sort the machines in decreasing order of effective computing power;
Use the task ratio to find the upper limit q on the number of machines;
Use bisection search to find the machine count p ≤ q whose predicted completion time is minimum
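The bisection step above assumes the predicted completion time is unimodal in the machine count p, so binary search on the discrete slope finds the minimum in O(log q) evaluations. A sketch under the same illustrative cost model used earlier (not the GHS predictor):

```python
def heuristic_schedule(total_work, machines, overhead, q):
    """Bisection search over p, the number of machines used, with
    machines pre-sorted by decreasing effective power.  Assumes the
    cost total_work / prefix_power(p) + overhead * p is unimodal in p.
    `machines`: list of (speed, utilization) pairs (hypothetical)."""
    eff = [s * (1.0 - u) for s, u in machines]
    prefix, total = [], 0.0
    for e in eff:                     # prefix sums of effective power
        total += e
        prefix.append(total)
    def cost(p):                      # use the p most powerful machines
        return total_work / prefix[p - 1] + overhead * p
    lo, hi = 1, min(q, len(machines))
    while lo < hi:
        mid = (lo + hi) // 2
        if cost(mid) <= cost(mid + 1):
            hi = mid                  # minimum is at mid or to its left
        else:
            lo = mid + 1              # cost still decreasing; go right
    return lo, cost(lo)
```

This replaces the exponential subset enumeration with log-many predictor evaluations, at the price of only considering prefixes of the sorted machine list.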
Embedded in Grid Run-time System
Experimental Testing
Application-level prediction: remote task completion time on a single machine
Prediction of parallel task completion time
Prediction on a multi-processor with a local scheduler
Partition and Scheduling
Comparison of three partition approaches
Performance Gain with Scheduling
Execution time with different scheduling strategies
Cost and Gain
Measurement cost reduces when the system is steady
Calculation time of the prediction component (time in seconds versus number of nodes)
The GHS System
A Good Sample and a Successful Story
–Performance modeling
–Parameter measurement and prediction schemes
–Application-level performance prediction
–Partition and scheduling
It has its limitations, too
–Communication and data access delay
What We Know, What We Do Not
We know there is no deterministic prediction in a non-deterministic, shared environment
We do not know how to reach a fuzzy engineering solution
–Heuristic algorithms, rules of thumb
–Stochastic methods, AI, data mining, statistics, etc.
–Innovative methods, etc.
Conclusion
Application-level Performance Evaluation
–Code-machine pairing versus machine, algorithm, or algorithm-machine evaluation
New Requirements under New Environments
We know we are making progress; we do not know if we can keep up with the pace of technology improvement