1 Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment (then and now) EECS 750 – Spring 2006 Presented by: Shane Santner, TJ Staley, Ilya Tabakh

2 Agenda Intro The Paper Current state of the art Differences between then and now Shortcomings of Rate Monotonic Analysis Conclusion

3 Agenda Intro The Paper Current state of the art Differences between then and now Shortcomings of Rate Monotonic Analysis Conclusion

4 Intro Scheduling is a problem that has been studied for many years Hard real-time scheduling presents its own set of unique problems Rate Monotonic Scheduling is one approach to the problem of hard real-time scheduling

5 Agenda Intro The Paper Current state of the art Differences between then and now Shortcomings of Rate Monotonic Analysis Conclusion

6 The Paper Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment C. L. Liu and James W. Layland © 1973

7 Introduction Hard Real-Time – Tasks must be guaranteed to complete within a predefined amount of time Soft Real-Time – Statistical distribution of response-time of tasks is acceptable

8 Background Rate Monotonic refers to assigning priorities as a monotonic function of the rate (frequency of occurrence) of those processes. Rate Monotonic Scheduling (RMS) can be accomplished based upon rate monotonic principles. Rate Monotonic Analysis (RMA) can be performed statically on any hard real- time system concept to decide if the system is schedulable.

9 Background We will examine three different types of algorithms –Fixed Priority Assignment (Static) –Deadline Driven Priority Assignment (Dynamic) –Mixed Priority Assignment

10 Environment Paper makes five assumptions about tasks within a hard real-time system: –Requests for all tasks with hard deadlines are periodic with a constant interval between requests –Each task must complete before the next request for it occurs –Tasks are independent of each other –Run-time for each task is constant –All non-periodic tasks are special Initialization tasks Failure routines

11 Environment The restrictions previously mentioned can be applied to a controlled environment such as an assembly line where: –Tasks are periodic –Tasks are sequential, therefore they complete before the next request is issued. –Tasks are dependent on each other, but in a controlled environment this can be handled. –Run-time for each task is constant –Power-on initialization would be the only non-periodic task

12 Fixed Priority Scheduling Algorithm Assign the priority of each task according to its period, so that the shorter the period the higher the priority. Tasks are defined based on: –Tasks are denoted: t1, t2, ..., tm –Request Periods: T1, T2, ..., Tm –Run-Times: C1, C2, ..., Cm
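The assignment rule above can be sketched in a few lines; the task names and parameters here are illustrative (taken from the Figure 1 example), not code from the paper:

```python
# Rate-monotonic priority assignment: shorter period -> higher priority.
tasks = [
    {"name": "t1", "T": 50, "C": 25},   # period T, run-time C
    {"name": "t2", "T": 100, "C": 40},
]

# Sort by period; index 0 is the highest priority.
by_priority = sorted(tasks, key=lambda t: t["T"])
for prio, task in enumerate(by_priority):
    task["priority"] = prio  # 0 = highest

print([(t["name"], t["priority"]) for t in by_priority])
# [('t1', 0), ('t2', 1)]
```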

13 RMA Example Figure 1. Both possible outcomes for static-priority scheduling with two tasks (T1=50, C1=25, T2=100, C2=40) Case 1: Priority(t1) > Priority(t2) Case 2: Priority(t2) > Priority(t1)

14 RMA Example Some task sets are not schedulable Figure 2. Some task sets aren't schedulable (T1=50, C1=25, T2=70, C2=30)
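The deadline miss in Figure 2 can be confirmed numerically with response-time analysis, an exact test for fixed-priority task sets developed after this paper (due to Joseph and Pandya, not part of the 1973 work); a minimal sketch:

```python
import math

def response_time(C, higher, deadline):
    """Fixed-point iteration R = C + sum(ceil(R/Tj) * Cj) over the
    higher-priority tasks; stops once R exceeds the deadline."""
    R = C
    while R <= deadline:
        R_next = C + sum(math.ceil(R / Tj) * Cj for Tj, Cj in higher)
        if R_next == R:
            return R
        R = R_next
    return R

# Task set from the slide: t1 (T=50, C=25) outranks t2 (T=70, C=30).
print(response_time(25, [], 50))          # 25: t1 always meets its deadline
print(response_time(30, [(50, 25)], 70))  # 80: t2 misses its 70-unit deadline
```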

15 Achievable Processor Utilization Processor utilization is defined as the fraction of processor time spent in the execution of the task set.
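The definition above translates directly into code; for the two example task sets on the earlier slides the utilizations come out to 0.9 and roughly 0.93:

```python
# U = sum(Ci / Ti) over the task set, with tasks given as (T, C) pairs.
def utilization(tasks):
    return sum(C / T for (T, C) in tasks)

print(utilization([(50, 25), (100, 40)]))  # 0.9
print(utilization([(50, 25), (70, 30)]))   # ~0.93
```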

16 Relaxing the Utilization Bound The least upper bound on utilization decreases as the number of tasks in the system grows. Under fixed priority scheduling the upper bound approaches ln(2) ≈ 69.3% processor utilization for large task sets. By incorporating dynamic priority assignment of tasks in a hard real-time system we can achieve 100% processor utilization. This method is optimal in the sense that if a set of tasks can be scheduled by any algorithm, it can also be scheduled by the deadline driven scheduling algorithm. This implies that any set of tasks schedulable under the fixed priority scheduling algorithm is also schedulable under dynamic priority scheduling, with processor utilization approaching 100%.
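The Liu-Layland bound behind the ln(2) figure is U(n) = n(2^(1/n) - 1), which starts at 1 for a single task and falls toward ln 2 as the task count grows; a quick check:

```python
import math

def ll_bound(n):
    # Least upper bound on utilization for n tasks under fixed priorities.
    return n * (2 ** (1 / n) - 1)

for n in (1, 2, 3, 10):
    print(n, round(ll_bound(n), 4))  # 1.0, 0.8284, 0.7798, 0.7177
print(round(math.log(2), 4))         # 0.6931 -- the large-n limit
```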

17 The Deadline Driven Scheduling Algorithm Priorities are assigned to tasks according to the deadlines of their current requests. A task will be assigned the highest priority if the deadline of its current request is the nearest Conversely, a task will be assigned the lowest priority if the deadline of its current request is the furthest. At any instant, the task with the highest priority and yet unfulfilled request will be executed.

18 Deadline Driven Scheduling Algorithm Figure 3. (T1=50, C1=25, T2=70, C2=30) The deadline driven algorithm can schedule this task set, which was not schedulable with the fixed priority algorithm.
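A minimal unit-time EDF simulation (my sketch, not code from the paper) can verify this claim over one hyperperiod, assuming deadlines equal periods and all tasks are released at t = 0:

```python
import math

def edf_schedulable(tasks):
    """Unit-time EDF simulation over one hyperperiod; tasks = [(T, C), ...]."""
    H = math.lcm(*(T for T, _ in tasks))  # hyperperiod (requires Python 3.9+)
    jobs = []  # each job is [absolute_deadline, remaining_work]
    for t in range(H):
        for T, C in tasks:
            if t % T == 0:
                jobs.append([t + T, C])  # release a new job
        # Run the pending job with the earliest deadline for one time unit.
        pending = [j for j in jobs if j[1] > 0]
        if pending:
            min(pending, key=lambda j: j[0])[1] -= 1
        # A deadline passing with work left over means the set is not schedulable.
        if any(d <= t + 1 and r > 0 for d, r in jobs):
            return False
    return True

print(edf_schedulable([(50, 25), (70, 30)]))  # True:  U ~ 0.93 <= 1
print(edf_schedulable([(50, 30), (70, 35)]))  # False: U = 1.1 > 1
```

The second call shows the converse: once utilization exceeds 1, no algorithm, EDF included, can meet every deadline.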

19 A Mixed Scheduling Algorithm Tasks with small execution periods are scheduled using the fixed priority algorithm. All other tasks are scheduled dynamically and can only run on the CPU when all fixed priority tasks have completed. Originally motivated by interrupt hardware limitations –Interrupt hardware acted as a fixed priority scheduler –Did not appear to be compatible with a hardware dynamic scheduler

20 Comparison and Comments The fixed scheduling algorithm has a least upper-bound processor utilization of around 70%. This can be quite restrictive when processor utilization is critical. Therefore, the deadline driven scheduling algorithm was developed to increase this bound to a theoretical limit of 100%. Finally, the mixed scheduling algorithm will not be able to achieve 100% processor utilization; however, it will do significantly better than the 70% limitation of the fixed scheduling algorithm.

21 Conclusion Five key assumptions were made to support the ensuing analytical work. –Least defensible are: All tasks have periodic requests Run times are constant These two assumptions should be design goals for any real-time tasks which must receive guaranteed service

22 Conclusion A scheduling algorithm which assigns priorities to tasks in a monotonic relation to their request rates was shown to be optimum among the class of all fixed priority scheduling algorithms. The least upper bound of processor utilization for this algorithm is on the order of 70% for large task sets. The dynamic deadline driven scheduling algorithm was then shown to be globally optimum and capable of achieving full processor utilization. A combination of the two scheduling algorithms appears to provide most of the benefits of the deadline driven scheduling algorithm, and yet may be readily implemented in existing computers with interrupt limitations.

23 Agenda Intro The Paper Current state of the art Differences between then and now Shortcomings of Rate Monotonic Analysis Conclusion

24 Current State of the art Rate Monotonic Analysis is the methodology which has evolved from the paper and is defined as: –A collection of quantitative methods and algorithms that allow engineers to specify, understand, analyze and predict the timing behavior of real-time software systems.

25 Agenda Intro The Paper Current state of the art Differences between then and now Shortcomings of Rate Monotonic Analysis Conclusion

26 What has changed? The original paper was very restrictive A lot of work has been done to extend the capabilities of RMA Rules are made to be broken!

27 Assumption 1 - Problem Original Assumption 1: The requests for all tasks for which hard deadlines exist are periodic, with a constant interval between requests. Limitation: rules out most real-time systems from consideration by excluding sporadic events Solution: periodic task to poll aperiodic events, priority exchange protocol, deferrable server protocol, sporadic server protocol

28 Priority Exchange Protocol A periodic server task is created to process sporadic events If no sporadic events are pending, the server exchanges priority with a periodic task Lets that task run until it completes or until a sporadic task is encountered When a sporadic task is encountered it runs at the server's current priority level Replenishment of the server's execution time occurs periodically

29 Priority Exchange Protocol Pros: Fast average response times Cons: Implementation complexity Accrues unnecessary run-time overhead

30 Deferrable Server Protocol Similar to the priority exchange protocol If no sporadic requests are pending, the server is suspended until a request arrives When a sporadic task arrives, if the server has allotted time left, the executing task is preempted

31 Deferrable Server Protocol Pros: Low implementation complexity Accrues little run-time overhead Cons: Violates fourth assumption (oops)

32 Sporadic Server Protocol The fundamental difference between the deferrable server and the sporadic server is how the servers replenish their execution time The deferrable server gets its full amount replenished at the beginning of each period The sporadic server replenishes the amount of execution time consumed by each task one period after it is consumed

33 Sporadic Server Protocol Pros: Does not accrue as much run-time overhead Does not require as much implementation complexity Replenishment scheme does not periodically require maintenance processing Cons: Still violates fourth assumption but schedulability has been demonstrated

34 Relaxing Assumption 1 With the sporadic server, the "all tasks with hard deadlines are periodic" part of the assumption is relaxed The remaining part of the assumption can be addressed either by taking the minimum possible task period or by employing a mode change (discussed under Assumption 4)

35 Assumption 2 – No problem Original Assumption 2: Each task must complete before the next request for it occurs Solution: The run-time doesn't have to monitor the periods of all tasks. Requests could also be buffered and fed in one per period

36 Assumption 3 – Big Problem Original Assumption 3: The tasks are independent of each other in that requests for a certain task do not depend on the initiation or the completion of requests for other tasks Limitation: No inter-task communication, no semaphores, no I/O Solution: priority inheritance protocol, priority ceiling protocol

37 Priority Inheritance Applies if a lower priority task blocks a higher priority task Lower priority task inherits the priority of higher task for the duration of its critical section
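The inheritance rule can be illustrated with a toy model (no real concurrency; the Mutex class and task names below are hypothetical, for illustration only):

```python
class Task:
    def __init__(self, name, priority):
        self.name, self.base, self.priority = name, priority, priority

class Mutex:
    """Minimal priority-inheritance lock model: a blocked high-priority
    task donates its priority to the current lock owner."""
    def __init__(self):
        self.owner = None

    def acquire(self, task):
        if self.owner is None:
            self.owner = task
            return True
        # Blocked: the owner inherits the waiter's priority if it is higher.
        if task.priority > self.owner.priority:
            self.owner.priority = task.priority
        return False

    def release(self):
        self.owner.priority = self.owner.base  # restore the base priority
        self.owner = None

low, high = Task("low", priority=1), Task("high", priority=10)
m = Mutex()
m.acquire(low)       # low-priority task enters its critical section
m.acquire(high)      # high-priority task blocks on the same lock...
print(low.priority)  # 10: low now runs at high's priority
m.release()
print(low.priority)  # 1: dropped back after the critical section
```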

38 Priority Inheritance Used to address unbounded priority inversion Unbounded priority inversion can arise once semaphores are introduced

39 Priority Ceiling Every semaphore is given a ceiling priority which is at least the priority of the highest priority task that can lock the semaphore This ensures that the process running will be hoisted to the higher priority and allowed to run
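Computing the ceilings themselves is straightforward; the task and semaphore names below are illustrative:

```python
# Ceiling of each semaphore = highest priority among the tasks that may lock it.
task_priority = {"control": 10, "logger": 3, "sensor": 7}
uses = {
    "S1": ["control", "sensor"],  # tasks that may lock S1
    "S2": ["logger", "sensor"],
}

ceilings = {s: max(task_priority[t] for t in users) for s, users in uses.items()}
print(ceilings)  # {'S1': 10, 'S2': 7}
```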

40 Priority Ceiling Developed to eliminate deadlock

41 Assumption 4 Original Assumption 4: Run-time for each task is a constant upper bound for that task and does not vary with time Limitation: cannot add or remove tasks from the task set, nor dramatically alter the time required to process a currently existing task Solution: ceiling priority protocol (under certain conditions)

42 Assumption 4 conditions 1)For every unlocked semaphore S whose priority ceiling needs to be raised, S's ceiling is raised immediately 2)For every locked semaphore S whose priority ceiling needs to be raised, S's ceiling is raised immediately 3)For every semaphore S whose priority ceiling needs to be lowered, S's priority ceiling is lowered when all the tasks which may lock S, and which have priorities greater than the new priority ceiling of S, are deleted 4)If task T's priority is higher than the priority ceilings of locked semaphores S1, …, Sk, which it may lock, the priority ceilings of the given semaphores must first be raised before adding T 5)A task T, which needs to be deleted, can be deleted immediately upon the initiation of a mode change. If T has initiated, then it may be deleted only after completion 6)A task may be added into the system only if sufficient processor capacity exists

43 Assumption 5 Original Assumption 5: Any aperiodic tasks in the system are special … and do not themselves have hard, critical deadlines Limitation: no interrupts Solution: can be relaxed by utilizing techniques in relaxing the 1st and 3rd assumptions

44 Agenda Intro The Paper Current state of the art Differences between then and now Shortcomings of Rate Monotonic Analysis Conclusion

45 Shortcomings of RMA Tends to be pessimistic Scheduling overhead is never taken into account Not a good fit when the workload doesn't conform to the assumptions: –If most of the workload is aperiodic –If the worst case is realized very infrequently –If there is no minimum inter-arrival interval between thread invocations

46 Agenda Intro The Paper Current state of the art Differences between then and now Shortcomings of Rate Monotonic Analysis Conclusion

47 Conclusion RMS was a good start, but had many drawbacks –Overly restrictive RMA extends RMS, relaxing many restrictions RMA is a good analysis framework, but should only be considered a guideline Not appropriate for all situations


49 Sources Heidmann, Paul. Rate Monotonic Analysis Paper. 8 Apr. Klein, Mark. "Rate Monotonic Analysis." Software Technology Roadmap. 14 Dec. Carnegie Mellon Software Engineering Institute. 8 Apr. Liu, C. L. & Layland, J. W. "Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment." Journal of the Association for Computing Machinery 20, 1 (January 1973).

