Real-time operating systems: II

1 Real-time operating systems: II
Theoretical foundations
Task characteristics of a real workload
Round-robin scheduling
Cyclic executives
Rate-monotonic scheduling
Earliest deadline first
Intertask communication and synchronization
© Copyright 2004 Dr. Phillip A. Laplante

2 Task characteristics of a real workload
Precedence constraints: specify whether any task(s) must precede other tasks.
Release or arrival time ri,j: the release time of the jth instance of task i.
Phase Φi: the release time of the first instance of task i.
Response time: the time span between a task's activation and its completion.
Absolute deadline di: the instant of time by which the task must complete.
Relative deadline Di: the maximum allowable response time of the task.
Laxity type: the notion of urgency or leeway in a task's execution.
Period pi: the minimum length of the interval between the release times of consecutive instances of task i.
Execution time ei: the (maximum) amount of time required to complete the execution of task i when it executes alone and has all the resources it requires.

3 Task characteristics of a real workload
Mathematically, some of the parameters listed above are related as follows:

Φi = ri,1 and ri,j = Φi + (j − 1)pi

di,j, the absolute deadline of the jth instance of task i, is as follows:

di,j = Φi + (j − 1)pi + Di

If the relative deadline of a periodic task is equal to its period pi, then

di,j = Φi + j·pi

where j is some positive integer greater than or equal to one, corresponding to the jth instance of that task.
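The relations above can be checked numerically. A minimal sketch in C; the helper names release_time and abs_deadline are illustrative assumptions, not from the slides:

```c
#include <assert.h>

/* r_ij = phi_i + (j - 1) * p_i : release time of the jth instance */
int release_time(int phase, int period, int j) {
    return phase + (j - 1) * period;
}

/* d_ij = r_ij + D_i : absolute deadline of the jth instance */
int abs_deadline(int phase, int period, int rel_deadline, int j) {
    return release_time(phase, period, j) + rel_deadline;
}
```

For a task with phase 2, period 10, and relative deadline equal to its period, the third instance is released at time 22 and must complete by time 32, which equals Φi + 3·pi as the special case above predicts.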

4 Task characteristics of a real workload
A simple task model has the following simplifying assumptions:
All tasks in the task set are strictly periodic.
The relative deadline of a task is equal to its period/frame.
All tasks are independent; there are no precedence constraints.
No task has any non-preemptible section, and the cost of preemption is negligible.
Only processing requirements are significant; memory and I/O requirements are negligible.
For real-time systems, it is of utmost importance that the scheduling algorithm produces a predictable schedule.
Many real-time operating systems use a round-robin scheduling policy because it is simple and predictable.

5 Real-time operating systems: II
Theoretical foundations
Task characteristics of a real workload
Round-robin scheduling
Cyclic executives
Rate-monotonic scheduling
Earliest deadline first
Intertask communication and synchronization

6 Round-robin scheduling
In a round-robin system, several processes are executed sequentially to completion, often in conjunction with a cyclic executive.
In round-robin systems with time slicing, each executable task is assigned a fixed-time quantum called a time slice in which to execute. A fixed-rate clock is used to initiate an interrupt at a rate corresponding to the time slice. The task executes until it completes or its execution time expires, as indicated by the clock interrupt. If the task does not execute to completion, its context must be saved, and the task is then placed at the end of the executable list. The context of the next executable task in the list is restored, and it resumes execution.
Round-robin systems can be combined with preemptive priority systems, yielding a kind of mixed system.
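The slice-and-requeue behavior described above can be sketched as a minimal simulation in C (the task table and slice counts are illustrative assumptions, not from the slides):

```c
/* Simulate round-robin time slicing: each dispatched task runs for one
   time slice; an unfinished task is requeued at the end of the ready list.
   remaining[i] holds how many slices task i still needs.
   Returns the number of slices executed; `order` records which task
   ran in each slice. */
int round_robin(int remaining[], int ntasks, int order[]) {
    int queue[64], head = 0, tail = 0, slice = 0;
    for (int i = 0; i < ntasks; i++)
        queue[tail++] = i;            /* all tasks start on the ready list */
    while (head < tail) {
        int t = queue[head++];        /* restore context of next task */
        order[slice++] = t;           /* task runs for one time slice */
        if (--remaining[t] > 0)       /* clock interrupt, not finished:  */
            queue[tail++] = t;        /* save context, requeue at end    */
    }
    return slice;
}
```

With three tasks needing 3, 1, and 2 slices, the dispatch order is 0, 1, 2, 0, 2, 0: task 1 completes within its first slice, while tasks 0 and 2 keep cycling through the queue until done.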

7 Round-robin scheduling
Mixed scheduling of three tasks. Here processes A and C are of the same priority, whereas B is of higher priority. Process A executes for some time before it is preempted by task B, which executes until completion. When process A resumes, it continues until its time slice expires, at which time context is switched to process C, which begins executing.

8 Real-time operating systems: II
Theoretical foundations
Task characteristics of a real workload
Round-robin scheduling
Cyclic executives
Rate-monotonic scheduling
Earliest deadline first
Intertask communication and synchronization

9 Cyclic executives
A simple approach that generates a complete and highly predictable schedule.
The CE refers to a scheduler that deterministically interleaves and sequentializes the execution of periodic tasks on a processor according to a pre-run-time schedule.
In general terms, the CE is a table of procedure calls, where each task is a procedure, and it executes a single do loop.
Scheduling decisions are made periodically, rather than at arbitrary times.
The time intervals between scheduling decision points are referred to as frames or minor cycles, and every frame has a length f called the frame size.
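The "table of procedure calls executed in a single do loop" idea can be sketched in C. This is a minimal illustration under assumed names (the frame table, task bodies, and run_major_cycles are not from the slides); a real CE would block until the frame-start clock interrupt rather than loop freely:

```c
/* Cyclic executive sketch: a pre-run-time schedule stored as a table of
   procedure calls, executed frame by frame inside one loop. */
typedef void (*task_fn)(void);

int runs_a = 0, runs_b = 0;            /* counters standing in for task work */
void task_a(void) { runs_a++; }
void task_b(void) { runs_b++; }

#define FRAMES 4                       /* frames (minor cycles) per major cycle */
#define SLOTS  2                       /* procedure-call slots per frame */

task_fn schedule[FRAMES][SLOTS] = {    /* the pre-run-time schedule */
    { task_a, task_b },
    { task_a, 0      },
    { task_a, task_b },
    { task_a, 0      },
};

void run_major_cycles(int cycles) {
    for (int c = 0; c < cycles; c++)
        for (int f = 0; f < FRAMES; f++) {
            /* a real CE waits here for the clock interrupt opening frame f */
            for (int s = 0; s < SLOTS; s++)
                if (schedule[f][s])
                    schedule[f][s](); /* each task is just a procedure call */
        }
}
```

With this table, task_a runs every frame and task_b every other frame, so one major cycle yields four runs of task_a and two of task_b.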

10 Cyclic executives
The major cycle is the minimum time required to execute the tasks allocated to the processor while ensuring that the deadlines and periods of all processes are met.
The major cycle, or hyperperiod, is equal to the least common multiple of the periods, i.e., lcm(p1, …, pn).
As scheduling decisions are made only at the beginning of every frame, there is no preemption within a frame.
The phase of each periodic task is a nonnegative integer multiple of the frame size.
The scheduler carries out monitoring and enforcement actions at the beginning of each frame.
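The hyperperiod computation is a straightforward lcm fold over the periods; a minimal sketch in C (the function names are illustrative):

```c
/* Hyperperiod (major cycle) = lcm(p1, ..., pn) of the task periods. */
long gcd(long a, long b) {
    while (b != 0) { long t = a % b; a = b; b = t; }
    return a;
}

long lcm(long a, long b) {
    return a / gcd(a, b) * b;   /* divide first to limit overflow */
}

long hyperperiod(const long periods[], int n) {
    long h = 1;
    for (int i = 0; i < n; i++)
        h = lcm(h, periods[i]);
    return h;
}
```

For periods 4, 5, and 10 the hyperperiod is 20; for periods 3, 4, and 6 it is 12.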

11 Cyclic executives
[Figure]

12 Cyclic executives
[Figure]

13 Real-time operating systems: II
Theoretical foundations
Task characteristics of a real workload
Round-robin scheduling
Cyclic executives
Rate-monotonic scheduling
Earliest deadline first
Intertask communication and synchronization

14 Rate-monotonic scheduling
Theorem (Rate-monotonic) [Liu and Layland ’73]: Given a set of periodic tasks and preemptive priority scheduling, assigning priorities such that tasks with shorter periods have higher priorities (rate-monotonic assignment) yields an optimal fixed-priority scheduling algorithm.

15 Rate-monotonic scheduling
[Figure]

16 Rate-monotonic scheduling
[Figure]

17 Rate-monotonic scheduling
Upper bound on utilization in a rate-monotonic system as a function of the number of tasks. Notice how it rapidly converges to ln 2 ≈ 0.693.

18 Rate-monotonic scheduling
Not every system is appropriate for rate-monotonic scheduling.
Consider a nuclear plant control system – should the visual display have the highest priority?
What about aperiodic and sporadic tasks?
In these cases, RM is used with a feasibility check.

19 Real-time operating systems: II
Theoretical foundations
Task characteristics of a real workload
Round-robin scheduling
Cyclic executives
Rate-monotonic scheduling
Earliest deadline first
Intertask communication and synchronization

20 Earliest deadline first
In contrast to fixed-priority algorithms, in dynamic-priority schemes the priority of a task with respect to that of the other tasks changes as tasks are released and completed.
One of the best-known dynamic algorithms, earliest-deadline-first (EDF), deals with deadlines rather than execution times: at any point in time, the ready task with the earliest deadline has the highest priority.
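The dispatch rule is simple to state in code. A minimal sketch in C; the Task structure and the edf_pick name are illustrative assumptions, not from the slides:

```c
/* EDF dispatch: among ready tasks, select the one with the earliest
   absolute deadline. Returns the task index, or -1 if none is ready. */
typedef struct {
    int  ready;       /* nonzero if the task is released and not finished */
    long deadline;    /* absolute deadline of the current instance */
} Task;

int edf_pick(const Task tasks[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (tasks[i].ready &&
            (best < 0 || tasks[i].deadline < tasks[best].deadline))
            best = i;
    return best;
}
```

Because deadlines change as instances are released and completed, this selection must be re-evaluated at every scheduling point, which is what makes EDF a dynamic-priority scheme.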

21 Earliest deadline first
[Figure]

22 Earliest deadline first
Consider the schedule for the following task set:
[Figure]

23 Earliest deadline first
[Figure]

24 Earliest deadline first
EDF is optimal on a uniprocessor when task preemption is allowed. That is, if a feasible schedule exists, then the EDF policy will also produce a feasible schedule.
The processor is never idle prior to a missed deadline.

25 Earliest deadline first
EDF is more flexible and achieves better utilization than RM. However, the timing behavior of a system scheduled according to a fixed-priority algorithm is more predictable than that of a system scheduled according to a dynamic-priority algorithm.
Under overload, RM is stable in the presence of missed deadlines: the same lower-priority tasks miss their deadlines every time, and there is no effect on higher-priority tasks.
When tasks are scheduled using EDF, it is difficult to predict which tasks will miss their deadlines during overloads. A good overrun-management scheme is thus needed when dynamic-priority algorithms are employed in systems where overload conditions cannot be avoided.

26 Real-time operating systems: II
Theoretical foundations
Task characteristics of a real workload
Round-robin scheduling
Cyclic executives
Rate-monotonic scheduling
Earliest deadline first
Intertask communication and synchronization

27 Inter-task Communication and Synchronization
In this discussion, the terms “task” and “process” are used interchangeably.

28 The Critical Section Problem
Code that is executed by a process for the purpose of accessing and modifying shared data is called a critical section.
Only one process at a time must be allowed to enter its critical section. In other words, mutual exclusion must be enforced at the entry to a critical section.
The critical-section problem involves finding a protocol that allows processes to cooperate in the required manner.

29 The Critical Section Problem (continued)
The requirements that must be met are:
Mutual exclusion
Progress
Bounded waiting

30 Evolution of Solutions to the Critical Section Problem (and the Implementation of Mutual Exclusion)
Software-only implementations appeared first. Several unsuccessful attempts were tried. The successful implementation became known as Dekker’s Algorithm.
All software-only implementations require a busy wait.

31 Evolution (continued)
Once a successful software implementation was demonstrated, computer designers considered the assertion that hardware and software are logically equivalent. They implemented a new machine instruction called testandset (or an equivalent one called swap).
Use of testandset still requires a busy wait.
The final and most elegant solution is the semaphore (developed by Dijkstra).
Use of the semaphore does not require a busy wait.

32 Synchronization Hardware
The testandset instruction tests and modifies the contents of a word atomically.

function Test-and-Set(var target: boolean): boolean;
begin
    Test-and-Set := target;
    target := true
end;
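A modern equivalent of testandset is the C11 atomic_flag, whose atomic_flag_test_and_set operation has exactly the semantics above: it returns the old value and sets the flag in one atomic step. A minimal busy-wait lock built on it (and, as the slides note, every testandset-based lock spins):

```c
#include <stdatomic.h>

atomic_flag lock_word = ATOMIC_FLAG_INIT;   /* initially clear (unlocked) */

void spin_lock(void) {
    /* Keep testing-and-setting until the old value was 'clear':
       that caller is the one that acquired the lock. */
    while (atomic_flag_test_and_set(&lock_word))
        ;                                   /* busy wait */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock_word);
}
```

The loop is the busy wait: a task that fails the test keeps consuming processor time until the holder releases the lock, which is why a blocking primitive such as the semaphore is preferred when the OS supports it.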

33 Semaphores
The most elegant solution is the semaphore (developed by Dijkstra).
A semaphore S is a memory location that acts as a lock to protect critical sections.
Two operations, wait and signal, are used either to set or to reset the semaphore.
Traditionally, one denotes the wait operation as P(S) and the signal operation as V(S).

34 Semaphores

void P(boolean S) {
    while (S == TRUE);   /* busy wait */
    S = TRUE;
}

void V(boolean S) {
    S = FALSE;
}

Traditional semaphore primitive implementation in pseudo-code.

35 Semaphores

void P(int S) {
    int KEY = 0;
    pend(KEY, S);
}

void V(int S) {
    int KEY = 0;
    post(KEY, S);
}

Mailboxes can be used to implement semaphores – the advantage is that they avoid the busy-wait condition.

36 Semaphores
Conceptually, semaphore operations may be viewed in the following manner. For semaphore S:

Wait(S) can be defined logically as:
    if S > 0 then S = S - 1
    else wait in queue S

Signal(S):
    if any task currently waits in queue S then awaken the first task in the queue
    else S = S + 1

Both of the above operations are atomic.
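The logical definition above can be expressed directly in C. In this single-threaded sketch the wait queue is modeled as a simple counter of blocked tasks (an illustrative assumption; a real kernel would block and awaken actual tasks):

```c
/* Counting semaphore following the logical definition of Wait and Signal. */
typedef struct {
    int count;     /* the semaphore value S */
    int waiting;   /* number of tasks modeled as waiting in queue S */
} Sem;

void sem_wait_op(Sem *s) {       /* Wait(S), i.e. P(S) */
    if (s->count > 0)
        s->count--;              /* S > 0: decrement and continue */
    else
        s->waiting++;            /* otherwise the task joins queue S */
}

void sem_signal_op(Sem *s) {     /* Signal(S), i.e. V(S) */
    if (s->waiting > 0)
        s->waiting--;            /* awaken the first task in the queue */
    else
        s->count++;
}
```

In a real implementation both operations must execute atomically, for example with interrupts disabled or under a hardware lock.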

37 Semaphore (continued)
May be used for enforcing mutual exclusion and for signaling among different tasks.
For enforcing mutual exclusion at the entry to a critical section, semaphore “mutex” has an initial value of 1. For two tasks t1 and t2 accessing the same data:

    t1                      t2
    ...                     ...
    wait(mutex)             wait(mutex)
    <critical section>      <critical section>
    signal(mutex)           signal(mutex)

38 Semaphores (continued)
For signaling between two tasks t1 and t2, semaphore “sem” has an initial value of 0. t2 waits for a signal from t1:

    t1                      t2
    ...                     ...
    <generate data          wait(sem)
     needed by t2>          <use data generated by t1>
    signal(sem)

39 The Paradigm of Intertask (Process) Communication and Synchronization: The Producer-Consumer Problem
The producer task produces information that is consumed by a consumer task. A buffer is used to hold data between the two tasks.
The producer and consumer must be synchronized; that is, a producer must wait if it attempts to put data into a full buffer, whereas a consumer must wait if it attempts to extract data from an empty buffer.
This represents the basis for intertask communication and can take two forms:
Message passing by way of a separate “mailbox” or “message queue”. The operating system usually provides this structure and the corresponding functions SEND and RECEIVE. A producer SENDs to the mailbox, while the consumer RECEIVEs from the mailbox.
Message passing by way of a shared-memory buffer. Usually implemented directly with semaphores. Assumes a fixed buffer size.
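The shared-memory form can be sketched in C. This single-threaded illustration models the empty/full semaphores as plain counters and reports "would block" instead of actually blocking (buffer size, names, and return conventions are illustrative assumptions; a real version would use OS semaphores with the producer and consumer as separate tasks):

```c
#define BUFSZ 4

int buffer[BUFSZ];
int in = 0, out = 0;                 /* next write / next read positions */
int empty_slots = BUFSZ;             /* "empty" semaphore: free slots    */
int full_slots  = 0;                 /* "full" semaphore: filled slots   */

/* Returns 1 on success, 0 if the producer would block (buffer full). */
int produce(int item) {
    if (empty_slots == 0) return 0;
    empty_slots--;                   /* wait(empty) */
    buffer[in] = item;
    in = (in + 1) % BUFSZ;
    full_slots++;                    /* signal(full) */
    return 1;
}

/* Returns 1 on success, 0 if the consumer would block (buffer empty). */
int consume(int *item) {
    if (full_slots == 0) return 0;
    full_slots--;                    /* wait(full) */
    *item = buffer[out];
    out = (out + 1) % BUFSZ;
    empty_slots++;                   /* signal(empty) */
    return 1;
}
```

A fifth produce against the four-slot buffer fails until the consumer extracts an item, which is exactly the synchronization requirement the slide describes.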

40 Message Passing by way of a Mailbox, or Message Queue
Convenient to the programmer, because the level of abstraction is higher (via SEND and RECEIVE).
Typically exhibits higher overhead, because data has to be moved more (sender process to mailbox, and mailbox to receiver process).

41 Message Passing by way of a Shared-Memory Buffer
Lower level of abstraction, requiring the use of semaphores.
More effort for the programmer.
Greater risk of mistakes in the use of semaphores.
Better performance due to less movement of data.

42 Message Passing by way of a Shared-Memory Buffer (continued)
Two possible design approaches:
Traditional bounded buffer, in which both the sender and the receiver process can access the shared buffer if it is not completely full and not completely empty.
Sender and receiver processes separated by double buffers: while one buffer is being filled by the sender process, the other buffer is being emptied by the receiver process. Once one buffer is filled and the other emptied, the sender and receiver processes swap buffers and continue.

43 Other synchronization mechanisms
Use of synchronized types (monitors) in languages such as Java.
Monitor objects queue all threads waiting to execute synchronized methods. Threads are enqueued when another thread is already executing in a synchronized method of that object. A thread also gets enqueued if it calls wait.
Threads that explicitly invoked wait can only proceed when notified via a call by another thread to notify or notifyAll.
When it is acceptable for an enqueued thread to proceed, the scheduler selects the thread with the highest priority.
(Ref. Deitel & Deitel, Java: How to Program, Prentice-Hall, 1997.)

44 Other synchronization mechanisms
C provides for synchronization using event flags via the raise and signal operations, which are typically implemented as macros.
These allow for the specification of an event that causes the setting of some flag; a second process is designed to react to this flag.
Event flags represent simulated interrupts: raising the event flag transfers flow of control to the operating system, which can then invoke the appropriate handler. Tasks that are waiting for the occurrence of an event are blocked.

45 Deadlock
Illustration
Four necessary conditions
Issues

46 Illustration

    task_1                   task_2
    ...                      ...
    P(S)                     P(R)
    <critical region 1>      <critical region 2>
    P(R)                     P(S)
    <critical region 2>      <critical region 1>
    V(R)                     V(S)
    V(S)                     V(R)

If task_1 holds S while task_2 holds R, each blocks waiting for the semaphore the other holds: deadlock.

47 Illustration
Deadlock realization in a resource diagram (can be viewed as a DFD or illustrated with a Petri net).
[Figure]

48 Four necessary conditions
Four conditions are necessary for deadlock:
mutual exclusion
circular wait
hold and wait
no preemption
Eliminating any one of the four necessary conditions will prevent deadlock from occurring.

49 Four necessary conditions
    Condition           Solution                                    Possible adverse consequences
    mutual exclusion    allow sharing, use spoolers                 starvation
    circular wait       device ordering
    hold and wait       pre-allocate resources (no hold and wait)   resources not always known a priori; poor resource utilization
    no preemption       allow preemption

50 Issues
The best way to deal with deadlock is to avoid it. If semaphores protecting critical resources are implemented by mailboxes with time-outs, then deadlocking cannot occur, but starvation of one or more tasks is possible.
The following practices can help avoid deadlock:
Minimize the number of critical regions, as well as their size.
All processes must release any semaphore before returning to the calling function.
Do not suspend any task while it controls a critical region.
All critical regions must be error free.
Do not lock devices in interrupt handlers.
Always perform validity checks on pointers used within critical regions. Pointer errors are common in certain languages, like C, and can lead to serious problems within critical regions.

