
1 2-1 Chapter 2 Real-Time Systems Concepts
The critical section
–A piece of code which cannot be interrupted during execution
Cases of critical sections
–Modifying a block of memory shared by multiple kernel services
    Process table
    Ready queue, waiting queue, delay queue, etc.
–Modifying global variables used by the kernel
Entering a critical section
–Disable the global interrupt: disable()
Leaving a critical section
–Enable the global interrupt: enable() (a sketch of the idea follows below)
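
A minimal sketch of the idea in C, assuming hypothetical disable()/enable() macros (the real names and implementations are compiler- and port-specific; µC/OS-II wraps them as OS_ENTER_CRITICAL()/OS_EXIT_CRITICAL(), shown later on slide 2-33):

    /* Hypothetical interrupt-control macros: the real ones are compiler/port specific */
    #define disable()   ((void)0)             /* e.g., would execute a CLI-type instruction */
    #define enable()    ((void)0)             /* e.g., would execute an STI-type instruction */

    static volatile unsigned long TickCount;  /* shared between the kernel and the tick ISR */

    void TickCountIncrement(void)
    {
        disable();                            /* enter the critical section                 */
        TickCount++;                          /* read-modify-write of shared kernel data    */
        enable();                             /* leave the critical section                 */
    }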

2 2-2

3 2-3 Resource
–EX: I/O, CPU, memory, printer, …
Shared resource
–A resource that is shared among tasks
–Each task must gain exclusive access to the shared resource to prevent data corruption → mutual exclusion
Task and thread (here both terms refer to the same thing)
–A simple program that thinks it has the CPU all to itself
–Each task is an infinite loop (see the sketch below)
–A task has five states: Dormant, Ready, Running, Waiting, and ISR (Interrupted)
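
A minimal sketch of a task body in the usual µC/OS-II style; OSTimeDly() and the (void *) argument convention are the standard API, while the includes.h header name, the task name, and the work it does are assumptions for illustration:

    #include "includes.h"                  /* µC/OS-II application header (assumed)     */

    void MyTask(void *pdata)               /* a task is a C function that never returns */
    {
        pdata = pdata;                     /* avoid "unused argument" warnings          */
        for (;;) {                         /* each task is an infinite loop             */
            /* do the task's work here: read a sensor, update a display, ...            */
            OSTimeDly(10);                 /* give up the CPU for 10 clock ticks        */
        }
    }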

4 2-4

5 2-5

6 2-6 Context Switch (or Task Switch)
Each task has its own stack to store its associated information
The purpose of a context switch
–To make sure that the task which is forced to give up the CPU can resume later without loss of any vital information or data
The procedure of an interrupt:
    ...
    MP_addr(n)   : instruction(n)
    MP_addr(n+1) : instruction(n+1)
    MP_addr(n+2) : instruction(n+2)
    ...
Interrupt (hardware): push flags, push return address MP_addr(n+1)
ISR:
    push CPU registers
    ...                       ; ISR code
    pop CPU registers
    return from interrupt     ; pops flags, jumps back to MP_addr(n+1)

7 2-7 Foreground/background programming
–Context (CPU registers and the interrupted program address) is saved and restored using one stack
Multi-tasking context switch
–Uses each task's own stack
Task 1 (to be switched from) stack:
    Return address (optional)
    Task 1's local variables           <- current CPU stack pointer
Task 2 (to be switched to) stack:
    Return address (optional)
    Task 2's local variables when it was switched
    Task 2's code address when it was switched
    CPU registers when Task 2 was switched

8 2-8 During context switch
Task 1 stack:
    Return address (optional)
    Task 1's local variables
    Current program address
    Current CPU registers              <- current CPU stack pointer, saved into the stack pointer field of Task 1's TCB
Task 2 stack:
    Return address (optional)
    Task 2's local variables when it was switched
    Task 2's code address when it was switched
    CPU registers when Task 2 was switched   <- stack pointer field of Task 2's TCB
Task 1 TCB: stack pointer, ...
Task 2 TCB: stack pointer, ...

9 2-9 After context switch
Task 2 (current) stack:
    Return address (optional)
    Task 2's local variables           <- current CPU stack pointer
Task 1 (suspended) stack:
    Return address (optional)
    Task 1's local variables when it was switched
    Task 1's code address when it was switched
    CPU registers when Task 1 was switched

10 2-10 The Operations of a Context Switch
Relies on an interrupt (hardware or software) to perform the context switch
–Push the return address
–Push the FLAGS register
ISR (the context-switch routine; a compilable outline follows this list)
–Push all registers
–Store SP into the TCB (task control block) of the current task
–Select the ready task with the highest priority (scheduler)
–Restore SP from the TCB of the newly selected task
–Pop all registers
–iret (interrupt return, which pops FLAGS and the return address), switching to the new task
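
A compilable outline of these steps, not the kernel's actual routine: the TCB layout, task table, and names are assumptions made for this sketch, and the register push/pop and stack-pointer load are port-specific assembly indicated only by comments.

    typedef struct tcb {
        void        *StkPtr;               /* saved stack pointer of the task          */
        unsigned int Prio;                 /* task priority (0 = highest)              */
        int          Ready;                /* nonzero if the task is ready to run      */
    } TCB;

    #define N_TASKS 8
    TCB  TaskTbl[N_TASKS];                 /* one TCB per task                         */
    TCB *CurrentTCB;                       /* TCB of the task that owns the CPU        */

    void ContextSwitchISR(void)            /* entered via interrupt: the hardware has  */
    {                                      /* already pushed FLAGS and the return address */
        /* 1. push all CPU registers onto the current task's stack (assembly, port-specific) */
        /* 2. save the CPU stack pointer into the current task's TCB: CurrentTCB->StkPtr = SP */

        /* 3. scheduler: select the ready task with the highest priority               */
        TCB *next = 0;
        for (int i = 0; i < N_TASKS; i++) {    /* a real kernel always keeps an idle task ready */
            if (TaskTbl[i].Ready && (next == 0 || TaskTbl[i].Prio < next->Prio)) {
                next = &TaskTbl[i];
            }
        }
        CurrentTCB = next;

        /* 4. load the CPU stack pointer from the new task's TCB: SP = CurrentTCB->StkPtr      */
        /* 5. pop all CPU registers and execute iret (pops FLAGS and the return address),      */
        /*    which resumes the selected task where it left off                                */
    }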

11 2-11 Basic components of a task
–A set of dynamic properties
    Task slices (optional)
    Stack pointer
    Task status (current, suspended, etc.)
    Task priority
–A portion of support memory
    Task stack
All basic components except the support memory are stored in a data structure called the Task Control Block (TCB)

12 2-12 A task definition example
    /* Task class */
    class far Task : public _node {
    public:
        void far (*StartAdd)(void far *arg); /* starting address                          */
        void far *Arg;                       /* argument for initialization               */
        unsigned SP, BP, SS;                 /* stack pointer, base pointer, stack segment */
        unsigned char far *Stack;            /* stack starting address                    */
        unsigned StackSize;                  /* stack size                                */
        unsigned tid;                        /* task ID                                   */
        int slice, slice_left;               /* task slices                               */
        int status;                          /* task status                               */
        int type;                            /* task type, optional                      */

13 2-13
        void far setup(                          /* task initialization */
            void far (*_start_add)(void far *arg),
            void far *arg,
            void far *_stack,
            unsigned stack_size,
            unsigned prio,
            int _type,                           /* optional */
            unsigned rate,                       /* optional */
            void far (*ret_add)(void)            /* optional */
        );
    };

14 2-14 Task Creation
Setting up the static attributes in the TCB
–Assigning a task ID
–Assigning a task type (optional)
–Assigning task slices (optional)
–Assigning initialization arguments
Setting up the initial values of the dynamic properties
–Assigning the initial task priority
–Assigning initial task slices
–Assigning "suspended" to the task status

15 2-15 Task stack creation
–Allocating a memory segment for the task stack according to the given stack length
–Assigning the stack starting address to the bottom of the stack
Task stack assignment (a C sketch follows this list)
–Task stack content is assigned such that resuming the task is like entering a subroutine:
    Load the CPU stack pointer (SP) with the stack starting address
    Push the return address onto the stack (optional)
    Push the task starting address onto the stack
    Push the CPU registers onto the stack
    Store the current CPU stack pointer into the stack pointer field of the TCB
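
A minimal C sketch of building such an initial stack in software; the stack is just an array of machine words here, and the word type, register count, and field names are assumptions (each real port defines its own layout):

    #include <stdint.h>

    typedef uintptr_t STK;                    /* one stack entry, wide enough to hold an address */

    typedef struct {
        STK *StkPtr;                          /* saved stack pointer, kept in the TCB            */
    } TaskTCB;

    /* Build an initial stack frame so that resuming the task looks like entering it. */
    void TaskStackInit(TaskTCB *tcb, STK *stk_base, unsigned stk_size,
                       void (*task)(void *), void (*ret_addr)(void))
    {
        STK *sp = &stk_base[stk_size];        /* stacks grow downward: start at the high end     */

        *--sp = (STK)ret_addr;                /* push return address (optional)                  */
        *--sp = (STK)task;                    /* push task starting address                      */
        for (int i = 0; i < 8; i++) {         /* push dummy initial values for the CPU registers */
            *--sp = (STK)0;                   /* (their number and order are port-specific)      */
        }
        tcb->StkPtr = sp;                     /* store the resulting stack pointer into the TCB  */
    }

Under these assumptions, usage would look like: static STK Stack1[256]; TaskTCB Tcb1; TaskStackInit(&Tcb1, Stack1, 256, MyTask, 0);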

16 2-16 A typical initial task stack (from the stack starting address downward):
    Task return address (optional)     <- stack starting address
    Task starting address
    ... CPU registers ...              <- stack pointer stored in the TCB

17 2-17 Context Switch
The purpose of a context switch
–To make sure that the task which is forced to give up the CPU can resume later without loss of any vital information or data
The procedure of an interrupt:
    ...
    MP_addr(n)   : instruction(n)
    MP_addr(n+1) : instruction(n+1)
    MP_addr(n+2) : instruction(n+2)
    ...
Interrupt (hardware): push flags, push return address MP_addr(n+1)
ISR:
    push CPU registers
    ...                       ; ISR code
    pop CPU registers
    return from interrupt     ; pops flags, jumps back to MP_addr(n+1)

18 2-18 Foreground/background programming
–Context (CPU registers and the interrupted program address) is saved and restored using one stack
Multi-tasking context switch
–Uses each task's own stack
Task 1 (to be switched from) stack:
    Return address (optional)
    Task 1's local variables           <- current CPU stack pointer
Task 2 (to be switched to) stack:
    Return address (optional)
    Task 2's local variables when it was switched
    Task 2's code address when it was switched
    CPU registers when Task 2 was switched

19 2-19 During context switch
Task 1 stack:
    Return address (optional)
    Task 1's local variables
    Current program address
    Current CPU registers              <- current CPU stack pointer, saved into the stack pointer field of Task 1's TCB
Task 2 stack:
    Return address (optional)
    Task 2's local variables when it was switched
    Task 2's code address when it was switched
    CPU registers when Task 2 was switched   <- stack pointer field of Task 2's TCB
Task 1 TCB: stack pointer, ...
Task 2 TCB: stack pointer, ...

20 2-20 After context switch
Task 2 (current) stack:
    Return address (optional)
    Task 2's local variables           <- current CPU stack pointer
Task 1 (suspended) stack:
    Return address (optional)
    Task 1's local variables when it was switched
    Task 1's code address when it was switched
    CPU registers when Task 1 was switched

21 2-21 Scheduler
Also called the dispatcher
Determines which task will run next
In a priority-based kernel, control of the CPU is always given to the highest-priority task ready to run
Two types of priority-based kernels
–Non-preemptive
–Preemptive

22 2-22 Non-preemptive kernel
A task voluntarily gives up control of the CPU
Also called cooperative multitasking
–Tasks cooperate with each other to share the CPU
Advantages
–Non-reentrant functions can be used without fear of corruption by another task (less need to guard shared data with semaphores)
–Interrupt latency is typically low
–Task-level response time is much better than in a foreground/background system; the worst case is the execution time of the longest task

23 2-23 Figure 2.4 Non-preemptive kernel

24 2-24 Preemptive Kernel
Used when high system responsiveness is required
The highest-priority task ready to run is always given control of the CPU
–When a task makes a higher-priority task ready to run, the current task is preempted and the higher-priority task is immediately given control of the CPU
–If an ISR makes a higher-priority task ready, then when the ISR completes, the interrupted task is suspended and the higher-priority task is resumed
Non-reentrant functions may only be used together with mutual-exclusion semaphores

25 2-25 Figure 2.5 Preemptive kernel

26 2-26 Reentrancy
Reentrant function
–Can be used by more than one task concurrently without fear of data corruption
–Can be interrupted at any time and resumed at a later time without loss of data
–Uses local variables
Listing 2.1 Reentrant function:
    void strcpy(char *dest, char *src)
    {
        while (*dest++ = *src++) {
            ;
        }
        *dest = NUL;
    }
Listing 2.2 Non-reentrant function (a reentrant version is sketched after this slide):
    int Temp;
    void swap(int *x, int *y)
    {
        Temp = *x;
        *x = *y;
        *y = Temp;
    }
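
For comparison, a reentrant version of swap(): making the temporary a local variable places it on the calling task's stack, so every task (and any interrupting caller) gets its own copy.

    void swap(int *x, int *y)
    {
        int temp;      /* local: allocated on the caller's stack, one copy per invocation */

        temp = *x;
        *x   = *y;
        *y   = temp;
    }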

27 2-27 Figure 2.6 Non-reentrant function

28 2-28 Task Priority
Static priorities
–The priority of each task does not change during the application's execution
Dynamic priorities
–The priority of tasks can be changed during the application's execution

29 2-29 Figure 2.7 Priority Inversion problem

30 2-30 Figure 2.8 Kernel that supports priority inheritance

31 2-31 Assigning Task Priorities
Rate Monotonic Scheduling (RMS)
–Tasks with the highest rate of execution are given the highest priority
RMS makes a number of assumptions:
–All tasks are periodic (they occur at regular intervals).
–Tasks do not synchronize with one another, share resources, or exchange data.
–The CPU must always execute the highest-priority task that is ready to run. In other words, preemptive scheduling must be used.
If the following inequality is met, all HARD real-time deadlines of the tasks will be met (see the worked numbers after this slide):
    sum over all tasks i of (Ei / Ti)  <=  n * (2^(1/n) - 1)
    where Ei is the execution time of task i, Ti is its period, and n is the number of tasks
The CPU utilization of all time-critical tasks should therefore be kept below about 70%; the other 30% can be used by non-time-critical tasks
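
A quick check of where the 70% figure comes from (plain arithmetic, not part of the slides):

    n = 1:  1 * (2^(1/1) - 1) = 1.000
    n = 2:  2 * (2^(1/2) - 1) ≈ 0.828
    n = 3:  3 * (2^(1/3) - 1) ≈ 0.780
    n = 5:  5 * (2^(1/5) - 1) ≈ 0.743
    n → infinity: the bound approaches ln 2 ≈ 0.693, i.e., roughly 70% CPU utilization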

32 2-32 Mutual Exclusion
When multiple tasks access the same data (a critical section), each task must be given exclusive access to that data to avoid contention and data corruption
Methods of gaining exclusive access
–Disabling interrupts
–Performing test-and-set operations
–Disabling scheduling
–Using semaphores

33 2-33 Disabling and enabling interrupts
µC/OS-II provides two macros to disable/enable interrupts:
    Disable interrupts;
    Access the resource (read/write from/to variables);
    Reenable interrupts;
In code:
    void Function (void)
    {
        OS_ENTER_CRITICAL();
        .
        .   /* You can access shared data in here */
        .
        OS_EXIT_CRITICAL();
    }

34 2-34 Test-and-Set (a C sketch follows this pseudocode)
    Disable interrupts;
    if ('Access Variable' is 0) {
        Set variable to 1;
        Reenable interrupts;
        Access the resource;
        Disable interrupts;
        Set the 'Access Variable' back to 0;
        Reenable interrupts;
    } else {
        Reenable interrupts;
        /* You don't have access to the resource, try back later; */
    }
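
A minimal C sketch of the same pseudocode, assuming hypothetical disable_int()/enable_int() macros (port-specific, stubbed out here so the sketch compiles) and a global flag named AccessVar:

    static volatile int AccessVar = 0;     /* 0 = resource free, 1 = resource in use (assumed) */

    #define disable_int()   ((void)0)      /* would really disable interrupts (port-specific)  */
    #define enable_int()    ((void)0)      /* would really enable interrupts                   */

    int TryUseResource(void)
    {
        disable_int();                     /* make the test and the set one atomic step        */
        if (AccessVar == 0) {
            AccessVar = 1;                 /* claim the resource                               */
            enable_int();

            /* ... access the resource here ... */

            disable_int();
            AccessVar = 0;                 /* release the resource                             */
            enable_int();
            return 1;                      /* success                                          */
        } else {
            enable_int();
            return 0;                      /* resource busy: try again later                   */
        }
    }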

35 2-35 Disabling and Enabling the Scheduler
–If a task does not share variables or data structures with an ISR, it can disable and enable scheduling instead of interrupts
–Two or more tasks can then share data without contention
–While the scheduler is locked, interrupts remain enabled
    If an ISR generates an event that makes a higher-priority task ready, that task will run when OSSchedUnlock() is called.
    void Function (void)
    {
        OSSchedLock();
        .
        .   /* You can access shared data in here (interrupts are recognized) */
        .
        OSSchedUnlock();
    }

36 2-36 Semaphores
Semaphores are used to
–Control access to a shared resource (mutual exclusion)
–Signal the occurrence of an event
–Allow two tasks to synchronize their activities
Two types of semaphores
–Binary semaphores
–Counting semaphores
Three operations on a semaphore
–INITIALIZE (also called CREATE)
–WAIT (also called PEND)
–SIGNAL (also called POST): releases the semaphore; the task that gets it is either
    the highest-priority task waiting for the semaphore (µC/OS-II supports this one), or
    the first task that requested the semaphore (FIFO)

37 2-37 Accessing shared data by obtaining a semaphore
    OS_EVENT *SharedDataSem;

    void Function (void)
    {
        INT8U err;

        OSSemPend(SharedDataSem, 0, &err);
        .
        .   /* You can access shared data in here (interrupts are recognized) */
        .
        OSSemPost(SharedDataSem);
    }

38 2-38 Control of shared resources (mutual exclusion)
–e.g., a single display device shared by two tasks:
    task1(...)
    {
        ...
        printf("This is task 1.");
        ...
    }

    task2(...)
    {
        ...
        printf("This is task 2.");
        ...
    }
Without mutual exclusion the result may be the interleaved output:
    ThiThsi siis ttasaks k12..
–Exclusive usage of certain resources (e.g., shared memory) must be guaranteed

39 2-39 Solution: use a semaphore initialized to 1 (see the sketch below)
Each task must know about the existence of the semaphore in order to access the resource
–In some situations it is better to encapsulate (hide) the semaphore inside the access function
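
A minimal sketch of the non-hidden variant, using the same µC/OS-II semaphore calls as slide 2-37; the semaphore name, the task body, and the assumption that includes.h pulls in stdio.h (as in the book's examples) are illustrative only:

    #include "includes.h"                  /* µC/OS-II application header (assumed)           */

    OS_EVENT *DispSem;                     /* created at startup: DispSem = OSSemCreate(1);    */

    void Task1(void *pdata)
    {
        INT8U err;

        pdata = pdata;
        for (;;) {
            OSSemPend(DispSem, 0, &err);   /* wait (forever) for exclusive use of the display  */
            printf("This is task 1.");
            OSSemPost(DispSem);            /* give the display back                            */
            OSTimeDly(1);
        }
    }

Task2 would be identical except for its message; because the semaphore is created with a count of 1, the two printf() calls can no longer interleave.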

40 2-40 Figure 2.11 Hiding a semaphore from tasks
    INT8U CommSendCmd(char *cmd, char *response, INT16U timeout)
    {
        Acquire port's semaphore;
        Send command to device;
        Wait for response (with timeout);
        if (timed out) {
            Release semaphore;
            return (error code);
        } else {
            Release semaphore;
            return (no error);
        }
    }

41 2-41 Counting semaphore: managing a pool of buffers
    BUF *BufReq(void)
    {
        BUF *ptr;

        Acquire a semaphore;           /* blocks until at least one buffer is free */
        Disable interrupts;
        ptr = BufFreeList;
        BufFreeList = ptr->BufNext;
        Enable interrupts;
        return (ptr);
    }

    void BufRel(BUF *ptr)
    {
        Disable interrupts;
        ptr->BufNext = BufFreeList;
        BufFreeList = ptr;
        Enable interrupts;
        Release semaphore;             /* one more buffer is now available */
    }

42 2-42 Deadlock
To avoid a deadlock, tasks should
–Acquire all resources before proceeding
–Acquire the resources in the same order
–Release the resources in the reverse order
Use a timeout when acquiring a semaphore (see the sketch below)
–When a timeout occurs, the returned error code prevents the task from thinking it has obtained the resource.
Deadlocks generally occur in large multitasking systems, not in embedded systems
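
A minimal sketch of the timeout idea with the standard µC/OS-II semaphore call; the semaphore name, the 100-tick timeout, and the error handling are assumptions (the timeout error constant is OS_TIMEOUT in classic µC/OS-II, OS_ERR_TIMEOUT in newer releases):

    #include "includes.h"                     /* µC/OS-II application header (assumed)       */

    OS_EVENT *ResSem;                         /* semaphore guarding the resource             */

    void UseResourceWithTimeout(void)
    {
        INT8U err;

        OSSemPend(ResSem, 100, &err);         /* wait at most 100 ticks for the resource     */
        if (err == OS_TIMEOUT) {
            return;                           /* did NOT get the resource: back off instead  */
        }                                     /* of blocking forever                         */
        /* ... use the resource ... */
        OSSemPost(ResSem);                    /* always release what was acquired            */
    }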

43 2-43 Synchronization
A task can be synchronized with an ISR or with another task
Tasks synchronizing their activities

44 2-44 Event Flags (µC/OS-II does not support them)
Used when a task needs to synchronize with the occurrence of multiple events

45 2-45 Common events can be used to signal multiple tasks

46 2-46 Intertask Communication
A task or an ISR may need to communicate information to another task
There are two ways of intertask communication
–Through global data (protected by disabling/enabling interrupts or by a semaphore)
    A task can only communicate information to an ISR through global variables
    A task cannot tell when a global variable has changed (unless a semaphore is used or the task polls it periodically)
–Sending messages
    Message mailbox or message queue

47 2-47 Message Mailboxes (a usage sketch follows)
A task desiring a message from an empty mailbox is suspended and placed on the mailbox's waiting list until a message is received
The kernel allows the task waiting for a message to specify a timeout
When a message is deposited into the mailbox, the waiting task that receives it is chosen either
–Priority based, or
–FIFO (order of arrival on the waiting list)
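
A minimal sketch of mailbox usage; OSMboxCreate(), OSMboxPend(), and OSMboxPost() are the standard µC/OS-II calls, while the mailbox name, message contents, and task roles are assumptions for illustration:

    #include "includes.h"                      /* µC/OS-II application header (assumed)                    */

    OS_EVENT *TempMbox;                        /* created at startup: TempMbox = OSMboxCreate((void *)0);  */
    static INT16U Temperature;                 /* data whose address is mailed                             */

    void ProducerTask(void *pdata)
    {
        pdata = pdata;
        for (;;) {
            Temperature = 25;                                 /* pretend a sensor was sampled              */
            OSMboxPost(TempMbox, (void *)&Temperature);       /* deposit the message (a pointer)           */
            OSTimeDly(10);
        }
    }

    void ConsumerTask(void *pdata)
    {
        INT8U   err;
        INT16U *pmsg;

        pdata = pdata;
        for (;;) {
            pmsg = (INT16U *)OSMboxPend(TempMbox, 0, &err);   /* suspend until a message arrives           */
            if (pmsg != (INT16U *)0) {
                /* ... use *pmsg ... */
            }
        }
    }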

48 2-48 Message Queues (a usage sketch follows)
Used to send one or more messages to a task
Basically an array of mailboxes
The first message inserted in the queue is normally the first message extracted from the queue (FIFO); Last-In-First-Out (LIFO) ordering is also possible
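
A minimal sketch of queue usage; OSQCreate(), OSQPend(), OSQPost(), and OSQPostFront() are the standard µC/OS-II calls, while the queue size, names, and message contents are assumptions:

    #include "includes.h"                     /* µC/OS-II application header (assumed)             */

    #define Q_SIZE 16
    void     *MsgStorage[Q_SIZE];             /* storage for up to 16 message pointers             */
    OS_EVENT *MsgQ;                           /* created at startup:                                */
                                              /*   MsgQ = OSQCreate(&MsgStorage[0], Q_SIZE);        */

    void SenderTask(void *pdata)
    {
        static char msg[] = "hello";

        pdata = pdata;
        for (;;) {
            OSQPost(MsgQ, (void *)msg);       /* FIFO insert; OSQPostFront() would give LIFO order  */
            OSTimeDly(5);
        }
    }

    void ReceiverTask(void *pdata)
    {
        INT8U  err;
        char  *pmsg;

        pdata = pdata;
        for (;;) {
            pmsg = (char *)OSQPend(MsgQ, 0, &err);   /* wait forever for the next message           */
            if (pmsg != (char *)0) {
                /* ... process pmsg ... */
            }
        }
    }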

49 2-49 Interrupts
When an interrupt is recognized, the CPU saves
–The return address (of the interrupted task)
–The flags
and jumps to the Interrupt Service Routine (ISR)
Upon completion of the ISR, the program returns to
–The background, in a foreground/background system
–The interrupted task, for a non-preemptive kernel
–The highest-priority task ready to run, for a preemptive kernel
Interrupt nesting

50 2-50 Interrupt Latency, Response, and Recovery
Interrupt latency
–Maximum amount of time interrupts are disabled + Time to start executing the first instruction in the ISR
Interrupt response
–Foreground/background and non-preemptive kernel: Interrupt latency + Time to save the CPU's context
–Preemptive kernel: Interrupt latency + Time to save the CPU's context + Execution time of the kernel ISR entry function
Interrupt recovery
–Foreground/background and non-preemptive kernel: Time to restore the CPU's context + Time to execute the return-from-interrupt instruction
–Preemptive kernel: Time to determine if a higher-priority task is ready + Time to restore the CPU's context of the highest-priority task + Time to execute the return-from-interrupt instruction

51 2-51 Figure 2.20 Foreground/background

52 2-52 Figure 2.21 Non-preemptive kernel

53 2-53 Figure 2.22 Preemptive kernel

54 2-54 Nonmaskable Interrupts (NMIs)
An NMI cannot be disabled
–Interrupt latency, response, and recovery are minimal
–Interrupt latency: Time to execute the longest instruction + Time to start executing the NMI ISR
–Interrupt response: Interrupt latency + Time to save the CPU's context
–Interrupt recovery: Time to restore the CPU's context + Time to execute the return-from-interrupt instruction
Related topics: disabling nonmaskable interrupts; signaling a task from a nonmaskable interrupt (e.g., an NMI every 150 µs, signaling the task every 150 µs * 40 = 6 ms)

55 2-55 Clock Tick
A special interrupt that occurs periodically
Allows the kernel to delay tasks for an integral number of clock ticks (see the sketch below)
Provides timeouts when tasks are waiting for events to occur
The faster the tick rate, the higher the overhead imposed on the system
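
A minimal sketch of tick-based delays with the standard µC/OS-II calls; the delay length and task body are assumptions:

    #include "includes.h"                      /* µC/OS-II application header (assumed)        */

    void BlinkTask(void *pdata)
    {
        pdata = pdata;
        for (;;) {
            /* ... toggle an LED, poll a switch, etc. ... */
            OSTimeDly(OS_TICKS_PER_SEC / 10);  /* suspend this task for roughly 100 ms         */
            /* OSTimeDlyHMSM(0, 0, 0, 100) is the equivalent hours/minutes/seconds/ms form     */
        }
    }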

56 2-56 Figure 2.25 Delaying a task for one tick (case 1)

57 2-57 Figure 2.26 Delaying a task for one tick (case 2)

58 2-58 Figure 2.27 Delaying a task for one tick (case 3)

59 2-59 Reducing the execution jitter of a task
–Increase the clock rate of your microprocessor.
–Increase the time between tick interrupts.
–Rearrange task priorities.
–Avoid using floating-point math (if you must, use single precision).
–Get a compiler that performs better code optimization.
–Write time-critical code in assembly language.
–If possible, upgrade to a faster microprocessor in the same family, e.g., 8086 to 80186, 68000 to 68020, etc.

