
1 The Real Time Computing Environment I (CS 5270, Lecture 2, 20.01.05)

2 A Conceptual Framework
The outside view:
– Closed system: verification
– Open system: synthesis, schedulability analysis, ...
The inside view:
– Architecture
– Interrupts
– Task scheduling
– Embedded software
Modeling, analysis and verification are important at all layers!

3 The Closed System View
[Diagram: computing system and plant connected by sense and actuate links.]
Extract a model (timed automaton); verify whether this closed system meets a given specification.

4 The Open System View
[Diagram: the plant with its sense and actuate interfaces, the controller left open.]
For this open system, synthesize a controller (a real-time computing system) such that the closed system meets a given specification.

5 The Open System View
[Diagram: the synthesized computing system closing the loop with the plant via sense and actuate links.]
For this open system, synthesize a controller (a real-time computing system) such that the closed system meets a given specification.

6 The Outside View
Timed automata: the outside view, i.e. the closed system, used for verification.
– Model extraction
– Specification of properties
– Verification methods/tools

7 The Inside View
[Diagram: plant, sense, actuate, and the computing system drawn as a black box.]
What is inside the black box?

8 [Diagram: the computing system with multiple sense and actuate channels.]

9 Distributed Architecture

10 A Node

11 A Node
Often, multiple instances of the above for fault tolerance!

12 A Node

13 The Host Computer
[Diagram: DSP, processor, ASIC, timer and memory connected by a bus.]

14 The Host Computer
[Diagram: DSP, processor, ASIC, timer and memory.]

15 Tasks
[Diagram: TASK1–TASK4 operating on shared data sets, the RT images!]

16 RT Images
RT entity: some item of interest whose value changes over time (pressure, temperature, valve position, ...).
– Continuous RT entity: can be observed at any point in time (e.g. pressure).
– Discrete RT entity: can be observed only between specified occurrences of interesting events.

17 RT Images
RT image: the current picture of an RT entity.
Accuracy has two aspects:
– Value accuracy.
– Temporal accuracy: an RT image of entity N holding value v at time t is δ-accurate if the value of N was v at some time in the interval (t − δ, t).

18 RT Images
Suppose the image of N is observed (with value v) at time t and used at time t'. The maximum error (v' − v) then depends on the temporal accuracy δ and the maximum gradient of N during this interval. If the gradient is high then δ must be small, and tasks using N must be scheduled often (a fair but crude statement).
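To make the relationship concrete, here is a minimal sketch (my own Python illustration, not lecture material; the function name and the numbers are assumptions) that bounds the worst-case value error of an RT image by the product of the entity's maximum gradient and the temporal accuracy δ:

```python
# Hypothetical sketch: bound the worst-case value error of an RT image by
# (maximum gradient of the RT entity) x (temporal accuracy delta).
# The entity and the numbers below are illustrative, not lecture data.

def max_value_error(max_gradient_per_sec, delta_secs):
    """Worst-case difference between the stored image value and the true
    value of the RT entity, assuming the entity changes at a rate of at
    most max_gradient_per_sec during the accuracy interval delta_secs."""
    return max_gradient_per_sec * delta_secs

# Example: an entity changing at most 100 %/sec, observed with delta = 10 ms,
# can be off by at most 1 % when the image is used.
print(max_value_error(100.0, 0.010))   # -> 1.0 (percent)
```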

19 Accuracy
RT Image          Max. Change   V-Accuracy    T-Accuracy
Piston position   6000 RPM      0.1 degrees   3 μsecs
Acc. pedal        100%/sec      1%            10 msecs
Eng. load         50%/sec       1%            20 msecs
Oil temp          10%/min       1%            6 seconds

20 The Design Challenge
Derive a model of the closed system (the external view):
– Specification/requirements
– Timing
– Notion of physical time
Design and implement a distributed, fault-tolerant, optimal real-time computing system so that the closed system meets the specification/requirements.

21 The Structural Elements
Each computing node is assigned a set of tasks to perform the intended functions.
Task: the execution of a (simple) sequential program, which
– reads the input data and the internal state of the task (including RT images),
– terminates with the production of results and an update of the internal state of the task.

22 Tasks
The (real-time) operating system provides the control signal for each initiation of a task.
– Stateless task: no internal state at the time of initiation.
– Task with state: carries internal state from one initiation to the next.
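As a rough illustration of the distinction (a sketch of my own; the class and function names are assumptions, not lecture code), a stateless task can be modelled as a pure function of its current input, while a task with state carries internal state across activations:

```python
# Minimal sketch (assumed names): the RTOS would call these on each activation.

def stateless_task(sensor_value):
    """Stateless task: the output depends only on the current input."""
    return sensor_value * 0.5          # e.g. scale a raw sensor reading

class TaskWithState:
    """Task with state: keeps internal state (here, a running average)
    that survives between activations."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def activate(self, sensor_value):
        self.count += 1
        self.mean += (sensor_value - self.mean) / self.count
        return self.mean
```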

23 Tasks
Simple task:
– No synchronization point within the task.
– Does not block due to lack of progress by other tasks in the system.
– Can, however, be interrupted (preempted) by the operating system.
– Its total execution time can be computed in isolation.
– WCET: the worst-case execution time of the task over all possible relevant inputs.
A correct estimate of the WCET is crucial for guaranteeing that real-time constraints will be met.

24 Complex Tasks
A complex task contains a blocking synchronization statement:
– a "wait" semaphore operation,
– a "receive" message operation.
It must wait until another task has updated a common data structure (data dependency, sharing), or wait for input to arrive.
The WCET of a complex task cannot be computed in isolation.

25 Interfaces
Interface: the common boundary between two subsystems.
Design is essentially interface design: designing and implementing the interface "glue logic" consumes the major portion of the design cycle.

26 [Figure-only slide.]

27 Interfaces
Interface parameters:
– Control signals flowing across the interface and the associated task invocations.
– Temporal properties to be satisfied by the control signals and data values flowing across.
– Functional relationships between input and output data.

28 The Host Computer
[Diagram: DSP, processor, ASIC, timer and memory.]

29 Tasks
[Diagram: TASK1–TASK4 operating on shared data sets.]

30 Tasks
– There will be tasks triggered by exceptions, interrupts and alarms.
– There will be tasks that need to be executed periodically.
– These tasks may have precedence relationships.
– They may have deadlines.
– They may share data structures.
– They may have to execute on the same processor.
We must schedule!

31 Scheduling: Basic Concepts
Scheduling policy: the CPU has to execute, sequentially, a set of concurrent tasks. If T1 and T2 are both executable at time t, we must choose between T1 and T2.
Scheduling algorithm: the recipe (algorithm) that determines at each time t which task to execute.
Dispatching: allocating the CPU to the task selected by the scheduling algorithm.

32 Scheduling: Basic Concepts
– Active task: a task which can potentially execute on the CPU (which may or may not be available).
– Ready task: an active task which is waiting for the CPU.
– Running task: an active task in execution.
– Ready queue: the queue in which ready tasks are kept.

33 Scheduling: Basic Concepts
Preemption:
– Tasks may be activated dynamically, so their activation times are not determined in advance.
– If the task activated at time t is more important (has higher priority) than the running task, the running task is interrupted and inserted into the ready queue.
Preemption is needed for:
– Exception-handling tasks.
– Tasks with different levels of criticality.
– Improving system responsiveness, throughput, utilization, etc.
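A minimal sketch of this dispatching decision (illustrative Python, with assumed task names and numeric priorities where a lower number means higher priority; not an actual RTOS API):

```python
import heapq

ready_queue = []      # heap of (priority, task_name); lower number = higher priority
running = None        # currently running task, or None

def activate(task):
    """A task becomes active: preempt the running task if the newcomer has
    higher priority; the preempted task goes back to the ready queue."""
    global running
    if running is None:
        running = task
    elif task[0] < running[0]:                 # newcomer has higher priority
        heapq.heappush(ready_queue, running)   # preempted task -> ready queue
        running = task
    else:
        heapq.heappush(ready_queue, task)      # newcomer waits in the ready queue

activate((2, "logger"))
activate((1, "exception_handler"))             # preempts the logger
print(running)                                 # (1, 'exception_handler')
print(ready_queue)                             # [(2, 'logger')]
```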

34 The Ready Queue

35 Schedules
Task set J = {J_1, J_2, ..., J_n}.
– A schedule assigns at each time t one task to the processor so that each task is eventually completed.
– A schedule can be preemptive; preemptive schedules incur context switching.
– A schedule is feasible if all tasks can be completed while satisfying the given constraints.
– A task set is schedulable if there is at least one scheduling algorithm which produces a feasible schedule.
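A feasibility check can be phrased directly on a concrete schedule. The sketch below (my own; the data layout is an assumption) verifies that every task receives its full computation time inside its arrival-to-deadline window:

```python
# Minimal sketch: check feasibility of a concrete (possibly preemptive) schedule.
# schedule: list of execution slices (task, start, end);
# tasks: dict  task -> (arrival, computation_time, deadline).
# All names and values are illustrative assumptions.

def is_feasible(schedule, tasks):
    done = {name: 0.0 for name in tasks}
    for task, start, end in schedule:
        arrival, comp, deadline = tasks[task]
        if start < arrival or end > deadline:
            return False                       # executed outside its window
        done[task] += end - start
    return all(done[name] >= tasks[name][1] for name in tasks)

tasks = {"J1": (0, 2, 5), "J2": (1, 3, 9)}
schedule = [("J1", 0, 2), ("J2", 2, 5)]        # no preemption needed here
print(is_feasible(schedule, tasks))            # True
```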

36 Schedule

37 Schedule
[Timeline figure: tasks executing over the points 0, 1, 3, 5, 9. What is running at t = 1.5? At t = 7?]

38 Schedule with Preemption

39 Schedule with Preemption
[Timeline figure: a preemptive schedule over the points 0, 1, 3, 4, 6, 7.5, 9.5. What is running at t = 2? At t = 5? Where do context switches occur?]

40 Task Constraints
– Timing constraints
– Precedence constraints
– Resource constraints

41 Timing Constraints
A task should meet its deadline; deadlines can be hard or soft.
Relevant parameters for a task J_i:
– arrival time a_i (also called request time or release time)
– computation time C_i: the time needed to execute J_i without interruption

42 Timing Parameters
– deadline d_i: the time before which J_i must be completed
– start time s_i
– finishing time f_i
– value (priority?) v_i: the relative importance of J_i

43 Basic Timing Parameters

44 Basic Timing Parameters
Relative deadline: D_i = d_i − a_i

45 Timing Parameters
Pattern of activation:
– Periodic task τ_i: regularly activated at a constant rate; its instances (jobs) all correspond to the same task.
– Φ_i, the phase of τ_i: the activation time of the first instance of the periodic task τ_i.
– T_i: the period of the task.
– D_i: the relative deadline. Often one assumes D_i = T_i.
– Aperiodic task: like a periodic task, but the activation times are NOT periodic.
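To make these parameters concrete, here is a small sketch (my own; the field names are assumptions) of a periodic task with phase Φ_i, period T_i, computation time C_i and relative deadline D_i, generating the release times and absolute deadlines of its instances:

```python
from dataclasses import dataclass

@dataclass
class PeriodicTask:
    """Illustrative container for the timing parameters of a periodic task."""
    phase: float         # Phi_i: activation time of the first instance
    period: float        # T_i
    wcet: float          # C_i: computation time
    rel_deadline: float  # D_i (often D_i = T_i)

    def job(self, k):
        """Release time and absolute deadline of the k-th instance (k = 0, 1, ...)."""
        release = self.phase + k * self.period
        return release, release + self.rel_deadline

t1 = PeriodicTask(phase=0.0, period=5.0, wcet=2.0, rel_deadline=5.0)
print([t1.job(k) for k in range(3)])   # [(0.0, 5.0), (5.0, 10.0), (10.0, 15.0)]
```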

46 Periodic/Aperiodic Task

47 Task Constraints
– Timing constraints
– Precedence constraints
– Resource constraints

48 Precedence Constraints
Precedence constraints: tasks cannot be executed in an arbitrary order, because of
– data dependencies,
– the control strategy.
Task graphs (instead of task sets): nodes are tasks, edges capture precedence.
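Since a task graph is a DAG, any execution order consistent with the precedence constraints can be obtained by a topological sort; a minimal sketch (the task names are illustrative assumptions, using Python's standard graphlib):

```python
from graphlib import TopologicalSorter   # Python 3.9+

# Minimal sketch: precedence constraints as a task graph (a DAG).
# Each key maps a task to the set of its predecessors.
graph = {
    "filter":  {"sample"},     # filter must run after sample
    "control": {"filter"},
    "actuate": {"control"},
}
order = list(TopologicalSorter(graph).static_order())
print(order)   # ['sample', 'filter', 'control', 'actuate']
```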

49 Task Graph

50 The Task Graph

51 Task Constraints
– Timing constraints
– Precedence constraints
– Resource constraints

52 Resource Constraints
Resource: a software structure used by a task during its execution, e.g. a data structure, variables, an area of main memory, a file, a piece of code, or a set of registers of a peripheral device.
– Shared resource: used by more than one task.
– Exclusive resource: no simultaneous access allowed; requires mutual exclusion. The operating system must provide a synchronization mechanism to ensure sequential access.

53 Critical Section
Critical section: a piece of code belonging to a task that is executed under mutual exclusion constraints.
Mutual exclusion is enforced by semaphores:
– wait(s): the task is blocked if s = 0.
– signal(s): s is set to 1 when signal(s) executes.
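A minimal sketch of the wait/signal pattern, using Python's threading primitives as a stand-in for the RTOS semaphore (illustrative only; the shared variable and task body are assumptions):

```python
import threading

s = threading.Semaphore(1)   # s = 1: resource free
shared_counter = 0

def task_body():
    global shared_counter
    s.acquire()              # wait(s): blocks if s = 0
    try:
        # ---- critical section: exclusive access to the shared resource ----
        shared_counter += 1
    finally:
        s.release()          # signal(s): releases the resource

threads = [threading.Thread(target=task_body) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_counter)        # 4: no update was lost
```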

54 Structure of Critical Sections

55 Wait State
– A task waiting for an exclusive resource is blocked on that resource.
– Tasks blocked on the same resource are kept in a wait queue associated with the semaphore protecting the resource.
– A task in the running state that executes wait(s) on a locked semaphore (s = 0) enters the waiting state.
– When the task currently using the resource executes signal(s), the semaphore is released.
– When a task leaves its waiting state (because the semaphore has been released) it goes into the ready state. Why not the running state?

56 Waiting State

57 Blocking via Exclusive Resource
J_1 has higher priority than J_2. Preemption is in play. Only one processor is available.

58 Multiprocessor Settings
[Timing diagram across two nodes, with labels a1, e1, d, H, a2, e2, s, rec.]

59 Scheduling Problem
Given:
– a task set {J_1, J_2, ..., J_n},
– processors {P_1, P_2, ..., P_m},
– resources {R_1, R_2, ..., R_s},
– timing constraints, precedence constraints and resource constraints.
Problem: assign processors and resources to tasks so that all the tasks can be finished under the imposed constraints.

60 Scheduling Problem
The general problem (in fact, various simpler versions of it) is NP-complete.

61 Scheduling Problem
The general problem (in fact, various simpler versions of it) is NP-complete:
– There is a non-deterministic Turing machine TM and a polynomial in one variable p(n) (e.g. 8n^3 + 5n + 6) such that for each problem instance of size n (in binary representation!), TM determines whether a schedule exists, and if so outputs one, in at most p(n) steps.
– Any non-deterministic polynomial-time problem can be transformed in deterministic polynomial time to the general scheduling problem.
– Only exponential-time deterministic algorithms are known.

62 Scheduling Problem
Algorithm 1: O(n). Algorithm 2: O(5^n). Each computation step takes 1 μsec. For n = 30:
– Algorithm 1: 30 μseconds.
– Algorithm 2: about 30,000,000 years!
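The arithmetic behind these figures is easy to reproduce (a quick check, assuming exactly one step per microsecond):

```python
# Quick sanity check of the running times quoted above (1 microsecond per step).
n = 30
linear_steps = n                    # O(n) algorithm
exponential_steps = 5 ** n          # O(5^n) algorithm

print(linear_steps, "microseconds")                 # 30 microseconds
seconds = exponential_steps * 1e-6
years = seconds / (3600 * 24 * 365)
print(f"{years:.2e} years")                         # roughly 3e7 years
```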

63 Scheduling Problems
We must find imperfect but efficient solutions to scheduling problems. A great variety of algorithms exists, with
– various assumptions,
– different complexities,
– different pragmatic content.
Optimal scheduling algorithm:
– one that minimizes a given cost function;
– if there is no cost function, one such that no other algorithm in the same class can produce a feasible schedule when the optimal one cannot.

64 A Classic Example
Rate Monotonic Scheduling (RMS).
– Task set {J_1, J_2, ..., J_n}; each task is periodic, with periods T_1, T_2, ..., T_n.
– Φ_i = 0 for each i.
– D_i = T_i for each i.
– Preemption is allowed.
– Only one processor.
– No precedence constraints.
– No shared resources.

65 RMS
The RMS algorithm:
– Assign a static priority to the tasks according to their periods: tasks with shorter periods have higher priorities.
– Preemption policy: if T_i is executing and T_j arrives with higher priority (a shorter period), preempt T_i and start executing T_j.
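Because RMS priorities depend only on the periods, assigning them amounts to sorting the task set; a small sketch (the task tuples are assumed example values, not lecture data):

```python
# Minimal sketch: rate-monotonic priority assignment.
# Tasks are (name, period, wcet); shorter period -> higher priority.
tasks = [("T1", 5, 2), ("T2", 7, 4), ("T3", 20, 3)]

rm_order = sorted(tasks, key=lambda t: t[1])       # sort by period
priorities = {name: prio for prio, (name, _, _) in enumerate(rm_order, start=1)}
print(priorities)   # {'T1': 1, 'T2': 2, 'T3': 3}  (1 = highest priority)
```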

66 RMS Results
– RMS is optimal: if a set of periodic tasks (satisfying the assumptions set out previously) is not schedulable under RMS, then no static-priority algorithm can schedule this set of tasks.
– RMS requires very little run-time processing.
– It is a static scheduling policy.

67 Processor Utilization Factor
Task set = {T_1, T_2, ..., T_n}.
Processor utilization factor: U = Σ C_i / T_i = C_1/T_1 + C_2/T_2 + ... + C_n/T_n.
– If U is GREATER than 1, the task set cannot be scheduled. Why?
– If U ≤ 1, it may be schedulable.
– If U ≤ U_lub, it is guaranteed to be schedulable.

68 Processor Utilization Factor
Task set = {T_1, T_2, ..., T_n}. If U ≤ U_lub then the task set is guaranteed to be schedulable under RM, where
U_lub = n(2^(1/n) − 1).
For large n this is approximately 0.69. If U is greater than U_lub but not greater than 1, we must check explicitly whether the task set is schedulable (under RM).
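Both tests, the necessary condition U ≤ 1 and the sufficient bound U ≤ U_lub = n(2^(1/n) − 1), are straightforward to code; a sketch with assumed task parameters, each task given as a (C_i, T_i) pair:

```python
def utilization(tasks):
    """tasks: list of (wcet, period) pairs; returns the sum of C_i / T_i."""
    return sum(c / t for c, t in tasks)

def rm_utilization_test(tasks):
    """Returns 'infeasible', 'guaranteed', or 'inconclusive' for RM scheduling."""
    u = utilization(tasks)
    n = len(tasks)
    u_lub = n * (2 ** (1 / n) - 1)
    if u > 1:
        return "infeasible"          # no algorithm can schedule the set
    if u <= u_lub:
        return "guaranteed"          # RM will meet all deadlines
    return "inconclusive"            # must check explicitly (e.g. simulate)

print(rm_utilization_test([(2, 5), (4, 7)]))   # 'inconclusive' (U ~ 0.97 > 0.83)
print(rm_utilization_test([(1, 5), (1, 7)]))   # 'guaranteed'   (U ~ 0.34)
```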

69 EDF
Earliest Deadline First:
– Tasks with earlier (absolute) deadlines have higher priorities.
– Applies to both periodic and aperiodic tasks.
– EDF is optimal among dynamic-priority algorithms.
– A set of periodic tasks is schedulable with EDF iff the utilization factor is not greater than 1.

70 An Example
Task set {T1, T2}. T1:
– Period = 5
– Computation time = 2

71 An Example
Task set {T1, T2}. T2:
– Period = 7
– Computation time = 4

72 An RMS Schedule?

73 An RMS Schedule
Time overflow: T2 misses its deadline.

74 The Example
U = 2/5 + 4/7 = 0.4 + 0.57 = 0.97.
This exceeds U_lub = 2(2^(1/2) − 1) ≈ 0.83, so the RM bound gives no guarantee; but since U ≤ 1, the task set is guaranteed to be schedulable under EDF!
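A unit-time simulation of this two-task set over one hyperperiod (lcm(5, 7) = 35) reproduces the behaviour shown on the previous slides: RM overflows on T2's first deadline, while EDF meets every deadline. This is my own sketch, not lecture code; it assumes D_i = T_i and breaks priority ties arbitrarily:

```python
def simulate(task_set, horizon, policy):
    """Unit-time simulation of periodic tasks with D_i = T_i.
    task_set: list of (name, period, wcet); policy: 'RM' or 'EDF'.
    Returns (task, time) of the first deadline miss, or None."""
    periods = {name: p for name, p, _ in task_set}
    jobs = {}                                   # name -> [remaining, abs_deadline]
    for t in range(horizon + 1):
        for name, period, wcet in task_set:
            if t % period == 0:                 # a new job is released at time t
                if jobs.get(name, [0])[0] > 0:  # previous job still unfinished
                    return (name, t)            # deadline miss (since D_i = T_i)
                jobs[name] = [wcet, t + period]
        pending = [(n, dl) for n, (rem, dl) in jobs.items() if rem > 0]
        if pending:
            if policy == "RM":                  # fixed priority: shortest period first
                run = min(pending, key=lambda j: periods[j[0]])[0]
            else:                               # EDF: earliest absolute deadline first
                run = min(pending, key=lambda j: j[1])[0]
            jobs[run][0] -= 1                   # execute one time unit
    return None

tasks = [("T1", 5, 2), ("T2", 7, 4)]
print(simulate(tasks, 35, "RM"))    # ('T2', 7): time overflow under RM
print(simulate(tasks, 35, "EDF"))   # None: every deadline is met
```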

75 Resource Access Protocols
Multiple tasks, a uniprocessor, shared resources:
– We need proper protocols for accessing shared resources: resource access protocols.
– Avoid priority inversion!

