
Slide 1: MC²: Mixed Criticality on Multicore. Jim Anderson, University of North Carolina at Chapel Hill. EMSOFT, September 2013.

Slide 2: Driving Problem. Joint work with Northrop Grumman Corp. on next-generation UAVs.
» Pilot functions performed using "AI-type" software.
» Causes certification difficulties.
» Workload is dynamic and computationally intensive.
» Suggests the use of multicore.
» Our focus: what should the OS look like?
– Particular emphasis: real-time constraints, which implies that we care about real-time operating systems (RTOSs).
[Image: Northrop Grumman X-47B UCAS. Source: http://www.as.northropgrumman.com/products/nucasx47b/assets/lgm_UCAS_3_0911.jpg]

Slide 3: Multicore in Avionics: The Good.
» Enables computationally intensive workloads to be supported.
» Enables SWaP reductions (SWaP = size, weight, and power).
» Multicore is now the standard platform.
[Diagram: cores 1..M, each with a private L1 cache, sharing an L2 cache.]

Slide 4: Multicore in Avionics: The Bad & Ugly.
» Interactions across cores through shared hardware (caches, buses, etc.) are difficult to predict.
» This results in very conservative execution-time estimates (and thus wasted processing resources).
» One approach: accept this fact, and use the resulting slack for less critical computing.
» "Multicore processors are big slack generators."
[Diagram: cores 1..M with private L1 caches sharing an L2 cache.]

Slide 5: What is "Less Critical"? Consider, for example, the criticality levels of DO-178B, ordered from more conservative design to less conservative design:
» Level A: Catastrophic.
– Failure may cause a crash.
» Level B: Hazardous.
– Failure has a large negative impact on safety or performance…
» Level C: Major.
– Failure is significant, but…
» Level D: Minor.
– Failure is noticeable, but…
» Level E: No Effect.
– Failure has no impact.

Slide 6: Starting Assumptions.
» Modest core count (e.g., 2-8).
– Quad-core in avionics would be a tremendous innovation.

Slide 7: Starting Assumptions.
» Modest core count (e.g., 2-8).
» Modest number of criticality levels (e.g., 2-5).
– 2 may be too constraining; ∞ isn't practically interesting.
– These levels may not necessarily match DO-178B.
– Perhaps 3 levels?
– Safety critical: operations required for stable flight.
– Mission critical: responses typically performed by pilots.
– Background work: e.g., planning operations.

Slide 8: Starting Assumptions.
» Modest core count (e.g., 2-8).
» Modest number of criticality levels (e.g., 2-5).
» Statically prioritize criticality levels.
– Schemes that dynamically mix levels may be problematic in practice. (Note: dynamic mixing is done in much theoretical work on mixed-criticality scheduling.)
– For example, support for resource sharing and implementation overheads may be concerns.
– Also, practitioners tend to favor simple resource-sharing schemes.

Slide 9: Starting Assumptions (recap).
» Modest core count (e.g., 2-8).
» Modest number of criticality levels (e.g., 2-5).
» Statically prioritize criticality levels.

Slide 10: Outline.
» Background: real-time scheduling basics; real-time multiprocessor scheduling; mixed-criticality scheduling.
» MC² design: scheduling and schedulability guarantees.
» MC² implementation: OS concerns.
» Future research directions.

Slide 11: Basic Task Model Assumed. Set τ = {T = (T.e, T.p)} of periodic tasks.
» T.e = T's worst-case per-job execution cost.
» T.p = T's period and relative deadline.
» T.u = T.e/T.p = T's utilization.
[Schedule diagram: tasks T = (2,5) and U = (9,15) on one core over [0,30], with job releases and job deadlines marked.]


Slide 15: Basic Task Model Assumed (cont'd). The schedule of T = (2,5) (utilization 2/5) and U = (9,15) on one core is an example of an earliest-deadline-first (EDF) schedule: at each instant, the ready job with the earliest deadline runs.
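The utilization notion above can be made concrete in a short sketch. This is my own illustration, not code from the talk; it uses the classical result that, for implicit-deadline periodic tasks on one core, EDF meets all deadlines iff total utilization is at most 1.

```python
# Hedged sketch (names are mine): utilization and the exact single-core
# EDF feasibility test for implicit-deadline periodic tasks.
from fractions import Fraction

def utilization(tasks):
    """Total utilization of tasks given as (e, p) pairs, where u = e/p."""
    return sum(Fraction(e, p) for e, p in tasks)

def edf_schedulable_one_core(tasks):
    """Exact EDF feasibility test on a single core: total utilization <= 1."""
    return utilization(tasks) <= 1

# The slide's example: T = (2,5) and U = (9,15).
tasks = [(2, 5), (9, 15)]
print(utilization(tasks))               # 2/5 + 3/5 = 1
print(edf_schedulable_one_core(tasks))  # True
```

With total utilization exactly 1, the core is fully loaded but EDF still meets every deadline, which is why the slide's diagram shows no misses.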

Slide 16: Multiprocessor Real-Time Scheduling. Two approaches:
» Partitioning. Steps: (1) assign tasks to processors (a bin-packing problem); (2) schedule the tasks on each processor using uniprocessor algorithms.
» Global scheduling. Important differences: there is one task queue, and tasks may migrate among the processors.
Today, we'll mainly consider partitioned earliest-deadline-first (P-EDF) and global earliest-deadline-first (G-EDF) scheduling…

Slide 17: Hard vs. Soft Real-Time.
» HRT: no deadline is missed.
– We assume all HRT tasks commence execution at time 0 and periods are harmonic (smaller periods divide larger ones).
» SRT: deadline tardiness is bounded.
– We allow SRT tasks to be non-harmonic and also sporadic (T's job releases are at least T.p time units apart).
– A wide variety of global scheduling algorithms are capable of ensuring bounded tardiness with no utilization loss.

Slide 18: Scheduling vs. Schedulability. What's "utilization loss"? With respect to scheduling, we actually care about two kinds of algorithms:
» The scheduling algorithm itself. Example: earliest-deadline-first (EDF), under which jobs with earlier deadlines have higher priority.
» A schedulability test. Given a task system τ, a test for EDF answers "yes" (no timing requirement will be violated if τ is scheduled with EDF) or "no" (a timing requirement will, or may, be violated).
Utilization loss occurs when the test requires utilizations to be restricted to get a "yes" answer.

Slide 19: P-EDF: Utilization Loss for Both HRT and SRT. Under partitioning and most global algorithms, overall utilization must be capped to avoid deadline misses.
» This is due to connections to bin-packing.
» Example: partitioning three tasks with parameters (2,3) on two processors will overload one processor. In bin-packing terms, each task is an item of size 2/3, and no two such items fit in one bin (processor) of capacity 1.
» Exception: global "Pfair" algorithms do not require caps.
– Such algorithms schedule jobs one quantum at a time, and may therefore preempt and migrate jobs frequently; this is perhaps less of a concern on a multicore platform.
» Under most global algorithms, if utilization is not capped, deadline tardiness is bounded. This is sufficient for soft real-time systems.
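The bin-packing connection above can be sketched directly. This is my own illustration (the talk does not prescribe a packing heuristic); it uses first-fit, one of the standard bin-packing heuristics used for partitioning.

```python
# Hedged sketch: first-fit assignment of tasks to processors, where each
# processor is a bin of utilization capacity 1.
from fractions import Fraction

def first_fit_partition(tasks, num_procs):
    """Assign (e, p) tasks to processors by first-fit; return the assignment,
    or None if some task fits on no processor."""
    load = [Fraction(0)] * num_procs
    assignment = [[] for _ in range(num_procs)]
    for e, p in tasks:
        u = Fraction(e, p)
        for i in range(num_procs):
            if load[i] + u <= 1:
                load[i] += u
                assignment[i].append((e, p))
                break
        else:
            return None  # the task does not fit on any processor
    return assignment

# The slide's example: three (2,3) tasks (u = 2/3 each) on two processors.
# No two items of size 2/3 fit in one bin of capacity 1, so partitioning fails.
print(first_fit_partition([(2, 3)] * 3, 2))  # None
```

Total utilization here is only 2, which two processors could in principle supply, yet partitioning fails: that gap is exactly the utilization loss the slide describes.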

Slide 20: G-EDF: HRT Loss, SRT No Loss. The previous example, scheduled under G-EDF:
[Schedule diagram over [0,30]: T1 = (2,3), T2 = (2,3), T3 = (2,3) on processors 1 and 2, with a deadline miss marked.]
Note the counterpart of "bin-packing" here: without a utilization cap a deadline is still missed (HRT loss), but tardiness remains bounded (SRT: no loss).
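The slide's example can be reproduced with a minimal quantum-based G-EDF simulator. This is my own sketch, not the talk's tooling; it shows that one task misses every deadline, but only by one quantum, i.e., its tardiness is bounded.

```python
# Hedged sketch: unit-quantum G-EDF simulation.
def g_edf_misses(tasks, num_procs, horizon):
    """tasks: (e, p) pairs; jobs are released at 0, p, 2p, ... with the
    deadline at the next release. Returns (task, deadline) for each miss."""
    jobs, misses = [], []          # jobs: [deadline, remaining work, task]
    for t in range(horizon):
        for i, (e, p) in enumerate(tasks):
            if t % p == 0:
                jobs.append([t + p, e, i])
        jobs.sort()                # earliest deadline first
        for job in jobs[:num_procs]:
            job[1] -= 1            # run one quantum on each processor
        misses += [(i, d) for d, rem, i in jobs if rem > 0 and d == t + 1]
        jobs = [j for j in jobs if j[1] > 0]
    return misses

# Three (2,3) tasks on two processors: the "losing" task misses every
# deadline, but each miss is by exactly one quantum (bounded tardiness).
print(g_edf_misses([(2, 3)] * 3, 2, 12))
# [(2, 3), (2, 6), (2, 9), (2, 12)]
```

Running the same simulator on a single-core EDF-feasible set (e.g., the earlier T = (2,5), U = (9,15) example) reports no misses at all.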

Slide 21: Mixed-Criticality Scheduling (proposed by Vestal [2007]). In reality, task execution times may vary widely. Vestal noted that different design and analysis methods are used at different criticality levels, with greater pessimism at higher levels, and suggested reflecting such differences in schedulability analysis.
[Diagram: a distribution of execution times, with markers for the average case, the measured worst case, and the proven worst case.]

Slide 22: Mixed-Criticality Scheduling (proposed by Vestal [2007]). Mixed-criticality task model: each task has an execution cost specified at each criticality level.
» Costs at higher levels are (typically) larger.
» Example: T.e_A = 20, T.e_B = 12, T.e_C = 5, …
» Rationale: more pessimistic analysis will be used at high levels, more optimistic analysis at low levels.

Slide 23: Mixed-Criticality Scheduling (proposed by Vestal [2007]).
» Each task has an execution cost specified at each criticality level; costs at higher levels are (typically) larger.
» The task system is correct at level X iff all level-X tasks meet their timing requirements assuming all tasks have level-X execution costs.
» Some "weirdness" here: there is not just one system anymore, but several: the level-A system, the level-B system, …
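The "several systems" view can be made concrete with a toy single-core check. This is my own sketch, not Vestal's analysis: the level-X system here consists of the tasks at criticality X or higher, each charged its level-X execution cost, checked with a simple EDF utilization test.

```python
# Hedged sketch: per-level correctness via a utilization test on one core.
from fractions import Fraction

LEVELS = "ABCD"  # A = highest criticality

def correct_at_level(tasks, level):
    """tasks: (crit, period, {level: cost}) triples on one core. The level-X
    system must fit on the processor using level-X costs."""
    u = sum(Fraction(costs[level], p)
            for crit, p, costs in tasks
            if LEVELS.index(crit) <= LEVELS.index(level))
    return u <= 1

# Slide 36's example task system (all on one core):
tasks = [("A", 10, {"A": 5, "B": 3}),
         ("A", 20, {"A": 10, "B": 6}),
         ("B", 10, {"B": 2}),
         ("B", 40, {"B": 8})]
print(correct_at_level(tasks, "A"))  # True: 5/10 + 10/20 = 1.0
print(correct_at_level(tasks, "B"))  # True: 0.3 + 0.3 + 0.2 + 0.2 = 1.0
```

Note how the same four tasks are judged twice, once per level and with different costs each time: that is exactly the "weirdness" the slide points out.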

Slide 24: Outline.
» Background: real-time scheduling basics; real-time multiprocessor scheduling; mixed-criticality scheduling.
» MC² design: scheduling and schedulability guarantees.
» MC² implementation: OS concerns.
» Future research directions.

Slide 25: Recall These Assumptions…
» Modest core count (e.g., 2-8).
» Modest number of criticality levels (e.g., 2-5).
» Statically prioritize criticality levels.

Slide 26: MC²: Mixed Criticality on Multicore. Our proposed mixed-criticality architecture (joint with NGC).
» We assume four criticality levels, A-D. (Originally, we assumed five, as in DO-178B.)
» We statically prioritize higher levels over lower ones.
» We assume: levels A and B require HRT guarantees; level C requires SRT guarantees; level D is non-RT.

Slide 27: Level-A Scheduling: "Absolute, Under All Circumstances, HRT". Partitioned scheduling is used, with a cyclic executive on each core.
» Cyclic executive: an offline-computed dispatching table.
» Now the de facto standard in avionics, and preferred by certification authorities, but somewhat painful from a software-development perspective.
Require:
» Total per-core level-A utilization ≤ 1 (assuming level-A execution costs).
» This is sufficient to ensure that tables can be constructed (no level-A deadline is missed).
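Table construction for a cyclic executive can be sketched as follows. This is my own construction, not the talk's: it simulates EDF in unit quanta over the hyperperiod and records which task owns each slot, which succeeds whenever the slide's utilization condition holds and periods are integral.

```python
# Hedged sketch: offline dispatching-table construction over the hyperperiod.
from math import lcm

def build_ce_table(tasks):
    """tasks: (e, p) pairs with integer parameters. Returns a per-slot owner
    list over the hyperperiod, or None if a deadline would be missed."""
    H = lcm(*(p for _, p in tasks))
    jobs, table = [], []                     # jobs: [deadline, remaining, task]
    for t in range(H):
        for i, (e, p) in enumerate(tasks):
            if t % p == 0:
                jobs.append([t + p, e, i])
        jobs.sort()                          # earliest deadline first
        if jobs and jobs[0][0] <= t:
            return None                      # work left past its deadline
        if jobs:
            jobs[0][1] -= 1
            table.append(jobs[0][2])
            jobs = [j for j in jobs if j[1] > 0]
        else:
            table.append(None)               # idle slot
    return None if jobs else table

# Slide 36's level-A tasks: T1 = (5, 10), T2 = (10, 20); hyperperiod = 20.
print(build_ce_table([(5, 10), (10, 20)]))
```

The resulting table alternates five slots of T1 and five of T2 over the 20-unit hyperperiod, and the runtime dispatcher simply replays it.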

Slide 28: So Far…
[Architecture diagram: on each of cores 1-4, a cyclic executive (CE) at level A; levels B-D below are not yet filled in. Static priority decreases from level A (higher) to level D (lower).]

Slide 29: Level-B Scheduling: "HRT Unless Something Really Unexpected Happens". Use P-EDF scheduling.
» Prior work at UNC has shown that P-EDF excels at HRT scheduling.
– With harmonic periods, partitioned rate-monotonic (P-RM) can be used instead.
» P-EDF eases the software-development pain of CEs.
Require:
» (*) Total per-core level-A+B utilization ≤ 1 (assuming level-B execution costs).
» (**) Each level-B period is a multiple of the level-A hyperperiod (= the LCM of the level-A periods). More on this later.
» (*) and (**) imply: if no level-A or level-B task ever exceeds its level-B execution cost, then no level-B task will miss a deadline.
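The two admission conditions above can be sketched as a small check. This is my own code (names are mine); it reproduces the example from the later slides, where the period condition (**) initially fails and "job slicing" repairs it.

```python
# Hedged sketch: level-B admission conditions (*) and (**) on one core.
from fractions import Fraction
from math import lcm

def level_b_ok(level_a, level_b):
    """level_a, level_b: lists of (e_B, p) pairs, i.e., level-B costs."""
    hyper_a = lcm(*(p for _, p in level_a))
    util = sum(Fraction(e, p) for e, p in level_a + level_b)     # (*)
    periods_ok = all(p % hyper_a == 0 for _, p in level_b)       # (**)
    return util <= 1 and periods_ok

# Slide 36's system under level-B costs: level A = {T1 = (3,10), T2 = (6,20)},
# level B = {T3 = (2,10), T4 = (8,40)}. The level-A hyperperiod is 20, and
# T3.p = 10 is not a multiple of it, so (**) fails...
print(level_b_ok([(3, 10), (6, 20)], [(2, 10), (8, 40)]))  # False

# ...but slicing each job of T2 into two (3,10) subjobs shrinks the level-A
# hyperperiod to 10, and the check passes.
print(level_b_ok([(3, 10), (3, 10)], [(2, 10), (8, 40)]))  # True
```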

Slide 30: So Far…
[Architecture diagram: per-core CE at level A; per-core EDF/RM at level B; levels C and D not yet filled in.]

Slide 31: Level-C Scheduling: "SRT". Use G-EDF scheduling.
» Prior work at UNC has shown that G-EDF excels at SRT scheduling.
Require:
» Total level-A+B+C utilization ≤ M = the number of cores (assuming level-C execution costs).
» Theorem [prior work at UNC]: G-EDF ensures bounded deadline tardiness, even on restricted-capacity processors, if there is no over-utilization and certain per-task utilization restrictions hold.
» These restrictions aren't very limiting if the level-A+B utilization (assuming level-C execution costs) is not large.
» Bottom line: if no level-A, -B, or -C task ever exceeds its level-C execution cost, bounded tardiness can be ensured at level C.
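The per-level conditions on this and the preceding slides can be composed into one admission sketch. This is my own composition (the representation and names are assumptions, not the talk's API): level-A and level-A+B utilization are checked per core, and level-A+B+C utilization is checked globally against M.

```python
# Hedged sketch: consolidated MC2 utilization-based admission check.
from fractions import Fraction

def mc2_admissible(tasks, M):
    """tasks: (level, core, period, {level: cost}); core is None for the
    globally scheduled level-C tasks."""
    def util(select, cost_level):
        return sum(Fraction(t[3][cost_level], t[2]) for t in tasks if select(t))
    for core in range(M):
        # Level A: per-core CE tables exist if level-A utilization <= 1.
        if util(lambda t: t[0] == "A" and t[1] == core, "A") > 1:
            return False
        # Level B: per-core level-A+B utilization <= 1 under level-B costs.
        if util(lambda t: t[0] in "AB" and t[1] == core, "B") > 1:
            return False
    # Level C: total level-A+B+C utilization <= M under level-C costs.
    return util(lambda t: t[0] in "ABC", "C") <= M

# A hypothetical 2-core system (level-C costs invented for illustration):
system = [("A", 0, 10, {"A": 5, "B": 3, "C": 2}),
          ("A", 1, 20, {"A": 10, "B": 6, "C": 4}),
          ("B", 0, 20, {"B": 8, "C": 4}),
          ("C", None, 30, {"C": 15})]
print(mc2_admissible(system, 2))  # True
```

The check deliberately mirrors the design-time spare-capacity reclamation discussed later: each level is validated only against the costs "appropriate" for it.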

Slide 32: So Far…
[Architecture diagram: per-core CE at level A; per-core EDF/RM at level B; a single G-EDF scheduler at level C spanning all cores; level D not yet filled in.]

Slide 33: Level-D Scheduling: "Best Effort". Ready level-D tasks run whenever there is no pending level-A, -B, or -C work on some processor. This seems suitable for potentially long-running jobs that don't require RT guarantees.

Slide 34: Full Architecture.
[Architecture diagram across cores 1-4: per-core CE at level A (highest static priority); per-core EDF/RM at level B; global G-EDF at level C; global best effort at level D (lowest static priority).]

Slide 35: Remaining Issues.
» Why the level-B period restriction?
» Budgeting.
» Slack shifting.

Slide 36: Example Task System (all tasks on the same processor).

  Task  Crit.  T.p  T.e_A  T.u_A  T.e_B  T.u_B
  T1    A      10   5      0.5    3      0.3
  T2    A      20   10     0.5    6      0.3
  T3    B      10   --     --     2      0.2
  T4    B      40   --     --     8      0.2
                                  Σ      1.0

Slide 37: Example Task System (cont'd). This task system violates our level-B period assumption: the level-A hyperperiod is lcm(10, 20) = 20, and T3's period (10) is not a multiple of it.

Slide 38: Example Task System (cont'd). Each level-A task has a different execution cost (and hence utilization) per level, and execution costs (and hence utilizations) are larger at higher levels.

Slide 39: "Level-A System".
[Schedule diagram over [0,40]: T1 = (5,10) and T2 = (10,20), with job releases and job deadlines marked.]
Higher levels have statically higher priority; thus, level-A tasks are oblivious of other tasks.

Slide 40: "Level-B System".
[Schedule diagram over [0,40]: T1 = (3,10), T2 = (6,20), T3 = (2,10), T4 = (8,40), with job releases and job deadlines marked.]
T3 misses some deadlines. This is because our period constraint for level B is violated. We can satisfy this constraint by "slicing" each job of T2 into two "subjobs"…

Slide 41: "Level-B System" (after slicing).
[Schedule diagram over [0,40]: T1 = (3,10), T2 sliced into (3,10), T3 = (2,10), T4 = (8,40).]
All deadlines are now met.

Slide 42: Budgeting. MC² optionally supports budget enforcement.
» The OS treats a level-X task's level-X execution time as a budget that cannot be exceeded.
[Timeline over [0,40] comparing three cases for jobs 1-3: the provisioned schedule, execution without budget enforcement (BE), and execution with BE.]

Slide 43: Budgeting (cont'd). MC² optionally supports budget enforcement.
» Pro: makes "job slicing" easier.
» Con: can make the system less resilient in dealing with breaches of execution-time assumptions. More on this later.
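Slide 42's provisioned / without-BE / with-BE comparison can be sketched on one core. This is my simplification (numbers are invented): with enforcement, a job that overruns its budget is cut off, so a later job is not delayed; without it, the overrun pushes later jobs back.

```python
# Hedged sketch: effect of budget enforcement on job completion times.
def completion_times(budget, actual_costs, period, enforce):
    """Jobs are released every `period`; job k needs actual_costs[k] time.
    With enforcement, each job receives at most `budget` units."""
    done, t = [], 0
    for k, cost in enumerate(actual_costs):
        t = max(t, k * period)                       # wait for the release
        t += min(cost, budget) if enforce else cost  # run (possibly cut off)
        done.append(t)
    return done

# Budget 5, period 10; job 2 actually runs for 12 units.
print(completion_times(5, [4, 12, 4], 10, enforce=True))   # [4, 15, 24]
print(completion_times(5, [4, 12, 4], 10, enforce=False))  # [4, 22, 26]
```

With enforcement the overrunning job is truncated (its extra work is simply lost, which is the resilience "con"), while without enforcement job 3 is delayed past its release.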

Slide 44: Slack Shifting. Provisioned execution times at higher criticality levels may be very pessimistic.
» At level A, WCETs (worst-case execution times) may be much higher than actual execution times.
» Thus, higher levels produce "slack" that can be consumed by lower levels. This could lessen tardiness at level C, or increase throughput at level D.

Slide 45: Slack Shifting (cont'd). Slack-shifting rule:
» When a level-X job under-runs its level-X execution time, it becomes a "ghost job."
» When a ghost job is scheduled, its processor time is donated to lower levels.
Such donations do not invalidate correctness properties.
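The effect of the rule, matching the simple two-level example on the next slides, can be sketched as follows. This is my own one-core simplification (the function and its parameters are assumptions, not the MC² implementation).

```python
# Hedged sketch: a level-B job under-runs its budget; the residue becomes a
# "ghost job" whose processor time is donated to level C.
def level_c_finish(b_budget, b_actual, c_work, shift_slack):
    """One level-B job provisioned for b_budget units but actually running
    b_actual <= b_budget, followed by c_work units of level-C work."""
    if shift_slack:
        start_c = b_actual   # the ghost job donates its remaining time
    else:
        start_c = b_budget   # level C waits out the full provision
    return start_c + c_work

# Budget 10, actual 3, 5 units of level-C work:
print(level_c_finish(10, 3, 5, shift_slack=True))   # 8
print(level_c_finish(10, 3, 5, shift_slack=False))  # 15
```

Since the ghost job would have "run" during its provisioned window anyway, donating that window to lower levels never delays the higher level, which is why correctness is preserved.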

Slide 46: A (Very) Simple Example.
[Schedule diagram over [0,30]: a level-B job under-runs its provisioned level-B execution time; the donated execution time is consumed by level C.]

Slide 47: Without Slack Shifting.
[The same schedule without donation: the level-C work finishes later. Remember, the provisioned level-B execution time could be huge.]

Slide 48: Spare Capacity Reclamation Summary. MC² does this in two ways:
» At design time: when we validate level-X RT guarantees, using execution costs "appropriate" for level X.
» At run time: MC² uses slack shifting to re-allocate available slack, exploiting the fact that actual execution costs (at whatever level) may be less than provisioned costs.

Slide 49: Outline.
» Background: real-time scheduling basics; real-time multiprocessor scheduling; mixed-criticality scheduling.
» MC² design: scheduling and schedulability guarantees.
» MC² implementation: OS concerns.
» Future research directions.

Slide 50: LITMUS^RT Implementation of MC². LITMUS^RT (Linux Testbed for Multiprocessor Scheduling in Real-Time systems) is a real-time Linux extension originally developed at UNC.
» (Multiprocessor) scheduling and synchronization algorithms are implemented as "plug-in" components.
» Developed as a kernel patch (currently based on Linux 3.10.5).
» Code is available at http://www.litmus-rt.org/.

Slide 51: MC² LITMUS^RT Plugin. The plugin is container-based.
» One level-A and one level-B container per core; one level-C and one level-D container for the whole system.
» Each container schedules its constituent tasks; Linux schedules level-D tasks.

Slide 52: MC² LITMUS^RT Plugin (cont'd). In developing an MC² plugin, we sought to address two questions:
» How to best implement MC² so that high levels are shielded from OS overheads caused by low levels?
» How robust is MC²? That is, even though the theory views MC² as four different systems (levels A, B, C, and D), does the "almost" level-C (or level-B) system behave reasonably?

Slide 53: Implementation Issues. Higher criticality levels can be adversely impacted by the following:
» Lock delays in the OS caused by low levels.
– Solution: re-factor kernel data structures and use fine-grained locking and wait-free techniques.
» Interrupts caused by low levels.
– Solution: direct all interrupts to one processor.
» Timer events caused by low levels.
– Solution: enable timer events that are "close together" to be merged and serviced in criticality order.
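The timer-merging idea can be sketched in isolation. This is my own toy model, not the actual LITMUS^RT code: events closer together than a window are coalesced into one timer firing, and each merged batch is serviced in criticality order (A before D).

```python
# Hedged sketch: coalesce nearby timer events and order each batch by level.
def coalesce(events, window):
    """events: (time, level) pairs; returns (fire_time, batch) pairs, where
    each batch is the merged events sorted by criticality (A highest)."""
    merged = []
    for time, level in sorted(events):
        if merged and time - merged[-1][0] <= window:
            merged[-1][1].append((time, level))   # fold into the open batch
        else:
            merged.append([time, [(time, level)]])
    return [(t, sorted(batch, key=lambda e: e[1])) for t, batch in merged]

events = [(100, "C"), (101, "A"), (250, "B"), (103, "D")]
print(coalesce(events, 5))
# [(100, [(101, 'A'), (100, 'C'), (103, 'D')]), (250, [(250, 'B')])]
```

Three hardware timer firings become two, and within the merged batch the level-A event is serviced first even though it was not the earliest, which is how low-level timer traffic is kept from delaying high levels.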

Slide 54: These Techniques Matter. To show this, we experimented with "avionics-like" synthetic task systems.
» Levels A and B consume 20% of the available capacity on average.
» Levels A and B have harmonic periods.
» All periods range over [10, 100] ms.
» The number of tasks was varied.
We assessed scheduling and release overhead on a six-core system.

Slide 55: Example Graph.
[Graph: worst-case release overhead vs. number of tasks on the six-core system.]

Slide 56: The graph highlights the worst-case level-A release overhead with no techniques enabled.

Slide 57: The graph highlights the worst-case level-A release overhead with all techniques enabled.

Slide 58: The techniques reduced the worst-case level-A release overhead for the 80-task microbenchmark from about 27 μs to about 10 μs.


Slide 60: General Conclusions. The proposed techniques lower both worst-case and average-case overheads.
» The worst-case improvement benefits levels A and B.
» The average-case improvement benefits level C, assuming an average- or near-average-case provisioning is suitable for level C.
Note: in schedulability analysis, overheads are dealt with by inflating execution times; some overheads result in multiple inflations.

Slide 61: Robustness Experiments. Again, we used synthetic, "avionics-like" task systems. Provisioning:
» T.e_C: the observed average case.
» T.e_B: the observed worst case, ≈ 10 × T.e_C.
» T.e_A: twice level B.
We configured actual task execution times using a beta distribution with mean = T.e_C and standard deviation such that P[exec ≥ T.e_B] ≤ 0.01.

Slide 62: Perturbing the System. Beta-distribution values range over (0,1) and must be scaled to produce execution times. In the "expected" system, a beta value of
» 0.05 corresponds to a level-C cost,
» 0.5 corresponds to a level-B cost, and
» 1.0 corresponds to a level-A cost.
We created "unexpected" systems by
» allowing some tasks (either 10% or 50%) to be aberrant, and
» having aberrant tasks use a higher beta mean.

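Workload generation along these lines can be sketched with the standard library. The parameterization is my own choice (the talk only fixes the means, not alpha and beta): I hold alpha + beta constant and solve for the mean, so "expected" tasks center at the level-C value 0.05 and aberrant tasks at a higher mean such as 0.5.

```python
# Hedged sketch: per-job "beta values" in (0, 1) with a chosen mean, to be
# scaled so 0.05 maps to the level-C cost, 0.5 to level-B, 1.0 to level-A.
import random

def beta_exec_values(mean, n, concentration=10.0, seed=1):
    """Sample n beta-distributed values with the given mean; alpha + beta is
    fixed at `concentration` (an arbitrary choice of mine)."""
    rng = random.Random(seed)
    a = mean * concentration
    b = concentration - a
    return [rng.betavariate(a, b) for _ in range(n)]

# "Expected" tasks center at 0.05 (level C); aberrant tasks use a higher
# mean, e.g. 0.5 (level-B costs, i.e. 10x as long as expected).
expected = beta_exec_values(0.05, 100000)
aberrant = beta_exec_values(0.5, 100000)
print(round(sum(expected) / len(expected), 2))  # 0.05
print(round(sum(aberrant) / len(aberrant), 2))  # 0.5
```

Matching the slide's tail condition P[exec ≥ T.e_B] ≤ 0.01 would additionally constrain the standard deviation, i.e., the concentration parameter; that tuning is omitted here.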

Slide 64: Example Graph.
[Graph: average relative response time vs. degree of aberrance, with 10% of tasks aberrant.]

Slide 65: The y-axis is the average relative response time (relative response time = response time / period).

Slide 66: 10% of the tasks are "aberrant."

Slide 67: The "expected" system: on average, all costs ≈ level-C costs.

Slide 68: On average, aberrant tasks execute twice as long as expected.

Slide 69: On average, aberrant tasks execute 10 times as long as expected (these are level-B costs).

Slide 70: On average, aberrant tasks execute ≈ 20 times as long as expected (these are level-A costs).

Slide 71: One curve shows the average relative response time at level C, with budget enforcement.

Slide 72: One curve shows the average relative response time at level C, with no budget enforcement.

Slide 73: One curve shows the average relative response time at level B, with budget enforcement.

Slide 74: One curve shows the average relative response time at level B, with no budget enforcement.


Slide 76: Let's zoom in to see level C better…

Slide 77: Example Graph (magnified).
[Magnified graph: average relative response time, 10% aberrant, focusing on the level-C curves.]

Slide 78: General Conclusions. MC² seems fairly robust when reality doesn't match expectations. Turning off budget enforcement seems helpful, but certification authorities probably like budget enforcement.

Slide 79: Outline.
» Background: real-time scheduling basics; real-time multiprocessor scheduling; mixed-criticality scheduling.
» MC² design: scheduling and schedulability guarantees.
» MC² implementation: OS concerns.
» Future research directions.

Slide 80: Other Work. Ongoing:
» Add shared-cache management. An initial prototype was discussed at ECRTS'13.
Future:
» Add support for synchronization.
» Add support for dynamic adaptations at level C.
» Study other combinations of per-level schedulers.

Slide 81: MC² Papers. (All papers are available at http://www.cs.unc.edu/~anderson/papers.html.)
» J. Anderson, S. Baruah, and B. Brandenburg, "Multicore Operating-System Support for Mixed Criticality," Proc. of the Workshop on Mixed Criticality: Roadmap to Evolving UAV Certification, 2009. A "precursor" paper that discusses some of the design decisions underlying MC².
» M. Mollison, J. Erickson, J. Anderson, S. Baruah, and J. Scoredos, "Mixed-Criticality Real-Time Scheduling for Multicore Systems," Proc. of the 7th IEEE International Conf. on Embedded Software and Systems, 2010. Focus is on schedulability, i.e., how to check timing constraints at each level and "shift" slack.
» J. Herman, C. Kenna, M. Mollison, J. Anderson, and D. Johnson, "RTOS Support for Multicore Mixed-Criticality Systems," Proc. of the 18th IEEE Real-Time and Embedded Technology and Applications Symp., 2012. Focus is on RTOS design, i.e., how to reduce the impact on high-criticality tasks of RTOS-related overheads caused by low-criticality tasks.
» B. Ward, J. Herman, C. Kenna, and J. Anderson, "Making Shared Caches More Predictable on Multicore Platforms," Proc. of the 25th Euromicro Conference on Real-Time Systems, 2013. Adds shared-cache management to a two-level variant of MC².

Slide 82: Mixed-Criticality Work by Others. For a good, recent survey of mixed-criticality work by others, I suggest: "Mixed Criticality Systems - A Review," by Alan Burns and Rob Davis, University of York, U.K., July 2013. Available at: http://www-users.cs.york.ac.uk/~burns/review.pdf.

Slide 83: Thanks! Questions?

