Multiprocessor Fixed Priority Scheduling with Limited Preemptions Abhilash Thekkilakattil, Rob Davis, Radu Dobrin, Sasikumar Punnekkat and Marko Bertogna.

Motivation
- Preemptive scheduling on multi-core processors introduces new challenges:
  - Complex hardware, e.g., different levels of caches, makes it difficult to perform timing analysis
  - A potentially large number of task migrations makes it difficult to demonstrate predictability and to reason about safety
- Non-preemptive scheduling can be infeasible at arbitrarily small utilizations: the long task problem arises when at least one task has an execution time greater than the shortest deadline
- One solution: limit preemptions

System Model
- Sporadic tasks characterized by a release time, a minimum inter-arrival time (period) and a relative deadline
- Each task consists of a sequence of non-preemptive regions (NPRs) separated by fixed preemption points; the sum of the NPR lengths equals the WCET
- Identical multiprocessor platform with m processors
(Figure: timeline of two consecutive jobs of a task, annotated with the release times, period, relative deadline and fixed preemption points.)
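
A minimal sketch of this task model as a data structure; the class and field names are illustrative, not notation from the paper:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    """Sporadic task with fixed preemption points (illustrative model)."""
    period: float            # minimum inter-arrival time T_i
    deadline: float          # relative deadline D_i
    nprs: List[float]        # lengths of the non-preemptive regions, in execution order

    @property
    def wcet(self) -> float:
        # The WCET C_i is the sum of the NPR lengths (preemption points cost nothing here).
        return sum(self.nprs)

# Example: a task with three NPRs, to be scheduled on m identical processors.
m = 4
tau_1 = Task(period=100.0, deadline=80.0, nprs=[5.0, 3.0, 4.0])
assert tau_1.wcet == 12.0
```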

Limited Preemptive Scheduling
- Combines the best of preemptive and non-preemptive scheduling
- Controls preemption related overheads: context switch costs, cache related preemption delays, pipeline delays and bus contention costs
- Improves processor utilization: reduces preemption related costs while eliminating infeasibility due to blocking
- Anecdotal evidence: “limiting preemptions improves safety and makes it easier to certify software for safety-critical applications”
(Figure: trade-off between blocking and preemption overheads, ranging from low to high.)

Limited Preemptive Scheduling Landscape
- Uniprocessor:
  - Limited preemptive FPS (Burns ’94; Bril et al., RTSJ ’09; Yao et al., RTSJ ’11)
  - Limited preemptive EDF (Baruah, ECRTS ’05)
- Multiprocessor:
  - Global limited preemptive FPS (Block et al., RTCSA ’07; Marinho et al., RTSS ’13; Davis et al., TECS ’15)
  - Global limited preemptive EDF (Block et al., RTCSA ’07; Thekkilakattil et al., ECRTS ’14; Chattopadhyay and Baruah, RTNS ’14)
... of course, the references are by no means exhaustive!

Managing Preemptions in Global Limited Preemptive Scheduling: Lazy Preemption Approach
(Figure: two-processor schedule with low, medium and high priority jobs; under the lazy approach, an arriving high priority job waits until the lowest priority running job reaches a preemption point before preempting it.)

Managing Preemptions in Global Limited Preemptive Scheduling: Eager Preemption Approach
(Figure: the same two-processor schedule; under the eager approach, the arriving high priority job preempts the first lower priority job that reaches a preemption point, even though an even lower priority job keeps running, causing blocking of the medium priority work.)
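
The difference between the two approaches boils down to which running job a waiting higher priority job is allowed to preempt when a preemption point is reached. The following dispatcher fragment is purely illustrative (names and priority convention are assumptions, not code from the paper): under the eager approach the waiting job preempts the first lower priority job that becomes preemptable, while under the lazy approach it only preempts the lowest priority running job.

```python
def should_preempt(at_point_priority, running_priorities, waiting_priority, eager):
    """Decide whether the highest priority waiting job preempts the job that has just
    reached a preemption point. Convention: larger number = lower priority.

    at_point_priority : priority of the running job now at a preemption point
    running_priorities: priorities of all currently running jobs
    waiting_priority  : priority of the highest priority waiting job
    eager             : True for the eager approach, False for the lazy approach
    """
    if waiting_priority >= at_point_priority:
        return False  # the job at the point is not lower priority; no preemption
    if eager:
        # Eager: preempt the first lower priority job that becomes preemptable,
        # even if an even lower priority job keeps running on another processor.
        return True
    # Lazy: only preempt if this is the lowest priority job currently running.
    return at_point_priority == max(running_priorities)

# Example: jobs with priorities 3 and 5 are running; the priority-3 job reaches a
# preemption point while a priority-2 job is waiting.
print(should_preempt(3, [3, 5], 2, eager=True))   # True: eager preempts immediately
print(should_preempt(3, [3, 5], 2, eager=False))  # False: lazy waits for the priority-5 job
```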

Global Limited Preemptive FPS with Fixed Preemption Points
(Block et al., RTCSA’07 and Marinho et al., RTSS’13)
- Lazy Preemption Approach: Block et al., RTCSA’07 (Link Based Scheduling)
- Eager Preemption Approach

Lazy Preemption Approach: Link Based Scheduling
- Developed in the context of resource sharing by Block et al., RTCSA’07; applicable to limited preemptive scheduling
- Implements the lazy preemption approach: a higher priority task blocked on a processor is linked to that processor
- Analyzable using a simple and generic inflation based test (Brandenburg and Anderson, MPI Tech Report ’14):
  1) Inflate the WCET with the largest blocking factor
  2) Determine schedulability using any standard test, e.g., response time analysis for global preemptive FPS
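
A sketch of this inflation based test, under stated assumptions: the task representation and function names are illustrative, and the "standard test" used here is a Bertogna/Cirinei style response time analysis for global preemptive FPS, which may differ in detail from the test used in the paper's experiments.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    period: int      # minimum inter-arrival time T
    deadline: int    # relative deadline D (assumed constrained: D <= T)
    nprs: List[int]  # NPR lengths; WCET C = sum(nprs)

    @property
    def wcet(self) -> int:
        return sum(self.nprs)

def workload(t: Task, window: int, c: int) -> int:
    """Bertogna/Cirinei-style upper bound on the workload of a higher priority task t
    in any window of the given length, using an (inflated) execution time c <= t.deadline."""
    n = (window + t.deadline - c) // t.period
    return n * c + min(c, window + t.deadline - c - n * t.period)

def link_based_schedulable(tasks: List[Task], m: int) -> bool:
    """Inflation-based test for link-based (lazy) G-LP-FPS on m processors.
    Tasks must be listed in decreasing priority order (index 0 = highest)."""
    # Step 1: inflate every WCET by the largest NPR of any lower priority task.
    inflated = []
    for i, t in enumerate(tasks):
        lower_nprs = [q for lo in tasks[i + 1:] for q in lo.nprs]
        inflated.append(t.wcet + max(lower_nprs, default=0))
    # Step 2: response time analysis for global preemptive FPS on the inflated set.
    for i, t in enumerate(tasks):
        ci = inflated[i]
        r = ci
        while r <= t.deadline:
            interference = sum(
                min(workload(tasks[j], r, inflated[j]), r - ci + 1) for j in range(i)
            )
            r_next = ci + interference // m
            if r_next == r:
                break
            r = r_next
        if r > t.deadline:
            return False
    return True

# Example: three tasks on m = 2 processors, in decreasing priority order.
tasks = [
    Task(period=20, deadline=20, nprs=[2, 2]),
    Task(period=50, deadline=40, nprs=[5, 5]),
    Task(period=100, deadline=100, nprs=[10, 8]),
]
print(link_based_schedulable(tasks, m=2))
```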

Global Limited Preemptive FPS with Fixed Preemption Points
- Lazy Preemption Approach: Block et al., RTCSA’07 (Link Based Scheduling)
- Eager Preemption Approach: no significant work!
How can we perform schedulability analysis of tasks scheduled using G-LP-FPS with eager preemptions?

Schedulability Analysis under G-LP-FPS with Eager Preemptions
(Figure: the response time of task i is made up of interference from both higher and lower priority tasks, in addition to task i's own execution.)

Schedulability Analysis under G-LP-FPS with Eager Preemptions
The interference from higher and lower priority tasks on task i is bounded by distinguishing two cases:
- Case 1: no “push through” blocking
- Case 2: presence of “push through” blocking

Lower Priority Interference before Task Start Time (Case 1: no push through blocking)
blocking = sum of the m largest lower priority NPRs
(Figure: two-processor schedule in which task i, of high priority, is blocked by medium and low priority NPRs executing on both processors just before it can start.)

Lower Priority Interference before Task Start Time (Case 2: presence of push through blocking)
blocking = sum of the m largest among {lower priority NPRs, final NPR of task i}
(Figure: two-processor schedule in which task i is additionally blocked by the final NPR of its own previous job, which is pushed through into the current window.)
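
A small sketch of the two blocking bounds from the previous slides (Case 1 and Case 2), using illustrative names; the ε offset shown in the figures is omitted:

```python
def sum_of_largest(values, k):
    """Sum of the k largest values (if there are fewer than k values, sum them all)."""
    return sum(sorted(values, reverse=True)[:k])

def blocking_before_start(lower_priority_nprs, final_npr_of_i, m, push_through):
    """Bound on lower priority interference before task i starts executing.

    lower_priority_nprs: list of all NPR lengths of lower priority tasks
    final_npr_of_i     : length of task i's own final NPR
    m                  : number of processors
    push_through       : False for Case 1; True for Case 2, where the final NPR of
                         task i is also a blocking candidate
    """
    candidates = list(lower_priority_nprs)
    if push_through:
        candidates.append(final_npr_of_i)
    return sum_of_largest(candidates, m)

# Example on m = 2 processors:
lp_nprs = [4, 7, 2, 5]
print(blocking_before_start(lp_nprs, final_npr_of_i=6, m=2, push_through=False))  # 7 + 5 = 12
print(blocking_before_start(lp_nprs, final_npr_of_i=6, m=2, push_through=True))   # 7 + 6 = 13
```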

Lower Priority Interference after Task Start Time
blocking = sum of the (m-1) largest lower priority NPRs
- The number of processors executing a lower priority NPR while task i runs is at most m-1
(Figure: two-processor schedule in which task i is preempted by a high priority task and, on resuming, is blocked by a lower priority NPR on the other processor.)
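
The corresponding bound for blocking after the start time, again as an illustrative sketch with assumed names: since task i occupies one processor, at most m-1 processors can be executing lower priority NPRs.

```python
def blocking_after_start(lower_priority_nprs, m):
    """Blocking after task i has started is bounded by the sum of the (m-1) largest
    lower priority NPR lengths (task i itself occupies one of the m processors)."""
    return sum(sorted(lower_priority_nprs, reverse=True)[:m - 1])

print(blocking_after_start([4, 7, 2, 5], m=2))  # 7
```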

Schedulability Analysis under G-LP-FPS with Eager Preemptions
The response time of task i is obtained by summing the interference from higher and lower priority tasks accumulated across its non-preemptive regions and adding task i's execution time: R_i = (total higher and lower priority interference) + C_i
- Of course, preemption may not occur at all preemption points
- The number of preemptions is expressed as a function of the response time to reduce pessimism
- Details are in the paper
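
The precise interference bounds are in the paper. Purely to illustrate the overall shape of such a test, here is a generic fixed-point response time skeleton in which the per-window interference-and-blocking bound is a pluggable function; this is not the paper's actual recurrence, and the toy interference function below is an assumption for demonstration only.

```python
def response_time(ci, deadline, interference_bound, m):
    """Generic fixed-point iteration R = C_i + floor(interference_bound(R) / m).
    interference_bound(R) must upper-bound the total higher and lower priority work
    that can interfere with task i in any window of length R (e.g., higher priority
    workload plus the blocking terms of the previous slides, scaled by a bound on the
    number of preemptions within R). Returns the response time, or None if it exceeds
    the deadline."""
    r = ci
    while r <= deadline:
        r_next = ci + interference_bound(r) // m
        if r_next == r:
            return r
        r = r_next
    return None

# Toy usage with an assumed interference function (illustration only):
print(response_time(ci=10, deadline=50,
                    interference_bound=lambda R: 2 * R // 3 + 8, m=2))
```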

Experiments
Which of the eager and lazy preemption approaches is better for Global Limited Preemptive FPS (G-LP-FPS)?
- Compared schedulability under eager preemptions and lazy preemptions
- Test for lazy preemptions: the test for link-based scheduling, which implements lazy preemptions
  - Inflate task execution time with the largest blocking time
  - Perform response time analysis for G-P-FPS

Overview of Experiments
- Task utilizations generated using UUniFast-Discard
- Periods in the range 50 to 500
- Taskset utilization in the range 2.4 to m
We investigated how weighted schedulability varies with:
1. Varying number of tasks
2. Varying number of processors
3. Varying NPR lengths
   a. relatively large NPRs w.r.t. task WCETs
   b. relatively small NPRs w.r.t. task WCETs
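
For reference, a compact sketch of the UUniFast-Discard procedure (Davis and Burns) named above; parameter names and the toy taskset construction are illustrative:

```python
import random

def uunifast_discard(n, total_utilization, max_tries=1000):
    """Generate n task utilizations summing to total_utilization (which may exceed 1
    on a multiprocessor), discarding samples in which any single task exceeds 1."""
    for _ in range(max_tries):
        utils = []
        remaining = total_utilization
        for i in range(1, n):
            next_remaining = remaining * random.random() ** (1.0 / (n - i))
            utils.append(remaining - next_remaining)
            remaining = next_remaining
        utils.append(remaining)
        if all(u <= 1.0 for u in utils):
            return utils
    raise RuntimeError("could not generate a valid utilization vector")

# Example: 8 tasks with total utilization 2.4 (as in the experiments),
# periods drawn from [50, 500], WCET = utilization * period.
utils = uunifast_discard(8, 2.4)
tasks = []
for u in utils:
    period = random.randint(50, 500)
    tasks.append((max(1, round(u * period)), period))  # (WCET, period)
print(tasks)
```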

Weighted Schedulability
- Weighs schedulability with utilization (Bastoni et al., OSPERT’10): each generated taskset Γ contributes its schedulability result S(Γ, p) w.r.t. a parameter p, weighted by its utilization U(Γ)
- Enables investigation of schedulability w.r.t. a second parameter in addition to utilization
- Higher weighted schedulability implies a better algorithm with respect to scheduling high utilization tasksets (and thus a better algorithm w.r.t. efficiency)
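
Based on the definition from Bastoni et al. (OSPERT'10), weighted schedulability for a parameter value p can be sketched as the utilization-weighted average of the binary schedulability results over all generated tasksets; the taskset representation and the placeholder test below are assumptions for illustration.

```python
def weighted_schedulability(tasksets, schedulable, p):
    """W(p) = sum over tasksets of U(Gamma) * S(Gamma, p), divided by the sum of U(Gamma),
    where U(Gamma) is the taskset utilization and S(Gamma, p) is 1 if Gamma is deemed
    schedulable for parameter value p and 0 otherwise.

    tasksets   : list of tasksets; each taskset is a list of (wcet, period) pairs
    schedulable: function (taskset, p) -> bool, the schedulability test under study
    """
    num = den = 0.0
    for gamma in tasksets:
        u = sum(c / t for c, t in gamma)
        num += u * (1.0 if schedulable(gamma, p) else 0.0)
        den += u
    return num / den if den > 0 else 0.0

# Toy usage with a placeholder utilization-bound "test" (illustration only):
example_sets = [[(10, 50), (20, 100)], [(40, 50), (30, 60)]]
print(weighted_schedulability(example_sets,
                              lambda g, p: sum(c / t for c, t in g) <= p, p=1.0))
```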

Varying Number of Tasks
(Plot: weighted schedulability as a function of the number of tasks, for eager and lazy preemptions.)
Observation: the eager approach outperforms the lazy approach for larger numbers of tasks.

Varying Number of Processors
(Plot: weighted schedulability as a function of the number of processors, for eager and lazy preemptions.)
Observation: higher utilization with a fixed number of tasks n implies larger execution times, hence larger NPRs and more blocking after the start time.

Varying Lengths of NPRs (large)
(Plot: weighted schedulability for eager and lazy preemptions as NPR lengths grow large relative to task WCETs; the annotations give the resulting number of preemptions, decreasing from 3, 2, 1, ≈1 down to 0.)

Varying Lengths of NPRs (small)
(Plot: weighted schedulability for eager and lazy preemptions with NPR lengths that are small relative to task WCETs.)
Observation: the lazy approach outperforms the eager approach for smaller NPR lengths; small NPR lengths imply many preemption points and hence more blocking.

Conclusions
- Presented a schedulability test for global LP-FPS with eager preemptions
- Compared the eager and lazy approaches using synthetically generated tasksets: the eager approach outperforms the lazy approach
- Eager preemption is beneficial if high priority tasks have short deadlines relative to their WCETs, since they need to be scheduled as soon as possible
- Lazy preemption is beneficial if tasks have many preemption points, since it reduces the blocking occurring after tasks start their execution

Future Work
- Evaluation of the runtime preemptive behavior of the eager and lazy approaches under global EDF and FPS: LP scheduling with the eager approach generates more runtime preemptions than fully preemptive scheduling (under submission to RTAS'16)
- Evaluation on real hardware: context switch overheads and cache related preemption delays
- Efficient preemption point placement strategies for multiprocessor systems

Questions? Thank you!