1 Harini Ramaprasad, Frank Mueller
North Carolina State University, Center for Embedded Systems Research
Tightening the Bounds on Feasible Preemption Points

2 Motivation
Timing analysis
— Calculation of Worst-Case Execution Times (WCETs) of tasks
— Required for scheduling of real-time tasks
– Schedulability theory requires a priori knowledge of WCETs
— Estimates need to be safe
— Static timing analysis: an efficient method to calculate the WCET of a program
— Data caches (D$) introduce unpredictability into timing analysis
Data caches improve performance significantly, but complicate static timing analysis for a task

3 Preemptive Scheduling
Practical real-time systems
— Multiple tasks with varying priorities
— A higher-priority task may preempt a lower-priority task at any time
— Additional misses occur when the lower-priority task is restarted
— WCET with preemption delay is required
Static timing analysis becomes even more complicated!

4 Data Cache Reference Patterns (Prior Work)
Data cache analyzer added to the static timing analysis framework
Enhanced the Cache Miss Equations framework (Ghosh et al.)  D$ miss/hit patterns for memory references
Used for loop-nest-oriented code
— Scalar and array references analyzed
Considers only a single task with no preemptions
Patterns fed to the timing analyzer to tighten the WCET estimate
Necessary terminology (see the sketch below):
— Iteration point: represents one iteration of a loop nest
— Iteration space: the set of all iteration points
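To make the iteration-point terminology concrete, here is a minimal C sketch, not part of the authors' framework: it enumerates the iteration points of a two-level loop nest and records which data-cache set each access to a[i][j] would map to. The cache geometry, array layout, and base address are assumptions chosen purely for illustration.

#include <stdio.h>

#define LINE_SIZE 32   /* assumed cache line size in bytes     */
#define NUM_SETS  64   /* assumed number of direct-mapped sets */
#define N         8    /* assumed loop bounds                  */

int main(void)
{
    unsigned long base = 0x1000;          /* assumed base address of a[][] */
    for (int i = 0; i < N; i++)           /* outer loop                    */
        for (int j = 0; j < N; j++) {     /* inner loop                    */
            /* (i, j) is one iteration point; the set of all (i, j) is the iteration space */
            unsigned long addr = base + (unsigned long)(i * N + j) * sizeof(int);
            unsigned int set = (unsigned int)((addr / LINE_SIZE) % NUM_SETS);
            printf("iteration point (%d,%d): a[%d][%d] -> cache set %u\n", i, j, i, j, set);
        }
    return 0;
}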

5 Static Timing Analyzer Framework

6 Methodology
Task schedulability  Response Time Analysis is used (a standard response-time iteration is sketched below)
Steps involved in calculating WCET with preemption delay
— Calculate the maximum number of preemptions possible for a task
— Identify the placement of preemption points in the iteration space
— Calculate the preemption delay at a given point
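For reference, the schedulability step can be pictured with the standard response-time recurrence R = C_i + sum over higher-priority tasks j of ceil(R / T_j) * C_j. The C sketch below implements only that textbook iteration, without the preemption-delay term this work adds on top, and uses the T0..T2 parameters from the example slides.

#include <stdio.h>

typedef struct { long period; long wcet; } task_t;   /* sorted by priority, index 0 = highest */

/* Response time of task i, or -1 if it exceeds the deadline (= period). */
long response_time(const task_t *ts, int i)
{
    long r = ts[i].wcet, prev = -1;
    while (r != prev && r <= ts[i].period) {
        prev = r;
        r = ts[i].wcet;
        for (int j = 0; j < i; j++)                   /* interference from higher-priority tasks */
            r += ((prev + ts[j].period - 1) / ts[j].period) * ts[j].wcet;
    }
    return (r <= ts[i].period) ? r : -1;
}

int main(void)
{
    task_t ts[] = { {20, 7}, {50, 12}, {200, 30} };   /* T0..T2 from the example slides */
    for (int i = 0; i < 3; i++)
        printf("R(T%d) = %ld\n", i, response_time(ts, i));
    return 0;
}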

7 Methodology: Analysis Phases
Phase 1: Single-Task Analysis
— For every task
– Build D$ reference patterns assuming NO preemptions
– Calculate the stand-alone WCET and BCET
— Performed once per task using the D$ analyzer + static timing analyzer
Phase 2: Preemption Delay Calculation (in task-set context)
— Per-job analysis performed
— All jobs within the hyperperiod are considered (see the sketch below)
Proof of correctness is in the paper
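The per-job scope of phase 2 is bounded by the hyperperiod, i.e. the LCM of all task periods, and each task contributes hyperperiod/period jobs. The small C sketch below only illustrates that bookkeeping, using the example periods from the later slides; it is not part of the authors' tool.

#include <stdio.h>

static long gcd(long a, long b) { return b ? gcd(b, a % b) : a; }

int main(void)
{
    long periods[] = { 20, 50, 200 };        /* T0..T2 from the example slides */
    int n = 3;
    long hyper = periods[0];
    for (int i = 1; i < n; i++)              /* hyperperiod = LCM of all periods */
        hyper = hyper / gcd(hyper, periods[i]) * periods[i];
    printf("hyperperiod = %ld\n", hyper);
    for (int i = 0; i < n; i++)
        printf("T%d: %ld jobs per hyperperiod\n", i, hyper / periods[i]);
    return 0;
}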

8 Identification of Preemption Points
Identification of preemption points for a job
— All higher-priority (hp) jobs can potentially preempt
— Eliminate infeasible points
— In every interval between potential preemption points
– Check whether the job can be scheduled there  use the BCETs and WCETs of the hp jobs
– Check whether a portion of the job remains beyond the interval
– Count the preemption point only if both criteria are satisfied (the two checks are sketched below)
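The following is a rough C sketch of the two feasibility checks, not the authors' exact algorithm. It assumes that, for each interval between consecutive potential preemption points, min_exec[k] and max_exec[k] already give the minimum and maximum execution time the job of interest can receive in that interval, derived from the BCETs and WCETs of the higher-priority jobs; those arrays and the numbers in main() are hypothetical inputs.

#include <stdio.h>

/* Count how many potential preemption points remain feasible for one job.
 * min_exec[k] / max_exec[k]: assumed bounds on the execution time the job
 * can receive in interval k (between potential preemption points k and k+1). */
int count_feasible_points(const long *min_exec, const long *max_exec,
                          int nintervals, long job_wcet)
{
    long min_done = 0;   /* least cumulative execution guaranteed before the point */
    int feasible = 0;
    for (int k = 0; k < nintervals; k++) {
        int schedulable_here = max_exec[k] > 0;     /* job can be running in this interval    */
        min_done += min_exec[k];
        int remains_beyond = min_done < job_wcet;   /* some of the job may be left afterwards */
        if (schedulable_here && remains_beyond)
            feasible++;                             /* point closing interval k is counted    */
    }
    return feasible;
}

int main(void)
{
    /* Hypothetical numbers for a job with WCET 12 and three intervals. */
    long min_exec[] = { 5, 7, 0 };
    long max_exec[] = { 8, 10, 0 };
    printf("feasible preemption points: %d\n",
           count_feasible_points(min_exec, max_exec, 3, 12));
    return 0;
}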

9 Eliminating Infeasible Preemption Points

Task | Period (= deadline) | WCET | BCET
T0   | 20                  | 7    | 5
T1   | 50                  | 12   | 10
T2   | 200                 | 30   | 25

[Figure: best-case and worst-case partial timelines for task T1 over 0..50, with T0 released at 0, 20, 40 and T1 at 0, 50]
The candidate points at T0's later releases are infeasible, since T1 is already done before them (even in the worst case, T1 completes by time 7 + 12 = 19, before T0's release at 20).

10 Eliminating Infeasible Preemption Points
[Figure: best-case and worst-case partial timelines for task T2 over 0..60, with T0 released at 0, 20, 40, 60 and T1 at 0, 50, annotated with the minimum and maximum execution time T2 receives in each interval]
A point is infeasible if T2 is not scheduled in the interval  it cannot be preempted there.

11 Placement of Points within a Job
Identification of the worst-case scenario
— Preemption point placement
– Bounded by the range of execution time available for the task in the interval
– Interact with the timing analyzer  find the iteration point corresponding to a point in time
1. Min iteration point reached in the shortest possible time
2. Min iteration point reached in the longest possible time
3. Max iteration point reached in the shortest possible time
4. Max iteration point reached in the longest possible time
[Figure: the four cases mark the corners of the range for preemption within the access space of the task (see the sketch below)]
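A rough illustration of the four corner cases, under one possible reading: the execution time available before the point is bounded by [t_min, t_max], and the task's progress per iteration is bounded by its per-iteration BCET and WCET. iter_point_at() and all numbers below are hypothetical; the real framework obtains these points by querying the timing analyzer.

#include <stdio.h>

/* Hypothetical helper: iteration point reached after 'elapsed' cycles when
 * every iteration of the loop nest costs 'cycles_per_iter' cycles. */
static long iter_point_at(long elapsed, long cycles_per_iter)
{
    return elapsed / cycles_per_iter;
}

int main(void)
{
    long t_min = 1200, t_max = 1800;   /* assumed bounds on time available before the point */
    long wcet_per_iter = 60;           /* assumed worst-case cost of one iteration (slowest progress) */
    long bcet_per_iter = 40;           /* assumed best-case cost of one iteration (fastest progress)  */

    printf("1. min iter point, shortest time: %ld\n", iter_point_at(t_min, wcet_per_iter));
    printf("2. min iter point, longest time : %ld\n", iter_point_at(t_max, wcet_per_iter));
    printf("3. max iter point, shortest time: %ld\n", iter_point_at(t_min, bcet_per_iter));
    printf("4. max iter point, longest time : %ld\n", iter_point_at(t_max, bcet_per_iter));
    /* These four values bound the rectangle ("range for preemption") inside the
       task's access space shown on the slide. */
    return 0;
}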

12 Preemption Delay at a Point
— Access chain building
– Build a time-ordered list of all memory references in the task
– Connect all references accessing the same D$ set to form a chain
– Different cache sets shown with different colors
— Assign a weight to every access point
– Weight: # of distinctly colored chains that cross the point
– Indicates the # of misses if preemption occurs at that point
– Count only chains for D$ sets used by a higher-priority task
– Count a chain only if its next point after the preemption point is a HIT
(A sketch of the weight computation follows.)
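The sketch below is a simplified C rendering of the weight computation, not the authors' implementation. It assumes the cache analysis already provides the time-ordered reference list (cache set plus hit/miss prediction) and a mask of the sets used by higher-priority tasks; a chain contributes to a point's weight only if it crosses the point and its next access is a predicted hit.

#include <stdbool.h>
#include <stdio.h>

#define NUM_SETS 64                             /* assumed number of data-cache sets   */

typedef struct { int set; bool hit; } ref_t;    /* one memory reference, in time order */

/* Weight of the candidate preemption point just before refs[point]:
 * number of distinct cache sets whose access chain crosses the point,
 * whose next access after the point is a predicted hit, and that are
 * also used by some higher-priority task. */
int point_weight(const ref_t *refs, int nrefs, int point,
                 const bool hp_uses_set[NUM_SETS])
{
    int weight = 0;
    for (int s = 0; s < NUM_SETS; s++) {
        if (!hp_uses_set[s])                    /* only sets a hp task can evict     */
            continue;
        bool before = false;
        int next = -1;
        for (int k = 0; k < nrefs; k++) {
            if (refs[k].set != s)
                continue;
            if (k < point)
                before = true;                  /* chain has a link before the point */
            else if (next < 0)
                next = k;                       /* first link after the point        */
        }
        if (before && next >= 0 && refs[next].hit)
            weight++;                           /* that hit would turn into a miss   */
    }
    return weight;
}

int main(void)
{
    bool hp_uses_set[NUM_SETS] = { false };
    hp_uses_set[3] = true;                      /* assume the hp task touches set 3 only */
    ref_t refs[] = { {3, true}, {7, true}, {3, true}, {3, false} };
    printf("weight at point 1 = %d\n", point_weight(refs, 4, 1, hp_uses_set));
    return 0;
}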

13 Experimental Results
Task set with U = 0.8, hyperperiod = 5,000,000 cycles

Benchmark | Period (cycles) | WCET w/o delay (cycles) | BCET (cycles) | # jobs | New method: # preemptions (avg/min/max) | HJ bound | Old method | Staschulat method
n-real-updates | 100000 | 16738 | — | 50 | 0 / 0 / 0 | 0 | 0 | 0
900convolution | 625000 | 76391 | 61091 | 8 | 0.75 / 0 / 1 | 7 | 7 | 1
Matrix1 | 625000 | 59896 | 54015 | 8 | 1 / 1 / 1 | 8 | 8 | 3
1000convolution | 625000 | 87091 | 67791 | 8 | 1.25 / 1 / 2 | 9 | 9 | 5
600convolution | 1000000 | 45291 | 40991 | 5 | 0.4 / 0 / 1 | 16 | 15 | 7
300n-real-updates | 1000000 | 56538 | 47338 | 5 | 1.2 / 1 / 2 | 17 | 16 | 9
800fir | 1250000 | 77037 | 69737 | 4 | 1.5 / 1 / 2 | 23 | 21 | 18
900lms | 1250000 | 158636 | 118536 | 4 | 3 / 2 / 4 | 24 | 22 | —
1000fir | 2500000 | 99237 | 86937 | 2 | 4 / 4 / 4 | 47 | 41 | —
500fir | 5000000 | 43937 | — | 1 | 3 / 3 / 3 | 94 | 80 | —

Our new method gives the tightest bound on the # of preemptions in all cases

14 Maximum # of Preemptions (U = 0.8)
Our method gives the tightest bound on the # of preemptions

15 WCET w/ Delay (U = 0.8)
Our method gives the lowest preemption delay and hence the lowest WCET
Since the WCET is unique to each task, there is no consistent pattern of increase or decrease

16 Response Time (U = 0.8)
Response times increase monotonically as task priority decreases
Our method has the lowest rate of increase
All task sets are deemed schedulable by our method

17 Varying WCET/BCET Ratios

Task ID | Period (cycles) | WCET (cycles) | # feasible preemptions (min/max/avg), W/B = 1 … 3 | # preemptions: HJ bound / Old method / Staschulat
U = 0.5
1 | 80000 | 16000 | 1/1/1 | 8 / 8 / 2
2 | 100000 | 5000 | 0/1/0.25, 0/2/0.5 | 12 / 4
3 | 200000 | 30000 | 3/3/3, 3/4/3.5, 3/5/4 | 25 / 8
U = 0.8
1 | 80000 | 20000 | 2/2/2 | 8 / 8 / 3
2 | 100000 | 15000 | 1/2/1.5, 1/3/1.75, 1/4/2 | 12 / 6
3 | 200000 | 50000 | 6/7/6.5, 8/8/8, 8/9/8.5, 8/8/8 | 25 / 19

Our method produces a significantly lower # of preemptions
As WCET/BCET increases, the # of preemptions increases up to a point
Beyond that point, the # of preemptions decreases slightly
The # of preemptions is lowest when WCET/BCET = 1
The maximum increase beyond the lowest value is ~30%

18 Critical Instant
Does the critical instant (CI) occur on simultaneous task release?
— Not when preemption delay is considered!
When preemption delay is considered
— The CI occurs when tasks are released in reverse priority order
— Similar to the effect of critical sections / blocking!
Considering all jobs in the hyperperiod eliminates safety concerns

19 Critical Instant

Task | Φ (phase) | P (period) | C (WCET) | Δ (preemption delay)
T1   | 2         | 3          | 1        | 0
T2   | 1         | 15         | 5.125    | 0.125
T3   | 0         | 20         | 1.25     | 0.75
T4   | 0         | 25         | 1.25     | 0.25

[Figures: 1. Preemption with WCET, no phasing; 2. Preemption with WCET, with phasing]
In 1, the response time of T3 is larger
In 1, the response time of T4 is shorter (RT = 12 vs. RT = 12.25)

20 Conclusions
Contributions:
— Determination of the critical instant under cache-related preemption
— Calculation of a tighter bound on the max # of preemptions
— Construction of a realistic worst-case scenario for preemptions
Results show significant improvements in
— Max # of preemptions
— WCET with preemption delay
— Response time
Improvements
— An order of magnitude over simplistic methods
— Half an order of magnitude over the best prior method
Observations
— As WCET/BCET increases, the # of preemptions increases by ~30%
— Some tools do not provide a BCET
— Compromise  use BCET = 0 and get slightly pessimistic results

21 Future Work
Consider phased task sets in experiments
Extend the framework to deal with dynamic scheduling policies
— Recalculate priorities at every interval

22 Related Work
C.-G. Lee et al.
1. Analysis of cache-related preemption delay in fixed-priority preemptive scheduling.
2. Bounding cache-related preemption delay for real-time systems.
— Basic ideas involved in calculating cache-related preemption delay
— Works only with instruction caches
J. Staschulat et al.
1. Multiple process execution in cache related preemption delay analysis.
2. Scheduling analysis of real-time systems with precise modeling of cache related preemption delay.
— Complete framework to calculate cache-related preemption delay
— Works only with instruction caches
— Takes indirect preemption effects into consideration

23 Thank you! Questions?

