
1 Scheduling Periodic Real-Time Tasks with Heterogeneous Reward Requirements — I-Hong Hou and P.R. Kumar

2 Problem Overview
- Imprecise Computation Model:
  - Tasks generate jobs periodically, each job with a deadline
  - A job that misses its deadline degrades system performance rather than causing a timing fault
  - Partially completed jobs are still useful and generate some reward
- Previous work: maximize the total reward of all tasks
  - Assumes that the rewards of different tasks are interchangeable
  - May result in serious unfairness
  - Does not allow tradeoffs between tasks
- This work: provide a guarantee on the reward of each task

3 Example: Video Streaming
- A server serves several video streams
- Each stream generates a group of frames (GOF) periodically
- Frames need to be delivered on time, or they are not useful
  - Lost frames result in glitches in the video
- Frames of the same stream are not equally important
  - MPEG has three types of frames: I, P, and B
  - I-frames are more important than P-frames, which are more important than B-frames
- Goal: provide a guarantee on perceived video quality for each stream

4 System Model
- A system with several tasks (task = video stream)
- Each task X generates one job every τ_X time slots, with deadline τ_X (job = GOF)
- All tasks generate one job at the first time slot
[Timeline figure: job arrivals of tasks A, B, C with τ_A = 4, τ_B = 6, τ_C = 3]

5 System Model
- A frame = the interval between two consecutive time slots at which all tasks generate a job
- Length of a frame (T) = least common multiple of τ_A, τ_B, …
[Timeline figure: with τ_A = 4, τ_B = 6, τ_C = 3, the frame length is T = 12]
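
The frame length can be computed directly from the periods. A minimal Python sketch using the running example's periods (τ_A = 4, τ_B = 6, τ_C = 3, taken from the slides):

```python
from math import lcm

# Periods of the running example: tau_A = 4, tau_B = 6, tau_C = 3
periods = {"A": 4, "B": 6, "C": 3}

# Frame length T = least common multiple of all periods
T = lcm(*periods.values())
print(T)                                        # 12
print({x: T // p for x, p in periods.items()})  # jobs per frame: A: 3, B: 2, C: 4
```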

6 Model for Rewards
- A job can be executed several times before its deadline
- A job of task X obtains a reward of r_X^k when it is executed for the k-th time, where r_X^1 ≥ r_X^2 ≥ r_X^3 ≥ … (in the video analogy, the reward of the i-th frame in a GOF is r_X^i)
[Timeline figure: job arrivals of tasks A, B, C with τ_A = 4, τ_B = 6, τ_C = 3]

7 Scheduling Example
- Reward of A per frame = 3 r_A^1 + 2 r_A^2 + r_A^3
[Figure: a sample schedule over one frame of T = 12 slots; A's six executions earn r_A^1, r_A^2, r_A^1, r_A^1, r_A^2, r_A^3]

8 Scheduling Example
- Reward of A per frame = 3 r_A^1 + 2 r_A^2 + r_A^3
- Reward of B per frame = 2 r_B^1 + r_B^2
- Reward of C per frame = 3 r_C^1
[Figure: the same sample schedule over one frame of T = 12 slots]
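
To make the per-frame reward bookkeeping concrete, here is a minimal sketch (not from the paper) that replays a hypothetical schedule consistent with the totals above and counts, for each task, how many of its jobs reach the i-th execution; these counts are exactly the multipliers of r_X^1, r_X^2, … The original figure's exact schedule is not recoverable here, so the one below is an illustrative assumption.

```python
from collections import defaultdict
from math import lcm

periods = {"A": 4, "B": 6, "C": 3}
T = lcm(*periods.values())            # frame length = 12

# Hypothetical schedule over one frame (one task served per slot), chosen to be
# consistent with the reward totals stated on this slide.
schedule = ["A", "A", "C", "B", "C", "B", "A", "C", "B", "A", "A", "A"]

# Count how many times each job of each task is executed within the frame
executions = defaultdict(int)         # (task, job index) -> number of executions
for t, task in enumerate(schedule):
    executions[(task, t // periods[task])] += 1

# A job executed c times earns r_X^1 + ... + r_X^c, so the per-frame reward of X is
# sum_i (number of X's jobs executed at least i times) * r_X^i
for x in periods:
    at_least = defaultdict(int)
    for j in range(T // periods[x]):
        for i in range(1, executions[(x, j)] + 1):
            at_least[i] += 1
    print(x, dict(sorted(at_least.items())))
# A {1: 3, 2: 2, 3: 1}  ->  3 r_A^1 + 2 r_A^2 + r_A^3
# B {1: 2, 2: 1}        ->  2 r_B^1 + r_B^2
# C {1: 3}              ->  3 r_C^1
```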

9 Reward Requirements
- Task X requires an average reward per frame of at least q_X
- Q: How do we evaluate whether [q_A, q_B, …] is feasible? How do we meet the reward requirement of each task?
[Figure: the same sample schedule over one frame]

10 Extension for Imprecise Computation Models
- Imprecise Computation Model: each job may have a mandatory part and an optional part
  - The mandatory part must be completed, or a timing fault results
  - An incomplete optional part only reduces performance
- Our model: set the reward of a mandatory part to M, where M is taken to be larger than any finite reward
  - The reward requirement of a task = aM + b, where a is the number of mandatory parts and b is the requirement on optional parts
  - Fulfilling the reward requirement then ensures that all mandatory parts are completed on time

11 Feasibility Condition
- f_X^i := average number of jobs of X per frame that are executed at least i times
- Obviously, 0 ≤ f_X^i ≤ T/τ_X
- Average reward of X = ∑_i f_X^i r_X^i
- Average reward requirement: ∑_i f_X^i r_X^i ≥ q_X
- Average number of time slots per frame that the server spends on X = ∑_i f_X^i
- Hence, ∑_X ∑_i f_X^i ≤ T

12 Admission Control
- Theorem: a system is feasible if and only if there exists a vector [f_X^i] such that
  1. 0 ≤ f_X^i ≤ T/τ_X
  2. ∑_i f_X^i r_X^i ≥ q_X for every task X
  3. ∑_X ∑_i f_X^i ≤ T
- Feasibility can be checked by linear programming
- The complexity of admission control can be further reduced by exploiting r_X^1 ≥ r_X^2 ≥ r_X^3 ≥ …
- Theorem: feasibility can be checked in O(∑_X τ_X) time
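
The three conditions are linear in the variables f_X^i, so the admission test can be posed directly as a linear program. A minimal sketch assuming SciPy is available; the task set (periods, rewards, and requirements q_X) below is made up for illustration:

```python
import numpy as np
from math import lcm
from scipy.optimize import linprog

# Hypothetical task set: period tau_X, non-increasing rewards r_X^1 >= r_X^2 >= ..., requirement q_X
tasks = {
    "A": {"tau": 4, "r": [4.0, 2.0, 1.0, 0.5], "q": 10.0},
    "B": {"tau": 6, "r": [6.0, 3.0, 1.0, 0.0, 0.0, 0.0], "q": 9.0},
    "C": {"tau": 3, "r": [5.0, 1.0, 0.0], "q": 16.0},
}
T = lcm(*(spec["tau"] for spec in tasks.values()))   # frame length

# Flatten the decision variables f_X^i (one per task X and execution index i)
index = [(x, i) for x in tasks for i in range(len(tasks[x]["r"]))]
n = len(index)

# Condition 2, written in "<=" form: -sum_i f_X^i r_X^i <= -q_X for every task X
A_ub, b_ub = [], []
for x, spec in tasks.items():
    A_ub.append([-spec["r"][i] if y == x else 0.0 for (y, i) in index])
    b_ub.append(-spec["q"])

# Condition 3: total executions per frame cannot exceed the T available slots
A_ub.append([1.0] * n)
b_ub.append(T)

# Condition 1: 0 <= f_X^i <= T / tau_X (a task has T/tau_X jobs per frame)
bounds = [(0.0, T / tasks[x]["tau"]) for (x, _) in index]

# Any feasible point will do, so use a zero objective
res = linprog(c=np.zeros(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
print("feasible" if res.status == 0 else "infeasible")
```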

13 Scheduling Policy
- Q: Given a feasible system, how do we design a policy that fulfills all reward requirements?
- Propose a framework for designing policies
- Propose an on-line scheduling policy
- Analyze the performance of the on-line scheduling policy

14 A Condition Based on Debts
- Let s_X(k) be the reward obtained by X in the k-th frame
- Debt of task X in the k-th frame: d_X(k) = [d_X(k-1) + q_X - s_X(k)]^+, where x^+ := max{x, 0}
- The requirement of task X is met if d_X(k)/k → 0 as k → ∞
- Theorem: a policy that maximizes ∑_X d_X(k) s_X(k) in every frame fulfills every feasible system
  - Such a policy is called a feasibility optimal policy
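
A minimal sketch of the per-frame debt bookkeeping defined on this slide (the task names and numbers are made up for illustration):

```python
def update_debts(debts, demands, frame_rewards):
    """One debt update per frame: d_X(k) = max(d_X(k-1) + q_X - s_X(k), 0)."""
    return {
        x: max(debts[x] + demands[x] - frame_rewards[x], 0.0)
        for x in debts
    }

# Illustrative use: task A falls short of q_A this frame, task B does not
debts = {"A": 0.0, "B": 0.0}
demands = {"A": 10.0, "B": 6.0}          # q_X
frame_rewards = {"A": 7.0, "B": 9.0}     # s_X(k) obtained in frame k
debts = update_debts(debts, demands, frame_rewards)
print(debts)                             # {'A': 3.0, 'B': 0.0}
```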

15 Approximation Policy
- The computational overhead of a feasibility optimal policy may be high
- Study performance guarantees of suboptimal policies
- Theorem: if a policy always attains a ∑_X d_X(k) s_X(k) that is at least 1/p of the ∑_X d_X(k) s_X(k) attained by an optimal policy, then it achieves the reward requirements [q_X] whenever the requirements [p·q_X] are feasible
  - Such a policy is called a p-approximation policy

16 An On-Line Scheduling Policy
- At a given time slot, let (j_X - 1) be the number of times that the server has already worked on the current job of X
- If the server schedules X in this time slot, X obtains a reward of r_X^{j_X}
- Greedy Maximizer: in every time slot, schedule the task X that maximizes r_X^{j_X} d_X(k)
- The Greedy Maximizer can be implemented efficiently
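
A minimal sketch of one slot of the Greedy Maximizer; the dictionaries used for bookkeeping (per-execution rewards, debts, and how often each task's current job has been served) are assumptions of this sketch, not notation from the paper:

```python
def greedy_pick(tasks, debts, served):
    """Pick the task maximizing r_X^{j_X} * d_X(k) for the current time slot.

    tasks:  {name: [r_X^1, r_X^2, ...] for the task's current job}
    debts:  {name: current debt d_X(k)}
    served: {name: times the current job of X has already been served, i.e. j_X - 1}
    """
    best, best_value = None, 0.0
    for x, rewards in tasks.items():
        j = served[x]                 # the next execution would be the (j+1)-th
        if j >= len(rewards):
            continue                  # the current job cannot earn any more reward
        value = rewards[j] * debts[x]
        if value > best_value:
            best, best_value = x, value
    return best                       # None: no task increases the weighted reward
```

Because r_X^1 ≥ r_X^2 ≥ …, only the next reward of each task has to be inspected, so one slot costs time linear in the number of tasks, which is what makes the policy cheap to run on-line.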

17 Performance of Greedy Maximizer
- The Greedy Maximizer is feasibility optimal when all tasks have the same period length (τ_A = τ_B = …)
- However, when tasks have different period lengths, the Greedy Maximizer is not feasibility optimal

18 Example of Suboptimality
- A system with two tasks
  - Task A: τ_A = 6, r_A^1 = r_A^2 = r_A^3 = r_A^4 = 100, r_A^5 = r_A^6 = 1
  - Task B: τ_B = 3, r_B^1 = 10, r_B^2 = r_B^3 = 0
- Suppose d_A(k) = d_B(k) = 1
- d_A(k)s_A(k) + d_B(k)s_B(k) under the Greedy Maximizer = 411
[Figure: the Greedy Maximizer serves A four times (reward 100 each), then B once (reward 10), then A once more (reward 1)]

19 Example of Suboptimality
- A system with two tasks
  - Task A: τ_A = 6, r_A^1 = r_A^2 = r_A^3 = r_A^4 = 100, r_A^5 = r_A^6 = 1
  - Task B: τ_B = 3, r_B^1 = 10, r_B^2 = r_B^3 = 0
- Suppose d_A(k) = d_B(k) = 1
- d_A(k)s_A(k) + d_B(k)s_B(k) under the Greedy Maximizer = 411
- d_A(k)s_A(k) + d_B(k)s_B(k) under an optimal policy = 420
[Figure: an optimal policy serves each of B's two jobs once (reward 10 each) and A four times (reward 100 each)]
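
To check where 411 and 420 come from, a small self-contained sketch replays this two-task example over one frame of T = 6 slots; the "alternative" schedule below is one hand-picked policy that attains 420, assumed here purely for illustration:

```python
# Two-task example: tau_A = 6, tau_B = 3, both debts fixed at 1 for this frame
tasks = {"A": [100, 100, 100, 100, 1, 1], "B": [10, 0, 0]}
periods = {"A": 6, "B": 3}
debts = {"A": 1.0, "B": 1.0}

def run(rule):
    served, total = {"A": 0, "B": 0}, 0.0     # executions of each task's current job
    for t in range(6):                        # one frame of T = lcm(6, 3) = 6 slots
        for x in periods:
            if t % periods[x] == 0:
                served[x] = 0                 # a new job of x arrives, old one expires
        x = rule(t, served)
        total += debts[x] * tasks[x][served[x]]
        served[x] += 1
    return total

def greedy(t, served):
    # one slot of the Greedy Maximizer: serve the task maximizing r_X^{j_X} * d_X
    return max(tasks, key=lambda x: debts[x] * tasks[x][served[x]])

alternative = lambda t, served: ["B", "A", "A", "A", "A", "B"][t]
print(run(greedy), run(alternative))          # 411.0 420.0
```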

20 Approximation Bound
- Analyze the worst-case performance of the Greedy Maximizer
- Show that its resulting ∑_X d_X(k)s_X(k) is at least 1/2 of the ∑_X d_X(k)s_X(k) achieved by any other policy
- Theorem: the Greedy Maximizer is a 2-approximation policy
  - The Greedy Maximizer achieves the reward requirements [q_X] as long as the requirements [2q_X] are feasible

21 Simulation Setup: MPEG Streaming
- MPEG: one GOF consists of 1 I-frame, 3 P-frames, and 8 B-frames
- Two groups of tasks, A and B, with 3 tasks in each group
- Tasks in group A treat both I-frames and P-frames as mandatory parts, while tasks in group B only require I-frames to be mandatory
- B-frames are optional for tasks in group A; both P-frames and B-frames are optional for tasks in group B

22 Reward Function for Optional Parts
- Each task gains some reward when its optional parts are executed
- Consider three types of optional-part reward functions: exponential, logarithmic, and linear
  - Exponential: task X obtains a total reward of (5+k)(1 - e^{-i/5}) if its job is executed i times, where k is the index of task X
  - Logarithmic: task X obtains a total reward of (5+k) log(10i + 1) if its job is executed i times
  - Linear: task X obtains a total reward of (5+k)·i if its job is executed i times
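
A minimal sketch of the three reward shapes listed above (the natural logarithm is assumed, since the slide does not specify the base). The per-execution rewards r_X^i that the scheduler actually uses are the successive differences of these totals, which are non-increasing for the exponential and logarithmic shapes and constant for the linear one:

```python
import math

def total_reward(kind, k, i):
    """Total reward of the task with index k if its job's optional part runs i times."""
    scale = 5 + k
    if kind == "exponential":
        return scale * (1 - math.exp(-i / 5))
    if kind == "logarithmic":
        return scale * math.log(10 * i + 1)    # natural log assumed
    if kind == "linear":
        return scale * i
    raise ValueError(f"unknown reward shape: {kind}")

def per_execution_rewards(kind, k, max_runs):
    """Marginal rewards r^1, r^2, ...: the extra reward earned by the i-th execution."""
    totals = [total_reward(kind, k, i) for i in range(max_runs + 1)]
    return [totals[i] - totals[i - 1] for i in range(1, max_runs + 1)]

print([round(r, 3) for r in per_execution_rewards("exponential", 1, 4)])
```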

23 Performance Comparison
- Assume all tasks in group A require an average reward of α, and all tasks in group B require an average reward of β
- Plot all pairs (α, β) that are achieved by each policy
- Consider three policies:
  - Feasible: the feasible region characterized by the feasibility conditions
  - Greedy Maximizer
  - MAX: a policy that aims to maximize the total reward of the system

24 Simulation Results: Same Frame Rate
- All streams generate one GOF every 30 time slots
- Exponential reward functions: Greedy = Feasible, so the Greedy Maximizer is indeed feasibility optimal in this setting
- Greedy is much better than MAX
[Plot: achievable (α, β) pairs under exponential reward functions]

25 Simulation Results: Same Frame Rate
- All streams generate one GOF every 30 time slots
- Greedy = Feasible, and is always better than MAX
[Plots: achievable (α, β) pairs under logarithmic and linear reward functions]

26 Simulation Results: Heterogeneous Frame Rate
- Different tasks generate GOFs at different rates; period lengths may be 20, 30, or 40 time slots
- The performance of Greedy is close to optimal
- Greedy is much better than MAX
[Plot: achievable (α, β) pairs under exponential reward functions]

27 Simulation Results: Heterogeneous Frame Rate
- Different tasks generate GOFs at different rates; period lengths may be 20, 30, or 40 time slots
[Plots: achievable (α, β) pairs under logarithmic and linear reward functions]

28 Conclusions
- We propose a model, based on the imprecise computation model, that supports per-task reward guarantees
  - This model achieves better fairness and allows fine-grained tradeoffs between tasks
- We derive a sharp condition for feasibility
- We propose an on-line scheduling policy, the Greedy Maximizer
  - The Greedy Maximizer is feasibility optimal when all tasks have the same period length
  - Otherwise, it is a 2-approximation policy

