EECS 262a Advanced Topics in Computer Systems Lecture 12 Realtime Scheduling CBS and M-CBS Resource-Centric Computing October 13th, 2014 John Kubiatowicz.


Electrical Engineering and Computer Sciences, University of California, Berkeley
http://www.eecs.berkeley.edu/~kubitron/cs262

Today's Papers
- Integrating Multimedia Applications in Hard Real-Time Systems, Luca Abeni and Giorgio Buttazzo. Appears in Proceedings of the Real-Time Systems Symposium (RTSS), 1998.
- Implementing Constant-Bandwidth Servers upon Multiprocessor Platforms, Sanjoy Baruah, Joel Goossens, and Giuseppe Lipari. Appears in Proceedings of the Real-Time and Embedded Technology and Applications Symposium (RTAS), 2002. (From last time!)
- Thoughts?

Motivation: Consolidation/Energy Efficiency
- Consider the current approach with ECUs in cars: today, 50-100 individual "Engine Control Units"; the trend is to consolidate them onto a smaller number of processors
- How to provide guarantees?
- Better coordination for hard realtime/streaming media
- Save energy rather than "throwing hardware at it"

Recall: Non-Real-Time Scheduling
- Primary goal: maximize performance
- Secondary goal: ensure fairness
- Typical metrics: minimize response time, maximize throughput
- E.g., FCFS (First-Come-First-Served), RR (Round-Robin)

Characteristics of a RTS (slides adapted from Frank Drews)
- Extreme reliability and safety
  - Embedded systems typically control the environment in which they operate
  - Failure to control can result in loss of life, damage to the environment, or economic loss
- Guaranteed response times
  - We need to be able to predict with confidence the worst-case response times for systems
  - Efficiency is important, but predictability is essential
- In RTS, performance guarantees are task- and/or class-centric and often ensured a priori
- In conventional systems, performance is system-oriented, often throughput-oriented, and assessed by post-processing (... wait and see ...)
- Soft real-time: attempt to meet deadlines with high probability; important for multimedia applications

Terminology
- Scheduling: define a policy for how to order tasks such that a metric is maximized/minimized
  - Real-time: guarantee hard deadlines, minimize the number of missed deadlines, minimize lateness
- Dispatching: carry out the execution according to the schedule
  - Preemption, context switching, monitoring, etc.
- Admission Control: filter tasks coming into the system, thereby making sure the admitted workload is manageable
- Allocation: designate tasks to CPUs and (possibly) nodes; precedes scheduling

Typical Realtime Workload Characteristics
- Tasks are preemptable and independent, with arbitrary arrival (= release) times
- Tasks have deadlines (D) and known computation times (C)
- Tasks execute on a uniprocessor system
- Example setup:

Example: Non-preemptive FCFS Scheduling

Example: Round-Robin Scheduling
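The schedule figures for these two examples did not survive transcription. A small discrete-time simulation sketches what they illustrate; the task parameters below (name, arrival, compute time, absolute deadline) are made up for illustration and are not taken from the original figures:

```python
# Hypothetical task set: (name, arrival, compute time C, absolute deadline D).
tasks = [("T1", 0, 4, 5), ("T2", 1, 2, 4), ("T3", 2, 1, 6)]

def fcfs(tasks):
    """Non-preemptive FCFS: run each job to completion in arrival order."""
    t, finish = 0, {}
    for name, arrival, c, _ in sorted(tasks, key=lambda x: x[1]):
        t = max(t, arrival) + c
        finish[name] = t
    return finish

def round_robin(tasks, quantum=1):
    """Preemptive round-robin with a fixed quantum (discrete time)."""
    remaining = {name: c for name, _, c, _ in tasks}
    pending = sorted(tasks, key=lambda x: x[1])  # not yet arrived
    t, queue, finish = 0, [], {}
    while remaining:
        while pending and pending[0][1] <= t:     # admit arrivals
            queue.append(pending.pop(0)[0])
        if not queue:                             # idle until next arrival
            t = pending[0][1]
            continue
        job = queue.pop(0)
        run = min(quantum, remaining[job])
        t += run
        remaining[job] -= run
        while pending and pending[0][1] <= t:     # arrivals during the slice
            queue.append(pending.pop(0)[0])
        if remaining[job] == 0:
            del remaining[job]
            finish[job] = t
        else:
            queue.append(job)                     # preempted, back of queue
    return finish

deadlines = {name: d for name, _, _, d in tasks}
for policy in (fcfs, round_robin):
    finish = policy(tasks)
    missed = [n for n in finish if finish[n] > deadlines[n]]
    print(policy.__name__, finish, "missed:", missed)
```

Neither policy looks at deadlines at all, so which tasks miss is essentially accidental: FCFS misses T2 and T3 here, while round-robin trades those misses for different ones.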

Real-Time Scheduling
- Primary goal: ensure predictability
- Secondary goal: ensure predictability
- Typical metrics:
  - Guarantee miss ratio = 0 (hard real-time)
  - Guarantee Probability(missed deadline) < X% (firm real-time)
  - Minimize miss ratio / maximize completion ratio (firm real-time)
  - Minimize overall tardiness; maximize overall usefulness (soft real-time)
- E.g., EDF (Earliest Deadline First), LLF (Least Laxity First), RMS (Rate-Monotonic Scheduling), DM (Deadline Monotonic Scheduling)
- Real-time is about enforcing predictability; it does not mean fast computing!

Scheduling: Problem Space
- Uni-processor / multiprocessor / distributed system
- Periodic / sporadic / aperiodic tasks
- Independent / interdependent tasks
- Preemptive / non-preemptive
- Tick scheduling / event-driven scheduling
- Static (at design time) / dynamic (at run-time)
- Off-line (pre-computed schedule) / on-line (scheduling decision at runtime)
- Handling of transient overloads
- Support for fault tolerance

Task Assignment and Scheduling
- Cyclic executive scheduling (see later)
- Cooperative scheduling: the scheduler relies on the current process to give up the CPU before it can start the execution of another process
- A static priority-driven scheduler can preempt the current process to start a new process; priorities are set pre-execution
  - E.g., Rate-Monotonic Scheduling (RMS), Deadline Monotonic scheduling (DM)
- A dynamic priority-driven scheduler can assign, and possibly also redefine, process priorities at run-time
  - E.g., Earliest Deadline First (EDF), Least Laxity First (LLF)

Simple Process Model
- Fixed set of processes (tasks)
- Processes are periodic, with known periods
- Processes are independent of each other
- System overheads (context switches, etc.) are ignored (zero cost)
- Processes have a deadline equal to their period, i.e., each process must complete before its next release
- Processes have fixed worst-case execution time (WCET)

Performance Metrics
- Completion ratio / miss ratio
- Maximize total usefulness value (weighted sum)
- Maximize value of a task
- Minimize lateness
- Minimize error (imprecise tasks)
- Feasibility (all tasks meet their deadlines)

Scheduling Approaches (Hard RTS)
- Off-line scheduling / analysis (static analysis + static scheduling)
  - All tasks, times, and priorities given a priori (before system startup)
  - Time-driven; schedule computed and hardcoded (before system startup)
  - E.g., cyclic executives
  - Inflexible; may be combined with static or dynamic scheduling approaches
- Fixed-priority scheduling (static analysis + dynamic scheduling)
  - Priority-driven, dynamic(!) scheduling: the schedule is constructed by the OS scheduler at run time
  - For hard / safety-critical systems
  - E.g., RMA/RMS (Rate Monotonic Analysis / Rate Monotonic Scheduling)
- Dynamic priority scheduling
  - Task times may or may not be known
  - Assigns priorities based on the current state of the system
  - For hard / best-effort systems
  - E.g., Least Completion Time (LCT), Earliest Deadline First (EDF), Least Slack Time (LST)

Cyclic Executive Approach
- Clock-driven (time-driven) scheduling algorithm
- Off-line algorithm
- Minor cycle (e.g., 25 ms): gcd of all periods
- Major cycle (e.g., 100 ms): lcm of all periods
- Construction of a cyclic executive is equivalent to bin packing

Process  Period  Comp. Time
A        25      10
B        25      8
C        50      5
D        50      4
E        100     2
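The minor and major cycles follow directly from the task periods; a quick sketch using the periods from the table above:

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    """Least common multiple via gcd (avoids requiring Python 3.9's math.lcm)."""
    return a * b // gcd(a, b)

periods = [25, 25, 50, 50, 100]        # periods from the task table above (ms)
minor_cycle = reduce(gcd, periods)     # gcd of all periods -> 25 ms
major_cycle = reduce(lcm, periods)     # lcm of all periods -> 100 ms
print(minor_cycle, major_cycle)
```

The remaining work, packing each task's computation time into the four 25 ms minor-cycle slots of the 100 ms major cycle, is the bin-packing step the slide mentions.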

Cyclic Executive (cont.)
(example schedule figure omitted)

Cyclic Executive: Observations
- No actual processes exist at run-time; each minor cycle is just a sequence of procedure calls
- The procedures share a common address space and can thus pass data between themselves; this data does not need to be protected (via semaphores or mutexes, for example) because concurrent access is not possible
- All 'task' periods must be a multiple of the minor cycle time

Cyclic Executive: Disadvantages
- With this approach it is difficult to:
  - incorporate sporadic processes;
  - incorporate processes with long periods: the major cycle time is the maximum period that can be accommodated without secondary schedules (= a procedure in the major cycle that calls a secondary procedure every N major cycles);
  - construct the cyclic executive; and
  - handle processes with sizeable computation times: any 'task' with a sizeable computation time will need to be split into a fixed number of fixed-sized procedures

Online Scheduling for Realtime

Schedulability Test
- Test to determine whether a feasible schedule exists
- Sufficient test: if the test is passed, the tasks are definitely schedulable; if not, they may be schedulable, but not necessarily
- Necessary test: if the test is passed, the tasks may be schedulable, but not necessarily; if not, they are definitely not schedulable
- Exact test (= necessary + sufficient): the task set is schedulable if and only if it passes the test

Rate Monotonic Analysis: Assumptions
- A1: Tasks are periodic (activated at a constant rate); period = interval between two consecutive activations of a task
- A2: All instances of a periodic task have the same computation time
- A3: All instances of a periodic task have the same relative deadline, which is equal to the period
- A4: All tasks are independent (i.e., no precedence constraints and no resource constraints)
- Implicit assumptions:
  - A5: Tasks are preemptable
  - A6: No task can suspend itself
  - A7: All tasks are released as soon as they arrive
  - A8: All overhead in the kernel is assumed to be zero (or part of the computation times)

Rate Monotonic Scheduling: Principle
- Each process is assigned a (unique) priority based on its period (rate); always execute the active job with the highest priority
- The shorter the period, the higher the priority (priority 1 = low)
- W.l.o.g. number the tasks in reverse order of priority

Process  Period  Priority  Name
A        25      5         T1
B        60      3         T3
C        42      4         T2
D        105     1         T5
E        75      2         T4
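The priority assignment above can be sketched in a few lines; the numeric convention matches the table (1 = lowest priority, n = highest):

```python
def rms_priorities(tasks):
    """Assign RMS priorities: the shorter the period, the higher the priority.
    Returns {name: priority} with 1 = lowest, n = highest, as on the slide."""
    ordered = sorted(tasks, key=lambda nt: nt[1], reverse=True)  # longest period first
    return {name: i + 1 for i, (name, period) in enumerate(ordered)}

# The task set from the table above: (process name, period)
tasks = [("A", 25), ("B", 60), ("C", 42), ("D", 105), ("E", 75)]
print(rms_priorities(tasks))
# A (shortest period, 25) gets priority 5; D (longest period, 105) gets priority 1
```

Sorting once at design time is all RMS needs: priorities are fixed before execution, which is what makes it a static-priority scheme.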

Example: Rate Monotonic Scheduling
(example instance and RMS Gantt chart; figure omitted)

Example: Rate Monotonic Scheduling
(timeline figure omitted: shows a deadline miss and the response time of a job)

Utilization
(timeline figure omitted)

RMS: Schedulability Test
- Theorem (utilization-based schedulability test): a periodic task set with computation times C_i and periods T_i is schedulable by the rate-monotonic scheduling algorithm if

    U = sum_{i=1..n} C_i / T_i <= n (2^{1/n} - 1)

- This schedulability test is "sufficient"!
- For harmonic periods (each period evenly divides every longer period), the utilization bound is 100%
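A minimal sketch of the utilization-based test; the example task set of (C, T) pairs is made up for illustration, not taken from the slides:

```python
def rms_utilization_test(tasks):
    """Sufficient RMS test: a set of (C_i, T_i) tasks is schedulable if
    U = sum(C_i / T_i) <= n * (2**(1/n) - 1)  (Liu & Layland bound)."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return u, bound, u <= bound

# Illustrative (C, T) pairs; U ~ 0.683, bound for n=3 is ~0.780
u, bound, ok = rms_utilization_test([(1, 4), (2, 6), (1, 10)])
print(f"U={u:.3f}, bound={bound:.3f}, schedulable={ok}")
```

Because the test is only sufficient, a task set that fails it may still be schedulable; an exact answer requires response-time analysis instead.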

RMS Example
- The schedulability test requires U = sum_i C_i / T_i <= n (2^{1/n} - 1)
- Hence, for the example task set, the computed utilization does not satisfy the schedulability condition

EDF: Assumptions
- A1: Tasks are periodic or aperiodic; period = interval between two consecutive activations of a task
- A2: All instances of a periodic task have the same computation time
- A3: All instances of a periodic task have the same relative deadline, which is equal to the period
- A4: All tasks are independent (i.e., no precedence constraints and no resource constraints)
- Implicit assumptions:
  - A5: Tasks are preemptable
  - A6: No task can suspend itself
  - A7: All tasks are released as soon as they arrive
  - A8: All overhead in the kernel is assumed to be zero (or part of the computation times)

EDF Scheduling: Principle
- Preemptive, priority-based dynamic scheduling
- Each task is assigned a (current) priority based on how close its absolute deadline is
- The scheduler always schedules the active task with the closest absolute deadline
(timeline figure omitted)
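A small discrete-time EDF simulator sketches this principle. The task sets below are illustrative (C, T) pairs with deadline = period; they are not from the lecture's figure:

```python
def edf_schedulable(tasks, horizon):
    """Discrete-time EDF simulation of periodic tasks given as (C, T) pairs,
    with relative deadline = period. Returns True if no job misses its
    deadline up to `horizon` ticks (choose horizon >= the hyperperiod)."""
    jobs = []  # each job: [absolute_deadline, remaining_work, task_id]
    for t in range(horizon):
        for i, (c, period) in enumerate(tasks):
            if t % period == 0:                      # new job released at t
                jobs.append([t + period, c, i])
        if any(t >= d and rem > 0 for d, rem, _ in jobs):
            return False                             # some job missed its deadline
        if jobs:
            jobs.sort()                              # earliest absolute deadline first
            jobs[0][1] -= 1                          # run that job for one tick
            jobs = [j for j in jobs if j[1] > 0]     # drop completed jobs
    return True

print(edf_schedulable([(1, 4), (2, 6), (1, 10)], horizon=60))  # U < 1
print(edf_schedulable([(3, 4), (2, 6)], horizon=24))           # U > 1
```

Since EDF is optimal on a uniprocessor, the first set (U about 0.68) runs with no misses over its hyperperiod, while the second (U above 1) cannot be feasible and the simulation catches a miss.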

EDF: Schedulability Test
- Theorem (utilization-based schedulability test): a task set with computation times C_i and periods T_i is schedulable by the earliest deadline first (EDF) scheduling algorithm if

    U = sum_{i=1..n} C_i / T_i <= 1

- Exact schedulability test (necessary + sufficient)
- Proof: [Liu and Layland, 1973]

EDF Properties
- EDF is optimal with respect to feasibility (i.e., schedulability)
- EDF is optimal with respect to minimizing the maximum lateness

EDF Example: Domino Effect
- EDF minimizes lateness of the "most tardy task" [Dertouzos, 1974]
(timeline figure omitted)

Constant Bandwidth Server
- Intuition: give a fixed share of the CPU to certain jobs
  - Good for tasks with probabilistic resource requirements
- Basic approach: slots (called "servers") are scheduled with EDF, rather than jobs
  - A CBS server is defined by two parameters: Qs and Ts
  - Mechanism for tracking processor usage so that no more than Qs CPU seconds are used every Ts seconds (or whatever measurement you like) when there is demand; otherwise, the processor can be used freely
- Since we are using EDF, we can mix hard-realtime and soft-realtime tasks:
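A simplified sketch of the CBS budget mechanism. This illustrates only the recharge-and-postpone rule (when the budget Qs runs out, recharge it and push the server deadline out by Ts); the full CBS from the Abeni/Buttazzo paper also has an arrival-time rule that this sketch omits:

```python
class CBServer:
    """Simplified Constant Bandwidth Server sketch (single server).
    Q: budget (CPU time per period), T: server period. The server's jobs are
    scheduled under EDF against the server's current deadline; when the
    budget is exhausted, it is recharged to Q and the deadline is postponed
    by T, so served tasks can never consume more than Q/T of the CPU."""

    def __init__(self, Q, T, now=0):
        self.Q, self.T = Q, T
        self.budget = Q
        self.deadline = now + T

    def run(self, amount):
        """Account `amount` units of execution; returns the sequence of
        server deadlines used (EDF would schedule against each in turn)."""
        used = []
        while amount > 0:
            slice_ = min(amount, self.budget)
            self.budget -= slice_
            amount -= slice_
            used.append(self.deadline)
            if self.budget == 0:          # CBS rule: recharge and postpone
                self.budget = self.Q
                self.deadline += self.T
        return used

server = CBServer(Q=2, T=10)              # bandwidth Q/T = 0.2
print(server.run(5))                      # a 5-unit burst spans three deadlines
```

The key isolation property is visible here: a greedy 5-unit burst does not steal extra CPU, it just pushes the server's own deadline further out, so under EDF the server yields to everyone with earlier deadlines.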

Comparison of CBS with EDF in Overload
- If scheduled items do not meet EDF schedulability:
  - EDF yields unpredictable results when overload starts and stops
  - CBS servers provide better isolation
- Isolation is particularly important when deadlines are crucial, i.e., for hard realtime tasks

CBS on Multiprocessors
- Basic problem: EDF is not all that efficient on multiprocessors; the schedulability constraint is considerably less good than for uniprocessors
- Key idea of the paper: send the highest-utilization jobs to specific processors, and use EDF for the rest
  - Minimizes the number of processors required
- New acceptance test:
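A hedged sketch of the partitioning idea, not the paper's exact acceptance test: dedicate processors to the heaviest servers until the remainder passes a global-EDF utilization bound (here the Goossens/Funk/Baruah-style bound U <= m' - (m' - 1) * u_max for the m' processors left over):

```python
def partition_servers(utils, m):
    """Heuristic sketch: given server utilizations and m processors, give each
    of the k heaviest servers its own processor, for the smallest k such that
    the remaining servers pass a global-EDF bound on the m - k processors left.
    Assumes each utilization <= 1. Returns (dedicated, shared) or None."""
    if any(u > 1 for u in utils):
        return None
    utils = sorted(utils, reverse=True)
    for k in range(m):
        dedicated, shared = utils[:k], utils[k:]
        procs_left = m - k
        u_max = max(shared, default=0.0)
        # GFB-style sufficient bound for global EDF on procs_left processors
        if sum(shared) <= procs_left - (procs_left - 1) * u_max:
            return dedicated, shared
    return None

print(partition_servers([0.9, 0.6, 0.3, 0.2], m=3))
# the 0.9 server gets its own processor; the rest share two processors under EDF
```

Note how the heavy 0.9 server is exactly what hurts the global-EDF bound: removing it raises the bound for the remaining servers, which is the intuition behind dedicating processors to the highest-utilization work.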

Are these good papers?
- What were the authors' goals?
- What about the evaluation/metrics?
- Did they convince you that this was a good system/approach?
- Were there any red flags? What mistakes did they make?
- Does the system/approach meet the "Test of Time" challenge?
- How would you review this paper today?

Going Further: Guaranteeing Resources
- What might we want to guarantee? Examples:
  - Guarantees of BW (say, data committed to cloud storage)
  - Guarantees of requests/unit time (DB service)
  - Guarantees of latency to response (deadline scheduling)
  - Guarantees of maximum time to durability in the cloud
  - Guarantees of total energy/battery power available to a cell
- What level of guarantee? With high confidence (specified), minimum deviation, etc.
- What does it mean to have guaranteed resources? A Service Level Agreement (SLA)? Something else?
- "Impedance-mismatch" problem:
  - The SLA guarantees properties that the programmer/user wants
  - The resources required to satisfy the SLA are not things that the programmer/user really understands

Resource Allocation must be Adaptive
(plots omitted: fraction of Swaptions throughput and fraction of RayTrace throughput over time)
- Three applications: two guaranteed-performance apps (RayTrace, Swaptions) and one best-effort app (Fluidanimate)

Resource-Level Abstraction: the Cell
- Properties of a cell:
  - A user-level software component with guaranteed resources
  - Has full control over the resources it owns ("bare metal")
  - Contains at least one memory protection domain (possibly more)
  - Contains a set of secured channel endpoints to other cells
  - Contains a security context which may protect and decrypt information
- When mapped to the hardware, a cell gets:
  - Gang-scheduled hardware thread resources ("harts")
  - Guaranteed fractions of other physical resources: physical pages (DRAM), cache partitions, memory bandwidth, power
  - Guaranteed fractions of system services
- Predictability of performance → the ability to model performance vs. resources, and the ability for user-level schedulers to better provide QoS

Implementing Cells: Space-Time Partitioning
- Spatial partition: performance isolation
  - Each partition receives a vector of basic resources: a number of HW threads, a chunk of physical memory, a portion of shared cache, a fraction of memory BW, shared fractions of services
  - Use of hardware partitioning mechanisms
- Partitioning varies over time
  - Fine-grained multiplexing and guarantee of resources; resources are gang-scheduled
  - Controlled multiplexing, not uncontrolled virtualization
  - Partitioning adapted to the system's needs
- Modified CBS: use of timing to provide gang scheduling

Two-Level Scheduling: Control vs. Data Plane
- Split monolithic scheduling into two pieces:
  - Coarse-grained resource allocation and distribution to cells: chunks of resources (CPUs, memory bandwidth, QoS to services); ultimately a hierarchical process negotiated with service providers
  - Fine-grained (user-level) application-specific scheduling: applications are allowed to utilize their resources in any way they see fit
- Performance isolation: other components of the system cannot interfere with a cell's use of resources

Example: Convex Allocation (PACORA)
- Convex optimization with online application performance models
- Each running application i has a response-time model Response_Time_i(a_(0,i), ..., a_(n-1,i)) as a function of its resource allocations, plus a penalty function; the per-application penalties combine into a system penalty (diagram omitted; example applications: speech recognition, stencil, graph traversal)
- Continuously minimize the penalty of the system (subject to restrictions on the total amount of resources)
Speaker notes: Once discovered, resources can be handed out in a number of ways; here is an example of a resource distribution mechanism utilized in the Tessellation OS. PACORA (Performance-Aware Convex Optimization for Resource Allocation) utilizes models of performance as a function of resources, combined with utility functions detailing performance needs. These models are combined into a convex optimization process whose solution is a proposed resource allocation. Note: of course, performance as a function of resources is not strictly convex; PACORA convexifies the performance models. Note 2: the models can be derived either on- or offline.

Example: Feedback Allocation
- Utilize dynamic control loops to fine-tune resources
- Example: video player interaction with the network; the server or GUI changes between high and low bit rate; goal: set the guaranteed network rate
- Alternatives: application-driven policy, static models, letting the network choose when to decrease the allocation, application-informed metrics such as needed BW
Speaker notes: Another allocation technique is to utilize a feedback allocation loop. Although this can be a good way to fine-tune resource usage, it can lead to oscillations in behavior as the loop "tests the waters" to find minimal allocations. It is better to include good models of performance, damping, or application hints to help avoid this behavior.
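The damping the notes recommend can be illustrated with a tiny proportional controller for the guaranteed network rate; all numbers here (gain, headroom, rates) are hypothetical:

```python
def adapt_rate(current_rate, measured_use, headroom=1.1, gain=0.5):
    """One step of a (hypothetical) feedback loop for a guaranteed network
    rate: move the reservation toward measured demand plus some headroom.
    A gain below 1 damps the loop, reducing the oscillation that a
    'test the waters' probe-and-react loop can exhibit."""
    target = measured_use * headroom
    return current_rate + gain * (target - current_rate)

rate = 10.0  # Mbps reservation, illustrative starting point
for use in [4.0, 4.0, 4.0, 9.0, 9.0]:    # demand drops, then jumps back up
    rate = adapt_rate(rate, use)
    print(round(rate, 2))
```

With gain 0.5 the reservation glides toward demand over a few steps instead of slamming between extremes; application hints (e.g., the player announcing a bitrate switch) could replace several iterations of this loop outright.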

Architecture of Tessellation OS
(architecture diagram omitted; main components:)
- Cell creation and resizing requests from users pass through admission control to the Resource Allocation and Adaptation mechanism (RAB service), guided by global policies and user policies/preferences; major change requests are ACKed/NACKed, minor changes are handled directly
- The Space-Time Resource Graph (STRG) records current resources: cells and cell groups, each with a fraction of all system resources
- Online performance monitoring, model building, and prediction generate performance reports; offline models and behavioral parameters feed in as well
- The trusted Tessellation Kernel contains the STRG validator / resource planner and the partition mapping and multiplexing layer (partition multiplexing, partition mechanism layer, QoS enforcement, channel authenticator)
- Partitionable (and trusted) hardware: cores, physical memory, cache/local store, network bandwidth, NICs, performance counters, TPM/crypto

Frameworks for User-Level Runtimes
- PULSE: Preemptive User-Level SchEdulers Framework
  - Components: hardware threads (harts), timer interrupts for preemption
  - Able to handle cell resizing
  - Automatic simulation of suspended scheduler segments
  - Available schedulers: round-robin (and pthreads), EDF and fixed priority, Constant Bandwidth Server (M-CBS), Juggle (load balancing SPMD apps)
- Other framework: Lithe
  - Non-preemptive, cooperative
  - Allows construction of schedulers that cooperate with libraries in handling processor resources
- Used by applications and services in multiple demos
- Driving toward an ABI that incorporates both preemptive and cooperative scheduling in the same framework

Example: GUI Service
- QoS-aware scheduler provides differentiated service to applications
  - Soft service-time guarantees
  - Performance isolation from other applications
- Operates on user-meaningful "actions", e.g., "draw frame", "move window"
- Exploits task parallelism for improved service times
- Intended to offer window management with graphical, video, and image-processing services

Performance of GUI Service
(bar chart omitted: out of 4000 frames, one bar per video client, less is better; configurations compared: Nano-X/Linux, Nano-X/Tess, GuiServ(1)/Tess, GuiServ(2)/Tess, GuiServ(4)/Tess)
Speaker notes: More stability on Tessellation again, achieved by reallocating CPU time from the 30-fps video clients to the 60-fps video clients; note that this increases service time for the 30-fps clients. We can do better. There are very clear signs of the GUI Service exploiting parallelism; we don't even need CPU reservations for the 2-core case.

Conclusion
- Constant Bandwidth Server (CBS)
  - Provides a compatible scheduling framework for both hard- and soft-realtime applications
  - Targeted at supporting continuous-media applications
- Multiprocessor CBS (M-CBS)
  - Increases the overall bandwidth of CBS by increasing processor resources
- Resource-centric computing
  - Centered around resources; separates allocation of resources from use of resources
- Next time: resource allocation and scheduling frameworks