Chapter 10 Multiprocessor and Real-Time Scheduling


Chapter 10 Multiprocessor and Real-Time Scheduling
Based on Operating Systems: Internals and Design Principles by William Stallings.

Scheduling Design Issues
Scheduling on a multiprocessor involves three interrelated issues:
The assignment of processes to processors
The use of multiprogramming on individual processors
The actual dispatching of a process
It is important to keep in mind that the approach taken will depend, in general, on the degree of granularity of the applications and on the number of processors available.

Assignment of Processes to Processors
Assuming that the architecture of the multiprocessor is uniform, in the sense that no processor has a particular physical advantage with respect to access to main memory or to I/O devices, the simplest scheduling approach is to treat the processors as a pooled resource and assign processes to processors on demand. The question then arises as to whether the assignment should be static or dynamic. With dynamic assignment, threads are moved from a queue for one processor to a queue for another processor; Linux uses this approach.

Static Assignment
If a process is permanently assigned to one processor from activation until its completion, then a dedicated short-term queue is maintained for each processor. An advantage of this approach is that there may be less overhead in the scheduling function, because the processor assignment is made once and for all. Also, the use of dedicated processors allows a strategy known as group or gang scheduling, as discussed later. A disadvantage is that one processor can be idle, with an empty queue, while another processor has a backlog. To prevent this situation, a common queue can be used: all processes go into one global queue and are scheduled to any available processor.
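The trade-off above can be sketched in a few lines. This is a toy model, not real scheduler code: the process names and queue contents are invented to show how dedicated per-processor queues can leave one processor idle while a single global queue keeps every processor busy.

```python
from collections import deque

# Hypothetical workload: processor 0's dedicated queue is empty while
# processor 1 has a backlog -- exactly the situation a common queue avoids.
dedicated = [deque(), deque(["P1", "P2", "P3"])]
idle = [len(q) == 0 for q in dedicated]
print(idle)  # processor 0 sits idle despite pending work elsewhere

# With one global queue, any free processor takes the next process.
common = deque(["P1", "P2", "P3"])
assignment = {}
for cpu in (0, 1):
    if common:
        assignment[cpu] = common.popleft()
print(assignment)  # both processors receive work
```

With the dedicated queues, processor 0 reports idle while three processes wait on processor 1; with the common queue, both processors are assigned work immediately.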

Process Scheduling
Usually, processes are not dedicated to processors. Instead, a single queue is used for all processors, or, if some sort of priority scheme is used, there are multiple queues based on priority, all feeding into the common pool of processors. We can view the system as a multiserver queuing architecture.

Real-Time Scheduling
Real-time computing may be defined as that type of computing in which the correctness of the system depends not only on the logical result of the computation but also on the time at which the results are produced. Tasks or processes attempt to control or react to events that take place in the outside world; these events occur in "real time," and tasks must be able to keep up with them. In general, in a real-time system some of the tasks are real-time tasks, and these have a certain degree of urgency to them.

Real-Time Systems Control of laboratory experiments Process control in industrial plants Robotics Air traffic control Telecommunications Military

Hard vs. Soft
A hard real-time task is one that must meet its deadline; otherwise it will cause unacceptable damage or a fatal error to the system.
A soft real-time task has an associated deadline that is desirable but not mandatory; it still makes sense to schedule and complete the task even if it has passed its deadline.

Periodic vs. Aperiodic
A periodic task is completed regularly; the requirement may be stated as "once per period T" or "exactly T units apart."
An aperiodic task has a deadline by which it must finish or start, or it may have a constraint on both start and finish time.
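The two task types above can be made concrete with a small sketch. The period length and the aperiodic task's times are illustrative values, not from the text:

```python
# Periodic task: "once per period T" means instance k becomes ready
# at time k * T. T = 50 ms is a hypothetical period.
T = 50
releases = [k * T for k in range(5)]
print(releases)  # [0, 50, 100, 150, 200]

# An aperiodic task instead carries explicit one-off constraints,
# for example an arrival time and a deadline by which it must finish.
task = {"arrival": 12, "deadline": 90}
slack = task["deadline"] - task["arrival"]
print(slack)  # time available to complete the task
```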

Round Robin scheduling unacceptable In a preemptive scheduler that uses simple round-robin scheduling, a real-time task would be added to the ready queue to await its next time slice, as illustrated in Figure 10.5a. In this case, the scheduling time will generally be unacceptable for real-time applications.

Non-preemptive Priority Driven: unacceptable In a nonpreemptive scheduler, we could use a priority scheduling mechanism, giving real-time tasks higher priority. In this case, a real-time task that is ready would be scheduled as soon as the current process blocks or runs to completion (Figure 10.5b). This could lead to a delay of several seconds if a slow, low-priority task were executing at a critical time. Again, this approach is not acceptable.

Combine priorities with clock-based interrupts A more promising approach is to combine priorities with clock-based interrupts. Preemption points occur at regular intervals. When a preemption point occurs, the currently running task is preempted if a higher-priority task is waiting. This would include the preemption of tasks that are part of the operating system kernel. Such a delay may be on the order of several milliseconds. While this last approach may be adequate for some real-time applications, it will not suffice for more demanding applications.
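The delay introduced by preemption points can be sketched numerically. The 10 ms interval and arrival time here are invented for illustration; the point is that a newly ready high-priority task waits until the next preemption point, bounding the delay by the interval length:

```python
# Preemption points occur every POINT ms; a high-priority real-time
# task that becomes ready between points must wait for the next one.
POINT = 10                 # preemption-point interval, ms (illustrative)
arrival = 23               # high-priority task becomes ready here

# The running low-priority task is preempted at the first point
# strictly after the arrival time.
next_point = ((arrival // POINT) + 1) * POINT
latency = next_point - arrival
print(next_point, latency)  # preempted at t = 30, after a 7 ms delay
```

The worst-case delay with this scheme is one full interval, which is why the text says it may be on the order of several milliseconds.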

Immediate Preemption For more demanding applications, the approach that has been taken is sometimes referred to as immediate preemption. In this case, the operating system responds to an interrupt almost immediately, unless the system is in a critical-code lockout section. Scheduling delays for a real-time task can then be reduced to 100 μs or less.

Linux Scheduling Scheduling classes:
SCHED_FIFO: First-in-first-out real-time threads
SCHED_RR: Round-robin real-time threads
SCHED_OTHER: Other, non-real-time threads
Within each class, multiple priorities may be used. By default, real-time priorities range from 0 to 99 inclusive, and SCHED_OTHER priorities range from 100 to 139.
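The priority layout can be sketched as a small mapping function. The helper name is mine, and this is a simplified model of the kernel-internal convention (a numerically lower internal value means more urgent), not code from the Linux source:

```python
# Simplified model of Linux's internal priority layout: 0-99 for the
# real-time classes, 100-139 for SCHED_OTHER (120 + nice level).
def linux_internal_priority(policy, rt_priority=0, nice=0):
    if policy in ("SCHED_FIFO", "SCHED_RR"):
        # rt_priority 1..99: higher rt_priority -> lower internal value
        return 99 - rt_priority
    # SCHED_OTHER: nice -20..19 maps onto 100..139
    return 120 + nice

print(linux_internal_priority("SCHED_FIFO", rt_priority=99))  # 0, most urgent
print(linux_internal_priority("SCHED_OTHER", nice=-20))       # 100
print(linux_internal_priority("SCHED_OTHER", nice=19))        # 139
```

Note that any real-time thread, even at the lowest real-time priority, maps below every SCHED_OTHER thread, which is why non-real-time threads run only when no real-time thread is ready.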

Linux Scheduling The system will not interrupt an executing FIFO thread except in the following cases: a. Another FIFO thread of higher priority becomes ready. b. The executing FIFO thread becomes blocked waiting for an event, such as I/O. c. The executing FIFO thread voluntarily gives up the processor by calling sched_yield.

Linux Scheduling When an executing FIFO thread is interrupted, it is placed in the queue associated with its priority. When a FIFO thread becomes ready, if that thread has a higher priority than the currently executing thread, then the currently executing thread is preempted and the highest-priority ready FIFO thread is executed. If more than one thread has that highest priority, the thread that has been waiting the longest is chosen.
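The pick rule above (highest priority first; within a priority, longest waiting first) can be modeled with per-priority FIFO queues. The thread names and priority values are invented for illustration:

```python
from collections import deque

# Per-priority ready queues; within each deque, the head has waited longest.
# B was enqueued before C, so B has waited longer at priority 50.
queues = {50: deque(["B", "C"]), 20: deque(["A"])}

def pick_next(queues):
    # Highest non-empty priority wins; popleft gives the longest waiter.
    top = max(p for p, q in queues.items() if q)
    return queues[top].popleft()

print(pick_next(queues))  # B: same priority as C, but waited longer
print(pick_next(queues))  # C
print(pick_next(queues))  # A: lower priority, runs last
```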

Linux Real-Time Scheduling Figure 10.11 is an example that illustrates the distinction between FIFO and RR scheduling. Assume a process has four threads with three relative priorities assigned as shown in Figure 10.11a. Assume that all waiting threads are ready to execute when the current thread waits or terminates and that no higher-priority thread is awakened while a thread is executing. Figure 10.11b shows a flow in which all of the threads are in the SCHED_FIFO class. Thread D executes until it waits or terminates. Next, although threads B and C have the same priority, thread B starts because it has been waiting longer than thread C. Thread B executes until it waits or terminates, then thread C executes until it waits or terminates. Finally, thread A executes. Figure 10.11c shows a sample flow if all of the threads are in the SCHED_RR class. Thread D again executes first, until it waits or terminates. Next, threads B and C are time sliced, because they both have the same priority. Finally, thread A executes. The final scheduling class is SCHED_OTHER. A thread in this class can only execute if there are no real-time threads ready to execute.
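The SCHED_FIFO flow in the example can be reproduced with a short sketch. The numeric priorities are invented stand-ins for Figure 10.11a; the only assumptions are the ones the text states: D is highest, B and C share the middle priority with B having waited longer, and A is lowest.

```python
# FIFO flow of Figure 10.11b: sort by descending priority; Python's
# stable sort preserves arrival order within a priority, so B (listed
# first, having waited longer) precedes C.
def fifo_order(ready):
    # ready: list of (priority, name) in order of how long each has waited
    return [name for _, name in sorted(ready, key=lambda t: -t[0])]

ready = [(3, "D"), (2, "B"), (2, "C"), (1, "A")]
print(fifo_order(ready))  # ['D', 'B', 'C', 'A']

# Under SCHED_RR the only difference is that B and C would alternate
# in time slices instead of B running to completion before C.
```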