Chapter 2: The Linux System Part 3

Chapter 2: The Linux System. Topics: Linux History, Design Principles, Kernel Modules, Process Management, Scheduling, Memory Management, File Systems, Input and Output, Interprocess Communication, Network Structure, Security.

Scheduling

Scheduling Scheduling is the job of allocating CPU time to different tasks within an operating system. While scheduling is normally thought of as the running and interrupting of processes, in Linux, scheduling also includes the running of various kernel tasks. Running kernel tasks encompasses both tasks that are requested by a running process and tasks that execute internally on behalf of a device driver. As of version 2.5, the kernel uses a new preemptive, priority-based scheduling algorithm with two separate priority ranges: a real-time range and a nice-value range.

Relationship Between Priorities and Time-slice Length

Relationship Between Priorities and Time-slice Length The Linux scheduler is a preemptive, priority-based algorithm with two separate priority ranges: a real-time range from 0 to 99 and a nice-value range from 100 to 140. These two ranges map into a global priority scheme in which numerically lower values indicate higher priorities. The Linux scheduler assigns higher-priority tasks longer time quanta and lower-priority tasks shorter time quanta.
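To make the mapping concrete, here is a small, self-contained C sketch of the idea. The constants and the function name task_timeslice are illustrative assumptions for this sketch, not the kernel's actual tuned values.

/* Illustrative sketch (not actual kernel source): mapping a task's
 * global priority (0-140) to a time slice, following the rule that
 * numerically lower (higher) priorities get longer quanta.
 * The millisecond constants are hypothetical round numbers. */
#include <stdio.h>

#define MAX_RT_PRIO   100   /* real-time priorities occupy 0..99        */
#define MAX_PRIO      140   /* nice values map to 100..140              */
#define MIN_TIMESLICE 10    /* ms, assumed quantum for the lowest prio  */
#define MAX_TIMESLICE 200   /* ms, assumed quantum for the highest prio */

/* Linearly interpolate: priority 100 -> 200 ms, priority 140 -> 10 ms. */
static unsigned int task_timeslice(unsigned int prio)
{
    if (prio < MAX_RT_PRIO)          /* real-time task: its quantum is   */
        return MAX_TIMESLICE;        /* handled separately; give the max */
    return MAX_TIMESLICE -
           (prio - MAX_RT_PRIO) * (MAX_TIMESLICE - MIN_TIMESLICE) /
           (MAX_PRIO - MAX_RT_PRIO);
}

int main(void)
{
    printf("prio 100 -> %u ms\n", task_timeslice(100)); /* 200 ms */
    printf("prio 120 -> %u ms\n", task_timeslice(120)); /* 105 ms */
    printf("prio 140 -> %u ms\n", task_timeslice(140)); /*  10 ms */
    return 0;
}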

List of Tasks Indexed by Priority

List of Tasks Indexed by Priority Each processor maintains its own run queue and schedules itself independently. Each run queue contains two priority arrays: active and expired. The active array contains all tasks with time remaining in their time slices, and the expired array contains all expired tasks. Each of these priority arrays includes a list of tasks indexed according to priority. The scheduler chooses the task with the highest priority from the active array for execution on the CPU. On multiprocessor machines, this means that each processor schedules the highest-priority task from its own run-queue structure. When all tasks have exhausted their time slices (that is, the active array is empty), the two priority arrays are exchanged: the expired array becomes the active array, and vice versa.
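The following is a simplified, user-space C sketch of these data structures and of the array swap. The names loosely echo the 2.6 O(1) scheduler (prio_array, runqueue), but this is a sketch, not the actual kernel definition.

/* Simplified sketch of the per-CPU run queue described above.
 * Not kernel code; list handling is reduced to a single pointer
 * per priority for brevity. */
#define MAX_PRIO 140

struct task;                       /* opaque task descriptor             */

struct prio_array {
    int          nr_active;        /* tasks currently in this array      */
    struct task *queue[MAX_PRIO];  /* head of the task list per priority */
};

struct runqueue {
    struct prio_array *active;     /* tasks with time slice remaining    */
    struct prio_array *expired;    /* tasks whose slice has run out      */
    struct prio_array  arrays[2];  /* storage for the two arrays         */
};

/* When every task has used up its slice, swapping two pointers makes the
 * expired array the new active array; no tasks are copied or moved. */
void switch_arrays(struct runqueue *rq)
{
    if (rq->active->nr_active == 0) {
        struct prio_array *tmp = rq->active;
        rq->active  = rq->expired;
        rq->expired = tmp;
    }
}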

List of Tasks Indexed by Priority Linux’s real-time scheduling is simpler still. Linux implements the two real-time scheduling classes required by POSIX.1b: first-come, first-served (FCFS) and round-robin. In both cases, each process has a priority in addition to its scheduling class. Processes with different priorities can compete with one another to some extent in time-sharing scheduling; in real-time scheduling, however, the scheduler always runs the process with the highest priority. Among processes of equal priority, it runs the process that has been waiting longest.

List of Tasks Indexed by Priority The only difference between FCFS and round-robin scheduling is that FCFS processes continue to run until they either exit or block, whereas a round-robin process will be preempted after a while and moved to the end of the scheduling queue, so round-robin processes of equal priority automatically time-share among themselves. Unlike routine time-sharing tasks, real-time tasks are assigned static priorities.
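From user space, a process can place itself in one of these real-time classes through the POSIX sched_setscheduler() interface. The minimal sketch below selects SCHED_RR; SCHED_FIFO could be substituted in the same way. It assumes the caller has sufficient privileges (for example CAP_SYS_NICE), otherwise the call fails.

#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct sched_param param;
    memset(&param, 0, sizeof(param));

    /* Pick a priority in the valid real-time range for the chosen policy. */
    param.sched_priority = sched_get_priority_min(SCHED_RR);

    /* Switch the calling process (pid 0) to the round-robin real-time
     * class; use SCHED_FIFO instead to avoid time-slice preemption. */
    if (sched_setscheduler(0, SCHED_RR, &param) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("now running under SCHED_RR at priority %d\n",
           param.sched_priority);
    return 0;
}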

Process Scheduling Linux uses two process-scheduling algorithms: A time-sharing algorithm for fair preemptive scheduling between multiple processes. A real-time algorithm for tasks where absolute priorities are more important than fairness. A process’s scheduling class defines which algorithm to apply.

Process Scheduling (Cont.) Linux implements the FIFO and round-robin real-time scheduling classes; in both cases, each process has a priority in addition to its scheduling class. The scheduler runs the process with the highest priority; for equal-priority processes, it runs the process that has been waiting the longest. FIFO processes continue to run until they either exit or block. A round-robin process will be preempted after a while and moved to the end of the scheduling queue, so that round-robin processes of equal priority automatically time-share between themselves.
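The length of that round-robin quantum can be queried with the sched_rr_get_interval() system call; a short sketch follows, where pid 0 refers to the calling process. (For a task that is not time-sliced, such as a SCHED_FIFO task, the reported interval is typically zero.)

#include <sched.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec quantum;

    /* Ask the kernel how long a SCHED_RR time slice is for this process. */
    if (sched_rr_get_interval(0, &quantum) == -1) {
        perror("sched_rr_get_interval");
        return 1;
    }
    printf("round-robin quantum: %ld.%09ld s\n",
           (long)quantum.tv_sec, (long)quantum.tv_nsec);
    return 0;
}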

Kernel Synchronization A request for kernel-mode execution can occur in two ways: A running program may request an operating system service, either explicitly via a system call or implicitly, for example, when a page fault occurs. A device driver may deliver a hardware interrupt that causes the CPU to start executing a kernel-defined handler for that interrupt. Kernel synchronization requires a framework that will allow the kernel’s critical sections to run without interruption by another critical section.

Kernel Synchronization (Cont.) Linux uses two techniques to protect critical sections: 1. Normal kernel code is nonpreemptible (until version 2.4). When a timer interrupt is received while a process is executing a kernel system-service routine, the kernel’s need_resched flag is set so that the scheduler will run once the system call has completed and control is about to be returned to user mode. 2. The second technique applies to critical sections that occur in interrupt service routines. By using the processor’s interrupt-control hardware to disable interrupts during a critical section, the kernel guarantees that it can proceed without the risk of concurrent access to shared data structures.
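The first technique can be sketched in C as follows. This is illustrative pseudocode, not actual Linux source; the helpers current_slice_expired() and schedule() are hypothetical stand-ins for the real kernel routines.

#include <stdbool.h>

static volatile bool need_resched;              /* set by the timer tick */

/* Hypothetical stand-ins for the real kernel routines (stubs). */
static bool current_slice_expired(void) { return true; }
static void schedule(void) { /* pick and switch to the next task */ }

void timer_interrupt(void)
{
    /* A timer tick may arrive while a system call is executing; it must
       not switch tasks here, it only records that a switch is due. */
    if (current_slice_expired())
        need_resched = true;
}

void kernel_exit_to_user(void)
{
    /* Just before returning to user mode, no kernel data structure can
       be left half-updated, so the deferred reschedule is safe here. */
    if (need_resched) {
        need_resched = false;
        schedule();
    }
}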

Kernel Synchronization (Cont.) To avoid performance penalties, Linux’s kernel uses a synchronization architecture that allows long critical sections to run without having interrupts disabled for the critical section’s entire duration. Interrupt service routines are separated into a top half and a bottom half. The top half is a normal interrupt service routine and runs with recursive interrupts disabled. The bottom half is run, with all interrupts enabled, by a miniature scheduler that ensures that bottom halves never interrupt themselves. This architecture is completed by a mechanism for disabling selected bottom halves while executing normal, foreground kernel code.
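The split can be illustrated with a 2.6-era kernel-module-style fragment using a tasklet, one of the bottom-half mechanisms that later replaced the original fixed bottom-half table. This is a hedged sketch, not a complete driver; the IRQ number and my_dev pointer are hypothetical device details, and it only builds inside a kernel source tree.

/* Top half: runs in interrupt context, does the minimum, defers the rest.
 * Bottom half: runs later, with interrupts enabled. Sketch only. */
#include <linux/interrupt.h>

static struct tasklet_struct my_tasklet;

/* Bottom half: processes whatever the top half captured. */
static void my_bottom_half(unsigned long data)
{
    /* ... heavier work, safe to run with interrupts enabled ... */
}

/* Top half: acknowledge the device quickly and queue the bottom half. */
static irqreturn_t my_top_half(int irq, void *dev_id)
{
    /* ... read/ack the hardware, stash data for the bottom half ... */
    tasklet_schedule(&my_tasklet);
    return IRQ_HANDLED;
}

static int my_driver_init(int irq, void *my_dev)
{
    tasklet_init(&my_tasklet, my_bottom_half, 0);
    return request_irq(irq, my_top_half, 0, "my_device", my_dev);
}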

Interrupt Protection Levels Kernel code thus runs at one of a small set of interrupt protection levels: top-half interrupt handlers, bottom-half handlers, kernel-mode system code, and user-mode programs, in decreasing order of priority. Each level may be interrupted by code running at a higher level, but will never be interrupted by code running at the same or a lower level. User processes can always be preempted by another process when a time-sharing scheduling interrupt occurs.

Symmetric Multiprocessing Linux 2.0 was the first Linux kernel to support SMP hardware; separate processes or threads can execute in parallel on separate processors. To preserve the kernel’s nonpreemptible synchronization requirements, SMP imposes the restriction, via a single kernel spinlock, that only one processor at a time may execute kernel-mode code.
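The effect of such a single kernel lock can be illustrated with a user-space analogue in C using POSIX threads: one global mutex stands in for the kernel lock, so "user-mode" work proceeds in parallel while "kernel-mode" work is serialized. This is only an analogy for the idea, not kernel code.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t kernel_lock = PTHREAD_MUTEX_INITIALIZER;

static void kernel_mode_work(int cpu)
{
    pthread_mutex_lock(&kernel_lock);    /* only one "CPU" gets past here */
    printf("cpu %d executing in kernel mode\n", cpu);
    pthread_mutex_unlock(&kernel_lock);
}

static void *cpu_thread(void *arg)
{
    int cpu = *(int *)arg;
    /* user-mode work would run here, fully in parallel */
    kernel_mode_work(cpu);
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int ids[2] = { 0, 1 };
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, cpu_thread, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}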