Process Scheduling III (5.4, 5.7) CPE 261403 - Operating Systems

Multiple-Processor Scheduling (5.4)

Asymmetric vs Symmetric Processing

Asymmetric Multiprocessing: The Cell Processor

The Cell Processor

EIB – A Ring Bus Topology. The Element Interconnect Bus links the PPE and the SPEs in a ring and supports concurrent transmissions.

How the OS can manage the Cell (PPE + SPEs): Job Queue or Stream Processing

Job Queue vs Stream Processing
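The two models can be contrasted with a toy Python simulation (the function names and the single-stage pipeline are illustrative only, not Cell APIs): in the job-queue model each idle core grabs the next whole job from one shared queue, while in the stream model every job flows through a pipeline of stages, conceptually one stage per core.

```python
from collections import deque

def job_queue(jobs, n_cores, run):
    """Job-queue model: each idle core takes the next whole job
    from one shared queue until the queue is empty."""
    q = deque(jobs)
    done = []
    while q:
        for core in range(n_cores):
            if q:
                done.append(run(q.popleft(), core))
    return done

def stream(jobs, stages):
    """Stream model: each job passes through every pipeline stage
    in order (one stage per core, conceptually)."""
    out = []
    for job in jobs:
        for stage in stages:
            job = stage(job)
        out.append(job)
    return out

print(job_queue([1, 2, 3], 2, lambda j, core: j * 10))   # [10, 20, 30]
print(stream([1, 2], [lambda x: x + 1, lambda x: x * 2]))  # [4, 6]
```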

Symmetric Multiprocessing (SMP)

The Xenon Processor. Xenon is a modified PPE unit of the Cell processor; IBM designed it for Microsoft.

Broadway CPU: single core, 729 MHz

4-Way SMP

Has 750 million transistors. What do 750 million objects look like?

Garth Brooks in Central Park New York, 1997

750,000 Viewers

Biggest Concert in History?

Rod Stewart

3,500,000 Viewers

Simultaneous Multithreading (SMT) – Hyper-Threading

SMT Architecture (Figure 5.8). Each logical CPU:
- has its own registers
- can handle interrupts
Similar to virtual machines, but done at the hardware level.

CPU Affinity (a process staying on one processor)
(Diagram: two cores sharing a cache and main memory.)
Soft affinity – the process may be migrated to a different processor.
Hard affinity – the process is locked to one processor.
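On Linux, hard affinity can be requested from user space. A minimal Python sketch (Linux-specific; `pin_to_cpus` is a hypothetical helper name, the underlying call is the standard `os.sched_setaffinity`):

```python
import os

def pin_to_cpus(pid, cpus):
    """Hard affinity: restrict `pid` to the given set of CPU ids
    (pid 0 means the calling process). Linux only."""
    os.sched_setaffinity(pid, cpus)
    return os.sched_getaffinity(pid)   # the set actually in effect

# Lock the current process to CPU 0 only.
allowed = pin_to_cpus(0, {0})
print(allowed)   # {0}
```

After this call the scheduler will never migrate the process off CPU 0, which preserves its cache contents at the cost of load-balancing flexibility.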

Load Balancing: Push Migration
(Diagram: Core 1 and Core 2, each with its own ready queue.)
The kernel periodically checks each core's load and pushes tasks from an overloaded ready queue to an underloaded one.
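A toy Python model of the push step (the threshold and `push_migrate` name are illustrative, not kernel code): a periodic check finds the longest and shortest ready queues and moves one task if the imbalance is large enough.

```python
from collections import deque

def push_migrate(queues):
    """One periodic balance pass: move a task from the longest
    ready queue to the shortest if they differ by more than 1."""
    longest = max(queues, key=len)
    shortest = min(queues, key=len)
    if len(longest) - len(shortest) > 1:
        shortest.append(longest.popleft())   # migrate one task
        return True
    return False

q0, q1 = deque(["A", "B", "C", "D"]), deque()
push_migrate([q0, q1])
print(len(q0), len(q1))   # 3 1
```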

Load Balancing: Pull Migration
A core whose ready queue is empty notifies the kernel and pulls a waiting task from another core's ready queue.
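The pull direction can be sketched the same way (again a toy model, not kernel code): the idle core itself steals from the busiest other queue instead of waiting for a push.

```python
from collections import deque

def pull_migrate(my_queue, all_queues):
    """If `my_queue` is empty, pull one task from the busiest
    other ready queue; return the migrated task (or None)."""
    if my_queue:
        return None                       # not idle, nothing to do
    others = [q for q in all_queues if q is not my_queue]
    busiest = max(others, key=len)
    if busiest:
        task = busiest.popleft()          # steal a waiting task
        my_queue.append(task)
        return task
    return None

q0, q1 = deque(["A", "B"]), deque()
print(pull_migrate(q1, [q0, q1]))   # A
```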

Scheduling Domains in the Linux Kernel (v and later)
(Diagram: CPU 0 and CPU 1, each with Core 0 and Core 1; load balancing runs at levels 0, 1, and 2.)
Takes CPU affinity into consideration: the scheduler tries to migrate only within the same group.

Benefits of Scheduling Domains
- Keeps migration local when possible, so fewer cache misses.
- Can optimize for power-saving mode: schedule within only one domain when possible.
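The "migrate within the same group first" idea can be sketched as a toy model (`choose_target` and the two-level domain layout are illustrative, not the kernel's data structures): try the lowest domain level first, and only climb to a wider level if no suitable target exists there.

```python
def choose_target(src_cpu, loads, domains):
    """Pick a migration target for an overloaded CPU, preferring
    the least-loaded CPU inside the same lowest-level domain
    (e.g. cores sharing a cache) before looking wider."""
    for level in domains:                       # level 0 first
        group = next(g for g in level if src_cpu in g)
        candidates = [c for c in group if c != src_cpu]
        if candidates:
            best = min(candidates, key=lambda c: loads[c])
            if loads[best] < loads[src_cpu] - 1:
                return best                     # worthwhile, stay local
    return None

# Two dual-core packages: level 0 groups sibling cores, level 1 groups all.
domains = [[{0, 1}, {2, 3}], [{0, 1, 2, 3}]]
loads = {0: 5, 1: 1, 2: 0, 3: 0}
print(choose_target(0, loads, domains))   # 1  (same package, cache stays warm)
```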

Future trend of multi-CPU processors? AMP (Asymmetric Multi-Processing): a few high-speed serial cores plus many slower parallel cores.

The Cell Processor: PPE – serial core; SPEs – parallel cores

Turbo Boost Technology (Intel Core i5, i7 processors)
- 3-4 active cores: 2.26 GHz (parallel tasks)
- 2 active cores: 3.06 GHz
- 1 active core: 3.2 GHz (sequential tasks)
The processor can turn any core on or off and adjust its speed.
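The trade-off on this slide can be made concrete with a small sketch (the frequency table is the slide's example; `best_config` is a hypothetical helper, not an Intel API): for a task that can use at most N cores, pick the active-core count that maximizes total throughput (cores x frequency).

```python
# Active-core count -> clock speed (GHz), taken from the slide.
TURBO = {4: 2.26, 2: 3.06, 1: 3.2}

def best_config(parallelism):
    """Return (cores, GHz) maximizing cores * GHz for a task that
    can run on at most `parallelism` cores."""
    usable = {c: f for c, f in TURBO.items() if c <= parallelism}
    cores = max(usable, key=lambda c: c * usable[c])
    return cores, usable[cores]

print(best_config(1))   # (1, 3.2)   sequential task: one fast core
print(best_config(4))   # (4, 2.26)  parallel task: many slower cores
```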

Scheduling for AMP: (1) performance asymmetry, (2) handling a high number of cores

Example of Performance Asymmetry
Core 0 has Performance Index (PI) = 2; the other cores (1-3) have PI = 1.
Scaled load – Core 0's ready queue should be twice as long.
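The scaled-load idea amounts to dividing each queue length by the core's performance index before comparing (a minimal sketch; `scaled_load` is an illustrative name):

```python
def scaled_load(queue_len, perf_index):
    """Effective load of a core: raw ready-queue length divided
    by the core's relative performance index."""
    return queue_len / perf_index

# Core 0 is twice as fast (PI = 2), Core 1 is baseline (PI = 1).
# The system is balanced when Core 0's queue is twice as long:
print(scaled_load(8, 2), scaled_load(4, 1))   # 4.0 4.0
```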

Handling a High Number of Cores
Non-preemptive scheduling – less need to share the CPU; saves context-switch time.
Smart barrier – a thread can tell the OS what resources it is waiting for, so the OS does not need to schedule the thread until the resources are ready.

(Diagram: Job 0 calls printf() and sends the OS the message "Waiting for the display"; the OS runs Jobs 1 and 2 in the meantime.)
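A toy model of that smart-barrier decision (the `pick_next` function and the thread records are illustrative, not a real OS interface): the scheduler simply skips any thread whose declared resource is not yet available.

```python
def pick_next(threads, ready_resources):
    """Return the first runnable thread: one that is not waiting,
    or whose declared resource is already available. Threads
    waiting on unavailable resources are never scheduled just
    to block again."""
    for t in threads:
        if t["waiting_for"] is None or t["waiting_for"] in ready_resources:
            return t["name"]
    return None

threads = [
    {"name": "job0", "waiting_for": "display"},   # blocked in printf()
    {"name": "job1", "waiting_for": None},        # runnable
]
print(pick_next(threads, ready_resources=set()))        # job1
print(pick_next(threads, ready_resources={"display"}))  # job0
```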

Parallel Processing Exercise
1. NewData[x, y] = OldData[x, y] ^ 2
2. NewData[x, y] = (D[x, y] + D[x-1, y] + D[x+1, y] + D[x, y-1] + D[x, y+1]) / 5
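Both transforms can be written as straightforward grid passes (a sketch of the exercise, not a provided solution; the slide does not specify boundary handling, so this version assumes wrap-around edges):

```python
def square(old):
    """Part 1: new[x][y] = old[x][y] ** 2. Every cell is
    independent, so each can be computed on a different core."""
    return [[v ** 2 for v in row] for row in old]

def smooth(d):
    """Part 2: new[x][y] = (d[x][y] + its 4 neighbours) / 5.
    A stencil: cells are still independent within one pass, but
    each needs its neighbours' OLD values. Assumes wrap-around
    boundaries (negative indices wrap naturally in Python)."""
    n, m = len(d), len(d[0])
    return [[(d[x][y]
              + d[x - 1][y] + d[(x + 1) % n][y]
              + d[x][y - 1] + d[x][(y + 1) % m]) / 5
             for y in range(m)] for x in range(n)]

print(square([[1, 2], [3, 4]]))   # [[1, 4], [9, 16]]
print(smooth([[5, 5], [5, 5]]))   # [[5.0, 5.0], [5.0, 5.0]]
```

Part 1 parallelizes trivially (no communication); part 2 still parallelizes per cell, but cores must exchange boundary rows, which is the point of the exercise.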