Chapter 4: Threads

Source: Operating System Concepts by Silberschatz, Galvin and Gagne.

Chapter 4: Threads
- Overview
- Multithreading Models
- Thread Libraries: Pthreads, Windows Thread Library, Java Threads
- Threading Issues
- Implicit Threading: OpenMP
- Kernel-Level Threads: Windows Threads, Linux Threads

Single and Multithreaded Processes

Benefits
- Responsiveness: multithreading an interactive application increases responsiveness to the user.
- Resource sharing: threads share the memory and resources of the process to which they belong.
- Economy: creating and managing processes takes more time than creating and managing threads; in Solaris 2, creating a process is about 30 times slower than creating a thread, and context switching is about five times slower.
- Utilization of multiprocessor architectures: the benefits of multithreading increase on multiprocessor systems, because different threads can be scheduled on different processors.

Multicore Programming
- Multithreaded programming provides a mechanism for more efficient use of the multiple cores in modern desktops.
- CPU designers have improved system performance by adding hardware to improve thread performance; modern Intel CPUs frequently support two hardware threads per core (more on this when we discuss CPU scheduling).
- Concurrent system: supports more than one task by allowing all tasks to make progress.
- Parallel system: allows more than one task to execute simultaneously; a parallel system must therefore be a multiprocessor (or multicore) system.
- Programming challenges on multicore systems:
  - Operating system designers must write scheduling algorithms that use multiple cores to allow parallel execution.
  - Application programmers must modify existing programs, and design new multithreaded programs, to exploit the multiple cores.

Programming Challenges for Multicore Systems (continued)
- Identifying tasks: finding areas of an application that can be divided into separate, concurrent tasks.
- Balance: programmers should ensure that the identified tasks perform equal work of equal value.
- Data splitting: the data accessed and manipulated by different tasks must be divided to run on separate cores.
- Data dependency: when one task depends on data from another, the tasks must be synchronized (more in Chapter 5).
- Testing and debugging: a program running in parallel on multiple cores has many possible execution paths, so testing and debugging concurrent programs is harder than testing and debugging single-threaded sequential programs.

Multithreading Models
Many systems support both user-level and kernel-level threads, giving rise to different multithreading models.
- Many-to-one: maps many user-level threads onto one kernel thread.
  - Thread creation and synchronization are done in user space: fast (no system calls required) and portable.
  - The entire process blocks if any thread makes a blocking system call.
  - Because only one thread can access the kernel at a time, multiple threads cannot run concurrently and thus cannot make use of multiprocessors.
- One-to-one: maps each user-level thread to a kernel thread.
  - Thread creation and synchronization require system calls and are therefore expensive.
  - Creating a user-level thread creates a corresponding kernel thread.
  - More overhead, but allows parallelism; because of the overhead, most implementations limit the number of kernel threads created.
- Many-to-many: multiplexes many user-level threads onto a smaller or equal number of kernel threads.
  - Combines the advantages of the many-to-one and one-to-one models.
  - A user-level thread is not bound to any particular kernel thread.
  - Solaris 2, IRIX, and HP-UX take this approach.

Many-to-One Model

One-to-one Model

Many-to-Many Model

Thread Libraries
A thread library provides the programmer with an API for creating and managing threads.
- User-level library: the library lies entirely in user space with no kernel support; invoking a library function results in a local function call in user space, not a system call.
- Kernel-level library: the code and data structures of the library exist in kernel space.
The three main thread libraries in use today are:
- POSIX Pthreads: visit https://computing.llnl.gov/tutorials/pthreads
- Windows Threads: visit msdn.microsoft.com
- Java Threads: visit docs.oracle.com/en/java

Pthreads: Programming for Multicore PCs
- A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization.
- The API specifies the behavior of the thread library; the implementation is up to the developer of the library.
- Common in UNIX-like operating systems (Solaris, Linux, Mac OS X).
- Refer to the "POSIX Thread Programming" link on the class website for the pthread library routines and how to use them.
- To compile and link a program fun.c against the pthread library, use the command:
  gcc -o fun -pthread fun.c

An Example C Program Using the Pthreads API
(The includes and the runner() function appeared in a second column on the original slide; the program is reassembled here in compilable order.)

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int sum; /* this data is shared by the threads */

/* the thread begins control in this function */
void *runner(void *param)
{
    int i, upper = atoi(param);
    sum = 0;
    for (i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);
}

int main(int argc, char *argv[])
{
    pthread_t tid;       /* thread identifier */
    pthread_attr_t attr; /* set of thread attributes */

    if (argc != 2) {
        fprintf(stderr, "usage: a.out <integer>\n");
        return -1;
    }
    if (atoi(argv[1]) < 0) {
        fprintf(stderr, "%d must be >= 0\n", atoi(argv[1]));
        return -1;
    }
    /* get the default attributes */
    pthread_attr_init(&attr);
    /* create a thread */
    pthread_create(&tid, &attr, runner, argv[1]);
    /* wait for the thread to exit */
    pthread_join(tid, NULL);
    printf("sum = %d\n", sum);
    return 0;
}

Compile with gcc -pthread and run it as, for example, ./a.out 5, which prints sum = 15.

Thread Libraries (continued)
- Windows thread library: similar to the Pthreads library in many ways. Read the example in the book or visit msdn.microsoft.com to learn more about Windows threads.
- Java threads: the Java language and its API provide a rich set of features for creating and managing threads. Java threads are available on any system that provides a JVM, including Windows, Linux, and Mac OS X. The Java threads API is also available for Android applications.

Implicit Threading: Thread Pools
- Thread pools reduce the overhead involved in thread creation.
- Instead of creating a thread for each service request, the server creates a set of threads at startup and places them in a pool.
- When a request for service arrives, it is passed to one of the threads waiting in the pool; after the request is serviced, the thread returns to the pool to await more work.
- If a request arrives and no threads are waiting in the pool, the server waits until a thread becomes free.
- Benefits of thread pooling:
  - Faster servicing of requests, because no thread-creation overhead is incurred per request.
  - Limits the total number of threads that exist at any one time in the system.

Implicit Threading (continued)
- Implicit threading libraries take care of much of the work needed to create, manage, and (to some extent) synchronize threads. Two such libraries are (i) OpenMP and (ii) Intel Threading Building Blocks.
- OpenMP is a set of compiler directives, library routines, and environment variables that specify shared-memory concurrency in FORTRAN, C, and C++ programs.
- The rationale behind OpenMP was to create a portable, unified standard for shared-memory parallelism.
- OpenMP was introduced in November 1997; version 4.5 was released in November 2015.
- All major compilers support OpenMP, including Microsoft Visual C/C++ for Windows and the GNU GCC compiler for Linux.

Implicit Threading: OpenMP
- OpenMP compiler directives demarcate code that can be executed in parallel (called "parallel regions") and control how code is assigned to threads.
- The threads in an OpenMP program operate under the fork-join model:
  - When the main thread encounters a parallel region, a team of threads is forked off, and these threads execute the code within the region.
  - At the end of the parallel region, the threads in the team wait until all other threads in the team have finished before being joined.
  - The main thread resumes serial execution with the statement following the parallel region.

OpenMP: Pragmas
- For C/C++, OpenMP uses "pragmas" as compiler directives.
- All OpenMP pragmas begin with the prefix #pragma omp, followed by an OpenMP construct and one or more optional clauses that modify the construct.
- For example, to define a parallel region within an application, use the parallel construct:
  #pragma omp parallel
- This pragma is followed by a single statement or a block of code enclosed in curly braces. When execution reaches this statement, the runtime creates a team of threads, every thread executes the statements within the parallel region, and the threads are joined after the last statement in the region.

OpenMP: Pragmas (continued)
- In many applications, large numbers of independent operations are found in loops. Using OpenMP's loop work-sharing construct, you can split these loop iterations and assign them to threads for concurrent execution.
- The parallel for construct starts a new parallel region around the single for loop following the pragma and divides the loop iterations among the threads of the team.
- Upon completing its assigned iterations, each thread waits at the implicit barrier at the end of the parallel region to join with the other threads.
- For more information on OpenMP, visit openmp.org.

Threading Issues
- Semantics of the fork() and exec() system calls:
  - When a thread in a multithreaded program calls fork(), does the new process duplicate all the threads, or is the new process single-threaded? The answer depends on the implementation.
  - The exec() system call works the same way we saw earlier: it replaces the entire process, including all its threads.
- Thread cancellation: terminating a thread before it has completed. For example, if a database search is performed by several threads concurrently, the remaining threads can be cancelled once one thread returns the result.
- Cancellation of a target thread can occur in two different ways:
  - Asynchronous cancellation: one thread immediately terminates the target thread.
  - Deferred cancellation: the target thread periodically checks whether it should terminate, allowing it to terminate itself in an orderly fashion.
- Most operating systems allow a process or thread to be cancelled asynchronously.

Threading Issues: Signal Handling
- A signal is used in UNIX systems to notify a process that a particular event has occurred. Signals follow this pattern:
  - A signal is generated by the occurrence of a particular event.
  - The generated signal is delivered to a process.
  - Once delivered, the signal must be handled.
- Synchronous signals: delivered to the same process that performed the operation causing the signal (for example, division by zero).
- Asynchronous signals: generated by an event external to the running process (for example, Ctrl-C) and received by the process asynchronously.
- Signal handlers:
  - Default signal handlers, provided by the OS kernel.
  - User-defined signal handlers.
- Issues in handling signals: in single-threaded programs, signals are always delivered to the process; in multithreaded programs, which thread should the signal be delivered to?
- Thread-specific data: a related issue is that each thread may need its own copy of certain data.

Threading Issues: Scheduler Activations
- The multithreading models discussed earlier (in particular many-to-many) require communication between the kernel and the thread library to maintain the appropriate number of kernel threads allocated to the application, which helps ensure the best performance.
- Many systems implement this communication through scheduler activations.
- Scheduler activations provide upcalls: a communication mechanism from the kernel up to the thread library.

Windows XP Threads
- Windows implements the one-to-one mapping, as mentioned earlier.
- Each thread contains:
  - A thread id
  - A register set, representing the state of the processor
  - Separate user and kernel stacks
  - A private storage area, used by various run-time libraries
- The register set, stacks, and private storage area are known as the context of the thread.
- The primary data structures of a thread include:
  - ETHREAD (executive thread block)
  - KTHREAD (kernel thread block)
  - TEB (thread environment block)

Linux Threads
- Linux refers to threads as tasks rather than threads.
- Thread creation is done through the clone() system call.
- clone() allows a child task to share the address space of the parent task (process).