T5 Multithreading (SO-Grade 2013-2014 Q2): Processes vs. Threads, thread libraries, communication based on shared memory, race conditions, critical sections.


1.1 T5-multithreading SO-Grade 2013-2014 Q2

1.2 Index: Processes vs. Threads; Thread libraries; Communication based on shared memory; Race condition; Critical section; Mutual exclusion access

1.3 Until now (Processes vs. Threads): just one sequence of execution per process, i.e. just one program counter and just one stack. There is no support to execute different concurrent functions inside one process, even though a program may contain independent functions that could exploit concurrency.

1.4 Example: client-server application. Single-process server: the server cannot serve more than one client at the same time, so it cannot take advantage of concurrency or parallelism. Multi-process server: one process per simultaneous client to be served gives concurrent and/or parallel execution, but wastes resources: replication of data structures that hold the same information, replication of logical address spaces, inefficient communication mechanisms, etc. Pseudo-code (clients 1..N run identical code):

Client i {
  ...
  Send_request();
  Wait_response();
  Process_response();
  ...
}

GLOBAL DATA
Server {
  while() {
    Wait_request();
    Prepare_response();
    Send_response();
  }
}

1.5 Example: client-server application (multi-process server). The clients run the same code as before; the server creates one process per request, and the slide shows the server block replicated once per created process, each with its own copy of the GLOBAL DATA:

GLOBAL DATA
Server {
  while() {
    START_process
      Wait_request();
      Prepare_response();
      Send_response();
    END_process
  }
}
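
A rough C sketch of this multi-process scheme, as one interpretation of the slide's pseudo-code (not part of the original): wait_request(), prepare_response() and send_response() are hypothetical stubs standing in for the real service code, and the loop is bounded only so the program terminates.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Stubs standing in for the pseudo-code operations in the slide. */
static void wait_request(void)     { sleep(1); }   /* pretend a request arrived */
static void prepare_response(void) { printf("preparing (pid %d)\n", getpid()); }
static void send_response(void)    { printf("sending (pid %d)\n", getpid()); }

int main(void)
{
    for (int i = 0; i < 3; i++) {          /* a real server would loop forever */
        wait_request();
        pid_t pid = fork();                 /* START_process: one child per client */
        if (pid == 0) {
            prepare_response();             /* the child gets a private COPY of the data */
            send_response();
            _exit(0);                       /* END_process */
        }
    }
    while (wait(NULL) > 0)                  /* parent reaps the finished children */
        ;
    return 0;
}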

1.6 Example: server application. Alternative: a multithreaded server, which enables several concurrent executions associated with one process. What is necessary to describe one execution sequence? A stack, a program counter and the values of the general purpose registers. The rest of the process characteristics can be shared (the rest of the logical address space, information about devices, signal management, etc.).

1.7 Processes vs. Threads. Most resources are assigned to processes. Characteristics/resources per thread: the next instruction to execute (PC value), a memory region to hold its stack, the values of the general purpose registers, and an identifier. The scheduling unit is the thread (each thread requires a CPU to run). The rest of the resources/characteristics are shared by all threads in a process. A traditional process contains just one execution thread.

1.8 Example: client-server application (multithreaded server). The clients run the same code as before; the server creates one thread per request, and all threads share the single copy of the GLOBAL DATA (a C sketch follows below):

GLOBAL DATA
Server {
  while() {
    START_thread
      Wait_request();
      Prepare_response();
      Send_response();
    END_thread
  }
}
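
A comparable sketch of the thread-per-request version, again with hypothetical wait_request()/prepare_response()/send_response() stubs; pthread_create is only introduced formally in slide 1.16, so treat this as a preview rather than the slide's own code. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Stubs standing in for the pseudo-code operations in the slide. */
static void wait_request(void)     { sleep(1); }   /* pretend a request arrived */
static void prepare_response(void) { printf("preparing (thread %lu)\n", (unsigned long)pthread_self()); }
static void send_response(void)    { printf("sending (thread %lu)\n",   (unsigned long)pthread_self()); }

static void *serve_one_client(void *arg)           /* START_thread ... END_thread */
{
    (void)arg;
    prepare_response();                            /* global data is shared, not copied */
    send_response();
    return NULL;
}

int main(void)
{
    for (int i = 0; i < 3; i++) {                  /* a real server would loop forever */
        wait_request();
        pthread_t th;
        pthread_create(&th, NULL, serve_one_client, NULL);
        pthread_detach(th);                        /* no join: thread resources freed when it ends */
    }
    sleep(1);                                      /* crude wait so main does not exit too early */
    return 0;
}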

1.9 Internals: Processes vs. Threads. One process with N threads has: 1 PCB; N different code sequences that can be executed concurrently (the PCB allocates space to store the execution context of all threads); one address space with 1 code region, 1 data region, 1 heap region and N stack regions (1 per thread).

1.10 Processes vs. Threads

1.11 Internals: Processes vs. Threads. Memory sharing. Between processes: all process memory is private by default and no other process can access it (there are system calls to ask explicitly for shared memory between processes). Between threads: all threads in a process can access the whole process address space. Some considerations: each thread has its own stack region to keep its local variables, parameters and the values that control its execution flow; however, all stack regions are also accessible by all threads in the process (variable/parameter scope vs. permission to access memory, as the sketch below illustrates).
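
A minimal sketch of the scope-vs-access distinction above (my example, not from the slides): a global variable is shared by construction, and passing a pointer lets the worker thread write into a variable that lives on main's stack. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;           /* data region: visible to every thread */

static void *worker(void *arg)
{
    int *on_main_stack = arg;     /* points into main's stack region */
    shared_counter++;             /* global access: always possible */
    *on_main_stack = 42;          /* legal: all stacks live in the same address space */
    return NULL;
}

int main(void)
{
    int local = 0;                /* lives on main's stack */
    pthread_t th;
    pthread_create(&th, NULL, worker, &local);
    pthread_join(th, NULL);
    printf("local=%d shared=%d\n", local, shared_counter);  /* prints local=42 shared=1 */
    return 0;
}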

1.12 Using concurrent applications. Potential scenarios for multithreaded or multiprocess applications: exploiting parallelism and concurrency; improving modularity; I/O-bound applications (processes or threads dedicated just to implement device accesses); server applications.

1.13 Benefits from using threads. Compared to using processes: lower management costs (creation, destruction and context switch); better resource exploitation; a very simple communication mechanism: shared memory.

1.14 User-level management: thread libraries. There is no standard interface common to all OS kernels, so applications that use a kernel interface directly are not portable. POSIX threads (Portable Operating System Interface, defined by IEEE) is a user-level thread management interface: creation and destruction, synchronization, scheduling configuration. It uses the OS system calls as required. There are implementations for all operating systems, so applications that use this interface become portable. The API is very complete, and on some operating systems it is only partially implemented.

1.15 Pthread management services.
Creation: processes use fork(); threads use pthread_create(out Pth_id, in NULL, in function_name, in Pparam).
Identification: processes use getpid(); threads use pthread_self().
Ending: processes use exit(exit_code); threads use pthread_exit(Pexit_code).
Synchronization with the end of execution: processes use waitpid(pid, ending_status, FLAGS); threads use pthread_join(in thread_id, out PPexit_code).
Check the interfaces on the web (man pages are not installed in the labs).

1.16 Thread creation: pthread_create. Creates a new thread that will execute start_routine with the arg parameter.

#include <pthread.h>
int pthread_create(pthread_t *th, pthread_attr_t *attr, void *(*start_routine)(void *), void *arg);

th: will hold the thread identifier. attr: initial characteristics of the thread (if NULL, the thread starts execution with the default characteristics). start_routine: routine that the new thread will execute (in C, the name of a function represents its starting address); this routine can receive just one parameter, of type void*. arg: the routine's parameter. Returns 0 if creation ends OK, or an error code otherwise.
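
A minimal usage sketch (the hello routine and the integer argument are illustrative, not from the slide); compile with gcc -pthread.

#include <pthread.h>
#include <stdio.h>

static void *hello(void *arg)
{
    int id = *(int *)arg;                /* the single void* parameter */
    printf("thread %d running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t th;
    int id = 1;
    if (pthread_create(&th, NULL, hello, &id) != 0) {  /* 0 means success */
        perror("pthread_create");
        return 1;
    }
    pthread_join(th, NULL);              /* wait so main does not exit first */
    return 0;
}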

1.17 Thread identification: pthread_self. Returns the identifier of the thread that executes this function.

#include <pthread.h>
pthread_t pthread_self(void);

Returns the calling thread's identifier.

1.18 Thread destruction: pthread_exit. It is executed by the thread that ends its execution; its parameter is the thread's ending code.

#include <pthread.h>
void pthread_exit(void *status);

status: the thread's return value (ending code). This call does not return.

1.19 Synchronization with the end of a thread: pthread_join. Blocks the calling thread until the indicated thread finishes, and collects the value that thread passed to pthread_exit. It also releases the data structure associated with that thread.

#include <pthread.h>
int pthread_join(pthread_t th, void **status);

th: identifier of the thread to wait for. status: will hold the parameter that thread th passed to pthread_exit; if NULL, the pthread_exit parameter is ignored. Returns 0 if OK, or an error code otherwise. (A complete example combining these calls follows below.)
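
A small complete program combining pthread_create, pthread_self, pthread_exit and pthread_join; the worker routine and the idx*10 ending codes are illustrative choices, and it uses the common trick of packing a small integer into the void* return value. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 3

static void *worker(void *arg)
{
    long idx = (long)arg;
    printf("worker %ld, pthread id %lu\n", idx, (unsigned long)pthread_self());
    pthread_exit((void *)(idx * 10));     /* ending code collected by the joiner */
}

int main(void)
{
    pthread_t th[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&th[i], NULL, worker, (void *)i);

    for (long i = 0; i < NTHREADS; i++) {
        void *status;
        pthread_join(th[i], &status);     /* blocks until th[i] finishes */
        printf("worker %ld returned %ld\n", i, (long)status);
    }
    return 0;
}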

1.20 Shared memory communication. Threads in a process can exchange information through memory (all memory is shared between all threads in a process) by accessing the same variables. Risk: race conditions. There is a race condition when the result of the execution depends on the relative execution order of the instructions of the threads (or processes).

1.21 Example: race condition. Programmer goal: use the boolean first to distribute task1 and task2 between the two threads. But the operations used are not atomic!

int first = 1;  /* shared variable */

/* thread 1 and thread 2 execute the same code */
if (first) {
    first--;
    task1();
} else {
    task2();
}

Intended outcome: task1 is run by one thread and task2 by the other (e.g. task1 by Thread 1, task2 by Thread 2). WRONG RESULT: task1 is run by both Thread 1 and Thread 2, and task2 by nobody.
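
The same race as a runnable sketch (two threads executing the slide's do_task body; the task1/task2 bodies just print and are assumptions of mine). In practice the wrong interleaving is rare, so you may need to run it many times, or add a short delay between the test and the decrement, to observe both threads printing task1. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

int first = 1;                 /* shared variable */

void task1(void) { printf("task1 run by %lu\n", (unsigned long)pthread_self()); }
void task2(void) { printf("task2 run by %lu\n", (unsigned long)pthread_self()); }

static void *do_task(void *arg)
{
    (void)arg;
    if (first) {               /* the test ...                             */
        first--;               /* ... and the decrement are NOT atomic:    */
        task1();               /* a context switch in between breaks it    */
    } else {
        task2();
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, do_task, NULL);
    pthread_create(&t2, NULL, do_task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}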

1.22 Assembler code.

Do_task:
    pushl %ebp
    movl  %esp, %ebp
    subl  $8, %esp
    movl  first, %eax      # the if test: more than 1 instruction
    testl %eax, %eax
    je    .L2
    movl  first, %eax      # the subtraction (first--): more than 1 instruction
    subl  $1, %eax
    movl  %eax, first
    call  task1
    jmp   .L5
.L2:
    call  task2            # the else code
.L5:
    leave
    ret

What are the effects if a context switch happens right after executing the movl instruction in the if section?

1.23 What happens? Thread 1 and Thread 2 both execute the Do_task code shown above. Thread 1 loads first (value 1) into %eax in the if section and then a context switch happens. Thread 2 runs the whole listing: it also reads first == 1, decrements it and calls task1. When Thread 1 resumes, its saved %eax is already set to 1, so it takes the if branch as well and calls task1: the wrong result from the previous slide.

1.24 Critical section. A sequence of code lines that contains race conditions that may cause wrong results; a sequence of code lines that accesses shared, changing variables. Solution: mutual exclusion access to those code regions. (Avoid context switching?)

1.25 Mutual exclusion access. Ensures that access to a critical section is sequential: only one thread can execute code in a critical section at a time (even if a context switch happens). Programmer responsibilities: identify the critical sections in the code, and mark the starting point and ending point of each critical section using the tools provided by the OS. The OS provides programmers with system calls to mark the starting and ending points of a critical section. Starting point: if no other thread currently has permission to access the critical section, this thread gets the permission and continues with the code execution; otherwise, this thread waits until access to the critical section is released. Ending point: the critical section is released and permission is given to one of the threads waiting to access it, if there is any.

1.26 Mutual exclusion: pthread interface. To consider: each critical section is identified through a global variable of type pthread_mutex_t; it is necessary to define one variable per type of critical section, and to initialize that variable before using it. Ideally, this initialization should be performed before creating the pthreads that will use it.

Function                Description
pthread_mutex_init      Initializes a pthread_mutex_t variable
pthread_mutex_lock      Requests access to a critical section (blocks until it is granted)
pthread_mutex_unlock    Releases access to a critical section

1.27 Example: Mutex.

int first = 1;          // shared variable
pthread_mutex_t rc1;    // new shared variable

pthread_mutex_init(&rc1, NULL);   // INITIALIZE rc1 VARIABLE: JUST ONCE
...
pthread_mutex_lock(&rc1);         // BLOCK ACCESS
if (first) {
    first--;
    pthread_mutex_unlock(&rc1);   // RELEASE ACCESS
    task1();
} else {
    pthread_mutex_unlock(&rc1);   // RELEASE ACCESS
    task2();
}
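
A complete, runnable version of the code above, with assumed task1()/task2() bodies that just print and a main that launches two threads (these additions are mine, not the slide's); compile with -pthread.

#include <pthread.h>
#include <stdio.h>

int first = 1;                         /* shared variable */
pthread_mutex_t rc1;                   /* protects the test-and-decrement of first */

void task1(void) { printf("task1\n"); }
void task2(void) { printf("task2\n"); }

static void *do_task(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&rc1);          /* enter the critical section */
    if (first) {
        first--;
        pthread_mutex_unlock(&rc1);    /* release before the (possibly long) task */
        task1();
    } else {
        pthread_mutex_unlock(&rc1);
        task2();
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_mutex_init(&rc1, NULL);    /* initialize once, before creating the threads */
    pthread_create(&t1, NULL, do_task, NULL);
    pthread_create(&t2, NULL, do_task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_mutex_destroy(&rc1);
    return 0;
}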

1.28 Mutual exclusion: programming considerations. Critical sections should be as small as possible in order to maximize concurrency. Mutual exclusion is driven by the identifier (variable) used at the starting and ending points; it is not necessary to have the same code in related critical sections. If there are several independent shared variables, it may be convenient to use different identifiers to protect them (see the sketch below).
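
A sketch of the last consideration (independent variables, independent mutexes); the hits/misses counters and function names are illustrative. Each mutex protects only its own variable, so the two threads never serialize against each other. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

int hits = 0, misses = 0;                               /* independent shared variables */
pthread_mutex_t hits_mtx   = PTHREAD_MUTEX_INITIALIZER; /* protects hits only   */
pthread_mutex_t misses_mtx = PTHREAD_MUTEX_INITIALIZER; /* protects misses only */

static void *count_hits(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&hits_mtx);     /* critical section kept as small as possible */
        hits++;
        pthread_mutex_unlock(&hits_mtx);
    }
    return NULL;
}

static void *count_misses(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&misses_mtx);   /* different identifier: never blocks count_hits */
        misses++;
        pthread_mutex_unlock(&misses_mtx);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, count_hits, NULL);
    pthread_create(&t2, NULL, count_misses, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("hits=%d misses=%d\n", hits, misses);
    return 0;
}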