
1 1.1 T5-multithreading SO-Grade 2013-2014-Q2

2 1.2 Index
- Processes vs. Threads
- Thread libraries
- Communication based on shared memory
- Race condition
- Critical section
- Mutual exclusion access

3 1.3 Processes vs. Threads
Until now: just one sequence of execution, i.e. just one program counter and just one stack. There is no support to execute different concurrent functions inside one process, even though a process may contain independent functions that could exploit concurrency.

4 1.4 Example: client-server application
Single-process server:
- The server cannot serve more than one client at the same time
- It cannot take advantage of concurrency or parallelism
Multi-process server: one process per simultaneous client to be served
- Concurrent and/or parallel execution
- But resources are wasted: replicated data structures keeping the same information, replicated logical address spaces, inefficient communication mechanisms, ...

Client i { ... Send_request(); Wait_response(); Process_response(); ... }   /* clients 1..N */
Server { while(1) { Wait_request(); Prepare_response(); Send_response(); } }   /* GLOBAL DATA */

5 1.5 Example: client-server application (multi-process)
Each client is served by its own server process, and each process carries its own copy of the global data:

Client i { ... Send_request(); Wait_response(); Process_response(); ... }   /* clients 1..N */
Server { while(1) { START_process Wait_request(); Prepare_response(); Send_response(); END_process } }   /* GLOBAL DATA, replicated in every server process */

6 1.6 Example: server application
Alternative: multithreaded server. Enable several concurrent executions associated with one process. What is necessary to describe one execution sequence?
- A stack
- A program counter
- The values of the general-purpose registers
The rest of the process characteristics can be shared (the rest of the logical address space, information about devices, signal management, etc.)

7 1.7 Processes vs. Threads
Most resources are assigned to processes. Characteristics/resources per thread:
- Next instruction to execute (PC value)
- A memory region to hold its stack
- Values of the general-purpose registers
- An identifier
The scheduling unit is the thread (each thread requires a CPU). The rest of the resources/characteristics are shared by all threads in a process. A traditional process contains just one execution thread.

8 1.8 Example: client-server application (multithreaded)
One server process, with one thread per client. All threads share the global data:

Client i { ... Send_request(); Wait_response(); Process_response(); ... }   /* clients 1..N */
Server { while(1) { START_thread Wait_request(); Prepare_response(); Send_response(); END_thread } }   /* GLOBAL DATA shared by all threads */

9 1.9 Internals: Processes vs. Threads
1 process with N threads:
- 1 PCB: N different code sequences can be executed concurrently; the PCB allocates space to store the execution context of all threads
- Address space: 1 code region, 1 data region, 1 heap region, plus N stack regions (1 per thread)

10 1.10 Processes vs. Threads

11 1.11 Internals: Processes vs. Threads - Memory Sharing
Between processes: all process memory is private by default; no other process can access it (there are system calls to explicitly request shared memory between processes).
Between threads: all threads in a process can access the whole process address space. Some considerations:
- Each thread has its own stack region, which keeps its local variables, parameters, and the values that control its execution flow
- However, all stack regions are also accessible by all threads in the process (variable/parameter scope vs. permission to access memory)

12 1.12 Using concurrent applications
Potential scenarios for multithreaded or multiprocess applications:
- Exploiting parallelism and concurrency
- Improving modularity
- I/O-bound applications: processes or threads dedicated just to implementing device accesses
- Server applications

13 1.13 Benefits from using threads
Compared to using processes:
- Lower management costs: creation, destruction, and context switch
- Better resource exploitation
- A very simple communication mechanism: shared memory

14 1.14 User-level management: thread libraries
There is no standard interface common to all OS kernels: applications that use a kernel interface directly are not portable.
POSIX threads (POSIX: Portable Operating System Interface, defined by IEEE):
- User-level thread management interface: creation and destruction, synchronization, scheduling configuration
- It uses the OS system calls as required
- Implementations exist for all major OSs: applications using this interface become portable
- The API is very complete, and on some OSs it is only partially implemented

15 1.15 Pthread management services
- Creation. Processes: fork(). Threads: pthread_create(out Pth_id, in NULL, in function_name, in Pparam)
- Identification. Processes: getpid(). Threads: pthread_self()
- Ending. Processes: exit(exit_code). Threads: pthread_exit(Pexit_code)
- Synchronization with the end of execution. Processes: waitpid(pid, ending_status, FLAGS). Threads: pthread_join(in thread_id, out PPexit_code)
Check the interfaces on the web (man pages are not installed in the labs).

16 1.16 Thread creation: pthread_create
Creates a new thread that will execute start_routine with the arg parameter.

#include <pthread.h>
int pthread_create(pthread_t *th, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg);

- th: will hold the thread identifier
- attr: initial characteristics of the thread (if NULL, the thread starts execution with the default characteristics)
- start_routine: address of the routine that the new thread will execute (in C, the name of a function represents its starting address). This routine can receive just one parameter, of type void*
- arg: the routine's parameter
Returns 0 if the creation succeeds, or an error code otherwise.

17 1.17 Thread identification: pthread_self
Returns the identifier of the thread that executes this function.

#include <pthread.h>
pthread_t pthread_self(void);

Returns the thread identifier; this function always succeeds.

18 1.18 Thread destruction: pthread_exit
Executed by the thread that ends its execution. Its parameter is the thread's ending code.

#include <pthread.h>
void pthread_exit(void *status);

- status: thread return value (ending code)
This function terminates the calling thread and does not return.

19 1.19 Synchronization with the end of a thread: pthread_join
Blocks the calling thread until the indicated thread ends, and collects the value that thread passed to pthread_exit. It also causes the release of the data structure associated with the thread.

#include <pthread.h>
int pthread_join(pthread_t th, void **status);

- th: identifier of the thread to wait for
- status: will hold the parameter that thread th passed to pthread_exit; if NULL, the pthread_exit parameter is ignored
Returns 0 if it succeeds, or an error code otherwise.

20 1.20 Shared memory communication
Threads in a process can exchange information through memory (all memory is shared between all threads in a process) simply by accessing the same variables.
Risk: race condition. There is a race condition when the result of the execution depends on the relative execution order of the instructions of the threads (or processes).

21 1.21 Example: race condition

int first = 1;   /* shared variable */

/* thread 1 and thread 2 both execute: */
if (first) {
    first--;
    task1();
} else {
    task2();
}

Programmer's goal: use the boolean first to distribute task1 and task2 between the two threads. But the check and the decrement are non-atomic operations! Depending on the interleaving, either one thread runs task1 and the other runs task2 (intended), or both threads run task1: WRONG RESULT.

22 1.22 Assembler code

Do_task:
    pushl %ebp
    movl  %esp, %ebp
    subl  $8, %esp
    movl  first, %eax     # if code: more than 1 instruction
    testl %eax, %eax
    je    .L2
    movl  first, %eax     # subtraction code: more than 1 instruction
    subl  $1, %eax
    movl  %eax, first
    call  task1
    jmp   .L5
.L2:
    call  task2           # else code
.L5:
    leave
    ret

What are the effects if a context switch happens right after executing the movl instruction in the if section?

23 1.23 What happens? %eax is already set to 1
Thread 1 executes movl first, %eax (loading the value 1) and then a context switch happens. Thread 2 runs the whole sequence: it also sees first == 1, decrements it, and calls task1. When thread 1 resumes, its saved %eax still holds 1, so it too takes the if branch, decrements first (now -1), and calls task1: both threads execute task1.

24 1.24 Critical section
A sequence of code lines that contains race conditions that may cause wrong results; a sequence of code lines that accesses shared, changing variables.
Solution: mutual exclusion access to those code regions. Avoid context switching? Not a general solution (and impossible on multiprocessors, where threads run truly in parallel).

25 1.25 Mutual exclusion access
Ensures that access to a critical section is sequential: only one thread can execute code in a critical section at a time (even if a context switch happens).
Programmer responsibilities:
- Identify the critical sections in the code
- Mark the starting point and ending point of each critical section using the tools provided by the OS
The OS provides programmers with system calls to mark the starting point and ending point of a critical section:
- Starting point: if no other thread holds permission to access the critical section, this thread gets the permission and continues with the code execution. Otherwise, this thread waits until access to the critical section is released.
- Ending point: the critical section is released, and permission is given to one of the threads waiting to access it, if there is any waiting.

26 1.26 Mutual exclusion: pthread interface
To consider:
- Each critical section is identified through a global variable of type pthread_mutex_t. It is necessary to define one variable per type of critical section.
- It is necessary to initialize this variable before using it. Ideally, this initialization should be performed before creating the pthreads that will use it.

Function               | Description
pthread_mutex_init     | Initializes a pthread_mutex_t variable
pthread_mutex_lock     | Acquires access to a critical section (blocks if taken)
pthread_mutex_unlock   | Releases access to a critical section

27 1.27 Example: Mutex

int first = 1;            // shared variable
pthread_mutex_t rc1;      // new shared variable

pthread_mutex_init(&rc1, NULL);  // INITIALIZE rc1 VARIABLE: JUST ONCE
...
pthread_mutex_lock(&rc1);        // BLOCK ACCESS
if (first) {
    first--;
    pthread_mutex_unlock(&rc1);  // RELEASE ACCESS
    task1();
} else {
    pthread_mutex_unlock(&rc1);  // RELEASE ACCESS
    task2();
}

28 1.28 Mutual exclusion: considerations
Programming considerations:
- Critical sections should be as small as possible in order to maximize concurrency
- Mutual exclusion is driven by the identifier (variable) used at the starting and ending points; related critical sections do not need to contain the same code
- If there are several independent shared variables, it may be convenient to use different identifiers to protect them
