CS252: Systems Programming Ninghui Li Based on Slides by Prof. Gustavo Rodriguez-Rivera Topic 10: Threads and Thread Synchronization.

Introduction to Threads A thread is a path of execution. By default, a C/C++ program has one thread, called the "main thread", which starts the main() function:

main() {
   ...
   printf("hello\n");
   ...
}

Introduction to Threads You can create multiple paths of execution using:
POSIX threads (standard): pthread_create(&thr_id, attr, func, arg)
Solaris threads: thr_create(stack, stack_size, func, arg, flags, &thr_id)
Windows: CreateThread(attr, stack_size, func, arg, flags, &thr_id)

Introduction to Threads Every thread will have its own:
Stack
PC (program counter)
Set of registers
State
Each thread will have its own function calls and local variables. The process table entry will have a stack, set of registers, and PC for every thread in the process.

Applications of Threads Concurrent Server Applications Assume a web server receives two requests: first, a request from a computer connected through a modem that will take 2 minutes; then, a request from a computer connected to a fast network that will take 0.01 secs. If the web server is single-threaded, the second request will be processed only after 2 minutes. In a multi-threaded server, two threads are created to process both requests simultaneously, so the second request is processed as soon as it arrives.

Applications of Threads Taking Advantage of Multiple CPUs A program with only one thread can use only one CPU; if the computer has multiple cores, only one of them will be used. If a program divides the work among multiple threads, the OS will schedule a different thread on each CPU, making the program run faster.

Applications of Threads Interactive Applications Threads simplify the implementation of interactive applications that require multiple simultaneous activities. Assume an Internet telephone application with the following threads:
Player thread – receives packets from the Internet and plays them.
Capture thread – captures sound and sends the voice packets.
Ringer Server – receives incoming requests and tells other phones when the phone is busy.
Having a single thread do all this makes the code cumbersome and difficult to read.

Advantages and Disadvantages of Threads vs. Processes Advantages of Threads
Fast thread creation – creating a new path of execution is faster than creating a new process with a new virtual memory address space and open file table.
Fast context switch – context switching across threads is faster than across processes.
Fast communication across threads – threads communicate using global variables, which is faster and easier than processes communicating through pipes or files.

Advantages and Disadvantages of Threads vs. Processes Disadvantages of Threads
Threads are less robust than processes – if one thread crashes due to a bug in the code, the entire application goes down. If an application is implemented with multiple processes and one process goes down, the others keep running.
Threads have more synchronization problems – since threads can modify the same global variables at the same time, they may corrupt the data structures; synchronization through mutex locks and semaphores is needed. Processes do not have this problem because each of them has its own copy of the variables.

Synchronization Problems with Multiple Threads Threads share the same global variables, so multiple threads can modify the same data structures at the same time. This can corrupt the data structures of the program. Even the simplest operations, like incrementing a counter, may have problems when running multiple threads.

Example of Problems with Synchronization

// Global counter
int counter = 0;

void *increment_loop(void *arg) {
  int i;
  int max = *((int *)arg);
  for (i = 0; i < max; i++) {
    int tmp = counter;
    tmp = tmp + 1;
    counter = tmp;
  }
  return NULL;
}

Example of Problems with Synchronization

int main() {
  pthread_t t1, t2;
  int max = 10000000;
  void *ret;
  pthread_create(&t1, NULL, increment_loop, (void *)&max);
  pthread_create(&t2, NULL, increment_loop, (void *)&max);
  // wait until threads finish
  pthread_join(t1, &ret);
  pthread_join(t2, &ret);
  printf("counter total=%d\n", counter);
}

Example of Problems with Synchronization We would expect the final value of counter to be 10,000,000 + 10,000,000 = 20,000,000, but very likely the final value will be less than that. Context switches from one thread to another may change the sequence of events, so the counter may lose some of the counts.

Example of Problems with Synchronization Both threads T1 and T2 run the same code; the three steps of the increment are labeled a), b), c):

int counter = 0;
void increment_loop(int max) {
  for (int i = 0; i < max; i++) {
    a) int tmp = counter;
    b) tmp = tmp + 1;
    c) counter = tmp;
  }
}

Example of Problems with Synchronization One possible interleaving (time runs downward):

T0 (main): join t1 (wait)
T1: for (…) a) tmp1 = counter (tmp1 = 0) — (context switch)
T2: starts running; a) tmp2 = counter (tmp2 = 0); b) tmp2 = tmp2 + 1; c) counter = tmp2 (counter = 1); repeats a) b) c) … until counter = 23 — (context switch)
T1: b) tmp1 = tmp1 + 1; c) counter = tmp1 (counter = 1)

Example of Problems with Synchronization As a result, 23 of the increments will be lost: T1 resets the counter variable to 1 after T2 increased it 23 times. Even if we use counter++ instead of a) b) c), we still have the same problem, because the compiler generates separate instructions that look like a) b) c). Worse things happen to lists, hash tables, and other data structures in a multi-threaded program. The solution is to make certain pieces of the code atomic.

Atomicity Atomic Section: a portion of the code that needs to appear to the rest of the system to occur instantaneously; otherwise, corruption of the variables is possible. An atomic section is also sometimes called a "critical section".

Atomicity by Disabling Interrupts On a uniprocessor, an operation is atomic as long as a context switch does not occur during the operation. To achieve atomicity: disable interrupts upon entering the atomic section, and enable them upon leaving. Context switches cannot happen with interrupts disabled. This is available only in kernel mode, so it is only used in kernel programming. Drawbacks: other interrupts may be lost, and it does not provide atomicity on multiprocessors.

Achieving Atomicity in Concurrent Programs Our main goal is to learn how to write concurrent programs using synchronization tools; we also explain a little bit of how these tools are implemented.
Concurrent Program
High-level synchronization tools (mutex locks, spin locks, semaphores, condition variables, read/write locks)
Hardware support (interrupt disable/enable, test & set, and so on)

Atomicity by Mutex Locks Mutex locks are software mechanisms that enforce atomicity. Only one thread can hold a mutex lock at a time. When a thread tries to obtain a mutex lock that is held by another thread, it is put on hold (a.k.a. put to sleep, blocked). The thread may be woken up when the lock is released.

Mutex Locks Usage
Declaration: #include <pthread.h> and pthread_mutex_t mutex;
Initialize: pthread_mutex_init(&mutex, attributes);
Start atomic section: pthread_mutex_lock(&mutex);
End atomic section: pthread_mutex_unlock(&mutex);

Example of Mutex Locks

#include <pthread.h>

int counter = 0; // Global counter
pthread_mutex_t mutex;

void increment_loop(int max) {
  for (int i = 0; i < max; i++) {
    pthread_mutex_lock(&mutex);
    int tmp = counter;
    tmp = tmp + 1;
    counter = tmp;
    pthread_mutex_unlock(&mutex);
  }
}

Example of Mutex Locks

int main() {
  pthread_t t1, t2;
  int max = 10000000;
  pthread_mutex_init(&mutex, NULL);
  pthread_create(&t1, NULL, increment_loop, (void *)&max);
  pthread_create(&t2, NULL, increment_loop, (void *)&max);
  // wait until threads finish
  pthread_join(t1, NULL);
  pthread_join(t2, NULL);
  printf("counter total=%d\n", counter);
}

Example of Mutex Locks One possible interleaving (time runs downward):

T0 (main): join t1 (wait)
T1: for (…) mutex_lock(&m); a) tmp1 = counter (tmp1 = 0) — (context switch)
T2: starts running; mutex_lock(&m) — blocks waiting for the lock — (context switch)
T1: b) tmp1 = tmp1 + 1; c) counter = tmp1 (counter = 1); mutex_unlock(&m)
T2: a) tmp2 = counter; b) tmp2 = tmp2 + 1; c) counter = tmp2

Example of Mutex Locks As a result, the steps a) b) c) will be atomic, so the final counter total will be 10,000,000 + 10,000,000 = 20,000,000, no matter whether there are context switches in the middle of a) b) c).

Mutual Exclusion Mutex locks enforce mutual exclusion of all code between lock and unlock:

mutex_lock(&m)
A
B
C
mutex_unlock(&m)

mutex_lock(&m)
D
E
F
mutex_unlock(&m)

Mutual Exclusion This means that the sequences ABC and DEF each execute as an atomic block, without interleaving:

Time --->
T1 -> ABC ABC
T2 -> DEF DEF
T3 -> ABC DEF

Mutual Exclusion If different mutex locks are used (m1 != m2), then the sections are no longer mutually atomic: ABC and DEF can interleave.

mutex_lock(&m1)
A
B
C
mutex_unlock(&m1)

mutex_lock(&m2)
D
E
F
mutex_unlock(&m2)

Atomicity by Spin Locks Spin locks make a thread "spin" (busy wait) until the lock is released, instead of putting the thread in the waiting state. Why do this? Using a mutex blocks a thread if it fails to obtain the lock and later unblocks it, and this has overhead. If the lock will be available soon, it is better to busy wait. Spin locks can provide better performance when locks are held for short periods of time.

Example of Spin Locks

#include <pthread.h>

int counter = 0; // Global counter
int m = 0;       // Spin lock variable

void increment_loop(int max) {
  for (int i = 0; i < max; i++) {
    spin_lock(&m);
    a) int tmp = counter;
    b) tmp = tmp + 1;
    c) counter = tmp;
    spin_unlock(&m);
  }
}

Spin Locks Example One possible interleaving (time runs downward):

T0 (main): join t1; join t2 (wait)
T1: for (…) spin_lock(&m): while (test_and_set(&m)) → oldval = 0 (m = 1), break out of the while; a) — (context switch)
T2: starts running; spin_lock(&m): while (test_and_set(&m)) → oldval = 1 (m == 1), continue in the while; thr_yield() — (context switch)
T1: b) c) (counter = 1); spin_unlock(&m): m = 0
T2: while (test_and_set(&m)) → oldval = 0, break out of the while; a) b) c)

Spin Locks vs. Mutex On a single CPU, it makes no sense to use spin locks. Why? While one thread spins, the thread holding the lock cannot run, so the lock cannot be released until a context switch happens anyway. Spin locks can be useful on multi-core/multi-CPU systems when locks are typically held for a short period of time. In kernel code, spin locks are useful for code that cannot be put to sleep (e.g., interrupt handlers).

Implementing Mutex Locks Using Spin Locks

mutex_lock(mutex) {
  spin_lock();
  if (mutex.lock) {
    mutex.queue(currentThread);
    spin_unlock();
    setWaitState();
    GiveUpCPU();
  } else {
    mutex.lock = true;
    spin_unlock();
  }
}

mutex_unlock(mutex) {
  spin_lock();
  if (mutex.queue.nonEmpty) {
    t = mutex.dequeue();
    t.setReadyState();
  } else {
    mutex.lock = false;
  }
  spin_unlock();
}

Test_and_set There is an instruction, test_and_set, that is guaranteed to be atomic. Pseudocode:

int test_and_set(int *v) {
  int oldval = *v;
  *v = 1;
  return oldval;
}

This instruction is implemented by the CPU; you don't need to implement it.

A Semi-Spin Lock Implemented Using test_and_set

int lock = 0;

void spinlock(int *lock) {
  while (test_and_set(lock) != 0) {
  }
}

void spinunlock(int *lock) {
  *lock = 0;
}

Review Questions What does the system need to maintain for each thread? Why would one want to use multiple threads? What are the pros and cons of using threads vs. processes? What is an atomic section? Why does disabling interrupts ensure atomicity on a single-CPU machine?

Review Questions What is the meaning of the "test and set" primitive? What is a mutex lock? What are the semantics of the lock and unlock calls on a mutex lock? How can mutex locks be used to achieve atomicity? The exam does not require spin locks or the implementation of mutex locks.