
1 CS252: Systems Programming Ninghui Li Based on Slides by Prof. Gustavo Rodriguez-Rivera Topic 10: Threads and Thread Synchronization

2 Introduction to Threads A thread is a path of execution. By default, a C/C++ program has one thread, called the "main thread", that starts at the main() function.
main() {
  ---
  printf("hello\n");
  ---
}

3 Introduction to Threads You can create multiple paths of execution using:
POSIX threads (standard): pthread_create(&thr_id, attr, func, arg)
Solaris threads: thr_create(stack, stack_size, func, arg, flags, &thr_id)
Windows: CreateThread(attr, stack_size, func, arg, flags, &thr_id)
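
For example, a minimal POSIX-threads sketch (not from the slides; the function and argument names are illustrative) that creates one extra thread and waits for it:

#include <pthread.h>
#include <stdio.h>

void *print_hello(void *arg) {
  printf("hello from thread %s\n", (char *)arg);
  return NULL;
}

int main() {
  pthread_t thr_id;
  // attr == NULL uses default attributes; the last argument is passed to the function
  pthread_create(&thr_id, NULL, print_hello, "t1");
  pthread_join(thr_id, NULL);   // wait for the thread to finish
  return 0;
}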

4 Introduction to Threads Every thread will have its own:
Stack
PC – program counter
Set of registers
State
Each thread will have its own function calls and local variables. The process table entry will have a stack, set of registers, and PC for every thread in the process.

5 Applications of Threads Concurrent Server Applications Assume a web server that receives two requests: First, one request from a computer connected through a modem that will take 2 minutes. Then another request from a computer connected to a fast network that will take 0.01 seconds. If the web server is single-threaded, the second request will be processed only after 2 minutes. In a multi-threaded server, two threads will be created to process both requests simultaneously. The second request will be processed as soon as it arrives.
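
A rough sketch of this thread-per-request idea (handle_request, the reply text, and the socket setup are hypothetical and omitted details, shown only to illustrate the pattern, not code from the slides):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

// Hypothetical per-request handler: serve one client, then let the thread exit.
void *handle_request(void *arg) {
  int fd = *(int *)arg;
  free(arg);
  const char *reply = "HTTP/1.0 200 OK\r\n\r\nhello\n";
  write(fd, reply, strlen(reply));
  close(fd);
  return NULL;
}

// Accept loop: each request gets its own thread, so a slow (modem) client
// does not delay a fast one. listen_fd is assumed already bound and listening.
void server_loop(int listen_fd) {
  for (;;) {
    int *client_fd = malloc(sizeof(int));
    *client_fd = accept(listen_fd, NULL, NULL);
    pthread_t t;
    pthread_create(&t, NULL, handle_request, client_fd);
    pthread_detach(t);  // no join needed; thread resources are released on exit
  }
}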

6 Application of Threads Taking Advantage of Multiple CPUs A program with only one thread can use only one CPU. If the computer has multiple cores, only one of them will be used. If a program divides the work among multiple threads, the OS will schedule a different thread in each CPU. This will make the program run faster.
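
As an illustration of dividing work among threads (a sketch, not from the slides; the array, its contents, and the thread count are made up), a summation split across four threads so the OS can schedule them on different cores:

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define LEN 1000000

static double data[LEN];
static double partial[NTHREADS];

static void *sum_part(void *arg) {
  int id = *(int *)arg;
  double s = 0;
  for (int i = id * (LEN / NTHREADS); i < (id + 1) * (LEN / NTHREADS); i++)
    s += data[i];
  partial[id] = s;   // each thread writes only its own slot, so no locking is needed
  return NULL;
}

int main() {
  pthread_t t[NTHREADS];
  int ids[NTHREADS];
  for (int i = 0; i < LEN; i++) data[i] = 1.0;
  for (int i = 0; i < NTHREADS; i++) {
    ids[i] = i;
    pthread_create(&t[i], NULL, sum_part, &ids[i]);
  }
  double total = 0;
  for (int i = 0; i < NTHREADS; i++) {
    pthread_join(t[i], NULL);
    total += partial[i];
  }
  printf("total=%f\n", total);
  return 0;
}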

7 Applications of Threads Interactive Applications Threads simplify the implementation of interactive applications that require multiple simultaneous activities. Assume an Internet telephone application with the following threads:
Player thread – receives packets from the Internet and plays them.
Capture thread – captures sound and sends the voice packets.
Ringer thread – receives incoming call requests and tells other phones when the phone is busy.
Having a single thread do all of this makes the code cumbersome and difficult to read.

8 Advantages and Disadvantages of Threads vs. Processes Advantages of Threads
Fast thread creation – creating a new path of execution is faster than creating a new process with a new virtual memory address space and open file table.
Fast context switch – context switching across threads is faster than across processes.
Fast communication across threads – threads communicate using global variables, which is faster and easier than processes communicating through pipes or files.

9 Advantages and Disadvantages of Threads vs. Processes Disadvantages of Threads
Threads are less robust than processes – if one thread crashes due to a bug in the code, the entire application goes down. If an application is implemented with multiple processes and one process goes down, the other processes keep running.
Threads have more synchronization problems – since threads modify the same global variables at the same time, they may corrupt the data structures. Synchronization through mutex locks and semaphores is needed to prevent this. Processes do not have this problem because each of them has its own copy of the variables.

10 Synchronization Problems with Multiple Threads Threads share the same global variables. Multiple threads can modify the same data structures at the same time. This can corrupt the data structures of the program. Even the simplest operations, like increasing a counter, may have problems when running multiple threads.

11 Example of Problems with Synchronization
// Global counter
int counter = 0;

void *increment_loop(void *arg) {
  int i;
  int max = *((int *)arg);
  for (i = 0; i < max; i++) {
    int tmp = counter;
    tmp = tmp + 1;
    counter = tmp;
  }
  return NULL;
}

12 Example of Problems with Synchronization
int main() {
  pthread_t t1, t2;
  int max = 10000000;
  void *ret;
  pthread_create(&t1, NULL, increment_loop, (void *)&max);
  pthread_create(&t2, NULL, increment_loop, (void *)&max);
  // wait until threads finish
  pthread_join(t1, &ret);
  pthread_join(t2, &ret);
  printf("counter total=%d\n", counter);
  return 0;
}

13 Example of Problems with Synchronization We would expect the final value of counter to be 10,000,000 + 10,000,000 = 20,000,000, but very likely the final value will be less than that (e.g., 12,804,354). A context switch from one thread to another may change the sequence of events, so the counter may lose some of the counts.

14 Example of Problems with Synchronization Both threads T1 and T2 run the same code (the labels a), b), c) are referenced in the following slides):
int counter = 0;
void increment_loop(int max) {
  for (int i = 0; i < max; i++) {
    a) int tmp = counter;
    b) tmp = tmp + 1;
    c) counter = tmp;
  }
}

15 Example of Problems with Synchronization One possible interleaving (time flows downward):
T0 (main): join t1 (wait)
T1: for(…)  a) tmp1 = counter  (tmp1 = 0)   (context switch)
T2: starts running
T2: a) tmp2 = counter  (tmp2 = 0)
T2: b) tmp2 = tmp2 + 1
T2: c) counter = tmp2            -> counter = 1
T2: a) b) c) … repeated          -> counter = 23
T2: (context switch)
T1: b) tmp1 = tmp1 + 1
T1: c) counter = tmp1            -> counter = 1

16 Example of Problems with Synchronization As a result, 23 of the increments are lost: T1 resets the counter variable to 1 after T2 has incremented it 23 times. Even if we use counter++ instead of a) b) c), we still have the same problem, because the compiler generates separate instructions that behave like a) b) c). Worse things happen to lists, hash tables, and other data structures in a multi-threaded program. The solution is to make certain pieces of the code atomic.

17 Atomicity Atomic section: a portion of the code that needs to appear to the rest of the system to occur instantaneously. Otherwise, corruption of the variables is possible. An atomic section is also sometimes called a "critical section".

18 Atomicity by Disabling Interrupts On a uniprocessor, an operation is atomic as long as a context switch does not occur during the operation. To achieve atomicity: disable interrupts upon entering the atomic section, and re-enable them upon leaving. Context switches cannot happen while interrupts are disabled. This is available only in kernel mode, so it is only used in kernel programming. Drawbacks: other interrupts may be lost, and it does not provide atomicity on a multiprocessor.
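
For concreteness, a sketch of what this looks like in Linux kernel code (local_irq_save/local_irq_restore are the kernel's interrupt disable/restore macros; the function name and the body are illustrative, not from the slides):

// Kernel-only sketch: disabling interrupts protects the section
// against a context switch on the local CPU, but not on other CPUs.
#include <linux/irqflags.h>

void update_shared_state(void)
{
  unsigned long flags;

  local_irq_save(flags);     /* disable interrupts, remember previous state */
  /* ... atomic section: no context switch can occur on this CPU ... */
  local_irq_restore(flags);  /* re-enable interrupts if they were enabled before */
}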

19 Achieving Atomicity in Concurrent Programs Our main goal is to learn how to write concurrent programs using synchronization tools. We also explain a bit about how these tools are implemented.
Concurrent program
High-level synchronization tools (mutex locks, spin locks, semaphores, condition variables, read/write locks)
Hardware support (interrupt disable/enable, test-and-set, and so on)

20 Atomicity by Mutex Locks Mutex locks are software mechanisms that enforce atomicity. Only one thread can hold a mutex lock at a time. When a thread tries to obtain a mutex lock that is held by another thread, it is put on hold (also described as put to sleep, put to wait, blocked, etc.). The thread is woken up when the lock is released.

21 Mutex Locks Usage
Declaration:
#include <pthread.h>
pthread_mutex_t mutex;
Initialize:
pthread_mutex_init(&mutex, attributes);
Start atomic section:
pthread_mutex_lock(&mutex);
End atomic section:
pthread_mutex_unlock(&mutex);

22 Example of Mutex Locks
#include <pthread.h>
int counter = 0;        // Global counter
pthread_mutex_t mutex;

void *increment_loop(void *arg) {
  int max = *((int *)arg);
  for (int i = 0; i < max; i++) {
    pthread_mutex_lock(&mutex);
    int tmp = counter;
    tmp = tmp + 1;
    counter = tmp;
    pthread_mutex_unlock(&mutex);
  }
  return NULL;
}

23 Example of Mutex Locks
int main() {
  pthread_t t1, t2;
  int max = 10000000;
  pthread_mutex_init(&mutex, NULL);
  pthread_create(&t1, NULL, increment_loop, (void *)&max);
  pthread_create(&t2, NULL, increment_loop, (void *)&max);
  // wait until threads finish
  pthread_join(t1, NULL);
  pthread_join(t2, NULL);
  printf("counter total=%d\n", counter);
  return 0;
}

24 Example of Mutex Locks One possible interleaving with the mutex (time flows downward):
T0 (main): join t1 (wait)
T1: for(…)  mutex_lock(&m)
T1: a) tmp1 = counter  (tmp1 = 0)   (context switch)
T2: starts running
T2: mutex_lock(&m)  (waits: the lock is held by T1)   (context switch)
T1: b) tmp1 = tmp1 + 1
T1: c) counter = tmp1            -> counter = 1
T1: mutex_unlock(&m)
T2: a) tmp2 = counter
T2: b) tmp2 = tmp2 + 1
T2: c) counter = tmp2

25 Example of Mutex Locks As a result, the steps a), b), c) are atomic, so the final counter total will be 10,000,000 + 10,000,000 = 20,000,000, regardless of whether context switches occur in the middle of a), b), c).

26 Mutual Exclusion Mutex locks enforce mutual exclusion of all code between lock and unlock:
Thread 1:
mutex_lock(&m);
A
B
C
mutex_unlock(&m);
Thread 2:
mutex_lock(&m);
D
E
F
mutex_unlock(&m);

27 Mutual Exclusion This means that the sequences ABC and DEF are each executed as an atomic block, without interleaving:
Time ------------------------>
T1 -> ABC        ABC
T2 ->      DEF        DEF
T3 ->                 ABC DEF

28 Mutual Exclusion If different mutex locks are used (m1 != m2), then the sections are no longer atomic with respect to each other: ABC and DEF can interleave.
Thread 1:
mutex_lock(&m1);
A
B
C
mutex_unlock(&m1);
Thread 2:
mutex_lock(&m2);
D
E
F
mutex_unlock(&m2);
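
A small sketch of when two different locks are what you want (the names are invented for illustration, not from the slides): two independent counters, each protected by its own mutex, so updates to one do not serialize updates to the other.

#include <pthread.h>

int counter_a = 0, counter_b = 0;
pthread_mutex_t ma = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t mb = PTHREAD_MUTEX_INITIALIZER;

// Only code holding the *same* lock is mutually exclusive:
// add_a() and add_b() can run at the same time without blocking each other.
void add_a(void) { pthread_mutex_lock(&ma); counter_a++; pthread_mutex_unlock(&ma); }
void add_b(void) { pthread_mutex_lock(&mb); counter_b++; pthread_mutex_unlock(&mb); }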

29 Atomicity by Spin Locks A spin lock makes a thread "spin" (busy-wait) until the lock is released, instead of putting the thread into a waiting state. Why do this? A mutex blocks a thread if it fails to obtain the lock and later unblocks it; this has overhead. If the lock will be available soon, it is better to busy-wait. Spin locks can therefore provide better performance when locks are held for short periods of time.

30 Example of Spin Locks
#include <pthread.h>
int counter = 0;   // Global counter
int m = 0;         // Spin lock variable

void increment_loop(int max) {
  for (int i = 0; i < max; i++) {
    spin_lock(&m);
    a) int tmp = counter;
    b) tmp = tmp + 1;
    c) counter = tmp;
    spin_unlock(&m);
  }
}

31 Spin Locks Example One possible interleaving (time flows downward):
T0 (main): join t1, join t2 (wait)
T1: for(…)  spin_lock(&m): while (test_and_set(&m)) -> oldval = 0 (m = 1), break out of the while
T1: a)   (context switch)
T2: starts running
T2: spin_lock(&m): while (test_and_set(&m)) -> oldval = 1 (m == 1), continue in the while
T2: thr_yield()   (context switch)
T1: b) c)            -> counter = 1
T1: spin_unlock(&m)  -> m = 0
T2: while (test_and_set(&m)) -> oldval = 0, break out of the while
T2: a) b) c)

32 Spin Locks vs. Mutex http://stackoverflow.com/questions/5869825/when-should-one-use-a-spinlock-instead-of-mutex On a single CPU, it makes no sense to use spin locks. Why? Spin locks can be useful on a multi-core/multi-CPU system when locks are typically held for short periods of time. In kernel code, spin locks can be useful for code that cannot be put to sleep (e.g., interrupt handlers).
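
POSIX also exposes spin locks directly through the pthread API; a minimal usage sketch (the shared counter is the illustrative variable from earlier slides, not code from this deck):

#include <pthread.h>

int counter = 0;
pthread_spinlock_t sl;

void init(void) {
  // PTHREAD_PROCESS_PRIVATE: the lock is used only by threads of this process
  pthread_spin_init(&sl, PTHREAD_PROCESS_PRIVATE);
}

void increment(void) {
  pthread_spin_lock(&sl);    // busy-waits instead of sleeping
  counter++;
  pthread_spin_unlock(&sl);
}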

33 Implementing Mutex Locks using Spin Locks
mutex_lock(mutex) {
  spin_lock();
  if (mutex.lock) {
    mutex.queue(currentThread);
    spin_unlock();
    setWaitState();
    GiveUpCPU();
  } else {
    mutex.lock = true;
    spin_unlock();
  }
}

mutex_unlock() {
  spin_lock();
  if (mutex.queue.nonEmpty) {
    t = mutex.dequeue();
    t.setReadyState();
  } else {
    mutex.lock = false;
  }
  spin_unlock();
}

34 Test_and_set There is an instruction test_and_set that is guaranteed to be atomic. Pseudocode:
int test_and_set(int *v) {
  int oldval = *v;
  *v = 1;
  return oldval;
}
This instruction is implemented by the CPU. You don't need to implement it.

35 A Semi-Spin Lock Implemented Using test_and_set
int lock = 0;

void spinlock(int *lock) {
  while (test_and_set(lock) != 0) {
    // spin: keep trying until the old value is 0
  }
}

void spinunlock(int *lock) {
  *lock = 0;
}
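
On a real toolchain, one way to get an atomic test-and-set from C is GCC's __sync builtins; a sketch of the same spin lock written with them (this assumes GCC or Clang, and is not part of the slides):

// __sync_lock_test_and_set atomically writes 1 and returns the old value;
// __sync_lock_release atomically writes 0 with release semantics.
void spinlock(int *lock) {
  while (__sync_lock_test_and_set(lock, 1) != 0) {
    // spin
  }
}

void spinunlock(int *lock) {
  __sync_lock_release(lock);
}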

36 Review Questions
What does the system need to maintain for each thread?
Why would one want to use multiple threads?
What are the pros and cons of using threads vs. processes?
What is an atomic section?
Why does disabling interrupts ensure atomicity on a single-CPU machine?

37 Review Questions
What is the meaning of the "test and set" primitive?
What is a mutex lock? What are the semantics of the lock and unlock calls on a mutex lock?
How do you use mutex locks to achieve atomicity?
The exam does not require spin locks or the implementation of mutex locks.

