CSNB334 Advanced Operating Systems 4




1 CSNB334 Advanced Operating Systems 4
Concurrency: Mutual Exclusion and Synchronization
Asma Shakil

2 Concurrency
Concurrency is the simultaneous execution of threads. The system must support concurrent execution of threads.
Scheduling deals with the execution of "unrelated" threads; concurrency deals with the execution of "related" threads.
Why is it necessary?
Cooperation: one thread may need to wait for the result of some operation done by another thread, e.g. "calculate average" must wait until all "data reads" are completed.
Competition: several threads may compete for exclusive use of resources, e.g. two threads trying to increment the value of a memory location.

3 Concurrency
    Thr A               Thr B
    load mem, reg       load mem, reg
    inc reg             inc reg
    store reg, mem      store reg, mem
Critical section: all three instructions of one thread must be executed before the other thread's.

4 Mutual Exclusion
If one thread is going to use a shared resource (critical resource) — a file, a variable, a printer, a register, etc. — the other thread must be excluded from using the same resource.
Critical resource: a resource for which sharing by the threads must be controlled by the system.
Critical section of a program: a part of a program where access to a critical resource occurs.

5 Concurrency

6 Concurrency

7 Concurrency Two problems related to concurrency control

8 Concurrency

9 Concurrency Mutual Exclusion Mechanism

10 Concurrency requirements
Among all threads that have CSs for the same resource:
- Only one thread at a time is allowed into its CS.
- It must not be possible for a thread requiring access to a CS to be delayed indefinitely: no deadlock, no starvation.
- When no thread is in a CS, any thread requesting entry to the CS must be granted permission without delay.
- No assumptions are made about the relative thread speeds or the number of processors.
- A thread remains inside its CS for a finite time only.

11 Concurrency
The responsibility of mutual exclusion can be satisfied in a number of ways:
- Leave it to the processes.
- Use special-purpose machine instructions.
- Provide some support within the OS: semaphores, message passing, monitors, etc.
It is the responsibility of the OS (not the programmer) to enforce mutual exclusion.

12 Mutual Exclusion: Hardware Support
Interrupt disabling: on a single-processor machine, threads can interleave only by way of interrupts. If it is guaranteed that no interrupt occurs while a thread is in its CS, then no other thread can enter the same CS.
- Simplest solution.
- But it is not desirable to give a thread the power to control interrupts.
- In a multiprocessor environment, it does not work.
- This approach is often used by some OS threads (because their critical sections are short).

13 Mutual Exclusion: OS Support (Semaphores)
A semaphore is a non-negative integer variable. Its value is initialized, and it can be changed by two "atomic" operations:
WAIT (P): wait until the value is greater than 0, then decrement it by 1. (A waiting thread is moved to a wait queue.)
SIGNAL (V): increment the value by 1. (If a thread is waiting on that semaphore, it is woken up and continues.)

14 Semaphores

15 Semaphores

16 Semaphores

17 Semaphores

18 Synchronization

19 Synchronization

20 Synchronization

21 Synchronization

22 Synchronization
Only one thread can access the buffer at a time.
The order of WAIT calls is crucial: e.g. if WAIT(Mutex) came before WAIT(SlotFree) in the producer algorithm, the system would deadlock when the buffer is full.

23 Implementation of Semaphores
Semaphore operations must be atomic. There must be a queue mechanism for putting a waiting thread into a queue and waking it up later; the scheduler must be involved.
Define a semaphore as a record:

    typedef struct {
        int value;
        struct process *PList;   /* queue of waiting processes */
    } semaphore;

Assume two simple operations:
block suspends the process that invokes it.
wakeup(P) resumes the execution of a blocked process P.

24 Implementation of Semaphores
Semaphore operations are now defined as:

    wait(S):
        S.value--;
        if (S.value < 0) {
            add this process to S.PList;
            block;
        }

    signal(S):
        S.value++;
        if (S.value <= 0) {
            remove a process P from S.PList;
            wakeup(P);
        }

Note that S.value can be negative with this implementation; a negative value indicates the number of processes waiting on S.

25 Semaphores
The semaphore mechanism is handled by the OS. Writing correct semaphore algorithms is a complex task.
All threads using the same semaphore are assumed to have the same priority; the implementation does not take priority into account.

26 Readers-Writers
Reader tasks and writer tasks share a resource, say a database. Many readers may access the database without fear of data corruption (interference). However, only one writer may access the database at a time; all other readers and writers must be "locked" out of the database.
Solutions:
- The simple solution gives priority to readers: readers enter the CS regardless of whether a writer is waiting, so writers may starve.
- The second solution requires that once a writer is ready, it gets to perform its write as soon as possible, so readers may starve.

27 Readers-Writers Problem

    semaphore mutex = 1, wrt = 1;
    int rdrcnt = 0;

    Reader:
        wait(mutex);          // enter rdrcnt C.S.
        rdrcnt++;
        if (rdrcnt == 1)
            wait(wrt);        // first reader takes the writer lock
        signal(mutex);        // exit rdrcnt C.S.
        ... reading is performed ...
        wait(mutex);          // enter rdrcnt C.S.
        rdrcnt--;
        if (rdrcnt == 0)
            signal(wrt);      // last reader releases the writer lock
        signal(mutex);

    Writer:
        wait(wrt);            // get exclusive lock
        ... modify object ...
        signal(wrt);          // release exclusive lock

28 Concurrency in the Linux Kernel
The arrival of SMP machines and the facility of kernel preemption raise the concern of protecting shared kernel resources from concurrent access. A code area that accesses shared resources is called a critical section.
Three mechanisms:
- Spinlocks
- Mutexes
- Semaphores (old-style)

29 Spinlocks
A spinlock ensures that only a single thread enters a critical section at a time. Any other thread has to remain spinning at the door until the first thread exits.

    #include <linux/spinlock.h>

    spinlock_t mylock = SPIN_LOCK_UNLOCKED;  /* initialize (newer kernels use DEFINE_SPINLOCK(mylock)) */

    spin_lock(&mylock);      /* acquire the spinlock */
    /* ... critical section code ... */
    spin_unlock(&mylock);    /* release the spinlock */

Wasteful in terms of CPU resources.

30 Mutexes
Threads are put to sleep on a wait queue, as opposed to spinning, when the mutex they try to acquire is locked. When releasing a mutex, the kernel checks to see if anyone is sleeping on it; if so, the first thread on the wait queue is woken up.

    #include <linux/mutex.h>

    static DEFINE_MUTEX(mymutex);  /* declare a mutex */

    mutex_lock(&mymutex);          /* acquire the mutex */
    /* ... critical section code ... */
    mutex_unlock(&mymutex);        /* release the mutex */

31 The Old Semaphore Interface
Replaced by the mutex interface.

    #include <asm/semaphore.h>

    static DECLARE_MUTEX(mysem);   /* declare a semaphore */

    down(&mysem);                  /* acquire the semaphore */
    /* ... critical section code ... */
    up(&mysem);                    /* release the semaphore */

32 Example 1 A simple readers/writers program using a one-word shared memory. read-write-1.c

33 mmap() system call To memory map a file, use the mmap() system call, which is defined as follows: void *mmap(void *addr, size_t len, int prot, int flags, int fildes, off_t off);

34 mmap() parameters
addr: the address we want the file mapped into.
len: the length of the data we want to map into memory. This can be any length you want (rounded up to the page size).
prot: the "protection" argument allows you to specify what kind of access this process has to the memory-mapped region: PROT_READ, PROT_WRITE, and PROT_EXEC, for read, write, and execute permissions, respectively.
flags: MAP_SHARED if you want to share your changes to the file with other processes, or MAP_PRIVATE otherwise. With the latter, your process gets a copy of the mapped region, so any changes you make to it will not be reflected in the original file — other processes will not be able to see them.
fildes: the file descriptor opened earlier.
off: the offset in the file that you want to start mapping from. A restriction: this must be a multiple of the virtual memory page size, which can be obtained with a call to getpagesize().
mmap() returns MAP_FAILED on error and sets errno; otherwise, it returns a pointer to the start of the mapped data.

35 [Figure: process address space — text, bss, heap, a mapped region of the file (offset off, length len), and stack — shown alongside the mapped file on disk]

36 An Example of using mmap()

    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/mman.h>
    #include <fcntl.h>

    int fd, pagesize;
    char *data;

    fd = open("foo", O_RDONLY);   /* open(), not fopen(): mmap needs a file descriptor */
    pagesize = getpagesize();
    data = mmap((caddr_t)0, pagesize, PROT_READ, MAP_SHARED, fd, pagesize);

Once this code has run, you can access the first byte of the mapped section of the file using data[0].

37 Annotations for read-write-1.c:
The mmap procedure (from <sys/mman.h>) sets up a shared memory segment and returns the base address for that segment. It has the following form:

    base_address = mmap(0, num_bytes, protection, flags, -1, 0);

The second parameter, num_bytes, specifies the number of bytes to be allocated for the new segment. The third parameter, protection, specifies whether the segment may be used for reading, writing, executing, or another purpose; for typical shared memory, both read and write permission are specified using the combination PROT_READ | PROT_WRITE. In read-write-1.c, the combination MAP_ANONYMOUS | MAP_SHARED in the fourth parameter indicates that a new memory segment should be allocated (rather than allocating space from a file descriptor) and that all writes to the memory segment should be shared with other processes. The value -1 in the next-to-last parameter indicates a new, internal file descriptor is needed; the segment will not be part of an existing file.

38 Example 2 A simple readers/writers program using a shared buffer and spinlocks read-write-2.c

39 Annotations for read-write-2.c:
A logical buffer is allocated in shared memory, and buffer indexes, in and out, are used to identify where data will be stored or read by the writer or reader process. More specifically, *in gives the next free place in the buffer for the writer to enter data, and *out gives the first place in the buffer for the reader to extract data. Writing to the buffer may continue unless the buffer is full (i.e., (*in + 1) % BUF_SIZE == *out), and reading from the buffer may proceed unless the buffer is empty (i.e., *in == *out). Both conditions are tested in spin locks.

