Winter 2007, SEG2101. Chapter 10: Concurrency.


1 Chapter 10: Concurrency

2 Contents
Concept of concurrency; subprogram-level concurrency; semaphores; monitors; message passing; Java threads.

3 10.1: Concept of Concurrency
Concurrency can occur at the instruction level, the statement level, the unit level, and the program level. Concurrent execution of program units can take place either physically, on separate processors, or logically, in some time-sliced fashion on a single-processor computer system.

4 Multiprocessor Architecture
Late 1950s: the first computers with multiple processors appeared, with one general-purpose processor and one or more other processors used for input and output operations. Early 1960s: machines with multiple complete processors. Mid-1960s: machines with several identical partial processors that were fed certain instructions from a single instruction stream. SIMD (Single-Instruction Multiple-Data) versus MIMD (Multiple-Instruction Multiple-Data) machines; MIMD machines may be distributed or shared-memory.

5 Why Study Concurrency?
It provides a method of conceptualizing program solutions to problems. Multiprocessor computers are now widely used, creating the need for software that makes effective use of that hardware capability.

6 10.2: Subprogram-Level Concurrency
A task is a unit of a program that can be executed concurrently with other units of the same program. Each task in a program can provide one thread of control. A task can communicate with other tasks through shared nonlocal variables, through message passing, or through parameters.

7 Synchronization
A mechanism that controls the order in which tasks execute. Cooperation synchronization is required between task A and task B when task A must wait for task B to complete some specific activity before task A can continue its execution (e.g., the producer-consumer problem). Competition synchronization is required between two tasks when both require the use of some resource that cannot be used simultaneously.

8 The Need for Competition Synchronization

9 Critical Section
A segment of code in which a thread may be changing shared variables, updating a table, writing a file, and so on. The execution of critical sections by threads is mutually exclusive in time.

10 Task States
New: the task has been created but has not yet begun its execution. Runnable (ready): it is ready to run but is not currently running. Running: it is currently executing; it has a processor and its code is being executed. Blocked: it has been running, but its execution was interrupted by one of several different events. Dead: no longer active in any sense.

11 10.3: Semaphores
A semaphore is a data structure consisting of an integer and a queue that stores task descriptors; it is used to provide limited access to a data structure. It supports the P operation (from Dutch proberen, to test: decrement the integer) and the V operation (verhogen: increment the integer). The name comes from the signaling system in which messages are sent by holding the arms or two flags in certain positions according to an alphabetic code.

12 Semaphores (II)
Operating systems often distinguish between counting and binary semaphores. The value of a counting semaphore can range over an unrestricted domain; the value of a binary semaphore can range only between 0 and 1. The general strategy for using a binary semaphore S to control access to a critical section is: P(S); critical_section(); V(S);
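The P(S) ... V(S) pattern above can be sketched with `java.util.concurrent.Semaphore`, whose `acquire` and `release` correspond to P and V. This is a minimal illustration, not code from the slides; the class name, thread count, and loop count are made up for the demo.

```java
import java.util.concurrent.Semaphore;

public class BinarySemaphoreDemo {
    static final Semaphore s = new Semaphore(1); // one permit: a binary semaphore
    static int shared = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    try {
                        s.acquire();                 // P(S): wait, then take the permit
                        try { shared++; }            // critical section
                        finally { s.release(); }     // V(S): give the permit back
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println(shared); // 4000: no increments are lost
    }
}
```

Without the semaphore, the unprotected `shared++` could interleave between threads and lose updates.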

13 Cooperation Synchronization

14 Producer and Consumer

15 Competition Synchronization

16 Competition Synchronization (II)

17 Example: Semaphore in C
int sema_init(sema_t *sp, unsigned int count, int type, void *arg);
int sema_destroy(sema_t *sp);
int sema_wait(sema_t *sp);
int sema_trywait(sema_t *sp);
int sema_post(sema_t *sp);
(See sema_example.cpp.)

18 Deadlocks
A law passed by the Kansas legislature early in the 20th century: "... When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone. ..."

19 Deadlock

20 Conditions for Deadlock
Mutual exclusion: only one process at a time is allowed access to a dedicated resource. Hold-and-wait: there exists a process that is holding at least one resource and is waiting to acquire additional resources currently held by other processes. No preemption: resources cannot be temporarily reallocated; they can be released only voluntarily. Circular wait: a result of the above three conditions; each process involved in the impasse is waiting for another to voluntarily release a resource.

21 Deadlock Example

22 Deadlock Example (cont.)

23 Deadlock Example (cont.)

24 Strategies for Handling Deadlocks
Prevention: eliminate one of the necessary conditions. Avoidance: possible if the system knows ahead of time the sequence of resource requests associated with each active process. Detection: build directed resource graphs and look for cycles. Recovery: once a deadlock is detected, it must be untangled and the system returned to normal as quickly as possible, by process termination or resource preemption.
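Prevention can be illustrated by breaking the circular-wait condition: if every thread acquires locks in the same global order, no cycle of waits can form. A minimal Java sketch under that assumption (class and lock names are illustrative, not from the slides):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockOrderingDemo {
    static final ReentrantLock lockA = new ReentrantLock();
    static final ReentrantLock lockB = new ReentrantLock();
    static int transfers = 0;

    static void doWork() {
        // Every thread takes lockA before lockB. Since no thread ever holds
        // lockB while waiting for lockA, a circular wait cannot form.
        lockA.lock();
        try {
            lockB.lock();
            try {
                transfers++; // work requiring both resources
            } finally { lockB.unlock(); }
        } finally { lockA.unlock(); }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) doWork(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) doWork(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(transfers); // 2000: both threads finish, no deadlock
    }
}
```

If one thread instead took lockB first, both hold-and-wait and circular wait could occur and the program might hang.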

25 The Dining-Philosophers Problem

26 10.4: Monitors
A monitor presents a set of programmer-defined operations that provide mutual exclusion within the monitor. The internal implementation of a monitor type cannot be accessed directly by the various threads. A procedure defined within a monitor can access only those variables that are declared locally within the monitor and any formal parameters that are passed to its procedures.

27 Monitors (II)
The monitor construct prohibits concurrent access to all procedures defined within the monitor: only one thread or process can be active within the monitor at any one time. The programmer does not need to code this (competition) synchronization explicitly; it is built into the monitor type.

28 Competition Synchronization
One of the most important features of monitors is that shared data is resident in the monitor rather than in any of the client units. The programmer does not synchronize mutually exclusive access to shared data through the use of semaphores or other mechanisms. Because all accesses are resident in the monitor, the monitor implementation can guarantee synchronized access by simply allowing only one access at a time. Calls to monitor procedures are implicitly queued if the monitor is busy at the time of the call.

29 Cooperation Synchronization
Although mutually exclusive access to shared data is intrinsic to a monitor, cooperation between processes is still the task of the programmer. In particular, the programmer must guarantee that a shared buffer does not experience underflow or overflow.
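A monitor-style shared buffer can be sketched in Java: `synchronized` methods supply the competition synchronization (one thread active in the "monitor" at a time), while `wait`/`notifyAll` supply the cooperation synchronization that guards against underflow and overflow. This is a hypothetical sketch, not the slides' own shared-buffer example.

```java
public class BoundedBuffer {
    private final int[] items;
    private int count = 0, in = 0, out = 0;

    public BoundedBuffer(int capacity) { items = new int[capacity]; }

    // synchronized: only one thread can be inside deposit/fetch at a time.
    public synchronized void deposit(int x) throws InterruptedException {
        while (count == items.length) wait();  // buffer full: producer waits
        items[in] = x;
        in = (in + 1) % items.length;
        count++;
        notifyAll();                           // wake any waiting consumers
    }

    public synchronized int fetch() throws InterruptedException {
        while (count == 0) wait();             // buffer empty: consumer waits
        int x = items[out];
        out = (out + 1) % items.length;
        count--;
        notifyAll();                           // wake any waiting producers
        return x;
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedBuffer buf = new BoundedBuffer(2);
        buf.deposit(7);
        System.out.println(buf.fetch()); // 7
    }
}
```

The `while` loops (rather than `if`) re-test the condition after waking, which is the standard guarded-block idiom for monitors in Java.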

30 Example of Using a Monitor

31 Example: A Shared Buffer

32 Example: A Shared Buffer (II)

33 Example: A Shared Buffer (III)

34 10.5: Message Passing
Message passing means that one process sends a message to another process and then continues its local processing. The message may take some time to reach the other process and may be stored in the input queue of the destination process if the latter is not immediately ready to receive it. The message is then received when the destination process reaches a point in its local processing where it is ready to receive messages. This is called asynchronous message passing, because sending and receiving do not take place at the same time.

35 Asynchronous Message Passing
Blocking send and receive operations: a receiver is blocked if it arrives at the point where it may receive messages and no message is waiting; a sender may be blocked if there is no room in the message queue between the sender and the receiver (however, one often assumes arbitrarily long queues, in which case the sender is never blocked). Non-blocking send and receive operations: send and receive always return immediately, with a status value that may indicate that no message has arrived; the receiver may test whether a message is waiting and possibly do some other processing.
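Asynchronous message passing with an input queue can be sketched with `java.util.concurrent.LinkedBlockingQueue`: `put` acts as a send that never blocks on an unbounded queue, and `take` is a blocking receive. The class and message contents here are illustrative assumptions, not from the slides.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncMessaging {
    static String runDemo() throws InterruptedException {
        // The queue plays the role of the receiver's input queue.
        BlockingQueue<String> inbox = new LinkedBlockingQueue<>();

        Thread sender = new Thread(() -> {
            try {
                inbox.put("hello"); // send, then continue local processing
                inbox.put("world");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        sender.start();
        sender.join(); // sender has finished; the messages wait in the queue

        // The receiver picks the messages up later, at its own pace.
        return inbox.take() + " " + inbox.take();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo()); // prints "hello world"
    }
}
```

Because the queue buffers the messages, the sender never has to wait for the receiver to be ready, which is exactly the asynchronous behavior described above.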

36 Synchronous Message Passing
One assumes that sending and receiving take place at the same time, so there is no need for an intermediate buffer. This is also called rendezvous and implies closer synchronization: the combined send-and-receive operation can occur only if both parties (the sending and receiving processes) are ready to do their part. The sending process may have to wait for the receiving process, or the receiving process may have to wait for the sending one.
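A rendezvous can be sketched with `java.util.concurrent.SynchronousQueue`, which has no internal capacity: a `put` blocks until a matching `take` arrives, so the handover happens only when both parties are ready. The names and the value 42 are illustrative.

```java
import java.util.concurrent.SynchronousQueue;

public class RendezvousDemo {
    static int rendezvous() throws InterruptedException {
        // No buffer: put() and take() must meet in time.
        SynchronousQueue<Integer> channel = new SynchronousQueue<>();

        Thread sender = new Thread(() -> {
            try {
                channel.put(42); // blocks until the receiver is ready
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        sender.start();

        int received = channel.take(); // completes the rendezvous
        sender.join();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(rendezvous()); // 42
    }
}
```

Swapping `SynchronousQueue` for the buffered queue of the previous sketch is precisely the difference between synchronous and asynchronous message passing.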

37 Rendezvous

38 10.6: Java Threads
The Thread class; priorities; competition synchronization; cooperation synchronization; examples.

39 Multiple Threads
A thread is a flow of execution, with a beginning and an end, of a task in a program. With Java, multiple threads from a program can be launched concurrently. Multiple threads can be executed on multiprocessor systems as well as on single-processor systems. Multithreading can make a program more responsive and interactive, as well as enhance its performance.

40 Creating Threads by Extending the Thread Class (demo: thread)

41 Creating Threads by Implementing the Runnable Interface (demo: runnable)
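The two creation styles named above can be sketched side by side; the class names here are illustrative, not the slides' demo code.

```java
// Style 1: subclass Thread and override run().
class CounterThread extends Thread {
    static int runs = 0;
    @Override public void run() { runs++; }
}

// Style 2: implement Runnable and hand the task to a Thread.
class CounterTask implements Runnable {
    static int runs = 0;
    @Override public void run() { runs++; }
}

public class CreateThreads {
    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new CounterThread();           // a Thread subclass
        Thread t2 = new Thread(new CounterTask()); // a Thread wrapping a Runnable
        t1.start(); t2.start();                    // start(), not run(): launch new threads
        t1.join(); t2.join();
        System.out.println(CounterThread.runs + CounterTask.runs); // 2
    }
}
```

The Runnable style is usually preferred because the task class remains free to extend some other class.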

42 A Circular Queue

43 Java Semaphore Implementation

44 Thread States

45 Thread Groups
A thread group is a set of threads. Some programs contain quite a few threads with similar functionality; we can group them together and perform operations on the entire group. For example, we can suspend or resume all of the threads in a group at the same time.

46 Using Thread Groups
Construct a thread group: ThreadGroup g = new ThreadGroup("thread group");
Place a thread in a thread group: Thread t = new Thread(g, new ThreadClass(), "this thread");
Find out how many threads in a group are currently running: System.out.println("the number of runnable threads in the group: " + g.activeCount());
Find which group a thread belongs to: theGroup = myThread.getThreadGroup();

47 Synchronization Example: Showing a Resource Conflict
This program demonstrates the problem of resource conflicts. Suppose you create and launch 100 threads, each of which adds a penny to a piggy bank, and assume the piggy bank is initially empty. You create a class named PiggyBank to model the piggy bank, a class named AddPennyThread to add a penny to the piggy bank, and a main class that creates and launches the threads. (Demo: PiggyBank)

48 Competition Synchronization
Avoiding resource conflicts using a synchronized method (keyword: synchronized; demo: synchronized). Avoiding resource conflicts using synchronized statements (demo: sync_statement).
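A hypothetical reconstruction of the PiggyBank demo with the conflict fixed by a synchronized method (the actual demo source is not included in the transcript, so the structure below is an assumption):

```java
public class PiggyBankDemo {
    static class PiggyBank {
        private int pennies = 0;

        // synchronized makes the read-increment-write sequence atomic;
        // without it, concurrent deposits could interleave and lose updates.
        public synchronized void addPenny() { pennies++; }

        public synchronized int getPennies() { return pennies; }
    }

    public static void main(String[] args) throws InterruptedException {
        PiggyBank bank = new PiggyBank();
        Thread[] threads = new Thread[100];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(bank::addPenny); // each thread adds one penny
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(bank.getPennies()); // 100 with synchronization
    }
}
```

An equivalent fix with a synchronized statement would wrap only `pennies++` in `synchronized (this) { ... }` inside an otherwise unsynchronized method.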

49 Cooperation Synchronization
Cooperation synchronization in Java is accomplished using the wait and notify methods defined in Object, the root class of all Java classes. The wait method is placed in a loop that tests the condition for legal access; if the condition is false, the thread is put in a queue to wait. The notify method is called to tell one waiting thread that the thing it was waiting for has happened. wait and notify can be called only from within a synchronized method.

50 Priorities
The priorities of threads need not all be the same. If main creates a thread, its default priority is NORM_PRIORITY (5); the extremes are MAX_PRIORITY (10) and MIN_PRIORITY (1). setPriority(int) changes a thread's priority; getPriority() returns it. (Demo: TestThread_Priority)

