
1 Concurrency

2 Levels of concurrency
- Instruction: machine
- Statement: programming language
- Unit/subprogram: programming language
- Program: machine, operating system

3 Kinds of concurrency
- Co-routines: multiple execution sequences, but only one executing at a time
- Physical concurrency: separate instruction sequences executing at the same time
- Logical concurrency: time-shared simulation of physical concurrency

4 Subprogram call compared to unit-level concurrency
- Procedure call (sequential): B calls A; B is suspended while A runs and resumes only when A ends.
- Task invocation (concurrent): B invokes A; A starts and both A and B continue executing.

5 Synchronization of concurrent tasks
- Disjoint: tasks A and B do not interact
- Cooperative: tasks A and B work together, e.g., copying between media of different access speeds
- Competitive: tasks A and B compete for a shared resource C and may block each other, e.g., updating elements of a shared data set

6 A competitive synchronization problem example
Modify a bank account with balance $200. Transaction task A deposits $100; transaction task B withdraws $50.
- Sequence I: A fetch 200, A add 100, A store 300, B fetch 300, B subtract 50, B store 250 (correct: 250)
- Sequence II: A fetch 200, B fetch 200, A add 100, A store 300, B subtract 50, B store 150 (incorrect: A's deposit is lost)
- Sequence III: B fetch 200, B subtract 50, B store 150, A fetch 150, A add 100, A store 250 (correct: 250)
Each task should have exclusive access to the account while it updates it.
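The lost update of Sequence II can be reproduced directly in Java. This is a minimal sketch for illustration only; the RaceDemo class and its fields are invented here, not part of the slides. Two threads perform an unsynchronized read-modify-write on a shared balance, so an unlucky interleaving loses one of the updates.

class RaceDemo {
    static int balance = 200;                       // shared account balance

    public static void main(String[] args) throws InterruptedException {
        Thread deposit  = new Thread(() -> { int old = balance; balance = old + 100; });
        Thread withdraw = new Thread(() -> { int old = balance; balance = old - 50; });
        deposit.start(); withdraw.start();
        deposit.join(); withdraw.join();
        // Usually prints 250, but a Sequence II style interleaving can leave 300 or 150.
        System.out.println("balance = " + balance);
    }
}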

7 (Task) Scheduler
- Allocates tasks to processor(s) for a period of time (a 'time slice')
- Tracks which tasks are ready to run (e.g., not blocked)
- Maintains a priority queue of ready tasks
- Allocates the next task when a processor is free

8 Concurrency control structures
- Create, start, stop, and destroy tasks
- Provide mutually exclusive access to shared resources
- Make competing and cooperating tasks wait (for a shared resource or other action)
Three models:
1. Semaphores
2. Monitors
3. Message passing

9 Scheduler: states a task can be in
new, runnable, running, blocked, and dead; a blocked task is in danger of deadlock if the tasks it waits on are themselves waiting.

10 Semaphores
Control statements: wait(s) and release(s), where s is a semaphore.
Example: competition for the shared resource 'account'

task doDeposit
  loop
    get(amount)
    wait(accountAccess)
    deposit(amount, account)
    release(accountAccess)
  end loop

11 Concurrent processes
Two doDeposit tasks execute the same code concurrently:

task doDeposit
  loop
    get(amount)
    wait(accountAccess)
    deposit(amount, account)
    release(accountAccess)
  end loop

At run time each task's critical section, wait(s) ... deposit ... release(s), executes one after the other, so the deposits never overlap.
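Java's java.util.concurrent.Semaphore offers the same wait/release pattern under the names acquire/release. A hedged sketch of the doDeposit idea; the DepositTasks class, the account field, and the amounts are invented for illustration.

import java.util.concurrent.Semaphore;

class DepositTasks {
    static int account = 0;
    static final Semaphore accountAccess = new Semaphore(1);   // binary semaphore

    static void doDeposit(int amount) {
        accountAccess.acquireUninterruptibly();   // wait(accountAccess)
        try {
            account += amount;                    // deposit(amount, account)
        } finally {
            accountAccess.release();              // release(accountAccess)
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable many = () -> { for (int i = 0; i < 1000; i++) doDeposit(1); };
        Thread t1 = new Thread(many), t2 = new Thread(many);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(account);              // 2000 every run, thanks to mutual exclusion
    }
}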

12 Semaphores: cooperative synchronization
Example: a producer and a consumer sharing a buffer (queue)

task produce
  loop
    getTransaction(amount)
    wait(queueNotFull)
    putQueue(amount)
    release(queueNotEmpty)
  end loop

task consume
  loop
    wait(queueNotEmpty)
    getQueue(amount)
    release(queueNotFull)
    doTransaction(amount)
  end loop

13 Complete processes

task produce
  loop
    getTransaction(amount)
    wait(queueNotFull)
    wait(queueAccess)
    putQueue(amount)
    release(queueNotEmpty)
    release(queueAccess)
  end loop

task consumeAndDoDep
  loop
    wait(queueNotEmpty)
    wait(queueAccess)
    getQueue(amount)
    release(queueNotFull)
    release(queueAccess)
    wait(accountAccess)
    deposit(amount, account)
    release(accountAccess)
  end loop
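The same three semaphores translate almost one for one into Java: two counting semaphores (queueNotFull, queueNotEmpty) for cooperation and a binary one (queueAccess) for competition. A sketch only; the TransactionQueue class, the capacity, and the method names are invented here.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Semaphore;

class TransactionQueue {
    static final int CAPACITY = 5;
    static final Deque<Integer> queue = new ArrayDeque<>();
    static final Semaphore queueNotFull  = new Semaphore(CAPACITY); // free slots
    static final Semaphore queueNotEmpty = new Semaphore(0);        // filled slots
    static final Semaphore queueAccess   = new Semaphore(1);        // mutual exclusion on the queue

    static void produce(int amount) {
        queueNotFull.acquireUninterruptibly();    // wait(queueNotFull)
        queueAccess.acquireUninterruptibly();     // wait(queueAccess)
        queue.addLast(amount);                    // putQueue(amount)
        queueNotEmpty.release();                  // release(queueNotEmpty)
        queueAccess.release();                    // release(queueAccess)
    }

    static int consume() {
        queueNotEmpty.acquireUninterruptibly();   // wait(queueNotEmpty)
        queueAccess.acquireUninterruptibly();     // wait(queueAccess)
        int amount = queue.removeFirst();         // getQueue(amount)
        queueNotFull.release();                   // release(queueNotFull)
        queueAccess.release();                    // release(queueAccess)
        return amount;                            // caller then deposits it into the account
    }
}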

14 Semaphore implementation
A semaphore is a counter plus a queue of waiting tasks. Example: queueNotFull, where the count is the available space in the transaction buffer.
- With count 5: wait(queueNotFull) decrements the count and the task proceeds; release(queueNotFull) increments the count
- With count 0: wait(queueNotFull) blocks the task, which joins the semaphore's waiting queue; release(queueNotFull) unblocks the first task on that queue
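This counter-plus-queue idea can be sketched in Java, where synchronized supplies the uninterruptible access and wait/notify supply the waiting queue. The class and method names below are chosen to match the slide, not any library.

// Illustrative counting semaphore: a counter plus an (implicit) queue of waiting tasks.
class CountingSemaphore {
    private int count;                       // e.g., available space in the transaction buffer

    CountingSemaphore(int initial) { count = initial; }

    synchronized void semWait() throws InterruptedException {   // wait(s)
        while (count == 0) {
            wait();                          // blocked: join the semaphore's waiting queue
        }
        count--;                             // count--, then proceed
    }

    synchronized void semRelease() {         // release(s)
        count++;                             // count++
        notify();                            // unblock one waiting task, if any
    }
}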

15 Semaphore problems
- A semaphore is itself a data structure, so it needs exclusive access too; it must be implemented with an 'uninterruptible' instruction set
- Vulnerable to deadlock: an omitted 'release'
- Vulnerable to data corruption or run-time errors: an omitted 'wait'
- Correct use cannot be checked statically, e.g., the waits and releases may be in different units

16 Monitors
- The (Concurrent) Pascal / Modula model of concurrency, from the late 1970s
- Keywords for concurrent tasks: process, init; for a shared data resource: monitor, entry, queue
- Competitive synchronization strategy: create a monitor containing all shared data and write entry procedures for accessing it; the monitor implicitly controls competitive access. Process tasks are then written using the monitor's procedures.
- A monitor is essentially an object

17 Monitor example: competitive synchronization

type account = monitor
  var bal: real;
  procedure entry deposit (dep: integer);
  begin
    bal := bal + dep
  end;
  procedure entry withdraw (wd: integer);
  begin
    bal := bal - wd
  end;
begin
  bal := 0.0
end;

type acctMgr = process(acct: account);
  var amt: real;
      request: integer;
begin
  cycle
    <...>
    if request = 0 then
      acct.deposit(amt)
    else
      acct.withdraw(amt);
  end
end;

18 Monitor example: competitive synchronization (main program)

var bankAcct: account;
    mgr1, mgr2, mgr3: acctMgr;
begin
  init bankAcct, mgr1(bankAcct), mgr2(bankAcct), mgr3(bankAcct);
end;
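Java objects provide a very similar monitor: synchronized methods play the role of the entry procedures, so only one manager task is inside the account at a time. A hedged sketch; the Account, AcctMgr, and Bank names simply mirror the Concurrent Pascal example and the iteration counts are invented.

class Account {                               // the monitor: all shared data lives here
    private double bal = 0.0;

    synchronized void deposit(double dep) { bal += dep; }   // 'entry' procedures:
    synchronized void withdraw(double wd) { bal -= wd; }    // one caller at a time
    synchronized double balance()         { return bal; }
}

class AcctMgr extends Thread {                // a process task using the monitor
    private final Account acct;
    AcctMgr(Account acct) { this.acct = acct; }

    public void run() {
        for (int i = 0; i < 100; i++) {       // stand-in for the request cycle
            acct.deposit(10);
            acct.withdraw(5);
        }
    }
}

class Bank {
    public static void main(String[] args) throws InterruptedException {
        Account bankAcct = new Account();
        AcctMgr mgr1 = new AcctMgr(bankAcct), mgr2 = new AcctMgr(bankAcct), mgr3 = new AcctMgr(bankAcct);
        mgr1.start(); mgr2.start(); mgr3.start();
        mgr1.join(); mgr2.join(); mgr3.join();
        System.out.println(bankAcct.balance());   // 1500.0, regardless of interleaving
    }
}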

19 Monitors and cooperative synchronization
- type queue: a semaphore-like object used inside a monitor
- Two procedures, delay and continue, similar to wait and release, BUT:
  - delay always blocks the calling process (task), so the programmer of the monitor must control its use
  - delay and continue override the monitor's access control

20 Monitor example (Sebesta, p. 531): cooperative synchronization
A monitor new_buffer: databuf contains buf: array ..., sender_q: queue, receiver_q: queue, and the procedures deposit and fetch. A producer: process(buffer) calls buffer.deposit(); a consumer: process(buffer) calls buffer.fetch().
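The same databuf idea can be sketched as a Java monitor in which wait() and notifyAll() play the role of delay and continue on sender_q and receiver_q. The DataBuf name, the capacity, and the index bookkeeping are invented for this illustration.

class DataBuf {                                        // new_buffer: a monitor around a bounded buffer
    private final int[] buf = new int[10];
    private int count = 0, in = 0, out = 0;

    synchronized void deposit(int value) throws InterruptedException {
        while (count == buf.length) wait();            // delay the sender: buffer full
        buf[in] = value;
        in = (in + 1) % buf.length;
        count++;
        notifyAll();                                   // continue any delayed receiver
    }

    synchronized int fetch() throws InterruptedException {
        while (count == 0) wait();                     // delay the receiver: buffer empty
        int value = buf[out];
        out = (out + 1) % buf.length;
        count--;
        notifyAll();                                   // continue any delayed sender
        return value;
    }
}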

21 Monitor problems
- The central data structure model is not appropriate for distributed systems, a common environment for concurrency
- Terminating processes occurs only at the end of the program

22 Message passing
- A message is sent from a sender task to a receiver task and returned
- A message may have parameters: value, result, or value-result
- A message is passed when the tasks synchronize (both are blocked and need the message to continue)
- The time between send and return is the rendezvous (the sender task is suspended)

23 Concurrency with messages
If the receiver reaches its 'receive message' statement first, it blocks until the sender executes its message statement; if the sender sends first, it is suspended until the receiver is ready. Either way the two tasks meet, and the period during which the sender remains suspended while the message is handled is the rendezvous.

24 Example 1: receiver task structure (Ada)
Specification (pseudo-Ada-83):

task type acct_access is
  entry deposit (dep : in integer);
end acct_access;

Body:

task body acct_access is
  balance, count : integer;
begin
  loop
    accept deposit(dep : in integer) do
      balance := balance + dep;
    end deposit;
    count := count + 1;
  end loop;
end acct_access;

25 Extended example 2: receiver task
Specification:

task type acct_access is
  entry deposit (dep : in integer);
  entry getBal (bal : out integer);
end;

Body:

task body acct_access is
  balance, count : integer;
begin
  balance := 0;
  count := 0;
  loop
    select
      accept deposit (dep : in integer) do
        balance := balance + dep;
      end deposit;
    or
      accept getBal (bal : out integer) do
        bal := balance;
      end getBal;
    end select;
    count := count + 1;
  end loop;
end acct_access;

26 Important points
- The receiver is only ready to receive a message when execution reaches an 'accept' clause
- The sender is suspended until the accept's 'do..end' body is completed
- select is a guarded command: one of the eligible alternatives is selected at random
- Tasks can be both senders and receivers
- Pure receivers are 'servers'; pure senders are 'actors'
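Java has no Ada-style rendezvous, but java.util.concurrent.SynchronousQueue gives a rough analog of the picture above: put() blocks the sender until a receiver take()s the value, so the two tasks synchronize at the hand-off. A hedged sketch, not Ada semantics (there is no 'do..end' body executed on the sender's behalf); the class and variable names are invented.

import java.util.concurrent.SynchronousQueue;

class RendezvousSketch {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<Integer> channel = new SynchronousQueue<>();

        Thread receiver = new Thread(() -> {           // a pure receiver: a 'server'
            try {
                int balance = 0;
                for (int i = 0; i < 2; i++) {
                    int dep = channel.take();          // like reaching an 'accept' clause
                    balance += dep;
                }
                System.out.println("final balance = " + balance);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread sender = new Thread(() -> {             // a pure sender: an 'actor'
            try {
                channel.put(100);                      // blocks until the receiver takes it,
                channel.put(50);                       // i.e., the tasks meet at the hand-off
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        receiver.start(); sender.start();
        receiver.join(); sender.join();                // prints: final balance = 150
    }
}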

27 Message example (Sebesta, p. 540): cooperative/competitive synchronization
- BUF_TASK task (owns BUF: array ...): loop: select guarded deposit(k) or guarded fetch(k) end select; end loop
- PRODUCER task: loop: produce k; buffer.deposit(k); end loop
- CONSUMER task: loop: buffer.fetch(k); consume k; end loop

28 Messages to protected objects
- Tasks used only to protect shared data are slow (rendezvous are slow)
- Protected objects are a simpler, more efficient alternative
- Similar to monitors
- They distinguish write access (exclusive) from read access (shared); see also binary semaphores
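Java's ReadWriteLock makes the same write-exclusive / read-shared distinction. This is only an analogous locking discipline, not Ada protected objects; the ProtectedCounter class is invented for illustration.

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ProtectedCounter {
    private int value = 0;
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    void increment() {                     // write access: exclusive
        lock.writeLock().lock();
        try { value++; } finally { lock.writeLock().unlock(); }
    }

    int read() {                           // read access: shared among readers
        lock.readLock().lock();
        try { return value; } finally { lock.readLock().unlock(); }
    }
}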

29 Asynchronous messages
- No rendezvous
- The sender does not block after sending a message (and therefore does not know whether it has been executed)
- The receiver does not block if no message is there to be received (it continues with other processing)

30 Asynchronous messages
The sender's message statement completes immediately and the message is queued. If the receiver reaches its 'receive message' statement and no message is waiting, it simply continues; when a message is present, the receiver executes the message procedure.
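A non-blocking queue models this picture in Java: the sender's offer() never blocks (and the sender never learns when, or whether, the message is processed), and the receiver's poll() returns null instead of blocking when nothing is queued. The AsyncMailbox class and its method names are invented for this sketch.

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

class AsyncMailbox {
    private final Queue<String> messages = new ConcurrentLinkedQueue<>();

    void send(String msg) {                // sender does not block and does not wait for a result
        messages.offer(msg);               // the message is simply queued
    }

    void receiveOrDoOtherWork() {          // receiver does not block either
        String msg = messages.poll();      // null if no message is waiting
        if (msg != null) {
            System.out.println("handling " + msg);
        } else {
            System.out.println("no message; continuing with other processing");
        }
    }
}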

31 Ada 95's asynchronous 'select': a triggering alternative (the 'accept'ed message or a 'delay k') is paired, via 'then abort', with a sequence of statements that is abandoned if the trigger completes first.

32 Related features of Ada (p. 542)
- Task initiation and termination
  - Initiation is like a procedure call: account_access();
  - Termination occurs when the task is 'completed' (its code has finished or an exception was raised) and all dependent tasks have terminated, OR when it is stopped at a terminate clause and the calling task or procedure and its siblings are complete or at terminate

33 Related features of Ada (p. 542)
- Priorities for execution from the ready queue
  - pragma priority (System.priority.First) is a compiler directive
  - Priorities do not affect guarded-command selection

34 Related features of Ada (p. 542): binary semaphores built as tasks to protect data access (pseudocode)

task sem is
  entry wait;
  entry release;
end sem;

task body sem is
begin
  loop
    accept wait;
    accept release;
  end loop;
end sem;

aSem : sem;
aSem();
...
aSem.wait();
point.x = xi;
aSem.release();
...

35 Java concurrency: Threads
- Classes and interfaces: Thread, Runnable, ThreadGroup, ThreadDeath, Timer, TimerTask, Object

36 Creating a thread
Extend the Thread class:

class NewThread extends Thread { public void run() { /* work */ } }
NewThread n = new NewThread();
n.start();

Implement the Runnable interface:

class NewT implements Runnable { public void run() { /* work */ } }
NewT rn = new NewT();
Thread t = new Thread(rn);
t.start();

37 Terminating a thread
- stop(); // deprecated: throws a ThreadDeath object
- Set the thread reference to null

38 Thread states
- new: created but not yet started
- runnable: ready to run; start() moves a new thread here, yield() returns a running thread here
- running: chosen by the scheduler
- not runnable: after sleep(.), an I/O block, or wait(); back to runnable when the sleep times out, the I/O unblocks, or notify()/notifyAll() is called
- dead: the thread has terminated

39 Priorities
- The Thread class has constants and methods for managing priorities: setPriority, getPriority, MAX_PRIORITY, MIN_PRIORITY, NORM_PRIORITY
- Timeslicing is not part of the Java runtime scheduler; a running thread is only interrupted for a higher-priority thread
- yield() lets a thread give up the processor voluntarily

40 Competitive synchronization
- The synchronized keyword, on a method or a block of code, associates a lock with access to a resource
- The object acts like a monitor
- Locks are re-entrant: one synchronized method can call another on the same object without deadlock
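A sketch of the re-entrancy point: both methods below are synchronized on the same object, yet add() can call log() without deadlocking because the calling thread already holds the object's lock. The Ledger class and its methods are invented for this example.

class Ledger {
    private int total = 0;
    private int entries = 0;

    synchronized void add(int amount) {    // acquires this object's lock
        total += amount;
        log();                             // re-enters the same lock: no deadlock
    }

    synchronized void log() {              // also synchronized on 'this'
        entries++;
        System.out.println("entry " + entries + ", total " + total);
    }
}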

41 Cooperative synchronization
- wait(), notify(), and notifyAll() in the Object class are like delay and continue, or wait and release
- Example code: from the java.sun.com tutorial

42 Scheduling tasks
- Timer and TimerTask
- A Timer is a special thread for scheduling a task at some future time
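A minimal use of Timer and TimerTask, scheduling one task to run once after a delay and another repeatedly; the Reminder class and the delays are invented for illustration.

import java.util.Timer;
import java.util.TimerTask;

class Reminder {
    public static void main(String[] args) {
        Timer timer = new Timer();                         // a dedicated scheduling thread

        timer.schedule(new TimerTask() {                   // run once, 2 seconds from now
            public void run() { System.out.println("one-shot task"); }
        }, 2000);

        timer.scheduleAtFixedRate(new TimerTask() {        // run every second, starting immediately
            public void run() { System.out.println("periodic task"); }
        }, 0, 1000);

        // timer.cancel() would stop the scheduling thread once the tasks are no longer needed.
    }
}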

43 Thread groups
- All threads belong to groups
- The default group is main
- Thread groups form a hierarchy (like a directory structure)
- Access control (e.g., security) is managed through thread groups

44 Statement-level concurrency
- Concurrent execution with minimal communication
- Not useful without multiple processors
- SIMD (Single Instruction, Multiple Data): simpler, more restricted
- MIMD (Multiple Instruction, Multiple Data): more complex, more powerful

45 Example: array of points – find the closest to the origin

public int closest(Point[] p) {
  double minDist = Double.MAX_VALUE;
  int idx = 0;
  for (int i = 0; i < p.length; i++) {
    if (p[i].distance() < minDist) {     // distance from the origin
      minDist = p[i].distance();
      idx = i;
    }
  }
  return idx;
}

46 Example: array of points – find the closest to the origin – SIMD concurrent execution

public int closest(Point[] p) {
  double minDist = Double.MAX_VALUE;
  int idx = 0;
  double[] dist = new double[p.length];
  forall (int i = 0 : p.length)          // pseudo-code: all distances computed in parallel
    dist[i] = p[i].distance();
  for (int i = 0; i < p.length; i++) {
    if (dist[i] < minDist) {
      minDist = dist[i];
      idx = i;
    }
  }
  return idx;
}
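On a real JVM the pseudo-code forall step can be approximated with a parallel stream: every distance is computed independently (possibly on several cores), and a sequential reduction then picks the minimum. A hedged sketch under that substitution; the Closest and Point classes here are self-contained stand-ins for the slide's Point.

import java.util.stream.IntStream;

class Closest {
    static class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
        double distance() { return Math.sqrt(x * x + y * y); }   // distance from the origin
    }

    static int closest(Point[] p) {
        // 'forall' step: compute every distance independently, here via a parallel stream.
        double[] dist = new double[p.length];
        IntStream.range(0, p.length).parallel()
                 .forEach(i -> dist[i] = p[i].distance());

        // Sequential reduction: pick the index of the smallest distance.
        int idx = 0;
        for (int i = 1; i < p.length; i++) {
            if (dist[i] < dist[idx]) idx = i;
        }
        return idx;
    }

    public static void main(String[] args) {
        Point[] pts = { new Point(3, 4), new Point(1, 1), new Point(0, 2) };
        System.out.println(closest(pts));     // prints 1, the index of (1,1)
    }
}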

47 Sequential vs. concurrent execution (diagram: the sequential loop handles i = 0, 1, 2, ... one iteration at a time, while the concurrent version processes all values of i at once)

48 High Performance Fortran (HPF)
- Concurrency
  - FORALL: process elements in lockstep parallel
  - INDEPENDENT: the iterated statements can be run in any order
- Distribution to processors
  - DISTRIBUTE: a pattern for allocating array elements to processors
  - ALIGN: match the allocation of arrays with each other

49 FORALL: note the synchronization

FORALL ( I = 2 : 4 )
  A(I) = A(I-1) + A(I+1)
  C(I) = B(I) * A(I+1)
END FORALL

First, get all A(I-1) and A(I+1) and compute the sums; then assign the sums to all A(I). Next, get all B(I) and A(I+1) and compute the products; then assign the products to all C(I).

50 INDEPENDENT compiler directive

!HPF$ INDEPENDENT
DO J = 1, 3
  A(J) = A( B(J) )
  C(J) = A(J) * B( A(J) )
END DO

The directive declares the iterations independent, so it is OK to execute them in parallel.

51 DISTRIBUTE

!HPF$ DISTRIBUTE A(BLOCK,*)
!HPF$ DISTRIBUTE B(*,CYCLIC)
!HPF$ DISTRIBUTE C(BLOCK,BLOCK)
!HPF$ DISTRIBUTE D(CYCLIC(2),CYCLIC(3))

(The slide shows two-dimensional arrays distributed over 4 SIMD processors, indicated by colours.)

52 ALIGN – put corresponding elements on the same processor

!HPF$ ALIGN X(I,J) WITH W(J,I)
!HPF$ ALIGN Y(K) WITH W(K,*)

(The slide shows the resulting layout of X, W, and Y.)

