CSCI1600: Embedded and Real Time Software Lecture 17: Concurrent Programming Steven Reiss, Fall 2015.


1 CSCI1600: Embedded and Real Time Software Lecture 17: Concurrent Programming Steven Reiss, Fall 2015

2 Arduino Programming is Simple  Single processor, no scheduling  Only concurrency is with interrupts  Not all embedded programming is that simple  More sophisticated systems use an RTOS  Provides many of the features of operating systems  Including  Multiple threads (on multiple cores)  Multiple processes  Scheduling (time-sliced, preemptive)

3 Concurrency is an Issue  Many things can go wrong  Deadlocks & race conditions  Memory or cache consistency  Other …  Consider  increment() { ++counter; }  multiple threads call increment  What can counter do?  Is it accurate?  Is it monotonic?

4 Synchronization  Synchronizing interrupts  Disable and enable interrupts  Interrupt priorities  Is this enough?  Declaring variables to be volatile  This isn’t enough for synchronizing threads

5 Mutex Synchronization Primitive  MUTEX  Holder has exclusive access until released  Defines an atomic region  C/C++: Process or system-based mutexes

6 Condition Variables & Monitors  Code region controlled by a mutex  Can contain arbitrary code  Again process or system wide  Wait/notify functionality  Wait – releases the lock  Notify – wakes up waiters  Waiter regains the lock  Very general  Difficult to reason about, predict

7 Semaphores  Synchronization counter  Can’t go below zero  Two operations: P (lock) and V (unlock)  Process or system based  P operation (lock)  If counter > 0, atomically decrement and continue  Otherwise wait  V operation (unlock)  Increment counter  If it was 0, wake up any waiters (one waiter fairly)

8 Synchronization Primitives  Other primitives  Read/write locks  Hardware: test-and-set, compare-and-swap, …  Any of these can be used to implement the others  Semaphores seem to be the most common  Different (older) implementation on Linux

9 Problem  Allow one (or several) tasks to send messages to another task  Messages might be fixed-sized data structures  Messages might be variable-sized data structures  Messages might be arbitrary text  What are the issues or problems?  Representing messages  Fix the size of the buffer or allow it to expand (fixed)  Who needs to wait (reader/writer/both)  Minimizing synchronization (and overhead)  Efficient implementation

10 Messages  Fixed size: easy  Variable size  Include the length  Either directly or indirectly  Fixed part + variable part  Fixed part includes length  Alternatives  Message as a sequence of chunks  Each containing its length and end/continue flag

11 General Approach  Use a fixed sized circular buffer  Start data pointer: first used space in the buffer  End data pointer: next free space in the buffer  Assume the buffer is in common (shared) memory  Extend circular buffer read/write routines  To handle variable length messages  To handle waiting

12 Write Operation

int write(char *data, int ln) {
  if (ln == 0) return 0;
  waitForWrite(ln);
  int aln = top_ptr - write_ptr;   /* contiguous space to end of buffer */
  if (aln > ln) aln = ln;
  memcpy(&buffer[write_ptr], data, aln);
  write_ptr += aln;
  int bln = ln - aln;              /* remainder that wraps around */
  if (write_ptr == top_ptr) write_ptr = 0;
  if (bln > 0) {
    memcpy(&buffer[write_ptr], data + aln, bln);
    write_ptr += bln;
  }
  return ln;
}

13 Wait for Write Operation

void waitForWrite(int ln) {
  for ( ; ; ) {
    int aln = read_ptr - write_ptr;  /* free space in the buffer */
    if (aln < 0) aln += top_ptr;
    if (aln > ln) return;
    /* wait here */
  }
}

14 Read Operation

int read(char *vd, int ln) {
  waitForRead(ln);
  int aln = top_ptr - read_ptr;    /* contiguous data to end of buffer */
  if (aln > ln) aln = ln;
  memcpy(vd, &buffer[read_ptr], aln);
  read_ptr += aln;
  int bln = ln - aln;              /* remainder that wrapped around */
  if (read_ptr == top_ptr) read_ptr = 0;
  if (bln > 0) {
    memcpy(vd + aln, &buffer[read_ptr], bln);
    read_ptr += bln;
  }
  return ln;
}

15 Wait for Read Operation

void waitForRead(int ln) {
  for ( ; ; ) {
    int aln = write_ptr - read_ptr;  /* data available in the buffer */
    if (aln < 0) aln += top_ptr;
    if (aln >= ln) return;
    /* wait here */
  }
}

16 What Synchronization is Needed  How many readers and writers?  If more than one, then the whole read/write operation has to be synchronized (atomic)  If multiple writers, might want to enforce a queue hierarchy  To ensure ordered processing  If only one of each, then read/write pointers are safe  Only need to worry about the wait routines

17 Implementing the Wait  How would you do this in Java?  What are the alternatives?

18 Using Semaphores  Assume a P/V operation with a count  What happens if you just do a P operation k times?  Most semaphores have such an operation  Write semaphore count starts with size of buffer  Then you don’t have to check space in the buffer  Read semaphore count starts with 0  Then you don’t have to check for data available to read

19 Busy Wait  Assume the reader is always running and generally faster than the writer  What happens if this is not the case?  Then the writer can busy wait  In terms of the code, this means doing nothing for wait  The reader can return 0 if nothing is there to process  Can you tell how often waits should occur?  Given probabilistic models of the reader and writer

20 Using a Condition Variable  This is the Java approach  If there is not enough space, wait  When freeing space, notify  This is wasteful – only want to notify if someone is waiting  Notify only if the queue is getting full  What does full mean?  Coding is a bit tricky, but it should be much faster

21 Synchronization Problems  Deadlock  Priority Inversion  Performance

22 Dining Philosopher’s Problem (Dijkstra ’71)

23 Priority Inversion  Thread τ1: L L L R x L  Thread τ2: L L … L  Thread τ3: L L L R x L … L  L: local CPU burst  R: resource required (mutual exclusion, guarded by x)

24 Example  Suppose that threads τ1 and τ3 share some data.  Access to the data is restricted using semaphore x:  each task executes the following code:  do local work (L)  sem_wait(x) (P(x))  access shared resource (R)  sem_post(x) (V(x))  do more local work (L)

25 Blocking  [Timeline diagram: low-priority τ3 acquires the resource first; when high-priority τ1 later requests it, τ1 is blocked until τ3 releases.]

26 The Middle Thread  [Timeline diagram: medium-priority τ2 preempts τ3 while τ3 still holds the resource, so τ1 remains blocked even though it has the highest priority.]

27 Unbounded Priority Inversion  [Timeline diagram: τ2 runs for an arbitrarily long stretch (t+2 through t+253), preventing τ3 from releasing the resource, so τ1 stays blocked for an unbounded time.]

28 Unbounded Priority Inversion  [Timeline diagram: with many medium-priority threads τ2-1 … τ2-n taking turns preempting τ3, the blocking of τ1 grows without bound.]

29 Homework  For next week (10/26)  Present your project to the class  Plans, progress to date  Project model  Tasks and model(s) for each task  Hand this in  Can be part of presentation

