Presentation on theme: "CSCI1600: Embedded and Real Time Software"— Presentation transcript:

1 CSCI1600: Embedded and Real Time Software
Lecture 19: Concurrent Programming
Steven Reiss, Fall 2017

2 Arduino Programming is Simple
Single processor, no scheduling; the only concurrency is with interrupts
Not all embedded programming is that simple
More sophisticated systems use an RTOS
  Provides many of the features of an operating system
  Available on the Arduino as well (FreeRTOS)
  Including multiple threads (on multiple cores), multiple processes, and scheduling (time-sliced, preemptive)

3 Concurrency is an Issue
Many things can go wrong
  Deadlocks & race conditions
  Memory or cache consistency
  Other …
Consider: increment() { ++counter; } with multiple threads calling increment()
  What can counter do? Is it accurate? Is it monotonic?
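As a concrete illustration (not from the slides), a minimal C++ sketch of this race: two threads hammer the plain counter and usually lose updates, while an atomic counter stays accurate.

   #include <atomic>
   #include <iostream>
   #include <thread>

   static int counter = 0;                    // plain shared counter: ++ is a read-modify-write, not atomic
   static std::atomic<int> safe_counter{0};   // atomic counter for comparison

   void increment()      { ++counter; }
   void safe_increment() { ++safe_counter; }

   int main() {
      auto work = [](void (*fn)()) { for (int i = 0; i < 1000000; ++i) fn(); };
      std::thread t1(work, increment),      t2(work, increment);
      t1.join(); t2.join();
      std::thread t3(work, safe_increment), t4(work, safe_increment);
      t3.join(); t4.join();
      // counter usually ends up well below 2000000 (lost updates), and a stale
      // write can even make it appear to go backwards; safe_counter is exactly 2000000.
      std::cout << counter << " vs " << safe_counter << "\n";
   }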

4 Synchronization
Synchronizing interrupts
  Disable and enable interrupts
  Interrupt priorities
  Is this enough? Declaring variables to be volatile
This isn’t enough for synchronizing threads
  Or interruptible or time-sliced tasks
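As an illustration only (the pin and the ISR are made up, not from the slides), an Arduino-style sketch of the usual pattern: the flag shared with the interrupt handler is declared volatile, and the main loop disables interrupts briefly to read and clear it atomically.

   volatile bool event_seen = false;     // shared with the ISR, so declared volatile

   void onPinChange() {                  // interrupt handler: keep it short
     event_seen = true;
   }

   void setup() {
     pinMode(2, INPUT_PULLUP);           // hypothetical input pin
     attachInterrupt(digitalPinToInterrupt(2), onPinChange, FALLING);
   }

   void loop() {
     noInterrupts();                     // atomic region: read and clear the flag
     bool seen = event_seen;
     event_seen = false;
     interrupts();
     if (seen) {
       // handle the event outside the critical section
     }
   }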

5 Memory synchronization
Main program:
   disable()
   x = 0
   enable()
   while (x == 0) wait();
Interrupt handler:
   x = 1
   return;
Unless x is declared volatile, the compiler may keep x in a register and the loop may never see the handler’s store.

6 Mutex
Synchronization primitive
  Holder has exclusive access until the mutex is released
  Defines an atomic region
C/C++: process- or system-based mutexes
  Provided by the operating system
  Different operating systems provide different primitives
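A minimal sketch (standard C++, names ours) of a mutex protecting the counter from slide 3: the lock_guard makes the increment an atomic region.

   #include <mutex>
   #include <thread>

   static int counter = 0;
   static std::mutex counter_lock;

   void increment() {
      std::lock_guard<std::mutex> guard(counter_lock);  // acquire; released when guard leaves scope
      ++counter;                                        // atomic region: only the holder executes this
   }

   int main() {
      std::thread t1([] { for (int i = 0; i < 100000; ++i) increment(); });
      std::thread t2([] { for (int i = 0; i < 100000; ++i) increment(); });
      t1.join(); t2.join();                             // counter is now exactly 200000
   }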

7 Condition Variables & Monitors
Code region controlled by a mutex
  Can contain arbitrary code
  Again, process- or system-wide
Wait/notify functionality
  Wait – releases the lock
  Notify – wakes up waiters
  The waiter regains the lock before continuing
Very general
  Difficult to reason about, predict
What do you do with these? Where do you see these? (Java)
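A minimal C++ sketch of the wait/notify pattern (Java's synchronized/wait/notify is the direct analogue); the queue and the names are just for illustration.

   #include <condition_variable>
   #include <mutex>
   #include <queue>

   static std::mutex mtx;
   static std::condition_variable cv;
   static std::queue<int> items;

   void produce(int v) {
      { std::lock_guard<std::mutex> lock(mtx);   // monitor region
        items.push(v); }
      cv.notify_one();                           // wake up one waiter
   }

   int consume() {
      std::unique_lock<std::mutex> lock(mtx);
      cv.wait(lock, [] { return !items.empty(); });  // wait: releases the lock while blocked,
                                                     // regains it before returning
      int v = items.front();
      items.pop();
      return v;
   }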

8 Semaphores
Synchronized counter
  Can’t go below zero
  Two operations: P (lock) and V (unlock)
  Process or system based
P operation (lock)
  If counter > 0, atomically decrement and continue
  Otherwise wait
V operation (unlock)
  Increment counter
  If it was 0, wake up any waiters (one waiter, fairly)
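A minimal C++20 sketch of P and V using std::counting_semaphore (acquire is P, release is V); the initial count of 3 is arbitrary, standing in for three available resources.

   #include <semaphore>

   std::counting_semaphore<3> slots(3);   // synchronized counter: starts at 3, never goes below zero

   void worker() {
      slots.acquire();    // P: if the count > 0, atomically decrement and continue; otherwise wait
      // ... use one of the three shared resources ...
      slots.release();    // V: increment the count, waking a waiter if any
   }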

9 Synchronization Primitives
Other primitives
  Read/write locks
  Hardware: test-and-set, compare-and-swap, …
All of these can be used to implement the others
Semaphores seem to be the most common
  Different (older) implementation on Linux
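For example, a minimal sketch of building a lock out of a test-and-set primitive: std::atomic_flag::test_and_set is exactly such an operation, and spinning on it gives a simple busy-waiting spinlock (C++20 assumed for the default-clear constructor).

   #include <atomic>

   class SpinLock {
      std::atomic_flag flag;   // clear = unlocked (default-initialized to clear in C++20)
   public:
      void lock() {
         // test-and-set: atomically set the flag and return its previous value;
         // keep spinning until the previous value was clear, i.e. we acquired the lock
         while (flag.test_and_set(std::memory_order_acquire)) { /* busy wait */ }
      }
      void unlock() {
         flag.clear(std::memory_order_release);
      }
   };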

10 Problem
Allow one (or several) tasks to send messages to another task
  Messages might be fixed-size data structures
  Messages might be variable-size data structures
  Messages might be arbitrary text
What are the issues or problems?
  Representing messages
  Fix the size of the buffer or allow it to expand (fixed here)
  Who needs to wait (reader/writer/both)
  Minimizing synchronization (and overhead)
  Efficient implementation

11 Problem (diagram slide)

12 Messages
Fixed size: easy
Variable size
  Include the length, either directly or indirectly
  Fixed part + variable part; the fixed part includes the length
Alternatives
  Message as a sequence of chunks
  Each chunk containing its length and an end/continue flag
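A minimal sketch (our own layout, not the course's) of the fixed-part-plus-variable-part idea: a small header that carries the length, followed by the payload bytes.

   #include <cstdint>

   // Fixed part, carried at the front of every message.
   struct MessageHeader {
      uint16_t length;       // number of payload bytes that follow
      uint8_t  type;         // application-defined message type
      uint8_t  last_chunk;   // end/continue flag if a message is split into chunks
   };
   // A message in the buffer is then: a MessageHeader, followed by `length` payload bytes.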

13 General Approach
Use a fixed-size circular buffer
  Start data pointer: first used space in the buffer
  End data pointer: next free space in the buffer
  Assume the buffer is in common (shared) memory
Is this sufficient?
Extend the circular-buffer read/write routines
  To handle variable-length messages
  To handle waiting
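The write and read routines on the next slides assume shared state along these lines (the names match the slide code; the buffer size is arbitrary):

   #define BUF_SIZE 1024

   static char buffer[BUF_SIZE];       // circular buffer, assumed to be in shared memory
   static int  read_ptr  = 0;          // start data pointer: first used byte
   static int  write_ptr = 0;          // end data pointer: next free byte
   static int  top_ptr   = BUF_SIZE;   // one past the last slot; pointers wrap back to 0 here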

14 Write Operation
int write(void *data, int len) {
   if (len == 0) return 0;
   waitForWrite(len);                       // block until there is room for len bytes
   int aln = top_ptr - write_ptr;           // contiguous space up to the end of the buffer
   if (aln > len) aln = len;
   memcpy(&buffer[write_ptr], data, aln);   // first (possibly only) piece
   write_ptr += aln;
   int bln = len - aln;                     // bytes left over after the wrap
   if (write_ptr == top_ptr) write_ptr = 0;
   if (bln > 0) {
      memcpy(&buffer[write_ptr], (char *) data + aln, bln);   // wrapped piece at the start
      write_ptr += bln;
   }
   return len;
}

15 Wait for Write Operation
void waitForWrite(int len) {
   for ( ; ; ) {
      int aln = read_ptr - write_ptr;   // free space in front of the writer
      if (aln <= 0) aln += top_ptr;     // <= 0: an empty buffer (read_ptr == write_ptr) is all free
      if (aln > len) return;            // strictly greater: always leave one byte free so that
                                        // a full buffer is distinguishable from an empty one
      // wait here
   }
}
We’ll talk about the actual wait later on

16 Read Operation
int read(void *vd, int len) {
   waitForRead(len);                     // block until len bytes are available
   int aln = write_ptr - read_ptr;       // contiguous data, if the writer is ahead of us
   if (aln > len) aln = len;
   else {
      aln = top_ptr - read_ptr;          // data wraps: first take what runs to the end of the buffer
      if (aln >= len) aln = len;
   }
   memcpy(vd, &buffer[read_ptr], aln);
   read_ptr += aln;
   int bln = len - aln;                  // bytes still to read after the wrap
   if (read_ptr == top_ptr) read_ptr = 0;
   if (bln > 0) {
      memcpy((char *) vd + aln, &buffer[read_ptr], bln);
      read_ptr += bln;
   }
   return len;
}

17 Wait For Read Operation
void waitForRead(int len) {
   for ( ; ; ) {
      int aln = write_ptr - read_ptr;   // data available to read
      if (aln < 0) aln += top_ptr;      // writer has wrapped around
      if (aln >= len) return;           // >= : exactly len bytes available is enough
      // wait here
   }
}

18 What Synchronization is Needed
How many readers and writers?
  If more than one of either, the whole read/write operation has to be synchronized (atomic)
  If multiple writers, might want to enforce a queue hierarchy to ensure ordered processing
If only one of each, then the read/write pointers are safe
  Only need to worry about the wait routines
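A minimal sketch of the multiple-writer case, assuming the write() from slide 14: wrapping the whole operation in a mutex keeps concurrent writers from interleaving their copies.

   #include <mutex>

   int write(void *data, int len);   // the single-writer write() from slide 14

   static std::mutex write_lock;     // serializes writers; a second mutex would do the same for readers

   // With more than one writer, the whole write must be atomic.
   int locked_write(void *data, int len) {
      std::lock_guard<std::mutex> guard(write_lock);
      return write(data, len);
   }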

19 Implementing the Wait
How would you do this in Java?
What are the alternatives?

20 Using Semaphores
Assume a P/V operation with a count
  What happens if you just do a P operation k times?
  Most semaphores have such an operation
The write semaphore’s count starts at the size of the buffer
  Then you don’t have to check for space in the buffer
The read semaphore’s count starts at 0
  Then you don’t have to check for data available to read
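A minimal sketch of this scheme using C++20 counting semaphores and the BUF_SIZE from the earlier sketch; the helper names (doneWriting, doneReading) are ours. std::counting_semaphore's acquire() takes no count, so "P, k times" is literally a loop; release() does take a count.

   #include <semaphore>

   #define BUF_SIZE 1024

   std::counting_semaphore<BUF_SIZE> free_space(BUF_SIZE);   // write semaphore: starts at the buffer size
   std::counting_semaphore<BUF_SIZE> data_avail(0);          // read semaphore: starts at 0

   void waitForWrite(int len) {                 // writer, before copying
      for (int i = 0; i < len; ++i) free_space.acquire();    // P, len times: blocks until len bytes are free
   }
   void doneWriting(int len) {                  // writer, after copying
      data_avail.release(len);                                // V with a count: publish len bytes
   }
   void waitForRead(int len) {                  // reader, before copying
      for (int i = 0; i < len; ++i) data_avail.acquire();
   }
   void doneReading(int len) {                  // reader, after copying
      free_space.release(len);
   }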

21 Busy Wait
Assume the reader is always running and generally faster than the writer
  What happens if this is not the case? Then the writer can busy-wait
  In terms of the code, this means doing nothing for the wait
  The reader can return 0 if nothing is there to process
Can you tell how often waits should occur, given probabilistic models of the reader and writer?

22 Using a Condition Variable
This is the Java approach
  If there is not enough space, wait
  When freeing space, notify
This is wasteful – you only want to notify if someone is waiting
  Notify only if the queue is getting full
  What does full mean?
Coding is a bit tricky, but it should be faster
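A minimal C++ sketch of the "only notify if someone is waiting" refinement; the writer_waiting flag and the byte counting are ours, standing in for the real buffer bookkeeping.

   #include <condition_variable>
   #include <mutex>

   static std::mutex buf_lock;
   static std::condition_variable space_freed;
   static int  free_bytes = 1024;          // free space in the buffer (size arbitrary here)
   static bool writer_waiting = false;     // true only while a writer is blocked

   void waitForWrite(int len) {            // writer side
      std::unique_lock<std::mutex> lock(buf_lock);
      while (free_bytes < len) {           // not enough space: wait
         writer_waiting = true;
         space_freed.wait(lock);           // releases buf_lock while blocked, regains it on wakeup
      }
      writer_waiting = false;
      free_bytes -= len;                   // claim the space
   }

   void doneReading(int len) {             // reader side, after consuming len bytes
      std::lock_guard<std::mutex> lock(buf_lock);
      free_bytes += len;
      if (writer_waiting)                  // skip the notify when no one is actually waiting
         space_freed.notify_one();
   }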

23 Synchronization Problems
Race conditions
Deadlock
Priority inversion
Performance

24 Dining Philosopher’s Problem (Dijkstra ’71)

25 Priority Inversion
(Timeline diagram showing the execution traces of threads τ1, τ2, and τ3.)
τ1: high priority, τ2: medium priority, τ3: low priority
L: local CPU burst; R: resource required (mutual exclusion)

26 Example
Suppose that threads τ1 and τ3 share some data.
Access to the data is restricted using semaphore x. Each task executes the following code:
   do local work (L)
   sem_wait(s)              (P(x))
   access shared resource (R)
   sem_signal(s)            (V(x))
   do more local work (L)

27 Blocking
(Timeline diagram, t through t+6: τ3 acquires the resource first; τ1 preempts, runs, and blocks at its resource request R because τ3 still holds it; once τ3 releases the resource, τ1 resumes.)

28 The Middle Thread
(Timeline diagram, t through t+3: while τ1 is blocked on the resource, the medium-priority τ2 preempts τ3, so τ3 cannot run and cannot release the resource.)

29 Unbounded Priority Inversion
(Timeline diagram, t through t+254: τ2 keeps executing local work for an arbitrarily long time, so the high-priority τ1 stays blocked behind the low-priority τ3 for an unbounded period.)

30 Unbounded Priority Inversion
(Timeline diagram, t through t+2540: a whole series of medium-priority threads τ2-1 … τ2-n run one after another, extending τ1’s blocking indefinitely.)

31 Fixing Priority Inversion
Priority inheritance
  The lock holder temporarily assumes the priority of the highest-priority waiter on the lock
Priority ceiling protocol
  Priority while holding the lock = the highest priority of any thread that can use that lock
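Where the OS supports it, these protocols are typically selected through a mutex attribute rather than coded by hand; a minimal POSIX sketch, assuming the platform provides the priority-protocol options:

   #include <pthread.h>

   pthread_mutex_t shared_lock;

   void init_lock(void) {
      pthread_mutexattr_t attr;
      pthread_mutexattr_init(&attr);

      // Priority inheritance: the holder is boosted to the priority of the highest waiter.
      pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);

      // Alternative, priority ceiling: the holder runs at the lock's preset ceiling.
      // pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
      // pthread_mutexattr_setprioceiling(&attr, 80);   // ceiling value is illustrative only

      pthread_mutex_init(&shared_lock, &attr);
      pthread_mutexattr_destroy(&attr);
   }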

32 Homework
For next week (10/25 and 10/27)
  Present your project to the class
    Plans, progress to date
  Project model
    Tasks and model(s) for each task
  Hand this in
    Can be part of the presentation

