
CSCI1600: Embedded and Real Time Software Lecture 17: Concurrent Programming Steven Reiss, Fall 2015

Arduino Programming is Simple  Single processor, no scheduling  Only concurrency is with interrupts  Not all embedded programming is that simple  More sophisticated systems use an RTOS  Provides many of the features of operating systems  Including  Multiple threads (on multiple cores)  Multiple processes  Scheduling (time-sliced, interrupt-driven)

Concurrency is an Issue  Many things can go wrong  Deadlocks & race conditions  Memory or cache consistency  Other …  Consider  increment() { ++counter; }  multiple threads call increment  What can counter do?  Is it accurate?  Is it monotonic?
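A minimal sketch of the lost-update race, assuming POSIX threads (the loop count and worker function are illustrative; increment and counter follow the slide):

#include <pthread.h>
#include <stdio.h>

static int counter = 0;                    /* shared, unprotected */

static void increment(void) { ++counter; } /* read-modify-write: not atomic */

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; ++i) increment();
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);     /* often less than 2000000: updates are lost */
    return 0;
}

Run it a few times: the result is usually neither accurate nor repeatable, and because a thread can store a stale value, the values other threads observe need not even be monotonic.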

Synchronization  Synchronizing interrupts  Disable and enable interrupts  Interrupt priorities  Is this enough?  Declaring variables to be volatile  This isn’t enough for synchronizing threads
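On an Arduino, a rough sketch of this idea is shown below; the pin number, counter name, and handler are illustrative, and noInterrupts()/interrupts() bracket the atomic region:

volatile unsigned long pulse_count = 0;      // shared with the ISR, hence volatile

void onPulse() {                             // interrupt handler
  ++pulse_count;
}

void setup() {
  attachInterrupt(digitalPinToInterrupt(2), onPulse, RISING);   // pin 2 is illustrative
}

unsigned long readPulses() {
  noInterrupts();                            // disable interrupts: the ISR cannot run here
  unsigned long n = pulse_count;             // multi-byte read cannot be torn now
  interrupts();                              // re-enable interrupts
  return n;
}

void loop() {
  unsigned long n = readPulses();
  (void)n;                                   // use the value ...
}

This protects against the board's own interrupts, but as the slide notes it is not enough once real threads are involved.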

Mutex Synchronization Primitive  MUTEX  Holder has exclusive access until released  Defines an atomic region  C/C++: Process or system-based mutexes
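A minimal sketch with a POSIX mutex; this one is process-local (cross-process use would need a shared or named mutex), and the counter example is carried over from the earlier slide:

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int counter = 0;

void increment(void) {
    pthread_mutex_lock(&lock);      /* enter the atomic region */
    ++counter;                      /* only the holder can be here */
    pthread_mutex_unlock(&lock);    /* release so others can enter */
}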

Condition Variables & Monitors  Code region controlled by a mutex  Can contain arbitrary code  Again process or system wide  Wait/notify functionality  Wait – releases the lock  Notify – wakes up waiters  Waiter regains the lock  Very general  Difficult to reason about, predict
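A sketch of the wait/notify pattern with a POSIX condition variable; item_count stands in for whatever shared state the monitor protects:

#include <pthread.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int item_count = 0;                   /* the state being monitored */

void consume(void) {
    pthread_mutex_lock(&lock);
    while (item_count == 0)                  /* re-check: wakeups can be spurious */
        pthread_cond_wait(&ready, &lock);    /* releases the lock while waiting */
    --item_count;                            /* the lock is held again here */
    pthread_mutex_unlock(&lock);
}

void produce(void) {
    pthread_mutex_lock(&lock);
    ++item_count;
    pthread_cond_signal(&ready);             /* notify: wake one waiter */
    pthread_mutex_unlock(&lock);
}

The while loop around the wait is part of the price of this generality: a waiter must re-verify its condition every time it regains the lock.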

Semaphores  Synchronization counter  Can’t go below zero  Two operations: P (lock) and V (unlock)  Process or system based  P operation (lock)  If counter > 0, atomically decrement and continue  Otherwise wait  V operation (unlock)  Increment counter  If it was 0, wake up any waiters (one waiter fairly)
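The same counter guarded by a POSIX semaphore used as a binary lock; sem_wait is the P operation and sem_post the V operation:

#include <semaphore.h>

static sem_t mutex_sem;              /* initialize once: sem_init(&mutex_sem, 0, 1) */
static int counter = 0;

void increment(void) {
    sem_wait(&mutex_sem);            /* P: decrement, or block while the count is 0 */
    ++counter;
    sem_post(&mutex_sem);            /* V: increment and wake one waiter */
}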

Synchronization Primitives  Other primitives  Read/write locks  Hardware: test-and-set, compare-and-swap, …  Any of these can be used to implement the others  Semaphores seem to be the most common  Different (older) implementation on Linux
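As an illustration of the hardware end, here is a sketch of a spin lock built on C11's atomic_flag (a test-and-set primitive); real mutex and semaphore implementations add blocking and fairness on top of something like this:

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void spin_lock(void) {
    while (atomic_flag_test_and_set(&lock_flag))   /* returns the old value */
        ;                                          /* busy wait until it was clear */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock_flag);
}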

Problem  Allow one (or several) tasks to send messages to another task  Messages might be fixed sized data structures  Messages might be variable sized data structures  Messages might be arbitrary text  What are the issues or problems  Representing messages  Fix the size of the buffer or allow it to expand (fixed)  Who needs to wait (reader/writer/both)  Minimizing synchronization (and overhead)  Efficient implementation

Messages  Fixed size: easy  Variable size  Include the length  Either directly or indirectly  Fixed part + variable part  Fixed part includes length  Alternatives  Message as a sequence of chunks  Each containing its length and end/continue flag
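One way to lay out the "fixed part + variable part" representation (the field names are illustrative):

#include <stdint.h>

/* Fixed header written first; len says how many payload bytes follow. */
struct msg_header {
    uint16_t type;      /* message kind */
    uint16_t len;       /* number of bytes in the variable part */
};

/* In the buffer: [ msg_header ][ len bytes of payload ][ next header ] ... */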

General Approach  Use a fixed sized circular buffer  Start data pointer: first used space in the buffer  End data pointer: next free space in the buffer  Assume the buffer is in common (shared) memory  Extend circular buffer read/write routines  To handle variable length messages  To handle waiting
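The routines on the following slides assume shared state roughly like this (the size is illustrative; buffer, read_ptr, write_ptr, and top_ptr match the code below):

#define BUF_SIZE 1024

static char buffer[BUF_SIZE];        /* circular buffer in shared memory      */
static int  read_ptr  = 0;           /* first used byte: start of the data    */
static int  write_ptr = 0;           /* next free byte: end of the data       */
static int  top_ptr   = BUF_SIZE;    /* one past the last valid buffer index  */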

Write Operation

int write(const void *data, int ln) {
   if (ln == 0) return 0;
   waitForWrite(ln);                        // block until ln bytes are free
   int aln = top_ptr - write_ptr;           // contiguous space before the wrap
   if (aln > ln) aln = ln;
   memcpy(&buffer[write_ptr], data, aln);
   write_ptr += aln;
   int bln = ln - aln;                      // bytes left to copy after the wrap
   if (write_ptr == top_ptr) write_ptr = 0;
   if (bln > 0) {
      memcpy(&buffer[write_ptr], (const char *)data + aln, bln);
      write_ptr += bln;
   }
   return ln;
}

Wait for Write Operation

void waitForWrite(int ln) {
   for ( ; ; ) {
      int aln = read_ptr - write_ptr;       // free space: writer must stay behind reader
      if (aln <= 0) aln += top_ptr;         // wrapped or empty: add the buffer size
      if (aln > ln) return;                 // strictly greater keeps full/empty distinguishable
      // wait here
   }
}

Read Operation

int read(void *vd, int ln) {
   waitForRead(ln);                         // block until ln bytes are available
   int aln = top_ptr - read_ptr;            // contiguous data before the wrap
   if (aln > ln) aln = ln;
   memcpy(vd, &buffer[read_ptr], aln);
   read_ptr += aln;
   int bln = ln - aln;                      // bytes left to copy after the wrap
   if (read_ptr == top_ptr) read_ptr = 0;
   if (bln > 0) {
      memcpy((char *)vd + aln, &buffer[read_ptr], bln);
      read_ptr += bln;
   }
   return ln;
}

Wait For Read Operation

void waitForRead(int ln) {
   for ( ; ; ) {
      int aln = write_ptr - read_ptr;       // bytes available to read
      if (aln < 0) aln += top_ptr;          // data wraps around the end of the buffer
      if (aln >= ln) return;                // enough data: go ahead and read
      // wait here
   }
}

What Synchronization is Needed  How many readers and writers?  If more than one, then the whole read/write operation has to be synchronized (atomic)  If multiple writers, might want to enforce a queue hierarchy  To ensure ordered processing  If only one of each, then read/write pointers are safe  Only need to worry about the wait routines
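With more than one writer, a sketch of making the whole operation atomic is simply to wrap the buffer write in a mutex (writeLocked and writer_lock are illustrative names):

#include <pthread.h>

int write(const void *data, int ln);        /* the circular-buffer write above */

static pthread_mutex_t writer_lock = PTHREAD_MUTEX_INITIALIZER;

int writeLocked(const void *data, int ln) {
    pthread_mutex_lock(&writer_lock);       /* one writer at a time: each message stays contiguous */
    int r = write(data, ln);
    pthread_mutex_unlock(&writer_lock);
    return r;
}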

Implementing the Wait  How would you do this in Java?  What are the alternatives?

Using Semaphores  Assume a P/V operation with a count  What happens if you just do a P operation k times?  Most semaphores have such an operation  Write semaphore count starts at the size of the buffer  Then you don’t have to check for space in the buffer  Read semaphore count starts at 0  Then you don’t have to check whether data is available to read
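A sketch of the wait routines built from two counting semaphores, assuming POSIX semaphores (which have no multi-unit P, so the decrement is looped k times as the slide suggests); space_sem starts at the buffer size and data_sem at 0:

#include <semaphore.h>

static sem_t space_sem;   /* sem_init(&space_sem, 0, BUF_SIZE): free bytes       */
static sem_t data_sem;    /* sem_init(&data_sem, 0, 0): bytes available to read  */

void waitForWrite(int ln) {
    for (int i = 0; i < ln; ++i) sem_wait(&space_sem);   /* claim ln free bytes */
}

void waitForRead(int ln) {
    for (int i = 0; i < ln; ++i) sem_wait(&data_sem);    /* claim ln data bytes */
}

/* After copying, the writer posts data_sem ln times and the reader posts
   space_sem ln times, so the counts always mirror the buffer contents. */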

Busy Wait  Assume the reader is always running and generally faster than the writer  What happens if this is not the case?  Then the writer can busy wait  In terms of the code, this means doing nothing for wait  The reader can return 0 if nothing is there to process  Can you tell how often waits should occur  Given probabilistic models of the reader/writer
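In code, the busy-wait version is roughly the following; freeSpace, availableData, and tryRead are assumed helper names, not part of the earlier slides:

int  freeSpace(void);          /* assumed: free bytes, as computed in waitForWrite   */
int  availableData(void);      /* assumed: bytes ready, as computed in waitForRead   */
int  read(void *vd, int ln);   /* the circular-buffer read above                     */

void waitForWrite(int ln) {
    while (freeSpace() <= ln)
        ;                      /* busy wait: spin until the reader frees space */
}

int tryRead(void *vd, int ln) {
    if (availableData() < ln) return 0;   /* nothing (enough) to process yet */
    return read(vd, ln);                  /* guaranteed not to block now */
}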

Using a Condition Variable  This is the Java approach  If there is not enough space, wait  When freeing space, notify  This is wasteful – only want to notify if someone is waiting  Notify only if the queue is getting full  What does full mean?  Coding is a bit tricky, but it should be much faster
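A rough C sketch of that policy using a condition variable (the Java version uses wait/notify on the queue object); writer_waiting, freeSpace, and noteSpaceFreed are illustrative:

#include <pthread.h>

int freeSpace(void);                    /* assumed helper: free bytes in the buffer */

static pthread_mutex_t qlock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qspace = PTHREAD_COND_INITIALIZER;
static int writer_waiting = 0;          /* notify only when someone actually waits */

void waitForWrite(int ln) {
    pthread_mutex_lock(&qlock);
    while (freeSpace() <= ln) {
        writer_waiting = 1;
        pthread_cond_wait(&qspace, &qlock);
    }
    writer_waiting = 0;
    pthread_mutex_unlock(&qlock);
}

void noteSpaceFreed(void) {             /* called by the reader after consuming data */
    pthread_mutex_lock(&qlock);
    if (writer_waiting)                 /* skip the signal in the common, uncontended case */
        pthread_cond_signal(&qspace);
    pthread_mutex_unlock(&qlock);
}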

Synchronization Problems  Deadlock  Priority Inversion  Performance

Dining Philosophers Problem (Dijkstra, 1971)

Priority Inversion  Thread τ1: L L L R x L  Thread τ2: L L... L  Thread τ3: L L L R x L... L  L: local CPU burst  R: resource required (mutual exclusion)

Example  Suppose that threads τ1 and τ3 share some data.  Access to the data is restricted using semaphore x; each task executes the following code:  do local work (L)  sem_wait(x) (P(x))  access shared resource (R)  sem_signal(x) (V(x))  do more local work (L)
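Expressed with a POSIX semaphore, each of τ1 and τ3 follows a pattern like this sketch; x is assumed to be a shared semaphore initialized to 1, and the work functions are placeholders:

#include <semaphore.h>

extern sem_t x;                       /* shared binary semaphore, initialized to 1 */
void do_local_work(void);             /* placeholders for the L and R phases */
void access_shared_resource(void);
void do_more_local_work(void);

void task_body(void) {                /* run by both tau1 and tau3 */
    do_local_work();                  /* L */
    sem_wait(&x);                     /* P(x) */
    access_shared_resource();         /* R: the section where inversion can bite */
    sem_post(&x);                     /* V(x) */
    do_more_local_work();             /* L */
}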

Blocking  Timing diagram (τ1, τ2, τ3 over t0 .. t+6): τ3 acquires the resource R first; τ1 requests it around t+3/t+4 and is blocked until τ3 releases R near t+6.

The Middle Thread  Timing diagram: τ2 becomes ready at about t+2 and, having higher priority than τ3, preempts it while τ3 still holds the resource, so τ1 stays blocked.

Unbounded Priority Inversion  Timing diagram: τ2 keeps running (out to roughly t+253) before τ3 can resume and release the resource, so τ1's blocking time is determined by τ2's execution, not by the length of the critical section.

Unbounded Priority Inversion  Timing diagram: with many medium-priority threads τ2-1 ... τ2-n, τ1 can remain blocked indefinitely (out to t+2540 and beyond).

Homework  For next week (10/26)  Present your project to the class  Plans, progress to date  Project model  Tasks and model(s) for each task  Hand this in  Can be part of the presentation