CSCI1600: Embedded and Real Time Software
Lecture 19: Concurrent Programming
Steven Reiss, Fall 2017

Arduino Programming is Simple
- Single processor, no scheduling
- The only concurrency is with interrupts
- Not all embedded programming is that simple
- More sophisticated systems use an RTOS
  - Provides many of the features of operating systems
  - Available on the Arduino as well (FreeRTOS)
  - Including: multiple threads (on multiple cores), multiple processes, and scheduling (time-sliced, interrupt-driven)

Concurrency is an Issue
- Many things can go wrong
  - Deadlocks and race conditions
  - Memory or cache consistency
  - Other ...
- Consider increment() { ++counter; } with multiple threads calling increment (see the sketch below)
  - What can counter do? Is it accurate? Is it monotonic?
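A minimal sketch of the problem (not from the slides; the counts and thread setup are illustrative): two threads increment a shared counter, once with a plain int and once with std::atomic. The plain counter usually ends up below the expected total because ++counter is a load/add/store sequence whose steps can interleave.

    #include <atomic>
    #include <iostream>
    #include <thread>

    static int plain_counter = 0;
    static std::atomic<int> atomic_counter{0};

    static void increment(int n) {
       for (int i = 0; i < n; ++i) {
          ++plain_counter;      // racy: separate load, add, store
          ++atomic_counter;     // atomic read-modify-write
       }
    }

    int main() {
       const int N = 1000000;
       std::thread t1(increment, N), t2(increment, N);
       t1.join(); t2.join();
       std::cout << "plain: " << plain_counter          // often less than 2*N
                 << "  atomic: " << atomic_counter      // always exactly 2*N
                 << std::endl;
       return 0;
    }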

Synchronization
- Synchronizing interrupts
  - Disable and enable interrupts
  - Interrupt priorities
  - Is this enough?
  - Declaring variables to be volatile
- This isn't enough for synchronizing threads
  - Or interruptible or time-sliced tasks

Memory synchronization
- Main program:
    disable()
    x = 0
    enable()
    while (x == 0) wait();
    ...
- Interrupt handler:
    x = 1
    return;
- (a volatile-based sketch of this pattern follows below)
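A hedged Arduino-style sketch of the same main-program/interrupt-handler exchange (the pin number and handler name are illustrative, not from the slides). The point is the volatile qualifier: without it the compiler may cache x in a register and the polling loop would never observe the handler's store.

    volatile int x = 0;          // shared between loop() and the interrupt handler

    void handler() {             // interrupt handler: x = 1
       x = 1;
    }

    void setup() {
       noInterrupts();           // disable()
       x = 0;
       interrupts();             // enable()
       attachInterrupt(digitalPinToInterrupt(2), handler, RISING);   // pin 2 is illustrative
    }

    void loop() {
       while (x == 0) { /* wait */ }
       // ... handle the event, then reset x and continue as needed
    }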

Mutex Synchronization Primitive
- Holder has exclusive access until released
- Defines an atomic region
- C/C++: process- or system-based mutexes
  - Provided by the operating system
  - Different operating systems provide different primitives
- (a minimal C++ sketch follows below)
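A minimal C++ sketch (assuming C++11 or later): a mutex turns the increment from the earlier slide into an atomic region.

    #include <mutex>

    static int counter = 0;
    static std::mutex counter_mutex;    // backed by the OS / runtime

    void increment() {
       std::lock_guard<std::mutex> guard(counter_mutex);  // acquire; released at end of scope
       ++counter;                                          // atomic region
    }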

Condition Variables & Monitors
- Code region controlled by a mutex
  - Can contain arbitrary code
  - Again process- or system-wide
- Wait/notify functionality
  - Wait: releases the lock
  - Notify: wakes up waiters
  - Waiter regains the lock
- Very general
  - Difficult to reason about, predict
- What do you do with these? Where do you see these? (Java)
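A hedged sketch of wait/notify using a C++ condition variable (the Java equivalent is a synchronized block with wait()/notify()). The waiter releases the mutex while blocked and reacquires it before returning from wait(); the flag and function names here are illustrative.

    #include <condition_variable>
    #include <mutex>

    static std::mutex m;
    static std::condition_variable cv;
    static bool ready = false;

    void waiter() {
       std::unique_lock<std::mutex> lock(m);
       cv.wait(lock, [] { return ready; });   // releases m while waiting, re-locks on wakeup
       // ... proceed, holding m
    }

    void notifier() {
       { std::lock_guard<std::mutex> lock(m); ready = true; }
       cv.notify_one();                        // wake up one waiter
    }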

Semaphores
- Synchronized counter
  - Can't go below zero
  - Two operations: P (lock) and V (unlock)
  - Process or system based
- P operation (lock)
  - If counter > 0, atomically decrement and continue
  - Otherwise wait
- V operation (unlock)
  - Increment counter
  - If it was 0, wake up any waiters (one waiter, fairly)
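A minimal POSIX sketch (assuming <semaphore.h> is available on the target): sem_wait is the P operation and sem_post is the V operation.

    #include <semaphore.h>

    static sem_t s;

    void init()   { sem_init(&s, 0, 1); }   // pshared = 0 (threads of one process), initial count 1
    void lock()   { sem_wait(&s); }          // P: decrement, blocking while the count is zero
    void unlock() { sem_post(&s); }          // V: increment, waking a waiter if any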

Synchronization Primitives
- Other primitives
  - Read/write locks
  - Hardware: test-and-set, compare-and-swap, ...
- All of these can be used to implement the others
- Semaphores seem to be the most common
  - Different (older) implementation on Linux
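A hedged sketch (not from the slides) of building one primitive from another: a spinlock based on atomic exchange, i.e. test-and-set; compare-and-swap could be used the same way. This is only an illustration; production code would add backoff or yield to the OS.

    #include <atomic>

    class SpinLock {
       std::atomic<bool> locked{false};
     public:
       void lock()   { while (locked.exchange(true, std::memory_order_acquire)) { /* spin */ } }
       void unlock() { locked.store(false, std::memory_order_release); }
    };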

Problem
- Allow one (or several) tasks to send messages to another task
  - Messages might be fixed-size data structures
  - Messages might be variable-size data structures
  - Messages might be arbitrary text
- What are the issues or problems?
  - Representing messages
  - Fixing the size of the buffer or allowing it to expand (fixed here)
  - Who needs to wait (reader/writer/both)
  - Minimizing synchronization (and overhead)
  - Efficient implementation


Messages
- Fixed size: easy
- Variable size alternatives
  - Include the length, either directly or indirectly
  - Fixed part + variable part, where the fixed part includes the length (sketched below)
  - Message as a sequence of chunks, each containing its length and an end/continue flag
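One possible layout for the "fixed part + variable part" alternative (a sketch, not the lecture's exact format; the field names are illustrative): the fixed header records how many payload bytes follow it in the buffer.

    #include <stdint.h>

    struct MessageHeader {
       uint32_t length;   // number of payload bytes that follow the header
       uint32_t type;     // illustrative fixed field
    };
    // Buffer layout: [MessageHeader][length bytes of payload][next MessageHeader]...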

General Approach
- Use a fixed-size circular buffer
  - Start data pointer: first used space in the buffer
  - End data pointer: next free space in the buffer
- Assume the buffer is in common (shared) memory
- Is this sufficient?
- Extend the circular buffer read/write routines
  - To handle variable-length messages
  - To handle waiting
- (the shared state assumed by the routines below is sketched next)
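A hedged sketch of the shared state that the following read/write routines use. The names (buffer, read_ptr, write_ptr, top_ptr) come from the slides; BUF_SIZE is an illustrative assumption.

    #define BUF_SIZE 1024

    static char buffer[BUF_SIZE];       // circular buffer in common (shared) memory
    static int  read_ptr  = 0;          // start data pointer: first used byte
    static int  write_ptr = 0;          // end data pointer: next free byte
    static int  top_ptr   = BUF_SIZE;   // one past the last byte; wrap point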

Write operation

    #include <string.h>   // memcpy

    int write(const void * data, int len) {
       if (len == 0) return 0;
       waitForWrite(len);                          // block until len bytes are free
       const char * d = (const char *) data;
       int aln = top_ptr - write_ptr;              // contiguous space up to the end of the buffer
       if (aln > len) aln = len;
       memcpy(&buffer[write_ptr], d, aln);
       write_ptr += aln;
       int bln = len - aln;                        // remaining bytes, written at the start
       if (write_ptr == top_ptr) write_ptr = 0;    // wrap around
       if (bln > 0) {
          memcpy(&buffer[write_ptr], d + aln, bln);
          write_ptr += bln;
       }
       return len;
    }

Wait for Write Operation

    void waitForWrite(int ln) {
       for ( ; ; ) {
          int aln = read_ptr - write_ptr;      // free space in the buffer
          if (aln < 0) aln += top_ptr;
          if (aln > ln) return;                // strictly greater: keep one byte free
          // wait here
       }
    }

We'll talk about the actual wait later on.

Read Operation

    int read(void * vd, int ln) {
       waitForRead(ln);                           // block until ln bytes are available
       char * d = (char *) vd;
       int aln = write_ptr - read_ptr;
       if (aln >= 0) {                            // data does not wrap around
          if (aln > ln) aln = ln;
       }
       else {                                     // data wraps: copy up to the end first
          aln = top_ptr - read_ptr;
          if (aln >= ln) aln = ln;
       }
       memcpy(d, &buffer[read_ptr], aln);
       read_ptr += aln;
       if (read_ptr == top_ptr) read_ptr = 0;     // wrap around
       int bln = ln - aln;                        // remaining bytes, read from the start
       if (bln > 0) {
          memcpy(d + aln, &buffer[read_ptr], bln);
          read_ptr += bln;
       }
       return ln;
    }

Wait For Read Operation

    void waitForRead(int ln) {
       for ( ; ; ) {
          int aln = write_ptr - read_ptr;      // data available in the buffer
          if (aln < 0) aln += top_ptr;
          if (aln > ln) return;
          // wait here
       }
    }

What Synchronization is Needed?
- How many readers and writers?
- If more than one, then the whole read/write operation has to be synchronized (atomic)
  - If multiple writers, might want to enforce a queue hierarchy to ensure ordered processing
- If only one of each, then the read/write pointers are safe
  - Only need to worry about the wait routines

Implementing the Wait
- How would you do this in Java?
- What are the alternatives?

Using Semaphores
- Assume a P/V operation with a count
  - What happens if you just do a P operation k times?
  - Most semaphores have such an operation
- Write semaphore count starts with the size of the buffer
  - Then you don't have to check for space in the buffer
- Read semaphore count starts with 0
  - Then you don't have to check whether data is available to read
- (a POSIX sketch of this scheme follows below)
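A hedged sketch of this scheme with POSIX semaphores, assuming the single-reader/single-writer case from the earlier slides. POSIX semaphores have no "P by k" operation, so the loop below is exactly the k single P operations the slide asks about: space_sem counts free bytes (starting at the buffer size), data_sem counts readable bytes (starting at 0). The signal routines are illustrative helpers the writer and reader would call after moving their pointers.

    #include <semaphore.h>

    static sem_t space_sem, data_sem;

    void init_sems(int buf_size) {
       sem_init(&space_sem, 0, buf_size);   // free space available to the writer
       sem_init(&data_sem, 0, 0);           // no data available to the reader yet
    }

    void waitForWrite(int ln) {              // reserve ln bytes of free space
       for (int i = 0; i < ln; ++i) sem_wait(&space_sem);
    }
    void signalWrite(int ln) {               // writer publishes ln bytes of data
       for (int i = 0; i < ln; ++i) sem_post(&data_sem);
    }

    void waitForRead(int ln) {               // claim ln bytes of data
       for (int i = 0; i < ln; ++i) sem_wait(&data_sem);
    }
    void signalRead(int ln) {                // reader frees ln bytes of space
       for (int i = 0; i < ln; ++i) sem_post(&space_sem);
    }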

Busy Wait
- Assume the reader is always running and generally faster than the writer
  - What happens if this is not the case?
- Then the writer can busy wait
  - In terms of the code, this means doing nothing for the wait
- The reader can return 0 if nothing is there to process
- Can you tell how often waits should occur?
  - Given probabilistic models of the reader and writer

Using a Condition Variable
- This is the Java approach
  - If there is not enough space, wait
  - When freeing space, notify
- This is wasteful: we only want to notify if someone is waiting
  - Notify only if the queue is getting full
  - What does full mean?
- Coding is a bit tricky, but it should be faster (see the sketch below)
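A C++ sketch of the same idea (the lecture discusses it in Java terms): the writer waits on a condition variable when space is short, and the reader notifies only when a writer has recorded that it is waiting. The writer_waiting flag and notifyAfterRead name are illustrative; read_ptr, write_ptr, and top_ptr refer to the buffer state sketched earlier.

    #include <condition_variable>
    #include <mutex>

    extern int read_ptr, write_ptr, top_ptr;    // buffer state from the earlier sketch

    static std::mutex buf_mutex;
    static std::condition_variable space_cond;
    static bool writer_waiting = false;

    static int freeSpace() {                    // bytes free, as in waitForWrite above
       int aln = read_ptr - write_ptr;
       if (aln < 0) aln += top_ptr;
       return aln;
    }

    void waitForWrite(int ln) {
       std::unique_lock<std::mutex> lock(buf_mutex);
       while (freeSpace() <= ln) {
          writer_waiting = true;
          space_cond.wait(lock);                // releases buf_mutex while blocked
       }
       writer_waiting = false;
    }

    void notifyAfterRead() {                    // reader calls this after advancing read_ptr
       std::lock_guard<std::mutex> lock(buf_mutex);
       if (writer_waiting) space_cond.notify_one();
    }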

Synchronization Problems
- Race conditions
- Deadlock
- Priority inversion
- Performance

Dining Philosophers Problem (Dijkstra, 1971)
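A hedged sketch of the classic problem, illustrating the deadlock listed on the previous slide: five philosophers share five forks (mutexes). If every philosopher grabs the left fork first, all five can hold one fork and wait forever for the other. Acquiring the two forks in a fixed global order (lower index first), as below, breaks the cycle; the meal count is illustrative.

    #include <mutex>
    #include <thread>
    #include <utility>
    #include <vector>

    static const int N = 5;
    static std::mutex forks[N];

    static void philosopher(int i) {
       int first = i, second = (i + 1) % N;
       if (first > second) std::swap(first, second);   // global lock ordering avoids deadlock
       for (int meals = 0; meals < 3; ++meals) {
          std::lock_guard<std::mutex> left(forks[first]);
          std::lock_guard<std::mutex> right(forks[second]);
          // eat
       }
    }

    int main() {
       std::vector<std::thread> ts;
       for (int i = 0; i < N; ++i) ts.emplace_back(philosopher, i);
       for (auto &t : ts) t.join();
       return 0;
    }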

Priority Inversion
(timeline figure: threads τ1, τ2, τ3, where τ1 is high priority, τ2 medium, τ3 low; L = local CPU burst, R = resource required under mutual exclusion)

Example
- Suppose that threads τ1 and τ3 share some data
- Access to the data is restricted using semaphore x
- Each task executes the following code:
    do local work             (L)
    sem_wait(x)               (P(x))
    access shared resource    (R)
    sem_signal(x)             (V(x))
    do more local work        (L)

Blocking
(timeline figure over t, t+3, t+4, t+6: τ1 is blocked when it requests the resource that τ3 already holds, and it proceeds only after τ3 releases the resource)

The Middle Thread
(timeline figure over t, t+2, t+3: while τ1 is blocked on the resource held by τ3, the medium-priority τ2 becomes ready and preempts τ3, prolonging τ1's blocking)

Unbounded Priority Inversion
(timeline figure over t, t+2, t+3, t+253, t+254: τ2 can run arbitrarily long while τ3 still holds the resource, so τ1 remains blocked for an unbounded time)

Unbounded Priority Inversion
(timeline figure over t, t+2, t+3, t+2530, t+2540: a series of medium-priority threads τ2-1, τ2-2, ..., τ2-n each preempt τ3 in turn, so τ1's blocking grows without bound)

Fixing Priority Inversion
- Priority inheritance
  - The lock holder assumes the priority of the highest-priority waiter on the lock
- Priority ceiling protocol
  - Priority when locked = highest priority of any thread that can use that lock
- (a POSIX sketch of configuring these protocols follows below)
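A hedged POSIX sketch, assuming the platform supports the POSIX thread priority protocols: PTHREAD_PRIO_INHERIT gives the lock holder the priority of its highest-priority waiter (priority inheritance), while PTHREAD_PRIO_PROTECT together with pthread_mutexattr_setprioceiling implements the priority ceiling protocol.

    #include <pthread.h>

    static pthread_mutex_t resource_lock;

    void init_lock() {
       pthread_mutexattr_t attr;
       pthread_mutexattr_init(&attr);
       pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);   // or PTHREAD_PRIO_PROTECT
       pthread_mutex_init(&resource_lock, &attr);
       pthread_mutexattr_destroy(&attr);
    }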

Homework
- For next week (10/25 and 10/27)
  - Present your project to the class
  - Plans, progress to date
- Project model
  - Tasks and model(s) for each task
- Hand this in
  - Can be part of the presentation