Concurrency in Ada

Concurrency in Ada What concurrency is all about Relation to operating systems Language facilities vs library packages POSIX threads Ada concurrency Real time support Distributed programming

What Concurrency is all About Multiple threads of control within one program Fairly closely coupled (single address space) Each thread is independent But can synchronize with other threads One program has many threads

Relation to Operating Systems Typical Unix systems provide –Multiple processes separate address spaces, separate scheduling –Lightweight processes/kernel threads shared address space, separate scheduling –User-level threads shared address space, no separate scheduling

Language Features vs Libraries The library approach takes a standard sequential language, e.g. C And provides a set of packages that supply concurrency The C program makes calls to the library to create tasks etc.

Problems with Library Method Libraries may not be completely well defined and may not be portable The language was not defined with concurrency in mind –e.g. are library routines “thread-safe”? –are constructs well defined? –e.g. what is the rule for shared variables?

Thread Safety Suppose two threads of control both call a routine such as malloc. In the middle of one malloc call, the thread is interrupted by a higher priority thread, or reaches the end of its time slice. Another task calls malloc. Chaos???

Shared Variables Suppose we have a global variable “a” One task does a := 1; Another task does a := 256; Again we have interrupts causing the statements to get intermingled Is the result well defined, or could we end up with a having the value 257 (pieces of both writes)?
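A minimal Ada sketch of this hazard (the task names are invented for illustration): two tasks write different values to an unprotected, non-atomic global, and the language gives no guarantee about what the final value is or when either write becomes visible.

with Ada.Text_IO; use Ada.Text_IO;

procedure Race_Sketch is
   A : Integer := 0;            --  shared, unprotected, not atomic

   task Writer_One;
   task Writer_Two;

   task body Writer_One is
   begin
      A := 1;                   --  unsynchronized write
   end Writer_One;

   task body Writer_Two is
   begin
      A := 256;                 --  unsynchronized write
   end Writer_Two;

begin
   delay 0.1;                   --  crude: give both writers time to run
   Put_Line ("A =" & Integer'Image (A));   --  the value printed is not well defined
end Race_Sketch;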

POSIX Threads A standardized package of thread primitives –create thread –timer functions –synchronization mechanisms –etc. Several versions Lots left undefined

Add Concurrency to the Language Algol-68 had simple semaphores and the notion of separate tasks CSP, not really a programming language (although consider Occam, which is derived from it), had a simple channel mechanism Simula-67 identified tasks with objects

More on CSP (OCCAM) Program consists of processes and channels Process is code containing channel operations Channel is a data object All synchronization is via channels

Channel Operations in CSP Read data item D from channel C –C ? D Write data item Q to channel C –C ! Q If the reader accesses the channel first, it waits for the writer, and then both proceed after the transfer. If the writer accesses the channel first, it waits for the reader, and then both proceed after the transfer.

Tasking in Ada Declare a task type The specification gives the entries –task type T is entry Put (data : in Integer); entry Get (result : out Integer); end T; The entries are used to access the task

Declaring the Task Body The task body gives the actual code of the task task body T is X : Integer; -- local per-thread declaration begin … accept Put (Data : in Integer) do … end Put; … end T; (the accept parameter profile must match the entry declaration in the spec)

Creating an Instance of a Task Declare a single task X : T; or an array of tasks P : array (1 .. 50) of T; or a dynamically allocated task type T_Ref is access T; P : T_Ref; … P := new T;

Task Execution Each task executes independently, until –an accept statement wait for someone to call the entry, then proceed with the rendezvous code, then both tasks go on their way –an entry call wait for the addressed task to reach the corresponding accept statement, then proceed with the rendezvous, then both tasks go on their way.

More on the Rendezvous During the Rendezvous, only the called task executes, and data can be safely exchanged via the task parameters If accept does a simple assignment, we have the equivalent of a simple CSP channel operation, but there is no restriction on what can be done within a rendezvous
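A complete, compilable sketch of the simplest case (the task and entry names here are invented): a single-entry task acting as the equivalent of a one-word CSP channel, where the Integer is copied inside the accept body while both tasks are engaged in the rendezvous.

with Ada.Text_IO; use Ada.Text_IO;

procedure Channel_Sketch is

   task Channel is
      entry Send (Item : in Integer);    --  the "write" side of the channel
   end Channel;

   task body Channel is
      Value : Integer;
   begin
      accept Send (Item : in Integer) do
         Value := Item;                  --  data exchanged during the rendezvous
      end Send;
      Put_Line ("received" & Integer'Image (Value));
   end Channel;

begin
   Channel.Send (42);   --  blocks until Channel reaches its accept
end Channel_Sketch;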

Termination of Tasks A task terminates when it reaches the end of the begin-end code of its body. Tasks may either be very static (created at the start of execution and never terminating) Or very dynamic, e.g. create a new task for each new radar trace in a radar system.

The Delay Statement Delay statements temporarily pause a task –delay xyz where xyz is an expression of type Duration, causes execution of the thread to be delayed for (at least) the given amount of time –delay until tim where tim is an expression of type Time, causes execution of the thread to be delayed until (at the earliest) the given time
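A small compilable sketch of both forms, using Ada.Calendar for the absolute variant (the one- and two-second values are arbitrary):

with Ada.Calendar; use Ada.Calendar;

procedure Delay_Sketch is
   Wakeup : Time;
begin
   delay 1.0;               --  relative: suspend for at least one second
   Wakeup := Clock + 2.0;   --  Ada.Calendar."+" of a Time and a Duration
   delay until Wakeup;      --  absolute: do not resume before Wakeup
end Delay_Sketch;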

Selective Accept The select statement allows a choice of actions select accept Entry1 (…) do … end; or when Condition => accept Entry2 (…); or delay D; …ddd…; end select; –Take whichever open entry is called first, or if none is called by the end of the delay, do the …ddd… stmts.
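As a fuller sketch (the Buffer name, entries, and two-second timeout are invented for illustration), here is a one-slot buffer task whose guards open and close the Put and Get alternatives, with a delay alternative that shuts the task down if nobody calls for two seconds:

procedure Buffer_Sketch is

   task Buffer is
      entry Put (Item : in Integer);
      entry Get (Item : out Integer);
   end Buffer;

   task body Buffer is
      Value : Integer;
      Full  : Boolean := False;
   begin
      loop
         select
            when not Full =>
               accept Put (Item : in Integer) do
                  Value := Item;
               end Put;
               Full := True;
         or
            when Full =>
               accept Get (Item : out Integer) do
                  Item := Value;
               end Get;
               Full := False;
         or
            delay 2.0;
            exit;                  --  no caller for two seconds: give up
         end select;
      end loop;
   end Buffer;

   X : Integer;

begin
   Buffer.Put (7);
   Buffer.Get (X);
end Buffer_Sketch;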

Timed Entry Call A timed entry call allows a timeout to be set select entry-call-statement or delay xxx; … end select; –We try to do the entry call, but if the task won’t accept it within xxx time, then do the statements after the delay.
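A hedged fragment (it assumes the Buffer task from the sketch above, plus Ada.Text_IO, are in scope): try to store a value, but give up if Buffer does not accept the call within half a second.

   select
      Buffer.Put (7);
      Put_Line ("stored");
   or
      delay 0.5;
      Put_Line ("Buffer did not accept within half a second");
   end select;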

Conditional Entry Call Make a call only if it will be accepted select entry-call … else statements end select; –If the entry call can be accepted immediately, fine; otherwise execute the else statements.

Task Abort Unconditionally terminate a task abort taskname; –task is immediately terminated –(it is allowed to do finalization actions) –but whatever it was doing remains incomplete –code that can be aborted must be careful to leave things in a coherent state if that is important!

Asynchronous Transfer of Control Execute a section of code, abandoning it after a specified time or event select triggering delay statement or entry call then abort statements end select; –The statements start executing and are abandoned as soon as the delay expires or the entry call completes.
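A hedged fragment (Long_Search and Best are hypothetical names, not from the slides, and Ada.Text_IO is assumed in scope): bound a long computation to five seconds, falling back on whatever answer has been produced so far.

   select
      delay 5.0;
      Put_Line ("timed out, using the best answer so far");
   then abort
      Long_Search (Best);   --  hypothetical long-running procedure
   end select;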

Shared Variables A shared variable is one accessed by more than one task –if the variable is declared atomic (pragma Atomic), no restrictions –otherwise, we cannot have two tasks access the same variable without synchronizing (e.g. doing a rendezvous). –the model is that variables can normally be held in registers
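For the first case, a minimal fragment showing how a variable is marked atomic (the name Flag is invented):

   Flag : Boolean := False;
   pragma Atomic (Flag);   --  every read and write of Flag is indivisible and
                           --  goes to memory rather than staying in a register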

Tasking Is Completely General Any possible synchronization problem can be solved using the rendezvous We know this because it is more powerful than CSP/Occam which is itself general That means that any synchronization primitive can be simulated using the RV

An Example, the Semaphore The idea of a (binary) semaphore Two operations, p and v p grabs the semaphore, or waits if it is not available v releases the semaphore A monitor is then p (sem); statements v (sem);

A Semaphore using a Task, RV The specification –task type Semaphore is entry p; entry v; end Semaphore;

A Semaphore using RV The body of semaphore is very simple: –task body Semaphore is begin loop accept p; accept v; end loop; end Semaphore;

Using the Semaphore Abstraction Declare an instance of a semaphore Lock : Semaphore; –Now we can use this semaphore to create a monitor, using Lock.P; code to be protected in monitor Lock.V;
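Putting the last three slides together into one compilable sketch (the Worker tasks and the Put_Line inside the critical section are invented for illustration). One practical addition not on the slides: the semaphore body uses a terminate alternative so the semaphore task can go away once all its potential callers have finished, letting the whole program terminate.

with Ada.Text_IO; use Ada.Text_IO;

procedure Semaphore_Task_Sketch is

   task type Semaphore is
      entry P;
      entry V;
   end Semaphore;

   task body Semaphore is
   begin
      loop
         select
            accept P;
         or
            terminate;        --  addition: allow clean program shutdown
         end select;
         accept V;
      end loop;
   end Semaphore;

   Lock : Semaphore;

   task type Worker;

   task body Worker is
   begin
      Lock.P;
      Put_Line ("inside the critical section");   --  protected code
      Lock.V;
   end Worker;

   Workers : array (1 .. 3) of Worker;   --  three contending tasks

begin
   null;   --  main just waits for its dependent tasks to finish
end Semaphore_Task_Sketch;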

The RV Semaphore Very neat expression Nice high-level semantics But awfully heavy if a real task is involved A case of “abstraction inversion” We expect to see tasks implemented in terms of low-level stuff like semaphores, not the other way round.

Protected Types and Objects A protected type is a data object with locks –specification provides data, like a record, and the locked access routines functions (read the data with read lock) procedures (read/write the data with write lock) entries (wait till some condition is met, then read/write the data with write lock)

Protected Types and Objects There is conceptually no separate thread of control. Body provides code for the functions, procedures and entries These are executed in the calling thread after obtaining necessary locks

Semaphore Using Protected Type Specification has the data and entries: protected type Semaphore is entry P; procedure V; private Grabbed : Boolean := False; end Semaphore; –P is an entry since we may have to wait

Protected Type Semaphore The body provides the code of P and V protected body Semaphore is entry P when not Grabbed is begin Grabbed := True; end P; procedure V is begin Grabbed := False; end; end Semaphore;

Using the Protected Type Semaphore Declare an instance of a semaphore Lock : Semaphore; –Now we can use this semaphore to create a monitor, using Lock.P; code to be protected in monitor Lock.V; –Note: this was cut and paste from the task slide
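As a variation not on the original slides, the same pattern extends naturally to a counting semaphore: the entry guard tests a counter instead of a Boolean, so up to Initial callers can hold the semaphore at once (all names here are invented):

procedure Counting_Sketch is

   protected type Counting_Semaphore (Initial : Natural) is
      entry P;          --  may have to wait, so it must be an entry
      procedure V;      --  never waits, so a procedure suffices
   private
      Count : Natural := Initial;
   end Counting_Semaphore;

   protected body Counting_Semaphore is
      entry P when Count > 0 is
      begin
         Count := Count - 1;
      end P;

      procedure V is
      begin
         Count := Count + 1;
      end V;
   end Counting_Semaphore;

   Pool : Counting_Semaphore (Initial => 3);   --  at most three holders at once

begin
   Pool.P;
   --  use one of the three pooled resources here
   Pool.V;
end Counting_Sketch;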

Requirements for Real Time Eliminate non-determinism –pragma Task_Dispatching_Policy (FIFO_Within_Priorities); means run till blocked, no time slicing reduces the non-determinism typical of “real time threads”, e.g. in NT Define priorities of tasks exact specifications for how priorities are respected Define queuing protocols first in, first out, or by priority of caller
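A hedged configuration sketch of the corresponding pragmas (the choice of Priority_Queuing and the priority value are arbitrary examples, not requirements from the slides):

pragma Task_Dispatching_Policy (FIFO_Within_Priorities);   --  run until blocked or preempted
pragma Queuing_Policy (Priority_Queuing);                  --  entry queues ordered by caller priority
pragma Locking_Policy (Ceiling_Locking);                   --  priority-aware locking of protected objects

with System;

procedure RT_Sketch is
   task Sampler is
      pragma Priority (System.Priority'Last);   --  highest non-interrupt priority
   end Sampler;

   task body Sampler is
   begin
      null;   --  the real periodic work would go here
   end Sampler;
begin
   null;
end RT_Sketch;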

Priority Inheritance Guard against priority inversion –low priority task grabs resource X –high priority task needs resource X, waits –medium priority task preempts the low priority task, and runs for a long time, holding up the high priority task Solution: while the high priority task is waiting, lend its high priority to the low priority task
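For protected objects, Ada's Real-Time annex standardizes the closely related ceiling-locking policy rather than inheritance: each protected object gets a ceiling priority at least as high as any of its callers, and a task executing a protected operation runs at that ceiling, so a medium-priority task can never preempt it while the lock is held. A minimal sketch (the object and operation names are invented; the ceiling here is simply the maximum priority):

pragma Locking_Policy (Ceiling_Locking);

with System;

procedure Ceiling_Sketch is

   protected Resource is
      pragma Priority (System.Priority'Last);   --  ceiling priority of this object
      procedure Use_It;
   end Resource;

   protected body Resource is
      procedure Use_It is
      begin
         null;   --  the caller executes this at the ceiling priority, so it
                 --  cannot be preempted by ordinary tasks while holding the lock
      end Use_It;
   end Resource;

begin
   Resource.Use_It;
end Ceiling_Sketch;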