Proving Correctness and Measuring Performance
CET306, Harry R. Erwin, University of Sunderland

Texts
– Clay Breshears (2009) The Art of Concurrency: A Thread Monkey's Guide to Writing Parallel Applications, O'Reilly Media.
– Mordechai Ben-Ari (2006) Principles of Concurrent and Distributed Programming, Second edition, Addison-Wesley.

Verification of Parallel Algorithms (Ben-Ari)
– Programs are executions of atomic statements.
– Concurrent programs are interleavings of atomic statements from two or more threads.
– To prove or verify a property of a concurrent program, we must show that the property holds for all possible orders of execution.
– All statements from any thread must eventually execute (fairness).

The Critical Section Problem
Accesses (read or write) must be restricted so that only a single thread at a time executes code in the critical section. A solution has to show that the code:
– Enforces mutual exclusion.
– Is free from deadlock: if some processes are trying to enter their critical sections, one must eventually succeed.
– Is free from (individual) starvation: if a process tries to enter its critical section, it must eventually succeed.
We will explore Dekker's algorithm for two threads; the lecture presents it in pseudocode, and a sketch follows below.
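The pseudocode itself did not survive the transcript, so the following is a minimal Java sketch of Dekker's algorithm for two threads (ids 0 and 1). The class and method names are illustrative, not from the lecture; volatile and AtomicBoolean stand in for the atomic reads and writes the pseudocode assumes, and production code would use java.util.concurrent locks instead of busy-waiting.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch of Dekker's algorithm, not the lecture's pseudocode.
class Dekker {
    private final AtomicBoolean[] wants = {
        new AtomicBoolean(false), new AtomicBoolean(false)
    };
    private volatile int turn = 0; // which thread must back off on conflict

    // Preprotocol: call before entering the critical section.
    void lock(int me) {
        int other = 1 - me;
        wants[me].set(true);
        while (wants[other].get()) {      // contention: both want in
            if (turn == other) {
                wants[me].set(false);     // back off
                while (turn == other) { } // busy-wait for our turn
                wants[me].set(true);      // and try again
            }
        }
    }

    // Postprotocol: call after leaving the critical section.
    void unlock(int me) {
        turn = 1 - me;        // hand priority to the other thread
        wants[me].set(false);
    }
}
```

The back-off on turn is what gives freedom from starvation: a thread that has just left its critical section must defer to the other thread before re-entering.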

Approach
You need a synchronisation mechanism: a preprotocol executed before the critical section and a postprotocol executed after it.
– Variables used in the pre- and postprotocols are not used elsewhere.
– No deadlock within the critical section.
– No constraints are placed on the non-critical code, which may terminate or loop forever.
– Good solutions are efficient.
The usage sketch below shows this structure.
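A hypothetical driver for the Dekker sketch above shows that shape: the preprotocol and postprotocol bracket the critical section, and the protocol variables (wants, turn) appear nowhere else.

```java
// Hypothetical usage of the Dekker class sketched earlier.
class DekkerDemo {
    static final Dekker mutex = new Dekker();
    static int counter = 0; // shared state, touched only in the critical section

    static void work(int me) {
        for (int i = 0; i < 100_000; i++) {
            mutex.lock(me);   // preprotocol
            counter++;        // critical section
            mutex.unlock(me); // postprotocol
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t0 = new Thread(() -> work(0));
        Thread t1 = new Thread(() -> work(1));
        t0.start(); t1.start();
        t0.join();  t1.join();
        System.out.println(counter); // 200000 on every run if mutual exclusion holds
    }
}
```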

Ben-Ari Discussion of the Critical Section Problem
The instructor's slides for this text are restricted and are not available to students, but the material covered is available in both texts. Look up Peterson's algorithm; a sketch follows below.
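Peterson's algorithm gives the same guarantees with a shorter protocol. The sketch below is illustrative Java under the same assumptions as the Dekker sketch above; a single write to turn replaces Dekker's back-off loop.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch of Peterson's algorithm for two threads (ids 0 and 1).
class Peterson {
    private final AtomicBoolean[] wants = {
        new AtomicBoolean(false), new AtomicBoolean(false)
    };
    private volatile int turn = 0;

    void lock(int me) {
        int other = 1 - me;
        wants[me].set(true);
        turn = other; // defer: let the other thread go first on a tie
        while (wants[other].get() && turn == other) { } // busy-wait
    }

    void unlock(int me) {
        wants[me].set(false);
    }
}
```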

These Conditions Lead to Deadlock
– Mutual exclusion condition: individual resources are either available or held by no more than one thread at a time.
– Hold and wait condition: threads that already hold some resources may attempt to acquire new ones.
– No preemption condition: once a thread holds a resource, the resource can be removed only when the holding thread voluntarily releases it.
– Circular wait condition: a circular chain of threads can exist in which each thread requests a resource held by the next thread in the chain.
To prevent deadlock, at least one of these conditions must be made impossible. Any text on operating-systems theory discusses deadlock in detail; a small illustration follows below.
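As an illustration (not from the lecture), the hypothetical program below creates a circular wait: each thread holds one monitor lock and requests the one held by the other, so all four conditions hold at once. Making any one condition impossible, for instance by imposing a global lock-acquisition order, prevents the deadlock.

```java
// Hypothetical two-lock deadlock: run it and both threads hang.
class DeadlockDemo {
    static final Object a = new Object();
    static final Object b = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (a) {           // hold a ...
                pause();
                synchronized (b) { }     // ... and wait for b
            }
        }).start();
        new Thread(() -> {
            synchronized (b) {           // hold b ...
                pause();
                synchronized (a) { }     // ... and wait for a: circular wait
            }
        }).start();
    }

    static void pause() { // widen the window so the deadlock is reliable
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    }
}
```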

Measuring Performance
After correctness, you are concerned about two things:
– How fast is it?
– How efficient is it?
Elapsed-time measurements tell you whether your concurrent implementation beats the serial one. The ratio of the two is the speed-up factor: report serial time divided by parallel time as a multiplier. Don't cheat: use the best available version of the algorithm in each case. A minimal measurement sketch follows below.
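A minimal measurement sketch, with an illustrative workload and names of my own choosing: it times a serial summation and a two-thread split of the same work, checks that both produce the same answer, and reports the serial/parallel ratio.

```java
// Illustrative speed-up measurement, not a rigorous benchmark
// (a real one would warm up the JVM and average several runs).
public class SpeedupDemo {
    static final long N = 400_000_000L;

    static long sumRange(long from, long to) {
        long s = 0;
        for (long i = from; i < to; i++) s += i;
        return s;
    }

    public static void main(String[] args) throws InterruptedException {
        long t0 = System.nanoTime();
        long serialResult = sumRange(0, N);
        long serialNs = System.nanoTime() - t0;

        long[] partial = new long[2];
        t0 = System.nanoTime();
        Thread lo = new Thread(() -> partial[0] = sumRange(0, N / 2));
        Thread hi = new Thread(() -> partial[1] = sumRange(N / 2, N));
        lo.start(); hi.start();
        lo.join();  hi.join();
        long parallelNs = System.nanoTime() - t0;

        // Check correctness first, then report serial/parallel as a multiplier.
        System.out.println(serialResult == partial[0] + partial[1]);
        System.out.printf("speed-up = %.2fx%n", (double) serialNs / parallelNs);
    }
}
```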

Number of Cores
If you can control the number of cores in use, show how the speed-up ratio depends on the core count; there will usually be diminishing returns, which relates to scalability. If you don't see diminishing returns, investigate why, because it is usually an error: the usual cause is that the data fit into the combined cache once the work is shared among cores.

Amdahl's Law
Read Multicore-computer-2008.pdf.
Speedup = 1 / ((1 - P) + (P / S))
– P is the proportion of the code that can be parallelised.
– S is the speedup achieved on the parallelisable code.
Only if P = 1.0 will the overall speedup equal S. Even an infinite number of cores still produces a finite run time, bounded by the serial fraction.
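A small sketch that evaluates the formula, assuming the parallelisable part scales perfectly so that S equals the core count:

```java
// Amdahl's law: overall speedup is capped by the serial fraction (1 - P).
public class Amdahl {
    static double speedup(double p, double s) {
        return 1.0 / ((1.0 - p) + p / s);
    }

    public static void main(String[] args) {
        double p = 0.9; // 90% of the run time is parallelisable
        for (int cores : new int[] { 1, 2, 4, 8, 16, 1_000_000 }) {
            System.out.printf("cores = %-8d speedup = %.2f%n", cores, speedup(p, cores));
        }
        // As cores grow without bound, speedup approaches 1 / (1 - P) = 10x:
        // infinitely many cores still leave a finite run time.
    }
}
```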

Gustafson-Barsis's Law
Where Amdahl's law fixes the problem size, Gustafson-Barsis's law considers problems that scale with the machine and gives a more optimistic estimate: Speedup = N - (1 - P)(N - 1) on N cores. In practice you rarely do that well, because the data overhead per core moves some of the parallelised code into the category of unparallelised code.

Efficiency
You can rarely use N > 1 cores as efficiently as one core. Report speedup divided by the number of cores as the efficiency (use the number of threads instead if that is larger). For example, a 6x speedup on 8 cores is 75% efficiency.

Next Week
Eight Simple Rules for Designing Multithreaded Applications