Pitfalls: Time Dependent Behaviors CS433 Spring 2001 Laxmikant Kale

2 Plan for this week
Monday: Pitfalls: time-dependent behaviors
Wednesday: loop/data parallelism: OpenMP
Friday: loop parallelism on distributed memory machines:
–High Performance Fortran
MP1 is coming out tomorrow
Project list will be out tomorrow
–Projects and info will be added to the list as and when necessary
Next week: parallel application classes

3 What makes parallel programming hard?
Many things, but one of them is: time-dependent behavior
–Other things include performance!
This issue makes it harder to get a parallel program running correctly
Affects all 3 models:
–MPI
–Pthreads / SAS
–Charm++

4 SAS model
Locks and deadlocks
Race conditions

5 Deadlocks
Defined as a “circular wait”
Example:
–Two threads, A and B, and two locks, M and N
–Thread A locks M, then tries to lock N
–Thread B locks N, then tries to lock M
–They may both do their work most of the time,
–but once in a while the program locks up…
A: repeat { compute(); lock(M); lock(N); critical(); unlock(N); unlock(M); }
B: repeat { compute(); lock(N); lock(M); critical(); unlock(M); unlock(N); }
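A minimal pthreads rendering of the A/B scenario above (the lock names M and N follow the slide; the compute() and critical() bodies are omitted as placeholders). Run long enough, the two threads eventually interleave so that A holds M while B holds N, and both block forever:

#include <pthread.h>

pthread_mutex_t M = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t N = PTHREAD_MUTEX_INITIALIZER;

void* threadA(void*) {
    for (;;) {
        // compute();
        pthread_mutex_lock(&M);    // A takes M first...
        pthread_mutex_lock(&N);    // ...then N
        // critical();
        pthread_mutex_unlock(&N);
        pthread_mutex_unlock(&M);
    }
}

void* threadB(void*) {
    for (;;) {
        // compute();
        pthread_mutex_lock(&N);    // B takes N first...
        pthread_mutex_lock(&M);    // ...then M: if A holds M and B holds N,
                                   // each waits for the other forever
        // critical();
        pthread_mutex_unlock(&M);
        pthread_mutex_unlock(&N);
    }
}

int main() {
    pthread_t a, b;
    pthread_create(&a, nullptr, threadA, nullptr);
    pthread_create(&b, nullptr, threadB, nullptr);
    pthread_join(a, nullptr);      // never returns once the deadlock hits
    pthread_join(b, nullptr);
}

Note that most runs proceed fine; the bug is time-dependent, which is exactly what makes it hard to reproduce and debug.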

6 Deadlock solution strategies
Lock only one lock at a time:
–impractical
Order the locks (see the sketch below)
–Always request the locks in a particular order, and release them in the reverse order
–This is sometimes hard to do. The lock statements may not sit next to each other as nicely as in this example: you lock what you need as you go along in the code…
Break the deadlock when it happens
–This strategy is used in databases and distributed computing,
–but not much in parallel computing
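A minimal sketch of the lock-ordering fix, reusing the lock names from the previous slide: every thread funnels through one helper that always takes M before N, so a circular wait can never form.

#include <pthread.h>

pthread_mutex_t M = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t N = PTHREAD_MUTEX_INITIALIZER;

void critical_section() { /* placeholder for critical() */ }

// Both threads call this helper, so the global order "M before N"
// is respected everywhere and deadlock is impossible.
void locked_critical() {
    pthread_mutex_lock(&M);      // always first in the global order
    pthread_mutex_lock(&N);      // always second
    critical_section();
    pthread_mutex_unlock(&N);    // release in reverse order
    pthread_mutex_unlock(&M);
}

void* threadFn(void*) {
    for (int i = 0; i < 1000; i++) locked_critical();
    return nullptr;
}

int main() {
    pthread_t a, b;
    pthread_create(&a, nullptr, threadFn, nullptr);
    pthread_create(&b, nullptr, threadFn, nullptr);
    pthread_join(a, nullptr);
    pthread_join(b, nullptr);
}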

7 Race conditions
The result, output, or behavior of the parallel program differs from run to run:
–Example: a queue of tasks
Each thread: repeat { task = dequeue(); print(threadnum, tasknum); }
Even if the queue is protected by a lock, the output pairs of numbers may differ from run to run
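A pthreads sketch of the task-queue example (the queue, worker, and count names here are illustrative assumptions). The dequeue itself is correctly lock-protected, so each task is taken exactly once, yet which thread prints which task number still varies from run to run:

#include <pthread.h>
#include <cstdio>
#include <queue>

pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
std::queue<int> task_queue;              // filled with task numbers 0..15

void* worker(void* arg) {
    long threadnum = (long)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);      // the dequeue is properly protected
        if (task_queue.empty()) { pthread_mutex_unlock(&qlock); break; }
        int tasknum = task_queue.front();
        task_queue.pop();
        pthread_mutex_unlock(&qlock);
        std::printf("%ld %d\n", threadnum, tasknum);  // pairing is a race
    }
    return nullptr;
}

int main() {
    for (int i = 0; i < 16; i++) task_queue.push(i);
    pthread_t t[4];
    for (long i = 0; i < 4; i++) pthread_create(&t[i], nullptr, worker, (void*)i);
    for (int i = 0; i < 4; i++) pthread_join(t[i], nullptr);
}

This particular race is benign, which is the first case classified on the next slide.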

8 Race conditions
Kinds of race conditions and how to handle them:
–Benign (as in the previous example): ignore
–Priority queue in a search program: the set of explored nodes may differ from run to run
–Asynchronous algorithms: allow race conditions, but the program behaves correctly anyway
–Take care of race conditions in the program: explicit programming
–Prevent race conditions: typically using locks (see the sketch below)
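To illustrate the last point, a minimal sketch of preventing a harmful race with a lock: two threads increment a shared counter. Without the mutex, counter++ is a read-modify-write race and updates are lost on some runs; with it, the result is deterministic.

#include <pthread.h>
#include <cstdio>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
long counter = 0;

void* increment(void*) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    // remove the lock/unlock pair and the
        counter++;                    // final count becomes time-dependent
        pthread_mutex_unlock(&lock);
    }
    return nullptr;
}

int main() {
    pthread_t a, b;
    pthread_create(&a, nullptr, increment, nullptr);
    pthread_create(&b, nullptr, increment, nullptr);
    pthread_join(a, nullptr);
    pthread_join(b, nullptr);
    std::printf("counter = %ld\n", counter);   // always 2000000 with the lock
}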

9 MPI: deadlocks
Deadlocks:
–Two processes each issue a receive, waiting for a message from the other, before sending their own message
–Again, this simple setting is easy to spot, but real programs can have more complex situations
Strategies (see the sketch below):
–Move sends ahead of receives whenever possible
–Use asynchronous receives (irecv): the user specifies a buffer in which to receive, and goes on
Check from time to time whether the receive has completed
Wait for all pending receives when you can’t go on
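A minimal MPI sketch of both the pitfall and the fix for a two-rank exchange (the rank numbers and buffers are illustrative). The commented-out blocking version can deadlock, with both ranks stuck in MPI_Recv; posting MPI_Irecv first, then sending, then waiting cannot:

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int peer = 1 - rank;                 // assumes exactly 2 ranks
    int sendbuf = rank, recvbuf = -1;

    // Deadlock-prone version: both ranks block in the receive first.
    // MPI_Recv(&recvbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // MPI_Send(&sendbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);

    // Safe version: post the receive asynchronously, then send, then wait.
    MPI_Request req;
    MPI_Irecv(&recvbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &req);
    MPI_Send(&sendbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);
    MPI_Wait(&req, MPI_STATUS_IGNORE);   // block only when we can't go on

    std::printf("rank %d got %d\n", rank, recvbuf);
    MPI_Finalize();
}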

10 MPI: buffer overflows
MPI sends and receives are not synchronized across processors
–I.e., the sender doesn’t wait until the receive is executed
So the data (message) must be buffered until it is received
–This may lead to buffer overflows, which typically crash the program
–These are time-dependent: in some runs, the receiver may be fast enough to empty the buffers in time
–Can’t allow this in critical, real-time applications
Solutions (see the sketch below):
–Guarantee adequate buffer size: via analysis; MPI implementations allow specification of buffer sizes
–Synchronized data exchange
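A minimal sketch of the synchronized-exchange option, reusing the two-rank setup above: MPI_Ssend completes only after the matching receive has started, so the message never has to sit in an unbounded system buffer. The receive is posted first, as in the previous sketch, so the two synchronous sends cannot deadlock each other:

#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int peer = 1 - rank;                 // assumes exactly 2 ranks
    int sendbuf = rank, recvbuf = -1;

    MPI_Request req;
    MPI_Irecv(&recvbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &req);
    MPI_Ssend(&sendbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);
                                         // synchronous send: returns only once
                                         // the matching receive has been posted,
                                         // so no overflow-prone buffering
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    MPI_Finalize();
}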

11 Charm++
The asynchronous model implies that deadlocks are not possible
–There is no waiting statement (no recv, no lock, …)
But asynchronous behavior means you cannot make assumptions about when things will happen
–Object A sends a method invocation to object B, then prints “A”
–Object B prints “B” when it executes the method
–The two prints may happen in either order

12 Charm++
Example:
–Object A sends two messages (method invocations) to B
–They may arrive at B (and be executed) in a different order
Another example:
–You contribute to a reduction, and then send a message to a neighboring chare
–There is no particular order in which the reduction result and the neighbor message will arrive: you must program for both possibilities
Solution strategy (see the sketch below):
–Program for different message orders
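A hedged Charm++-style sketch of “program for both orders” for the reduction-plus-neighbor example: the chare counts arrivals and proceeds only once it has both messages, whichever comes first. The .ci interface file is omitted, and all names here (Worker, recvReduction, recvNeighbor, proceed) are illustrative assumptions rather than fixed Charm++ API; CBase_Worker and the decl/def headers follow the usual Charm++ code-generation convention.

#include "worker.decl.h"   // generated from the (omitted) .ci file

class Worker : public CBase_Worker {
    int arrived;             // how many of the two expected messages have come
    double reduced, neighborVal;

public:
    Worker() : arrived(0), reduced(0.0), neighborVal(0.0) {}

    // Entry method: delivers the reduction result (may arrive first or second).
    void recvReduction(double v) { reduced = v; checkDone(); }

    // Entry method: delivers the neighbor's message (may arrive first or second).
    void recvNeighbor(double v) { neighborVal = v; checkDone(); }

private:
    void checkDone() {
        if (++arrived == 2)   // both messages in hand, in whatever order
            proceed();        // only now is it safe to continue
    }
    void proceed() { /* next phase of the computation */ }
};

#include "worker.def.h"

The same counter pattern generalizes to any fixed set of messages whose arrival order cannot be assumed.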