Games Development 2 Concurrent Programming CO3301 Week 9.

Today's Lecture
1. Concurrent Programming vs. Parallel Programming
2. Processes & Threads
3. Data / Resource Coordination
4. 'Race' Conditions
5. Synchronisation
   – Critical Sections, Mutexes, Semaphores
   – Timers, Events
6. Deadlocks

Concurrent Programming
A concurrent program simultaneously executes multiple interacting computational tasks
– Distinct from a parallel program, see later
The tasks may be separate programs, or a set of processes / threads created by a single program
The tasks may execute on a single processor, on several processors in close proximity (e.g. multi-core), or distributed across a network
The focus of concurrent programming is the interaction between tasks, and the coordination of shared resources

Parallel Programming
By contrast, a parallel program simultaneously executes a single task across several processors
– To achieve the same result faster
Based on the idea that a complex task can often be broken up into smaller ones
– In the simplest cases the task is naturally a set of independent sub-tasks, e.g. ray-tracing – each pixel can be computed separately
The level of coordination is typically less of a burden than in concurrent programming
Few, if any, areas in games are strictly parallel
– Graphics perhaps, but mainly on the GPU side, and it still needs to be synchronised with the game

Processes
A program is just a passive set of instructions
– Whereas a process is an active instance of a program, actually being executed
There may be several instances of the same program running – several processes
– E.g. running two instances of your browser
Each process has a distinct set of resources:
– An image of the executable program
  A copy of the instructions – it needs to be a copy in case it is changed
– A section of memory (real and/or virtual), including stack, heap, etc.
– A set of open system resources: files, windows, etc.
– Security settings: owner, permissions
– Processor state: e.g. registers, flags

Threads
A process may in turn contain several threads of execution
– Multiple parts of the same program running at the same time
Threads share the process's resources:
– The image of the executable program
– The process memory (including global variables)
– Some security settings: owner, permissions
Processes may be single-threaded or multi-threaded
– All our programs have been single-threaded so far
– Multi-threaded processes can be more efficient (done right!)
Note that multi-threaded processes are typically more efficient than multi-process programs
– Less setup & communication, since threads share resources

Data Coordination
A major issue in concurrent computing is preventing concurrent processes from interfering with each other
Consider this pseudo-code:
    if (Balance >= withdrawal)
        Balance -= withdrawal
        return true
    else
        return false
Given Balance = 500 and two threads simultaneously trying to withdraw 300 and 350:
– Both threads might execute the 1st line before either executes the 2nd
– Both withdrawals are accepted
You might expect this to be very unlikely and rare
– Processor speed means it happens much sooner than you think

Resource Coordination
Similar issues arise with the sharing of resources
Consider a file used by two processes
– There is a problem if both processes attempt to access the file at exactly the same time
– E.g. one process rewrites the content of the file...
– ...whilst another is in the process of reading it
– It might read part old, part new data – data corruption
The same applies to:
– Graphics and other hardware devices
– Input – keyboard, controllers
  One thread reads the key-down event, another reads the key-up...
– Output – display, logs, etc.
  Two threads trying to write multi-line output to the same log at the same time...

Race Conditions
The previous examples are kinds of race condition
– Two processes racing to complete their task first
A race condition is a flaw in a concurrent system where the exact sequence or timing of events affects the output
– Race conditions are the cause of almost all bugs peculiar to concurrent systems
– They can be rare and very difficult to track down
– It is often impossible to reproduce these bugs consistently
Almost always due to shared data / resources being accessed almost simultaneously
– I.e. not enough consideration given to the effects of multi-threading on shared data / resources

Data Synchronisation
Almost all solutions to data coordination / race condition problems involve locking
– The idea that a resource, a piece of data or a section of code can be locked to a single process or thread
A range of locking techniques is available:
– Critical sections, mutexes, semaphores, file locks, etc.
– Each limits thread access to code or resources
– Get these features from libraries, or code your own
  Conceptually simple, but they must be precisely written
All these techniques bring with them the potential problem of deadlocks:
– Discussed later

Data Synchronisation Objects
A critical section is a section of code that can only be executed by a single thread at a time
This section of code is assumed to be accessing data that needs careful synchronisation
– It doesn't lock the data though, just the code
The bank balance code could have been made into a critical section
– See the example in this week's lab
Typically, critical sections apply to multiple threads within a single process
– Not to multi-process situations, e.g. shared DLLs

Data Synchronisation Objects
A mutex is an object that can only be owned (held, signalled) by a single thread at a time
– Threads request the specific mutex associated with a task / piece of data
– Other threads requesting the same mutex will fail
– The owner releases the mutex when finished
– Effectively a more general form of a critical section
A semaphore is an object that can be held by up to N threads simultaneously
– Allows a section / resource to be shared by a few processes, but not an unlimited number
– E.g. limiting the number of files, windows or other resources that can be open simultaneously

Data Synchronisation Objects
It is also possible to synchronise processes and threads using more traditional means:
Timers can pause a thread until a certain time, or repeatedly wake / sleep a thread
– Allows periodic processing, or waiting for a resource to be freed
Threads can also be made to respond to events
– Wake a thread when some other process is complete
We can also make our own synchronisation objects:
– E.g. a boolean, tested before running code or accessing data
– Higher performance for simple, yet time-critical cases
– Care required – may need atomic operations to test / toggle

Blocking
When a thread or process is prevented from accessing data or executing code by a synchronisation object, it is said to be blocked
– The thread holding the object is blocking
We must decide what to do when a thread is blocked:
– Wait for the code / data to become available
  Allow the thread to stall (sleep) – losing the advantages of concurrency
  Can add a timeout to help limit how long to wait
– Skip the task that requires the blocked data / code
  Better for concurrent performance, but not always possible
  What to do while waiting? How to cleanly architect the return to the original task?

Deadlocks
The main problem with most synchronisation objects is that of deadlock
Consider two threads trying to lock two resources (e.g. two files)
– Each manages to lock one, but then stalls waiting for the other resource to become available
– The deadlock will not resolve itself – each thread will wait forever for the other
Can be resolved by better use of synchronisation objects:
– Consider the two files as a group of resources
– Associate a single mutex with the ownership of any part of the group
– Once a thread owns the mutex (to open the 1st file), the other thread is blocked from holding it and can't try to open the 2nd file