Games Development 2
Concurrent Programming
CO3301 Week 9
Today's Lecture
1. Concurrent Programming
   – Parallel Programming
2. Processes & Threads
3. Data / Resource Coordination
4. 'Race' Conditions
5. Synchronisation
   – Critical Sections, Mutexes, Semaphores
   – Timers, Events
6. Deadlocks
Concurrent Programming
- A concurrent program simultaneously executes multiple interacting computational tasks
  – Distinct from a parallel program, see later
- The tasks may be separate programs, or a set of processes / threads created by a single program
- The tasks may execute on a single processor, on several processors in close proximity (e.g. multi-core), or distributed across a network
- The focus of concurrent programming is the interaction between tasks, and the coordination of shared resources
Parallel Programming
- By contrast, a parallel program simultaneously executes a single task across several processors
  – To achieve the same result faster
- Based on the idea that a complex task can often be broken up into smaller ones
  – In the simplest cases, the task is naturally made up of a set of independent sub-tasks, e.g. ray-tracing – each pixel can be done separately
- The level of coordination needed is typically less of a burden than in concurrent programming
- Few, if any, areas in games are strictly parallel
  – Graphics perhaps, but mainly on the GPU side, and it still needs to be synchronised with the game
Processes
- A program is just a passive set of instructions
  – Whereas a process is an active instance of a program, actually being executed
- There may be several instances of the same program running – several processes
  – E.g. running two instances of your browser
- Each process has a distinct set of resources:
  – An image of the executable program
    A copy of the instructions – needs to be a copy in case it's changed
  – A section of memory (real and/or virtual), including stack, heap etc.
  – A set of open system resources: files, windows etc.
  – Security settings: owner, permissions
  – Processor state: e.g. registers, flags
Threads
- A process may in turn contain several threads of execution
  – Multiple parts of the same program running at the same time
- Threads share the process resources:
  – The image of the executable program
  – The process memory (including global variables)
  – Some security settings: owner, permissions
- Processes may be single-threaded or multi-threaded
  – All our programs have been single-threaded so far
  – Multi-threaded processes can be more efficient (done right!)
- Note that multi-threaded processes are more efficient than multi-process programs
  – Less setup & communication, since threads share resources
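As a minimal sketch (an illustration, not from the lecture – the course materials may use Win32 threads instead), here is how two threads of execution are created within one process using standard C++ std::thread. Note that both threads see the same global variable, since threads share the process memory:

    #include <iostream>
    #include <thread>

    int g_counter = 0;  // Global data: shared by every thread in this process

    void Worker(int id)
    {
        ++g_counter;  // Both threads update the same variable - see the
                      // race condition discussion later
        std::cout << "Thread " << id << " ran, counter = " << g_counter << "\n";
    }

    int main()
    {
        // Two threads of execution within a single process, sharing its memory
        std::thread t1(Worker, 1);
        std::thread t2(Worker, 2);
        t1.join();  // Wait for both threads before the process exits
        t2.join();
    }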
Data Coordination
- A major issue in concurrent computing is preventing concurrent processes from interfering with each other
- Consider this withdrawal code:

    bool Withdraw(int withdrawal)
    {
        if (Balance >= withdrawal)  // Step 1: test the balance
        {
            Balance -= withdrawal;  // Step 2: update it - a second thread
            return true;            // may run between steps 1 and 2
        }
        return false;
    }

- Given Balance = 500 and two threads simultaneously trying to withdraw 300 and 350:
  – Both threads might execute the first step before either executes the second
  – Both withdrawals accepted
- Might expect this to be very unlikely and rare
  – Processor speed means it happens much sooner than you think
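A minimal harness (an illustration, not part of the slides) that exercises the function above from two threads; run it enough times and both withdrawals can succeed, leaving Balance negative:

    #include <iostream>
    #include <thread>

    int Balance = 500;

    bool Withdraw(int withdrawal)
    {
        if (Balance >= withdrawal)  // Test...
        {
            Balance -= withdrawal;  // ...then update: not atomic as a pair
            return true;
        }
        return false;
    }

    int main()
    {
        bool a = false, b = false;
        std::thread t1([&] { a = Withdraw(300); });
        std::thread t2([&] { b = Withdraw(350); });
        t1.join();
        t2.join();
        // Occasionally both succeed and Balance ends up at -150
        std::cout << "300 ok: " << a << "  350 ok: " << b
                  << "  Balance: " << Balance << "\n";
    }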
Resource Coordination
- Similar issues arise with the sharing of resources
- Consider a file used by two processes
  – Problem if both processes attempt to access the file at exactly the same time
  – E.g. one process rewrites the content of the file...
  – ...whilst another is in the process of reading it
  – Might read part old, part new data – data corruption
- Equally applies to:
  – Graphics and other hardware devices
  – Input – keyboard, controllers
    One thread reads the key-down event, another reads the key-up...
  – Output – display, logs etc.
    Two threads trying to write multiline output to the same log at the same time... (see the sketch below)
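To make that last point concrete, a small sketch (my illustration, not from the slides): two threads each write two-line records to the same stream, and with no locking the records from the two threads can interleave:

    #include <iostream>
    #include <thread>

    // Each call writes multi-line records; with no locking, records from
    // the two threads can interleave in the shared output
    void WriteLog(const char* who)
    {
        for (int i = 0; i < 3; ++i)
        {
            std::cout << "[" << who << "] record " << i << ", line 1\n";
            std::cout << "[" << who << "] record " << i << ", line 2\n";
        }
    }

    int main()
    {
        std::thread t1(WriteLog, "A");
        std::thread t2(WriteLog, "B");
        t1.join();
        t2.join();
    }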
Race Conditions
- The previous examples are kinds of race condition
  – Two processes racing to complete their task first
- A flaw in a concurrent system where the exact sequence or timing of events affects the output
  – Race conditions are almost always responsible for bugs in concurrent systems
  – They can be rare and very difficult to track down
  – Often impossible to reproduce these bugs consistently
- Almost always due to shared data / resources being accessed almost simultaneously
  – Not enough consideration of the effects of multi-threading with shared data / resources
Data Synchronisation
- Almost all solutions to data coordination / race condition problems involve locking
  – The idea that a resource, piece of data or section of code can be locked to a single process or thread
- A range of locking techniques is available:
  – Critical sections, mutexes, semaphores, file locks etc.
  – Each limits thread access to code or resources
  – Get these features from libraries, or code your own
    Conceptually simple, but must be precisely written
- All these techniques bring with them the potential problem of deadlocks:
  – Discussed later
Data Synchronisation Objects
- A critical section is a section of code that can only be accessed by a single thread at a time
- This section of code is assumed to be accessing data that needs careful synchronisation
  – Doesn't lock the data though, just the code
- The bank balance code could have been made into a critical section
  – See the example in this week's lab
- Typically critical sections apply to multiple threads within a single process
  – Not multi-process situations, e.g. shared DLLs
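As a sketch of the idea (not the lab's actual code – the lab may well use the Win32 CRITICAL_SECTION API), standard C++ gives the same single-process behaviour with std::mutex and std::lock_guard. The bank balance code becomes a critical section like this:

    #include <iostream>
    #include <mutex>
    #include <thread>

    int Balance = 500;
    std::mutex BalanceLock;  // Guards the test-then-update sequence

    bool Withdraw(int withdrawal)
    {
        std::lock_guard<std::mutex> lock(BalanceLock);  // One thread at a
        if (Balance >= withdrawal)                      // time from here to
        {                                               // the function's end
            Balance -= withdrawal;
            return true;
        }
        return false;
    }

    int main()
    {
        bool a = false, b = false;
        std::thread t1([&] { a = Withdraw(300); });
        std::thread t2([&] { b = Withdraw(350); });
        t1.join();
        t2.join();
        // Now exactly one withdrawal succeeds, every run
        std::cout << a << " " << b << "  Balance: " << Balance << "\n";
    }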
Data Synchronisation Objects
- A mutex is an object that can only be owned (held, signalled) by a single thread, in a single process, at a time
  – Threads request a specific mutex associated with a task / data
  – Other threads requesting the same mutex will fail
  – Release the mutex when finished
  – Effectively a more general form of critical section
- A semaphore is an object that can be held by up to N threads simultaneously
  – Allows a section / resource to be shared by a few processes, but not an unlimited number
  – E.g. limiting the number of files, windows or other resources that can be opened simultaneously
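A sketch of the semaphore idea, using C++20's std::counting_semaphore (on Windows the equivalent kernel objects come from the CreateMutex / CreateSemaphore APIs): at most two of the five threads below hold a "slot" at any moment, the rest block until one is released:

    #include <chrono>
    #include <iostream>
    #include <semaphore>
    #include <thread>
    #include <vector>

    // A semaphore initialised to 2: up to N = 2 threads may hold it at once
    std::counting_semaphore<2> openSlots(2);

    void UseResource(int id)
    {
        openSlots.acquire();  // Blocks while 2 threads already hold slots
        std::cout << "Thread " << id << " using the resource\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        openSlots.release();  // Hand the slot to a waiting thread
    }

    int main()
    {
        std::vector<std::thread> threads;
        for (int i = 0; i < 5; ++i)
            threads.emplace_back(UseResource, i);
        for (auto& t : threads)
            t.join();
    }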
Data Synchronisation Objects
- Also possible to synchronise processes and threads using more traditional means:
- Timers can pause a thread until a certain time, or repeatedly wake/sleep a thread
  – Allow periodic processing, or wait for a resource to be freed
- Threads can also be made to respond to events
  – Wake a thread when some other process is complete
- We can also make our own synchronisation objects:
  – E.g. a boolean, tested before running code or accessing data
  – Higher performance for simple, yet time-critical cases
  – Care required – may need atomic operations to test/toggle
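That last point is the subtle one: a plain bool is not safe to test and set from two threads, because the test and the toggle are separate steps. A sketch of a hand-rolled lock using an atomic test-and-set (std::atomic_flag here; the Interlocked* intrinsics would be the Win32 route):

    #include <atomic>

    // A home-made lock: a flag tested and set in one atomic operation.
    // test_and_set returns the flag's previous value, so the loop spins
    // until this thread is the one that flipped it from false to true.
    std::atomic_flag inUse = ATOMIC_FLAG_INIT;

    void DoGuardedWork()
    {
        while (inUse.test_and_set(std::memory_order_acquire))
            ;  // Busy-wait (spin) - only sensible if waits are very short

        // ...the protected code or data access goes here...

        inUse.clear(std::memory_order_release);  // Unlock
    }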
Blocking
- When a thread or process is prevented from accessing data or executing code due to a synchronisation object, it is said to be blocked
  – The thread holding the object is blocking
- Must decide what to do when a thread is blocked:
  – Wait for the code / data to become available
    Allows the thread to stall (sleep) – lose the advantages of concurrency
    Can add a timeout to help limit how long to wait
  – Skip the task that requires the blocked data / code
    Better for concurrent performance, but not always possible
    What to do while waiting? How to cleanly architect the return to the original task?
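A sketch of both choices using a timed lock (std::timed_mutex; the names resourceLock and Update are illustrative, not from the slides): wait with a timeout, and skip the task if the timeout expires:

    #include <chrono>
    #include <iostream>
    #include <mutex>

    std::timed_mutex resourceLock;

    void Update()
    {
        // Choice 1: wait, but with a timeout so we don't stall forever
        if (resourceLock.try_lock_for(std::chrono::milliseconds(5)))
        {
            // ...work with the shared resource...
            resourceLock.unlock();
        }
        else
        {
            // Choice 2: blocked past the timeout - skip this task for now
            std::cout << "Resource busy, skipping this frame\n";
        }
    }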
Deadlocks
- The main problem with most synchronisation objects is that of deadlock
- Consider two threads each trying to lock the same two resources, e.g. two files
  – Each manages to lock one, but then stalls waiting for the other one to become available
  – The deadlock will not be resolved – each thread will wait forever for the other
- Can be resolved by better use of synchronisation objects
  – Consider the two files as a group of resources
  – Associate a single mutex with ownership of any part of the group
  – Once a thread owns the mutex (to open the 1st file), the other thread is blocked from holding it and can't try to open the 2nd file
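A sketch of that fix (illustrative names, with std::mutex standing in for the mutex object): one lock guards the whole group, so no thread can ever hold one file while waiting on the other:

    #include <mutex>

    // One mutex guards the whole group of resources (both files)
    std::mutex fileGroupLock;

    void UseBothFiles()
    {
        std::lock_guard<std::mutex> lock(fileGroupLock);
        // ...open and use the 1st file...
        // ...open and use the 2nd file...
    }   // Lock released here: another thread may now claim the group

For completeness, standard C++ also offers std::scoped_lock, which acquires several mutexes together without risk of deadlock; the single group mutex above follows the approach the slide describes.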