1 CSC 480 - Multiprocessor Programming, Spring, 2012 Outline for Chapters 1-3 and 13 Dr. Dale E. Parson

2 Multiple processes (timesharing) and threads on uniprocessors (ch 1) A process is program execution in an address space. A process consists of one or more threads. A thread is an executing instruction stream. In a single-threaded program the process is the thread; a multi-threaded process houses two or more executing instruction streams. Why interleaved execution on uniprocessors? Resource utilization – run ready threads while others block. Fairness – share among users at a fine temporal grain. Convenience – concurrency is a form of modularity.

3 Threads on uniprocessors and multiprocessors On a uniprocessor the operating system emulates concurrency by allocating CPU registers to one thread while maintaining the state of other threads in an execution queue. A multiprocessor provides the registers and other CPU resources for genuine concurrency. It is always possible to have more software threads than hardware threads. The O.S. stores state for non-running threads in memory.

4 Benefits of threads Exploiting multiple processors. Simplicity of modeling: modular concurrency. Simplified handling of asynchronous events: applications can avoid doing their own execution scheduling. More responsive user interfaces: handle user events while the application thread is busy. Distribution via remote method daemons.

5 Risks of threads Safety hazards caused by unsynchronized critical sections: threads must correctly synchronize access to shared data. Liveness hazards caused by interdependence: avoid circular dependence on shared resources, and avoid withholding resources from the threads that need them. Performance hazards caused by poor design: context switching takes time, and busy polling can congest access to shared resources.

6 Thread safety (ch 2) Unsynchronized multithreaded access to mutable data is broken. Symptoms may be timing-dependent. Three ways to fix unsynchronized access: do not share the state variable across threads; make the state variable immutable; or use synchronization for every access. Use good object-oriented techniques: encapsulation, immutability, and clear specification.

7 Thread Safety A class is thread-safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, and with no additional synchronization or other coordination on the part of the calling code. (p. 18) Thread-safe classes encapsulate any needed synchronization so that clients need not provide their own. Stateless objects are always thread safe.

8 Atomicity A race condition occurs when the correctness of a computation depends on the relative timing or interleaving of multiple threads by the runtime. (p. 20) Operations A and B are atomic with respect to each other if, from the perspective of a thread executing A, when another thread executes B, either all of B has executed or none of it has. An atomic operation is one that is atomic with respect to all operations, including itself, that operate on the same state. (p. 22)
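The definitions can be made concrete with a small sketch. The class below (HitCounter is a hypothetical name, not from the text or slides) shows a read-modify-write race on ++ and the AtomicLong-based fix that the next slide mentions.

```java
import java.util.concurrent.atomic.AtomicLong;

public class HitCounter {
    // Broken: ++ is a read-modify-write sequence, not atomic; two threads can
    // read the same old value and lose one of the increments.
    private long unsafeCount = 0;
    public long unsafeIncrement() { return ++unsafeCount; }

    // Fixed: AtomicLong makes the increment atomic with respect to every other
    // operation on the same state.
    private final AtomicLong count = new AtomicLong(0);
    public long increment() { return count.incrementAndGet(); }
}
```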

9 Thread-safe classes The text says, “Where practical, use existing thread-safe objects, like AtomicLong, to manage your class’s state. It is simpler to reason about the possible states and state transitions for existing thread-safe objects than it is for arbitrary state variables, and this makes it easier to maintain and verify thread safety.” (p. 23) I have some reservations about that paragraph. Composing a class using atomic objects does not guarantee atomicity of the class, and it may add unnecessary overhead due to redundant or useless locking. The real point is that thread safety is a class-level responsibility for a properly encapsulated class.

10 Locking To preserve state consistency, update related state variables in a single atomic operation. (p. 25) Locks are reentrant: a thread that already holds a lock may acquire it again. For each mutable state variable that may be accessed by more than one thread, all accesses to that variable must be performed with the same lock held; in this case, we say that the variable is guarded by that lock. (Diagram: threads reach a shared variable through exactly one lock guarding that variable.)
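A minimal sketch of the guarded-by idiom, using a hypothetical GuardedCounter class: every access to the mutable field happens with the object's intrinsic lock held, and reentrancy lets one synchronized method call another on the same object.

```java
public class GuardedCounter {
    private long count = 0;   // guarded by the intrinsic lock on "this"

    public synchronized void increment() { ++count; }
    public synchronized long get() { return count; }

    // Intrinsic locks are reentrant: addOneAndGet already holds the lock when
    // it calls increment() and get(), so the nested acquisitions do not block.
    public synchronized long addOneAndGet() { increment(); return get(); }
}
```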

11 Locking related state variables Every shared, mutable variable should be guarded by exactly one lock. Make it clear to maintainers which lock that is. (p. 28) For every invariant that involves more than one variable, all the variables involved in that invariant must be guarded by the same lock. (p. 29) (Diagram: threads reach interrelated shared variables through exactly one lock guarding those variables.)
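As an illustration (a hypothetical NumberRange class, not taken from the slides), both fields of a two-variable invariant are guarded by the same lock, so the invariant can never be observed broken.

```java
public class NumberRange {
    // Invariant: lower <= upper.  Both fields participate in the invariant,
    // so both are guarded by the same lock (the intrinsic lock on "this").
    private int lower = 0;
    private int upper = 0;

    public synchronized void setRange(int newLower, int newUpper) {
        if (newLower > newUpper)
            throw new IllegalArgumentException("lower > upper");
        lower = newLower;
        upper = newUpper;
    }

    public synchronized boolean contains(int value) {
        return lower <= value && value <= upper;
    }
}
```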

12 Liveness and performance There is frequently a tension between simplicity and performance. When implementing a synchronization policy, resist the temptation to prematurely sacrifice simplicity (potentially compromising safety) for the sake of performance. (p. 32) Avoid holding locks during lengthy computations or operations at risk of not completing quickly, such as network or console I/O. (p. 32) Perform compute-bound work using immutable objects or local variables, avoiding serialized access to shared variables. Release locks when performing blocking I/O.
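One way to apply that advice, sketched with a hypothetical EventLogger class: copy shared state while holding the lock, then perform the slow console I/O with no lock held.

```java
import java.util.ArrayList;
import java.util.List;

public class EventLogger {
    private final List<String> pending = new ArrayList<>();   // guarded by "this"

    public synchronized void add(String event) { pending.add(event); }

    public void flush() {
        List<String> snapshot;
        synchronized (this) {
            // Hold the lock only long enough to take a private copy.
            snapshot = new ArrayList<>(pending);
            pending.clear();
        }
        // Blocking console I/O happens with no lock held.
        for (String event : snapshot)
            System.out.println(event);
    }
}
```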

13 Sharing Objects (Ch 3) In the absence of synchronization, the compiler, processor, and runtime can do some downright weird things to the order in which operations appear to execute. Attempts to reason about the order in which memory actions “must” happen in insufficiently synchronized multithreaded programs will almost certainly be incorrect. (p. 35) Threading libraries (e.g., POSIX threads) and some languages (e.g., Java) define a memory model (chapter 16) that specifies conditions such as cache consistency across hardware threads. A call across a memory barrier ensures that caches are flushed and updated.

14 Unsynchronized shared memory Data in registers or caches can be stale. Stale-data bugs can manifest intermittently. In special cases stale data can be acceptable. Nonatomic 64-bit data can be malformed (torn). Declaring such data volatile restores out-of-thin-air safety for 64-bit values; unsynchronized, non-volatile data can still be stale. An intrinsic synchronized lock on a single object, acquired by multiple threads in sequence, guarantees visibility of changes to that object’s data.
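A sketch of the 64-bit case, using a hypothetical ProgressTracker class: declaring the long volatile prevents torn reads and makes each write visible to reader threads.

```java
public class ProgressTracker {
    // Without volatile, a non-atomic 64-bit long may be written as two 32-bit
    // halves on some JVMs, so a reader could observe a value that no thread
    // ever wrote.  volatile makes the read/write atomic and visible, though it
    // does not make compound operations such as += atomic.
    private volatile long bytesProcessed = 0;

    public void report(long newTotal) { bytesProcessed = newTotal; }  // writer thread
    public long current()             { return bytesProcessed; }      // reader threads
}
```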

15 Volatile keyword for data When a field is declared volatile, the compiler and runtime are put on notice that this variable is shared and that operations on it should not be reordered with other memory operations. (p. 38) Volatile is typically used for completion, interruption, or status flags. A common C/C++ usage is to ensure that changes made by concurrent interrupt handlers are visible. A volatile variable access also constitutes a memory barrier for other variables accessed by those threads, similar to a synchronized block. Avoid using volatile for complex synchronization: locking guarantees both visibility and atomicity; volatile guarantees only visibility.
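The classic status-flag use of volatile, as a sketch (PollingWorker is a hypothetical name): one thread sets the flag, the worker thread sees the change promptly, and no compound operations are performed on the flag.

```java
public class PollingWorker implements Runnable {
    // Written by one thread, read by another; volatile guarantees visibility.
    // Only simple reads and writes occur, so no locking is needed.
    private volatile boolean shutdownRequested = false;

    public void requestShutdown() { shutdownRequested = true; }

    @Override
    public void run() {
        while (!shutdownRequested) {
            doUnitOfWork();
        }
    }

    private void doUnitOfWork() { /* application-specific work */ }
}
```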

16 Publication and escape Publication is the exposing of an object reference to multiple threads. Beware of publishing internal state; use encapsulation, immutable data, and copying. An object published in error has escaped. Do not allow the this reference to escape during construction: do not start an active object’s service thread in its constructor, and do not register it as a listener in its constructor. Let a factory method do those things after construction.
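A sketch of the factory idiom the slide recommends, with a hypothetical MessagePump class: the constructor builds state but never starts the service thread, so this cannot escape before construction completes.

```java
public class MessagePump {
    private final Thread serviceThread;

    private MessagePump() {
        // The thread is created but not started here, so no other thread can
        // observe the partially constructed object.
        serviceThread = new Thread(this::pumpMessages);
    }

    // The factory method finishes construction first and only then starts the
    // service thread, publishing a fully constructed object.
    public static MessagePump start() {
        MessagePump pump = new MessagePump();
        pump.serviceThread.start();
        return pump;
    }

    private void pumpMessages() { /* service loop */ }
}
```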

17 Thread confinement Confining access to a single thread avoids synchronization overhead and complexity. Ad-hoc confinement, e.g., restricting GUI object access to a single thread, is fragile; it is not supported by language mechanisms. Read-modify-write operations on shared volatile variables are safe only as long as a single thread can write such a variable.

18 Thread confinement strategies Stack confinement uses objects accessed only via local variables and method parameters within a single thread. This approach requires discipline and documentation, since the language cannot stop a thread from letting a stack-confined object reference escape. java.lang.ThreadLocal maintains a separate per-thread instance for every thread accessing an apparently shared object. It is often used in a transaction context: a server sets up server-oriented thread-local data and invokes an application method, which in turn invokes a server method that needs access to that data.
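A minimal ThreadLocal sketch of the transaction-context pattern described above (TransactionContext is a hypothetical class): each thread that asks for the current context gets its own instance, so no synchronization is needed.

```java
public class TransactionContext {
    // withInitial supplies a fresh, independent instance to each thread on its
    // first call to current(); later calls from the same thread reuse it.
    private static final ThreadLocal<TransactionContext> CURRENT =
            ThreadLocal.withInitial(TransactionContext::new);

    private String transactionId;

    public static TransactionContext current() { return CURRENT.get(); }

    public void setTransactionId(String id) { transactionId = id; }
    public String getTransactionId()        { return transactionId; }
}
```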

19 Immutability Immutable objects are always thread-safe. An object is immutable if: (p. 47) Its state cannot be modified after construction; All its fields are final; and It is properly constructed (the this reference does not escape during construction). Just as it is a good practice to make all fields private unless they need greater visibility, it is a good practice to make all fields final unless they need to be mutable. A volatile reference to an immutable object is safe.
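A sketch of an immutable value class meeting the three conditions (Point is a hypothetical example, not from the text).

```java
public final class Point {
    // State cannot change after construction, all fields are final, and the
    // constructor does not let "this" escape, so instances may be shared
    // freely across threads without synchronization.
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    public int getX() { return x; }
    public int getY() { return y; }

    // "Mutation" produces a new object rather than modifying this one.
    public Point translate(int dx, int dy) { return new Point(x + dx, y + dy); }
}
```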

20 Safe publication idioms (p. 52) A properly constructed object can be safely published by: Initializing an object reference from a static initializer; Storing a reference to it into a volatile field or AtomicReference; Storing a reference to it into a final field of a properly constructed object; or Storing a reference to it into a field that is properly guarded by a lock. Safely published effectively immutable objects can be used safely by any thread without additional synchronization. Mutable objects must be safely published, and must be either thread-safe or guarded by a lock.
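Two of the idioms sketched together, assuming a hypothetical ConfigurationHolder class: static initialization publishes the default safely, and a volatile field safely publishes each replacement object to readers.

```java
public class ConfigurationHolder {
    // Idiom: objects created in a static initializer are safely published by
    // the class-loading machinery.
    private static final Configuration DEFAULT = new Configuration("default");

    // Idiom: storing a reference in a volatile field (an AtomicReference works
    // the same way) safely publishes each newly assigned object.
    private volatile Configuration active = DEFAULT;

    public void update(Configuration fresh) { active = fresh; }
    public Configuration active()           { return active; }

    // Effectively immutable helper type used above (hypothetical).
    public static final class Configuration {
        private final String name;
        public Configuration(String name) { this.name = name; }
        public String name() { return name; }
    }
}
```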

21 Policies for using and sharing objects in concurrent programs Thread confined – all references to an object are confined to its constructing thread. Shared read-only – concurrent access without additional synchronization is safe for immutable and safely published effectively immutable objects. Shared thread-safe – an object that synchronizes its state internally can be used without additional synchronization of its state by client code. Guarded – A guarded object can be accessed only with a specific lock held. (details on p. 54)

22 Explicit Locks (Ch 13) java.util.concurrent.locks interface Lock and class ReentrantLock Lock offers more options than synchronized: optional fairness in thread acquisition order; unconditional locking; lock polling; timed acquisition; interruptible acquisition; and acquisition across non-nested method calls and non-block-structured control constructs. A Lock must be released in a correct finally clause. These options determine when Lock is needed instead of synchronized blocks.
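A sketch of the unconditional and timed acquisition styles, with a hypothetical Account class; note that every lock() or successful tryLock() is paired with unlock() in a finally block.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Account {
    private final Lock lock = new ReentrantLock();  // new ReentrantLock(true) for fair ordering
    private long balance = 0;

    // Unconditional acquisition: unlike synchronized, the lock is not released
    // automatically, so unlock() belongs in a finally block.
    public void deposit(long amount) {
        lock.lock();
        try {
            balance += amount;
        } finally {
            lock.unlock();
        }
    }

    // Timed acquisition: give up rather than block forever if the lock is busy.
    public boolean tryWithdraw(long amount, long timeoutMillis) throws InterruptedException {
        if (!lock.tryLock(timeoutMillis, TimeUnit.MILLISECONDS))
            return false;
        try {
            if (balance < amount)
                return false;
            balance -= amount;
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```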

23 Lock’s Condition object newCondition() manufactures a new condition variable for a lock. A condition variable supports await / signal / signalAll communication, similar to java.lang.Object’s wait / notify / notifyAll methods for intrinsic locks. We will go over a condition variable example in class.
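The in-class example may differ; here is a minimal bounded-buffer sketch (BoundedBuffer is a hypothetical class) showing two Condition objects created from one Lock, with await releasing the lock while waiting and signal waking a waiter.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final Lock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity)
                notFull.await();            // releases the lock while waiting
            items.addLast(item);
            notEmpty.signal();              // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty())
                notEmpty.await();
            T item = items.removeFirst();
            notFull.signal();               // wake one waiting producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```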

24 ReentrantReadWriteLock This class supports multiple concurrent readers or a single writer at a time, using a pair of Lock objects. There are configuration parameters for (p. 287): release preference; reader barging; reentrancy; and downgrading, i.e., acquiring the read lock before releasing the previously held write lock.
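A sketch of the read-mostly usage, assuming a hypothetical ReadMostlyCache class: many readers may hold the read lock at once, while the write lock is exclusive.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadMostlyCache<K, V> {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final Map<K, V> map = new HashMap<>();

    // Any number of readers may hold the read lock concurrently.
    public V get(K key) {
        rwLock.readLock().lock();
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // The write lock excludes all readers and other writers.
    public void put(K key, V value) {
        rwLock.writeLock().lock();
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```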

