CS 252 Graduate Computer Architecture
Lecture 11: Multiprocessors-II
Krste Asanovic
Electrical Engineering and Computer Sciences
University of California, Berkeley

Recap: Sequential Consistency, a Memory Model

"A system is sequentially consistent if the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in the order specified by the program." (Leslie Lamport)

Sequential consistency = arbitrary order-preserving interleaving of the memory references of sequential programs.

[Figure: several processors P sharing a single memory M]

Recap: Sequential Consistency

Sequential consistency imposes more memory-ordering constraints than those imposed by uniprocessor program dependencies. What are the additional constraints in our example?

T1:                        T2:
Store (X), 1   (X = 1)     Load R1, (Y)
Store (Y), 11  (Y = 11)    Store (Y'), R1   (Y' = Y)
                           Load R2, (X)
                           Store (X'), R2   (X' = X)

(Uniprocessor dependencies order each thread's own accesses; the additional SC requirements constrain how the two threads' accesses interleave.)
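As a concrete illustration (not from the slides; the C11 rendering and thread names are mine), the example can be written with sequentially consistent atomics, under which the outcome Y' = 11 with X' = 0 is forbidden:

    /* SC litmus test: with memory_order_seq_cst (the default), observing
       Y' == 11 guarantees X' == 1, because T1's stores cannot appear out
       of program order. */
    #include <stdatomic.h>
    #include <stdio.h>
    #include <threads.h>

    atomic_int X = 0, Y = 10;
    int Xp, Yp;                      /* play the roles of X' and Y' */

    int t1(void *arg) {
        (void)arg;
        atomic_store(&X, 1);         /* Store (X), 1  */
        atomic_store(&Y, 11);        /* Store (Y), 11 */
        return 0;
    }

    int t2(void *arg) {
        (void)arg;
        Yp = atomic_load(&Y);        /* Load R1, (Y); Store (Y'), R1 */
        Xp = atomic_load(&X);        /* Load R2, (X); Store (X'), R2 */
        return 0;
    }

    int main(void) {
        thrd_t a, b;
        thrd_create(&a, t1, NULL);
        thrd_create(&b, t2, NULL);
        thrd_join(a, NULL);
        thrd_join(b, NULL);
        printf("Y' = %d, X' = %d\n", Yp, Xp);
        return 0;
    }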

Recap: Mutual Exclusion and Locks

We want to guarantee that only one process is active in a critical section. The options:
- Blocking atomic read-modify-write instructions, e.g., Test&Set, Fetch&Add, Swap
- Non-blocking atomic read-modify-write instructions, e.g., Compare&Swap, Load-reserve/Store-conditional
- Protocols based only on ordinary loads and stores

Issues in Implementing Sequential Consistency

Implementation of SC is complicated by two issues.

Out-of-order execution capability. Which reorderings are safe on a uniprocessor?
  Load(a);  Load(b)    yes
  Load(a);  Store(b)   yes, if a ≠ b
  Store(a); Load(b)    yes, if a ≠ b
  Store(a); Store(b)   yes, if a ≠ b

Caches. Caches can prevent the effect of a store from being seen by other processors.

[Figure: several processors P sharing a single memory M]

The complications of SC motivate architects to consider weak or relaxed memory models.

Memory Fences: Instructions to Serialize Memory Accesses

Processors with relaxed or weak memory models (i.e., models that permit loads and stores to different addresses to be reordered) need to provide memory fence instructions to force the serialization of memory accesses.

Examples of processors with relaxed memory models:
- Sparc V8 (TSO, PSO): Membar
- Sparc V9 (RMO): Membar #LoadLoad, Membar #LoadStore, Membar #StoreLoad, Membar #StoreStore
- PowerPC (WO): Sync, EIEIO

Memory fences are expensive operations; however, one pays the cost of serialization only when it is required.

Using Memory Fences

[Figure: producer and consumer sharing a circular buffer in memory, with head and tail pointers]

Producer posting item x:
  Load Rtail, (tail)
  Store (Rtail), x
  Membar_SS            (ensures that the tail pointer is not updated before x has been stored)
  Rtail = Rtail + 1
  Store (tail), Rtail

Consumer:
  Load Rhead, (head)
spin:
  Load Rtail, (tail)
  if Rhead == Rtail goto spin
  Membar_LL            (ensures that R is not loaded before x has been stored)
  Load R, (Rhead)
  Rhead = Rhead + 1
  Store (head), Rhead
  process(R)
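A sketch of the same producer/consumer pattern in C11 (my rendering, assuming a single producer and a single consumer and omitting the buffer-full check): the release fence plays the role of Membar_SS and the acquire fence the role of Membar_LL.

    #include <stdatomic.h>

    #define N 1024
    int buffer[N];                       /* shared circular buffer */
    atomic_int head = 0, tail = 0;

    void produce(int x) {
        int t = atomic_load_explicit(&tail, memory_order_relaxed);
        buffer[t % N] = x;                           /* Store (Rtail), x */
        /* Membar_SS: x becomes visible before the tail update. */
        atomic_thread_fence(memory_order_release);
        atomic_store_explicit(&tail, t + 1, memory_order_relaxed);
    }

    int consume(void) {
        int h = atomic_load_explicit(&head, memory_order_relaxed);
        while (atomic_load_explicit(&tail, memory_order_relaxed) == h)
            ;                                        /* spin: queue empty */
        /* Membar_LL: don't load buffer[h] before observing the tail. */
        atomic_thread_fence(memory_order_acquire);
        int x = buffer[h % N];                       /* Load R, (Rhead) */
        atomic_store_explicit(&head, h + 1, memory_order_relaxed);
        return x;
    }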

Data-Race Free Programs, a.k.a. Properly Synchronized Programs

Process 1:
  ...
  Acquire(mutex);
  <critical region>
  Release(mutex);
  ...

Process 2:
  ...
  Acquire(mutex);
  <critical region>
  Release(mutex);
  ...

Synchronization variables (e.g., mutex) are disjoint from data variables. Accesses to writable shared data variables are protected inside critical regions, so there are no data races except on the locks themselves. (A formal definition is elusive; in general, it cannot be proven whether a program is data-race free.)
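A minimal C11 sketch of the structure (the names mutex, shared_data, and update are illustrative; mtx_init is assumed to run once at startup):

    #include <threads.h>

    mtx_t mutex;              /* synchronization variable, disjoint from data */
    long  shared_data;        /* writable shared data */

    void update(long delta) {
        mtx_lock(&mutex);     /* Acquire(mutex) */
        shared_data += delta; /* data touched only inside the critical region */
        mtx_unlock(&mutex);   /* Release(mutex) */
    }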

Fences in Data-Race Free Programs

Process 1:
  ...
  Acquire(mutex);
  membar;
  <critical region>
  membar;
  Release(mutex);
  ...

Process 2:
  ...
  Acquire(mutex);
  membar;
  <critical region>
  membar;
  Release(mutex);
  ...

A relaxed memory model allows reordering of instructions by the compiler or the processor, as long as the reordering is not done across a fence. The processor also should not speculate or prefetch across fences.

Mutual Exclusion Using Load/Store

A protocol based on two shared variables, c1 and c2. Initially both c1 and c2 are 0 (not busy).

Process 1:
  ...
  c1 = 1;
  L: if c2 == 1 then goto L
  <critical section>
  c1 = 0;

Process 2:
  ...
  c2 = 1;
  L: if c1 == 1 then goto L
  <critical section>
  c2 = 0;

What is wrong? (If both processes set their flags before either tests the other's, both spin forever: deadlock.)

Mutual Exclusion: Second Attempt

To avoid deadlock, let a process give up its reservation (i.e., Process 1 sets c1 to 0) while waiting:

Process 1:
  ...
  L: c1 = 1;
     if c2 == 1 then { c1 = 0; goto L; }
  <critical section>
  c1 = 0;

Process 2:
  ...
  L: c2 = 1;
     if c1 == 1 then { c2 = 0; goto L; }
  <critical section>
  c2 = 0;

What can go wrong now? (The two processes can set and clear their flags in lockstep indefinitely: deadlock is gone, but there is no bound on how long either waits, i.e., livelock.)

A Protocol for Mutual Exclusion (T. Dekker, 1966)

A protocol based on three shared variables: c1, c2, and turn. Initially both c1 and c2 are 0 (not busy).

Process 1:
  ...
  c1 = 1;
  turn = 1;
  L: if c2 == 1 & turn == 1 then goto L
  <critical section>
  c1 = 0;

Process 2:
  ...
  c2 = 1;
  turn = 2;
  L: if c1 == 1 & turn == 2 then goto L
  <critical section>
  c2 = 0;

turn = i ensures that only process i can wait; variables c1 and c2 ensure mutual exclusion. A solution for n processes was given by Dijkstra and is quite tricky!
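A C11 sketch of the protocol for Process 1 (my rendering; seq_cst atomics supply the sequentially consistent ordering the protocol silently assumes):

    #include <stdatomic.h>

    atomic_int c1 = 0, c2 = 0, turn = 1;

    void p1_enter(void) {
        atomic_store(&c1, 1);
        atomic_store(&turn, 1);
        while (atomic_load(&c2) == 1 && atomic_load(&turn) == 1)
            ;   /* L: if c2=1 & turn=1 then go to L */
    }

    void p1_exit(void) {
        atomic_store(&c1, 0);
    }

    /* Process 2 is symmetric: set c2 = 1 and turn = 2, then spin
       while c1 == 1 && turn == 2. */

Under a relaxed memory model the plain stores and loads could be reordered and the protocol would break, which is exactly why the earlier fence discussion matters.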

Analysis of Dekker's Algorithm

[The slide steps the same Process 1 / Process 2 code through two interleaving scenarios, tracing which process waits at L and which one proceeds; the arrows marking the interleavings do not survive in this transcript.]

N-process Mutual Exclusion: Lamport's Bakery Algorithm

Initially num[j] = 0, for all j.

Process i:

  /* Entry code */
  choosing[i] = 1;
  num[i] = max(num[0], ..., num[N-1]) + 1;
  choosing[i] = 0;
  for (j = 0; j < N; j++) {
    while (choosing[j]);
    while (num[j] && ((num[j] < num[i]) || (num[j] == num[i] && j < i)));
  }

  <critical section>

  /* Exit code */
  num[i] = 0;
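A C11 sketch of the bakery lock (my rendering; N and the function names are illustrative, and ticket overflow is ignored):

    #include <stdatomic.h>

    #define N 8
    atomic_int choosing[N];      /* initially all 0 */
    atomic_int num[N];           /* initially all 0 */

    void bakery_lock(int i) {                 /* entry code */
        atomic_store(&choosing[i], 1);
        int m = 0;                            /* num[i] = max(num[0..N-1]) + 1 */
        for (int j = 0; j < N; j++) {
            int v = atomic_load(&num[j]);
            if (v > m) m = v;
        }
        atomic_store(&num[i], m + 1);
        atomic_store(&choosing[i], 0);
        for (int j = 0; j < N; j++) {
            while (atomic_load(&choosing[j]))
                ;                             /* wait while j picks a ticket */
            while (atomic_load(&num[j]) &&
                   (atomic_load(&num[j]) <  atomic_load(&num[i]) ||
                    (atomic_load(&num[j]) == atomic_load(&num[i]) && j < i)))
                ;                             /* defer to smaller tickets */
        }
    }

    void bakery_unlock(int i) {               /* exit code */
        atomic_store(&num[i], 0);
    }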

CS252 Administrivia

Project meetings next week (10/23-25), same schedule as before (M 1-3 PM, Tu/Th 9:40-11 AM):
- Schedule on website
- All in 645 Soda, 20 mins/group

Hope to see:
- Project web site
- At least one initial result (some delta from "hello world")
- Grasp of related work

Midterm review.


EECS Graduate Grading Guidelines

- A+, A, A-: quality expected from a PhD student
- B+, B: quality expected from an MS student
- B- and below: below the quality expected from an MS student
- Class average somewhere in range

Memory Consistency in SMPs

[Figure: CPU-1 and CPU-2, each with a cache holding A = 100, attached to the CPU-memory bus; memory also holds A = 100]

Suppose CPU-1 updates A to 200.
- With a write-back cache: memory and cache-2 have stale values.
- With a write-through cache: cache-2 has a stale value.

Do these stale values matter? What is the view of shared memory for programming?

Write-back Caches & SC

prog T1:  ST X, 1
          ST Y, 11

prog T2:  LD Y, R1
          ST Y', R1
          LD X, R2
          ST X', R2

One possible execution (T1 on CPU-1, T2 on CPU-2; X' and Y' initially blank in memory):

  T1 is executed:          cache-1: X=1, Y=11   memory: X=0, Y=10   cache-2: empty
  cache-1 writes back Y:   cache-1: X=1, Y=11   memory: X=0, Y=11   cache-2: empty
  T2 executed:             cache-1: X=1, Y=11   memory: X=0, Y=11   cache-2: Y'=11, X=0, X'=0
  cache-1 writes back X:   cache-1: X=1, Y=11   memory: X=1, Y=11   cache-2: Y'=11, X=0, X'=0
  cache-2 writes back X', Y':                   memory: X=1, Y=11, X'=0, Y'=11

Inconsistent: T2 observed the new Y (11) but the old X (0), an outcome SC forbids.

Write-through Caches & SC

prog T1:  ST X, 1
          ST Y, 11

prog T2:  LD Y, R1
          ST Y', R1
          LD X, R2
          ST X', R2

  Initially:     cache-1: X=0, Y=10   memory: X=0, Y=10                  cache-2: X=0
  T1 executed:   cache-1: X=1, Y=11   memory: X=1, Y=11                  cache-2: X=0 (stale)
  T2 executed:   cache-1: X=1, Y=11   memory: X=1, Y=11, X'=0, Y'=11     cache-2: Y=11, Y'=11, X=0, X'=0

Write-through caches don't preserve sequential consistency either: T2 reads the new Y from memory but the stale X = 0 from its own cache.

Maintaining Sequential Consistency

SC is sufficient for correct producer-consumer and mutual-exclusion code (e.g., Dekker's).

Multiple copies of a location in various caches can cause SC to break down. Hardware support is required such that:
- only one processor at a time has write permission for a location, and
- no processor can load a stale copy of the location after a write.

=> cache coherence protocols

Cache Coherence Protocols for SC

- Write request: the address is invalidated (or updated) in all other caches before (respectively, after) the write is performed.
- Read request: if a dirty copy is found in some cache, a write-back is performed before the memory is read.

We will focus on invalidation protocols, as opposed to update protocols.

Warmup: Parallel I/O

(DMA stands for Direct Memory Access.)

[Figure: a processor with its cache, a DMA engine, and a disk all attached to the memory bus; address (A), data (D), and R/W lines connect them to physical memory]

Either the cache or the DMA engine can be the bus master and effect transfers. Page transfers occur while the processor is running.

Problems with Parallel I/O

- Memory → Disk: physical memory may be stale if the cache copy is dirty.
- Disk → Memory: the cache may hold stale data and not see the memory writes.

[Figure: cached portions of a page overlap the region touched by DMA transfers]

Snoopy Cache (Goodman 1983)

Idea: have the cache watch (or snoop upon) DMA transfers, and then "do the right thing."

Snoopy cache tags are dual-ported:
- one port (A, D, R/W) is used to drive the memory bus when the cache is bus master;
- a snoopy read port (A, R/W) is attached to the memory bus.
Both ports index the same tags-and-state array in front of the data lines.

Snoopy Cache Actions for DMA

Observed bus cycle           Cache state          Cache action
DMA Read (Memory → Disk)     Address not cached   No action
                             Cached, unmodified   No action
                             Cached, modified     Cache intervenes
DMA Write (Disk → Memory)    Address not cached   No action
                             Cached, unmodified   Cache purges its copy
                             Cached, modified     ???

Shared Memory Multiprocessor

[Figure: processors M1, M2, M3, each behind a snoopy cache, plus a DMA engine and disks, all attached to the bus in front of physical memory]

Use the snoopy mechanism to keep all processors' view of memory coherent.

Cache State Transition Diagram: The MSI Protocol

Each cache line has state bits (M: Modified, S: Shared, I: Invalid) alongside its address tag. Transitions for a line in processor P1's cache:

- I → S: read miss
- I → M: write miss (P1 intent to write)
- S → M: P1 intent to write
- S → S: read by any processor
- S → I: other processor intent to write
- M → M: P1 reads or writes
- M → S: other processor reads (P1 writes back)
- M → I: other processor intent to write (P1 writes back)
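As a simulation-style sketch (not hardware; the type and event names are mine), the diagram collapses to a small next-state function:

    typedef enum { MSI_I, MSI_S, MSI_M } msi_state_t;

    typedef enum {
        PR_READ, PR_WRITE,         /* P1 reads / P1 intent to write          */
        BUS_READ, BUS_INTENT_WR    /* other processor reads / intent to write */
    } msi_event_t;

    /* Returns the next state of a line in P1's cache; sets *writeback
       when modified data must be flushed to memory. */
    msi_state_t msi_next(msi_state_t s, msi_event_t e, int *writeback) {
        *writeback = 0;
        switch (s) {
        case MSI_I:
            if (e == PR_READ)  return MSI_S;          /* read miss        */
            if (e == PR_WRITE) return MSI_M;          /* write miss       */
            return MSI_I;
        case MSI_S:
            if (e == PR_WRITE)      return MSI_M;     /* intent to write  */
            if (e == BUS_INTENT_WR) return MSI_I;     /* invalidated      */
            return MSI_S;                             /* reads stay S     */
        case MSI_M:
            if (e == BUS_READ)      { *writeback = 1; return MSI_S; }
            if (e == BUS_INTENT_WR) { *writeback = 1; return MSI_I; }
            return MSI_M;                             /* local hits       */
        }
        return s;
    }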

Two Processor Example (reading and writing the same cache line)

Both P1 and P2 follow the MSI transition diagram above; in P1's diagram the "other processor" events come from P2, and vice versa (e.g., P2 reads while P1 holds the line in M: P1 writes back and drops to S; P2 intends to write: P1 writes back and invalidates).

Trace the access sequence: P1 reads, P1 writes, P2 reads, P2 writes, P1 writes, P2 writes, P1 reads, P1 writes.

Observation

If a line is in the M state, then no other cache can have a copy of the line! Memory stays coherent; multiple differing copies cannot exist.

MESI: An Enhanced MSI Protocol

MESI adds an Exclusive state for increased performance on private data. Each cache line has state bits (M: Modified Exclusive; E: Exclusive, unmodified; S: Shared; I: Invalid) alongside its address tag. Transitions for a line in processor P1's cache:

- I → S: read miss, shared (another cache holds the line)
- I → E: read miss, not shared
- I → M: write miss (P1 intent to write)
- S → M: P1 intent to write
- E → M: P1 write (no bus transaction needed, since no other copy exists)
- E → E: P1 read
- S → S: read by any processor
- M → M: P1 write or read
- M → S: other processor reads (P1 writes back)
- S → I, E → I, M → I: other processor intent to write (M writes back)

Optimized Snoop with Level-2 Caches

[Figure: four CPUs, each with an L1 cache backed by an L2 cache; the snooper sits on the bus below the L2s]

Processors often have two-level caches: a small L1 and a large L2 (usually both on chip now).
- Inclusion property: entries in L1 must also be in L2, so an invalidation in L2 forces an invalidation in L1.
- Snooping on L2 does not affect CPU-L1 bandwidth.

What problem could occur?

Intervention

[Figure: CPU-1's cache holds A = 200 while memory still holds the stale A = 100; CPU-2's cache does not hold A]

When a read miss for A occurs in cache-2, a read request for A is placed on the bus.
- Cache-1 needs to supply the data and change its state to Shared.
- The memory may respond to the request as well! Does memory know it has stale data?
- Cache-1 needs to intervene through the memory controller to supply the correct data to cache-2.

False Sharing

[Figure: a cache block holds state bits, a block address tag, and several data words data0, data1, ..., dataN]

A cache block contains more than one word, and cache coherence is done at the block level, not the word level. Suppose M1 writes word i and M2 writes word k, and both words have the same block address. What can happen? (See the sketch below.)
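An illustrative C sketch of the hazard and the usual fix (the struct names and the 64-byte line size are assumptions):

    /* Both counters live in one cache block: M1 writing a and M2 writing b
       makes the block ping-pong between the two caches (false sharing). */
    struct counters_bad {
        long a;     /* written only by M1 (word i) */
        long b;     /* written only by M2 (word k) */
    };

    /* Aligning each counter to its own (assumed) 64-byte line puts the
       writers in different blocks, eliminating the coherence traffic. */
    struct counters_good {
        _Alignas(64) long a;
        _Alignas(64) long b;
    };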

Synchronization and Caches: Performance Issues

[Figure: three processors with caches contending for mutex = 1 over the CPU-memory bus]

Each processor runs:

  R <- 1
L: swap (mutex), R;
  if R != 0 then goto L;
  <critical section>
  M[mutex] <- 0;

Cache-coherence protocols will cause the mutex to ping-pong between P1's and P2's caches. Ping-ponging can be reduced by first reading the mutex location (non-atomically) and executing a swap only if it is found to be zero (see the sketch below).
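A C11 sketch of that refinement, usually called test-and-test-and-set (my rendering; the variable and function names are illustrative):

    #include <stdatomic.h>

    atomic_int mutex_word = 0;     /* 0 = free, 1 = held */

    void acquire(void) {
        for (;;) {
            /* Spin on an ordinary load: it hits in the local cache and
               generates no bus traffic while the lock is held. */
            while (atomic_load(&mutex_word) != 0)
                ;
            /* Only when the lock looks free, attempt the atomic swap. */
            if (atomic_exchange(&mutex_word, 1) == 0)
                return;            /* swap returned 0: lock acquired */
        }
    }

    void release(void) {
        atomic_store(&mutex_word, 0);   /* M[mutex] <- 0 */
    }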

Performance Related to Bus Occupancy

In general, a read-modify-write instruction requires two memory (bus) operations with no intervening memory operations by other processors. In a multiprocessor setting, the bus needs to be locked for the entire duration of the atomic read and write operations:
- expensive for simple buses
- very expensive for split-transaction buses

Modern ISAs instead use load-reserve/store-conditional.

Load-reserve & Store-conditional

Special register(s) hold a reservation flag and address, and the outcome of the store-conditional:

Load-reserve R, (a):
  <flag, adr> <- <1, a>;
  R <- M[a];

Store-conditional (a), R:
  if <flag, adr> == <1, a> then
    cancel other procs' reservation on a;
    M[a] <- R;
    status <- succeed;
  else
    status <- fail;

- If the snooper sees a store transaction to the address in the reserve register, the reserve flag is cleared to 0.
- Several processors may hold a reservation on a simultaneously.
- These instructions look like ordinary loads and stores with respect to the bus traffic.
- The reservation can be implemented using cache hit/miss, so no additional hardware is required (problems?).
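In portable C11, the usual hook is atomic_compare_exchange_weak, which compilers on load-reserve/store-conditional ISAs typically lower to an LR/SC loop (a sketch; counter is an illustrative name):

    #include <stdatomic.h>

    atomic_int counter = 0;

    void atomic_increment(void) {
        int old = atomic_load(&counter);   /* ~ load-reserve              */
        /* The weak exchange may fail spuriously, exactly like a
           store-conditional whose reservation was lost, so we retry;
           on failure, old is reloaded with the current value. */
        while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
            ;                              /* ~ store-conditional + retry */
    }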

Performance: Load-reserve & Store-conditional

The total number of memory (bus) transactions is not necessarily reduced, but splitting an atomic instruction into load-reserve and store-conditional:
- increases bus utilization (and reduces processor stall time), especially on split-transaction buses;
- reduces the cache ping-pong effect, because processors trying to acquire a semaphore do not have to perform a store each time.

Out-of-Order Loads/Stores & CC

[Figure: CPU with load/store buffers connected through a CPU/memory interface to a cache (states I/S/E) with a snooper; the cache exchanges S-req/E-req and S-rep/E-rep with memory, plus Wb-req, Inv-req, Inv-rep, and pushout (Wb-rep) traffic]

- Blocking caches: one request at a time; blocking caches + CC => SC.
- Non-blocking caches: multiple requests (to different addresses) concurrently; non-blocking caches + CC => relaxed memory models.

CC ensures that all processors observe the same order of loads and stores to a given address.