Presentation transcript:

Slide 1: Computer Architecture – Memory Coherency & Consistency
By Dan Tsafrir, 11/4/2011
Presentation based on slides by David Patterson, Avi Mendelson, Lihu Rappoport, and Adi Yoaz

Slide 2: Coherency – intro
- When there is only one core, caching doesn't affect correctness
- But what happens when 2 or more cores work simultaneously on the same memory location?
  - If both are only reading, there is no problem
  - Otherwise, one might use a stale, out-of-date copy of the data
  - The inconsistencies might lead to incorrect execution
- Terminology: memory coherency, cache coherency
- [Figure: Processor 1 and Processor 2, each with its own L1 cache, sharing an L2 cache and memory]

Slide 3: The cache coherency problem for a single memory location

Time | Event                 | Cache contents for CPU-1 | Cache contents for CPU-2 | Memory contents for location X
0    |                       |                          |                          | 1
1    | CPU-1 reads X         | 1                        |                          | 1
2    | CPU-2 reads X         | 1                        | 1                        | 1
3    | CPU-1 stores 0 into X | 0                        | 1                        | 0

After time 3, CPU-2's cached copy is stale: it differs from the corresponding memory location and from CPU-1's cache. (The next read by CPU-2 will yield "1".)
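As an illustration (not part of the original slides), here is a minimal C++ sketch of the scenario in the table above: each "CPU" keeps its own private copy of X and nothing propagates writes between the copies, so CPU-2 is left with a stale value. The PrivateCache struct is purely a toy model.

```cpp
#include <cassert>
#include <iostream>

// Toy model of the table above: each CPU holds a private copy of X, and no
// coherence protocol propagates writes between the copies.
struct PrivateCache {
    bool valid = false;
    int  value = 0;
};

int main() {
    int memory_x = 1;          // memory contents for location X (time 0)
    PrivateCache cpu1, cpu2;   // private L1 copies (hypothetical model)

    cpu1 = {true, memory_x};   // time 1: CPU-1 reads X (miss, filled from memory)
    cpu2 = {true, memory_x};   // time 2: CPU-2 reads X (miss, filled from memory)

    cpu1.value = 0;            // time 3: CPU-1 stores 0 into X...
    memory_x   = 0;            // ...and the write also reaches memory

    // CPU-2's next read hits its private copy and still returns the stale 1.
    std::cout << "CPU-2 reads X = " << cpu2.value << " (stale)\n";
    assert(cpu2.value == 1 && cpu1.value == 0 && memory_x == 0);
}
```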

Slide 4: A memory system is coherent if…
- Informally, we could say (or we would like to say) that:
  - Any read of a data item returns the most recently written value of that data item
- This definition is intuitive, but overly simplistic
- More formally…

Slide 5: A memory system is coherent if…
1. Processor P writes to location X, later P reads from X, and no other processor writes to X between that write and that read
   => The read must return the value previously written by P
2. P1 writes to X, some time T elapses, and then P2 reads from X
   => For a big enough T, P2 will read the value written by P1
3. Two writes to the same location by any two processors are serialized
   => They are seen in the same order by all processors (if "1" and then "2" are written, no processor would read "2" and then "1")

Slide 6: A memory system is coherent if… (annotated)
1. Processor P writes to location X, later P reads from X, and no other processor writes to X between that write and that read
   => The read must return the value previously written by P
   (This simply preserves program order; it is needed even on a uniprocessor.)
2. P1 writes to X, some time T elapses, and then P2 reads from X
   => For a big enough T, P2 will read the value written by P1
   (This defines the notion of what it means to have a coherent view of memory: if X is never updated regardless of the duration of T, then the memory is not coherent.)
3. Two writes to the same location X by any two processors are serialized
   => They are seen in the same order by all processors (if "1" and then "2" are written, no processor would read "2" and then "1")
   (If P1 writes to X and then P2 writes to X, serialization of writes ensures that every processor will eventually see P2's write; otherwise P1's value might be maintained indefinitely.)
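A small C++ sketch (my addition, not from the lecture) of condition 3, write serialization: two threads write different values to the same atomic location while two readers each read it twice. Because every std::atomic object has a single modification order, the two readers cannot observe the two writes in opposite orders.

```cpp
#include <atomic>
#include <cassert>
#include <thread>
#include <utility>

// Write serialization: all observers agree on the order of writes to the
// same location. std::atomic gives each object a single modification order,
// so the "disagreement" outcome below can never occur.
std::atomic<int> x{0};

std::pair<int, int> read_twice() {
    int first  = x.load(std::memory_order_relaxed);
    int second = x.load(std::memory_order_relaxed);
    return {first, second};
}

int main() {
    std::pair<int, int> r1, r2;
    std::thread w1([] { x.store(1, std::memory_order_relaxed); });
    std::thread w2([] { x.store(2, std::memory_order_relaxed); });
    std::thread t1([&] { r1 = read_twice(); });
    std::thread t2([&] { r2 = read_twice(); });
    w1.join(); w2.join(); t1.join(); t2.join();

    // Forbidden: one reader sees "1 then 2" while the other sees "2 then 1".
    bool disagree = (r1 == std::make_pair(1, 2) && r2 == std::make_pair(2, 1)) ||
                    (r1 == std::make_pair(2, 1) && r2 == std::make_pair(1, 2));
    assert(!disagree);
}
```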

Slide 7: Memory consistency
- The coherency definition by itself is not enough to let us write correct programs
  - It must be supplemented by a consistency model
  - Critical for program correctness
- Coherency and consistency are 2 different, complementary aspects of memory systems
  - Coherency: what values can be returned by a read; relates to the behavior of reads and writes to the same memory location
  - Consistency: when a written value will be returned by a subsequent read; relates to the behavior of reads and writes to different memory locations

Slide 8: Memory consistency (cont.)
- "How consistent is the memory system?" is a nontrivial question
- Assume locations A and B are originally cached by P1 and P2, with initial value 0:

  Processor P1:      Processor P2:
  A = 0;             B = 0;
  ...                ...
  A = 1;             B = 1;
  if (B == 0) ...    if (A == 0) ...

- If writes are immediately seen by other processors, it is impossible for both "if" conditions to be true: reaching an "if" means either A or B must already hold 1
- But suppose that (1) the "write invalidate" can be delayed, and (2) the processor is allowed to compute during this delay
  => It is then possible that P1 and P2 have not seen the invalidations of B and A until after the reads, so both "if" conditions are true
- Should this be allowed? That is determined by the consistency model
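The pattern above is the classic store-buffering litmus test. Below is a hedged C++ sketch of it (not part of the slides): with relaxed atomics, both "if" conditions may indeed be observed as true; making all four accesses memory_order_seq_cst would forbid that outcome.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

// Store-buffering ("Dekker") pattern from the slide. With relaxed ordering
// both loads may return 0, i.e. both "if" conditions can be true at once;
// with memory_order_seq_cst on all four accesses that outcome is forbidden.
std::atomic<int> A{0}, B{0};
int r1 = -1, r2 = -1;

void p1() {
    A.store(1, std::memory_order_relaxed);   // A = 1;
    r1 = B.load(std::memory_order_relaxed);  // if (B == 0) ...
}

void p2() {
    B.store(1, std::memory_order_relaxed);   // B = 1;
    r2 = A.load(std::memory_order_relaxed);  // if (A == 0) ...
}

int main() {
    std::thread t1(p1), t2(p2);
    t1.join(); t2.join();
    if (r1 == 0 && r2 == 0)
        std::puts("both 'if' conditions were true (allowed under relaxed ordering)");
    else
        std::puts("at least one thread observed the other's write");
}
```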

Slide 9: Consistency models
- From most strict to most relaxed:
  - Strict consistency
  - Sequential consistency
  - Weak consistency
  - Release consistency
  - [...many more...]
- Stricter models are:
  - Easier to understand
  - Harder to implement
  - Slower
  - Involve more communication
  - Waste more energy

Slide 10: Strict consistency ("linearizability")
- All memory operations are ordered in time
- Any read of location X returns the value written by the most recent write to X
- This is the intuitive notion of memory consistency
- But it is too restrictive and thus unused

Slide 11: Sequential consistency
- A relaxation of strict consistency (defined by Lamport)
- Requires that the result of any execution be the same as if the memory accesses were executed in some sequential, interleaved order
  - Can be a different order upon each run
- Example (time flows left to right):

  P1: W(x)1
  P2:        R(x)1  R(x)2
  P3:        R(x)1  R(x)2
  P4: W(x)2

  This execution is sequentially consistent: it can be ordered as W(x)1, R(x)1, R(x)1, W(x)2, R(x)2, R(x)2
- Q: What if we flip the order of P2's reads?

Slide 12: Weak consistency
1. Accesses to "synchronization variables" are sequentially consistent
2. No access to a synchronization variable is allowed to be performed until all previous writes have completed everywhere
3. No data access (read or write) is allowed to be performed until all previous accesses to synchronization variables have been performed
- In other words, the processor doesn't need to broadcast values at all until a synchronization access happens; but then it broadcasts all values to all cores
- Example (time flows left to right; S denotes an access to a synchronization variable):

  P1: W(x)1  W(x)2  S
  P2: R(x)0  S      R(x)2
  P3: R(x)1  S      R(x)2

  Before synchronizing, P2 and P3 may still see old values of x; once they perform the synchronization access, they see the final value 2.
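C++ does not expose weak consistency as a named model, but the "synchronization variable" idea can be approximated as in the following sketch (my illustration): ordinary data accesses are relaxed, and a seq_cst atomic flag plays the role of the synchronization variable S.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Sketch of the "synchronization variable" idea. Assumption: the sync
// variable S is modeled as a seq_cst atomic flag; ordinary data accesses
// to x stay relaxed, mirroring the diagram above.
std::atomic<int>  x{0};        // ordinary data
std::atomic<bool> sync_var{false};

void writer() {                // P1
    x.store(1, std::memory_order_relaxed);
    x.store(2, std::memory_order_relaxed);
    sync_var.store(true);      // S: publish all previous writes
}

void reader() {                // P2 / P3
    int before = x.load(std::memory_order_relaxed);  // may be 0, 1 or 2
    while (!sync_var.load()) { /* spin */ }          // S: synchronize
    int after = x.load(std::memory_order_relaxed);   // must now see 2
    assert(after == 2);
    (void)before;
}

int main() {
    std::thread w(writer), p2(reader), p3(reader);
    w.join(); p2.join(); p3.join();
}
```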

Slide 13: Release consistency
- Before accessing a shared variable, an acquire operation must be completed
- Before a release is allowed, all accesses must be completed
- Acquire/release operations are sequentially consistent
- The acquire/release pair serves as a "lock"
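A minimal sketch of the acquire/release idea in C++ (not from the slides), using memory_order_acquire and memory_order_release in a tiny spinlock: all accesses to the shared variable are bracketed by an acquire on lock entry and a release on lock exit.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Minimal spinlock sketch: lock() is the acquire, unlock() the release.
// Writes made inside the critical section are visible to the next thread
// that acquires the lock.
class SpinLock {
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
public:
    void lock()   { while (flag_.test_and_set(std::memory_order_acquire)) { } }
    void unlock() { flag_.clear(std::memory_order_release); }
};

SpinLock lock_;
int shared_counter = 0;   // ordinary (non-atomic) shared variable

void worker() {
    for (int i = 0; i < 100000; ++i) {
        lock_.lock();      // acquire completes before the shared access
        ++shared_counter;  // access to the shared variable
        lock_.unlock();    // all prior accesses complete before the release
    }
}

int main() {
    std::thread t1(worker), t2(worker);
    t1.join(); t2.join();
    assert(shared_counter == 200000);
}
```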

Slide 14: MESI protocol
- Each cache line can be in one of 4 states:
  - Invalid: the line's data is not valid (as in a simple cache)
  - Shared: the line is valid and not dirty; copies may exist in other caches
  - Exclusive: the line is valid and not dirty; other processors do not have the line in their local caches
  - Modified: the line is valid and dirty; other processors do not have the line in their local caches
- (MESI = Modified, Exclusive, Shared, Invalid)
- Achieves sequential consistency
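As a rough illustration (not part of the lecture, and much simplified compared to a real cache controller), the per-line state transitions can be sketched as a small function of the current state and the observed event:

```cpp
#include <cassert>

// Simplified MESI next-state logic for one cache line, from the point of
// view of a single cache. A teaching sketch only: write-backs, bus upgrades,
// etc. are omitted.
enum class State { Modified, Exclusive, Shared, Invalid };

enum class Event {
    LocalRead,        // this core reads and no other cache has the line
    LocalReadShared,  // this core reads and some other cache has the line
    LocalWrite,       // this core writes the line
    SnoopRead,        // another core's read is observed
    SnoopWrite        // another core's write (read-for-ownership) is observed
};

State next_state(State s, Event e) {
    switch (e) {
        case Event::LocalRead:       return (s == State::Invalid) ? State::Exclusive : s;
        case Event::LocalReadShared: return (s == State::Invalid) ? State::Shared    : s;
        case Event::LocalWrite:      return State::Modified;
        case Event::SnoopRead:       // a Modified line is written back first; any
                                     // valid copy then becomes Shared
            return (s == State::Invalid) ? State::Invalid : State::Shared;
        case Event::SnoopWrite:      // another cache takes ownership: our copy dies
            return State::Invalid;
    }
    return s;
}

int main() {
    State line = State::Invalid;
    line = next_state(line, Event::LocalRead);   assert(line == State::Exclusive);
    line = next_state(line, Event::LocalWrite);  assert(line == State::Modified);
    line = next_state(line, Event::SnoopRead);   assert(line == State::Shared);
    line = next_state(line, Event::SnoopWrite);  assert(line == State::Invalid);
}
```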

Slide 15: Two classes of protocols to track sharing
- Directory based
  - The sharing status of each memory block is kept in just one location (the directory)
  - Directory-based coherence has bigger overhead
  - But it can scale to bigger core counts
- Snooping
  - Every cache holding a copy of the data also has a copy of the sharing state
  - No centralized state
  - All caches are accessible via broadcast (bus or switch)
  - All cache controllers monitor (or "snoop") the broadcasts to determine whether they have a copy of what is requested
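To make the directory idea concrete, here is a hypothetical sketch (my illustration; field names and sizes are assumptions) of a per-block directory entry holding the block's state and a bitmap of sharers:

```cpp
#include <bitset>
#include <cstdint>
#include <vector>

// Hypothetical directory entry for one memory block: a sharing state plus a
// bitmap of which cores hold a copy. Sized for 64 cores as an example.
constexpr int kMaxCores = 64;

enum class BlockState : std::uint8_t { Uncached, Shared, Modified };

struct DirectoryEntry {
    BlockState             state = BlockState::Uncached;
    std::bitset<kMaxCores> sharers;          // which cores have a copy

    void on_read(int core) {                 // a core fetches the block to read
        sharers.set(core);
        if (state == BlockState::Uncached) state = BlockState::Shared;
    }
    void on_write(int core) {                // a core obtains exclusive ownership;
        sharers.reset();                     // invalidations go to the old sharers
        sharers.set(core);
        state = BlockState::Modified;
    }
};

int main() {
    std::vector<DirectoryEntry> directory(1024);  // one entry per memory block
    directory[7].on_read(0);
    directory[7].on_read(3);
    directory[7].on_write(3);   // core 3 writes; core 0's copy gets invalidated
}
```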

Slide 16: Multi-processor system: example (1/3)
[Figure: Processor 1 and Processor 2, each with an L1 cache, above a shared L2 cache and memory; address 1000 initially holds 5 in the L2]
- P1 reads 1000: miss; the line (value 5) is brought into P1's L1 in the Exclusive (E) state
- P1 writes 1000: the cached value becomes 6 and the line moves to the Modified (M) state

Slide 17: Multi-processor system: example (2/3)
- P1 reads 1000
- P1 writes 1000
- P2 reads 1000: miss
- L2 snoops 1000
- P1 writes back 1000 (value 6)
- P2 gets 1000 (value 6)
- P1's line moves from Modified (M) to Shared (S); P2's line is in the Shared (S) state

Slide 18: Multi-processor system: example (3/3)
- P1 reads 1000; P1 writes 1000
- P2 reads 1000; L2 snoops 1000; P1 writes back 1000; P2 gets 1000
- P2 requests ownership with write intent
  => P1's copy of the line becomes Invalid (I); P2's copy becomes Exclusive (E)
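Tying the three example slides together, the following self-contained toy simulation (my sketch, with the same simplifications as above) replays the walkthrough for address 1000, tracking both the MESI states and the data values, including the write-back of the modified value 6:

```cpp
#include <cstdio>

// Toy replay of the walkthrough: one address (1000), two private caches and a
// shared L2. States follow MESI; values show the write-back of P1's dirty data.
enum class St { M, E, S, I };
const char* name(St s) { return s == St::M ? "M" : s == St::E ? "E" : s == St::S ? "S" : "I"; }

struct Line { St state = St::I; int value = 0; };

int main() {
    int l2_value = 5;   // [1000] in the shared L2
    Line p1, p2;

    // P1 reads 1000: miss, filled from L2, no other copy -> Exclusive
    p1 = {St::E, l2_value};

    // P1 writes 1000: value becomes 6, line becomes Modified (L2 still holds 5)
    p1.value = 6;
    p1.state = St::M;

    // P2 reads 1000: miss; L2 snoops; P1 writes back 6; both copies become Shared
    l2_value = p1.value;
    p1.state = St::S;
    p2 = {St::S, l2_value};

    // P2 requests ownership with write intent: P1's copy is invalidated
    p1.state = St::I;
    p2.state = St::E;

    std::printf("P1: %s, P2: %s (value %d), L2 value %d\n",
                name(p1.state), name(p2.state), p2.value, l2_value);
}
```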

Slide 19: The alternative: incoherent memory
- As core counts grow, many argue that maintaining coherence
  - Will slow down the machines
  - Will waste a lot of energy
  - Will not scale
- Intel SCC
  - Single-chip Cloud Computer, built for research purposes
  - 48 cores
  - Shared, incoherent memory
  - Software is responsible for correctness
- The Barrelfish operating system
  - By Microsoft and ETH Zurich
  - Assumes no coherency as the baseline


Slide 21: Intel SCC – shared (incoherent) memory [figure]

