1 Caches (Electronic Computers M)

2 Cache. LOCALITY PRINCIPLE (SPATIAL AND TEMPORAL), WORKING SET. Memory hierarchy: CPU registers, I level cache, II level cache, III level cache, memory, disk, tape. The cache is a memory with an access time some orders of magnitude shorter than that of the main memory BUT with a much smaller size. It contains a small (see later) replicated portion of the main memory. The CPU, when accessing an item (code or data), FIRST tries to find it in the cache (hit) and only when it is not found there (miss) accesses the main memory. The cache does not hold single bytes BUT groups of bytes with contiguous addresses (normally 32, 64, 128 or more, and in any case «aligned», that is, starting at an address multiple of the group size): each group is called a «line».

3 Cache. [Figure: a memory of lines (32-256 bytes per line) and the cache replicating some of them.] Memory access time: >100 clock cycles. Cache access time: 1 to 4 clock cycles. Number of a line: the address of the lowest byte of the line divided by the line size (aligned). In other words, the line number is the complete address of the first byte of the line without its LS bits, which are zeros (alignment!). The processor-generated address is therefore split into a line number (used for the in-cache position detection) and an in-line offset. The accessed data range from a single byte to the entire line.
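
As a minimal sketch of this split (assuming 64-byte lines, a size chosen here only for illustration):

```python
# Splitting a processor-generated address into line number and in-line
# offset; LINE_SIZE must be a power of 2 (alignment).
LINE_SIZE = 64

def split_address(addr: int) -> tuple[int, int]:
    line_number = addr // LINE_SIZE   # equivalent to dropping the 6 LS bits
    offset = addr % LINE_SIZE         # the 6 LS bits
    return line_number, offset

# The same split with bit operations, as the hardware does it:
assert split_address(0x12345) == (0x12345 >> 6, 0x12345 & 0x3F)
```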

4 Cache. Let's consider a cache line of 32 bytes: the in-line offset (0, 1, 2, ..., 31) is given by the 5 LS bits of the data/instruction address. [Figure: a 32-byte line with byte offsets 0 to 31.] In a cache access the processor can read/write: a byte at any offset; a half word starting at an even offset; a word starting at an offset multiple of 4. The cache read/write data MUST be aligned (address multiple of the data size). In RISC computers this is mandatory for the memory too, which is not the case for many CISC computers (an unaligned access implies that two consecutive accesses, and therefore two cache accesses, must be performed. Why? Because the most significant part of the address must be incremented). Please notice that the cache offset has nothing to do with the page offset.

5 Associative memories (Content Addressable Memories). Associative memories include BOTH the data lines and their lowest-byte address (the line number, that is the TAG). A datum is found not through the decoding of the CPU address BUT by means of a parallel comparison between all cache line numbers (TAGs) and the MS bits of the CPU address. The comparison can be either successful (hit) or not (miss).

6 Fully associative cache. [Figure: a cache of slots, each with a TAG, a validity bit and a 256-byte line; a memory of lines 0 to z.] Any memory line can be stored in each slot. The TAG is the line number. For instance: 64 GB memory (36-bit address) and 256-byte lines. In-line offset: 8 bits. TAG = 36 - 8 = 28 bits. The cache size is always a power of 2, as is the line size. The processor-generated address is split into a line number (28 bits) and an in-line offset (8 bits). The line number is compared with all cache TAGs. In case of HIT (and if the validity bit is 1) the requested data is present. The address offset is the position of the first byte in the line (the requested data can be a byte, a word, a double word and so on, provided it is within the line boundary). This cache organization makes the best use of the cache but it is terribly complex, since it requires many comparators: if the cache has 1024 slots (in this case the cache size is 256 Kbytes), 1024 28-bit comparators are required, and caches normally have 64K slots and more. Each cache line has status bits (2 or more). In this case the cache memory size is 1024 x (28 + 256 x 8 + 2) bits (tag, data, status).
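
A behavioral sketch of this lookup under the slide's parameters (names are illustrative; in hardware all TAG comparisons happen in parallel, the loop below only models that logic):

```python
# Each slot holds a validity bit, a 28-bit TAG (the line number) and a
# 256-byte line.
LINE_SIZE = 256

class Slot:
    def __init__(self):
        self.valid = False
        self.tag = 0                  # the 28-bit line number
        self.data = bytes(LINE_SIZE)

def lookup(slots, addr):
    line_number, offset = addr >> 8, addr & 0xFF
    for slot in slots:                # 1024 comparators in hardware
        if slot.valid and slot.tag == line_number:
            return slot.data[offset]  # hit
    return None                       # miss
```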

7 Directly mapped cache. [Figure: a cache of slots, each with a TAG, a validity bit and a line.] In each cache slot only a subset of all memory lines can be stored: in slot 0 only those whose line number divided by the number of slots has remainder 0, in slot 1 those with remainder 1, and so on. Obviously the initial memory address of the data in each slot is the line number joined with zeros. For instance: 1 MB main memory, 64-byte lines => 16K different lines. If the cache has 128 slots (the cache size is therefore 128 x 64 B = 8 KB), slot 0 stores lines number 0, 128, 256, etc., slot 1 lines number 1, 129, 257, etc.

8 Directly mapped cache: an example (4-byte lines). [Figure: a memory of 16 lines (0-15) mapped onto a cache of 4 slots, each with a TAG, a validity bit and a line.]

9 Directly mapped cache. The LS bits of the line number indicate the only cache slot where the line can be stored. Consider a processor with a 36-bit address (64 GB) and 256-byte lines (8-bit offset): the line number is 28 bits (how many lines? 2^28 = 2^10 x 2^10 x 2^8). If the cache has 1024 slots (256 KB), the 10 LS bits of the line number (the index) indicate the slot where a line must be stored. Only one 10-bit decoder (to detect the involved slot) and only one 18-bit comparator are needed. Very little flexibility. The processor-generated address is therefore split into a TAG (18 bits), a slot index (10 bits) and an in-line offset (8 bits).
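
The same decomposition, sketched with the slide's parameters:

```python
# 36-bit address, 8-bit offset, 10-bit index, 18-bit TAG (36 - 8 - 10).
def decompose(addr: int) -> tuple[int, int, int]:
    offset = addr & 0xFF             # 8 LS bits: byte within the line
    index = (addr >> 8) & 0x3FF      # 10 bits: the only possible slot
    tag = (addr >> 18) & 0x3FFFF     # 18 bits: stored and compared
    return tag, index, offset
```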

10 Directly mapped cache. [Figure: the processor-generated address split into TAG, slot index and line offset; the index selects the slot, whose TAG is compared with the address TAG. In each slot only one line for each index can be stored.]

11 A compromise: the n-way set-associative cache, with many lines (ways) for each index. N comparators are needed for n ways; the parallelism of the comparators is identical to that of the directly mapped cache. Sometimes speculative mechanisms are used (way 0 data is provided, then checked). In directly mapped caches data can be provided before the validity and TAG check; in set-associative caches only after the check.

12 Set-associative cache. [Figure: the address (Tag, Index, Offset) selects one set; the Tag is compared with the Tag and Status of each way (Way 0, Way 1, ..., Way n), producing hit/miss; the data word is then selected according to the data type requested by the CPU (byte, word, DW etc.).]
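
A behavioral sketch of the set-associative lookup, with parameters chosen only for illustration (64-byte lines, 128 sets, 4 ways):

```python
LINE_SIZE, N_SETS, N_WAYS = 64, 128, 4

class Way:
    def __init__(self):
        self.valid, self.tag, self.data = False, 0, bytes(LINE_SIZE)

sets = [[Way() for _ in range(N_WAYS)] for _ in range(N_SETS)]

def lookup(addr: int):
    offset = addr % LINE_SIZE
    index = (addr // LINE_SIZE) % N_SETS   # selects one set
    tag = addr // (LINE_SIZE * N_SETS)     # compared in every way of the set
    for way in sets[index]:                # N comparators in hardware
        if way.valid and way.tag == tag:
            return way.data[offset]        # hit, only after the TAG check
    return None                            # miss
```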

13 Therefore... In a fully associative cache a line can be stored in any slot. In a directly mapped cache, only in one slot: that corresponding to the INDEX. In a set-associative cache, in any way of the slot corresponding to the INDEX.
http://www.ecs.umass.edu/ece/koren/architecture/Cache/default.htm
http://www.ecs.umass.edu/ece/koren/architecture/Cache/page3.htm
http://www.ecs.umass.edu/ece/koren/architecture/Cache/frame2.htm

14 Replacement algorithms. Caches are of limited size and therefore it is necessary (e.g. in case of a read miss) to select a line to be discarded (overwritten if not modified; written back to memory and then overwritten if modified). There are basically three possible policies, with different efficiency and complexity: RAND (Random), LRU (Least Recently Used) and FIFO (First In First Out). RAND: the logic network must first detect whether invalid lines are present (and in that case overwrite one of them); if not, a random number generator (e.g. a shift register fed back through an EX-OR gate) selects the line to be replaced. The algorithm can be refined by selecting the non-modified lines first. Although non-optimal, this algorithm is very cost-effective.
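
A sketch of the random source mentioned above: a 4-bit shift register fed back through an EX-OR of its two top bits (taps chosen here so the sequence has maximal length); its 2 LS bits select one of 4 ways:

```python
# Linear-feedback shift register (LFSR) over 4 bits.
def lfsr4(state: int) -> int:
    bit = ((state >> 3) ^ (state >> 2)) & 1   # EX-OR feedback
    return ((state << 1) | bit) & 0xF

state = 0b1001                 # any non-zero seed
victim_way = state & 0b11      # 2 LS bits -> way 0..3
state = lfsr4(state)           # advance for the next replacement
```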

15 Replacement algorithms: LRU with a shift-register network (NB: the same network for each set). Let's suppose there are 4 ways and that all lines of the set are valid. Ra, Rb, Rc, Rd are 2-bit registers storing the way numbers Na, Nb, Nc, Nd (0, 1, 2, 3 in any order, according to the set history!). The way number in Ra is the most recently hit; the other lines were hit in the past according to their positions; Rd stores the way number least recently hit (the oldest line), whose line is the candidate for replacement in case of a miss in the set. Rx is a further 2-bit register which stores the number of the way currently hit (if any, i.e. no miss). When a «hit» occurs the hit way must become the most recent way and all more recent ways must drop one rank, with no rank change among the others: for each hit the contents of the 2-bit registers are right-shifted one position, starting from Rx. [Figure: a right-shift register made of Rx, Ra, Rb, Rc, Rd; an Ex-OR comparator and an AND gate on the CLK block the shift beyond the register holding the hit way.]

16 Replacement algorithms. Let's now suppose a HIT for way 2 and that the way numbers in the Ri registers, from left to right, are 1, 0, 2, 3 (way 3 is the replacement candidate in case of a set miss). The shift register right-shifts until Rc (whose way number is 2) and not beyond, because the Rd clock is blocked by the Ex-OR. After the clock the Ri registers store, in sequence, 2, 1, 0, 3: way 3 is still the candidate for replacement while all other way numbers are correctly updated, with way 2 as the most recently hit. [Figure: present status 1, 0, 2, 3 in Ra-Rd; next status 2, 1, 0, 3.] When a line is invalidated the mechanism is symmetrical to the hit mechanism: its way number is stored in Rd, and all the way numbers which were hit less recently than the invalidated line are left-shifted one position. For instance, starting from the situation in the figure above, if way 0 (in register Rb) is invalidated, way 0 is stored in Rd while way 2 moves to Rb and way 3 to Rc. In order to deal with invalidations a symmetrical circuit must be added (the network shifts left until the position of the invalidated way is reached, that is, the clock is blocked at Rb).
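
A behavioral model of this mechanism (not the gate-level network): the way numbers are kept ordered from most to least recently hit, so the last one is always the replacement candidate:

```python
# Ra..Rd modeled as a list; the slide's example: Ra=1, Rb=0, Rc=2, Rd=3.
ways = [1, 0, 2, 3]

def hit(way: int):
    ways.remove(way)         # registers up to the hit way right-shift
    ways.insert(0, way)      # the hit way becomes the most recent (Ra)

def invalidate(way: int):
    ways.remove(way)         # less recent way numbers left-shift
    ways.append(way)         # the invalidated way becomes the oldest (Rd)

hit(2)
assert ways == [2, 1, 0, 3]  # way 3 is still the replacement candidate
```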

17 COUNTERS: a counter for each way of each set. The counter values correspond to the way's ranking position for replacement: 0 -> most recently hit, 3 -> least recently hit. In most implementations the counters can be incremented or reset. In case of a hit on a way, the counters with a lower value are incremented and the counter corresponding to the hit way is reset (value zero). In case of miss and replacement the way whose counter is 3 is selected, and then the system behaves as if that way had been hit. In case of invalidation the invalidated way's counter becomes 3 and all the other counters with a greater value are decremented.

Example (counter values of ways W0-W3; the final status of each event is the initial status of the following one):

Event                                           Initial W0 W1 W2 W3   Final W0 W1 W2 W3
01) Hit Way 0                                           1  0  3  2          0  1  3  2
02) Miss (line fill; Way 2, count 3, replaced)          0  1  3  2          1  2  0  3
03) Way 1 invalidated                                   1  2  0  3          1  3  0  2
04) Hit Way 0                                           1  3  0  2          0  3  1  2
05) Way 3 invalidated                                   0  3  1  2          0  2  1  3
06) Miss (line fill; Way 3, count 3, replaced)          0  2  1  3          1  3  2  0
07) Hit Way 2                                           1  3  2  0          2  3  0  1
08) Miss (line fill; Way 1, count 3, replaced)          2  3  0  1          3  0  1  2
09) Miss (line fill; Way 0, count 3, replaced)          3  0  1  2          0  1  2  3
10) Miss (line fill; Way 3, count 3, replaced)          0  1  2  3          1  2  3  0

(The original figure also tracked the validity bits of the ways across the invalidations and line fills.) It must be noted that the counter algorithm is equivalent to the shift register network: there the position indicates the age rank, here the counter value does.
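
A sketch of the counter rules; the assertion reproduces event 01 of the table:

```python
# One counter per way: 0 = most recently hit, 3 = replacement candidate.
count = [1, 0, 3, 2]                  # initial status of event 01

def hit(way: int):
    for w in range(4):
        if count[w] < count[way]:     # lower counters are incremented
            count[w] += 1
    count[way] = 0                    # the hit way is reset

def invalidate(way: int):
    for w in range(4):
        if count[w] > count[way]:     # greater counters are decremented
            count[w] -= 1
    count[way] = 3                    # invalidated way becomes the candidate

def miss_fill() -> int:
    victim = count.index(3)           # the way whose counter is 3
    hit(victim)                       # then behave as if that way was hit
    return victim

hit(0)
assert count == [0, 1, 3, 2]          # final status of event 01
```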

18 Replacement algorithms: PSEUDO-LRU (in this example 4 ways). The 4 set ways are indicated by I0, I1, I2 and I3. In case of miss an invalid line is replaced first. There are three bits (B0, B1 and B2) for each set: if the last set access was for I0 or I1 then B0 = 1, otherwise B0 = 0; if the last access between the two ways I0 and I1 was for I0 then B1 = 1, otherwise B1 = 0; if the last access between the two ways I2 and I3 was for I2 then B2 = 1, otherwise B2 = 0. In case of replacement, according to B0 the cache first selects which couple (I0:I1 or I2:I3) was least recently accessed, then selects within the couple the way to be replaced according to B1 or B2. [Decision tree: B0 = 0? Yes -> (I0:I1) least recently accessed, then B1 = 0? Yes -> replace I0, No -> replace I1; No -> (I2:I3) least recently accessed, then B2 = 0? Yes -> replace I2, No -> replace I3.] The algorithm is pseudo-optimal because I1 could be the least recently accessed way and yet be «hidden» by I0, if I0 is the most recently accessed.
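
A sketch of the pseudo-LRU bits and of the replacement walk described above:

```python
B = [0, 0, 0]                  # B0, B1, B2 of one set

def access(way: int):          # way in {0,1,2,3} = I0..I3
    B[0] = 1 if way in (0, 1) else 0
    if way in (0, 1):
        B[1] = 1 if way == 0 else 0
    else:
        B[2] = 1 if way == 2 else 0

def victim() -> int:
    if B[0] == 0:              # couple I0:I1 least recently accessed
        return 0 if B[1] == 0 else 1
    else:                      # couple I2:I3 least recently accessed
        return 2 if B[2] == 0 else 3
```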

19 Replacement algorithms: FIFO. In this implementation there is a single counter for each set which, starting from 0, is incremented for each read miss (that is, for each replacement). The new line is inserted in the way pointed to by the value of the counter. This algorithm has a singularity because it does not consider invalidations: if the counter has value 3 and the line in way 2 is invalidated, way 2 and not way 3 should be used in case of a read miss. Although suboptimal, this algorithm has a very good cost/effectiveness ratio.
http://www.ecs.umass.edu/ece/koren/architecture/Cache/frame1.htm
http://www.ecs.umass.edu/ece/koren/architecture/PReplace/

20 What then is a TLB? [Figure: the processor sends a virtual address to the TLB; a hit returns the physical address used to access the cache/RAM; a miss requires the translation to be fetched.] The TLB is a cache which, instead of providing memory data, provides memory addresses (physical addresses), since it is addressed by the processor virtual addresses. The TLB access time is similar to that of the 1st level cache. In modern processors the TLB (like the caches) has two levels. NB: a processor could (theoretically, today) be not paged; in that case the TLB does not exist, since the virtual addresses are also the physical addresses. As for the caches, the TLB can be fully associative, directly mapped or set-associative, with the same replacement problems. TLBs are normally 8-16 way set-associative with 64-1024 slots.

21 Cache write. How are lines dealt with in case of a write miss? Read (with possible replacement) and then write? Two possible policies: Yes: write-allocate; No: no-write-allocate. In case of write-allocate the operation is a read/replacement followed by a line write in cache. In the other case the data are written in the following cache level (if any, and if it contains the line; otherwise in memory). N.B.: write operations are MUCH less frequent than read operations, and with a high probability of sparse addresses.

22 Cache write. What happens when a write hit occurs? Must the data be written also in the following cache levels? Two policies: Yes: write-through; No: write-back. In the first case the line is overwritten and the data are also written in the following cache levels (down to the memory). In the second case the line is overwritten without forwarding the data to the next level cache (or memory), unless for coherency problems (see later). The write-back policy implies that a bit must be present for each line in order to indicate whether the line has been modified (dirty bit). When a line must be replaced, an already overwritten line must first be written back to the following cache level (which could be the memory), since the data in the first level are more recent. The data traffic is much smaller (smaller bandwidth use) but the hardware is more complex. It must be underlined that a line is a consistent data structure: even in case of a single byte modification the entire line must be written back.
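
A behavioral sketch of the two write-hit policies (class and function names are hypothetical):

```python
# `dirty` is the per-line modified bit required by write-back.
class Line:
    def __init__(self, number, size=64):
        self.number, self.data, self.dirty = number, bytearray(size), False

class NextLevel:                                # stands for L2 or memory
    def __init__(self):
        self.store = {}
    def write(self, number, data):
        self.store[number] = bytes(data)

def write_hit(line, offset, value, policy, nxt):
    line.data[offset] = value                   # the line is overwritten
    if policy == "write-through":
        nxt.write(line.number, line.data)       # data also forwarded downstream
    else:                                       # write-back
        line.dirty = True                       # deferred until replacement

def replace(line, nxt):
    if line.dirty:                              # even one modified byte forces
        nxt.write(line.number, line.data)       # the whole line to be written back
```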

23 Posted write. Very often, in order to reduce the access time impact, the posted-write methodology is used. [Figure: processor, cache, a FIFO write buffer, RAM.] Data to be written in RAM are inserted in a FIFO write buffer, which is accessed by the processor (or by the cache, in case of a write-back for replacement) with no delay. The memory controller then transfers the data from the buffer to the memory at the memory speed (much lower). Normally the FIFO has 4-32 slots. When the FIFO is full, the processor (or cache) is delayed. NB: when the write buffer is used, the cache read system must first check whether the requested data are in the FIFO.

24 Coherency. Caches have coherency problems. This means that the system must grant the most recent data to a system «agent» (processor, DMA, graphic processor...) upon a read request (please notice that a write of a non-present line is preceded by the reading of the line). The coherency problem arises not only between caches of processors belonging to a multiprocessor system but also between the different level caches of the same processor. For the sake of simplicity let's consider that all processors of the same multiprocessor system have two levels of cache (L1 and L2) plus the common memory. In most cases L2 is bigger than L1 (the cache directly connected to the processor). Let's suppose the caches are inclusive, that is, if a line is present in L1 it is present also in L2 (but not vice versa). [Figure: P1 and P2, each with Cache 1 and Cache 2, connected through a bus to the memory.] The presented mechanisms can be easily extended to the case of n-level caches.

25 Coherency policies (general): READ. How can we grant that an external agent (not the processor) reads from memory the most recent version of the data (the data in memory could be stale, that is, «old»)? Let's consider the write policies. Write-through: for each processor data write (whether the data is present in cache or not) the data is written also in memory: coherency is therefore granted, but the system is slowed by the memory access time. Posted write-through: similar to the previous case; the processor efficiency is improved (the processor is normally not delayed by the memory access time), but no access is allowed to the external agent until the data are written to memory (not easy to implement and not very efficient). Write-back: in this case the memory is updated only when necessary (i.e. upon a replacement). For each external agent access, the cache (or caches) must be checked in order to verify whether it (they) stores the requested data; if the answer is positive, the agent memory access must be blocked until the requested data are forcedly written back to memory. This is the cache snoop mechanism.

26 Coherency policies (general): WRITE. What happens when another agent wants to write data in memory (or in its caches)? Write-through: the cache controller must monitor the system bus and invalidate the lines (if any) containing the data overwritten by the agent (coherent until then). Write-back: the cache controller must monitor the system bus and, in case of a write attempt by an agent, must perform the following operations: a) if the data are present in cache in a modified line (or lines), the controller must stop the agent, trigger a write-back of the modified line and then invalidate the line (lines). It must be noticed that the write-back operation is needed because, since a line is made of several bytes, there is no way of detecting which byte (or bytes) were modified: the new master could write bytes different from those which were modified; b) if the line data are not in modified state (the line is coherent with the memory), upon a write from another master the line must only be invalidated in cache.

27 Two-level cache coherency policies (general). How can we grant that another agent reads from memory the most updated data (if those data are also in a cache, the corresponding data in memory could be «stale», that is, «older» than those in cache)? L1 and L2 write-through: for each processor write (whether the data is present in cache or absent) the data are written down to the memory. This obviously has a great impact on the bus, the most important bottleneck. The write operation by L2 could be deferred. In case of a write by another agent, the data are invalidated (if present) in both cache levels. L1 write-through and L2 write-back: in this case L2 must monitor the bus and, when another agent tries to read a data, must first write back the modified data to memory (if any; the data are in any case the same in L2 and in L1, if present in L1). In case of a write access by another agent, the modified data must first be written back to memory and then invalidated in both caches. N.B. The processor has no way of determining whether a secondary cache is present: the signals exchanged with the system must be the same whether a secondary cache exists or not. The same applies to the secondary cache if a third level cache is present.

28 Two-level cache coherency policies (general). L1 and L2 both write-back. When the processor reads data (line fill) upon a miss in L1, L2 checks whether it stores the requested data. If yes, the data are transferred to L1 (with a possible replacement). If the data are present in L2 this means that they are «cacheable». If the data are not available in L2, they are requested to the memory controller (MC): if the data are «cacheable» a line fill takes place both in L2 and L1; if not, the data are simply read by the processor. In case of a processor write operation with both L1 and L2 write-back there are many cases, which depend on whether the system is mono- or multi-processor: in any case the system must provide the most updated data when they are requested. How are these policies implemented? With the MESI PROTOCOL.

29 M.E.S.I. (monoprocessor, write-back). I - invalid (L1 and L2): the requested line is not available in the cache. N.B. The lines of a code cache can only be in S or I state. At system start-up all lines in all caches are invalid. M - modified (L1 and L2): the requested line is available in the cache, where it was modified without write-back downstream (downstream is L2 for L1, and memory for L2). The considered cache stores the updated data. Notice that if a line is in modified state in both L1 and L2, the line in L1 is more updated than the same line in L2. A write operation triggers a transition from M to M state without downstream write. E - exclusive (L1 and L2): the considered line is present and identical to the same line in the device downstream (L2 for L1, memory for L2). A write operation triggers a state change from E to M without downstream write. (Careful: the name can be misleading; in a multiprocessor it means that the data are present in only one processor.) S - shared (state possible only for L1 in a monoprocessor system): the line is present in L1 (S), L2 (E) and memory. A write operation triggers a downstream write, upon which the L1 state becomes E and the L2 state changes from E to M (no memory write, see state E). L2 in monoprocessor systems is never in shared state because there are no agents which need to be informed of the state of the (single) processor's internal line (which is not the case in multiprocessor systems).

30 Possible states of the same line: monoprocessor case (with two-level caches).

L2    L1
I     I
M     M, E, or not present
E     S, or not present

NB: «not present» means the line is not in L1 because we consider inclusive caches (and L2 > L1). L2 is never shared in monoprocessor systems!!! L1 is always in a state which is related to the state of L2: a line can't be in M state in L1 if it isn't in M state in L2.

31 Coherency policies, monoprocessor (L1 and L2 both write-back). In a monoprocessor system a line fill, when the data are present neither in L1 nor in L2, triggers the following state change: the L1 state becomes S and the L2 state E. A successive write operation to L1 triggers a state change of L1 to E and of L2 to M (the data written to L1 are also written to L2 because the L1 state is shared); the data are not written to memory. A further write operation affects only L1, whose state becomes M. NB: since the size of L2 is bigger than the size of L1, it is possible (because of replacements) that a line is present not in L1 but in L2 only, either in E or M state. A line fill into L1 therefore stores the line in L1 in S or E state respectively. In the following slides we assume that all caches are inclusive; the MESI protocol is however applicable also to other cases.

32 Coherency policies (monoprocessor). The following cases apply to external agents without a private cache (i.e. DMA controller, graphic processor etc.) accessing memory. Read operation: if the L2 line containing the requested data is in E state, then the same line (if present) is in S state in L1; the memory data are therefore the most updated and the cached data status is not changed. If the line in L2 is modified, a check must be made in L1: if the line is not present in L1 or is in E state, the data in L2 must be written back to memory, L1 (if present) becomes S and L2 becomes E; if the data in L1 are modified, they must be written back to L2 and memory, L1 becomes S and L2 E. Write operation: it triggers an «enquiry» of the Memory Controller in L2. If the line (if any) containing the data is present in L2 in E state, then the same line is in S state in L1 (if present); the line in L1 and L2 is invalidated. If the line (if any) in L2 is in M state, L2 must check whether the line is present in L1 and is in M state. In any case the most recently updated version of the line is written back to memory and the line is invalidated in both caches. The external agent can then write its data in memory.

33 PROCESSOR READ COHERENCY (monoprocessor). 1) Miss in L1 but not in L2: line fill from L2 to L1. The L1 state depends on the L2 state: if the L2 state is exclusive, L1 becomes shared; if the L2 state is modified, L1 becomes exclusive. There is no chance of a line present in L1 and not in L2. N.B. Why must L1 become S if L2 is exclusive? Because, in case of a write, if L1 were in exclusive state no write-back to L2 would take place (L1 E->M), and a memory enquiry would find the requested data in L2 identical to those in memory (although stale), so no further enquiry on L1 would take place: a read or write by an external agent would operate on the memory data without write-back of the L1 data (the most recent ones). 2) Miss in L1 and L2 -> double line fill: L1 -> shared and L2 -> exclusive.

34 PROCESSOR WRITE COHERENCY (monoprocessor). 1) L1 hit. Three cases (the line is surely in L2 too): a) L1 shared (and therefore L2 exclusive): write to L1 and L2; L1->E and L2->M. b) L1 exclusive (and therefore necessarily L2 modified): write to L1 only; L1->M (and L2 stays M). c) L1 modified (and therefore L2 modified): write to L1 only; L1 remains in M state (as L2). 2) Miss in L1 and L2: line fill from memory into L2 (->E) and L1 (->S), then write to both caches (L1->E and L2->M). 3) Miss in L1 and not in L2: line fill from L2 into L1, then write. If L2 is in E state, L1->S; otherwise L1->E (L2 can only be in E or M state). Final states as per point 1.
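
A behavioral sketch of the monoprocessor (L1, L2) transitions of slides 33-34, assuming inclusive caches; the states are the MESI letters:

```python
def processor_read(l1: str, l2: str) -> tuple[str, str]:
    if l1 == "I" and l2 == "I":          # miss in both: double line fill
        return "S", "E"
    if l1 == "I":                        # miss in L1 only: fill from L2
        return ("S", "E") if l2 == "E" else ("E", "M")
    return l1, l2                        # hit: no state change

def processor_write(l1: str, l2: str) -> tuple[str, str]:
    if l1 == "I" and l2 == "I":          # double line fill (S, E), then write
        return "E", "M"                  # the write reaches L1 and L2
    if l1 == "I":                        # fill from L2, then write (point 3)
        return ("E", "M") if l2 == "E" else ("M", "M")
    if l1 == "S":                        # L2 is E: write to L1 and L2
        return "E", "M"
    return "M", "M"                      # L1 E or M: write to L1 only
```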

35 External agent READ/WRITE coherency (cacheless external agent). External agent READ: 1) Miss in L1 and L2, or hit in L1 or L2 both not modified: NOP. 2) Hit in L1 modified (and therefore L2 modified): L1 write-back to memory and L2; L1->S and L2->E. 3) Hit in L2 modified (and L1 exclusive, or line not present in L1): L2 write-back to memory; L2->E and L1 (if present) ->S. External agent WRITE: 1) Miss in L1 and L2: NOP. 2) Hit in L2 and possibly in L1, both not modified: L2->I and L1 (if present) ->I. 3) Hit both in L2 and L1 (both modified): write-back of L1 to memory, then L1->I and L2->I.

36 M.E.S.I. (multiprocessor). I - invalid: the requested line is not available in the cache. A read operation causes a LINE FILL. A write operation causes a WRITE-THROUGH in case of a no-write-allocate policy, otherwise a line fill followed by the write operation. M - modified: the line is present only in the caches of one processor, and in the considered cache it was modified without being written back to the downstream device (it is different from the same line in the downstream device). The line can be read and written without any downstream cycle. E - exclusive: the line is present only in the caches of one processor and its content is identical to that of the downstream device. The line can be read and written without any downstream cycle. A processor write operation triggers a transition to the M state. S - shared: as before, but now L2 too can be in S state. The line is in fact possibly in the caches of many processors («possibly» because it could have been present, for instance, in two processors, one of which has then replaced the line). A write operation causes a downstream write and invalidates the same line in the caches of the other processors, if any.

37 Possible states of the same line: multiprocessor case (with two-level caches).

L2    L1
I     I
M     M, E, or not present
E     S, or not present
S     S, or not present

In case of multilevel caches a lower level cache stores a reduced set of the lines of the upper level (inclusive caches). But not always (non-inclusive caches)!

38 READ COHERENCY, multiprocessor (only L1 and L2). 1) Miss in L1 but not in L2. When L2 is shared or exclusive, the line read into L1 becomes shared; when L2 is modified, the line read into L1 becomes exclusive. NB: similar to the monoprocessor case, but notice that here it is possible that both L1 and L2 are in shared state (while in a monoprocessor L2 is NEVER in S state).

39 READ COHERENCY, multiprocessor. 2) Miss in L1 and L2: bus snoop. When the line is in neither cache, a double line fill occurs: if the line is not present in the caches of another processor, the line read into L1 is in shared state and in L2 in exclusive state. When the line is present in some other caches, not modified (that is, in shared or exclusive state), upon the snoop all of them become shared: the line is read into L1 and L2, and in both caches of the requesting processor (as in all caches of the other processors) the state becomes shared. If the line is present in the caches of only one processor and is in modified state (a line can be in modified state in ONLY one processor!): back-off on the bus, write-back of the line to memory, and the state of the hit caches becomes shared; the line is then read into L1 and L2, and in both caches the state becomes shared. Notice that if a line is in modified state in an L1, it is in modified state in the corresponding L2 too!! N.B. A bus snoop is a snoop on L2, which is forwarded to L1 if L2 is in modified state.

40 WRITE COHERENCY, multiprocessor. 1) Miss in L1 and L2. Three cases: a) the line is not in the caches of other processors: as for the monoprocessor; b) the line is present in other caches, not modified: all caches containing the line are invalidated; read into L1 and L2 and then write; final state L1 exclusive and L2 modified; c) the line is present in another processor (only one!) in modified state: bus back-off, the modified line is written back to memory and the caches storing the line are invalidated (do not forget that both L1 and L2 can be in modified state). The modified line must first be written back because it is not known which data of the line will be rewritten. Then as in the case of the monoprocessor. In any case, at the end of the operation L2 is modified and L1 exclusive. 2) Miss in L1 and not in L2. The line stored in L2 is forwarded to L1 and then written. Three cases: a) L2 exclusive: no bus snoop; the line is written in L2 and L1; at the end L2 modified and L1 exclusive; b) L2 modified: L2 modified and L1 exclusive; c) L2 shared: bus snoop with invalidation, read into L1 and L2 and then write; L1 exclusive and L2 modified.

41 WRITE COHERENCY, multiprocessor. 3) Hit in L1 (and therefore in L2). Three cases: a) L1 modified: only L1 is written; b) L1 exclusive: only L1 is written; L1->M; c) L1 shared, two cases: I. L2 shared: bus snoop with invalidation, then write on L1 and L2; L1 exclusive and L2 modified. II. L2 exclusive: no bus snoop; write on L1 and L2; L1 exclusive and L2 modified. N.B. There are no cases with L1 shared and L2 modified.

42 A three-level cache case. Only entire lines are transferred. The cache is inclusive: higher levels have a superset of the lines of the lower levels. [Figure: the state of the same line at levels 1, 2, 3 across a sequence of operations. Read of a non-shared line -> S, S, E; write -> S, E, M; further write -> E, M, M; read from another processor -> S, S, S; write from another processor (broadcast invalidation) -> I, I, I; read of a shared line -> S, S, S; write (broadcast) -> S, E, M; replacement at the 1st level -> (I), E, M, where (I) means the line no longer exists at that level: the slot is filled with a new line, which is stored in the other levels too, while the replaced line remains in its previous state at the other levels.]

43 Other coherency policies: the «directory based» coherency protocol. Each line can be in one of the following states. Shared: one or more processor caches have the line coherent with the memory. Non cached: no processor cache has the line. Modified: only one processor cache has the modified line; in this case that processor is the temporary owner of the line. [Figure: four nodes, each with a processor P, caches C (possibly multilevel), a local memory M with its directory D, and I/O.] The total memory is the sum of the processors' local memories (accessible also from the other processors): there is therefore a unique memory addressing system for all memories (true in all modern multiprocessors). Local memories are normally dual-port memories. Information about the lines of each local memory is stored in directories: each directory stores, for each line of the local memory, the information about the processors whose caches (if any) store the line.

44 Directory based protocol. In the line directory there is a bit for each processor, which is 1 if that processor's cache stores the line. Two or more 1's mean that the line is in shared state. A single 1 means that there is a possible owner of the line (the line could be in modified state). If a processor modifies a line in a cache, a message is sent to the directory to which the line belongs, which in turn sends a message to invalidate the same line in the other caches (if any). In case of a read, a message is sent to the owner (if any), which must write back its modified data, and the line becomes shared. In case of a write, a write-back takes place, followed by the line invalidation of the previous owner (if any). [Figure: home, local and remote nodes exchanging line request messages (L).] The transitions are similar to those of MESI but the implementation is different. This system is very useful if there are multiple connections between the processors, since it reduces the global use of the buses.

45 There are two types of caches: unified and non-unified. Non-unified means that data and instructions are not mixed; unified means the contrary. In general, in modern processors the first level caches are non-unified (Harvard architecture); the other levels are unified.

46 Branch Target Buffer. In order to avoid stalls caused by branches, a branch prediction is necessary in the first stage of the pipeline. The prediction can be either correct or wrong; in any case the branch is tested in the execution stage. The BTB is therefore a cache whose TAGs are the addresses (PC) of instructions detected as branches. The line in this case is the branch destination address, and among the status bits there are those which predict whether the branch is Taken or Untaken. In case of a miss a line fill occurs and a replacement procedure is activated; the initial prediction is the outcome observed in the execution stage. [Figure: the PC is compared with the BTB entries, each holding a PC address, a destination address and Taken/Untaken (T/U) bits.]

47 Branch prediction. How is a prediction managed? On a statistical basis? Simple case: static prediction where the prediction is always «taken». The error probability with this policy, according to the SPEC benchmarks, is 34% (fairly high). Static prediction according to the direction of the branch (forward or backward): in this case the prediction is taken for backward branches (see loops) and untaken for forward branches. In the SPEC benchmarks, however, the majority of branches are forward and taken, therefore the «always taken» prediction gives better results. Dynamic prediction on the basis of the history of the branch: the prediction error varies between 5% and 22%.

48 Branch Target Buffer. With only one prediction bit, which records the last verified outcome of the branch, a loop (loop2) nested in another (loop1) suffers two successive prediction errors: when loop2 ends, its branch is predicted as taken but is untaken (first error), and there is a following error because on the next iteration of loop1 the loop2 branch will be predicted as untaken (while it is taken).

49 Branch Target Buffer. Normally two bits are used. Two possible schemes. [Figure: two four-state TAKEN/UNTAKEN diagrams.] In the first scheme the prediction is changed only after two «mispredictions» (a low-pass filter). In the second scheme the prediction is changed after two «mispredictions» but is ready to go back to the previous prediction in case of a further change. With both schemes the accuracy is higher than 80%.
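
A sketch of the first scheme, encoded as a 2-bit saturating counter:

```python
# Four states: the prediction flips only after two consecutive errors.
STRONG_U, WEAK_U, WEAK_T, STRONG_T = 0, 1, 2, 3
state = WEAK_T

def predict() -> bool:
    return state >= WEAK_T                      # True = taken

def update(taken: bool):
    global state                                # move one step per outcome
    state = min(state + 1, STRONG_T) if taken else max(state - 1, STRONG_U)
```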

50 Advanced algorithms for BTB: two-level adaptive prediction. Two structures: the BHR (Branch History Register) and the PHT (Pattern History Table). First case: global approach. The BHR is a shift register storing the history of the most recent n branches (8 in this example): what really happened, that is, whether each branch was verified as taken (1) or untaken (0). Example: BHR = 00101110 (2E hex = 46 decimal). The BHR content points to one of the 2^8 = 256 PHT slots (indexed from 00 to FF hex), which records what was decided the previous times the same global succession (BHR) occurred, and from which the decision is derived: taken or untaken.

51 Advanced algorithms for BTB. In case of a branch, the most recent succession of events is analysed (whether each branch was really taken or untaken). For each configuration of this succession a pattern is selected which reflects the decisions taken with this succession configuration. After each branch execution the resulting outcome is shifted into the BHR. A function must be defined which, according to the contents of the BHR and the PHT, predicts the branch (see next slide). This prediction system (which uses n + (2^n x m) FFs, where n is the size of the BHR and m that of each PHT slot) is cheap and effective but not very precise, because it makes no distinction among the different branches.

52 Advanced algorithms for BTB. In the previous example a BHR of 8 bits and PHT slots of 5 bits were chosen. There is no correlation between the two sizes, which can be individually and arbitrarily chosen by the designer. The prediction policy based on these two elements has to be defined, and this is where the efficiency of the BTB lies. Here are some examples of possible policies: a) the number of «ones» in the BHR and in the pointed PHT slot is counted: if it is greater than 7 then the next prediction is «taken» (1), otherwise it is «untaken» (0); b) the first 5 bits of the BHR and the PHT slot are EX-ORed and the number of «ones» is counted: if even, «taken», otherwise «untaken»; c) ...
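
A sketch of the global two-level structure implementing policy (a). The PHT update rule is not specified in the slides; shifting the actual outcome into the selected PHT slot is an assumption made here:

```python
N, M = 8, 5                       # BHR bits, bits per PHT slot
bhr = 0b00101110                  # global history: 1 = taken, 0 = untaken
pht = [0] * (2 ** N)              # one M-bit pattern per history value

def predict() -> bool:
    # Policy (a): count the «ones» in the BHR and in the pointed PHT slot.
    ones = bin(bhr).count("1") + bin(pht[bhr]).count("1")
    return ones > 7               # True = taken

def update(taken: bool):
    global bhr
    pht[bhr] = ((pht[bhr] << 1) | taken) & ((1 << M) - 1)  # assumed rule
    bhr = ((bhr << 1) | taken) & ((1 << N) - 1)            # real outcome
```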

53 Advanced algorithms for BTB. Second case: mixed predictor. In this case there is a BHT (Branch History Table) made of k shift registers, one for each of the k branches considered (each branch is selected by its address), each of n bits, while there is only one PHT of 2^n slots, m bits each: different branches keep separate histories but point into the same PHT.

54 Advanced algorithms for BTB. N.B.: registers related to different branches can point to the same PHT slot. In this case too there is a lack of consistency: while the history of each branch is different, the originating pattern is the same. How many FFs are used? k x n + (2^n x m).
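
For instance (numbers chosen only for illustration): with k = 1024 tracked branches, n = 8 and m = 2, the mixed predictor of this slide costs 1024 x 8 + 2^8 x 2 = 8704 flip-flops, while the homogeneous predictor of the next slide would cost 1024 x 8 + 2^8 x 2 x 1024 = 532480 flip-flops: the per-branch PHTs dominate the cost.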

55 Advanced algorithms for BTB. Third case: homogeneous predictor, a (complex) refinement of the second case: a separate PHT (2^n slots of m bits) for each of the k branches. How many FFs are required? k x n + (2^n x m x k).

