
1 Hit or Miss ? !!!

2  Cache RAM is high-speed memory (usually SRAM).  The cache stores frequently requested data.  If the CPU needs data, it checks the high-speed cache memory first before looking in the slower main memory.  Cache memory may be three to five times faster than system DRAM.

3  Most computers have two separate memory caches: L1 cache, located on the CPU, and L2 cache, located between the CPU and DRAM.  L1 cache is faster than L2 and is the first place the CPU looks for its data. If the data is not found in the L1 cache, the search continues with the L2 cache, and then on to DRAM.

4  Shared cache: a cache shared among several processors.  In a multi-core system, the shared cache is usually overloaded with many accesses from the different cores.  Our goal is to reduce the load on the shared cache.  To achieve this goal we will build a predictor that predicts whether we are going to get a hit or a miss when we access the shared cache.

5  Small size.  Simple and fast.  Implementable in hardware.  Does not need too much power.  Does not predict a miss when we have a hit.  Has a high hit rate, especially on misses.

6  Bloom filter: a method of representing a set A = {a1, …, an} of n elements to support membership queries.  The idea is to allocate a vector v of m bits, initially all set to 0.  Choose k independent hash functions h1, …, hk, each with range 1…m.  For each element a, the bits at positions h1(a), …, hk(a) in v are set to 1.

7  Given a query for b, we check the bits at positions h1(b), h2(b), …, hk(b).  If any of them is 0, then certainly b is not in the set A.  Otherwise we conjecture that b is in the set, although there is a certain probability that we are wrong; this is called a “false positive”.  The parameters k and m should be chosen such that the probability of a false positive (and hence a false hit prediction) is acceptable.
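The insert and query operations described in the last two slides can be sketched as follows. This is a minimal illustrative sketch, not the project's hardware design; deriving the k positions from slices of a single SHA-256 digest is an assumption made here for simplicity, since the slides only require k independent hash functions:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: a vector of m bits and k hash functions."""

    def __init__(self, m, k):
        self.m = m
        self.k = k
        self.bits = [0] * m   # vector v, initially all 0

    def _positions(self, item):
        # Derive k positions from one digest (illustrative choice, see note above).
        digest = hashlib.sha256(str(item).encode()).digest()
        return [int.from_bytes(digest[2 * i:2 * i + 2], "big") % self.m
                for i in range(self.k)]

    def insert(self, item):
        # Set the bits at positions h1(item), ..., hk(item).
        for pos in self._positions(item):
            self.bits[pos] = 1

    def query(self, item):
        # False means "certainly not in the set"; True may be a false positive.
        return all(self.bits[pos] for pos in self._positions(item))
```

Note that `query` can never return `False` for an inserted element, which is exactly the "does not predict a miss when we have a hit" requirement from slide 5.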

8 Worked example: a 16-entry array with a single hash function H(x) = x % 16, starting from A = {} and Bloom = 0000000000000000.
 Insert(123): H(123) = 11, set Bloom[11] = 1.
 Insert(456): H(456) = 8, set Bloom[8] = 1.
 Insert(764): H(764) = 12, set Bloom[12] = 1.
 Insert(227): H(227) = 3, set Bloom[3] = 1.
Now A = {123, 456, 764, 227} and the array is 0001000010011000 (bits 3, 8, 11, 12 set).
 Is 227 in A? H(227) = 3, Bloom[3] = 1: I think yes, it is. Right prediction.
 Is 151 in A? H(151) = 7, Bloom[7] = 0: certainly no. Right prediction.
 Is 504 in A? H(504) = 8, Bloom[8] = 1: I think yes, it is. Oops! A false positive.
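The walk-through above can be reproduced in a few lines of Python, using the slide's own array size and hash function:

```python
# Reproducing the slide's example: 16-entry Bloom array, single hash H(x) = x % 16.
bloom = [0] * 16

def insert(x):
    bloom[x % 16] = 1

def query(x):
    return bloom[x % 16] == 1

for x in (123, 456, 764, 227):   # H values: 11, 8, 12, 3
    insert(x)

print(query(227))  # True  -> right prediction (227 is in A)
print(query(151))  # False -> H(151) = 7, Bloom[7] = 0: certainly not in A
print(query(504))  # True  -> H(504) = 8 collides with 456: a false positive
```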

9  We used a separate predictor for each set in the L2 cache. [Diagram: cache sets Set 0 … Set N, each mapped to its own Bloom array, Array 0 … Array N.]
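The per-set organisation can be sketched as an array of Bloom arrays indexed by the cache set of the address. The geometry values (256 sets, 64-byte lines) and the address decomposition are assumptions for illustration; the slides only fix the Bloom array size at 64 entries:

```python
NUM_SETS = 256      # hypothetical L2 geometry (not given in the slides)
LINE_SIZE = 64      # bytes per line, from the configuration on slide 21
ARRAY_SIZE = 64     # Bloom array entries per set, from slide 21

# One Bloom array per cache set, as the slide describes.
arrays = [[0] * ARRAY_SIZE for _ in range(NUM_SETS)]

def set_index(addr):
    # Drop the line-offset bits, then take the set-index bits.
    return (addr // LINE_SIZE) % NUM_SETS

def predict_hit(addr, hash_fn):
    # Consult only the Bloom array belonging to this address's set.
    s = set_index(addr)
    return arrays[s][hash_fn(addr)] != 0
```

Splitting the filter per set keeps each array small (each one only tracks the few lines resident in its set) and lets the predictor be indexed with the same set bits the cache itself uses.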

10 SSmall size. SSimple and fast. IImplementable with hardware. DDoes not need too much power. DDoes not predict miss if we have a hit.

11  If A is a dynamic set, and in our case it is, it is hard to update the array when removing an element e from A: we can’t simply turn off Bloom[H(e)], because to do so we must check that there is no other element e1 in A such that H(e) = H(e1), and this takes a lot of time.  If we don’t update the array, the hit rate will become low.

12  Use counters instead of binary cells, so when removing an element we simply decrement the appropriate counters.  The problem with this solution:  The size becomes large.
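The counting variant replaces each bit with a counter that is incremented on insert and decremented on remove; a query checks that every counter is non-zero. A minimal sketch (the `hash_fns` argument stands in for the k hardware hash functions and is an assumption of this example):

```python
class CountingBloom:
    """Counting Bloom filter sketch: counters instead of single bits,
    so elements can be removed by decrementing."""

    def __init__(self, m, hash_fns):
        self.counters = [0] * m
        self.hash_fns = hash_fns   # functions mapping an item to 0..m-1

    def insert(self, item):
        for h in self.hash_fns:
            self.counters[h(item)] += 1

    def remove(self, item):
        # Decrementing is now safe: other elements hashing to the same
        # position keep the counter above zero.
        for h in self.hash_fns:
            if self.counters[h(item)] > 0:
                self.counters[h(item)] -= 1

    def query(self, item):
        return all(self.counters[h(item)] > 0 for h in self.hash_fns)
```

The size problem the slide mentions is visible here: every 1-bit cell becomes a multi-bit counter, multiplying the storage cost.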

13  Note that the number of elements in each set is usually small (the cache associativity), which allows us to use limited counters, for example 2-bit counters.  In this way we get a small predictor, but we still have a problem when a counter reaches saturation, although this happens with low probability.

14  Adding an overflow flag to each Bloom array allows us to decrement a saturated counter in some cases.  Overflow flag = 1 if and only if we tried to increment a saturated counter in the corresponding array.  How does it help?  If the overflow flag is 0, we can safely decrement a saturated counter, which we were unable to do before.
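Slides 13 and 14 together describe saturating counters plus a per-array overflow flag. A sketch of that logic (the counter limit 3 comes from the "Bloom max counter = 3" configuration on slide 21; the method names are this example's, not the project's):

```python
MAX_COUNT = 3   # 2-bit saturating counter, "Bloom max counter = 3" on slide 21

class SaturatingArray:
    def __init__(self, size):
        self.counters = [0] * size
        self.overflow = False   # set once a saturated counter is incremented

    def increment(self, pos):
        if self.counters[pos] == MAX_COUNT:
            # Saturated: the true count is now unknown, remember that.
            self.overflow = True
        else:
            self.counters[pos] += 1

    def decrement(self, pos):
        # A saturated counter may only be decremented while overflow == 0:
        # then its true value really is MAX_COUNT and the count stays exact.
        if self.counters[pos] == MAX_COUNT and self.overflow:
            return False   # failed attempt: this array needs a rebuild
        if self.counters[pos] > 0:
            self.counters[pos] -= 1
        return True
```

The `False` return is the "failed attempt to decrement due to overflow" event that, after K occurrences, sends the array to the update queue on the next slide.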

15  How can we solve the problem of the non-updated arrays?  Enter the arrays that need an update into a queue, and every N cycles update one of them (similar to the way DRAM lines are refreshed).  When do we enter an array into the queue?  After K failed attempts to decrement a counter in the array due to overflow.

16  We don’t have an infinite queue in hardware, so what can we do if the queue is full and we need to enter an array into it?  We turn on a flag which indicates that the array needs an update but has not entered the queue yet, and the next time we access that array we try again to enter it into the queue.
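The bounded queue with a retry-on-access flag from slides 15 and 16 can be sketched like this. The queue depth is an assumption (the slides don't state one), and arrays are represented by ids for brevity:

```python
from collections import deque

QUEUE_CAPACITY = 8     # hypothetical hardware queue depth (not in the slides)

update_queue = deque()
pending = {}           # array id -> "needs update but not queued yet" flag

def request_update(array_id):
    if array_id in update_queue:
        return
    if len(update_queue) < QUEUE_CAPACITY:
        update_queue.append(array_id)
        pending[array_id] = False
    else:
        # Queue full: just mark the array; retry on its next access.
        pending[array_id] = True

def on_access(array_id):
    # Called on every access to the array: retry a deferred enqueue.
    if pending.get(array_id):
        request_update(array_id)

def service_one():
    # Called every N cycles: take one array out for rebuilding from its cache set.
    return update_queue.popleft() if update_queue else None
```

The pending flag costs one bit per array, so a full queue never loses an update request; it only delays it until the array is touched again.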

17  We collected all the L2 accesses from Simics for 9 benchmarks.  We implemented a simulator of the cache and the predictor in Perl.  On the command line we can choose the configuration we want by changing the following parameters:

18  Cache parameters:  Number of lines – the number of lines in the cache.  Line size – the size of each line in the cache.  Associativity – the associativity of the cache.

19  Predictor parameters:  Bloom array size – the number of entries in a Bloom array.  Bloom max counter – the counter limit for each entry.  Number of hashes – the number of hash functions the algorithm uses.

20  Predictor parameters (continued):  Bloom max not updated – the number of failed attempts to decrement a Bloom counter in a specific entry, where the attempt failed because the counter is saturated.  Enable bloom update – enable array updates.  Bloom update period – the number of L2 accesses between two updates.

21  In the following graphs we see the hit rate of the predictor versus the cache hit rate.  We configured the predictor and the cache with the following parameters:  Bloom array size = 64  Bloom max counter = 3  Associativity = 16  Line size = 64  Update period = 1

22 [Result graph]

23 [Result graph]

24 [Result graph]

25  Project goal achieved:  We saw in the above graphs that we get a high hit rate on misses; for example, the average hit rate on misses with a 16M cache is 93.5%.  What’s next?  Applying the predictor idea to other units in the computer, for example the DRAM.

26  http://pages.cs.wisc.edu/~cao/papers/summary-cache/node8.html  http://www.simmtester.com/page/memory/show_glossary.asp  http://i284.photobucket.com/albums/ll32/kwashecka/thanks.gif

27

