
1 Cache Memory: Replacement Policy, Virtual Memory. Prof. Sin-Min Lee, Department of Computer Science

2 [Figure: a 2x4 decoder with inputs X, Y, Z driving a clocked JK flip-flop with output Q, together with state tables listing J, K, and the next state Q+ for each combination of X, Y, Z, and Q.] The JK flip-flop characteristic table recoverable from the slide:

    J K | Q+
    0 0 | Q    (hold)
    0 1 | 0    (reset)
    1 0 | 1    (set)
    1 1 | Q'   (toggle)

4 There are three methods in block placement:

Direct mapped: if each block has only one place it can appear in the cache, the cache is said to be direct mapped. The mapping is usually (Block address) MOD (Number of blocks in cache).

Fully associative: if a block can be placed anywhere in the cache, the cache is said to be fully associative.

Set associative: if a block can be placed in a restricted set of places in the cache, the cache is said to be set associative. A set is a group of blocks in the cache. A block is first mapped onto a set, and then the block can be placed anywhere within that set. The set is usually chosen by bit selection; that is, (Block address) MOD (Number of sets in cache). A code sketch of both mappings follows below.
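To make the two MOD mappings concrete, here is a minimal Python sketch; the cache and set counts are made-up values for illustration, not taken from the slides:

    NUM_BLOCKS = 8   # blocks in a hypothetical direct-mapped cache
    NUM_SETS = 4     # sets in a hypothetical set-associative cache

    def direct_mapped_index(block_address):
        # Each memory block can live in exactly one cache block.
        return block_address % NUM_BLOCKS

    def set_index(block_address):
        # Each memory block maps to one set, then may occupy any way in it.
        return block_address % NUM_SETS

    print(direct_mapped_index(13))  # 13 MOD 8 -> 5
    print(set_index(13))            # 13 MOD 4 -> 1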

5 A pictorial example of a cache with only 4 blocks and a memory with only 16 blocks.

6 Direct mapped cache: a block from main memory can go in exactly one place in the cache. This is called direct mapped because there is a direct mapping from any block address in memory to a single location in the cache. [Figure: cache and main memory]

7 Fully associative cache: a block from main memory can be placed in any location in the cache. This is called fully associative because a block in main memory may be associated with any entry in the cache. [Figure: cache and main memory]

8 Memory/Cache Related Terms. Set associative cache: the middle range of designs between direct mapped cache and fully associative cache is called set-associative cache. In an n-way set-associative cache, a block from main memory can go into n (n at least 2) locations in the cache. [Figure: 2-way set-associative cache and main memory]

9 Replacing Data. Initially all valid bits are set to 0. As instructions and data are fetched from memory, the cache fills and some data must be replaced. Which ones? Under direct mapping the choice is obvious: each block has exactly one slot, so the block already occupying that slot is evicted.

10 Replacement Policies for Associative Cache
1. FIFO - fills from top to bottom and wraps back to the top. (Data may have to be written back to physical memory before being replaced.)
2. LRU - replaces the least recently used data. Requires a counter.
3. Random - replaces an arbitrary entry.
A sketch of the three victim-selection rules follows below.
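A minimal sketch of the three victim-selection rules; the counter-based LRU follows the slide's note that LRU requires a counter, and all class names here are illustrative:

    import random

    class FIFOPolicy:
        def __init__(self, num_blocks):
            self.num_blocks = num_blocks
            self.next = 0                    # fill pointer, top to bottom

        def victim(self):
            v = self.next
            self.next = (self.next + 1) % self.num_blocks  # wrap to top
            return v

    class LRUPolicy:
        def __init__(self, num_blocks):
            self.last_used = [0] * num_blocks  # one counter per block
            self.clock = 0

        def touch(self, block):              # call on every access
            self.clock += 1
            self.last_used[block] = self.clock

        def victim(self):                    # block with the oldest counter
            return self.last_used.index(min(self.last_used))

    def random_victim(num_blocks):
        return random.randrange(num_blocks)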

11 Replacement in Set-Associative Cache. Which of the n ways within the set should be replaced? FIFO, Random, or LRU. [Figure: a set in which the accessed locations are D, E, A]

12 Writing Data. If the location is in the cache, the cached value and possibly the value in physical memory must be updated. If the location is not in the cache, it may or may not be loaded into the cache (write-allocate vs. write-no-allocate). Two methodologies:
1. Write-through: physical memory always contains the correct value.
2. Write-back: the value is written to physical memory only when it is removed from the cache.
A sketch of the two write paths follows below.
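A minimal sketch of the two write policies, assuming a write-allocate cache modeled as a Python dict; the names and structures are illustrative, not from the slides:

    cache = {}      # address -> value
    memory = {}     # address -> value (physical memory)
    dirty = set()   # addresses newer in cache than in memory

    def write_through(addr, value):
        cache[addr] = value    # write-allocate: place the value in the cache
        memory[addr] = value   # memory always holds the correct value

    def write_back(addr, value):
        cache[addr] = value    # memory is stale until this entry is evicted
        dirty.add(addr)

    def evict(addr):
        if addr in dirty:      # flush dirty data on removal from the cache
            memory[addr] = cache[addr]
            dirty.discard(addr)
        del cache[addr]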

13 Cache Performance. Cache hits and cache misses. The hit ratio h is the fraction of memory accesses that are served from the cache. Average memory access time: T_M = h*T_C + (1-h)*T_P, where T_C is the cache access time and T_P the physical-memory access time. In the following examples, T_C = 10 ns and T_P = 60 ns.
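A quick check of the formula with the hit ratios computed on the next slides (7/18 for the associative caches, 3/18 for the direct-mapped one):

    T_C, T_P = 10, 60    # ns, from the slide

    def t_avg(h):
        return h * T_C + (1 - h) * T_P

    print(t_avg(7 / 18))   # ~40.56 ns (slides 14 and 16)
    print(t_avg(3 / 18))   # ~51.67 ns (slide 15)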

14 Associative Cache. Access order: A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0. T_C = 10 ns, T_P = 60 ns. FIFO: h = 0.389, T_M = 40.56 ns.

15 Direct-Mapped Cache. Access order: A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0. T_C = 10 ns, T_P = 60 ns. h = 0.167, T_M = 51.67 ns.

16 2-Way Set Associative Cache. Access order: A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0. T_C = 10 ns, T_P = 60 ns. LRU: h = 0.389, T_M = 40.56 ns.

17 Associative Cache (FIFO Replacement Policy)
Access order: A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0
With 8 cache slots, the first 8 distinct blocks (A through H) fill the cache; I6 then evicts A, the oldest entry, and the final A0 and B0 evict B and C in turn.
Hits: accesses 4 (A0), 6 (B0), 9 (A0), 10 (C2), 11 (D1), 12 (B0), and 14 (C2).
Hit ratio = 7/18
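A small simulator for this slide's scenario, assuming (as the table implies) a fully associative cache of 8 blocks with FIFO replacement; it reproduces the 7/18 hit ratio:

    from collections import deque

    refs = list("ABCADBEFACDBGCHIAB")   # the slide's access order
    cache, fifo = set(), deque()        # contents and insertion order
    CAPACITY = 8

    hits = 0
    for block in refs:
        if block in cache:
            hits += 1                   # a hit does NOT update FIFO order
        else:
            if len(cache) == CAPACITY:
                cache.discard(fifo.popleft())   # evict the oldest block
            cache.add(block)
            fifo.append(block)

    print(hits, "of", len(refs))        # 7 of 18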

18 Two-way Set Associative Cache (LRU Replacement Policy)
Access order: A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0
Four sets of two blocks each; the digit after each block letter gives its set (address MOD 4), and within a set the least recently used block is evicted. A0, B0, and E4 contend for set 0; D1 and F5 share set 1; C2 and I6 share set 2; G3 and H7 share set 3.
Hits: accesses 4 (A0), 6 (B0), 10 (C2), 11 (D1), 14 (C2), 17 (A0), and 18 (B0).
Hit ratio = 7/18

19 Associative Cache with 2-byte line size (FIFO Replacement Policy)
Access order: A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0
Line pairs: A and J; B and D; C and G; E and F; I and H.
Fetching either byte of a pair brings in both, so D1, F5, G3, and I6 hit on lines loaded by their partners. With 4 line slots, the H7 miss evicts line AJ, the oldest, and the final A0 and B0 evict BD and CG in turn.
Hit ratio = 11/18

20 Direct-mapped Cache with line size of 2 bytes
Access order: A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0
Line pairs: A and J; B and D; C and G; E and F; I and H.
Lines AJ and BD map to the same cache slot, so the A and B references keep evicting one another; the other line pairs do not conflict.
Hit ratio = 7/18

21 Two-way Set Associative Cache with line size of 2 bytes
Access order: A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0
Line pairs: A and J; B and D; C and G; E and F; I and H.
Two sets of two lines each: lines AJ, BD, and EF contend for set 0, while CG and IH share set 1; within a set the least recently used line is evicted.
Hit ratio = 12/18

22 Page Replacement - FIFO
FIFO is simple to implement:
–When a page comes in, place its id at the end of the list
–Evict the page at the head of the list
Might be good? The page to be evicted has been in memory the longest time. But maybe it is still being used; we just don't know. FIFO suffers from Belady's Anomaly: the fault rate may increase when there is more physical memory! The sketch below demonstrates this.
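A minimal FIFO page-replacement sketch, run on the classic reference string that exhibits Belady's Anomaly (the string and frame counts are the standard textbook example, not from the slides):

    from collections import deque

    def fifo_faults(refs, num_frames):
        frames, order, faults = set(), deque(), 0
        for page in refs:
            if page not in frames:
                faults += 1
                if len(frames) == num_frames:
                    frames.discard(order.popleft())  # evict head of list
                frames.add(page)
                order.append(page)                   # page id on end of list
        return faults

    refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    print(fifo_faults(refs, 3))   # 9 faults
    print(fifo_faults(refs, 4))   # 10 faults: more memory, more faults!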

24 Parkinson's law: "Programs expand to fill the memory available to hold them." Idea: manage the available storage efficiently among the competing programs.

41 Before VM…
Programmers tried to shrink programs to fit tiny memories.
Result: small but inefficient algorithms.

42 Solution to Memory Constraints
Use a secondary memory such as disk.
Divide the disk into pieces that fit main memory (RAM).
This approach is called Virtual Memory.

43 Implementations of VM
Paging
–The disk is broken up into fixed-size pages
Segmentation
–The disk is broken up into variable-size segments

44 Memory Issues
Idea: separate the concepts of
–address space (disk)
–memory locations (RAM)
Example:
–Address field = 2^16, so 65,536 addressable memory cells
–Memory size = 4,096 memory cells
How can we fit the address space into main memory?

45 Paging
Break memory into pages. NOTE: normally main memory has thousands of pages. [Figure: 1 page = 4096 bytes]
New issue: how to manage addressing?

46 Address Mapping
Mapping secondary-memory addresses to main-memory addresses. [Figure: a virtual address translated to a physical address; 1 page = 4096 bytes]

47 Address Mapping
Mapping secondary-memory (program/virtual) addresses to main-memory (physical) addresses. The physical address is used by the hardware; the virtual address is used by the program. [Figure: virtual addresses 4096-8191 mapped onto physical addresses 0-4095; 1 page = 4096 bytes]

48 Paging
[Figure: virtual page 4096-8191 mapped onto physical page frame 0-4095]
Paging gives the illusion that main memory is large, contiguous, and linear, with Size(MM) = Size(2ndry M). It is transparent to the programmer.

49 Paging Implementation
Virtual address space (program) and physical address space (MM) are broken up into equal pages (just like cache & MM!!).
Page size is always a power of 2. Common sizes: 512 bytes to 64 KB.

50 Paging Implementation
–Page frames
–Page tables
–Programs use virtual addresses

51 Memory Mapping
Note: 2ndry Mem = 64K; Main Mem = 32K.
Page frame: home of VM pages in MM.
Page table: home of the mappings for VM pages.
[Table: page # to page frame # mapping]

52 Memory Mapping
Memory Management Unit (MMU): the device that performs virtual-to-physical mapping. [Figure: the MMU translates a 32-bit VM address into a 15-bit physical address]

53 Memory Management Unit
The 32-bit virtual address is broken into 2 portions:
–a 20-bit virtual page #
–a 12-bit offset in the page (since our pages are 4KB)
How to determine if the page is in MM? A present/absent bit in the page table entry. The MMU lookup is sketched below.
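A sketch of the split and the page-table lookup; the page-table contents and the fault handling here are illustrative, not the slide's hardware:

    PAGE_SIZE = 4096                  # 4KB pages -> 12 offset bits

    page_table = {                    # vpn -> (page frame #, present bit)
        0x00000: (0x1, True),         # hypothetical entries
        0x00001: (None, False),
    }

    def translate(vaddr):
        vpn = vaddr >> 12             # upper 20 bits: virtual page number
        offset = vaddr & 0xFFF        # lower 12 bits: offset in the page
        frame, present = page_table[vpn]
        if not present:
            raise LookupError("page fault: page %d is not in MM" % vpn)
        return (frame << 12) | offset

    print(hex(translate(0x00000ABC)))   # vpn 0 -> frame 1 -> 0x1abc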

55 Demand Paging
[Figure: a possible mapping of pages]
Page fault: the requested page is not in MM.
Demand paging: a page is loaded into MM only when it is demanded by the program.
But… what should be brought in for a program on start-up?

56 Working Set
The set of pages used by a process. Each process has a unique memory map, which matters for a multi-tasking OS. At time t there is a set of all pages used in the k most recent references, and references tend to cluster on a small number of pages. Put this set to work: store and load it during process switching! A small sketch follows below.
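A minimal sketch of the working-set definition above; the reference string and window size k are made up for illustration:

    def working_set(refs, t, k):
        # Pages touched by the k references up to and including time t.
        return set(refs[max(0, t - k + 1) : t + 1])

    refs = [1, 2, 1, 3, 1, 2, 4, 4, 4, 5]
    print(working_set(refs, t=5, k=4))   # {1, 2, 3}
    print(working_set(refs, t=9, k=4))   # {4, 5}: references cluster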

57 Page Replacement Policy
Working set:
–the set of pages used actively and heavily
–kept in memory to reduce page faults
The set is found and maintained dynamically by the OS. For replacement, the OS tries to predict which page would have the least impact on the running program. Common replacement schemes: Least Recently Used (LRU) and First-In-First-Out (FIFO).

58 Replacement Policy
–Which page is replaced?
–The page removed should be the page least likely to be referenced in the near future.
–Most policies predict future behavior on the basis of past behavior.

59 Basic Replacement Algorithms
Least Recently Used (LRU)
–Replaces the page that has not been referenced for the longest time.
–By the principle of locality, this should be the page least likely to be referenced in the near future.
–Each page could be tagged with the time of its last reference, but this would require a great deal of overhead. A cheaper recency-ordered structure is sketched below.
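One standard way to avoid per-page timestamps is to keep pages in recency order; a sketch using Python's OrderedDict (an illustrative implementation idea, not the slide's):

    from collections import OrderedDict

    class LRUPages:
        def __init__(self, num_frames):
            self.num_frames = num_frames
            self.frames = OrderedDict()          # least recent entry first

        def reference(self, page):
            if page in self.frames:
                self.frames.move_to_end(page)    # mark as most recent
                return "hit"
            if len(self.frames) == self.num_frames:
                self.frames.popitem(last=False)  # evict least recently used
            self.frames[page] = True
            return "fault"

    lru = LRUPages(3)
    print([lru.reference(p) for p in [1, 2, 3, 1, 4]])
    # ['fault', 'fault', 'fault', 'hit', 'fault']  (4 evicts page 2)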

63 SRAM vs. DRAM
DRAMs use only one transistor, plus a capacitor, per bit; SRAMs are made from four to six transistors (a flip-flop) per bit, so DRAMs are smaller and less expensive.
SRAMs don't require external refresh circuitry or other work in order for them to keep their data intact.
SRAM is faster than DRAM.

64 It has been discovered that for about 90% of the time that our programs execute, only 10% of our code is used! This is known as the Locality Principle.
–Temporal locality: when a program asks for a location in memory, it will likely ask for that same location again very soon thereafter.
–Spatial locality: when a program asks for a memory location at some address (let's say 1000), it will likely soon need a nearby location: 1001, 1002, 1003, 1004, etc. The sketch below shows spatial locality at work.
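A quick way to see spatial locality pay off is the traversal order of a 2-D array: row-major order touches adjacent addresses, while column-major order jumps a whole row between touches. A sketch with arbitrary sizes:

    import time

    N = 2000
    grid = [[1] * N for _ in range(N)]

    def row_major():    # touches grid[i][j+1] right after grid[i][j]
        return sum(grid[i][j] for i in range(N) for j in range(N))

    def col_major():    # jumps to a different row on every touch
        return sum(grid[i][j] for j in range(N) for i in range(N))

    for f in (row_major, col_major):
        start = time.perf_counter()
        f()
        print(f.__name__, time.perf_counter() - start, "seconds")
    # row_major is typically faster: it exploits spatial locality.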

65 The memory hierarchy (main memory and disk estimates from a Fry's ad, 10/16/2008):
–Registers: fastest possible access (usually 1 CPU cycle); < 1 ns
–Level 1 (SRAM) cache: 2-8 ns; often accessed in just a few cycles; usually tens to hundreds of kilobytes; ~$80/MB
–Level 2 (SRAM) cache: 5-12 ns; higher latency than L1 by 2x to 10x; now multi-MB; ~$80/MB
–Main memory (DRAM): 10-60 ns; may take hundreds of cycles, but can be multiple gigabytes; e.g. 2 GB for $11 ($0.0055/MB)
–Disk storage: 3,000,000-10,000,000 ns; millions of cycles of latency, but very large; e.g. 1 TB for $139 ($0.000139/MB)
–Tertiary storage: several seconds of latency, can be huge (really slow)
For a 1 GHz CPU, a 50 ns wait means 50 wasted clock cycles.

66 We established that the locality principle means only a small amount of memory is needed for most of a program's lifetime. We now have a memory hierarchy that places very fast yet expensive RAM near the CPU, and larger, slower, cheaper RAM further away. The trick is to keep the data that the CPU wants in the small, expensive, fast memory close to the CPU. And how do we do that?

67 Hardware and the Operating System are responsible for moving data throughout the Memory Hierarchy when the CPU needs it. Modern programming languages mainly assume two levels of memory, main memory and disk storage. Programmers are responsible for moving data between disk and memory through file I/O. Optimizing compilers are responsible for generating code that, when executed, will cause the hardware to use caches and registers efficiently.

68 A cache algorithm is a computer program or a hardware-maintained structure that is designed to manage a cache of information.
–When the cache is full, the algorithm must choose which items to discard to make room for the new data.
–The "hit rate" of a cache describes how often a searched-for item is actually found in the cache.
–The "latency" of a cache describes how long after requesting a desired item the cache can return that item.

69 Each replacement strategy is a compromise between hit rate and latency.
Direct Mapped Cache
–The direct mapped cache is the simplest form of cache and the easiest to check for a hit.
–Unfortunately, the direct mapped cache also has the worst hit rate, because there is only one place that any given address can be stored.
Fully Associative Cache
–The fully associative cache has the best hit ratio because any line in the cache can hold any address that needs to be cached.
–However, this cache suffers from problems involving searching the cache.
–A replacement algorithm is needed, usually some form of LRU ("least recently used") algorithm.
N-Way Set Associative Cache
–The set associative cache is a good compromise between the direct mapped and fully associative caches.

70 Virtual Memory is basically the extension of physical main memory (RAM) into a lower-cost portion of our memory hierarchy (let's say, a hard disk). A form of the overlay approach, managed by the OS and called paging, is used to swap "pages" of memory back and forth between the disk and physical RAM. Hard disks are huge, but do you remember how slow they are? Millions of times slower than the other memories in our pyramid.

