COSC2410: LAB 19 INTRODUCTION TO MEMORY/CACHE DIRECT MAPPING

1 COSC2410: LAB 19 INTRODUCTION TO MEMORY/CACHE DIRECT MAPPING

2 Introduction
• What is cache?
• What is main memory?
• Extra question: Why are we currently moving towards 64-bit processors instead of 32-bit?

3 Types of Cache
Broad range of caches:
• Disk cache
• Web cache
• CPU cache
• DNS caching
• Database caching, etc.
We are only going to be dealing with CPU cache.

4 Types of Cache Mapping
• Direct
• Fully Associative
• Set Associative

5 Facts we are assuming
• RAM is divided into blocks of memory locations. Memory is grouped into 2^n-byte blocks, where n is the number of bits used to uniquely identify where a piece of data lies within a block.
• Cache is organized into lines, each containing enough space to store EXACTLY ONE block of data plus a tag uniquely identifying where the block came from (it may also include some extra bits, such as flags).

6 What this means – Main Memory
[Diagram: memory divided into Block 0, Block 1, ..., Block 2^N - 2, Block 2^N - 1]
We can see that our memory is separated into different blocks (each a different colour in the diagram). N is the number of bits used to identify a particular block. If we have N = 4, we can have a total of 2^N = 16 blocks (i.e. block number 0 to block number 15). In this example the blocks are 4 bytes each, so 4 different memory addresses belong to the same block. This means we will need 2 bits to figure out where an address is within the block.
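To make the split concrete, here is a minimal Python sketch; the 2-bit offset matches the 4-byte-block example above, and the helper name block_and_offset is just illustrative:

```python
OFFSET_BITS = 2
BLOCK_SIZE = 1 << OFFSET_BITS  # 4 bytes per block, matching the 2-bit offset above

def block_and_offset(address):
    """Return (block number, offset within the block) for a byte address."""
    return address >> OFFSET_BITS, address & (BLOCK_SIZE - 1)

# Addresses 0x00004..0x00007 all fall in block 1, at offsets 0..3.
for addr in range(0x00004, 0x00008):
    print(hex(addr), block_and_offset(addr))
```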

7 What this means – Cont’d

Block          Address    Block identification bits   Offset
Block 0        0x00000    0000 0000 0000 0000 00      00
               0x00001    0000 0000 0000 0000 00      01
               0x00002    0000 0000 0000 0000 00      10
               0x00003    0000 0000 0000 0000 00      11
Block 1        0x00004    0000 0000 0000 0000 01      00
               0x00005    0000 0000 0000 0000 01      01
               0x00006    0000 0000 0000 0000 01      10
               0x00007    0000 0000 0000 0000 01      11
...and so on, until we get to the last rows:
Block 2^N - 1  0xFFFFC    1111 1111 1111 1111 11      00
               0xFFFFD    1111 1111 1111 1111 11      01
               0xFFFFE    1111 1111 1111 1111 11      10
               0xFFFFF    1111 1111 1111 1111 11      11

8 Different Parts of an Address
• The address is a set of bits that together point to a specific memory location. Generally, it is split into 3 parts:
1. Tag – identifies the block from among all the blocks that map to the same index.
2. Index – identifies which line of the cache we write the block into.
3. Offset – identifies where in the block our memory location is.
Example 32-bit address, split as Tag | Index | Offset:
1100 1001 0101 0001 1001 0000 0110 0100
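A minimal sketch of this split in Python; the widths used here (13 tag, 14 index, 5 offset bits) are the ones worked out later in Questions 6-8, and any consistent split works the same way:

```python
TAG_BITS, INDEX_BITS, OFFSET_BITS = 13, 14, 5   # widths from Questions 6-8

def split_address(address):
    """Split a 32-bit address into (tag, index, offset)."""
    offset = address & ((1 << OFFSET_BITS) - 1)
    index = (address >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = address >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

# The example address from this slide:
print(split_address(0b11001001010100011001000001100100))
```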

9 Simplifying it
• Each block may hold more than one byte of data. That means many memory locations could be present in a single block of memory.
• For simplicity, we assume that a block is the smallest unit of memory (in the case of byte-addressable memory, 1 byte). This makes it easier to see how the cache works. (It also means we don’t require any bits for the offset, so our memory address is split into just a tag and an index.)

10 Direct Mapping
• Each memory block is assigned a specific line in the cache. If that line is already taken up by a block when a new block needs to be loaded, the old block is replaced.

11 Direct Mapping
In this case, all the red blocks will only go to the red line in the cache, all the blue blocks will go only to the blue line, and so on. If the cache contains 2^k blocks, the k least significant bits of the address are used as the index (as we are assuming there is no offset). It’s easy to find where a memory address i will go: we simply use i mod 2^k.
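A tiny sketch of this mapping rule, assuming a cache with 2^k lines and no offset bits (one-byte blocks); k = 3 is just an example value:

```python
k = 3   # assumption: an 8-line cache

def cache_line(i):
    return i % (1 << k)   # i mod 2^k

# Memory addresses 5, 13, 21 and 29 all map to line 5 of an 8-line cache.
print([cache_line(i) for i in (5, 13, 21, 29)])
```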

12 Direct Mapping – Tags and Validity
• We need a way of checking whether the cache holds a valid entry or not, so we use a flag (the valid bit). If the flag is set, the entry is valid; if it is not, the entry doesn’t exist.
• Since multiple memory blocks can be written to the same cache line, we need a way to identify where the block came from. To do this, we introduce a tag.
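A minimal sketch of a direct-mapped lookup using a valid bit and a tag; the 3-bit index (8 lines) and one-byte blocks are assumptions for illustration:

```python
INDEX_BITS = 3   # assumption: 8 cache lines

class CacheLine:
    def __init__(self):
        self.valid = False   # flag: is this entry valid?
        self.tag = None      # identifies which block the stored data came from
        self.data = None

cache = [CacheLine() for _ in range(1 << INDEX_BITS)]

def is_hit(address):
    index = address & ((1 << INDEX_BITS) - 1)
    tag = address >> INDEX_BITS
    line = cache[index]
    return line.valid and line.tag == tag   # hit only if valid and the tags match
```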

13 Tags and Validity

14 Question:
• If we increase the number of memory block locations in the previous slide from 16 to 32, but keep the cache size the same, how many bits are required for the tag?
A) The number of tag bits won’t change. It will still be 2.
B) We needed 2 bits for 16; since we are adding 16 more blocks, we need another 2 bits. So 4.
C) We just need another bit to represent the additional 16 locations, so the answer is 3.
D) We need 2k bits to represent the new number of locations, so the answer should be 5.

15 Question 2:
• Which will have more blocks: the cache or main memory?
Ans: Main memory

16 Question 3:
• Consider a byte-addressable machine with 16-bit addresses and a cache with the following characteristics:
  • It is direct-mapped
  • Each block holds exactly one byte
  • The cache index is 4 bits long
How many blocks does the cache hold? How many bits of storage are required to build the cache?

17 Question 3a:
• How many blocks does the cache hold?
• Ans: a 4-bit index -> 2^4 = 16 blocks

18 Question 3b:
• How many bits of storage are required to build the cache?
• Ans: Tag bits = 12 (16-bit address - 4-bit index)
(12 tag bits + 1 validity bit + 8 data bits) x 16 blocks = 21 bits x 16 blocks = 336 bits
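A quick check of the arithmetic for Question 3, as a small Python sketch (all values come from the slide):

```python
ADDRESS_BITS = 16
INDEX_BITS = 4
DATA_BITS = 8      # one byte of data per block
VALID_BITS = 1

blocks = 1 << INDEX_BITS                             # 2^4 = 16 cache lines
tag_bits = ADDRESS_BITS - INDEX_BITS                 # 16 - 4 = 12 (no offset: 1-byte blocks)
bits_per_line = tag_bits + VALID_BITS + DATA_BITS    # 12 + 1 + 8 = 21
print(blocks, bits_per_line, blocks * bits_per_line) # 16 21 336
```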

19 Note: Using >1 byte/block
• If we consider a block to be exactly 1 byte in size, the block is the smallest unit of memory. In that case we don’t need an offset; however, the number of bits required for the tag and index will increase.
• If we use 32-bit addressing, then: Tag bits + Index bits + Offset bits = 32. (If the block is 1 byte, the number of offset bits = 0.)
• How do we get the number of bits for the offset?
  • It is based on the block size, as the sketch below shows.
  • If a block is 4 bytes, this means 4 addresses belong to that block.
  • How many bits will you need?
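A quick way to see this in Python; the helper name offset_bits is just for illustration:

```python
import math

def offset_bits(block_size_bytes):
    """Offset bits = log2 of the block size in bytes."""
    return int(math.log2(block_size_bytes))

# A 4-byte block needs 2 offset bits; with 32-bit addresses the remaining
# 30 bits are shared between the tag and the index.
print(offset_bits(4), 32 - offset_bits(4))   # 2 30
```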

20 Question 4: Multiple bytes/block
• Suppose we have a 32-bit processor. Thus we have a main memory of 4 GB (2^32 bytes), with each byte directly addressable by a 32-bit address. If we divide our memory into 32-byte blocks, how many blocks do we have in memory? The answer can be a power of 2.
Ans: 32 = 2^5, thus the number of blocks = 2^32 / 2^5 = 2^27

21 Question 5:
• Suppose we are using the memory from Question 4. We now have a cache size of 512 KB (2^19 bytes). How many cache lines do we have? The answer can be a power of 2.
Ans: A cache line is the same size as a block. 2^19 / 2^5 = 2^14

22 Question 6:
• Following the previous question, how many bits are required to represent the index?
Ans: 14

23 Question 7:
• Following the previous question, how many memory blocks are mapped to the same position in the cache? The answer can be a power of 2.
Ans: Number of memory blocks / number of cache lines = 2^27 / 2^14 = 2^13

24 Question 8:
• Following the previous questions, how many bits are required to represent the tag?
Ans: 13

25 Question 9:
• Continuing from the previous problem, if we use a validity bit and 8 bits (1 byte) of data per byte in the block, how many bits long would each cache line be?
Ans: 13 tag bits + 1 validity bit + (8 data bits x 2^5 bytes in one block) = 270 bits
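A recap of Questions 4-9 in a small Python sketch (all parameter values come from the slides; the variable names are only illustrative):

```python
ADDRESS_BITS = 32
BLOCK_SIZE = 32            # bytes per block
CACHE_SIZE = 512 * 1024    # 512 KB

memory_blocks = (1 << ADDRESS_BITS) // BLOCK_SIZE   # Q4: 2^27
cache_lines = CACHE_SIZE // BLOCK_SIZE              # Q5: 2^14
index_bits = cache_lines.bit_length() - 1           # Q6: 14
blocks_per_line = memory_blocks // cache_lines      # Q7: 2^13
tag_bits = blocks_per_line.bit_length() - 1         # Q8: 13
line_bits = tag_bits + 1 + 8 * BLOCK_SIZE           # Q9: 13 + 1 + 256 = 270
print(memory_blocks, cache_lines, index_bits, blocks_per_line, tag_bits, line_bits)
```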

26 Spatial Locality
• What is it?
• Why is it important?
• How can we take advantage of it?

27 Taking advantage of Spatial Locality
[Diagram: the 32 byte addresses 0–31 arranged as 16 two-byte blocks (block numbers 0000–1111, offset 0 or 1), and a cache of 8 lines with indices 000–111.]

28 Taking advantage of Spatial Locality
[Diagram: the same layout; the cache line at index 000 now holds tag 1 and the data from addresses 16 and 17.]
Suppose we need to access memory location 17. We would write the entire block containing 17 to the corresponding cache line.

29 Taking advantage of Spatial Locality
[Diagram: as before, the cache line at index 000 holds tag 1 and the data from addresses 16 and 17.]
Now when we check for address 16 (10000), we know our tag is going to be 1 bit (i.e. 1), as there is 1 bit for the offset and 3 bits for the index. We can check for the tag at index location 000 (taken from the address). If the tag matches, we have a hit.

30 Taking advantage of Spatial Locality
• The important thing to notice is that each block consists of 2 different pieces of data (each one byte).
• Each individual byte still needs 5 bits to represent: 4 bits to determine which block it belongs to (and hence which cache line it writes to), and 1 bit as the offset.
• We transfer the entire block into the cache line. As we can see, the cache is split the same way as the memory. (For example, if we need to put the data at address 2 into the cache, the data from both 2 and 3 is transferred. If we need to put the data at address 7 into the cache, the data from both 6 and 7 is transferred.)
• Thus, we do not need the offset to determine which line of the cache the block is written into; the index selects the line, and the tag records which block is stored there. The entire data from all memory locations in a block is written to the cache line. (A small simulation of this follows below.)
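A small Python simulation of the scenario on slides 27–29: 5-bit addresses, 2-byte blocks, 8 cache lines (1 tag bit, 3 index bits, 1 offset bit). The dictionary layout and function names are assumptions for illustration, not part of the slides:

```python
OFFSET_BITS, INDEX_BITS = 1, 3
memory = list(range(32))   # pretend the data stored at address a is simply a

cache = [{"valid": False, "tag": None, "data": None} for _ in range(1 << INDEX_BITS)]

def access(address):
    offset = address & ((1 << OFFSET_BITS) - 1)
    index = (address >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = address >> (OFFSET_BITS + INDEX_BITS)
    line = cache[index]
    if line["valid"] and line["tag"] == tag:
        return "hit", line["data"][offset]
    # Miss: copy the whole 2-byte block into the line (the spatial-locality win).
    base = address & ~((1 << OFFSET_BITS) - 1)
    cache[index] = {"valid": True, "tag": tag, "data": memory[base:base + 2]}
    return "miss", memory[address]

print(access(17))   # miss: the block holding 16 and 17 is loaded into line 000
print(access(16))   # hit: 16 is already cached thanks to the block transfer
```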

