Presentation on theme: "CS 430 – Computer Architecture" — Presentation transcript:
1 CS 430 – Computer Architecture. William J. Taffe, using slides of David Patterson.
2 Outline
- Memory Hierarchy
- Direct-Mapped Cache
- Types of Cache Misses
- A (long) detailed example
- Peer-to-peer education example
- Block Size (if time permits)
3 Memory Hierarchy (1/4)
Processor
- executes programs
- runs on order of nanoseconds to picoseconds
- needs to access code and data for programs: where are these?
Disk
- HUGE capacity (virtually limitless)
- VERY slow: runs on order of milliseconds
- so how do we account for this gap?
4 Memory Hierarchy (2/4)
Memory (DRAM)
- smaller than disk (not limitless capacity)
- contains a subset of the data on disk: basically the portions of programs that are currently being run
- much faster than disk: memory accesses don't slow down the processor quite as much
Problem: memory is still too slow (hundreds of nanoseconds)
Solution: add more layers (caches)
5 Memory Hierarchy (3/4)
[Pyramid diagram: Processor at the top, then Level 1, Level 2, Level 3, ..., Level n. Higher levels are closer to the processor; moving down the pyramid means increasing distance from the processor, decreasing cost per MB, and increasing size of memory at each level.]
6 Memory Hierarchy (4/4)
If a level is closer to the Processor, it must be:
- smaller
- faster
- a subset of all lower levels (contains the most recently used data)
Each lower level contains at least all the data in the levels above it.
Lowest Level (usually disk) contains all available data.
7 Memory Hierarchy
Purpose: faster access to large memory from the processor (active).
[Diagram: Computer = Control ("brain") + Datapath ("brawn"), Memory (passive: where programs and data live when running), and Input/Output Devices: keyboard and mouse; display and printer; disk and network.]
8 Memory Hierarchy Analogy: Library (1/2)
You're writing a term paper (Processor) at a table in Doe.
Research Library is equivalent to disk:
- essentially limitless capacity
- very slow to retrieve a book
Table is memory:
- smaller capacity: means you must return a book when the table fills up
- easier and faster to find a book there once you've already retrieved it
9 Memory Hierarchy Analogy: Library (2/2)
Open books on the table are cache:
- smaller capacity: very few open books fit on the table; again, when the table fills up, you must close a book
- much, much faster to retrieve data
Illusion created: the whole library open on the tabletop.
Keep as many recently used books open on the table as possible, since they are likely to be used again.
Also keep as many books on the table as possible, since that is faster than going to the library.
10 Memory Hierarchy Basis
Disk contains everything.
When the Processor needs something, bring it into all lower levels of memory.
Cache contains copies of data in memory that are being used.
Memory contains copies of data on disk that are being used.
The entire idea is based on Temporal Locality: if we use it now, we'll want to use it again soon (a Big Idea).
11 Cache Design
How do we organize cache?
Where does each memory address map to? (Remember that cache is a subset of memory, so multiple memory addresses map to the same cache location.)
How do we know which elements are in cache?
How do we quickly locate them?
12 Direct-Mapped Cache (1/2)
In a direct-mapped cache, each memory address is associated with one possible block within the cache.
Therefore, we only need to look in a single location in the cache for the data, if it exists in the cache.
Block is the unit of transfer between cache and memory.
13 Direct-Mapped Cache (2/2)
[Diagram: a 4-byte direct-mapped cache with Cache Index 0-3, alongside Memory Addresses 0 through F.]
Cache Location 0 can be occupied by data from:
- Memory location 0, 4, 8, ...
- In general: any memory location that is a multiple of 4
Let's look at the simplest cache one can build: a direct-mapped cache that only has 4 bytes. In this direct-mapped cache with only 4 bytes, location 0 of the cache can be occupied by data from memory locations 0, 4, 8, C, and so on, while location 1 of the cache can be occupied by data from memory locations 1, 5, 9, etc. So in general, the cache location where a memory location can map is uniquely determined by the 2 least significant bits of the address (the Cache Index). For example, any memory location whose two least significant address bits are 0s can go to cache location zero. With so many memory locations to choose from, which one should we place in the cache? Of course, the one we have read or written most recently, because by the principle of temporal locality, the one we just touched is the one we are most likely to need again soon. Of all the possible memory locations that can be placed in cache location 0, how can we tell which one is in the cache?
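The mapping just described can be sketched in a couple of lines of Python (the helper name is illustrative; the 4-location cache from the slide is assumed):

```python
# In a direct-mapped cache, the cache index is the memory address
# modulo the number of cache locations. With 4 one-byte locations,
# that is just the 2 least significant bits of the address.
def cache_index(addr, num_locations=4):
    return addr % num_locations

# Memory locations 0x0, 0x4, 0x8, 0xC all map to cache location 0...
assert all(cache_index(a) == 0 for a in (0x0, 0x4, 0x8, 0xC))
# ...while 0x1, 0x5, 0x9 all map to cache location 1.
assert all(cache_index(a) == 1 for a in (0x1, 0x5, 0x9))
```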
14 Issues with Direct-Mapped
Since multiple memory addresses map to the same cache index, how do we tell which one is in there?
What if we have a block size > 1 byte?
Result: divide the memory address into three fields:

  tttttttttttttttttt iiiiiiiiii oooo
  - tag: to check if we have the correct block
  - index: to select the block
  - offset: byte within the block
15 Direct-Mapped Cache Terminology
All fields are read as unsigned integers.
Index: specifies the cache index (which "row" of the cache we should look in).
Offset: once we've found the correct block, specifies which byte within the block we want.
Tag: the remaining bits after offset and index are determined; these are used to distinguish between all the memory addresses that map to the same location.
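A rough sketch of how these three fields are pulled out of an address (the function name is made up for illustration; the widths 10 and 4 used in the example are the ones derived for the 16 KB cache on the following slides):

```python
def split_address(addr, index_bits, offset_bits):
    """Split an address into (tag, index, offset), each read as an
    unsigned integer; the tag is whatever bits remain at the top."""
    offset = addr & ((1 << offset_bits) - 1)
    index = (addr >> offset_bits) & ((1 << index_bits) - 1)
    tag = addr >> (offset_bits + index_bits)
    return tag, index, offset

# 16 KB cache with 4-word blocks: 10 index bits, 4 offset bits.
print(split_address(0x00008014, 10, 4))  # (2, 1, 4)
```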
16 Direct-Mapped Cache Example (1/3)
Suppose we have 16 KB of data in a direct-mapped cache with 4-word blocks.
Determine the size of the tag, index and offset fields if we're using a 32-bit architecture.
Offset:
- need to specify the correct byte within a block
- block contains 4 words = 4 × 4 bytes = 16 bytes
- need 4 bits to specify the correct byte
17 Direct-Mapped Cache Example (2/3)
Index: (~index into an "array of blocks")
- need to specify the correct row in the cache
- cache contains 16 KB = 2^14 bytes
- block contains 2^4 bytes (4 words)
- # rows/cache = # blocks/cache (since there's one block/row)
  = (bytes/cache) / (bytes/row)
  = 2^14 bytes/cache / 2^4 bytes/row
  = 2^10 rows/cache
- need 10 bits to specify this many rows
18 Direct-Mapped Cache Example (3/3)
Tag: use the remaining bits as the tag
- tag length = mem addr length - offset - index = 32 - 4 - 10 bits = 18 bits
- so the tag is the leftmost 18 bits of the memory address
Why not use the full 32-bit address as the tag?
- All bytes within a block share the same address apart from the offset, so the offset is redundant in the tag check (- 4 bits)
- The index must be the same for every address within a block's row, so it's redundant in the tag check too, and can be left off to save memory (- 10 bits in this example)
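The field widths worked out on these three slides can be checked with a short script (a minimal sketch; power-of-two cache and block sizes are assumed, and the function name is made up):

```python
import math

def field_widths(cache_bytes, block_bytes, addr_bits=32):
    """Return (tag, index, offset) bit widths for a direct-mapped
    cache, assuming power-of-two sizes."""
    offset_bits = int(math.log2(block_bytes))            # byte within block
    index_bits = int(math.log2(cache_bytes // block_bytes))  # one block/row
    tag_bits = addr_bits - index_bits - offset_bits      # whatever remains
    return tag_bits, index_bits, offset_bits

# 16 KB of data, 4-word (16-byte) blocks, 32-bit addresses:
print(field_widths(16 * 1024, 16))  # (18, 10, 4)
```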
19 Computers in the News: Sony Playstation 2
"Scuffles Greet PlayStation 2's Launch"
"If you're a gamer, you have to have one," said one who pre-ordered the $299 console.
Japan: 1 million on 1st day.
20 Sony Playstation 2 Details
Emotion Engine: 66 million polygons per second
- MIPS core + vector coprocessor + graphics/DRAM (128-bit data)
- I/O processor runs old games
- I/O: TV (NTSC), DVD, FireWire (400 Mbit/s), PCMCIA card, USB, modem, ...
"Trojan Horse to pump a menu of digital entertainment into homes"? PCs are temperamental, and "no one ever has to reboot a game console."
21 Accessing data in a direct-mapped cache
Ex.: 16 KB of data, direct-mapped, 4-word blocks
Read 4 addresses: 0x…, 0x…C, 0x…, 0x…
Memory values on right: only the cache/memory level of the hierarchy
[Memory table: Address (hex) vs. Value of Word; three block-sized regions hold the words a b c d, e f g h, and i j k l.]
22 Accessing data in a direct-mapped cache
4 Addresses: 0x…, 0x…C, 0x…, 0x…
The 4 addresses are divided (for convenience) into Tag, Index and Byte Offset fields.
[Table: each address broken into its Tag, Index and Offset bits.]
23 Accessing data in a direct-mapped cache
So let's go through accessing some data in this cache: 16 KB data, direct-mapped, 4-word blocks.
We will see 3 types of events:
- cache miss: nothing in cache in the appropriate block, so fetch from memory
- cache hit: cache block is valid and contains the proper address, so read the desired word
- cache miss, block replacement: wrong data is in the cache at the appropriate block, so discard it and fetch the desired data from memory
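The three event types can be illustrated with a toy simulator (a sketch, not the lecture's code; it tracks only valid bits and tags, and the example addresses are chosen by hand so that they all share cache index 1):

```python
class DirectMappedCache:
    """Toy direct-mapped cache: tracks only valid bits and tags."""
    def __init__(self, index_bits=10, offset_bits=4):
        self.index_bits = index_bits
        self.offset_bits = offset_bits
        self.lines = {}   # index -> stored tag; absent means invalid

    def access(self, addr):
        index = (addr >> self.offset_bits) & ((1 << self.index_bits) - 1)
        tag = addr >> (self.offset_bits + self.index_bits)
        if index not in self.lines:        # nothing valid in that block
            self.lines[index] = tag
            return "miss"
        if self.lines[index] == tag:       # valid and tag matches
            return "hit"
        self.lines[index] = tag            # valid but wrong tag: replace
        return "miss, block replacement"

cache = DirectMappedCache()
print(cache.access(0x0014))   # miss
print(cache.access(0x001C))   # hit (same block, different offset)
print(cache.access(0x8014))   # miss, block replacement (same index, new tag)
```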
24 16 KB Direct-Mapped Cache, 16B blocks
Valid bit: determines whether anything is stored in that row (when the computer is initially turned on, all entries are invalid).
[Cache diagram: 1024 rows (Index 0-1023), each with a Valid bit, a Tag, and four data words at byte offsets 0x0-3, 0x4-7, 0x8-b, 0xc-f.]
37 So read Cache Block 1, Data is Valid
[Cache diagram: the address is shown split into Tag field, Index field and Offset; row 1 is valid and holds the words a b c d, and row 3 is valid and holds e f g h.]
38 Cache Block 1 Tag does not match (0 != 2)
[Cache diagram: same state; row 1 still holds a b c d with stored tag 0, while the address's tag field is 2.]
39 Miss, so replace block 1 with new data & tag
[Cache diagram: row 1 is now valid with tag 2 and the words i j k l; row 3 still holds e f g h.]
40 And return word j
Address: 000000000000000010 0000000001 0100 (Tag field | Index field | Offset)
[Cache diagram: row 1 is valid with tag 2 and the words i j k l; the offset selects word j.]
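The binary fields on this slide can be decoded directly (a quick sanity check using Python's int() with base 2):

```python
# Decode the address fields from the slide:
# 18-bit tag, 10-bit index, 4-bit offset.
tag    = int("000000000000000010", 2)
index  = int("0000000001", 2)
offset = int("0100", 2)
print(tag, index, offset)   # 2 1 4
# Offset 4 falls in the 0x4-0x7 byte range of the block,
# selecting the second word of the block: "j".
```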
41 Do an example yourself. What happens?
Choose from: Cache: Hit, Miss, Miss w. replace. Values returned: a, b, c, d, e, ..., k, l
- Read address 0x… ?
- Read address 0x…c ?
[Cache state: index 1 is valid with tag 2 and the words i j k l; index 3 is valid with the words e f g h; all other rows are empty.]
42 Answers
0x… is a hit: Index = 3, Tag matches, Offset = 0, value = e
0x…c is a miss: Index = 1, Tag mismatch, so replace from memory; Offset = 0xc, value = d
Since these are reads, the values returned must equal the memory values whether or not they were cached:
0x… = e
0x…c = d
[Memory table: Address (hex) vs. Value of Word, with the words a b c d, e f g h, i j k l.]
43 Things to Remember
We would like to have the capacity of disk at the speed of the processor: unfortunately this is not feasible.
So we create a memory hierarchy:
- each level closer to the processor contains the "most used" data from the level below it
- exploits temporal locality
- do the common case fast, worry less about the exceptions (design principle of MIPS)
Locality of reference is a Big Idea.