
CSE 241 Computer Engineering (1) Lecture #3 – Ch. 6 Memory System Design. Dr. Tamer Samy Gaafar, Dept. of Computer & Systems Engineering

Course Web Page —

Characteristics of Memory Systems 1. Location 2. Capacity 3. Unit of transfer 4. Access method 5. Performance 6. Physical type 7. Physical characteristics 8. Organization

You want it fast? It is possible to build a computer which uses only static RAM. This would be very fast and would need no cache, but it would cost a very large amount.

Design Constraints on Memory How much? Capacity: bigger is better! How fast? Time is money; ideally, keep up with the CPU. How expensive? Must be reasonable compared to the other components. There is a trade-off among the three characteristics; the solution is a memory hierarchy.

Memory Hierarchy – Diagram (Registers → L1 Cache → L2 Cache → Main memory → Disk → Optical → Tape). Going down the hierarchy: less cost per bit, higher capacity, greater access time, less frequent access by the CPU.

Locality of Reference (diagram: CPU and main memory).

Locality of Reference During the execution of a program, memory references tend to cluster (for both instructions and data), e.g. loops and subroutines, or operations on tables and arrays. Over a short period of time, the CPU works with fixed clusters of memory references. Over a long period of time, the clusters in use change. This principle can be applied across all levels of the memory hierarchy.
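A minimal C sketch of both effects (the array size and names are illustrative, not from the slides): the sequential pass gives spatial locality, while the loop variables and the loop body itself give temporal locality.

```c
#include <stdio.h>

#define N 1024                      /* illustrative array size */

int main(void)
{
    static int table[N];
    long sum = 0;

    /* Spatial locality: consecutive iterations touch adjacent addresses,
       so several elements share one cache block. */
    for (int i = 0; i < N; i++)
        table[i] = i;

    /* Temporal locality: i, sum and the loop instructions themselves are
       referenced repeatedly within a short interval. */
    for (int i = 0; i < N; i++)
        sum += table[i];

    printf("sum = %ld\n", sum);
    return 0;
}
```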

Cache Memory – Concept Small amount of fast memory. Sits between normal main memory and the CPU. May be located on the CPU chip. Not usually visible to the programmer or the CPU. Volatile; uses semiconductor technology.

Cache Memory – Operation The CPU requests the contents of a memory location. Check the cache for this data. If present → cache hit: get it from the cache (fast). Because of locality of reference, this location, or a close one, is likely to be referenced soon. If not present → cache miss: read the required block from main memory into the cache, then deliver it from the cache to the CPU. The cache includes tags to identify which block of main memory is in each cache slot.
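A minimal C sketch of the hit/miss check, assuming a hypothetical line structure with one tag per slot (the names are illustrative, not from the slides):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 4            /* bytes per block, as in the running example */

struct cache_line {             /* one cache slot with its tag */
    bool     valid;
    uint32_t tag;
    uint8_t  data[BLOCK_SIZE];
};

/* On a reference: compare the stored tag; hit -> deliver from the cache,
   miss -> (in a real cache) read the block from main memory first. */
bool is_hit(const struct cache_line *line, uint32_t tag)
{
    return line->valid && line->tag == tag;
}

int main(void)
{
    struct cache_line line = { .valid = true, .tag = 0x16 };
    printf("tag 0x16 -> %s\n", is_hit(&line, 0x16) ? "hit" : "miss");
    printf("tag 0x17 -> %s\n", is_hit(&line, 0x17) ? "hit" : "miss");
    return 0;
}
```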

Cache – Read Operation (flowchart: the hit path delivers the word from the cache; the miss path first fetches the block from main memory).

Typical Cache Organization

Cache Memory – Design 1. Mapping function 2. Replacement algorithm 3. Write policy 4. Number of caches 5. Addresses 6. Size 7. Block/line size


Example (running) Main memory: byte-addressable → each location is 1 byte long; 16 MB → length of address (s+w) = log2(16M) = 24 bits; 4-byte blocks → # of blocks (M) = 16M / 4 = 4M. Cache: 64 KB; 4-byte lines → # of lines (m) = 64K / 4 = 16K.

Direct Mapping (diagram: main memory blocks 0 … M−1 folded onto cache lines 0 … m−1). Each main memory block maps to only one cache line (line = B modulo m), i.e. if a block is in the cache, it must be in one specific place.

Direct Mapping Cache Line Table (m cache lines, 2^s main memory blocks)
Cache line   Main memory blocks assigned
0            0, m, 2m, 3m, …, 2^s − m
1            1, m+1, 2m+1, …, 2^s − m + 1
2            2, m+2, 2m+2, …, 2^s − m + 2
…            …
m−1          m−1, 2m−1, 3m−1, …, 2^s − 1

MM Address – Block Number Relationship (diagram: consecutive main memory addresses grouped four at a time into Block 0, Block 1, Block 2, Block 3, …).

MM Address – Line Number – Tag Relationship (diagram: main memory viewed as consecutive regions of 16K blocks; blocks in the first region carry Tag = 0, the next region Tag = 1, then Tag = 2, Tag = 3, …, all reusing the same 16K cache lines).

Direct Mapping The address is in two parts. The least significant w bits identify a unique word. The most significant s bits specify one memory block. The MSBs are further split into a cache line field of r bits and a tag of s−r bits (most significant).

Direct Mapping Address Structure | Tag (s−r): 8 bits | Line or slot (r): 14 bits | Word (w): 2 bits | 24-bit address: 2-bit word identifier (4-byte block); 22-bit block identifier = 14-bit slot/line + 8-bit tag (= 22 − 14). No two blocks that map to the same line have the same tag field. Check the contents of the cache by finding the line and checking its tag.

Direct Mapping Cache Organization

Direct Mapping Example Main memory size = 16 MB; cache size = 64 KB; block size = 4 bytes. Address (16339C) hex → Tag = 16 hex, Line = 0CE7 hex, Word = 0.
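A short C sketch of that decomposition, using the 8/14/2 bit split above (a hedged illustration, not taken from the slides):

```c
#include <stdint.h>
#include <stdio.h>

/* Direct-mapped split of a 24-bit address: 8-bit tag | 14-bit line | 2-bit word */
int main(void)
{
    uint32_t addr = 0x16339C;                 /* example address from the slide */
    uint32_t word = addr & 0x3;               /* lowest 2 bits  */
    uint32_t line = (addr >> 2) & 0x3FFF;     /* next 14 bits   */
    uint32_t tag  = (addr >> 16) & 0xFF;      /* top 8 bits     */

    printf("tag=%02X line=%04X word=%X\n", tag, line, word);
    /* prints: tag=16 line=0CE7 word=0 */
    return 0;
}
```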

Direct Mapping Summary Address length = (s + w) bits. Number of addressable units = 2^(s+w) words. Block size = line size = 2^w words; w = log2(# of words in a block). Number of blocks in MM = M = 2^(s+w) / 2^w = 2^s; s = log2(# of words in MM / # of words in a block). Number of lines in cache = m = 2^r; r = log2(# of words in cache / # of words in a line). Size of tag = (s − r) bits; (s − r) = log2(# of words in MM / # of words in cache).
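Applying these formulas to the running example (16 MB memory, 64 KB cache, 4-byte blocks) reproduces the 8/14/2 split; a small sketch:

```c
#include <math.h>   /* link with -lm on most toolchains */
#include <stdio.h>

int main(void)
{
    double mem   = 16.0 * 1024 * 1024;   /* main memory, bytes  */
    double cache = 64.0 * 1024;          /* cache size, bytes   */
    double block = 4.0;                  /* block = line, bytes */

    int w = (int)log2(block);            /* word field         */
    int s = (int)log2(mem / block);      /* block-number field */
    int r = (int)log2(cache / block);    /* line field         */

    printf("w=%d s=%d r=%d tag=s-r=%d\n", w, s, r, s - r);
    /* prints: w=2 s=22 r=14 tag=s-r=8 */
    return 0;
}
```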

Direct Mapping pros & cons Simple. Inexpensive. Fixed location for a given block. But if a program repeatedly accesses two blocks that map to the same line, the cache miss rate becomes very high (thrashing).

Associative Mapping A main memory block can load into any line of cache. Memory address is interpreted as tag and word. Tag uniquely identifies block of memory. Every line’s tag is examined for a match. Cache searching gets expensive.

Associative Mapping Address Structure | Tag (s): 22 bits | Word (w): 2 bits | A 22-bit tag is stored with each 32-bit block of data. Compare the address tag field with every tag entry in the cache to check for a hit. The least significant 2 bits of the address identify which 1-byte word is required from the 32-bit data block.

Fully Associative Cache Organization

Associative Mapping Example Main memory size = 16 MB; cache size = 64 KB; block size = 4 bytes. Address (16339C) hex → Tag = 058CE7 hex, Word = 0.
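The corresponding split in C, assuming the 22-bit tag / 2-bit word layout above (illustrative sketch):

```c
#include <stdint.h>
#include <stdio.h>

/* Fully associative split of a 24-bit address: 22-bit tag | 2-bit word */
int main(void)
{
    uint32_t addr = 0x16339C;              /* same example address */
    uint32_t word = addr & 0x3;            /* lowest 2 bits        */
    uint32_t tag  = addr >> 2;             /* remaining 22 bits    */

    printf("tag=%06X word=%X\n", tag, word);
    /* prints: tag=058CE7 word=0 */
    return 0;
}
```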

Associative Mapping Summary Address length = (s + w) bits. Number of addressable units = 2^(s+w) words. Block size = line size = 2^w words; w = log2(# of words in a block). Number of blocks in MM = M = 2^(s+w) / 2^w = 2^s; s = log2(# of words in MM / # of words in a block). Number of lines in cache = m = undetermined (not fixed by the address format). Size of tag = s bits.

Set-Associative Mapping The cache is divided into a number of sets (v) of equal size. Each set contains a number of lines (k) → k-way set-associative mapping. A block b can map to any line of set i if and only if i = b modulo v.

Set-Associative Mapping (diagram: main memory blocks 0 … M−1 folded onto cache sets 0 … v−1; k-way set-associative mapping).

Example (running) Main memory: byte-addressable → each location is 1 byte long; 16 MB → length of address (s+w) = log2(16M) = 24 bits; 4-byte blocks → # of blocks (M) = 16M / 4 = 4M. Cache: 64 KB; 4-byte lines → # of lines (m) = 64K / 4 = 16K; 2-line sets → k = 2 (2-way set-associative) → # of sets (v) = 16K / 2 = 8K.

Set-Associative Mapping Address Structure | Tag (s−d): 9 bits | Set (d): 13 bits | Word (w): 2 bits | Use the set field to determine which cache set to look in; compare the tag field against the tags of the lines in that set to see if we have a hit.
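A C sketch of the 9/13/2 split for the same example address (the resulting tag and set values are computed here, not given on the slides):

```c
#include <stdint.h>
#include <stdio.h>

/* 2-way set-associative split of a 24-bit address: 9-bit tag | 13-bit set | 2-bit word */
int main(void)
{
    uint32_t addr = 0x16339C;                /* same example address */
    uint32_t word = addr & 0x3;              /* lowest 2 bits  */
    uint32_t set  = (addr >> 2) & 0x1FFF;    /* next 13 bits   */
    uint32_t tag  = (addr >> 15) & 0x1FF;    /* top 9 bits     */

    printf("tag=%03X set=%04X word=%X\n", tag, set, word);
    /* prints: tag=02C set=0CE7 word=0 */
    return 0;
}
```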

k-Way Set Associative Cache

Two-Way Set-Associative Mapping – Example Main memory size = 16 MB; cache size = 64 KB; block size = 4 bytes; 2-way set-associative mapping.

Set-Associative Mapping Summary Address length = (s + w) bits. Number of addressable units = 2^(s+w) words. Block size = line size = 2^w words; w = log2(# of words in a block). Number of blocks in MM = M = 2^s; s = log2(# of words in MM / # of words in a block). Number of lines in a set = k → k-way set-associative. Number of sets = v = 2^d. Number of lines in cache = m = k × v = k × 2^d; d = log2(# of words in cache / # of words in a set). Size of tag = (s − d) bits; (s − d) = log2(# of blocks in MM / # of sets in cache).

Cache Memory – Design 1. Mapping function 2. Replacement algorithm 3. Write policy 4. Number of caches 5. Addresses 6. Size 7. Block/line size

Replacement Algorithms (1) – Direct Mapping No choice: each block maps to only one line, so replace that line.

Replacement Algorithms (2) – Associative and Set-Associative Hardware-implemented algorithms (for speed): Least Recently Used (LRU), e.g. in 2-way set-associative: which of the 2 blocks is least recently used? First In First Out (FIFO): replace the block that has been in the cache longest. Least Frequently Used (LFU): replace the block that has had the fewest hits. Random.
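For the 2-way case, LRU needs only a single bit per set; a minimal sketch, with illustrative structure and function names:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One 2-way set: a single LRU bit records which way was used least recently. */
struct set2 {
    uint32_t tag[2];
    bool     valid[2];
    int      lru;            /* index of the least recently used way */
};

/* Access a tag in the set; returns the way used, replacing the LRU way on a miss. */
int access_set(struct set2 *s, uint32_t tag)
{
    for (int way = 0; way < 2; way++) {
        if (s->valid[way] && s->tag[way] == tag) {  /* hit */
            s->lru = 1 - way;                       /* the other way is now LRU */
            return way;
        }
    }
    int victim = s->lru;                            /* miss: evict the LRU way */
    s->tag[victim] = tag;
    s->valid[victim] = true;
    s->lru = 1 - victim;
    return victim;
}

int main(void)
{
    struct set2 s = { {0, 0}, {false, false}, 0 };
    printf("tag A -> way %d\n", access_set(&s, 0xA));
    printf("tag B -> way %d\n", access_set(&s, 0xB));
    printf("tag A -> way %d\n", access_set(&s, 0xA));  /* hit in way 0        */
    printf("tag C -> way %d\n", access_set(&s, 0xC));  /* evicts B (the LRU)  */
    return 0;
}
```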

Cache Memory – Design 1. Mapping function 2. Replacement algorithm 3. Write policy 4. Number of caches 5. Addresses 6. Size 7. Block/line size

Write Policy – Multiple-CPU Organization Several CPUs share the same main memory, and each has its own cache memory (diagram: CPU 1 … CPU n, each with its own cache, all connected to main memory).

Write Policy – Constraints Multiple CPUs may have individual caches. I/O may access main memory directly. Must not overwrite a cache block unless main memory is up to date.

Write through All writes go to main memory as well as cache. Multiple CPUs can monitor main memory traffic to keep local (to CPU) cache up to date. Lots of traffic. Slows down writes.

Write back Updates are initially made in the cache only. The update (dirty) bit for the cache line is set when an update occurs. If a block is to be replaced, it is written back to main memory only if its update bit is set. Other caches can get out of sync. I/O must access main memory through the cache. N.B. about 15% of memory references are writes.
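A minimal C sketch contrasting the two policies with a per-line update (dirty) bit; structure and function names are illustrative, not from the slides:

```c
#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SIZE 4

struct line {
    bool     valid;
    bool     dirty;              /* "update bit": set when the cached copy changes */
    uint32_t tag;
    uint8_t  data[BLOCK_SIZE];
};

/* Write-through: every write also goes to main memory (mm). */
void write_through(struct line *l, uint8_t *mm, uint32_t addr, uint8_t value)
{
    l->data[addr & (BLOCK_SIZE - 1)] = value;
    mm[addr] = value;            /* keep main memory up to date immediately */
}

/* Write-back: writes stay in the cache; memory is updated only when
   a dirty line is evicted. */
void write_back(struct line *l, uint32_t addr, uint8_t value)
{
    l->data[addr & (BLOCK_SIZE - 1)] = value;
    l->dirty = true;
}

void evict(struct line *l, uint8_t *mm, uint32_t block_addr)
{
    if (l->dirty)                /* copy the block out only if it was modified */
        for (int i = 0; i < BLOCK_SIZE; i++)
            mm[block_addr + i] = l->data[i];
    l->valid = false;
    l->dirty = false;
}

int main(void)
{
    static uint8_t mm[16];                 /* toy main memory */
    struct line l = { .valid = true };

    write_through(&l, mm, 2, 0xAA);        /* memory updated at once */
    write_back(&l, 3, 0xBB);               /* memory now stale ...   */
    evict(&l, mm, 0);                      /* ... until eviction     */
    return 0;
}
```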