
1 Module IV Memory Organization

2 Paging Hardware Support
The problem with paging is that the extra memory references needed to access the translation tables can slow down a program. The solution is a special, small, fast lookup hardware cache called the translation look-aside buffer (TLB). It stores a few of the translation table entries and performs the mapping from virtual address to physical address. On each memory reference, first ask the TLB if it knows about the page. If so, the reference proceeds quickly. If the TLB has no entry for the page, the hardware must go through the page and segment tables to get the translation, and it places that translation in the TLB for the next reference.


4 TLB

5 Cache Memory

6 Cache Organization Cache: a special type of high-speed SRAM used to speed up accesses to memory and to reduce traffic on the processor's buses. There are two types: internal and external. Internal (on-chip) cache, also known as primary cache, is located inside the CPU chip. External cache, also known as secondary cache, is located on the motherboard outside the CPU; this is the cache referred to in PC specifications. When an instruction or data item is required, the on-chip cache is searched first, then the external cache.

7 Cache Organization

8 L1,L2,L3 Cache L1 cache: built directly into the processor chip.
Its capacity ranges from 8 KB to 128 KB. Common sizes for personal computers are 32 KB or 64 KB.

9 L1,L2,L3 Cache L2 cache: slightly slower than L1 cache.
It has a larger capacity, ranging from 64 KB to 16 MB. Current processors include advanced transfer cache (ATC), an L2 cache built directly on the processor chip. Capacities range from 512 KB to 12 MB for PCs; servers and workstations have from 12 MB to 16 MB of ATC.

10 L1,L2,L3 Cache L3 cache: a cache on the motherboard, separate from the processor chip. Personal computers often have up to 8 MB of L3 cache; servers and workstations have from 8 MB to 24 MB of L3 cache.

11 L1,L2,L3 Cache

12 How does this activity increase speed?
Consider a system with internal cache access time = 10 ns and main memory access time = 70 ns. Time for a hit = 10 ns. Time for a miss = 10 ns + 70 ns = 80 ns. Hit ratio: the percentage of hits out of total cache accesses.

13 How does this activity increase speed?
Thus the hit ratio affects the average access time. The average access time is given by the following equation: Tacc = H × Tcache + (1 − H) × (Tcache + Tram). If hit ratio H = 0.9, Tcache = 10 ns, and Tcache + Tram = 80 ns, then the average access time Tacc = 0.9 × 10 + 0.1 × 80 = 17 ns.
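The calculation above can be sketched in a few lines of Python, using the timings assumed on this slide (10 ns for a hit, 80 ns for a miss); the hit ratio is expressed as access counts to keep the arithmetic exact:

```python
# Average access time: Tacc = H * Tcache + (1 - H) * (Tcache + Tram).
# Timings from the slide: a hit costs 10 ns, a miss costs 10 + 70 = 80 ns.

def average_access_time(hits, misses, t_hit, t_miss):
    """Weighted average of hit and miss times over all accesses."""
    total = hits + misses
    return (hits * t_hit + misses * t_miss) / total

# 90 hits and 10 misses out of 100 accesses, i.e. a 0.9 hit ratio.
print(average_access_time(90, 10, 10, 80))  # 17.0 ns, as on the slide
```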

14 How does this activity increase speed?
The hit ratio is governed by many factors, such as: the size of the program; the type and amount of data used by the program; and the addressing activity during execution.

15 How does this activity increase speed?
Two characteristics of a running program improve performance with a cache: When we access a memory location, there is a good chance we will access it again (temporal locality). When we access one location, there is a good chance we will also access the next location (spatial locality). Together, this behaviour is called locality of reference.
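As a rough illustration (not from the slides), the effect of locality can be seen with a toy cache model that fetches 16-byte lines and never evicts; the line size and the sequential address stream are assumptions made for this sketch:

```python
# Toy cache model: each miss fetches a whole 16-byte line, so nearby
# addresses (spatial locality) and repeated addresses (temporal
# locality) both hit. Eviction is ignored to keep the sketch short.
LINE_SIZE = 16

def hit_ratio(addresses):
    cached_lines, hits = set(), 0
    for addr in addresses:
        line = addr // LINE_SIZE
        if line in cached_lines:
            hits += 1                  # temporal or spatial hit
        else:
            cached_lines.add(line)     # miss: fetch the whole line
    return hits / len(addresses)

# Sequential accesses (e.g. walking an array) hit on 15 of every 16:
print(hit_ratio(list(range(1000))))    # 0.937
```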

16 How does this activity increase speed?
Instruction cache access: consider the following loop of instructions: MOV CX,1000 SUB AX,AX NEXT: ADD AX,[SI] MOV [SI],AX INC SI LOOP NEXT If the cache is empty, the first pass will fill the cache (misses) and the next 999 passes will generate hits for each instruction fetch.
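Counting the fetches for this loop makes the point concrete (a back-of-the-envelope sketch, assuming the slide's cold-cache scenario in which each instruction misses exactly once and hits on every later fetch):

```python
# The two setup instructions (MOV CX,1000 and SUB AX,AX) run once; the
# four-instruction loop body (ADD, MOV, INC, LOOP) runs 1000 times.
# With a cold cache, each of the six instructions misses on its first
# fetch and hits on every fetch after that.
fetches = 2 + 4 * 1000         # 4002 instruction fetches in total
misses = 6                     # one cold miss per distinct instruction
instr_hit_ratio = (fetches - misses) / fetches
print(round(instr_hit_ratio, 4))   # 0.9985: almost every fetch is a hit
```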

17 How does this activity increase speed?
When a miss occurs, the cache reads a group of locations from main memory; this group is called a line of data. So after fetching the first instruction, the rest of the loop is already in the cache (and the prefetch buffer) before we finish the first pass.

18 How does this activity increase speed?
Data cache access: the loop example also contains accesses to data operands: ADD AX,[SI] and MOV [SI],AX. In this case a line of data is read from main memory, which makes subsequent accesses to nearby data faster.

19 How does this activity increase speed?
Data writes: MOV [SI],AX. Cache access time = 10 ns; main memory access time = 70 ns. Whether a write goes to the cache or to main memory is decided by the cache write policy used by a particular system. There are two policies: writeback and writethrough.

20 How does this activity increase speed?
Writeback: results are written only to the cache. Advantage: faster writes. Disadvantage: main memory can hold out-of-date data. Writethrough: results are written to both the cache and main memory. Advantage: main memory always holds valid data. Disadvantage: requires longer write times.
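The trade-off can be made concrete by counting main-memory writes (a sketch under the simplifying assumption that the program stores many times to one location that stays in the cache until a single eviction):

```python
# Writethrough sends every store to main memory; writeback only marks
# the cached line dirty and writes it to main memory once, on eviction.

def main_memory_writes(n_stores, policy):
    if policy == "writethrough":
        return n_stores   # each store pays the main-memory write time
    if policy == "writeback":
        return 1          # a single write-back when the line is evicted
    raise ValueError(policy)

print(main_memory_writes(1000, "writethrough"))  # 1000 slow writes
print(main_memory_writes(1000, "writeback"))     # 1 write, at eviction
```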

