Module IV Memory Organization

Paging Hardware Support
The problem with paging is that the extra memory references needed to access the translation tables can slow a program down. The solution is a special, small, fast hardware lookup cache called the translation look-aside buffer (TLB). It stores a few of the translation table entries and performs the mapping from virtual address to physical address. On each memory reference, the TLB is asked first whether it knows about the page. If so, the reference proceeds fast. If the TLB has no information for the page, the page and segment tables must be walked to get the mapping, which is then placed in the TLB for the next reference.
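A minimal C sketch of that lookup flow, assuming 4 KB pages, a 16-entry TLB, and a toy single-level page table (the names and sizes are illustrative, not a real MMU):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 16
#define PAGE_SHIFT  12                 /* assumption: 4 KB pages */
#define NUM_PAGES   1024               /* toy address space */

typedef struct {
    bool     valid;
    uint32_t vpn;                      /* virtual page number   */
    uint32_t pfn;                      /* physical frame number */
} TlbEntry;

static TlbEntry tlb[TLB_ENTRIES];
static uint32_t page_table[NUM_PAGES]; /* toy table: vpn -> pfn */

/* The slow path: reading the translation table costs the extra
   memory references that the TLB exists to avoid. */
static uint32_t page_table_walk(uint32_t vpn) {
    return page_table[vpn];
}

static uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);

    /* 1. Ask the TLB first: on a hit the reference proceeds fast. */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return (tlb[i].pfn << PAGE_SHIFT) | offset;

    /* 2. Miss: walk the table, then cache the entry for next time. */
    uint32_t pfn = page_table_walk(vpn);
    tlb[vpn % TLB_ENTRIES] = (TlbEntry){ true, vpn, pfn };
    return (pfn << PAGE_SHIFT) | offset;
}

int main(void) {
    page_table[5] = 42;                /* map page 5 -> frame 42 */
    printf("0x%x\n", (unsigned)translate((5u << PAGE_SHIFT) | 0x123));
    return 0;                          /* prints 0x2a123 */
}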

TLB

Cache Memory

Cache Organization
Cache: a special type of high-speed SRAM used to speed up accesses to memory and to reduce traffic on the processor's buses. There are two types: internal and external.
Internal (on-chip) cache: also known as primary cache; it is located inside the CPU chip.
External cache: also known as secondary cache; it is located on the motherboard, outside the CPU, and is the cache usually quoted in PC specifications.
When an instruction or data item is required, the on-chip cache is searched first, then the external cache.
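A hedged C sketch of that search order (the lookup helpers are toy stand-ins; real caches compare tags in hardware, not with function calls):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Toy stand-ins for the two levels; both always miss here. */
static bool l1_lookup(uint32_t addr, uint32_t *data) { (void)addr; (void)data; return false; }
static bool l2_lookup(uint32_t addr, uint32_t *data) { (void)addr; (void)data; return false; }
static uint32_t main_memory_read(uint32_t addr)      { return addr * 2; /* dummy data */ }

/* The search order from the slide: on-chip cache first,
   then the external cache, and only then main memory. */
static uint32_t fetch(uint32_t addr) {
    uint32_t data;
    if (l1_lookup(addr, &data)) return data;  /* internal (primary)   */
    if (l2_lookup(addr, &data)) return data;  /* external (secondary) */
    return main_memory_read(addr);
}

int main(void) {
    printf("%u\n", (unsigned)fetch(100));     /* both toy caches miss -> 200 */
    return 0;
}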

Cache Organization

L1, L2, L3 Cache
L1 cache: built directly into the processor chip, with a capacity ranging from 8 KB to 128 KB. Common sizes for personal computers are 32 KB or 64 KB.

L1, L2, L3 Cache
L2 cache: slightly slower than L1 cache, with a larger capacity ranging from 64 KB to 16 MB. Current processors include an advanced transfer cache (ATC) built directly on the processor chip, with capacities from 512 KB to 12 MB for PCs; servers and workstations have from 12 MB to 16 MB of ATC.

L1, L2, L3 Cache
L3 cache: a cache on the motherboard that is separate from the processor chip. Personal computers often have up to 8 MB of L3 cache; servers and workstations have from 8 MB to 24 MB.

L1, L2, L3 Cache

How does this activity increase speed?
Consider a system with:
Internal cache access time = 10 ns
Main memory access time = 70 ns
Time for a hit = 10 ns
Time for a miss = 10 + 70 = 80 ns
Hit ratio: the percentage of cache accesses that are hits.

How does this activity increase speed?
The hit ratio therefore determines the average access time, given by:
Tacc = h × Tcache + (1 − h) × (Tcache + Tram)
With hit ratio h = 0.9, Tcache = 10 ns, and Tcache + Tram = 80 ns:
Average access time, Tacc = 0.9 × 10 + 0.1 × 80 = 17 ns
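A quick numeric check of the formula, as a minimal C sketch using the slide's example values:

#include <stdio.h>

/* Hits cost t_cache; misses cost t_cache + t_ram. */
static double avg_access_time(double hit_ratio, double t_cache, double t_ram) {
    return hit_ratio * t_cache + (1.0 - hit_ratio) * (t_cache + t_ram);
}

int main(void) {
    printf("Tacc = %.1f ns\n", avg_access_time(0.9, 10.0, 70.0));  /* 17.0 ns */
    return 0;
}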

How does this activity increase speed?
The hit ratio is governed by many factors, such as the size of the program, the type and amount of data it uses, and its addressing activity during execution.

How does this activity increase speed?
Two characteristics of a running program improve performance with a cache:
When we access a memory location, there is a good chance we will access it again soon (temporal locality).
When we access one location, there is a good chance we will access the next location as well (spatial locality).
In general, this behavior is called locality of reference.
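A small C illustration of both kinds of locality (the array and loop are illustrative):

#include <stdio.h>

int main(void) {
    int a[1000];
    long sum = 0;

    for (int i = 0; i < 1000; i++) a[i] = i;

    /* Spatial locality: a[0], a[1], a[2], ... are adjacent in memory,
       so a miss on one element pulls its neighbours in with the line.
       Temporal locality: sum and i are reused on every iteration. */
    for (int i = 0; i < 1000; i++)
        sum += a[i];

    printf("%ld\n", sum);              /* 499500 */
    return 0;
}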

How does this activity increase speed?
Instruction cache access: consider the following loop of instructions:

      MOV  CX,1000     ; loop counter = 1000
      SUB  AX,AX       ; clear AX (running sum)
NEXT: ADD  AX,[SI]     ; add the operand at [SI] to AX
      MOV  [SI],AX     ; write the result back to [SI]
      INC  SI          ; advance SI (word data would normally advance SI by 2)
      LOOP NEXT        ; decrement CX and repeat while CX != 0

If the cache is empty, the first pass fills the cache (misses) and the next 999 passes generate hits for each instruction fetch.

How does this activity increase speed?
When a miss occurs, the cache reads a group of locations from main memory; this group is called a line of data. So after fetching the first instruction, the rest of the loop is already in the cache (and in the prefetch buffer) before we finish the first pass.
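A hedged sketch of a line fill on a miss, using a toy direct-mapped cache in C (the 16-byte line and 64-line sizes are assumptions for illustration):

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define LINE_BYTES 16                  /* assumed line size */
#define NUM_LINES  64                  /* toy direct-mapped cache */

typedef struct {
    bool     valid;
    uint32_t tag;
    uint8_t  data[LINE_BYTES];
} CacheLine;

static CacheLine cache[NUM_LINES];
static uint8_t   memory[1 << 16];      /* toy 64 KB main memory */

static uint8_t read_byte(uint32_t addr) {
    uint32_t offset = addr % LINE_BYTES;
    uint32_t index  = (addr / LINE_BYTES) % NUM_LINES;
    uint32_t tag    = addr / (LINE_BYTES * NUM_LINES);
    CacheLine *line = &cache[index];

    if (!line->valid || line->tag != tag) {
        /* MISS: fetch the whole line, so neighbouring bytes
           (e.g. the rest of the loop) arrive together. */
        memcpy(line->data, &memory[addr - offset], LINE_BYTES);
        line->valid = true;
        line->tag   = tag;
    }
    return line->data[offset];
}

int main(void) {
    memory[0x1234] = 7;
    printf("%d\n", read_byte(0x1234)); /* miss: fills the whole line */
    printf("%d\n", read_byte(0x1235)); /* hit: same line, no memory access */
    return 0;
}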

How does this activity increase speed?
Data cache access: the loop example also contains accesses to data operands: ADD AX,[SI] and MOV [SI],AX. Here too, a line of data is read from main memory on the first miss, so later accesses to nearby operands hit in the cache and are much faster.

How does this activity increase speed?
Data writes: MOV [SI],AX
Cache access time = 10 ns; main memory access time = 70 ns.
Whether a write goes to the cache, to main memory, or to both depends on the write policy used by a particular system. There are two policies: write-back and write-through.

How does this activity increase speed?
Write-back: results are written only to the cache; main memory is updated later, when the modified line is replaced.
Advantage: faster writes. Disadvantage: main memory data can be out of date.
Write-through: results are written to both the cache and main memory.
Advantage: main memory always holds valid data. Disadvantage: every write pays the long main-memory write time.
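A minimal C sketch contrasting the two policies, assuming a toy one-byte "line" with a dirty bit for the write-back case:

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool     valid, dirty;
    uint32_t addr;
    uint8_t  data;
} Line;                                /* toy one-byte "line" */

static uint8_t memory[1 << 16];

/* Write-through: update the cache AND main memory on every write.
   Memory stays valid, but each store pays the slow memory access. */
static void write_through(Line *l, uint32_t addr, uint8_t val) {
    l->valid = true; l->addr = addr; l->data = val;
    memory[addr] = val;                /* slow write on every store */
}

/* Write-back: update only the cache and mark the line dirty.
   Writes are fast, but memory is stale until the line is evicted. */
static void write_back(Line *l, uint32_t addr, uint8_t val) {
    if (l->valid && l->dirty && l->addr != addr)
        memory[l->addr] = l->data;     /* flush dirty data on eviction */
    l->valid = true; l->dirty = true; l->addr = addr; l->data = val;
}

int main(void) {
    Line l = {0};
    write_through(&l, 0x10, 1);        /* memory[0x10] updated at once */
    write_back(&l, 0x20, 2);           /* memory[0x20] still stale (0) */
    write_back(&l, 0x30, 3);           /* evicts 0x20: memory[0x20] = 2 */
    return 0;
}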