Revision Mid 2, Cache Prof. Sin-Min Lee Department of Computer Science.

Implementing with a D AND a T flip-flop Using this FSM with three states, and operating only on the inputs and the transitions from one state to another, we will be using both D and T flip-flops.

Implementing with a D AND a T flip-flop Since we have no state "11", our Q(t+1) is "don't care" = "XX" for both of these transitions. Consider the first column of the Q(t+1) values to be "D" and the second to be "T"; from these we derive the two corresponding charts, one for D and one for T.

Implementing with a D AND a T flip-flop Then we need to derive the corresponding equations.

Implementing with a D AND a T flip-flop We assume that Q(t) is actually the pair Q_D Q_T. Now, with these equations, we can graph the results.
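To show where the D and T values come from, here is a small C sketch that is not from the slides: the transition table below is a hypothetical stand-in (the real FSM is given only in the figures), but the excitation rules it applies are the standard ones, D = Q(t+1) for a D flip-flop and T = Q(t) XOR Q(t+1) for a T flip-flop, with Q_D taken as the high bit of the state and Q_T as the low bit.

    #include <stdio.h>

    /* Hypothetical 3-state next-state table (states 00, 01, 10; state 11 unused).
       Indexed by [current state][input x]; the real table is in the slides' figures. */
    static const int next_state[3][2] = {
        {1, 0},   /* from state 00: x=0 -> 01, x=1 -> 00 */
        {2, 1},   /* from state 01: x=0 -> 10, x=1 -> 01 */
        {0, 2},   /* from state 10: x=0 -> 00, x=1 -> 10 */
    };

    int main(void)
    {
        for (int s = 0; s < 3; s++) {
            for (int x = 0; x < 2; x++) {
                int qd = (s >> 1) & 1;      /* current Q_D (high bit of the state) */
                int qt = s & 1;             /* current Q_T (low bit of the state)  */
                int ns = next_state[s][x];
                int nd = (ns >> 1) & 1;     /* next Q_D */
                int nt = ns & 1;            /* next Q_T */

                /* Excitation rules: a D flip-flop needs D = Q(t+1);
                   a T flip-flop needs T = Q(t) XOR Q(t+1).           */
                int D = nd;
                int T = qt ^ nt;

                printf("state %d%d, x=%d -> %d%d : D=%d T=%d\n",
                       qd, qt, x, nd, nt, D, T);
            }
        }
        return 0;
    }

Tabulating D and T this way over all transitions is exactly what the two charts on the slide do; the equations are then read off the charts as Karnaugh-map simplifications.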

Implementing with a D AND a T flip-flop

Memory Hierarchy
- We can only do useful work at the top of the hierarchy.
- 90/10 rule: 90% of execution time is spent in 10% of the program.
- Take advantage of locality (see the sketch after this list):
  - temporal locality: keep recently accessed memory locations in the cache
  - spatial locality: keep memory locations near recently accessed locations in the cache
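A minimal locality sketch in C, not from the original slides: it sums the same matrix twice, once in row-major order (sequential accesses with good spatial locality) and once in column-major order (strided accesses that defeat spatial locality); on most machines the second pass is noticeably slower. The matrix size N and the timing method are arbitrary choices for illustration.

    #include <stdio.h>
    #include <time.h>

    #define N 4096

    static int a[N][N];   /* C stores this row-major */

    int main(void)
    {
        long sum = 0;
        clock_t t0, t1;

        /* Row-major pass: consecutive accesses fall in the same cache lines. */
        t0 = clock();
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += a[i][j];
        t1 = clock();
        printf("row-major:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

        /* Column-major pass: each access jumps N*sizeof(int) bytes,
           so almost every access misses in the cache.               */
        t0 = clock();
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += a[i][j];
        t1 = clock();
        printf("column-major: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

        return (int)(sum & 1);   /* use sum so the loops are not optimized away */
    }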

The connection between the CPU and the cache is very fast; the connection between the CPU and main memory is slower.

The Root of the Problem: Economics
- Fast memory is possible, but to run at full speed it needs to be located on the same chip as the CPU:
  - very expensive
  - limits the size of the memory
- Do we choose a small amount of fast memory, or a large amount of slow memory?

Memory Hierarchy Design (1)
- Microprocessor performance improved about 35% per year until 1987 and about 55% per year since then.
- The accompanying figure (not reproduced here) plots CPU performance against memory access-time improvements over the years.
- Clearly there is a processor-memory performance gap that computer architects must take care of.


The Cache Hit Ratio
- How often is a word found in the cache?
- Suppose a word is accessed k times in a short interval:
  - 1 reference to main memory
  - (k-1) references to the cache
- The cache hit ratio h is then h = (k-1)/k. (A small numeric sketch follows this list.)
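A small numeric sketch in C, not from the slides: it tabulates h = (k-1)/k for a few values of k, and also computes the resulting mean access time using the standard formula t_avg = h*t_cache + (1-h)*t_main, which the slide does not spell out. The latencies (2 ns cache, 60 ns main memory) are hypothetical.

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical timings: cache access 2 ns, main memory access 60 ns. */
        const double t_cache = 2.0, t_main = 60.0;

        for (int k = 2; k <= 64; k *= 2) {
            double h = (double)(k - 1) / k;                   /* hit ratio (k-1)/k     */
            double t_avg = h * t_cache + (1.0 - h) * t_main;  /* mean access time (ns) */
            printf("k=%2d  h=%.3f  average access time=%5.2f ns\n", k, h, t_avg);
        }
        return 0;
    }

As k grows, h approaches 1 and the average access time approaches the cache access time, which is the whole point of exploiting locality.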

Reasons why we use cache
- Cache memory is made of STATIC RAM, a transistor-based RAM with very low access times (fast). Static RAM is, however, very bulky per bit and very expensive.
- Main memory is made of DYNAMIC RAM, a capacitor-based RAM with much higher access times because it has to be constantly refreshed (slow). Dynamic RAM is much smaller per bit and cheaper.

Performance (Speed)
- Access time: the time between presenting the address and getting the valid data (for memory or other storage).
- Memory cycle time: some time may be required for the memory to "recover" before the next access; cycle time = access time + recovery time.
- Transfer rate: the rate at which data can be moved; for random-access memory, transfer rate = 1 / cycle time = (cycle time)^-1. (A tiny worked example follows this list.)
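A tiny worked example in C, using hypothetical figures (60 ns access time, 40 ns recovery time), just to illustrate the two formulas above:

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical figures for a random-access memory. */
        const double access_ns = 60.0, recovery_ns = 40.0;

        double cycle_ns = access_ns + recovery_ns;   /* cycle time = access + recovery  */
        double rate = 1.0 / (cycle_ns * 1e-9);       /* transfers per second = 1/cycle  */

        printf("cycle time    = %.0f ns\n", cycle_ns);
        printf("transfer rate = %.1f million transfers/s\n", rate / 1e6);
        return 0;
    }

With these numbers the cycle time is 100 ns, so the memory can sustain at most 10 million random accesses per second.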

Comparison of Placement Algorithms