Introduction to Computer Systems Topics: Theme Five great realities of computer systems (continued) “The class that bytes”

– 2 – COMP 21000, S’15 Course Theme: Abstraction is good, but don’t forget reality! Courses to date emphasize abstraction: abstract data types, asymptotic analysis. These abstractions have limits, especially in the presence of bugs, so you need to understand the underlying implementations.

– 3 – COMP 21000, S’15 Course Theme: Useful outcomes. Become more effective programmers, able to find and eliminate bugs efficiently and to tune program performance. Prepare for later “systems” classes in CS: compilers, operating systems, computer networks, computer architecture.

– 4 – COMP 21000, S’15 Great Reality #3. Memory Matters: random access memory is an unphysical abstraction. Memory (RAM) is all the processor knows; it is used to store both instructions and data. Memory is not unbounded: it must be allocated and managed, and many applications are memory dominated. Memory referencing bugs are especially pernicious because their effects are distant in both time and space. Memory performance is not uniform: cache and virtual memory effects can greatly affect program performance, and adapting a program to the characteristics of the memory system can lead to major speed improvements.

– 5 – COMP 21000, S’15 Memory Referencing Bug Example

#include <stdio.h>

int main() {
    double d[1] = {3.14};
    int i;
    long int a[2];
    for (i = 0; i < 10; i++) {
        a[i] = 1073741824;  /* Possibly out of bounds; the value written was elided on the slide, 1073741824 is used here only for illustration */
        printf("fun(%d) = %f\n", i, d[0]);
    }
    return 0;
}

fun(0) = …
fun(1) = …
fun(…) = …
The program drops out of the loop because i > 10: the out-of-bounds writes to a[] land on neighboring locals such as d[0], i, or saved state (depending on the stack layout), so the printed value changes and the loop counter can jump past 10.

– 6 – COMP 21000, S’15 Memory Referencing Errors. C and C++ do not provide any memory protection: out-of-bounds array references, invalid pointer values, and abuses of malloc/free can all lead to nasty bugs. Whether or not a bug has any effect depends on the system and compiler. The result is action at a distance: the corrupted object is logically unrelated to the one being accessed, and the effect of the bug may first be observed long after it is generated. How can I deal with this? Program in Java, Python, or C# (but not C++); understand what possible interactions may occur; use or develop tools to detect referencing errors, as in the sketch below.
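A minimal sketch of such a bug (hypothetical code, not from the slides), assuming a typical heap allocator that places the two allocations near each other; whether anything visibly breaks is exactly the kind of system- and compiler-dependent behavior described above. Building with gcc or clang's -fsanitize=address, or running under Valgrind, will normally report the bad write at the point where it happens.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* Two logically unrelated heap objects. */
    char *name  = malloc(8);               /* room for 7 characters plus '\0' */
    int  *count = malloc(sizeof(int));
    *count = 42;

    /* Out-of-bounds write: the string needs 13 bytes but only 8 were allocated.
       Depending on the allocator, the overflow may silently corrupt *count,
       allocator metadata, or nothing observable at all: action at a distance. */
    strcpy(name, "Hello, world");

    printf("count = %d\n", *count);        /* may no longer print 42 */
    free(name);
    free(count);
    return 0;
}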

– 7 – COMP 21000, S’15 Great Reality #3 (cont). Memory Matters: memory forms a hierarchy; smaller memory is faster, larger memory is cheaper. L0: the register file is part of the processor and is the fastest memory. L1, L2: caches are small, fast memories either on-chip or on a separate chip. L3: main memory is currently RAM. L4: local disks (HD, CD, etc.). L5: remote secondary storage (distributed file systems, web servers). Example: a typical disk drive is 100 times larger than RAM, but it takes a typical disk drive 10,000,000 times longer to read a word than RAM.

– 8 – COMP 21000, S’15 Memory Hierarchy. From smaller, faster, and costlier (per byte) storage devices at the top to larger, slower, and cheaper (per byte) storage devices at the bottom:
L0: Registers. CPU registers hold words retrieved from cache memory.
L1: On-chip L1 cache (SRAM). The L1 cache holds cache lines retrieved from the L2 cache.
L2: Off-chip L2 cache (SRAM). The L2 cache holds cache lines retrieved from main memory.
L3: Main memory (DRAM). Main memory holds disk blocks retrieved from local disks.
L4: Local secondary storage (local disks). Local disks hold files retrieved from disks on remote network servers.
L5: Remote secondary storage (distributed file systems, Web servers).

– 9 – COMP 21000, S’15 Example: cache memory. Cache Matters: a small, fast memory can make a BIG difference in performance. Processor speed is increasing faster than memory speed, and over the years this gap has widened. As we've seen, larger devices are slower and faster devices are more expensive, so each level stores the most heavily used data from the level below; this type of storage is called a cache for the level below. We can take advantage of this physical reality to make programs faster by an order of magnitude. Chapter 6 discusses this in detail.

– 10 – COMP 21000, S’15 Cache. An L1 cache holds tens of thousands of bytes and is nearly as fast as the register file. An L2 cache holds hundreds of thousands to millions of bytes and is connected to the CPU via a special bus; it takes about 5 times longer to access the L2 cache than the L1 cache, and 5 to 10 times longer to access RAM than the L2 cache. (Figure: CPU chip containing the register file, ALU, L1 cache (SRAM), and bus interface; an off-chip L2 cache (SRAM) on the cache bus; and main memory (DRAM) reached through the memory bridge via the system bus and memory bus.)
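A rough way to see these latency steps on one's own machine is to time repeated passes over arrays of increasing size. The sketch below is a simplified, hypothetical microbenchmark (not from the course materials): the chosen working sets are meant to fit in L1, in L2, and only in main memory, so the time per element should jump at each boundary, although sequential access, hardware prefetching, and compiler optimizations can flatten the differences considerably.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Time sequential passes over a working set of 'bytes' bytes; return ns per element. */
static double ns_per_element(size_t bytes) {
    size_t n = bytes / sizeof(long);
    long *data = malloc(n * sizeof(long));
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        data[i] = (long)i;                       /* touch every element once */

    long passes = (long)((1L << 30) / bytes);    /* about 1 GB of reads per size */
    if (passes < 1) passes = 1;

    clock_t start = clock();
    for (long p = 0; p < passes; p++)
        for (size_t i = 0; i < n; i++)
            sum += data[i];
    clock_t end = clock();

    volatile long sink = sum;                    /* keep the compiler from deleting the loop */
    (void)sink;
    free(data);
    return (double)(end - start) * 1e9 / CLOCKS_PER_SEC / ((double)n * passes);
}

int main(void) {
    /* Sizes are guesses for typical L1/L2/RAM boundaries; adjust for your machine. */
    size_t sizes[] = {16 * 1024, 256 * 1024, 64 * 1024 * 1024};
    for (int i = 0; i < 3; i++)
        printf("%9zu bytes: %.2f ns/element\n", sizes[i], ns_per_element(sizes[i]));
    return 0;
}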

– 11 – COMP 21000, S’15 Memory System Performance Example. With a hierarchical memory organization, performance depends on access patterns, including how you step through a multi-dimensional array.

void copyji(int src[2048][2048], int dst[2048][2048])
{
    int i, j;
    for (j = 0; j < 2048; j++)
        for (i = 0; i < 2048; i++)
            dst[i][j] = src[i][j];
}

void copyij(int src[2048][2048], int dst[2048][2048])
{
    int i, j;
    for (i = 0; i < 2048; i++)
        for (j = 0; j < 2048; j++)
            dst[i][j] = src[i][j];
}

copyij: 59,393,288 clock cycles. copyji: 1,277,877,876 clock cycles, 21.5 times slower! (Measured on a 2 GHz Intel Pentium 4.)
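A sketch of how one might reproduce this comparison on a current machine (a hypothetical harness, not part of the slides): static global arrays keep the setup simple, and clock() reports CPU time rather than raw cycle counts, so the ratio is what should roughly carry over, not the absolute numbers.

#include <stdio.h>
#include <time.h>

#define N 2048
static int src[N][N], dst[N][N];

static void copyij(void) {               /* row-wise: stride-1, cache friendly */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            dst[i][j] = src[i][j];
}

static void copyji(void) {               /* column-wise: stride-N, cache hostile */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            dst[i][j] = src[i][j];
}

static double seconds(void (*f)(void)) {
    clock_t start = clock();
    f();
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void) {
    printf("copyij: %.3f s\n", seconds(copyij));
    printf("copyji: %.3f s\n", seconds(copyji));
    return 0;
}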

– 12 – COMP 21000, S’15 The Memory Mountain. (Figure: read throughput in MB/s as a function of stride in words and working set size in bytes, measured on a 550 MHz Pentium III Xeon with a 16 KB on-chip L1 d-cache, a 16 KB on-chip L1 i-cache, and a 512 KB off-chip unified L2 cache. The surface has distinct ridges for working sets that fit in L1, in L2, and only in main memory; the copyij and copyji access patterns from the previous slide land on very different parts of the mountain.)
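The mountain is generated by timing a small read kernel over many (working set size, stride) pairs; a simplified sketch, adapted from the CS:APP test() function (the full benchmark also warms the cache and converts elapsed cycles into MB/s):

#define MAXELEMS (1 << 24)            /* 16M longs (128 MB): the largest working set */
long data[MAXELEMS];

/* Read every stride-th element of the first elems elements of data[]. */
int test(int elems, int stride) {
    int result = 0;
    volatile int sink;
    for (int i = 0; i < elems; i += stride)
        result += data[i];
    sink = result;                    /* keep the compiler from removing the loop */
    return sink;
}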

– 13 – COMP 21000, S’15 Memory Performance Example: implementations of matrix multiplication. There are multiple ways to nest the loops; two of them are shown here, and a third ordering (kij) is sketched below.

/* ijk */
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

/* jik */
for (j = 0; j < n; j++) {
    for (i = 0; i < n; i++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}
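The next slide compares the jki, kij, and kji orderings as well. For reference, here is a sketch of the kij ordering in the same style (assuming, as above, n-by-n double matrices a, b, c, with c zeroed beforehand); moving k to the outer loop makes the innermost loop sweep rows of b and c with stride 1, which is why kij tends to perform best.

/* kij: accumulate a[i][k] times row k of b into row i of c.
   Assumes c[][] has been initialized to zero.
   Inner loop touches b[k][j] and c[i][j] with stride 1 (good spatial locality). */
for (k = 0; k < n; k++) {
    for (i = 0; i < n; i++) {
        double r = a[i][k];
        for (j = 0; j < n; j++)
            c[i][j] += r * b[k][j];
    }
}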

– 14 – COMP 21000, S’15 Matmult Performance (Alpha 21164). (Figure: throughput versus matrix size n for the jki, kij, and kji loop orderings. Performance falls off once the matrices become too big for the L1 cache, and again once they are too big for the L2 cache.)

– 15 – COMP 21000, S’15 Great Reality #4. There’s more to performance than asymptotic complexity: constant factors matter too! You can easily see a 10:1 performance range depending on how the code is written, so you must optimize at multiple levels: algorithm, data representations, procedures, and loops.

– 16 – COMP 21000, S’15 Great Reality #4 (cont). There’s more to performance than asymptotic complexity: you must understand the system to optimize performance. That means knowing how programs are compiled and executed, how to measure program performance and identify bottlenecks, and how to improve performance without destroying code modularity and generality.
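One classic loop-level bottleneck, sketched here in the CS:APP style (hypothetical function names, not code from this deck): calling strlen inside the loop condition makes lowercasing a string quadratic, because strlen rescans the whole string on every iteration, while hoisting the call out of the loop restores linear time without changing what the function does.

#include <ctype.h>
#include <string.h>

/* Quadratic: strlen(s) is re-evaluated on every iteration. */
void lower_slow(char *s) {
    for (size_t i = 0; i < strlen(s); i++)
        s[i] = (char)tolower((unsigned char)s[i]);
}

/* Linear: the length is computed once, outside the loop. */
void lower_fast(char *s) {
    size_t len = strlen(s);
    for (size_t i = 0; i < len; i++)
        s[i] = (char)tolower((unsigned char)s[i]);
}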