18-447: Computer Architecture Lecture 19: Main Memory


1 18-447: Computer Architecture Lecture 19: Main Memory
Prof. Onur Mutlu Carnegie Mellon University Spring 2012, 4/2/2012

2 Reminder: Homeworks
Homework 5 is due today.
Topics: out-of-order execution, dataflow, vector processing, memory, caches

3 Homework 4 Grades
Average: 83.14
Median: 83
Max: 105
Min: 51
Max possible points:
Total number of students: 47

4 Reminder: Lab Assignments
Implementing caches and branch prediction in a high-level timing simulator of a pipelined processor. Due April 6.
Extra credit: cache exploration and high performance with optimized caches.

5 Lab 4 Grades
Average: 665.3
Median: 695
Max: 770
Min: 230
Max possible points (w/o EC): 700
Total number of students: 46

6 Lab 4: Correct Designs and Extra Credit
Rank  Student           Crit. Path (ns)  Cycles  Relative Execution Time
1     Eric Brunstad     10.425           34568   1.00
2     Arthur Chang      10.686           34804   1.03
3     Alex Crichton     10.85            34636   1.04
4     Jason Lin         11.312           34672   1.09
5     Anish Phophaliya  10.593           37560   1.10
6     James Wahawisan   9.16             44976   1.14
7     Prerak Patel      11.315           37886   1.19
8     Greg Nazario      12.23            35696   1.21
9     Kee Young Lee     10.019                   1.25
10    Jonathan Loh      13.731           33668   1.28
11    Vikram Rajkumar   13.823           34932   1.34
12    Justin Wagner     15.065           33728   1.41
13    Daniel Jacobs     13.593           37782   1.43
14    Mike Mu           14.055           36832   1.44
15    Qiannan Zhang     13.484           38764   1.45
16    Andrew Tan        16.754           34660   1.61
17    Dennis Liang      16.722           37176   1.73
18    Dev Gurjar        12.864           57332   2.05
19    Winnie Woo        23.281           33976   2.19

7 Lab 4 Extra Credit
Rank  Student        Crit. Path (ns)  Cycles  Relative Execution Time
1     Eric Brunstad  10.425           34568   1.00
2     Arthur Chang   10.686           34804   1.03
3     Alex Crichton  10.85            34636   1.04

8 Reminder: Midterm II
Next week: April 11.
Everything covered in the course can be on the exam.
You can bring two cheat sheets (8.5x11'').

9 Review of Last Lecture
Wrapped up basic caches:
- Handling writes
- Sectored caches
- Instruction vs. data caches
- Multi-level caching issues
- Cache performance
- Multiple outstanding misses
- Multiple accesses per cycle
Started main memory:
- DRAM basics
- Interleaving
- Bank and rank concepts

10 Review: Interleaving
Interleaving (banking):
Problem: a single monolithic memory array takes long to access and does not enable multiple accesses in parallel.
Goal: reduce the latency of memory array access and enable multiple accesses in parallel.
Idea: divide the array into multiple banks that can be accessed independently (in the same cycle or in consecutive cycles).
- Each bank is smaller than the entire memory storage
- Accesses to different banks can be overlapped
Issue: how do you map data to different banks? (i.e., how do you interleave data across banks?)

11 The DRAM Subsystem

12 DRAM Subsystem Organization
Channel → DIMM → Rank → Chip → Bank → Row/Column

13 The DRAM Bank Structure

14 The DRAM Bank Structure

15 Page Mode DRAM
A DRAM bank is a 2D array of cells: rows x columns.
- A "DRAM row" is also called a "DRAM page"
- The "sense amplifiers" are also called the "row buffer"
Each address is a <row, column> pair.
Access to a "closed row":
- Activate command opens the row (places it into the row buffer)
- Read/write command reads/writes a column in the row buffer
- Precharge command closes the row and prepares the bank for the next access
Access to an "open row":
- No need for an activate command

16 DRAM Bank Operation
[Figure: a row decoder, cell array, row buffer, and column mux service a sequence of accesses. Starting from an empty row buffer, an access to (Row 0, Column 0) must activate Row 0 into the row buffer; subsequent accesses to Row 0 (Column 1, Column 85) are row-buffer HITs; an access to Row 1 is a row CONFLICT that requires closing Row 0 and activating Row 1.]
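To make the activate/read/precharge flow just described concrete, here is a minimal sketch of one bank's row-buffer state machine in Python. It is an illustration, not the lecture's code; the `DramBank` class and command strings are invented for this example.

```python
class DramBank:
    """Minimal model of one DRAM bank's row-buffer state (illustrative only)."""

    def __init__(self):
        self.open_row = None  # None = bank precharged, row buffer empty

    def access(self, row, col):
        """Return the command sequence needed to read (row, col)."""
        if self.open_row == row:
            return [f"READ col {col}"]          # row hit: column access only
        cmds = []
        if self.open_row is not None:
            cmds.append("PRECHARGE")            # row conflict: close the open row
        cmds.append(f"ACTIVATE row {row}")      # place the row into the row buffer
        cmds.append(f"READ col {col}")
        self.open_row = row
        return cmds

bank = DramBank()
print(bank.access(0, 0))   # empty bank: ACTIVATE row 0, READ col 0
print(bank.access(0, 85))  # row hit:    READ col 85
print(bank.access(1, 0))   # conflict:   PRECHARGE, ACTIVATE row 1, READ col 0
```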

17 The DRAM Chip
- Consists of multiple banks (2-16 in synchronous DRAM)
- Banks share command/address/data buses
- The chip itself has a narrow interface (4-16 bits per read)

18 128M x 8-bit DRAM Chip

19 DRAM Rank and Module
Rank: multiple chips operated together to form a wide interface.
- All chips comprising a rank are controlled at the same time: they respond to a single command, and share the address and command buses but provide different data
A DRAM module consists of one or more ranks.
- E.g., DIMM (dual inline memory module): this is what you plug into your motherboard
- If we have chips with an 8-bit interface, to read 8 bytes in a single access, use 8 chips in a DIMM

20 A 64-bit Wide DIMM (One Rank)

21 A 64-bit Wide DIMM (One Rank)
Advantages:
+ Acts like a high-capacity DRAM chip with a wide interface
+ Flexibility: the memory controller does not need to deal with individual chips
Disadvantages:
-- Granularity: accesses cannot be smaller than the interface width

22 Multiple DIMMs
Advantages:
+ Enables even higher capacity
Disadvantages:
-- Interconnect complexity and energy consumption can be high

23 DRAM Channels
- 2 independent channels: 2 memory controllers (above)
- 2 dependent/lockstep channels: 1 memory controller with a wide interface (not shown above)

24 Generalized Memory Structure

25 Generalized Memory Structure

26 The DRAM Subsystem: The Top-Down View

27 DRAM Subsystem Organization
Channel → DIMM → Rank → Chip → Bank → Row/Column

28 The DRAM Subsystem: Channels and DIMMs
[Figure: a processor connects over two memory channels, each channel to a DIMM (dual in-line memory module).]

29 Breaking Down a DIMM
[Figure: a DIMM (dual in-line memory module) shown from the side, front, and back.]

30 Breaking Down a DIMM
[Figure: the same DIMM with its ranks labeled. The front of the DIMM is Rank 0, a collection of 8 chips; the back is Rank 1.]

31 Rank
[Figure: Rank 0 (front) and Rank 1 (back) each present a 64-bit interface <0:63>. Both ranks share the memory channel's Addr/Cmd bus and Data <0:63> bus; chip-select signals CS <0:1> pick which rank responds.]

32 Breaking Down a Rank
[Figure: Rank 0 is built from Chip 0, Chip 1, ..., Chip 7. The rank's Data <0:63> bus is sliced across the chips: Chip 0 drives bits <0:7>, Chip 1 drives <8:15>, ..., Chip 7 drives <56:63>.]

33 Breaking Down a Chip
[Figure: Chip 0 contains 8 banks, each connected to the chip's 8-bit data interface <0:7>.]

34 Breaking Down a Bank
[Figure: Bank 0 is a 2D array of rows (row 0 through row 16K-1), each row a sequence of 1B columns. A row buffer, likewise organized as 1B columns, sits between the array and the chip's <0:7> data interface.]

35 DRAM Subsystem Organization
Channel → DIMM → Rank → Chip → Bank → Row/Column

36 Example: Transferring a Cache Block
[Figure: the physical memory space runs from 0x00 to 0xFFFF...F. A 64B cache block at physical address 0x40 is mapped to Channel 0, DIMM 0, Rank 0.]

37 Example: Transferring a Cache Block
[Figure: the 64B cache block at 0x40 is spread across Chips 0-7 of Rank 0, which together drive Data <0:63> (Chip 0 supplies bits <0:7>, Chip 1 bits <8:15>, ..., Chip 7 bits <56:63>).]

38 Example: Transferring a Cache Block
[Figure: each chip activates Row 0 and reads Column 0 for the block's first beat.]

39 Example: Transferring a Cache Block
[Figure: the first 8B of the block move across Data <0:63>, each chip contributing 1B of the 8B beat.]

40 Example: Transferring a Cache Block
[Figure: each chip then reads Row 0, Column 1 for the next beat.]

41 Example: Transferring a Cache Block
[Figure: the second 8B of the block move across Data <0:63>, again 1B per chip.]

42 Example: Transferring a Cache Block
[Figure: the process repeats, column by column.]
A 64B cache block takes 8 I/O cycles to transfer. During the process, 8 columns are read sequentially.
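The transfer just walked through can be sketched in a few lines of Python; the function below is illustrative only, following the figures' convention that chip i supplies data bits <8i:8i+7>.

```python
def transfer_cache_block(block: bytes) -> None:
    """Move a 64B cache block as 8 beats of 8B, one byte per chip per beat."""
    assert len(block) == 64
    for beat in range(8):                    # 8 I/O cycles; one column per beat
        word = block[beat * 8:(beat + 1) * 8]
        for chip, byte in enumerate(word):   # chip i drives bits <8i:8i+7>
            print(f"beat {beat}: chip {chip} drives byte 0x{byte:02x}")

transfer_cache_block(bytes(range(64)))
```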

43 Latency Components: Basic DRAM Operation
- CPU → controller transfer time
- Controller latency: queuing and scheduling delay at the controller; access converted to basic commands
- Controller → DRAM transfer time
- DRAM bank latency: simple CAS if the row is "open", OR RAS + CAS if the array is precharged, OR PRE + RAS + CAS (worst case)
- DRAM → CPU transfer time (through the controller)
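A small sketch of how the bank-latency component depends on the row-buffer state; the timing constants below are hypothetical placeholders, not values from the lecture.

```python
# Hypothetical latencies in ns; real values depend on the specific DRAM part.
T_CAS, T_RAS, T_PRE = 15, 15, 15

def bank_latency(open_row, target_row):
    """DRAM bank latency for the three cases listed above."""
    if open_row == target_row:            # row open: simple CAS
        return T_CAS
    if open_row is None:                  # array precharged: RAS + CAS
        return T_RAS + T_CAS
    return T_PRE + T_RAS + T_CAS          # worst case: PRE + RAS + CAS

print(bank_latency(0, 0), bank_latency(None, 0), bank_latency(1, 0))
```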

44 Multiple Banks (Interleaving) and Channels
Multiple banks enable concurrent DRAM accesses:
- Bits in the address determine which bank an address resides in
Multiple independent channels serve the same purpose:
- They are even better because they have separate data buses → increased bus bandwidth
Enabling more concurrency requires reducing bank conflicts and channel conflicts.
How to select/randomize bank/channel indices in the address?
- Lower-order bits have more entropy
- Randomizing hash functions (XOR of different address bits)

45 How Multiple Banks/Channels Help

46 Multiple Channels
Advantages:
+ Increased bandwidth
+ Multiple concurrent accesses (if independent channels)
Disadvantages:
-- Higher cost than a single channel: more board wires, and more pins (if the memory controller is on chip)

47 Address Mapping (Single Channel)
Single-channel system with an 8-byte memory bus; 2GB memory, 8 banks, 16K rows and 2K columns per bank.
Row interleaving: consecutive rows of memory in consecutive banks.
| Row (14 bits) | Bank (3 bits) | Column (11 bits) | Byte in bus (3 bits) |
Cache block interleaving: consecutive cache block addresses in consecutive banks (64-byte cache blocks).
| Row (14 bits) | High Column (8 bits) | Bank (3 bits) | Low Col. (3 bits) | Byte in bus (3 bits) |
Accesses to consecutive cache blocks can be serviced in parallel. How about random accesses? Strided accesses?

48 Bank Mapping Randomization
The DRAM controller can randomize the address mapping to banks so that bank conflicts are less likely: a 3-bit address field above the Column (11 bits) and Byte in bus (3 bits) fields is XORed with 3 other address bits to produce the Bank index (3 bits), as sketched below.
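Here is a sketch of both single-channel mappings from slide 47 together with the XOR randomization above, for the 2GB, 8-bank, 16K-row, 2K-column system. Function names are mine, and XORing the bank bits with the low row bits is one common choice, assumed here for illustration.

```python
def fields_row_interleaved(pa):
    """| Row (14) | Bank (3) | Column (11) | Byte in bus (3) |"""
    byte = pa & 0x7
    col  = (pa >> 3) & 0x7FF
    bank = (pa >> 14) & 0x7
    row  = (pa >> 17) & 0x3FFF
    return row, bank, col, byte

def bank_cache_block_interleaved(pa):
    """| Row (14) | High Col (8) | Bank (3) | Low Col (3) | Byte (3) |"""
    return (pa >> 6) & 0x7   # consecutive 64B blocks land in consecutive banks

def randomized_bank(pa):
    """XOR the bank field with 3 low row bits (one possible hash)."""
    row, bank, _, _ = fields_row_interleaved(pa)
    return bank ^ (row & 0x7)
```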

49 Address Mapping (Multiple Channels)
With multiple channels, a channel-select bit (C) is inserted into the mapping. Where are consecutive cache blocks?
Row interleaving, with C at successive field boundaries:
| Row (14 bits) | C | Bank (3 bits) | Column (11 bits) | Byte in bus (3 bits) |
| Row (14 bits) | Bank (3 bits) | C | Column (11 bits) | Byte in bus (3 bits) |
| Row (14 bits) | Bank (3 bits) | Column (11 bits) | C | Byte in bus (3 bits) |
Cache block interleaving, with C at successive field boundaries:
| C | Row (14 bits) | High Column (8 bits) | Bank (3 bits) | Low Col. (3 bits) | Byte in bus (3 bits) |
| Row (14 bits) | C | High Column (8 bits) | Bank (3 bits) | Low Col. (3 bits) | Byte in bus (3 bits) |
| Row (14 bits) | High Column (8 bits) | C | Bank (3 bits) | Low Col. (3 bits) | Byte in bus (3 bits) |
| Row (14 bits) | High Column (8 bits) | Bank (3 bits) | C | Low Col. (3 bits) | Byte in bus (3 bits) |
| Row (14 bits) | High Column (8 bits) | Bank (3 bits) | Low Col. (3 bits) | C | Byte in bus (3 bits) |
The lower C sits in the address, the more likely consecutive cache blocks fall in different channels.

50 Interaction with Virtual→Physical Mapping
The operating system influences where an address maps to in DRAM:
- The OS can control which bank a virtual page is mapped to; it can randomize Page → <Bank, Channel> mappings
- The application cannot know/determine which bank it is accessing
VA: | Virtual Page number (52 bits) | Page offset (12 bits) |
PA: | Physical Frame number (19 bits) | Page offset (12 bits) |
PA: | Row (14 bits) | Bank (3 bits) | Column (11 bits) | Byte in bus (3 bits) |

51 DRAM Refresh (I)
DRAM capacitor charge leaks over time. The memory controller needs to read each row periodically to restore the charge:
- Activate + precharge each row every N ms
- Typical N = 64 ms
Implications on performance?
-- DRAM bank unavailable while refreshed
-- Long pause times: if we refresh all rows in a burst, every 64 ms the DRAM will be unavailable until the refresh ends
Burst refresh: all rows refreshed immediately after one another.
Distributed refresh: each row refreshed at a different time, at regular intervals.
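A back-of-the-envelope sketch contrasting the two options; the row count and per-row refresh time below are assumed for illustration, not given in the lecture.

```python
N_MS = 64          # refresh window from the slide
ROWS = 8192        # assumed number of rows to refresh per window
T_ROW_US = 0.3     # assumed activate+precharge time per row (hypothetical)

burst_pause_ms = ROWS * T_ROW_US / 1000   # one long pause per 64 ms window
interval_us = N_MS * 1000 / ROWS          # distributed: one short refresh per interval

print(f"burst: ~{burst_pause_ms:.2f} ms pause every {N_MS} ms")
print(f"distributed: one row refresh every {interval_us:.1f} us")
```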

52 DRAM Refresh (II)
Distributed refresh eliminates long pause times.
How else can we reduce the effect of refresh on performance? Can we reduce the number of refreshes?

53 Effect of DRAM Refresh Liu et al., “RAIDR: Retention-Aware Intelligent DRAM Refresh,” ISCA 2012.

54 Retention Time of DRAM Cells
Observation: DRAM cells have different data retention times.
Corollary: not all rows need to be refreshed at the same frequency.

55 Reducing DRAM Refresh Operations
Idea: if we can identify the retention time of different rows, we can refresh each row at the frequency it really needs.
Implementation: the refresh controller bins the rows according to their minimum retention times and refreshes the rows in each bin at the frequency specified for that bin (e.g., one bin per retention-time range).
Observation: only very few rows need to be refreshed very frequently (every 256 ms) → have only a few bins → low HW overhead while reducing refresh frequency for most rows by 4X.
Liu et al., "RAIDR: Retention-Aware Intelligent DRAM Refresh," ISCA 2012.
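A sketch of the binning idea in Python; the bin boundaries below are assumed for illustration only (see the RAIDR paper for the actual mechanism and measured bins).

```python
def refresh_period(retention_ms, boundaries=(64, 128, 256)):
    """Refresh a row at the largest bin period that still does not
    exceed its minimum retention time (boundaries are hypothetical)."""
    period = boundaries[0]        # conservative default for the leakiest rows
    for b in boundaries:
        if retention_ms >= b:
            period = b
    return period

# A leaky row is refreshed often; a typical long-retention row 4x less often.
print(refresh_period(70), refresh_period(300))   # -> 64 256
```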

56 RAIDR Mechanism Liu et al., “RAIDR: Retention-Aware Intelligent DRAM Refresh,” ISCA 2012.

57 DRAM Controller: Purpose and Functions
- Ensure correct operation of DRAM (refresh and timing)
- Service DRAM requests while obeying the timing constraints of DRAM chips: resource conflicts (bank, bus, channel), minimum write-to-read delays
- Translate requests to DRAM command sequences
- Buffer and schedule requests to improve performance: reordering and row-buffer management
- Manage power consumption and thermals in DRAM: turn on/off DRAM chips, manage power modes

58 DRAM Controller Issues
Where to place the controller?
In the chipset:
+ More flexibility to plug different DRAM types into the system
+ Less power density in the CPU chip
On the CPU chip:
+ Reduced latency for main memory access
+ Higher bandwidth between cores and controller: more information can be communicated (e.g., a request's importance in the processing core)

59 DRAM Controller (II)

60 A Modern DRAM Controller

61 DRAM Scheduling Policies (I)
FCFS (first come, first served): oldest request first.
FR-FCFS (first ready, first come, first served):
1. Row-hit first
2. Oldest first
Goal: maximize row-buffer hit rate → maximize DRAM throughput.
Actually, scheduling is done at the command level:
- Column commands (read/write) prioritized over row commands (activate/precharge)
- Within each group, older commands prioritized over younger ones
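A minimal request-level sketch of FCFS vs. FR-FCFS; the `Request` shape is invented here, and, as noted above, real controllers actually schedule at the command level.

```python
from dataclasses import dataclass

@dataclass
class Request:
    arrival: int   # smaller = older
    row: int

def fcfs_pick(queue):
    return min(queue, key=lambda r: r.arrival)          # oldest first

def fr_fcfs_pick(queue, open_row):
    # 1. row-hit first, 2. oldest first (False sorts before True)
    return min(queue, key=lambda r: (r.row != open_row, r.arrival))

q = [Request(0, row=7), Request(1, row=3), Request(2, row=3)]
print(fcfs_pick(q))                  # Request(arrival=0, row=7)
print(fr_fcfs_pick(q, open_row=3))   # Request(arrival=1, row=3): oldest row hit
```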

62 DRAM Scheduling Policies (II)
A scheduling policy is essentially a prioritization order. Prioritization can be based on:
- Request age
- Row-buffer hit/miss status
- Request type (prefetch, read, write)
- Requestor type (load miss or store miss)
- Request criticality: is it the oldest miss in the core? How many instructions in the core depend on it?

63 Row Buffer Management Policies
Open-row policy: keep the row open after an access.
+ Next access might need the same row → row hit
-- Next access might need a different row → row conflict, wasted energy
Closed-row policy: close the row after an access (if no other requests already in the request buffer need the same row).
+ Next access might need a different row → avoid a row conflict
-- Next access might need the same row → extra activate latency
Adaptive policies: predict whether or not the next access to the bank will be to the same row.

64 Open vs. Closed Row Policies
Policy      First access  Next access                                         Commands needed for next access
Open row    Row 0         Row 0 (row hit)                                     Read
Open row    Row 0         Row 1 (row conflict)                                Precharge + Activate Row 1 + Read
Closed row  Row 0         Row 0 - access in request buffer (row hit)          Read
Closed row  Row 0         Row 0 - access not in request buffer (row closed)   Activate Row 0 + Read + Precharge
Closed row  Row 0         Row 1 (row closed)                                  Activate Row 1 + Read + Precharge
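The table can be encoded directly; a sketch (function and argument names are mine):

```python
def commands_for_next_access(policy, buffered_row, next_row, hit_was_pending=False):
    """Commands the next access needs, mirroring the table above."""
    if policy == "open":
        if next_row == buffered_row:
            return ["Read"]                                        # row hit
        return ["Precharge", f"Activate Row {next_row}", "Read"]   # row conflict
    # Closed-row policy: the row stayed open only while buffered hits were pending.
    if next_row == buffered_row and hit_was_pending:
        return ["Read"]                                            # hit before close
    return [f"Activate Row {next_row}", "Read", "Precharge"]       # row closed
```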

65 Why are DRAM Controllers Difficult to Design?
- Need to obey DRAM timing constraints for correctness; there are many (50+) timing constraints in DRAM:
  - tWTR: minimum number of cycles to wait before issuing a read command after a write command is issued
  - tRC: minimum number of cycles between the issuing of two consecutive activate commands to the same bank
- Need to keep track of many resources to prevent conflicts: channels, banks, ranks, data bus, address bus, row buffers
- Need to handle DRAM refresh
- Need to optimize for performance (in the presence of constraints): reordering is not simple; predicting the future?
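A sketch of the bookkeeping this implies, covering only the two constraints named above with hypothetical cycle counts; a real controller tracks dozens of such constraints across channels, ranks, and banks.

```python
# Hypothetical values in DRAM cycles; real parts define 50+ such constraints.
T_WTR, T_RC = 6, 39
LONG_AGO = -10**9

class TimingChecker:
    def __init__(self):
        self.last_write = LONG_AGO   # cycle of the last write command
        self.last_act = {}           # bank -> cycle of its last activate

    def earliest_issue(self, cmd, bank, now):
        """Earliest cycle >= now at which cmd may legally issue."""
        t = now
        if cmd == "read":
            t = max(t, self.last_write + T_WTR)                   # tWTR
        if cmd == "activate":
            t = max(t, self.last_act.get(bank, LONG_AGO) + T_RC)  # tRC, same bank
        return t

    def record_issue(self, cmd, bank, cycle):
        if cmd == "write":
            self.last_write = cycle
        elif cmd == "activate":
            self.last_act[bank] = cycle
```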

66 Why are DRAM Controllers Difficult to Design?
From Lee et al., “DRAM-Aware Last-Level Cache Writeback: Reducing Write-Caused Interference in Memory Systems,” HPS Technical Report, April 2010.

67 DRAM Power Management
DRAM chips have power modes. Idea: when not accessing a chip, power it down.
Power states:
- Active (highest power)
- All banks idle
- Power-down
- Self-refresh (lowest power)
State transitions incur latency during which the chip cannot be accessed.
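A sketch of the resulting power/latency trade-off as a table a controller might consult; the states follow the slide, but the power fractions and exit latencies are hypothetical placeholders.

```python
# power = fraction of active power; exit_latency = cycles before the chip is
# accessible again. All numbers are hypothetical, for illustration only.
POWER_STATES = {
    "active":         {"power": 1.00, "exit_latency": 0},
    "all_banks_idle": {"power": 0.60, "exit_latency": 0},
    "power_down":     {"power": 0.20, "exit_latency": 10},
    "self_refresh":   {"power": 0.05, "exit_latency": 500},
}

def wakeup_penalty(state):
    """Cycles a request must wait when it finds the chip in this state."""
    return POWER_STATES[state]["exit_latency"]
```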

