SDRAM Memory Controller


CS 150 - Spring 2007 – Lec #10: Memory Controller

Outline:
- Static RAM technology: the 6T memory cell; memory access timing
- Dynamic RAM technology: the 1T memory cell

Tri-State Gates

A tri-state gate has a data input IN, an output OUT, and an active-low output enable OE_L. When OE_L is asserted (0), the gate drives OUT to the value of IN; when OE_L is deasserted (1), OUT floats at high impedance (Z) regardless of IN:

  OE_L  In | Out
   0    0  |  0
   0    1  |  1
   1    X  |  Z

(The original slide shows the gate symbol and equivalent switch-level circuits.)
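
The gate's behavior can be sketched as a tiny Python model (the function name and the string "Z" for high impedance are our own conventions, not from the slides):

```python
def tristate(oe_l: int, in_val: int):
    """Tri-state buffer with active-low output enable.

    OE_L asserted (0): the output follows the input.
    OE_L deasserted (1): the output floats at high impedance ("Z"),
    and the input is a don't-care.
    """
    return in_val if oe_l == 0 else "Z"
```

Because a disabled gate drives nothing, many such gates can safely share a single wire as long as at most one is enabled at a time.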

Slick Multiplexer Implementation

A 2-to-1 multiplexer can be built from two tri-state gates driving a shared output wire: the select input S enables the driver for IN1 and disables the driver for IN0, or vice versa, so exactly one of the two inputs is connected to OUT at any time. (The original slide shows the two-gate circuit.)
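
As a sanity check of the idea, here is a hedged Python sketch: two tri-state drivers share one wire, and the select line enables exactly one of them (the `resolve` helper and the names are illustrative, not from the slides):

```python
def tristate(enable_l, val):
    """Tri-state driver: drives val when enable_l is 0, else floats."""
    return val if enable_l == 0 else "Z"

def resolve(*drivers):
    """Model a shared wire: at most one driver may be active at a time."""
    active = [d for d in drivers if d != "Z"]
    if len(active) > 1:
        raise RuntimeError("bus contention")
    return active[0] if active else "Z"

def mux2(s, in0, in1):
    """2-to-1 mux from two tri-state gates on a shared output wire."""
    return resolve(
        tristate(0 if s == 0 else 1, in0),  # enabled when S = 0
        tristate(0 if s == 1 else 1, in1),  # enabled when S = 1
    )
```

The `resolve` check also illustrates why the enables must be mutually exclusive: if both drivers were on at once, real hardware would suffer bus contention.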

Basic Memory Subsystem Block Diagram

n address bits feed an address decoder that drives 2^n word lines. Each word line selects one row of memory cells, and the cells in the selected row connect to m bit lines. What happens if n and/or m is very large?
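
The decoder's job, and the slide's closing question, can be made concrete in a short Python sketch (names are ours): with n address bits the decoder must drive 2^n word lines, which grows exponentially and motivates smarter organizations than one flat decoder.

```python
def word_lines(addr: int, n: int):
    """One-hot address decoder: n address bits drive 2**n word lines."""
    assert 0 <= addr < 2 ** n
    return [1 if i == addr else 0 for i in range(2 ** n)]

# 3 address bits -> 8 word lines, exactly one of them high.
assert word_lines(5, 3) == [0, 0, 0, 0, 0, 1, 0, 0]

# The cost of a flat decoder grows exponentially with n:
assert len(word_lines(0, 16)) == 65536   # 16 address bits -> 64K lines
```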

Static RAM Cell: the 6-Transistor SRAM Cell

Write:
1. Drive the bit lines (bit = 1, bit-bar = 0 to write a 1)
2. Select the row (word line high)

Read:
1. Precharge bit and bit-bar to Vdd or Vdd/2 -- make sure they are equal!
2. Select the row
3. The cell pulls one bit line low
4. A sense amp on the column detects the difference between bit and bit-bar

Speaker notes: The classical SRAM cell consists of two back-to-back inverters that serve as a flip-flop; the expanded view shows that it takes six transistors. To write a value, you drive from both sides: to write a 1, drive "bit" to 1 while driving "bit-bar" to 0, then raise the word line to turn on the two access transistors so the bit-line values are written into the cell. The cell transistors are very tiny, so we cannot rely on them to drive the long bit lines effectively during a read; moreover, the pull-down devices are usually much stronger than the pull-ups. So the first step of a read is to precharge both bit lines high. Once they are charged, the access transistors are turned on and one of the inverters starts pulling one bit line low while the other stays high. It would take the small inverter a long time to drive the long bit line all the way low, but we don't have to wait that long: all we need is to detect the difference between the two bit lines, and any circuit designer will tell you it is much easier to detect a differential signal than an absolute one. (In some cell variants the PMOS pull-ups are replaced with resistive pull-ups to save area.)

Typical SRAM Organization: 16-word x 4-bit

(The slide's figure shows a 16 x 4 array of SRAM cells: address bits A0-A3 feed an address decoder that drives word lines Word 0 through Word 15 horizontally; each of the four columns has a write driver and precharger at the top, fed by Din3-Din0 and WrEn, and a sense amp at the bottom producing Dout3-Dout0.)

Speaker notes: This picture shows how to connect SRAM cells into a 16-word by 4-bit array. The word lines run horizontally from the address decoder, while the bit lines run vertically to the sense amplifiers and write drivers. Which do you think is longer, a word line or a bit line? Since a typical SRAM has thousands (if not millions) of words but usually fewer than a few tens of bits per word, the bit lines are much, much longer than the word lines. A large load on the word lines is not a problem: we can always build a bigger address decoder to drive them. But the bit lines must be driven by the tiny cell transistors, which is why we precharge them high and use sense amps to detect small differences. A separate read enable is not needed: if Write Enable is not asserted, the chip reads by default. The internal logic detects an address change and precharges the bit lines; once they are precharged, the value at the new address appears at the Dout pins.

Logic Diagram of a Typical SRAM

The part is a 2^N-word x M-bit SRAM with N address pins (A) and M bidirectional data pins (D).
- Write Enable is usually active low (WE_L).
- Din and Dout are combined into D to save pins, so a new control signal, output enable (OE_L), is needed:
  - WE_L asserted (low), OE_L deasserted (high): D serves as the data input pins.
  - WE_L deasserted (high), OE_L asserted (low): D serves as the data output pins.
  - Both WE_L and OE_L asserted: the result is unknown. Don't do that!

Speaker notes: To save pins, Din and Dout are combined into one set of bidirectional pins, which requires the new Output Enable control signal. Both Write Enable and Output Enable are asserted low: when Write Enable is asserted the D pins serve as data inputs, and when Output Enable is asserted they serve as data outputs.
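
The four WE_L/OE_L cases can be captured in a behavioral Python model (a sketch under our own encoding: "Z" for a floating bus, "X" for undefined; the class and method names are hypothetical):

```python
class SRAM:
    """Behavioral model of a 2**n-word x M-bit SRAM with bidirectional D."""

    def __init__(self, n_addr_bits: int):
        self.size = 2 ** n_addr_bits
        self.mem = {}

    def cycle(self, addr: int, we_l: int, oe_l: int, d=None):
        assert 0 <= addr < self.size
        if we_l == 0 and oe_l == 1:   # write: D serves as the input
            self.mem[addr] = d
            return "Z"
        if we_l == 1 and oe_l == 0:   # read: D serves as the output
            return self.mem.get(addr, "X")
        if we_l == 0 and oe_l == 0:   # both asserted: result unknown
            return "X"                # "don't do that!"
        return "Z"                    # neither asserted: D floats

ram = SRAM(4)
ram.cycle(3, we_l=0, oe_l=1, d=0b1010)   # write 0b1010 to word 3
value = ram.cycle(3, we_l=1, oe_l=0)     # read it back
```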

Typical SRAM Timing

OE_L determines the direction of the D pins: high = write, low = read. Writes are dangerous! Be careful: a write requires double signaling (OE_L high, WE_L low).

Write timing: set up the address on A and the data on D, then generate a write pulse on WE_L long enough to meet the write access time; the address must be stable for the write setup time before the pulse and the write hold time after it. (For simplicity the write setup times for address and data are shown as equal; in real parts they can differ.)

Read timing: deassert WE_L and assert OE_L. While the address pins carry junk, the output is junk too; once a valid read address is presented, valid data appears on D after the read access time. Between operations D floats at high impedance (High Z).

SRAM timing is much simpler than the DRAM timing shown later.

Problems with SRAM

- Six transistors use up a lot of area.
- Consider a "zero" stored in the cell (Select = 1): transistor N1 is on and tries to pull "bit" to 0, while transistor P2 is on and tries to pull "bit-bar" to 1 (P1 and N2 are off).
- But the bit lines are already precharged high, so: are the pull-ups P1 and P2 really necessary?

Speaker notes: Let's look at the 6-T SRAM cell again and see whether we can improve it. With a "zero" stored, a read causes N1 to pull the bit line toward zero while P2 tries to hold "bit-bar" at 1. But the "bit-bar" line has already been precharged high by internal circuitry even before the access transistors open to start the read. So are transistors P1 and P2 really necessary?

1-Transistor Memory Cell (DRAM)

Write:
1. Drive the bit line
2. Select the row

Read:
1. Precharge the bit line to Vdd/2
2. Select the row
3. The cell and the bit line share charge, producing a minute voltage change on the bit line
4. Sense with a fancy sense amp that can detect changes of ~1 million electrons
5. Write back to restore the value -- a read is really a read followed by a restoring write

Refresh: just do a dummy read to every cell.

Speaker notes: The state-of-the-art DRAM cell has only one transistor; the bit is stored as charge on a tiny capacitor behind a pass transistor. The write operation is simple: drive the bit line and select the row by turning on the pass transistor. For a read, precharge the bit line and then turn on the pass transistor; this causes a small voltage change on the bit line, and a very sensitive amplifier measures that change against a reference bit line. The stored value is destroyed by the read, so an automatic write-back has to be performed at the end of every read.
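
The "minute voltage change" in step 3 follows from charge sharing between the small cell capacitor and the much larger bit-line capacitance. A rough Python calculation with illustrative capacitance values (assumed for the sketch, not from the slides):

```python
# Illustrative values; real parts differ.
C_CELL = 30e-15    # cell capacitor, ~30 fF (assumed)
C_BIT = 300e-15    # bit-line capacitance, ~10x the cell (assumed)
VDD = 2.5          # supply voltage (assumed)

def bitline_voltage(v_cell: float) -> float:
    """Bit line precharged to Vdd/2 shares charge with the cell."""
    q_total = C_CELL * v_cell + C_BIT * (VDD / 2)
    return q_total / (C_CELL + C_BIT)

swing_1 = bitline_voltage(VDD) - VDD / 2   # stored '1': line rises slightly
swing_0 = bitline_voltage(0.0) - VDD / 2   # stored '0': line dips slightly
# With these numbers the swing is only about +/-0.11 V, which is why the
# read needs a "fancy" differential sense amplifier.
```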

Classical DRAM Organization (Square)

Each intersection of a word (row) select line and a bit (data) line represents a 1-T DRAM cell. A row decoder, driven by the row address, selects one word line of the square RAM cell array; the column selector and I/O circuits, driven by the column address, pick one bit line. Row and column address together select 1 bit at a time.

A square array keeps the wires short, which gives power and speed advantages: less RC means faster precharge and discharge, and therefore faster access time.

Speaker notes: Like SRAM, DRAM is organized into rows and columns. But unlike SRAM, which lets you read an entire word out of a row at a time, classical DRAM only lets you read one bit at a time. The reason is to save power as well as area: the DRAM cell is very small, so there are a great many cells across each row, and building a sense amplifier for every column would be very difficult under the area constraint, not to mention that a sense amplifier per column would consume a lot of power. You select the bit you want to read or write by supplying a row address and then a column address. As in SRAM, each row control line is referred to as the word line and each vertical data line as the bit line.

DRAM Logical Organization (4 Mbit)

4 Mbit = 22 address bits: 11 row address bits and 11 column address bits, multiplexed onto pins A0-A10 through an address buffer. The row decoder selects 1 row of 2,048 bits from the 2,048 rows of the 2,048 x 2,048 storage array; the column decoder then selects 1 bit out of the 2,048 bits in that row, through the sense amps and I/O to Data In (D) and Data Out (Q). Each of the RAS/CAS phases carries the square root of the total number of bits.
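
The 22-bit address split can be sketched in Python (whether the row occupies the high-order bits is an assumption for the sketch, not stated on the slide):

```python
ROW_BITS = COL_BITS = 11          # 2,048 rows x 2,048 columns

def split_address(addr: int):
    """Split a 22-bit address into (row, col) for the RAS/CAS phases."""
    assert 0 <= addr < 2 ** (ROW_BITS + COL_BITS)
    row = addr >> COL_BITS                  # high 11 bits (assumed order)
    col = addr & ((1 << COL_BITS) - 1)      # low 11 bits
    return row, col

# 22 address bits cover exactly 4 Mbit:
assert 2 ** (ROW_BITS + COL_BITS) == 4 * 2 ** 20
```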

Logic Diagram of a Typical DRAM

The part shown is a 256K x 8 DRAM with 9 address pins (A) and 8 data pins (D).
- The control signals (RAS_L, CAS_L, WE_L, OE_L) are all active low.
- Din and Dout are combined (D):
  - WE_L asserted (low), OE_L deasserted (high): D serves as the data input pins.
  - WE_L deasserted (high), OE_L asserted (low): D serves as the data output pins.
- Row and column addresses share the same pins (A), and RAS/CAS are edge-sensitive:
  - When RAS_L goes low, the A pins are latched in as the row address.
  - When CAS_L goes low, the A pins are latched in as the column address.

Speaker notes: To save pins, Din and Dout are combined into one set of bidirectional pins, with Write Enable and Output Enable controlling the direction. To save further pins, the row and column addresses share the A pins, whose function is controlled by the active-low Row Address Strobe and Column Address Strobe: whenever RAS_L makes a high-to-low transition, the value on the A pins is latched in as the row address, and whenever CAS_L makes a high-to-low transition, the value on the A pins is latched in as the column address.
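
The edge-sensitive latching of the shared address pins can be modeled in a few lines of Python (a behavioral sketch; the class and field names are ours):

```python
class AddressLatch:
    """Latch the shared A pins on the falling edges of RAS_L and CAS_L."""

    def __init__(self):
        self.prev_ras_l = self.prev_cas_l = 1   # both strobes deasserted
        self.row = self.col = None

    def step(self, ras_l: int, cas_l: int, a: int):
        if self.prev_ras_l == 1 and ras_l == 0:   # falling edge of RAS_L
            self.row = a                          # A latched as row address
        if self.prev_cas_l == 1 and cas_l == 0:   # falling edge of CAS_L
            self.col = a                          # A latched as column address
        self.prev_ras_l, self.prev_cas_l = ras_l, cas_l
```

Note that the model is edge-sensitive, not level-sensitive: holding a strobe low latches nothing new, matching the slide's "RAS/CAS edge-sensitive" remark.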

DRAM READ Timing

Every DRAM access begins with the assertion of RAS_L. There are two ways to read, distinguished by when OE_L is asserted relative to CAS_L:
- Early read cycle: OE_L asserted before CAS_L.
- Late read cycle: OE_L asserted after CAS_L.

In the timing diagram, the A pins carry the row address, then the column address, then junk within each read; D stays at high Z until valid data appears, and the DRAM read cycle time spans from one RAS pulse to the next.

Speaker notes: As with DRAM writes, a DRAM read can be an early read or a late read. In the early read cycle, Output Enable is asserted before CAS, so the data lines carry valid data one read access time after the CAS line goes low. In the late read cycle, Output Enable is asserted after CAS, so the data is not available on the data lines until one output-enable delay after OE is asserted. Once again, notice that the RAS line has to remain asserted during the entire cycle. The DRAM read cycle time, defined as the time between two RAS pulses, is much longer than the read access time. (Q: Can RAS and CAS be low at the same time? Yes, both must be low.)

Early Read Sequencing

1. Assert the row address; assert RAS_L to commence the read cycle, meeting the row address setup time before RAS and hold time after RAS.
2. Assert OE_L.
3. Assert the column address; assert CAS_L, meeting the column address setup time before CAS and hold time after CAS.
4. Valid data out after the access time.
5. Deassert OE_L, CAS_L, and RAS_L to end the cycle.

Sketch of Early Read FSM

(What clocks the FSM?)

1. Drive the row address to memory.
2. Setup time met? Assert RAS_L.
3. Hold time met? Assert OE_L (keeping RAS_L asserted); drive the column address to memory.
4. Setup time met? Assert CAS_L (keeping OE_L and RAS_L asserted).
5. Hold time met? Keep OE_L, RAS_L, and CAS_L asserted: data available (better grab it!).
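
The FSM sketch can be written down as a table in Python (state names are ours; "asserted" means driven low, since all control signals are active low). Each transition fires once the corresponding setup or hold time has been met:

```python
# State -> control signals asserted (driven low) in that state.
EARLY_READ_FSM = [
    ("ROW_ADDR", []),                       # drive row address to memory
    ("RAS", ["RAS_L"]),                     # after row-address setup time
    ("COL_ADDR", ["RAS_L", "OE_L"]),        # after RAS hold time
    ("CAS", ["RAS_L", "OE_L", "CAS_L"]),    # after col-address setup time
    ("DATA", ["RAS_L", "OE_L", "CAS_L"]),   # data available: grab it!
]

def asserted(state: str):
    """Signals asserted in the given state."""
    return dict(EARLY_READ_FSM)[state]

# What makes this an *early* read: OE_L goes low before CAS_L does.
assert "OE_L" in asserted("COL_ADDR")
assert "CAS_L" not in asserted("COL_ADDR")
```

A late-read controller would differ only in deferring OE_L until after CAS_L is asserted.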

Late Read Sequencing

1. Assert the row address; assert RAS_L to commence the read cycle, meeting the row address setup time before RAS and hold time after RAS.
2. Assert the column address; assert CAS_L, meeting the column address setup time before CAS and hold time after CAS.
3. Assert OE_L.
4. Valid data out after the access time.
5. Deassert OE_L, CAS_L, and RAS_L to end the cycle.

Sketch of Late Read FSM

(What clocks the FSM?)

1. Drive the row address to memory.
2. Setup time met? Assert RAS_L.
3. Hold time met? Drive the column address to memory (keeping RAS_L asserted).
4. Setup time met? Assert CAS_L (keeping RAS_L asserted).
5. Hold time met? Assert OE_L (keeping RAS_L and CAS_L asserted): data available (better grab it!).

DRAM WRITE Timing

Every DRAM access begins with the assertion of RAS_L. There are two ways to write, distinguished by when WE_L is asserted relative to CAS_L:
- Early write cycle: WE_L asserted before CAS_L.
- Late write cycle: WE_L asserted after CAS_L.

Speaker notes: Here we perform two write operations to the DRAM; the setup and hold times are highlighted with color bars. Remember, this is very important: every DRAM access starts with the assertion of the RAS line. When RAS_L goes low, the address lines are latched in as the row address; CAS_L then goes low to latch in the column address. There are setup and hold time requirements for the address as well as the data. If Write Enable is already asserted before CAS is asserted, the write occurs shortly after the column address is latched in; this is the early write cycle. It differs from the second example, where Write Enable comes after the assertion of CAS: the late write cycle. Notice that in the early write cycle, the width of the CAS pulse, which you as a logic designer can and should control, must be at least as long as the memory's write access time; in the late write cycle, the width of the Write Enable pulse must be at least as long as the write access time. Also notice that RAS must remain asserted (low) during the entire access cycle. The DRAM write cycle time, defined as the time between two RAS pulses, is much longer than the DRAM write access time.

Key DRAM Timing Parameters

- tRAC: minimum time from RAS falling to valid data output. Quoted as the speed of a DRAM; a fast 4 Mbit DRAM has tRAC = 60 ns.
- tRC: minimum time from the start of one row access to the start of the next. tRC = 110 ns for a 4 Mbit DRAM with tRAC = 60 ns.
- tCAC: minimum time from CAS falling to valid data output. 15 ns for a 4 Mbit DRAM with tRAC = 60 ns.
- tPC: minimum time from the start of one column access to the start of the next. 35 ns for a 4 Mbit DRAM with tRAC = 60 ns.
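
A back-of-the-envelope consequence of these numbers (our arithmetic, not from the slide): random accesses are paced by tRC, while accesses within an already-open row are paced by tPC, so same-row access is roughly three times faster.

```python
tRAC = 60e-9    # RAS access time
tRC = 110e-9    # random (row) cycle time
tCAC = 15e-9    # CAS access time
tPC = 35e-9     # column (page) cycle time

random_rate = 1 / tRC   # back-to-back random accesses per second
page_rate = 1 / tPC     # back-to-back same-row accesses per second

# For an 8-bit-wide part, one byte per access:
random_mb_per_s = random_rate / 1e6   # ~9.1 MB/s random
page_mb_per_s = page_rate / 1e6       # ~28.6 MB/s within a row
```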

First Midterm Results

Grade distribution (116 students):
  A     10%    9
  A-    15%   16
  B+    20%   21
  B     20%   24
  B-    20%   21
  C+    15%   16
  C/C-  10%    9

Low: 7, High: 50, Mean: 32, Std. Dev: 9.9
