SDRAM Memory Controller


1 SDRAM Memory Controller
Static RAM Technology
  6T Memory Cell
  Memory Access Timing
Dynamic RAM Technology
  1T Memory Cell

2 Tri-State Gates
A tri-state gate drives its output only when its active-low enable OE_L is asserted: with OE_L = 0, OUT follows IN; with OE_L = 1, OUT floats at high impedance (Z) regardless of IN. (Figure: tri-state buffer symbols and truth table.)

3 Slick Multiplexer Implementation
A 2-to-1 multiplexer can be built from two tri-state gates driving a shared output: IN0 is enabled onto OUT when S = 0, and IN1 when S = 1, as sketched in the code below. (Figure: two tri-state buffers with shared output and select signal S.)
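To make the idea concrete, here is a minimal Python sketch (not from the slides; the Z sentinel, the resolve helper, and the function names are illustrative assumptions): a tri-state buffer with an active-low enable, and a 2-to-1 mux built from two such buffers sharing one output wire.

```python
Z = "Z"  # high-impedance: the buffer is not driving the shared wire

def tristate(data: int, oe_l: int) -> object:
    """Tri-state buffer with active-low output enable."""
    return data if oe_l == 0 else Z

def resolve(*drivers: object) -> object:
    """Model a shared wire: at most one driver may be active at a time."""
    active = [d for d in drivers if d is not Z]
    if len(active) > 1:
        raise ValueError("bus contention: two tri-states driving at once")
    return active[0] if active else Z

def mux2(in0: int, in1: int, s: int) -> object:
    """2-to-1 mux from two tri-states: IN0 drives OUT when S=0, IN1 when S=1."""
    return resolve(tristate(in0, oe_l=s),        # enabled when s == 0
                   tristate(in1, oe_l=1 - s))    # enabled when s == 1

print(mux2(0, 1, s=0))  # -> 0
print(mux2(0, 1, s=1))  # -> 1
```

The resolve helper also flags bus contention, the case where two drivers are enabled onto the same wire at once.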

4 Basic Memory Subsystem Block Diagram
An address decoder takes n address bits and drives 2^n word lines; each word line selects one row of memory cells, which connect to m bit lines. What happens if n and/or m is very large? (Figure: address decoder, word lines, memory cell array, bit lines.)
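As an illustration of the n-to-2^n decode (a toy Python sketch with hypothetical names, not part of the lecture): n address bits become a one-hot word-line vector, and the line count grows exponentially, which is exactly why a very large n is a problem.

```python
def address_decoder(address: int, n: int) -> list[int]:
    """One-hot decoder: n address bits select exactly one of 2**n word lines."""
    assert 0 <= address < 2 ** n
    return [1 if row == address else 0 for row in range(2 ** n)]

word_lines = address_decoder(address=5, n=4)   # 16 word lines, line 5 asserted
print(word_lines)
print(len(address_decoder(0, n=20)))           # 1,048,576 word lines: why large n hurts
```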

5 Static RAM Cell
6-Transistor SRAM Cell: the word line (row select) gates the cell onto the complementary bit lines, bit and bit.
Write: 1. Drive the bit lines (bit = 1, bit = 0). 2. Select the row.
Read: 1. Precharge bit and bit to Vdd or Vdd/2 (make sure they are equal!). 2. Select the row. 3. The cell pulls one line low. 4. The sense amp on the column detects the difference between bit and bit.

The classical SRAM cell consists of two back-to-back inverters that serve as a flip-flop; the expanded view shows its six transistors. To write a value into this cell you drive from both sides: to write a 1, drive "bit" to 1 while driving "bit bar" to 0, then turn on the two access transistors by raising the word line so the bit-line values are written into the cell. The cell's transistors are very tiny, so we cannot rely on them to drive the long bit lines effectively during a read; also, the pull-down devices are usually much stronger than the pull-ups. So the first thing we do on a read is precharge both bit lines high. Once the bit lines are charged, we turn on the access transistors so one of the inverters starts pulling one bit line low while the other remains high. It would take that small inverter a long time to drive the long bit line all the way low, but we do not have to wait that long, since all we need to detect is the difference between the two bit lines; and any circuit designer will tell you it is much easier to detect a differential signal than an absolute one.

6 Typical SRAM Organization: 16-word x 4-bit
(Figure: a 16-word x 4-bit array. The address decoder, driven by A0-A3, asserts one of the word lines Word 0 ... Word 15; each row holds four SRAM cells. Each column has a write driver & precharger, controlled by WrEn and Precharge, on Din 3 ... Din 0, and a sense amp producing Dout 3 ... Dout 0.)

This picture shows how to connect SRAM cells into a 16-word by 4-bit array. The word lines connect horizontally to the address decoder, and the bit lines connect vertically to the sense amplifiers and write drivers. Which is longer, a word line or a bit line? A typical SRAM has thousands, if not millions, of words (vertically) but usually fewer than a few tens of bits, so the bit lines are much, much longer than the word lines. The word lines are not the problem: if they present a large capacitive load, we can always build a bigger address decoder to drive them. But the bit lines still have to be driven by the tiny transistors of the SRAM cells; that is why we precharge them high and use sense amps to detect the difference. No Read Enable is needed: if Write Enable is not asserted, the part reads by default. The internal logic detects an address change and precharges the bit lines; once they are precharged, the data at the new address appears on the Dout pins. A behavioral sketch follows below.
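Here is a minimal behavioral sketch of the 16-word x 4-bit array in Python (class and signal names are assumptions; the analog precharge and sensing are abstracted away): the address selects one word line, and WrEn chooses between the write drivers and the sense amps.

```python
class SRAM16x4:
    """Toy model of a 16-word x 4-bit SRAM: one word line active per access."""

    def __init__(self):
        self.cells = [[0] * 4 for _ in range(16)]   # 16 rows of 4 SRAM cells

    def access(self, addr: int, wr_en: int, din=None):
        row = addr & 0xF                  # A3..A0 decoded to one word line
        if wr_en:                         # write drivers force the bit lines
            self.cells[row] = list(din)
            return None
        return list(self.cells[row])      # sense amps recover the row -> Dout

sram = SRAM16x4()
sram.access(addr=7, wr_en=1, din=[1, 0, 1, 1])
print(sram.access(addr=7, wr_en=0))       # [1, 0, 1, 1]
```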

7 Logic Diagram of a Typical SRAM
Write Enable is usually active low (WE_L). Din and Dout are combined to save pins, so a new control signal, Output Enable (OE_L), is needed:
WE_L asserted (low), OE_L deasserted (high): D serves as the data input pins.
WE_L deasserted (high), OE_L asserted (low): D serves as the data output pins.
Both WE_L and OE_L asserted: the result is unknown. Don't do that!!!
(Figure: a 2^N words x M bit SRAM with address pins A, M bidirectional data pins D, and controls WE_L and OE_L.)

Here is the logic diagram of a typical SRAM. To save pins, Din and Dout are combined into one set of bidirectional pins, so a new control signal, Output Enable, is needed. Both Write Enable and Output Enable are usually asserted low. When Write Enable is asserted, the D pins serve as data inputs; when Output Enable is asserted, the D pins serve as data outputs.
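The WE_L/OE_L cases above can be written out as a small truth-table function (a Python sketch; the string labels are just illustrative):

```python
def d_pin_role(we_l: int, oe_l: int) -> str:
    """Direction of the shared D pins on an SRAM with active-low WE_L and OE_L."""
    if we_l == 0 and oe_l == 1:
        return "input"      # chip samples D: write
    if we_l == 1 and oe_l == 0:
        return "output"     # chip drives D: read
    if we_l == 1 and oe_l == 1:
        return "high-Z"     # nobody drives D
    return "conflict"       # both asserted: result unknown, don't do that

for we_l in (0, 1):
    for oe_l in (0, 1):
        print(f"WE_L={we_l} OE_L={oe_l}: {d_pin_role(we_l, oe_l)}")
```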

8 Typical SRAM Timing
OE_L determines the direction of D: high means the chip is not driving (write), low means the chip drives data out (read). Writes are dangerous! Be careful! Double signaling: OE_L high, WE_L low.
(Figure: write and read timing waveforms for the 2^N words x M bit SRAM, showing Write Setup Time, Write Hold Time, and Read Access Time; D is high-Z when the SRAM is not driving data out.)

For a write, you set up the address and data on the A and D pins and then generate a write pulse long enough to satisfy the write access time. For simplicity, the write setup times for address and data are drawn as equal; in real life they can differ. For a read, you deassert Write Enable and assert Output Enable. If the address is still junk when OE_L is asserted, you will get junk out; once a valid address is presented to the SRAM, valid data becomes available at the output after a delay of the read access time. SRAM timing is much simpler than the DRAM timing shown later.

9 Problems with SRAM
Six transistors use up a lot of area.
Consider a "zero" stored in the cell: transistor N1 will try to pull "bit" to 0, and transistor P2 will try to pull "bit bar" to 1. But the bit lines are already precharged high: are P1 and P2 really necessary?
(Figure: the 6-T cell with Select = 1, pull-ups P1 and P2, pull-downs N1 and N2, one side of the cell at 1 and the other at 0.)

Let's look at the 6-T SRAM cell again and see whether we can improve it. Suppose a "zero" is stored in the cell: when we read it, transistor N1 tries to pull the bit line to zero while transistor P2 tries to pull the "bit bar" line to 1. But the "bit bar" line has already been charged high by internal circuitry even before the access transistors are opened to start the read. So are transistors P1 and P2 really necessary?

10 1-Transistor Memory Cell (DRAM)
Write: 1. Drive the bit line. 2. Select the row.
Read: 1. Precharge the bit line to Vdd/2. 2. Select the row. 3. Cell and bit line share charge, giving a minute voltage change on the bit line. 4. Sense (fancy sense amp): can detect changes of ~1 million electrons. 5. Write: restore the value.
Refresh: just do a dummy read of every cell.
A read is really a read followed by a restoring write.
(Figure: one pass transistor gated by the row select, storing the bit as charge on a capacitor attached to the bit line.)

The state-of-the-art DRAM cell has only one transistor; the bit is stored as charge on a tiny capacitor. The write operation is very simple: just drive the bit line and select the row by turning on the pass transistor. For a read, we precharge the bit line and then turn on the pass transistor. This causes a small voltage change on the bit line, and a very sensitive amplifier measures this small change against a reference bit line. Once again, the stored value is destroyed by the read, so an automatic write-back has to be performed at the end of every read.
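A toy Python model of this behavior (the leak rate, the 0.5 sensing threshold, and the class name are invented for illustration): reads are destructive and therefore end with a restoring write, and refresh is just a dummy read of every cell.

```python
class DRAMCell:
    """1-T DRAM cell: the bit is charge on a capacitor that slowly leaks away."""

    def __init__(self):
        self.charge = 0.0                     # 1.0 ~ full charge, 0.0 ~ empty

    def write(self, bit: int):
        self.charge = float(bit)              # drive bit line, select row

    def read(self) -> int:
        # Charge sharing with the precharged bit line gives a minute voltage
        # change; the sense amp decides 0 or 1, destroying the stored value.
        bit = 1 if self.charge > 0.5 else 0
        self.charge = 0.5                     # value destroyed by the read...
        self.write(bit)                       # ...so restore it immediately
        return bit

    def leak(self):
        self.charge *= 0.9                    # why periodic refresh is needed

def refresh(cells):
    for cell in cells:
        cell.read()                           # dummy read restores full charge

cell = DRAMCell()
cell.write(1)
for _ in range(3):
    cell.leak()
refresh([cell])
print(cell.read())                            # 1: refreshed before leaking too far
```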

11 Classical DRAM Organization (Square)
(Figure: a square RAM cell array with a row decoder driving the word (row) select lines, bit (data) lines running vertically, and column selector & I/O circuits fed by the column address; each intersection represents a 1-T DRAM cell.)
A square array keeps the wires short, which gives power and speed advantages: less RC means faster precharge and discharge, hence faster access time.
The row address and column address together select 1 bit at a time on the data pin.

Similar to SRAM, DRAM is organized into rows and columns. But unlike SRAM, which lets you read out an entire word at a time, classical DRAM lets you read out only one bit at a time. The reason is to save power as well as area: the DRAM cells are very small and there are a great many of them across each row, so it would be very difficult to build a sense amplifier for every column within the area constraint, not to mention that a sense amplifier per column would consume a lot of power. You select the bit you want to read or write by supplying a row address and then a column address. As in SRAM, each row control line is referred to as the word line and each vertical data line is referred to as the bit line.

12 DRAM Logical Organization (4 Mbit)
4 Mbit = 2^22 bits, so 22 address bits: 11 row address bits and 11 column address bits.
(Figure: A0...A10 feed an 11-bit address buffer; the row decoder selects a word line of the 2,048 x 2,048 memory array; the sense amps & I/O and the column decoder pick one bit line; Data In D and Data Out Q.)
Bits per RAS/CAS address = square root of the total capacity.
The row address selects 1 row of 2,048 bits from the 2,048 rows; the column address selects 1 bit out of the 2,048 bits in that row. The code below sketches the address split.
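The 22-bit address split can be sketched directly (Python; treating the upper 11 bits as the row address is an assumption for illustration, since real parts define their own mapping):

```python
ROW_BITS = 11          # 2,048 rows
COL_BITS = 11          # 2,048 bits per row

def split_address(addr: int) -> tuple[int, int]:
    """Split a 22-bit DRAM address into its row and column halves."""
    assert 0 <= addr < 1 << (ROW_BITS + COL_BITS)       # 4 Mbit = 2**22 bits
    row = addr >> COL_BITS                               # upper 11 bits -> RAS
    col = addr & ((1 << COL_BITS) - 1)                   # lower 11 bits -> CAS
    return row, col

print(split_address(0x2ABCD))   # row and column indices for one of the 4M bits
```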

13 Logic Diagram of a Typical DRAM
(Figure: a 256K x 8 DRAM with 9 multiplexed address pins A, 8 data pins D, and controls RAS_L, CAS_L, WE_L, OE_L.)
The control signals (RAS_L, CAS_L, WE_L, OE_L) are all active low.
Din and Dout are combined (D): with WE_L asserted (low) and OE_L deasserted (high), D serves as the data input pins; with WE_L deasserted (high) and OE_L asserted (low), D serves as the data output pins.
Row and column addresses share the same pins (A): when RAS_L goes low, the A pins are latched in as the row address; when CAS_L goes low, the A pins are latched in as the column address. RAS/CAS are edge-sensitive.

Here is the logic diagram of a typical DRAM. To save pins, Din and Dout are combined into one set of bidirectional pins, so two pins, Write Enable and Output Enable, control the direction of the D pins. To save still more pins, the row and column addresses share one set of pins, A, whose function is controlled by the Row Address Strobe and Column Address Strobe pins, both of which are active low. Whenever the Row Address Strobe makes a high-to-low transition, the value on the A pins is latched in as the row address; whenever the Column Address Strobe makes a high-to-low transition, the value on the A pins is latched in as the column address.
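A small sketch of the multiplexed address pins (Python, with invented names): the value on A is captured on the falling edge of RAS_L as the row address, and on the falling edge of CAS_L as the column address.

```python
class AddressLatch:
    """Latch the shared A pins on the falling (high-to-low) edges of RAS_L/CAS_L."""

    def __init__(self):
        self.row = self.col = None
        self.prev_ras_l = self.prev_cas_l = 1   # both strobes idle high

    def clock(self, a: int, ras_l: int, cas_l: int):
        if self.prev_ras_l == 1 and ras_l == 0:  # RAS_L falling edge
            self.row = a                         # A pins latched as row address
        if self.prev_cas_l == 1 and cas_l == 0:  # CAS_L falling edge
            self.col = a                         # A pins latched as column address
        self.prev_ras_l, self.prev_cas_l = ras_l, cas_l

latch = AddressLatch()
latch.clock(a=0x12A, ras_l=0, cas_l=1)   # row address strobed in
latch.clock(a=0x07F, ras_l=0, cas_l=0)   # column address strobed in, RAS_L held low
print(hex(latch.row), hex(latch.col))    # 0x12a 0x7f
```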

14 DRAM READ Timing
Every DRAM access begins with the assertion of RAS_L. There are two ways to read, early or late, depending on when OE_L is asserted relative to CAS_L.
Early read cycle: OE_L asserted before CAS_L. Late read cycle: OE_L asserted after CAS_L.
(Figure: read waveforms for the 256K x 8 DRAM showing the DRAM read cycle time between RAS_L pulses, the row address followed by the column address on A, and D going from high-Z to data out after the Read Access Time or the Output Enable Delay.)

Similar to a DRAM write, a DRAM read can be an early read or a late read. In the early read cycle, Output Enable is asserted before CAS is asserted, so the data lines contain valid data one read access time after the CAS line goes low. In the late read cycle, Output Enable is asserted after CAS, so the data is not available on the data lines until one read access time after OE is asserted. Once again, notice that the RAS line has to remain asserted during the entire access. The DRAM read cycle time is defined as the time between two RAS pulses; notice that it is much longer than the read access time. Can RAS and CAS be asserted at the same time? Yes, both must be low.

15 Early Read Sequencing
Assert the row address.
Assert RAS_L to commence the read cycle; meet the row address setup time before RAS and hold time after RAS.
Assert OE_L.
Assert the column address.
Assert CAS_L; meet the column address setup time before CAS and hold time after CAS.
Valid data out after the access time.
Deassert OE_L, CAS_L, and RAS_L to end the cycle.

16 Sketch of Early Read FSM
FSM clock?
Row address to memory
  (setup time met?)
Assert RAS_L
  (hold time met?)
Assert OE_L, RAS_L; col address to memory
  (setup time met?)
Assert OE_L, RAS_L, CAS_L
  (hold time met?)
Assert OE_L, RAS_L, CAS_L; data available (better grab it!)
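One way to express this sketch as a table-driven controller (a Python illustration only; the state table, the wait_for predicate, and the names are assumptions, and an actual CS 150 design would be an FSM in hardware clocked fast enough to count out the setup and hold times):

```python
# Each state drives the address mux and the active-low controls, then waits
# for the labelled condition before moving to the next state.
EARLY_READ_STATES = [
    # (drive on A,  OE_L, RAS_L, CAS_L, condition to leave the state)
    ("row address",   1,    1,     1,   "row setup time met?"),
    ("row address",   1,    0,     1,   "row hold time met?"),
    ("col address",   0,    0,     1,   "col setup time met?"),
    ("col address",   0,    0,     0,   "col hold time met?"),
    ("col address",   0,    0,     0,   "data available (better grab it!)"),
]

def run_early_read(wait_for):
    """Step through the early-read sequence; wait_for(cond) blocks until true."""
    for addr_mux, oe_l, ras_l, cas_l, condition in EARLY_READ_STATES:
        print(f"A={addr_mux:12s} OE_L={oe_l} RAS_L={ras_l} CAS_L={cas_l}")
        wait_for(condition)
    # end of cycle: deassert everything (OE_L, CAS_L, RAS_L back high)

run_early_read(wait_for=lambda cond: None)   # conditions trivially "met" for the demo
```

Each row of the table corresponds to one bubble in the FSM sketch above, with the exit condition as the transition label.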

17 Late Read Sequencing
Assert the row address.
Assert RAS_L to commence the read cycle; meet the row address setup time before RAS and hold time after RAS.
Assert the column address.
Assert CAS_L; meet the column address setup time before CAS and hold time after CAS.
Assert OE_L.
Valid data out after the access time.
Deassert OE_L, CAS_L, and RAS_L to end the cycle.

18 Sketch of Late Read FSM
FSM clock?
Row address to memory
  (setup time met?)
Assert RAS_L
  (hold time met?)
Col address to memory; assert RAS_L
  (setup time met?)
Col address to memory; assert RAS_L, CAS_L
  (hold time met?)
Assert OE_L, RAS_L, CAS_L; data available (better grab it!)
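The same table style for the late-read FSM (again an illustrative sketch, not the actual design); comparing it with the early-read table, the only difference is that OE_L stays deasserted until the final state:

```python
LATE_READ_STATES = [
    # (drive on A,  OE_L, RAS_L, CAS_L, condition to leave the state)
    ("row address",   1,    1,     1,   "row setup time met?"),
    ("row address",   1,    0,     1,   "row hold time met?"),
    ("col address",   1,    0,     1,   "col setup time met?"),
    ("col address",   1,    0,     0,   "col hold time met?"),
    ("col address",   0,    0,     0,   "data available (better grab it!)"),
]

for addr_mux, oe_l, ras_l, cas_l, condition in LATE_READ_STATES:
    print(f"A={addr_mux:12s} OE_L={oe_l} RAS_L={ras_l} CAS_L={cas_l}  ({condition})")
```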

19 DRAM WRITE Timing
Every DRAM access begins with the assertion of RAS_L. There are two ways to write, early or late, depending on when WE_L is asserted relative to CAS_L.
Early write cycle: WE_L asserted before CAS_L. Late write cycle: WE_L asserted after CAS_L.
(Figure: write waveforms for the 256K x 8 DRAM showing the DRAM WR cycle time between RAS_L pulses, the row address followed by the column address on A, data in on D, and the WR Access Time for each case.)

Here we perform two write operations to the DRAM; the setup and hold times are highlighted in the colored bars. Remember, this is very important: every DRAM access starts with the assertion of the RAS line. When RAS_L goes low, the address lines are latched in as the row address; this is followed by CAS_L going low to latch in the column address. There are setup and hold time requirements for the addresses as well as the data, as highlighted here. In the first example, Write Enable is already asserted before CAS is asserted, so the write occurs shortly after the column address is latched in; this is referred to as the early write cycle. In the second example, the Write Enable signal comes after the assertion of CAS; this is referred to as the late write cycle. Notice that in the early write cycle the width of the CAS pulse, which you as the logic designer can and should control, must be as long as the memory's write access time; in the late write cycle, the width of the Write Enable pulse must be as wide as the WR access time. Also notice that the RAS line has to remain asserted (low) during the entire access cycle. The DRAM write cycle time is defined as the time between two RAS pulses and is much longer than the DRAM write access time.

20 Key DRAM Timing Parameters
tRAC: minimum time from RAS falling to valid data output; quoted as the speed of a DRAM. A fast 4 Mbit DRAM has tRAC = 60 ns.
tRC: minimum time from the start of one row access to the start of the next; tRC = 110 ns for a 4 Mbit DRAM with a tRAC of 60 ns.
tCAC: minimum time from CAS falling to valid data output; 15 ns for a 4 Mbit DRAM with a tRAC of 60 ns.
tPC: minimum time from the start of one column access to the start of the next; 35 ns for a 4 Mbit DRAM with a tRAC of 60 ns.
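These parameters support a quick back-of-the-envelope rate calculation (Python; interpreting tPC as the limit on back-to-back column accesses within one row access is an assumption about usage, and the numbers are only the example part quoted above):

```python
t_rac = 60    # ns, RAS to valid data
t_rc  = 110   # ns, row cycle: one random access to the start of the next
t_cac = 15    # ns, CAS to valid data
t_pc  = 35    # ns, column cycle: successive column accesses within a row

# Random accesses are limited by tRC, not tRAC:
random_rate_mhz = 1e3 / t_rc            # ~9.1 million accesses per second
# Successive columns of the same row are limited by tPC:
column_rate_mhz = 1e3 / t_pc            # ~28.6 million accesses per second

print(f"random access: {random_rate_mhz:.1f} M/s, same-row access: {column_rate_mhz:.1f} M/s")
```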

21 First Midterm Results
(Histogram of scores.) Grade curve: 10% A, 15% A, 20% B, 20% B, 20% B, 15% C, 10% C/C-. Low: 7, High: 50, Mean: 32, Std. Dev.: 9.9.

22 SDRAM Memory Controller
Static RAM Technology
  6T Memory Cell
  Memory Access Timing
Dynamic RAM Technology
  1T Memory Cell

