Memory Array Subsystems


1 Memory Array Subsystems
EE/CPRE 465 Memory Array Subsystems

2 Outline
Memory Arrays
SRAM: architecture, SRAM cell, decoders, column circuitry, multiple ports
DRAM: architecture, DRAM cell, bitline architectures
ROM
Serial Access Memories

3 Memory Arrays

4 Array Architecture
2^n words of 2^m bits each
If n >> m, fold by 2^k into fewer rows of more columns
Good regularity – easy to design
Very high density if good cells are used
Example in the figure: n = 4, m = 2, k = 1
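The folding arithmetic can be sketched in Python; the function name and bit ordering are illustrative assumptions, not from the slides:

```python
# Sketch of array folding: a 2^n-word x 2^m-bit memory folded by 2^k
# becomes 2^(n-k) rows of 2^(m+k) columns, with a column mux per output bit.
def fold_address(addr, n, m, k):
    """Split an n-bit word address into a row index and a column-mux select."""
    assert 0 <= addr < 2 ** n
    row = addr >> k                      # upper n-k bits choose the wordline
    col_select = addr & ((1 << k) - 1)   # lower k bits drive the column mux
    return row, col_select

# Slide example: n=4, m=2, k=1 -> 8 rows of 8 columns (2 words per row)
print(fold_address(0b1011, n=4, m=2, k=1))  # row 0b101 = 5, col_select 1
```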

5 12T SRAM Cell
Basic building block: the SRAM cell
Holds one bit of information, like a latch
Must be read and written through the bitline
12-transistor (12T) SRAM cell: a simple latch connected to the bitline
46 × 75 λ unit cell

6 6T SRAM Cell
Cell size accounts for most of the array size
Reduce cell size at the expense of complexity
The 6T SRAM cell is used in most commercial chips
Data stored in cross-coupled inverters
Read: precharge bit and bit_b, then raise the wordline
Write: drive data onto bit and bit_b, then raise the wordline

7 SRAM Read
Precharge both bitlines high, then turn on the wordline
One of the two bitlines will be pulled down by the cell
Ex: A = 0, A_b = 1: bit discharges, bit_b stays high
But A bumps up slightly
Read stability: A must not flip, so N1 must be much stronger than N2 (N1 >> N2)

8 SRAM Write
Drive one bitline high, the other low, then turn on the wordline
Bitlines overpower the cell with the new value
Ex: A = 0, A_b = 1, bit = 1, bit_b = 0: force A_b low, then A rises high
Writability: must overpower the feedback inverter, so N4 must be much stronger than P2 (N4 >> P2)

9 SRAM Sizing
High bitlines must not overpower the inverters during reads
But low bitlines must be able to write a new value into the cell

10 SRAM Column Example
Read and write column circuits (shown in figure)

11 SRAM Layout
Cell size is critical: 26 × 45 λ (even smaller in industry)
Tile cells sharing VDD, GND, and bitline contacts

12 Thin Cell
In nanometer CMOS:
Avoid bends in polysilicon and diffusion
Orient all transistors in one direction
The lithographically friendly or "thin cell" layout achieves this
It also reduces the length and capacitance of the bitlines

13 Commercial SRAMs Five generations of Intel SRAM cell micrographs
Transition to thin cell at 65 nm Steady scaling of cell area

14 Decoders
An n:2^n decoder consists of 2^n n-input AND gates
One is needed for each row of memory
Build the AND from NAND or NOR gates (static CMOS or pseudo-nMOS)
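A behavioral sketch of the decoder function (logic only, not a transistor-level design; names are illustrative):

```python
# Behavioral sketch of an n:2^n decoder: exactly one of the
# 2^n outputs is high for each n-bit address.
def decode(addr_bits):
    """addr_bits: tuple of n bits, MSB first. Returns 2^n one-hot outputs."""
    n = len(addr_bits)
    addr = 0
    for b in addr_bits:
        addr = (addr << 1) | b
    # Each output is the AND of the address bits in true or complement form
    return [1 if row == addr else 0 for row in range(2 ** n)]

print(decode((1, 0)))  # 2:4 decoder, address 2 -> [0, 0, 1, 0]
```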

15 Decoder Layout Decoders must be pitch-matched to SRAM cell
Requires very skinny gates

16 Large Decoders For n > 4, NAND gates become slow
Break large gates into multiple smaller gates

17 Predecoding
Many of these gates are redundant
Factor out common gates into a predecoder
Saves area
Same path effort

18 Column Circuitry Some circuitry is required for each column
Bitline conditioning Sense amplifiers Column multiplexing

19 Bitline Conditioning Precharge bitlines high before reads
Equalize bitlines to minimize voltage difference when using sense amplifiers

20 Sense Amplifiers
Bitlines have many cells attached: t_pd ∝ (C/I)·ΔV
Ex: a 32-kbit SRAM has 256 rows × 128 cols, so 128 cells on each bitline
Even with shared diffusion contacts, 64C of diffusion capacitance (big C)
Discharged slowly through small transistors (small I)
Sense amplifiers are triggered on a small voltage swing (reduced ΔV)
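Since t_pd ∝ (C/I)·ΔV, shrinking the required swing ΔV shrinks the delay proportionally. A back-of-the-envelope sketch with assumed (not slide-given) capacitance and current values:

```python
# Hypothetical numbers illustrating t_pd ~ C * dV / I for a bitline.
C_bitline = 64 * 2e-15   # 64 unit diffusion caps of ~2 fF each (assumed)
I_cell = 50e-6           # read current through the small access transistor (assumed)
full_swing = 1.0         # volts: full-rail bitline discharge
sense_swing = 0.1        # sense amp triggers on ~100 mV (assumed)

t_full = C_bitline * full_swing / I_cell    # delay without a sense amp
t_sense = C_bitline * sense_swing / I_cell  # delay to develop the sense swing
print(t_full, t_sense)   # the sense amp cuts delay ~10x by reducing dV
```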

21 Differential Pair Amp
The differential pair requires no clock
But it always dissipates static power
Current mirror serves as a high-impedance load
Biased by a current source

22 Clocked Sense Amp
A clocked sense amp saves power
Requires sense_clk to fire after enough bitline swing develops
Isolation transistors cut off the large bitline capacitance

23 Twisted Bitlines Sense amplifiers also amplify noise
Coupling noise is severe in modern processes Try to couple equally onto bit and bit_b Done by twisting bitlines

24 Column Multiplexing
Recall that the array may be folded for a good aspect ratio
Ex: a 2K-word × 16-bit array folded into 256 rows × 128 columns
Must select 16 output bits from the 128 columns
Requires sixteen 8:1 column multiplexers
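The selection can be sketched behaviorally (the bit-to-column grouping below is an assumption; real layouts interleave the columns of each mux):

```python
# Sketch: selecting 16 output bits from 128 columns with 16 8:1 muxes.
def column_mux(columns, select):
    """columns: 128 bitline values; select: 3-bit mux select (0-7).
    Output bit i comes from one of 8 adjacent bitlines in group i."""
    assert len(columns) == 128 and 0 <= select < 8
    return [columns[8 * i + select] for i in range(16)]

cols = [0] * 128
cols[8 * 3 + 5] = 1          # set one bitline in the group for output bit 3
print(column_mux(cols, 5))   # bit 3 of the output word reads 1
```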

25 Tree Decoder Mux
The column mux can use pass transistors
Use nMOS only; precharge the outputs
One design uses k series transistors for a 2^k:1 mux
No external decoder logic is needed
Built from 2:1 mux stages

26 Single Pass-Gate Mux Or eliminate series transistors with separate decoder

27 Example: 2-way Muxed SRAM

28 Multiple Ports
We have considered single-ported SRAM: one read or one write on each cycle
Multiported SRAMs are needed for register files
Examples:
A multicycle MIPS must read two sources or write a result on some cycles
A pipelined MIPS must read two sources and write a third result each cycle
A superscalar MIPS must read and write many sources and results each cycle

29 Dual-Ported SRAM Simple dual-ported SRAM
Two independent single-ended reads Or one differential write Do two reads and one write by time multiplexing Read during ph1, write during ph2

30 Multi-Ported SRAM
Adding more access transistors hurts read stability
A multiported SRAM isolates reads from the state node
A single-ended design minimizes the number of bitlines
Example: 4 read ports and 3 write ports

31 Large SRAMs
Large SRAMs are split into subarrays for speed
Ex: the UltraSparc 512KB cache has 4 × 128 KB subarrays
Each subarray has 16 × 8 KB banks
256 rows × 256 cols per bank
60% subarray area efficiency
Also space for tags & control [Shin05]

32 Dynamic RAM (DRAM) Store contents as charge on a capacitor
1 transistor and 1 capacitor per cell Order of magnitude greater density but much higher latency than SRAM Cell must be periodically read and refreshed so that its contents do not leak away

33 DRAM Cell Read Operation
The bitline is first precharged to VDD/2
Charge sharing between the cell and the bitline causes a voltage change ΔV on the bitline
The cell must be rewritten after each read
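The sign and size of ΔV follow from conservation of charge between the cell and bitline capacitances; a numeric sketch with assumed capacitance values:

```python
# Hypothetical values: charge sharing between a DRAM cell and a
# bitline precharged to VDD/2.
VDD = 1.2
C_cell = 30e-15      # assumed cell capacitance
C_bit = 300e-15      # assumed bitline capacitance
V_pre = VDD / 2

def read_dv(v_cell):
    """Bitline voltage change after charge sharing with the cell."""
    v_final = (C_bit * V_pre + C_cell * v_cell) / (C_bit + C_cell)
    return v_final - V_pre

print(read_dv(VDD))  # stored 1: small positive dV (~55 mV here)
print(read_dv(0.0))  # stored 0: equal and opposite dV
```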

34 Trench Capacitor Cell must be small to achieve high density
Large cell capacitance is essential to provide a reasonable voltage swing and to minimize soft errors

35 Subarray Architectures
Size represents a tradeoff between density and performance
Larger subarrays amortize the decoders and sense amplifiers
Smaller subarrays are faster and have larger bitline swings because of smaller wordline and bitline capacitance

36 Bitline Architectures
Open, folded, and twisted bitline arrangements (shown in figure)

37 Two-Level Addressing Scheme
DRAM uses a two-level decoder
Row access: choose one row by activating a wordline
Column access: select the data from the column latches
To save pins and reduce package cost, the same address lines are used for both the row and column addresses
Control signals RAS (Row Address Strobe) and CAS (Column Address Strobe) signal which address is being supplied
One row is refreshed in each refresh cycle
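The address multiplexing can be sketched as follows (the bit widths are illustrative):

```python
# Sketch of two-level DRAM addressing: the same pins carry the
# row address (with RAS) and then the column address (with CAS).
def split_address(addr, row_bits, col_bits):
    """Return (row, col) as strobed in by RAS then CAS."""
    assert 0 <= addr < 2 ** (row_bits + col_bits)
    row = addr >> col_bits              # strobed first, with RAS
    col = addr & ((1 << col_bits) - 1)  # strobed second, with CAS
    return row, col

print(split_address(0x2A5, row_bits=5, col_bits=5))  # (21, 5): row 0x15, col 0x05
```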

38 Synchronous DRAM
For each row access:
An entire row is copied to the column latches
Only a few bits (= data width) are output; all the others are thrown away!
Synchronous DRAM (SDRAM):
Allows the column address to change without changing the row address – this feature is called page mode
Enables the transfer of a burst of data from a series of sequential addresses
A burst is defined by a starting address and a burst length
Double Data Rate (DDR) SDRAM – data are transferred on both the rising and falling edges of an externally supplied clock (up to 300 MHz in 2004)
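A simplified model of page-mode bursting (the page size and wrap behavior here are illustrative assumptions, not a particular SDRAM's rules):

```python
# Sketch of SDRAM bursting: one row activation, then several
# sequential column accesses served from the column latches.
def burst_columns(start_col, burst_length, page_size=1024):
    """Column addresses for a sequential burst, wrapping within the open page."""
    return [(start_col + i) % page_size for i in range(burst_length)]

print(burst_columns(1022, 4))  # [1022, 1023, 0, 1] -- wraps within the row
```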

39 Read-Only Memories
Read-only memories are nonvolatile: they retain their contents when power is removed
Mask-programmed ROMs use one transistor per bit
Presence or absence of the transistor determines a 1 or 0

40 ROM Example
4-word × 6-bit ROM, represented with a dot diagram
Dots indicate 1's in the ROM (word contents shown in the figure)
Looks like six 4-input pseudo-nMOS NORs

41 ROM Array Layout
Unit cell is 12 × 8 λ (about 1/10 the size of an SRAM cell)
(Figure labels: GND, word0–word3, bit0–bit5)

42 Row Decoders
ROM row decoders must pitch-match with the ROM
Only a single track per word!
(Figure labels: A0, ~A0, A1, ~A1, VDD, GND, word0–word3)

43 Complete ROM Layout

44 PROMs and EPROMs
Programmable ROMs: build the array with transistors at every site, then burn out fuses to disable unwanted transistors
Electrically programmable ROMs: use a floating gate to turn off unwanted transistors
A high voltage on the upper gate causes electrons to jump through the thin oxide onto the floating gate

45 Electrically Programmable ROMs
EPROM (Erasable Programmable ROM): erased through exposure to UV light, which knocks the electrons off the floating gate
EEPROM (Electrically Erasable Programmable ROM): can be erased electrically; offers fine-grained control over which bits are erased
Flash: erases a block at a time
A freshly erased block can be written once, but then cannot be written again until the entire block is erased
Economical and convenient
Comes as NOR flash or NAND flash

46 NOR ROM vs. NAND ROM
NOR ROM: each bitline is a pseudo-nMOS NOR
The GND line limits the cell size
As small as 11 × 7 λ per cell
NAND ROM: each bitline is a pseudo-nMOS NAND
No VDD/GND line to limit the cell size
As small as 6 × 5 λ per cell
Delay grows quadratically with the number of series transistors

47 Layout of NAND ROM
One option places a transistor in every position and specifies cell contents by using either a transistor (0) or a metal jumper (1) in each position: 7 × 8 λ per cell
Another places a transistor in every position and uses an extra implantation step to create a negative threshold voltage, turning certain transistors permanently ON: 6 × 5 λ per cell

48 NOR Flash vs. NAND Flash
NOR Flash: reads almost as fast as DRAM, but long erase and write times
Endurance of 10,000 to 1,000,000 erase cycles
A full address/data interface allows random access
Suitable for storing data that needs infrequent updates, e.g., computer BIOS or set-top box firmware
NAND Flash: higher density, lower cost per bit
Longer read time, but faster erase and write times
About 10× the endurance
A limited I/O interface allows only sequential data access
Suitable for mass-storage devices, e.g., USB flash drives and memory cards

49 SDRAM, NOR / NAND Flash Comparison

50 NOR Flash Memory Operations
Erase, write, and read

51 Building Logic with ROMs
Use a ROM as a lookup table (LUT) containing the truth table
n inputs and k outputs require 2^n words × k bits
Changing the function is easy – reprogram the ROM
Finite state machine: n inputs, k outputs, s bits of state
Build with a 2^(n+s) × (k+s)-bit ROM and a (k+s)-bit register
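A small behavioral sketch of the ROM-plus-register FSM; the divide-by-3 counter and its state encoding are invented for illustration:

```python
# Sketch: an FSM built from a ROM lookup table plus a state register.
# Example (assumed): a divide-by-3 counter with n=0 inputs, k=1 output,
# s=2 state bits. ROM address = state; ROM word = (next_state, output).
rom = {
    0b00: (0b01, 0),
    0b01: (0b10, 0),
    0b10: (0b00, 1),   # output pulses once every 3 cycles
}

state = 0b00
outputs = []
for _ in range(6):              # simulate 6 clock cycles
    state, out = rom[state]     # "read" the ROM, clock the state register
    outputs.append(out)
print(outputs)  # [0, 0, 1, 0, 0, 1]
```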

52 PLAs
A Programmable Logic Array performs any function in sum-of-products form
Literals: inputs & complements
Products / minterms: ANDs of literals
Outputs: ORs of minterms
Example: full adder
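The full-adder example can be written out in sum-of-products form; a behavioral sketch (the simplified carry products are the standard reduction, not taken from the slide):

```python
# Sketch of the full adder in sum-of-products form, as a PLA realizes it.
def full_adder_sop(a, b, c):
    """Each output is an OR of product terms (ANDs of literals)."""
    na, nb, nc = 1 - a, 1 - b, 1 - c
    s = (na & nb & c) | (na & b & nc) | (a & nb & nc) | (a & b & c)
    cout = (a & b) | (a & c) | (b & c)   # simplified products, not full minterms
    return s, cout

# Exhaustive check against arithmetic
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert full_adder_sop(a, b, c) == ((a + b + c) & 1, (a + b + c) >> 1)
print("full adder SOP verified")
```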

53 NOR-NOR PLAs ANDs and ORs are not very efficient in CMOS
Dynamic or Pseudo-nMOS NORs are very efficient Use DeMorgan’s Law to convert to all NORs

54 PLA Schematic & Layout

55 PLAs vs. ROMs
The OR plane of the PLA is like the ROM array
The AND plane of the PLA is like the ROM decoder
PLAs are more flexible than ROMs:
No need to have 2^n rows for n inputs
Only generate the minterms that are needed
Take advantage of logic simplification

56 Serial Access Memories
Serial access memories do not use an address Shift Registers Tapped Delay Lines Serial In Parallel Out (SIPO) Parallel In Serial Out (PISO) Queues (FIFO, LIFO)

57 Shift Register Shift registers store and delay data
Simple design: cascade of registers Watch your hold times!

58 Denser Shift Registers
Flip-flops aren't very area-efficient
For large shift registers, keep the data in SRAM instead
Move read/write pointers through the RAM rather than moving the data
Initialize the read address to the first entry and the write address to the last
Increment both addresses on each cycle

59 Tapped Delay Line A tapped delay line is a shift register with a programmable number of stages Set number of stages with delay controls to mux Ex: 0 – 63 stages of delay

60 Serial In Parallel Out 1-bit shift register reads in serial data
After N steps, presents N-bit parallel output

61 Parallel In Serial Out Load all N bits in parallel when shift = 0
Then shift one bit out per cycle
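The SIPO and PISO behaviors above can be sketched together (function names are illustrative):

```python
# Sketch of serial<->parallel conversion as in SIPO and PISO registers.
def sipo(serial_bits):
    """Shift serial bits in; after N steps the register holds them in parallel."""
    reg = []
    for b in serial_bits:
        reg.append(b)       # each clock, one more bit shifts in
    return reg

def piso(parallel_bits):
    """Load N bits at once (shift = 0), then shift one bit out per cycle."""
    reg = list(parallel_bits)
    return [reg.pop(0) for _ in range(len(reg))]

word = [1, 0, 1, 1]
print(piso(sipo(word)))  # [1, 0, 1, 1] -- serial in, serial out, order preserved
```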

62 Queues Queues allow data to be read and written at different rates.
Read and write each use their own clock, data Queue indicates whether it is full or empty Build with SRAM and read/write counters (pointers)

63 FIFO, LIFO Queues
First In First Out (FIFO):
Initialize read and write pointers to the first element; the queue is EMPTY
On a write, increment the write pointer; if the write pointer almost catches the read pointer, the queue is FULL
On a read, increment the read pointer
Last In First Out (LIFO):
Also called a stack
Uses a single stack pointer for both read and write
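The FIFO pointer scheme can be sketched as follows (a count field stands in for the hardware's pointer comparison, a simplification for clarity):

```python
# Sketch of a FIFO built from RAM plus read/write pointers,
# with full/empty indications.
class Fifo:
    def __init__(self, depth):
        self.ram = [None] * depth
        self.rd = self.wr = self.count = 0

    def empty(self):
        return self.count == 0

    def full(self):
        return self.count == len(self.ram)

    def write(self, data):
        assert not self.full()
        self.ram[self.wr] = data
        self.wr = (self.wr + 1) % len(self.ram)   # increment write pointer
        self.count += 1

    def read(self):
        assert not self.empty()
        data = self.ram[self.rd]
        self.rd = (self.rd + 1) % len(self.ram)   # increment read pointer
        self.count -= 1
        return data

q = Fifo(4)
for x in (10, 20, 30):
    q.write(x)
print(q.read(), q.read(), q.empty())  # 10 20 False
```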

