1 Lecture 5: Refresh, Chipkill Topics: refresh basics and innovations, error correction.


2 Refresh Basics
- A cell is expected to have a retention time of 64 ms; every cell must be refreshed within a 64 ms window
- The refresh task is broken into 8K refresh operations; a refresh operation is issued every tREFI = 7.8 us
- If a row of cells on a chip is 8Kb and there are 8 banks, then every refresh operation in a 4Gb chip must handle 8 rows in each bank
- Each refresh operation takes time tRFC = 300 ns
- Larger chips have more cells, so tRFC will grow
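A quick back-of-envelope check of these numbers (a sketch in Python; the 4Gb capacity, 8Kb row size, 8 banks, and 300 ns tRFC are the values assumed on this slide):

```python
# Sanity-check of the refresh numbers above (4Gb chip, 8Kb rows, 8 banks,
# tRFC = 300 ns are the slide's assumptions).
chip_bits   = 4 * 2**30         # 4Gb chip
row_bits    = 8 * 2**10         # 8Kb row
banks       = 8
retention   = 64e-3             # 64 ms retention window
refresh_ops = 8 * 2**10         # 8K refresh commands per window
tRFC        = 300e-9            # time per refresh command

tREFI = retention / refresh_ops                        # gap between REF commands
rows_per_bank = chip_bits // banks // row_bits         # 64K rows per bank
rows_per_ref = rows_per_bank // refresh_ops            # rows each REF covers per bank

print(f"tREFI = {tREFI * 1e6:.1f} us")                 # 7.8 us
print(f"rows per bank per REF = {rows_per_ref}")       # 8
print(f"time spent refreshing = {tRFC / tREFI:.1%}")   # ~3.8%
```

Roughly 4% of a rank's time goes to refresh at this capacity; because tRFC grows with chip capacity, that fraction grows in larger chips.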

3 More Refresh Details
- To refresh a row, it needs to be activated and precharged
- Refresh pipeline: the first bank draws the max available current to refresh a row in many subarrays in parallel; each bank is handled sequentially; the process ends with a recovery period to restore the charge pumps
- "Row" on the previous slide refers to the size of the available row buffer when you do an Activate; an Activate only deals with some of the subarrays in a bank; refresh performs an activate in all subarrays in a bank, so it can do multiple rows in a bank in parallel

4 Fine Granularity Refresh
- Will be used in DDR4
- Breaks refresh into smaller tasks; helps reduce read queuing delays (see example)
- In a future 32Gb chip, tRFC = 640 ns, tRFC_2x = 480 ns, tRFC_4x = 350 ns – note the high overhead from the recovery period
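The following sketch, using the tRFC values quoted above for the projected 32Gb chip, shows why the recovery overhead matters: halving or quartering the refresh work does not halve or quarter tRFC, so the aggregate refresh time actually rises with finer granularity.

```python
# FGR values quoted above for a projected 32Gb chip.  If tRFC were pure
# refresh "work", the 2x and 4x modes would take 320 ns and 160 ns each;
# the gap is the fixed recovery overhead paid on every REF command.
tRFC_1x, tRFC_2x, tRFC_4x = 640e-9, 480e-9, 350e-9

print(f"2x: {tRFC_2x*1e9:.0f} ns per REF vs {tRFC_1x/2*1e9:.0f} ns ideal")
print(f"4x: {tRFC_4x*1e9:.0f} ns per REF vs {tRFC_1x/4*1e9:.0f} ns ideal")

# Total refresh time per tREFI window (2x issues twice as many REFs, etc.)
for mode, t, n in [("1x", tRFC_1x, 1), ("2x", tRFC_2x, 2), ("4x", tRFC_4x, 4)]:
    print(f"{mode}: {n * t * 1e9:.0f} ns of refresh per window")
```

The win is that each individual REF blocks pending reads for a shorter time, even though the total time spent refreshing (640 ns vs. 960 ns vs. 1400 ns per window) goes up.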

5 What Makes Refresh Worse
- Refresh operations are issued per rank; LPDDR does allow per-bank refresh
- Can refresh all ranks simultaneously – this reduces memory unavailable time, but increases memory peak power
- Can refresh ranks in a staggered manner – this increases memory unavailable time, but reduces memory peak power
- High temperatures increase the leakage rate and require faster refresh (above 85 degrees C, tREFI drops to 3.9 us)
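A rough comparison of the two policies (the 4-rank configuration is an assumed example; the tRFC and tREFI values are carried over from slide 2):

```python
# Rough comparison of simultaneous vs. staggered rank refresh, and the
# effect of temperature.  4 ranks is an assumed example configuration;
# tRFC/tREFI values are carried over from slide 2.
ranks = 4
tRFC  = 300e-9

for label, tREFI in [("<= 85 C", 7.8e-6), ("> 85 C", 3.9e-6)]:
    simultaneous = tRFC / tREFI          # all ranks unavailable together
    staggered    = ranks * tRFC / tREFI  # some rank is unavailable this often
    print(f"{label}: simultaneous -> whole channel blocked {simultaneous:.1%} "
          f"of the time (refresh power x{ranks}); staggered -> some rank busy "
          f"{staggered:.1%} of the time, others still serve requests "
          f"(refresh power x1)")
```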

6 Refresh Innovations
- Smart refresh (Ghosh et al.): do not refresh rows that have been accessed recently
- Elastic refresh (Stuecheli et al.): perform refresh during periods of inactivity
- Flikker (Liu et al.): lower refresh rate for non-critical pages
- Refresh pausing (Nair et al.): interrupt the refresh process when demand requests arrive
- Preemptive command drain (Mukundan et al.): prioritize requests to ranks that will soon be refreshed

7 Refresh Innovations II
- Concurrent refresh and accesses (Chang et al. and Zhang et al.): makes refresh less efficient, but helps reduce queuing delays for pending reads
- RAIDR (Liu et al.): profile at run time to identify weak cells; track such weak-cell rows in retention-time bins with Bloom filters; refreshes are skipped based on membership in a retention-time bin (see the sketch below)
- Empirical study (Liu et al.): use a custom memory controller on an FPGA to measure retention time in many DIMMs; shows that retention time varies with time and is a function of the data in neighboring cells
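A minimal sketch of the RAIDR-style bookkeeping (the two retention bins, filter sizes, hashing, and the profiling helper are all simplified illustrative assumptions, not the paper's exact design):

```python
# Sketch of RAIDR-style bookkeeping: rows with a weak cell go into a Bloom
# filter and keep the normal 64 ms refresh interval; all other rows are
# refreshed less often (256 ms here).  Bin boundaries, sizes, hashing, and
# the profiling helper are illustrative assumptions.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=8192, num_hashes=4):
        self.size, self.k = size_bits, num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, row_id):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{row_id}".encode()).digest()
            yield int.from_bytes(h[:4], "little") % self.size

    def add(self, row_id):
        for p in self._positions(row_id):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, row_id):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(row_id))

def profile_weak_rows():
    """Hypothetical stand-in for run-time retention profiling."""
    return [12, 345, 6789]          # row IDs found to retain for < 256 ms

weak_rows = BloomFilter()
for row in profile_weak_rows():
    weak_rows.add(row)

def refresh_interval_ms(row_id):
    # A false positive only over-refreshes a strong row; it never skips a
    # refresh that a weak row needs, so an approximate filter is safe here.
    return 64 if row_id in weak_rows else 256
```

The appeal of a Bloom filter here is its one-sided error: a membership test can only misclassify a strong row as weak, which merely costs an extra refresh, never a lost bit.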

8 Basic Reliability
- Every 64-bit data transfer is accompanied by an 8-bit (Hamming) code – typically stored in an extra x8 DRAM chip
- Guaranteed to detect any two-bit error and recover from any single-bit error (SEC-DED)
- Such DIMMs are commodities and are sufficient for most applications
- 12.5% overhead in storage and energy
- For a BCH code, to correct t errors in k-bit data, we need an r-bit code, where r = t * ceil(log2(k)) + 1 (worked example below)
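A worked example of the formula above (a sketch; the inputs are the slide's numbers):

```python
# Check-bit counts from the slide's formula r = t * ceil(log2(k)) + 1,
# which gives single-error correction for t = 1; one extra parity bit
# extends SEC to SEC-DED.
from math import ceil, log2

def secded_bits(k):
    r = 1 * ceil(log2(k)) + 1   # SEC check bits for k data bits
    return r + 1                # + 1 parity bit for double-error detection

for k in (64, 8):
    r = secded_bits(k)
    print(f"{k} data bits -> {r} SEC-DED check bits ({r / k:.1%} overhead)")
# 64 -> 8 check bits (12.5%): the standard 72-bit ECC word
#  8 -> 5 check bits (62.5%): the 8+5 = 13-bit layout used for chipkill on slide 13
```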

9 Terminology
- Hard errors: caused by permanent device-level faults
- Soft errors: caused by particle strikes, noise, etc.
- SDC: silent data corruption (the error is never detected)
- DUE: detected uncorrectable error
- A DUE in memory caused by a hard error will typically lead to DIMM replacement
- Scrubbing: a background scan of memory (1 GB every 45 minutes) to detect and correct 1-bit errors (see the estimate below)
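The scrub rate above implies long scan times for large memories; a quick estimate (the 256 GB server capacity is an assumed example, not a value from the slide):

```python
# How long one background scrub pass takes at 1 GB per 45 minutes.
# The 256 GB capacity is an assumed example.
scrub_rate_gb_per_hour = 1 / 0.75      # 1 GB every 45 min
capacity_gb = 256
hours = capacity_gb / scrub_rate_gb_per_hour
print(f"{capacity_gb} GB scrubbed in {hours:.0f} hours (~{hours / 24:.0f} days)")
```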

10 Field Studies (Schroeder et al., SIGMETRICS 2009)
- Memory errors are the top cause of hardware failures in servers, and DIMMs are the most commonly replaced components in servers
- The study examined Google servers using DDR1, DDR2, and FBDIMM

11 Field Studies (Schroeder et al., SIGMETRICS 2009)

12 Field Studies (Schroeder et al., SIGMETRICS 2009)
- A machine with past errors is more likely to have future errors
- 20% of DIMMs account for 94% of errors
- DIMMs in platforms C and D see higher uncorrectable error rates because they do not have chipkill
- 65-80% of uncorrectable errors are preceded by a correctable error in the same month – but predicting a UE is very difficult
- Chip/DIMM capacity does not strongly influence error rates
- Higher temperature by itself does not cause more errors, but higher system utilization does
- Error rates do increase with age; the increase is steep over a range of months and then flattens out

13 Chipkill
- Chipkill-correct systems can withstand the failure of an entire DRAM chip
- For chipkill correctness:
  - the 72-bit word must be spread across 72 DRAM chips, or
  - a 13-bit word (8-bit data and 5-bit ECC) must be spread across 13 DRAM chips

14 RAID-like DRAM Designs
- DRAM chips do not have built-in error detection
- Can employ a 9-chip rank with ECC to detect and recover from a single error; in case of a multi-bit error, rely on a second tier of error correction
- Can do parity across DIMMs (needs an extra DIMM); use ECC within a DIMM to recover from 1-bit errors; use parity across DIMMs to recover from multi-bit errors in one DIMM
- Reads are cheap (must only access 1 DIMM); writes are expensive (must read and write 2 DIMMs)
- Used in some HP servers
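A sketch of the write path in the parity-across-DIMMs scheme, to make the read/write asymmetry above concrete (read_line/write_line are stand-ins for memory-controller operations, not a real API):

```python
# Why writes touch two DIMMs in the parity-across-DIMMs scheme: the parity
# DIMM must be updated with a read-modify-write.  read_line/write_line are
# stand-ins for memory-controller operations, not a real API.

def write_with_dimm_parity(addr, new_data, data_dimm, parity_dimm,
                           read_line, write_line):
    old_data   = read_line(data_dimm, addr)
    old_parity = read_line(parity_dimm, addr)
    # New parity = old parity XOR old data XOR new data (RAID-4/5-style update)
    new_parity = bytes(p ^ o ^ n
                       for p, o, n in zip(old_parity, old_data, new_data))
    write_line(data_dimm, addr, new_data)
    write_line(parity_dimm, addr, new_parity)
    # Cost: 2 reads + 2 writes, versus 1 write without the cross-DIMM parity
```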

15 RAID-like DRAM (Udipi et al., ISCA'10)
- Add a checksum to every row in DRAM; verified at the memory controller
- Adds area overhead, but provides self-contained error detection
- When a chip fails, data can be reconstructed with the help of a parity DRAM chip
- Can control overheads by having one checksum for a large row or one parity chip for many data chips
- Writes are again problematic
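A sketch of the recovery step in this organization (byte-granularity chunks are an assumption for illustration; the checksum check that identifies the failed chip is omitted):

```python
# Recovery in the checksum + parity organization above: once the per-row
# checksum (verified at the memory controller) points to a failed chip, its
# contents are rebuilt by XORing the parity chip with the surviving chips.

def rebuild_failed_chip(surviving_chunks, parity_chunk):
    rebuilt = bytearray(parity_chunk)
    for chunk in surviving_chunks:
        for i, b in enumerate(chunk):
            rebuilt[i] ^= b
    return bytes(rebuilt)

# Example: parity is the XOR of all data chips' chunks; losing chip 2 is
# recoverable from the other chips plus parity.
chips = [bytes([1, 2, 3]), bytes([4, 5, 6]), bytes([7, 8, 9])]
parity = bytes(a ^ b ^ c for a, b, c in zip(*chips))
assert rebuild_failed_chip(chips[:2], parity) == chips[2]
```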

16 SSC-DSD
- The cache line is organized into multi-bit symbols
- Two check symbols are enough for error detection; three or four check symbols are needed for error correction (can handle the complete failure of one symbol, i.e., each symbol is fetched from a different DRAM chip)
- 3-symbol codes are not popular because they lead to non-standard DIMMs
- 4-symbol codes are more popular, but are used as 32+4 so that standard ECC DIMMs can be used (high activation energy and low rank-level parallelism); 16+4 would require a non-standard DIMM
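Chip counts implied by the two layouts above, assuming x4 DRAM chips so that each 4-bit symbol comes from a different chip:

```python
# Chip counts and storage overheads for the symbol layouts above, assuming
# x4 DRAM chips (one 4-bit symbol per chip per transfer).
for name, data_syms, check_syms in [("16+4", 16, 4), ("32+4", 32, 4)]:
    chips = data_syms + check_syms
    print(f"{name}: {chips} x4 chips per access, "
          f"{check_syms / data_syms:.1%} storage overhead")
# 32+4 -> 36 chips: two standard x72 ECC DIMMs activated together
#         (12.5% overhead, but high activation energy, low rank-level parallelism)
# 16+4 -> 20 chips: 25% overhead and a non-standard DIMM
```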

17 Virtualized ECC (Yoon and Erez, ASPLOS'10)
- Also builds a two-tier error protection scheme, but implements the second tier in software
- The second-tier codes are stored in the regular physical address space (not in specialized DRAM chips); software has flexibility in the types of codes to use and the types of pages that are protected
- Reads are cheap; writes are expensive as usual; but the second-tier codes can now be cached, which greatly helps reduce the number of DRAM writes
- Requires a 144-bit datapath (increases overfetch)
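A very rough sketch of how second-tier codes could be located in ordinary physical memory (the region base, code size, and line size are made-up parameters, not the paper's layout):

```python
# Rough sketch of virtualized second-tier code placement: the tier-2 code
# for a data line lives at a software-chosen spot in ordinary physical
# memory, so it is found with simple address arithmetic and is cacheable
# like any other data.  All constants here are made-up parameters.
T2_REGION_BASE = 0x2_0000_0000   # physical region set aside by the OS (assumed)
LINE_SIZE      = 64              # bytes per cache line
T2_CODE_SIZE   = 4               # bytes of tier-2 code per line (assumed)

def t2_code_address(data_phys_addr):
    line_index = data_phys_addr // LINE_SIZE
    return T2_REGION_BASE + line_index * T2_CODE_SIZE

# On a write, the controller updates the line's tier-1 ECC in the DIMM and
# the (often cache-resident) tier-2 code at t2_code_address(addr).
```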

18 LoT-ECC (Udipi et al., ISCA 2012)
- Uses checksums to detect errors and parity codes to correct them
- Requires access to only 9 DRAM chips per read, but the storage overhead grows to 26%
