236601 - Coding and Algorithms for Memories, Lecture 5


Write-Once Memories (WOM)
Introduced by Rivest and Shamir, “How to Reuse a Write-Once Memory”, 1982
The memory elements represent bits (2 levels) and are irreversibly programmed from ‘0’ to ‘1’
Q: How many cells are required to write 100 bits twice?
P1: Is it possible to do better…?
P2: How many cells are needed to write k bits twice?
P3: How many cells are needed to write k bits t times?
P3’: What is the total number of bits that can be written into n cells in t writes?
[Table: codewords of the Rivest-Shamir code for the 1st and 2nd writes]

Definition: WOM Codes
Definition: An [n,t; M_1,…,M_t] t-write WOM code is a coding scheme which consists of n cells and guarantees any t writes of alphabet sizes M_1,…,M_t by programming cells only from zero to one
– A WOM code consists of t encoding and decoding maps E_i, D_i, 1 ≤ i ≤ t
– E_1: {1,…,M_1} → {0,1}^n
– For 2 ≤ i ≤ t, E_i: {1,…,M_i} × {0,1}^n → {0,1}^n such that for all (m,c) ∊ {1,…,M_i} × {0,1}^n, E_i(m,c) ≥ c (componentwise)
– For 1 ≤ i ≤ t, D_i: {0,1}^n → {1,…,M_i} such that D_i(E_i(m,c)) = m for all (m,c) ∊ {1,…,M_i} × {0,1}^n
The sum-rate of the WOM code is R = (Σ_{i=1}^t log M_i)/n
Rivest-Shamir: [3,2;4,4], R = (log 4 + log 4)/3 = 4/3
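
To make the Rivest-Shamir [3,2;4,4] example concrete, here is a minimal Python sketch of it. The message labeling (which 2-bit value maps to which codeword) is an assumption of the sketch; later slides use their own labeling, but the mechanism is the same.

```python
# Minimal sketch of the Rivest-Shamir [3,2;4,4] two-write WOM code.
# First write: store a 2-bit message as a vector of weight <= 1.
# Second write: store the complement codeword; cells only go from 0 to 1.

FIRST = {(0, 0): (0, 0, 0), (0, 1): (1, 0, 0), (1, 0): (0, 1, 0), (1, 1): (0, 0, 1)}
SECOND = {m: tuple(1 - b for b in c) for m, c in FIRST.items()}

def encode1(m):
    """First write into all-zero cells."""
    return FIRST[m]

def encode2(m, state):
    """Second write: leave the state if it already decodes to m,
    otherwise program up to the complement codeword of m."""
    if decode(state) == m:
        return state
    target = SECOND[m]
    assert all(t >= s for t, s in zip(target, state))  # only 0 -> 1 transitions
    return target

def decode(state):
    """Weight <= 1: read with the first-write table; else the second-write table."""
    table = FIRST if sum(state) <= 1 else SECOND
    return next(m for m, c in table.items() if c == state)

state = encode1((1, 0))          # cells become (0, 1, 0)
state = encode2((0, 1), state)   # cells become (0, 1, 1)
assert decode(state) == (0, 1)
```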

The Capacity of WOM Codes
The capacity region for two writes:
C_{2-WOM} = {(R_1,R_2) | ∃ p ∊ [0,0.5], R_1 ≤ h(p), R_2 ≤ 1−p}
h(p) – the binary entropy function, h(p) = −p·log(p) − (1−p)·log(1−p)
– p – the probability of programming a cell on the 1st write, thus R_1 ≤ h(p)
– After the first write, a fraction 1−p of the cells is not programmed, thus R_2 ≤ 1−p
The maximum achievable sum-rate is max_{p∊[0,0.5]} {h(p) + (1−p)} = log 3, achieved for p = 1/3:
R_1 = h(1/3) = log(3) − 2/3
R_2 = 1 − 1/3 = 2/3
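
A quick numeric check of the stated maximum, in the same Python style as the sketch above:

```python
import math

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Scan p over [0, 0.5] for the maximum of h(p) + (1 - p).
best_p = max((i / 10**5 for i in range(5 * 10**4 + 1)), key=lambda p: h(p) + (1 - p))
print(best_p, h(best_p) + (1 - best_p), math.log2(3))  # ~1/3, ~1.585, 1.585
```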

The Capacity of WOM Codes
The capacity region for two writes:
C_{2-WOM} = {(R_1,R_2) | ∃ p ∊ [0,0.5], R_1 ≤ h(p), R_2 ≤ 1−p}
h(p) – the binary entropy function, h(p) = −p·log(p) − (1−p)·log(1−p)
The capacity region for t writes:
C_{t-WOM} = {(R_1,…,R_t) | ∃ p_1,p_2,…,p_{t−1} ∊ [0,0.5],
R_1 ≤ h(p_1), R_2 ≤ (1−p_1)h(p_2), …,
R_{t−1} ≤ (1−p_1)⋯(1−p_{t−2})h(p_{t−1}),
R_t ≤ (1−p_1)⋯(1−p_{t−2})(1−p_{t−1})}
– p_1 – the probability of programming a cell on the 1st write: R_1 ≤ h(p_1)
– p_2 – the probability of programming a cell on the 2nd write (out of the remaining cells): R_2 ≤ (1−p_1)h(p_2)
– p_{t−1} – the probability of programming a cell on the (t−1)th write (out of the remaining cells): R_{t−1} ≤ (1−p_1)⋯(1−p_{t−2})h(p_{t−1})
– R_t ≤ (1−p_1)⋯(1−p_{t−2})(1−p_{t−1}), because a fraction (1−p_1)⋯(1−p_{t−1}) of the cells was never programmed
The maximum achievable sum-rate is log(t+1)
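
The slide does not show which p_i attain log(t+1); the check below uses the choice p_i = 1/(t−i+2), an assumption that generalizes the p = 1/3 which was optimal for t = 2. It reuses h and math from the previous sketch.

```python
def sum_rate(ps):
    """Sum of the t capacity-region bounds for write probabilities p_1..p_{t-1}."""
    rate, remaining = 0.0, 1.0
    for p in ps:
        rate += remaining * h(p)
        remaining *= 1 - p
    return rate + remaining            # the last term is the rate of write t

for t in range(2, 7):
    ps = [1 / (t - i + 2) for i in range(1, t)]   # assumed optimizer p_i = 1/(t-i+2)
    print(t, round(sum_rate(ps), 6), round(math.log2(t + 1), 6))  # the two agree
```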

James Saxe’s WOM Code
[n, n/2−1; n/2, n/2−1, n/2−2, …, 2] WOM code
– Partition the memory into two parts of n/2 cells each
– First write: for input symbol m ∊ {1,…,n/2}, program the m-th cell of the 1st group
– The i-th write, i ≥ 2: for input symbol m ∊ {1,…,n/2−i+1}, copy the first group to the second group, then program the m-th available (non-programmed) cell in the 1st group
– Decoding: there is always exactly one cell that is programmed in the 1st group and not in the 2nd group; its location, among the cells non-programmed in the 2nd group, is the message value
– Sum-rate: (log(n/2)+log(n/2−1)+…+log 2)/n = log((n/2)!)/n ≈ ((n/2)·log(n/2))/n ≈ (log n)/2
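
A runnable sketch of Saxe’s scheme (messages are 1-indexed, as on the slide):

```python
class SaxeWOM:
    """Saxe's [n, n/2-1; n/2, n/2-1, ..., 2] WOM code on n cells (n even)."""

    def __init__(self, n):
        assert n % 2 == 0
        self.g1 = [0] * (n // 2)   # first group
        self.g2 = [0] * (n // 2)   # second group

    def write(self, m):
        # Copy the first group onto the second (0 -> 1 changes only),
        # then program the m-th still-unprogrammed cell of the first group.
        self.g2 = list(self.g1)
        free = [j for j, b in enumerate(self.g1) if b == 0]
        self.g1[free[m - 1]] = 1

    def read(self):
        # Exactly one cell is programmed in the first group but not the second;
        # its rank among the cells unprogrammed in the second group is the message.
        j = next(k for k, (a, b) in enumerate(zip(self.g1, self.g2)) if a and not b)
        free = [k for k, b in enumerate(self.g2) if b == 0]
        return free.index(j) + 1

wom = SaxeWOM(10)                  # 10 cells: writes of alphabet sizes 5, 4, 3, 2
for m in (4, 2, 3, 1):
    wom.write(m)
    assert wom.read() == m
```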

The Coset Coding Scheme
Cohen, Godlewski, and Merkx ‘86 – the coset coding scheme
– Use error-correcting codes (ECC) in order to construct WOM codes
– Let C[n,n−r] be an ECC with parity-check matrix H of size r×n
– Write r bits: given a syndrome s of r bits, find a length-n vector e such that H·e^T = s
– Use ECCs that guarantee, on successive writes, finding vectors that do not overlap with the previously programmed cells
– The goal is to find a vector e of minimum weight such that only 0s flip to 1s

The Coset Coding Scheme
C[n,n−r] is an ECC with an r×n parity-check matrix H
Write r bits: given a syndrome s of r bits, find a length-n vector e such that H·e^T = s
Example: H is the parity-check matrix of the Hamming code
– s=100, v_1 = …: c = …
– s=000, v_2 = …: c = …
– s=111, v_3 = …: c = …
– s=010, … → can’t write!
This matrix gives a [7,3; 8,8,8] WOM code
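
A brute-force sketch of the coset write, assuming the standard parity-check matrix of the [7,4] Hamming code (column j is the number j+1 in binary); the slide’s own example vectors did not survive transcription, so the trace below only follows its syndrome sequence.

```python
import itertools
import numpy as np

# Parity-check matrix of the [7,4] Hamming code.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def coset_write(state, s):
    """Program only unwritten cells so the new state has syndrome s.
    Tries updates e of increasing weight; returns None if no update works."""
    need = (np.asarray(s) + H @ state) % 2          # required syndrome of e
    free = [j for j in range(H.shape[1]) if state[j] == 0]
    for w in range(len(free) + 1):                  # minimum weight first
        for pos in itertools.combinations(free, w):
            e = np.zeros(H.shape[1], dtype=int)
            e[list(pos)] = 1
            if np.array_equal(H @ e % 2, need):
                return state + e                    # disjoint supports: 0 -> 1 only
    return None

def coset_read(state):
    return tuple(H @ state % 2)                     # the stored r-bit syndrome

state = np.zeros(7, dtype=int)
for s in [(1, 0, 0), (0, 0, 0), (1, 1, 1)]:         # the slide's first three writes
    state = coset_write(state, s)
    assert coset_read(state) == s
# A write fails (coset_write returns None) once no admissible update vector
# exists -- the "can't write" situation on the slide.
```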

Binary Two-Write WOM-Codes
First write: program only vectors v such that rank(H_v) = r, where H_v is the matrix H with its columns zeroed at the positions of the nonzeros of v:
V_C = {v ∊ {0,1}^n | rank(H_v) = r}
– For the Hamming matrix H we get |V_C| = 92 – we can write 92 messages
– Assume we write v_1 = (…)

Binary Two-Write WOM-Codes
First write: program only vectors v such that rank(H_v) = r, V_C = {v ∊ {0,1}^n | rank(H_v) = r}
Second write encoding:
1. Write a vector s_2 of r bits
2. Calculate s_1 = H·v_1
3. Find v_2 such that H_{v_1}·v_2 = s_1+s_2
4. v_2 exists since rank(H_{v_1}) = r
5. Write v_1+v_2 to memory
Second write decoding: multiply the received word by H:
H·(v_1+v_2) = H·v_1 + H·v_2 = s_1 + (s_1+s_2) = s_2
Worked example (for v_1 = (…)): s_2 = …, s_1 = H·v_1 = …, H_{v_1}·v_2 = s_1+s_2 = 011, v_2 = …, v_1+v_2 = …
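
A self-contained sketch of both writes in pure Python. gf2_solve performs Gaussian elimination over GF(2); since the columns of H_{v1} at programmed positions are zero, they can never be pivots, so the solution v_2 automatically avoids the cells written by v_1.

```python
def gf2_solve(A, b):
    """One solution x of A x = b over GF(2), or None; A is a list of 0/1 rows."""
    r, n = len(A), len(A[0])
    M = [A[i][:] + [b[i]] for i in range(r)]        # augmented matrix
    pivots, row = [], 0
    for c in range(n):
        piv = next((i for i in range(row, r) if M[i][c]), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        for i in range(r):
            if i != row and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[row])]
        pivots.append(c)
        row += 1
    if any(M[i][n] for i in range(row, r)):
        return None                                  # inconsistent system
    x = [0] * n
    for i, c in enumerate(pivots):
        x[c] = M[i][n]
    return x

def second_write(H, v1, s2):
    """Return the new memory state v1 + v2 with H (v1 + v2) = s2.
    Assumes v1 was admissible, i.e. rank(H_v1) = r."""
    r, n = len(H), len(H[0])
    s1 = [sum(H[i][j] & v1[j] for j in range(n)) % 2 for i in range(r)]
    Hv1 = [[H[i][j] if v1[j] == 0 else 0 for j in range(n)] for i in range(r)]
    v2 = gf2_solve(Hv1, [a ^ b for a, b in zip(s1, s2)])
    return [a | b for a, b in zip(v1, v2)]           # v2 avoids programmed cells

H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]
v1 = [1, 0, 0, 0, 0, 0, 0]                           # rank(H_v1) = 3, admissible
state = second_write(H, v1, [0, 1, 1])
s2 = [sum(H[i][j] & state[j] for j in range(7)) % 2 for i in range(3)]
print(state, s2)                                     # decoder recovers s2 = [0, 1, 1]
```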

Sum-Rate Results
The construction works for any linear code C
For any C[n,n−r] with parity-check matrix H, V_C = {v ∊ {0,1}^n | rank(H_v) = r}
The rate of the first write is: R_1(C) = (log_2 |V_C|)/n
The rate of the second write is: R_2(C) = r/n
Thus, the sum-rate is: R(C) = (log_2 |V_C| + r)/n
In the last example:
– R_1 = log(92)/7 = 6.52/7 = 0.93, R_2 = 3/7 = 0.42, R = 1.35
Goal: choose a code C with parity-check matrix H that maximizes the sum-rate
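
The count |V_C| = 92 can be verified by brute force; the sketch below is self-contained and uses the same Hamming parity-check matrix as above.

```python
from itertools import product

def gf2_rank(rows):
    """Rank over GF(2) of a 0/1 matrix given as a list of rows."""
    rows = [r[:] for r in rows]
    rank = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][c]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]
# v is admissible on the first write iff zeroing the columns in supp(v)
# leaves H with full rank r = 3.
count = sum(
    gf2_rank([[H[i][j] if v[j] == 0 else 0 for j in range(7)] for i in range(3)]) == 3
    for v in product([0, 1], repeat=7)
)
print(count)   # 92
```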

Capacity-Achieving Results
The capacity region: C_{2-WOM} = {(R_1,R_2) | ∃ p ∊ [0,0.5], R_1 ≤ h(p), R_2 ≤ 1−p}
Theorem: For any (R_1,R_2) ∊ C_{2-WOM} and ε > 0, there exists a linear code C satisfying R_1(C) ≥ R_1−ε and R_2(C) ≥ R_2−ε
By computer search:
– Best unrestricted sum-rate: … (upper bound 1.58)
– Best sum-rate with equal write rates: … (upper bound 1.54)

Capacity Region and Achievable Rates of Two-Write WOM Codes
[Figure: the two-write capacity region and the achievable rate pairs of the constructed codes]

Non-Binary WOM Codes
Definition: An [n,t; M_1,…,M_t]_q t-write WOM code is a coding scheme that consists of n q-ary cells and guarantees any t writes of alphabet sizes M_1,…,M_t only by increasing the cell levels
The sum-rate of the WOM code is R = (Σ_{i=1}^t log M_i)/n

WOM Capacity
The capacity of non-binary WOM codes was given by Fu and Han Vinck, ‘99
The maximal sum-rate using t writes and q-ary cells is log_2 C(q+t−1, t)
There is no tight upper bound on the sum-rate in the case where an equal amount of information is written on each write
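
The formula itself did not survive transcription; log_2 C(q+t−1, t) is the known Fu–Han Vinck expression, consistent with the rest of the deck, and the snippet below evaluates it (for q = 2 it reduces to the log(t+1) of the binary case).

```python
import math

def max_sum_rate(q, t):
    """Fu-Han Vinck maximum sum-rate for t writes on q-ary cells (in bits)."""
    return math.log2(math.comb(q + t - 1, t))

print(max_sum_rate(2, 2))   # 1.585 = log2(3), the binary two-write maximum
print(max_sum_rate(8, 2))   # 5.170, the bound quoted in the q = 8 example below
print(max_sum_rate(3, 2))   # 2.585 = log2(6)
```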

Basic Constructions
Construction 1: Assume t = q−1 and there are n q-ary cells
– On each write, n binary bits are written
– Write to the cells level by level
– The sum-rate is n(q−1)/n = q−1 (upper bound ≈ 2(q−1))
Construction 2: Assume t = 2 and q is odd
– First write: use only levels 0,1,…,(q−1)/2
– Second write: use only levels (q−1)/2,…,q−1
– The sum-rate is 2·log((q+1)/2)
– The gap to the upper bound log_2 C(q+1, 2) is log_2(2q/(q+1)), which is less than 1 bit

Construction 3
Assume q is a power of 2, q = 2^m
Every cell is converted into m bits
For 1 ≤ i ≤ m, the i-th bits from all cells comprise a t-write binary WOM code
The m WOM codes write into their corresponding bits in the cells independently
Since every bit can only change from 0 to 1, the level of each cell cannot decrease
– Use the binary expansion ϕ: {0,1}^m → {0,…,2^m−1}: for x = (x_0,…,x_{m−1}), ϕ(x) = Σ_{j=0}^{m−1} 2^{m−1−j} x_j

Example – Construction 3
Let q = 8 = 2^3 and n = 3
Every cell corresponds to 3 bits
Use the Rivest-Shamir WOM code as the base code: the i-th bits from all cells comprise one instance of it
The sum-rate is 6∙2/3 = 4 (upper bound is 5.17)

Write  Data bits     Encoding by the RS base code   Values in the 8-ary cells
1      (11,01,10)    (100,001,010)                  (4,1,2)
2      (00,11,01)    (101,111,110)                  (7,3,6)
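
A demo of this construction, reusing the Rivest-Shamir sketch from earlier (FIRST, encode2). The message labeling there differs from the slide’s table, so the concrete cell values differ, but the bit-plane mechanism is identical.

```python
def to_cells(planes):
    """Combine m bit-planes into q-ary cell values; plane 0 holds the MSBs."""
    m, n = len(planes), len(planes[0])
    return tuple(sum(planes[i][j] << (m - 1 - i) for i in range(m)) for j in range(n))

planes = [(0, 0, 0)] * 3                 # three RS instances, one per bit-plane
for write_no, data in enumerate([((1, 1), (0, 1), (1, 0)),
                                 ((0, 0), (1, 1), (0, 1))]):
    if write_no == 0:
        planes = [FIRST[d] for d in data]                       # first RS write
    else:
        planes = [encode2(d, p) for d, p in zip(data, planes)]  # second RS write
    print(to_cells(planes))              # (2, 1, 4) then (6, 7, 5): levels only grow
```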

Construction 3
Theorem: If q = 2^m and there exists a t-write binary WOM code of sum-rate R, then there exists a t-write q-ary WOM code of sum-rate mR

Binary Two-Write WOM-Codes (recap of the binary construction above, repeated in the deck before its non-binary generalization)

Non-Binary Two-Write WOM-Codes
Let C[n,n−r] be a linear code with parity-check matrix H of size r×n over GF(q)
For a vector v ∊ GF(q)^n, let H_v be the matrix H with 0s in the columns that correspond to the positions of the nonzeros of v
First write: program only vectors v such that rank(H_v) = r, V_C = {v ∊ GF(q)^n | rank(H_v) = r}
Second write encoding: write a vector s_2 of r symbols over GF(q)
– Let v_1 be the vector programmed on the first write, and let s_1 = H·v_1
– Recall that rank(H_{v_1}) = r
– Find v_2 such that H_{v_1}·v_2 = −s_1+s_2
– A solution v_2 exists since rank(H_{v_1}) = r
– Write v_1+v_2 to memory
Second write decoding: multiply the received word by H:
H·(v_1+v_2) = H·v_1 + H·v_2 = s_1 + (−s_1+s_2) = s_2
This construction works for any linear code over GF(q)

Code Rate
The construction works for any linear code C over GF(q)
For any C[n,n−r] with parity-check matrix H, V_C = {v ∊ GF(q)^n | rank(H_v) = r}
The rate of the first write is: R_1(C) = (log_2 |V_C|)/n
The rate of the second write is: R_2(C) = r·log_2 q/n
Thus, the sum-rate is: R(C) = (log_2 |V_C| + r·log_2 q)/n
By choosing the parity-check matrix H uniformly at random, it is possible to achieve the following capacity region:
{(R_1,R_2) | ∃ p ∊ [0,1−1/q], R_1 ≤ h(p)+p·log_2(q−1), R_2 ≤ (1−p)·log_2 q}
– The maximum achievable sum-rate log_2(2q−1) is achieved for p = (q−1)/(2q−1)
Remarks:
– This is not an optimal non-binary two-write WOM code construction
– Cells cannot be programmed twice
– However, these codes are useful for constructing multiple-write WOM codes
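
A numeric check of the stated maximum, scanning the boundary of the region above (h is the binary entropy, as earlier):

```python
import math

def h(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for q in (2, 3, 4, 8):
    f = lambda p, q=q: h(p) + p * math.log2(q - 1) + (1 - p) * math.log2(q)
    ps = (i / 10**5 for i in range(1, int(10**5 * (1 - 1 / q))))
    best = max(ps, key=f)
    # maximum sum-rate vs log2(2q-1), and the maximizing p vs (q-1)/(2q-1)
    print(q, round(f(best), 4), round(math.log2(2 * q - 1), 4),
          round(best, 4), round((q - 1) / (2 * q - 1), 4))
```

(For q = 2 the second term vanishes, log_2(q−1) = 0, and the scan recovers the binary maximum log 3 at p = 1/3.)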

Three-Write WOM Codes
Let C_3 be an [n,2; 2^{nR_1},2^{nR_2}] two-write WOM code over GF(3)
Construct a binary [2n,3; 2^{nR_1},2^{nR_2},2^n] 3-write WOM code as follows:
– The code has 2n cells, which are broken into n 2-cell blocks
– First write: use the first write of the code C_3
Write a message m_1 from {1,…,2^{nR_1}}
This message is written as a ternary vector v_1 ∊ GF(3)^n
Write the vector v_1 into the n 2-cell blocks, using the mapping Φ: GF(3) → {0,1}^2, where Φ(0)=(0,0), Φ(1)=(1,0), Φ(2)=(0,1)
– Second write: use the second write of the code C_3
Write a message m_2 from {1,…,2^{nR_2}}
The ternary update vector v_2 is written into the n 2-cell blocks using the mapping Φ
Each 2-cell block is written at most once, and at most one cell per block is programmed
[Diagram: m_1 ∊ {1,…,2^{nR_1}} → C_3 encoder → v_1 ∊ GF(3)^n → Φ → u_1 ∊ {0,1}^{2n}; likewise m_2 → v_2 → u_2]

Three-Write WOM Codes (cont.)
The first and second writes are as on the previous slide
– Third write: write n bits
Each bit is stored in a 2-cell block: the bit value is 1 iff both cells are programmed
It is possible to write a bit in each block, since at most one cell in each block was programmed by the first two writes
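
A sketch of the cell-level mechanics; the GF(3) two-write encoder C_3 itself is assumed to come from the previous construction and is not repeated here.

```python
PHI = {0: (0, 0), 1: (1, 0), 2: (0, 1)}   # each GF(3) symbol uses one 2-cell block

def write_ternary(blocks, v):
    """Apply a GF(3) update vector: its support is disjoint from earlier
    writes by construction, so every block is programmed at most once."""
    out = []
    for sym, blk in zip(v, blocks):
        if sym != 0:
            assert blk == (0, 0)           # the block must still be fresh
            blk = PHI[sym]
        out.append(blk)
    return out

def write_bits(blocks, bits):
    """Third write: bit 1 programs a block up to (1,1); bit 0 leaves it."""
    return [(1, 1) if b else blk for b, blk in zip(bits, blocks)]

def read_bits(blocks):
    """A block decodes to 1 iff both of its cells are programmed."""
    return [1 if blk == (1, 1) else 0 for blk in blocks]
```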

Three-Write WOM Code - Example
Example: C_3 is a 2-write WOM code over GF(3) with n = 7 cells
First write:
– Write a vector v_1 over GF(3) according to the first write of C_3; assume the vector is v_1 = …
– Using the mapping Φ, v_1 is written into the 2n = 14 binary cells: … → Φ → …
Second write:
– Write a vector v_2 over GF(3) according to the second write of C_3; assume the vector is v_2 = …
– Using the mapping Φ, v_2 is written into the 2n = 14 binary cells: … → Φ → …
Third write: write 7 bits
– Assume the message is v_3 = …; then write … to memory
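
The slide’s concrete vectors were lost, so the run below (reusing write_ternary, write_bits, and read_bits from the sketch above) uses hypothetical v_1 and v_2 with disjoint supports, as the construction requires, and a hypothetical v_3:

```python
blocks = [(0, 0)] * 7
blocks = write_ternary(blocks, [1, 0, 2, 0, 0, 1, 0])   # hypothetical v1
blocks = write_ternary(blocks, [0, 2, 0, 0, 1, 0, 1])   # hypothetical v2
blocks = write_bits(blocks, [1, 0, 1, 1, 0, 0, 1])      # hypothetical v3, 7 bits
print(read_bits(blocks))                                # [1, 0, 1, 1, 0, 0, 1]
```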

Code Rate
Theorem: Let C_3 be an [n,2; 2^{nR_1},2^{nR_2}] 2-write WOM code over GF(3) of sum-rate R_1+R_2; then there exists a binary [2n,3; 2^{nR_1},2^{nR_2},2^n] 3-write WOM code of sum-rate (R_1+R_2+1)/2
The best sum-rate for the code C_3 is log 5 = 2.32
Conclusion: it is possible to find a binary 3-write WOM code of sum-rate (log 5 + 1)/2 = 1.66
Using a computer search, the best sum-rate found for two-write WOM codes over GF(3) is 2.22, and thus there exists a 3-write WOM code of sum-rate 1.61

Four-Write WOM Codes
Let C_3 be an [n,2; 2^{nR_{3,1}},2^{nR_{3,2}}] two-write WOM code over GF(3) and let C_2 be an [n,2; 2^{nR_{2,1}},2^{nR_{2,2}}] binary two-write WOM code
Construct a binary [2n,4; 2^{nR_{3,1}},2^{nR_{3,2}},2^{nR_{2,1}},2^{nR_{2,2}}] 4-write WOM code as follows:
– The code has 2n cells, which are broken into n 2-cell blocks
– First write: use the first write of the code C_3
Write a message from {1,…,2^{nR_{3,1}}}, encoded as a ternary vector v ∊ GF(3)^n
Write the vector v into the n 2-cell blocks, using the mapping Φ: GF(3) → {0,1}^2, where Φ(0)=(0,0), Φ(1)=(1,0), Φ(2)=(0,1)
– Second write: use the second write of the code C_3
Write a message from {1,…,2^{nR_{3,2}}}; the ternary update vector is written into the n 2-cell blocks using the mapping Φ
Note that each 2-cell block is written at most once
– Third and fourth writes: use the first and second writes of C_2
Each bit is stored in a 2-cell block: the bit value is 1 iff both cells are programmed
It is possible to write a bit in each block since at most one cell in each block was programmed by the first two writes

Code Rate
Theorem: Let C_3 be an [n,2; 2^{nR_{3,1}},2^{nR_{3,2}}] 2-write WOM code over GF(3) (found before) of sum-rate R_{3,1}+R_{3,2}, and let C_2 be an [n,2; 2^{nR_{2,1}},2^{nR_{2,2}}] binary two-write WOM code; then there exists a binary [2n,4; 2^{nR_{3,1}},2^{nR_{3,2}},2^{nR_{2,1}},2^{nR_{2,2}}] 4-write WOM code of sum-rate (R_{3,1}+R_{3,2}+R_{2,1}+R_{2,2})/2
The best sum-rate for the code C_3 is log 5 = 2.32, and for the code C_2 it is log 3 = 1.58
Conclusion: it is possible to find a binary 4-write WOM code of sum-rate (log 5 + log 3)/2 = 1.95
Using a computer search, the best sum-rate found for codes over GF(3) is 2.22 and for codes over GF(2) is …; thus there exists a 4-write WOM code of sum-rate …

Five-Write WOM-Codes
Use a 2-write WOM code C_3 over GF(3) with rates R_{3,1}, R_{3,2} and a binary 3-write WOM code C_2 with rates R_{2,1}, R_{2,2}, R_{2,3}
The first and second writes are implemented as before, using the first and second writes of the code C_3
The third, fourth, and fifth writes are also implemented as before, using the first, second, and third writes of C_2
It is possible to achieve a five-write WOM code of sum-rate (log 5 + (log 5 + 1)/2)/2 = 1.99, and there exists a code of sum-rate …
For six writes, change the code C_2 to a 4-write WOM code; it is then possible to achieve sum-rate (log 5 + 1.95)/2 = 2.14, and there exists a code of sum-rate …
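
The sum-rates quoted on the last few slides all follow one recursion, checked below with the stated base rates (log 5 for the GF(3) two-write code, log 3 for the binary two-write code):

```python
import math

def combined(r3, r2):
    """Sum-rate of the block construction: a GF(3) code of sum-rate r3 handles
    the first two writes; a binary code of sum-rate r2 then uses the blocks."""
    return (r3 + r2) / 2

three = combined(math.log2(5), 1.0)              # 3 writes: (log 5 + 1)/2
four  = combined(math.log2(5), math.log2(3))     # 4 writes: (log 5 + log 3)/2
five  = combined(math.log2(5), three)            # 5 writes: C_2 is the 3-write code
six   = combined(math.log2(5), four)             # 6 writes: C_2 is the 4-write code
print(round(three, 2), round(four, 2), round(five, 2), round(six, 2))
# 1.66  1.95  1.99  2.14
```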