Coding and Algorithms for Memories Lecture 4

1 236601 - Coding and Algorithms for Memories Lecture 4

2 Write-Once Memories (WOM)
Introduced by Rivest and Shamir, “How to Reuse a Write-Once Memory”, 1982
The memory elements represent bits (2 levels) and are irreversibly programmed from ‘0’ to ‘1’
Q: How many cells are required to write 100 bits twice?
P1: Is it possible to do better than 200 cells?
P2: How many cells are needed to write k bits twice?
P3: How many cells are needed to write k bits t times?
P3’: What is the total number of bits that can be written into n cells in t writes?
The Rivest-Shamir code table (one standard labeling of the messages):
message | 1st write | 2nd write
00      | 000       | 111
01      | 100       | 011
10      | 010       | 101
11      | 001       | 110
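The Rivest-Shamir scheme can be sketched in a few lines of Python. The message-to-codeword labeling below is one conventional choice (any relabeling works equally well), and the helper names are mine:

```python
# Rivest-Shamir [3,2; 4,4] WOM code: 2 bits written twice into 3 cells.
# First write: a codeword of weight <= 1; second write: its complement.
# The message labeling here is one conventional choice, not canonical.
FIRST = {(0, 0): (0, 0, 0), (0, 1): (1, 0, 0),
         (1, 0): (0, 1, 0), (1, 1): (0, 0, 1)}
SECOND = {m: tuple(1 - b for b in c) for m, c in FIRST.items()}

def encode2(msg, state):
    """Second write: keep the state if it already encodes msg; otherwise
    program up to the complement codeword (cells only change 0 -> 1)."""
    if state == FIRST[msg]:
        return state
    target = SECOND[msg]
    assert all(t >= s for s, t in zip(state, target)), "only 0 -> 1 allowed"
    return target

def decode(state):
    """Weight <= 1: first-write codeword; weight >= 2: complement of one."""
    table = FIRST if sum(state) <= 1 else SECOND
    return next(m for m, c in table.items() if c == state)
```

Checking all 4×4 message pairs confirms that 2 bits are delivered on each of the two writes using 3 cells, i.e. sum-rate 4/3.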

3 Binary WOM Codes
k1,…,kt: the number of bits written on each write
n cells and t writes
The sum-rate of the WOM code is R = (k1+…+kt)/n
Rivest-Shamir: R = (2+2)/3 = 4/3

4 Definition: WOM Codes
Definition: An [n,t; M1,…,Mt] t-write WOM code is a coding scheme that consists of n cells and guarantees any t writes of alphabet sizes M1,…,Mt by programming cells only from zero to one
A WOM code consists of t encoding and decoding maps Ei, Di, 1 ≤ i ≤ t
E1: {1,…,M1} → {0,1}^n
For 2 ≤ i ≤ t, Ei: {1,…,Mi}×Im(Ei-1) → {0,1}^n such that for all (m,c) ∊ {1,…,Mi}×Im(Ei-1), Ei(m,c) ≥ c
For 1 ≤ i ≤ t, Di: {0,1}^n → {1,…,Mi} such that Di(Ei(m,c)) = m for all (m,c) ∊ {1,…,Mi}×Im(Ei-1)
The sum-rate of the WOM code is R = (Σi logMi)/n
Rivest-Shamir: [3,2; 4,4], R = (log4+log4)/3 = 4/3

5 Definition: WOM Codes
There are two cases:
Fixed-rate: the individual rates on each write must all be the same
Unrestricted-rate: the individual rates are allowed to differ
We assume that the write number is known on each write. This knowledge does not affect the rate:
Assume there exists an [n,t; M1,…,Mt] t-write WOM code where the write number is known
It is possible to construct an [Nn+t, t; M1^N,…,Mt^N] t-write WOM code where the write number is not known, so asymptotically the sum-rate is the same

6 James Saxe’s WOM Code
[n, n/2-1; n/2, n/2-1, n/2-2,…,2] WOM code
Partition the memory into two groups of n/2 cells each
First write: on input symbol m ∊ {1,…,n/2}, program the mth cell of the 1st group
The ith write, i ≥ 2: on input symbol m ∊ {1,…,n/2-i+1},
copy the 1st group to the 2nd group, then
program the mth still-unprogrammed cell in the 1st group
Decoding: there is always exactly one cell that is programmed in the 1st group but not in the 2nd group; its location, among the cells unprogrammed in the 2nd group, is the message value
Sum-rate: (log(n/2)+log(n/2-1)+…+log2)/n = log((n/2)!)/n ≈ ((n/2)log(n/2))/n ≈ (log n)/2

7 James Saxe’s WOM Code
[n, n/2-1; n/2, n/2-1, n/2-2,…,2] WOM code: partition the memory into two groups of n/2 cells each
Example: n=8, [8,3; 4,3,2]
First write (m=3):  0,0,0,0|0,0,0,0 → 0,0,1,0|0,0,0,0
Second write (m=2): 0,0,1,0|0,0,0,0 → 0,1,1,0|0,0,1,0
Third write (m=1):  0,1,1,0|0,0,1,0 → 1,1,1,0|0,1,1,0
Sum-rate: (log4+log3+log2)/8 = 4.58/8 = 0.57
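Saxe’s scheme for this n=8 example can be sketched in Python (function names are mine; messages and cells are 1-indexed as on the slide):

```python
# Sketch of Saxe's [n, n/2-1; n/2, n/2-1, ..., 2] WOM code.
# state = (g1, g2), two groups of n/2 write-once cells each.

def saxe_write(state, m):
    """Write message m (1-indexed) into the current state."""
    g1, g2 = state
    if all(b == 0 for b in g1):                 # first write
        g1[m - 1] = 1
    else:                                       # later writes
        for j, b in enumerate(g1):              # copy group 1 into group 2
            g2[j] |= b
        free = [j for j, b in enumerate(g1) if b == 0]
        g1[free[m - 1]] = 1                     # m-th still-unprogrammed cell
    return state

def saxe_read(state):
    """The unique cell set in group 1 but not group 2; its position among
    the cells unprogrammed in group 2 is the message."""
    g1, g2 = state
    diff = next(j for j in range(len(g1)) if g1[j] == 1 and g2[j] == 0)
    unprog = [j for j, b in enumerate(g2) if b == 0]
    return unprog.index(diff) + 1
```

Running the slide’s write sequence 3, 2, 1 reproduces exactly the three memory states shown above.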

8 WOM Codes Constructions
Rivest and Shamir ‘82: [3,2; 4,4] (R=1.33); [7,3; 8,8,8] (R=1.28); [7,5; 4,4,4,4,4] (R=1.42); [7,2; 26,26] (R=1.34); tabular WOM codes; “linear” WOM codes
David Klaner: [5,3; 5,5,5] (R=1.39)
David Leavitt: [4,4; 7,7,7,7] (R=1.60)
James Saxe: [n,n/2-1; n/2,n/2-1,n/2-2,…,2] (R≈0.5·log n); [12,3; 65,81,64] (R=1.53)
Merkx ‘84 – WOM codes constructed from projective geometries: [4,4; 7,7,7,7] (R=1.60); [31,10; 31,31,31,31,31,31,31,31,31,31] (R=1.598); [7,4; 8,7,8,8] (R=1.69); [7,4; 8,7,11,8] (R=1.75); [8,4; 8,14,11,8] (R=1.66); [7,8; 16,16,16,16,16,16,16,16] (R=1.75)
Wu and Jiang ‘09 – position modulation codes: [172,5; 256,256,256,256,256] (R=1.63); [196,6; 256,256,256,256,256,256] (R=1.71); [238,8; 256,256,256,256,256,256,256,256] (R=1.88); [258,9; 256,256,256,256,256,256,256,256,256] (R=1.95); [278,10; 256,256,256,256,256,256,256,256,256,256] (R=2.01)
Flash Memory Summit, Santa Clara, CA, USA

10 The Coset Coding Scheme
Cohen, Godlewski, and Merkx ‘86 – the coset coding scheme
Use error-correcting codes (ECC) to construct WOM codes
Let C[n,n-r] be an ECC with an r×n parity-check matrix H
Write r bits: given a syndrome s of r bits, find a length-n vector e such that H⋅e^T = s
Use ECCs that guarantee, on successive writes, vectors that do not overlap with the previously programmed cells
The goal is to find a vector e of minimum weight such that only 0s flip to 1s

10 The Coset Coding Scheme
C[n,n-r] is an ECC with an r×n parity-check matrix H
Write r bits: given a syndrome s of r bits, find a length-n vector e such that H⋅e^T = s
Example: H is a parity-check matrix of the [7,4] Hamming code
s=100: v1 = …, c = …
s=000: v2 = …, c = …
s=111: v3 = …, c = …
s=010: … can’t write!
This matrix gives a [7,3; 8,8,8] WOM code
The (23,12,7) Golay code: [23,3; 2^11,2^11,2^11], R = 33/23 = 1.43
The Hamming code: r bits, 2^(r-2)+2 times, 2^r−1 cells: R = r(2^(r-2)+2)/(2^r−1)
Improved by Godlewski (1987) to 2^(r-2)+2^(r-4)+2 times: R = r(2^(r-2)+2^(r-4)+2)/(2^r−1)
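The coset write step can be sketched as a brute-force search. Here H is the standard Hamming parity-check matrix whose column j is the binary expansion of j; the syndrome bit-order may therefore differ from the slide’s example, and the function names are mine:

```python
# Coset-coding write step for the [7,4] Hamming code: find a minimum-weight
# new state with syndrome s, flipping only 0s to 1s. Brute force over all
# 2^7 candidates, which is fine at this toy size.
from itertools import product

# 3x7 parity-check matrix: column j (j = 1..7) is the binary expansion of j
H = [[(j >> i) & 1 for j in range(1, 8)] for i in range(3)]

def syndrome(v):
    return tuple(sum(h * x for h, x in zip(row, v)) % 2 for row in H)

def coset_write(state, s):
    """Return the cheapest new state with syndrome s, or None if stuck."""
    best = None
    for e in product((0, 1), repeat=7):
        if any(ei and si for ei, si in zip(e, state)):
            continue                       # would need an impossible 1 -> 0
        new = tuple(a | b for a, b in zip(state, e))
        if syndrome(new) == s and (best is None or sum(new) < sum(best)):
            best = new
    return best
```

From the all-zero state every 3-bit syndrome is reachable by programming at most one cell, since every nonzero syndrome is a column of H.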

11 Binary Two-Write WOM-Codes
C[n,n-r] is a linear code with an r×n parity-check matrix H
For a vector v ∊ {0,1}^n, Hv is the matrix H with 0’s in the columns that correspond to the positions of the 1’s in v
v1 = ( )

12 Binary Two-Write WOM-Codes
First write: program only vectors v such that rank(Hv) = r
VC = { v ∊ {0,1}^n | rank(Hv) = r }
For H we get |VC| = 92, so we can write 92 messages
Assume we write v1 = ( )

13 Binary Two-Write WOM-Codes
First write: program only vectors v such that rank(Hv) = r, VC = { v ∊ {0,1}^n | rank(Hv) = r }
Second write encoding:
Write a vector s2 of r bits
Calculate s1 = H⋅v1
Find v2 such that Hv1⋅v2 = s1+s2 (such a v2 exists since rank(Hv1) = r)
Write v1+v2 to memory
Second write decoding: multiply the stored word by H: H⋅(v1+v2) = H⋅v1 + H⋅v2 = s1 + (s1+s2) = s2
Example: s2 = 001, s1 = H⋅v1 = 010, Hv1⋅v2 = s1+s2 = 011 → v2 = ( ), v1+v2 = ( )

14 Example Summary
Let H be the parity-check matrix of the [7,4] Hamming code
First write: program only vectors v such that rank(Hv) = 3, VC = { v ∊ {0,1}^n | rank(Hv) = 3 }
For H we get |VC| = 92, so we can write 92 messages
Assume we write v1 = ( )
Write 0’s in the columns of H corresponding to the 1’s in v1 to obtain Hv1
Second write: write r = 3 bits, for example s2 = 0 0 1
Calculate s1 = H⋅v1 = 0 1 0
Solve: find a vector v2 such that Hv1⋅v2 = s1 + s2 = 0 1 1
Choose v2 = ( )
Finally, write v1 + v2 = ( )
Decoding: multiplying the stored word by H recovers s2 = [0 0 1]

15 Sum-rate Results
The construction works for any linear code C
For any C[n,n-r] with parity-check matrix H, VC = { v ∊ {0,1}^n | rank(Hv) = r }
The rate of the first write is R1(C) = (log2|VC|)/n
The rate of the second write is R2(C) = r/n
Thus, the sum-rate is R(C) = (log2|VC| + r)/n
In the last example: R1 = log(92)/7 = 6.52/7 = 0.93, R2 = 3/7 = 0.42, R = 1.35
Goal: choose a code C with parity-check matrix H that maximizes the sum-rate
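The whole two-write scheme, including the count |VC| = 92 reported on the slides, can be checked by brute force (helper names are mine):

```python
# Binary two-write WOM code from the [7,4] Hamming code: enumerate the
# first-write set VC = {v : rank(Hv) = 3}, then perform and decode a
# second write. Brute force throughout, which is fine for n = 7.
from itertools import product

H = [[(j >> i) & 1 for j in range(1, 8)] for i in range(3)]  # 3x7

def rank_gf2(matrix):
    """Rank over GF(2); rows are packed into integers and eliminated."""
    rows = [int("".join(map(str, r)), 2) for r in matrix]
    rank = 0
    while any(rows):
        pivot = max(rows)                  # row with the highest leading bit
        rows.remove(pivot)
        hi = pivot.bit_length() - 1
        rows = [r ^ pivot if (r >> hi) & 1 else r for r in rows]
        rank += 1
    return rank

def restrict(v):
    """Hv: H with the columns at v's 1-positions zeroed."""
    return [[h * (1 - vi) for h, vi in zip(row, v)] for row in H]

def syn(mat, v):
    return tuple(sum(a * b for a, b in zip(row, v)) % 2 for row in mat)

VC = [v for v in product((0, 1), repeat=7) if rank_gf2(restrict(v)) == 3]

def second_write(v1, s2):
    """Find v2, disjoint from v1, with Hv1 . v2 = s1 + s2; store v1 + v2."""
    s1 = syn(H, v1)
    target = tuple(a ^ b for a, b in zip(s1, s2))
    for v2 in product((0, 1), repeat=7):
        if any(a and b for a, b in zip(v1, v2)):
            continue                       # v2 must use only unprogrammed cells
        if syn(restrict(v1), v2) == target:
            return tuple(a | b for a, b in zip(v1, v2))
```

For every v1 in VC and every 3-bit s2 a valid second write exists, and multiplying the stored word by H recovers s2, exactly as the decoding identity on slide 13 promises.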

16 Sum-rate Results
The (23,11,8) Golay code: (0.9415, 0.5217), R = 1.4632
The (16,5,8) Reed-Muller (4,2) code: (0.7691, 0.6875), R = 1.4566
We can limit the number of messages available for the first write so that both writes have the same rate, R1 = R2 = 0.6875, and R = 1.375
By computer search we found more codes
Best code we found has rate …
For fixed rate on both writes, we found …

17 Capacity Achieving Results
The capacity region: C2-WOM = {(R1,R2) | ∃p∊[0,0.5], R1 ≤ h(p), R2 ≤ 1-p}
Theorem: For any (R1,R2) ∊ C2-WOM and ε>0, there exists a linear code C satisfying R1(C) ≥ R1-ε and R2(C) ≥ R2-ε
By computer search:
Best unrestricted sum-rate: … (upper bound: 1.58)
Best fixed-rate sum-rate: … (upper bound: 1.54)

18 Capacity Region and Achievable Rates of Two-Write WOM codes

19 The Entropy Function
How many length-n binary vectors have at most a single 1? n+1
How many bits can be represented this way? log(n+1)
What is the rate? log(n+1)/n
How many vectors have at most k 1’s? Σi=0..k C(n,i)
Is it possible to approximate the value log(Σi=0..k C(n,i))/n? Yes! It is ≈ h(p), where p = k/n and h(p) = -p·log(p)-(1-p)·log(1-p) is the binary entropy function
h(p) is the information rate that is possible to represent when bits are programmed with probability p
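Both claims on this slide are easy to check numerically (function names are mine):

```python
# The rate of "at most k ones out of n" vectors approaches the binary
# entropy h(k/n) as n grows; for k = 1 it is exactly log2(n+1)/n.
import math

def h(p):
    """Binary entropy function, in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate(n, k):
    """log2(number of length-n binary vectors with at most k ones) / n."""
    return math.log2(sum(math.comb(n, i) for i in range(k + 1))) / n
```

For example, rate(1000, 100) is already within 0.01 of h(0.1) ≈ 0.469, and the gap shrinks as n grows.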

20 The Binary Symmetric Channel
When transmitting a binary vector, every bit is flipped with probability p (and received correctly with probability 1-p)
Roughly pn bits will be in error
The amount of information which is lost is h(p)
Therefore, the channel capacity is C(p) = 1-h(p)
The channel capacity indicates how much rate is lost, i.e., how much it is necessary to “pay” in order to correct the errors in the channel

21 The Capacity of WOM Codes
The capacity region for two writes: C2-WOM = {(R1,R2) | ∃p∊[0,0.5], R1 ≤ h(p), R2 ≤ 1-p}
h(p) – the binary entropy function, h(p) = -p·log(p)-(1-p)·log(1-p)
The maximum achievable sum-rate is maxp∊[0,0.5]{h(p)+(1-p)} = log3, achieved for p = 1/3:
R1 = h(1/3) = log(3)-2/3
R2 = 1-1/3 = 2/3
Capacity region for t writes (Heegard ‘86, Fu and Han Vinck ‘99):
Ct-WOM = {(R1,…,Rt) | ∃p1,…,pt-1∊[0,0.5],
R1 ≤ h(p1), R2 ≤ (1-p1)h(p2), …,
Rt-1 ≤ (1-p1)⋯(1-pt-2)h(pt-1),
Rt ≤ (1-p1)⋯(1-pt-2)(1-pt-1)}
The maximum achievable sum-rate is log(t+1)

22 The Capacity of WOM Codes
The capacity region for two writes: C2-WOM = {(R1,R2) | ∃p∊[0,0.5], R1 ≤ h(p), R2 ≤ 1-p}, where h(p) = -p·log(p)-(1-p)·log(1-p) is the entropy function
The capacity region for t writes:
Ct-WOM = {(R1,…,Rt) | ∃p1,p2,…,pt-1∊[0,0.5],
R1 ≤ h(p1), R2 ≤ (1-p1)h(p2), …,
Rt-1 ≤ (1-p1)⋯(1-pt-2)h(pt-1),
Rt ≤ (1-p1)⋯(1-pt-2)(1-pt-1)}
p1 – the probability to program a cell on the 1st write: R1 ≤ h(p1)
p2 – the probability to program a cell on the 2nd write (from the remaining cells): R2 ≤ (1-p1)h(p2)
pt-1 – the probability to program a cell on the (t-1)th write (from the remaining cells): Rt-1 ≤ (1-p1)⋯(1-pt-2)h(pt-1)
Rt ≤ (1-p1)⋯(1-pt-2)(1-pt-1), because a (1-p1)⋯(1-pt-1) fraction of the cells is still unprogrammed before the last write
The maximum achievable sum-rate is log(t+1)
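For t = 2 the stated maximum is easy to verify numerically:

```python
# Verify numerically that max_p [h(p) + (1-p)] = log2(3), attained at
# p = 1/3, matching the general formula log2(t+1) for t = 2 writes.
import math

def h(p):
    """Binary entropy function, in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

grid = [i / 10000 for i in range(1, 5001)]          # p in (0, 0.5]
best_p = max(grid, key=lambda p: h(p) + 1 - p)
best = h(best_p) + 1 - best_p
```

Setting the derivative to zero gives log2((1-p)/p) = 1, i.e. p = 1/3, consistent with R1 = h(1/3) and R2 = 2/3 above.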

23 Non-Binary WOM Capacity
The capacity of non-binary WOM codes was given by Fu and Han Vinck, ‘99
The maximal sum-rate using t writes and q-ary cells is log C(q+t-1, t), which reduces to log(t+1) for q=2
There is no tight upper bound on the sum-rate in the case where an equal amount of information is written on each write

