
1 Linear network code for erasure broadcast channel with feedback. Presented by Kenneth Shum. Joint work with Linyu Huang, Ho Yuet Kwan and Albert Sung. Mar 2014.

2 Erasure broadcast channel. A source node holds data packets P_1, P_2, …, P_N and broadcasts to users 1, 2, …, K. We want to deliver all source data packets to every user, but each transmitted packet is erased with a certain probability at each user.

3 Erasure broadcast channel with feedback. The users can send acknowledgements back to the source node.

4 Linear network code. The source node broadcasts encoded packets. A packet is regarded as a vector over a finite field F. An encoded packet is obtained by taking a linear combination of the N source packets, with coefficients drawn from F.
– The vector formed by the N coefficients is called the encoding vector of the encoded packet.
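As an illustration, here is a minimal sketch of the encoding step over GF(2), where a linear combination reduces to XOR-ing the packets whose coefficient is 1. The packet contents and the helper name `encode` are hypothetical, not from the paper.

```python
# A minimal sketch of linear network coding over GF(2), assuming packets
# are byte strings of equal length.  Over GF(2), a linear combination is
# just the XOR of the packets whose coefficient is 1.

def encode(packets, coeffs):
    """Return the encoded packet for a 0/1 encoding vector `coeffs`."""
    out = bytes(len(packets[0]))
    for pkt, c in zip(packets, coeffs):
        if c:  # coefficient 1: XOR this packet in
            out = bytes(a ^ b for a, b in zip(out, pkt))
    return out

# Hypothetical example: N = 3 source packets of 4 bytes each.
P = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
v = [1, 0, 1]                 # encoding vector, carried in the header
print(encode(P, v).hex())     # P_1 XOR P_3
```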

5 Erasure broadcast channel. The source node broadcasts linear combinations of P_1, P_2, …, P_N. The packet header contains the encoding vector of the encoded packet.

6 The received packets are cached. The source packets can be interpreted as the standard basis e_1, e_2, …, e_N of the vector space F^N. Each user stores the received packets v_1, v_2, … together with the corresponding encoding vectors.

7 Synopsis. Objectives:
– Minimize the completion time of each user.
– Minimize encoding and decoding complexity. Decoding complexity can be reduced if the encoding vectors are sparse.
– Apply a version of Gaussian elimination that exploits sparsity.
The problem of generating sparse encoding vectors is related to some NP-complete problems. Heuristic algorithms and a comparison are presented.

8 Complexity issues in network coding. Deciding whether there exists a linear network code with a prescribed alphabet size is NP-hard.
– Lehman and Lehman, Complexity classification of network information flow problems, SODA, 2004.
Minimizing the number of encoding nodes is NP-hard.
– Langberg, Sprintson and Bruck, The encoding complexity of network coding, Trans. IT, 2006.
– Langberg and Sprintson, On the hardness of approximating the network coding capacity, Trans. IT, 2011.
For the noiseless broadcast channel with binary alphabet, minimizing the number of packet transmissions in the index coding problem is NP-hard.
– El Rouayheb, Chaudhry and Sprintson, On the minimum number of transmissions in single-hop wireless coding networks, ITW, 2007.

9 Innovative packet. An encoded packet is said to be innovative to a user if its encoding vector is linearly independent of the encoding vectors that user has previously received. If an encoded packet is innovative to all users, we say that it is innovative. It is known that innovative packets always exist if the finite field size is larger than or equal to the number of users.
– Keller, Drinea and Fragouli, Online broadcasting with network coding, NetCod, 2008.
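A minimal sketch of the innovativeness test over GF(2) follows. Vectors of length N are packed into Python ints (bit i = component i); the XOR-basis representation is an implementation choice for illustration, not the paper's method.

```python
# Each user's knowledge is kept as an XOR basis over GF(2): a dict
# mapping leading-bit position -> basis vector (a Python int).

def add_to_basis(basis, v):
    """Try to add v to the basis; return True iff v was independent."""
    while v:
        lead = v.bit_length() - 1
        if lead not in basis:
            basis[lead] = v          # v extends the span
            return True
        v ^= basis[lead]             # cancel the leading bit, continue
    return False                     # v reduced to 0: already in the span

def is_innovative(v, bases):
    """v is innovative iff it is independent of every user's basis."""
    return all(add_to_basis(dict(b), v) for b in bases)  # copy, don't mutate

# Hypothetical example (N = 2): user 1 knows e_1, user 2 knows e_1 + e_2.
bases = [{0: 0b01}, {1: 0b11}]
print(is_innovative(0b10, bases))  # True: e_2 is innovative to both users
print(is_innovative(0b01, bases))  # False: user 1 already has e_1
```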

10 Notation: encoding matrix. For each user k, the rows of the matrix C_k are the encoding vectors of the packets that user has received. The source packets are interpreted as the standard basis e_1, e_2, …, e_N of the vector space F^N.

11 The set of all innovative encoding vectors. Given C_s for all s, the set of all innovative encoding vectors is defined as

V := { v ∈ F^N : v ∉ rowspace(C_s) for s = 1, 2, …, K }.

12 Hamming weight and sparsity. Given an encoding vector v, the support of v is defined as supp(v) := { i : v_i ≠ 0 }. The Hamming weight wt(v) is defined as the cardinality of supp(v). A vector v with wt(v) ≤ n is said to be n-sparse.

13 The SPARSITY problem. Considering both the sparsity and the innovativeness of an encoding vector, we formulate the problem below.
Problem: SPARSITY
Instance: K matrices C_1, …, C_K over GF(q), each with N columns and rank less than N; a positive integer n.
Question: Is there a vector v ∈ V with Hamming weight less than or equal to n?

14 Example: Let q = 2, K = 2, N = 4 and n = 2, and consider two matrices C_1 and C_2 (given on the slide). There are three vectors in V with Hamming weight less than or equal to n = 2.

15 Theorem. SPARSITY is NP-complete. Now define the optimization version of SPARSITY as follows. Question: Find a vector v ∈ V with minimum Hamming weight. It can be shown that the optimization version of SPARSITY is NP-hard. However, for fixed K and q, it can be solved by brute force in polynomial time.
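For intuition, here is a brute-force sketch of the optimization version over GF(2). It is exponential in N (it enumerates all vectors), so it is only usable on tiny instances; the instance at the bottom is hypothetical.

```python
from itertools import product

def in_span(rows, v):
    """Gaussian elimination over GF(2): is v in the span of `rows`?"""
    rows = [list(r) for r in rows]
    v = list(v)
    for col in range(len(v)):
        piv = next((r for r in rows if r[col] == 1), None)
        if piv is None:
            continue
        rows = [[(a + b) % 2 for a, b in zip(r, piv)] if r[col] else r
                for r in rows if r is not piv]
        if v[col]:
            v = [(a + b) % 2 for a, b in zip(v, piv)]
    return not any(v)

def sparsest_innovative(C, N):
    """C[k] lists user k's received encoding vectors as 0/1 tuples."""
    best = None
    for v in product([0, 1], repeat=N):
        if not any(v):
            continue                      # skip the zero vector
        if all(not in_span(rows, v) for rows in C):   # innovative to all
            if best is None or sum(v) < sum(best):
                best = v
    return best                           # None if V is empty

# Hypothetical instance: K = 2 users, N = 3 packets.
C = [[(1, 0, 0)], [(0, 1, 0)]]
print(sparsest_innovative(C, 3))          # (0, 0, 1), weight 1
```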

16 Orthogonal complement. Let V_k be the row space of C_k. Denote the orthogonal complement of V_k by V_k^⊥ := { u ∈ F^N : u · v = 0 for all v ∈ V_k }. Let Z_k be an (N − rank(C_k)) × N matrix whose rows form a basis of V_k^⊥. Z_k can be obtained from the reduced row echelon form (RREF) of C_k.

17 To check whether an encoding vector is innovative, we use the following fact. Theorem. Given Z_1, …, Z_K, an encoding vector v belongs to V if and only if Z_s vᵀ ≠ 0 for all s.
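A minimal sketch of both steps over GF(2) follows: computing a basis of the orthogonal complement from the RREF (the kernel of C_k equals rowspace(C_k)^⊥), then applying the theorem's test. The example matrices are hypothetical.

```python
# A minimal sketch over GF(2), with matrices as lists of 0/1 lists.

def rref_gf2(M):
    """Return (RREF rows, pivot column indices) of M over GF(2)."""
    M = [row[:] for row in M]
    pivots, r = [], 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [(a + b) % 2 for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M[:r], pivots

def complement_basis(C, N):
    """Basis Z of the orthogonal complement of rowspace(C)."""
    R, pivots = rref_gf2(C)
    free = [c for c in range(N) if c not in pivots]
    Z = []
    for f in free:               # one basis vector per free column
        z = [0] * N
        z[f] = 1
        for row, p in zip(R, pivots):
            z[p] = row[f]        # over GF(2), -x = x
        Z.append(z)
    return Z

def innovative(Zs, v):
    """The theorem's test: Z_s v != 0 for every user s."""
    return all(any(sum(zi * vi for zi, vi in zip(z, v)) % 2 for z in Z)
               for Z in Zs)

# Hypothetical example: user 1 has received only e_1 (N = 3).
Z1 = complement_basis([[1, 0, 0]], 3)
print(Z1)                           # [[0, 1, 0], [0, 0, 1]]
print(innovative([Z1], [0, 1, 0]))  # True: e_2 is innovative to user 1
```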

18 Minimizing the Hamming weight. Given C_s for all s, we obtain the complement bases Z_s by RREF. Let z_{k,i} be the i-th row of Z_k. Define u_k := z_{k,1} ∨ z_{k,2} ∨ …, where ∨ denotes the logical-OR operator applied component-wise to the vectors, with each non-zero component treated as a 1.

19 Example: Let q = 3, K = 3, N = 4, and let the orthogonal complements of the row spaces be given by the row spaces of three matrices Z_1, Z_2, Z_3 (shown on the slide). The vectors u_k, for k = 1, 2, 3, are obtained by taking the component-wise OR of the rows of each Z_k.

20 Define B as the K × N matrix whose k-th row is u_k. Note that B is a binary matrix and has no zero rows. Given a subset S of column indices of B, let B_S be the submatrix of B whose columns are chosen according to S.

21 Lemma 3. Let S be an index set. There exists an encoding vector v ∈ V with support inside S (i.e., v_i = 0 for i ∉ S) if and only if B_S has no zero rows.
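A minimal sketch of this construction: build B from the complement bases and test the condition of Lemma 3 for a candidate support S. The Z matrices below are hypothetical, and the coefficient-finding step (which may need a large enough field) is omitted.

```python
def or_of_rows(Z):
    """u_k: component-wise logical OR of the rows of Z_k."""
    return [1 if any(row[i] for row in Z) else 0 for i in range(len(Z[0]))]

def build_B(Zs):
    """Stack the vectors u_1, ..., u_K into the binary matrix B."""
    return [or_of_rows(Z) for Z in Zs]

def support_feasible(B, S):
    """Lemma 3's condition: B_S has no zero row."""
    return all(any(row[j] for j in S) for row in B)

# Hypothetical example with K = 2 users, N = 3:
B = build_B([[[0, 1, 0], [0, 0, 1]],   # Z_1
             [[1, 0, 1]]])             # Z_2
print(B)                               # [[0, 1, 1], [1, 0, 1]]
print(support_feasible(B, {2}))        # True: column 3 hits both rows
```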

22 Example (continued). The three rows of B correspond to the first, second and third user. Choose a 3 × w submatrix of B with minimal w such that the submatrix has no zero rows. We may choose the first two columns; then we can find an encoding vector with two non-zero components.

23 The NP-completeness of SPARSITY can be shown by reducing HITTING SET to SPARSITY.
Problem: HITTING SET
Instance: A finite set U, a collection T_1, …, T_m of subsets of U, and an integer n.
Question: Is there a subset S ⊆ U with cardinality at most n, such that S ∩ T_j ≠ ∅ for each j?

24 Example (continued). The set to be hit for each of the three users is the support of the corresponding row of B (shown on the slide). The minimal hitting sets are: {1,2}, {1,3}, {2,3}, {2,4}, {3,4}.

25 Optimal hitting method. Solve the hitting set problem optimally by reducing it to binary integer programming.
– Minimum sparsity at each iteration is guaranteed.
After the support of the encoding vector is determined, find the coefficients which make the vector innovative.
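One possible formulation of the integer-programming step is sketched below, using SciPy's `milp` solver (available in SciPy ≥ 1.9): minimize the support size subject to B·x ≥ 1 with binary x, so every user's row is hit at least once. The matrix B is hypothetical; the slides do not prescribe a particular solver.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

B = np.array([[0, 1, 1],          # hypothetical B for K = 2, N = 3
              [1, 0, 1]])
K, N = B.shape

res = milp(
    c=np.ones(N),                                    # minimize sum of x_j
    constraints=LinearConstraint(B, lb=np.ones(K)),  # each row hit >= once
    integrality=np.ones(N),                          # all variables integer
    bounds=Bounds(0, 1),                             # binary variables
)
support = np.flatnonzero(res.x > 0.5)
print(support)                     # [2]: column 3 alone hits both rows
```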

26 Greedy hitting method. Solve the hitting set problem heuristically by a greedy method.
– Sequentially pick an element which hits the largest number of remaining sets.
– Minimum sparsity is not guaranteed.
After the support of the encoding vector is determined, find the coefficients which make the vector innovative.
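A minimal sketch of the greedy step, assuming B has no zero rows (as the slides note): repeatedly pick the column of B that hits the most not-yet-hit rows.

```python
def greedy_support(B):
    """Greedy hitting set over the rows of the binary matrix B."""
    K, N = len(B), len(B[0])
    unhit = set(range(K))
    S = []
    while unhit:
        # column hitting the largest number of remaining rows
        j = max(range(N), key=lambda c: sum(B[k][c] for k in unhit))
        S.append(j)
        unhit -= {k for k in unhit if B[k][j]}
    return S

B = [[0, 1, 1], [1, 0, 1]]        # hypothetical B for K = 2, N = 3
print(greedy_support(B))          # [2]: column 3 alone hits both rows
```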

27 Existing encoding schemes (I): random linear network codes.
– Encoding: Phase 1: the source node first broadcasts each source packet. Phase 2: it sends encoded packets with randomly generated coefficients.
– Decode by Gaussian elimination.
– No feedback is required.

28 Existing encoding schemes (II): chunked code, an extension of random linear network coding.
– Divide the source packets into chunks. Each chunk contains c packets.
– Apply random linear network coding within each chunk.
– The resulting encoding vectors are c-sparse.
– Feedback is not required.
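A minimal sketch of why chunked encoding vectors are c-sparse, over GF(2); the round-robin chunk selection and function name are hypothetical details for illustration.

```python
import random

def chunked_encoding_vector(N, c, chunk_index):
    """Random coefficients only inside one chunk of c packets, so the
    encoding vector has at most c non-zero components (c-sparse)."""
    v = [0] * N
    start = chunk_index * c
    for i in range(start, min(start + c, N)):
        v[i] = random.randint(0, 1)   # random GF(2) coefficient
    # (a real implementation would redraw an all-zero vector)
    return v

print(chunked_encoding_vector(N=8, c=4, chunk_index=1))  # nonzeros in 4..7
```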

29 Existing encoding schemes (III): instantly decodable network code.
– Encoding: Phase 1: the source packets are first broadcast once. Phase 2: find a subset of users such that each of them can decode a source packet from a single transmitted encoded packet.
– Decoding: a user in the target set can decode one packet immediately if the encoded packet is received successfully.
– Feedback is required.

30 Existing encoding schemes (IV): LT code.
– Uses the robust soliton degree distribution in encoding.
– No feedback is required.

31 Comparison of complexity.

Scheme                              Encoding          Decoding
LT code                             O(N)              O(N²)
Random linear network code          O(N)              O(N³)
Chunked code                        O(c)              O(c²N)
Instantly decodable network code    O(K³N²)           O(min(K,N)·N)
Optimal hitting method              O(1.238^(N+K))    O(min(K,N)·N²)
Greedy hitting method               O(K²N²)           O(min(K,N)·N²)

32 Completion time vs number of users (perfect feedback). [simulation figure]

33 Binary alphabet. [simulation figure]

34 Completion time vs number of users (lossy feedback). [simulation figure]

35 Decoding time vs number of users. [simulation figure]

36 Encoding time vs number of users. [simulation figure]

37 Hamming weight vs number of users. [simulation figure]

38 Conclusion. We investigate the generation of the sparsest innovative encoding vectors, which is proven to be NP-hard. A systematic way to generate the sparsest innovative encoding vectors is given. There is a tradeoff between encoding complexity, decoding complexity, and completion time.

