
1 Quantization and compressed sensing Dmitri Minkin

2 Contents: Sigma-Delta Quantization for Compressed Sensing; Quantization of Sparse Representations – Petros Boufounos, Richard G. Baraniuk

3 Sigma-Delta Quantization: Introduction. The goal: a hardware implementation of a compressed sensing A/D converter. Explore: a Sigma-Delta quantizer with a random dictionary.

4 Traditional A/D. Difficult: tight specifications for the anti-aliasing LPF (analog) and the precision quantizer (digital).

5 Traditional Sigma-Delta A/D. Pros: coarse quantizer. Cons: C/D at a high oversampling rate; LPF/downsampling in the digital domain. Trade-off: quantization accuracy vs. sampling rate.

6 Standard CS A/D. Random projections (analog) require a long switched-capacitor filter; the precision quantizer (digital) is hard to implement.

7 Proposed Sigma-Delta CS A/D. Advantages: coarse quantizer (analog) => short filter; random projections => LPF/downsampling in the digital domain; same performance as a fine quantizer at the Nyquist rate.

8 Proposed Sigma-Delta Quantizer Architecture (block diagram: Sigma-Delta quantizer followed by random projections).

9 Compressed Sensing Overview. The signal is expressed in the sampling basis b_n; the signal is K-sparse in a basis {s_k}; the signal is K-compressible if it is well approximated by K coefficients.

10 Compressed Sensing Overview. Sampling with measurement vectors {u_k}; x_n and u_{k,n} are the coefficients in the b_n basis. Synthesis of y from the dictionary f_n.
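
For reference, the standard CS sampling and synthesis model this slide describes is usually written as

y_k = \langle x, u_k \rangle = \sum_n x_n\, u_{k,n}, \qquad y = \sum_n x_n\, f_n ,

where f_n collects the n-th coefficients of all the measurement vectors (the n-th column of the measurement matrix); the slide's own equation is an image and may differ in notation.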

11 CS Overview – the RIP Property. y is sufficient to recover x if the dictionary f satisfies the RIP of order K with constant δ_K. Remark: a robust, efficient solution requires RIP of order 2K or stricter.
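
For reference, the standard RIP of order K with constant \delta_K reads

(1 - \delta_K)\,\|x\|_2^2 \;\le\; \|F x\|_2^2 \;\le\; (1 + \delta_K)\,\|x\|_2^2 \quad \text{for every } K\text{-sparse } x ,

where F denotes the dictionary; the slide's own formula is an image and may use different symbols.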

12 Back to Sigma-Delta: Noise. Iteratively obtain x'_n; the non-linear quantization produces an error; the subsequent coefficients are updated: p coefficients at each time instant n, i.e. a memory of length p, the quantizer order (a loop sketch follows).
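
A minimal sketch of such an order-p feedback loop, assuming per-step feedback coefficients c[n, i] that spread the quantization error of sample n over the next p samples; the names, sign conventions, and update rule are illustrative, not taken from the paper:

import numpy as np

def sigma_delta_order_p(x, c, quantize):
    # x: input coefficients; c: (N, p) feedback coefficients; quantize: coarse quantizer
    N, p = len(x), c.shape[1]
    s = np.zeros(N)                    # accumulated compensation per sample
    xq = np.zeros(N)                   # quantized outputs x'_n
    for n in range(N):
        v = x[n] + s[n]                # input plus compensation from past errors
        xq[n] = quantize(v)
        e = xq[n] - v                  # quantization error at step n
        for i in range(1, p + 1):      # spread the error over the next p samples
            if n + i < N:
                s[n + i] -= c[n, i - 1] * e
    return xq

# Example: a 1-bit coarse quantizer
# xq = sigma_delta_order_p(x, c, quantize=np.sign)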

13 Proposed Sigma-Delta Quantizer Architecture (block diagram).

14 Design Problem Definition. Total quantization error: select the feedback coefficients c_{n,n+i} to reduce the total error ε, subject to hardware and stability constraints.

15 Error Models. White random error with variance σ_e²; minimize the total error power. Upper bound of the total error from Eq. (5). At each step, select the coefficients that minimize the incremental error e_n.

16 Optimization Problem. Minimize the error; in effect, find the projection of f_n onto Span{f_{n+1}, …, f_{n+p}}. A "close" solution produces a small residual, assuming ||f_i|| = 1. Hardware and stability often necessitate a different approach (a least-squares sketch follows).
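
A minimal sketch of that unconstrained projection, assuming the frame (dictionary) vectors are the columns of a matrix F; the helper name is hypothetical:

import numpy as np

def unconstrained_feedback_coeffs(F, n, p):
    # Project f_n onto span{f_{n+1}, ..., f_{n+p}} by least squares:
    # minimize || f_n - A c ||_2 over c, where A holds the next p frame vectors.
    f_n = F[:, n]
    A = F[:, n + 1 : n + 1 + p]
    c, *_ = np.linalg.lstsq(A, f_n, rcond=None)
    residual = f_n - A @ c             # a small residual means good error compensation
    return c, residual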

17 Sigma-Delta Design. Digitally compute the random projections from x'_n; change c_{n,n+i} in the analog feedback loop in accordance with the random dictionary to minimize the error.

18 Quantizer Order and RIP. Substitute the vectors {f_{n+1}, …, f_{n+p}} and the residual into Eq. (8). RIP of order K with constant δ_K guarantees that a Sigma-Delta quantizer of order p ≤ K-1 is not effective; thus RIP forces p ≥ K.

19 Sigma-Delta Stability. Quantizer saturation level: stability requires |x_in| < saturation level. Thus, from (4), x'_n does not saturate if:

20 Sigma-Delta Stability. Define s_max = max_n(s_n). Increasing s_max forces an increase in q_max, i.e. the dynamic range of the quantizer. This is the worst case; it is often violated for design flexibility, accepting a small probability of overflow.

21 Randomized Dictionaries. Assume the dictionary has no structure; then there is no simplification in the feedback coefficients c_{n-i,n}: c_{n-i,n} changes at a very high rate and has a continuous range, requiring precise tunable elements, multipliers with a highly linear response, and very short settling times.

22 Practical Alternatives for a Random Dictionary. Restrict c_{n-i,n} to the set {-c, 0, c} or {-Lc, …, -c, 0, c, …, Lc}. If the dictionary is not produced at runtime, c_{n-i,n} can be calculated at design time.

23 Reminder. Minimize the error, subject to the stability constraint and, possibly, the coefficient restriction {-Lc, …, -c, 0, c, …, Lc}.

24 Greedy Optimization. An iterative algorithm that uses the history of coefficients c_{n-i,m}, i = 1..p, i < m ≤ n. Output: the set of p coefficients for the next time n+1, c_{n-i+1,n+1}, i = 1..p. Run at every random generation of a dictionary vector f_{n+1}.

25 Greedy Optimization. r_{l,n}: the residual error vector after using {f_{n+1}, …, f_{n+p}} to compensate for the quantization error of x_n, corresponding to frame vector f_l. Note: this is the residual to be minimized.

26 Greedy Optimization Solution. Define the incremental error, use the additive noise model, and perform the unconstrained minimization.

27 Unconstrained Solution Difficulties. No stability guarantee; arbitrary coefficients are hard to implement. It can still provide a lower bound on performance. Solving with the stability constraint at every step n is hard.

28 Constrained Coefficients. Restrict c_{n-i,n} to {-c, 0, c}; this can be implemented with an inverter, an open circuit, or a unity gain, followed by a constant gain c. (Circuit diagram labels: MUX with control input e_{n-1}, summing node ∑, inputs from c_{n-1,n} and c_{n-2,n}, gain c.)

29 Constrained Coefficient Values. Task: minimize the residual error at n+1 compared with n. The stability constraint forces only part of the coefficients to be non-zero.

30 Constrained Coefficient Values. Choose the coefficients that contribute the largest improvement in the power of the residual errors. Extension: the coefficients take values in {-Lc, …, -c, 0, c, …, Lc} (a greedy sketch follows).
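
A minimal sketch of this greedy selection, assuming a residual error vector r is to be cancelled using the next p frame vectors (columns of F_next) with coefficients restricted to {-Lc, …, -c, 0, c, …, Lc}; the stability constraint is modelled simply as a cap on the number of non-zero coefficients, and all names and the stopping rule are illustrative:

import numpy as np

def greedy_restricted_coeffs(r, F_next, c, L=1, max_nonzero=None):
    p = F_next.shape[1]
    max_nonzero = p if max_nonzero is None else max_nonzero
    levels = np.concatenate([np.arange(-L, 0), np.arange(1, L + 1)]) * c
    coeffs = np.zeros(p)
    for _ in range(max_nonzero):
        best_gain, best_j, best_v = 0.0, None, 0.0
        power = r @ r                              # current residual power
        for j in range(p):
            if coeffs[j] != 0.0:
                continue                           # set each coefficient at most once
            for v in levels:
                r_new = r - v * F_next[:, j]
                gain = power - r_new @ r_new       # improvement in residual power
                if gain > best_gain:
                    best_gain, best_j, best_v = gain, j, v
        if best_j is None:
            break                                  # no level improves the residual
        coeffs[best_j] = best_v
        r = r - best_v * F_next[:, best_j]
    return coeffs, r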

31 Experimental Setup. Measure: MSE and E[s_n]. Parameters: r – frame redundancy; M – frame support; p – feedback order; c – constrained coefficient value. Compare: (un)constrained s_n (stability) and (un)restricted coefficients ({-c, 0, c}).

32 Experimental Performance: Unrestricted & Unconstrained (plot; const P = 8M).

33 Experimental Performance: Unrestricted & Unconstrained (plot; const M = 64).

34 Experimental Performance: Restricted & Unconstrained (plot; the unrestricted case is shown for comparison).

35 Experimental Performance: Restricted & Constrained (plot; S_proj denotes the unrestricted & unconstrained case).

36 Conclusions. Hardware complexity is reduced by adapting the feedback coefficients to a dynamically, randomly generated dictionary. A high-order feedback loop is required by RIP, but the order is low compared with other implementations. The algorithms guarantee a stable feedback loop, and the quantizer does not exceed its dynamic range.

37 Quantization of Sparse Representations – Petros Boufounos, Richard G. Baraniuk

38 Quantization of Sparse Representations – Introduction. The goal: examine the effect of quantization of random CS measurements. Conclusion: CS with scalar quantization does not use its allocated rate efficiently.

39 Signal Model. S – the signal space, dim(S) = N. Sampling: basis, space, operator, and (non)linear reconstruction; y – the vector of sampling coefficients.

40 Quantization. M sampling coefficients, each quantized with L quantization levels; L^M possible quantization points; B ≥ M log₂ L bits if there is no subsequent entropy coding. For simplicity: L = 2^{B/M} (a quantizer sketch follows).
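
A minimal sketch of the scalar quantizer this implies, assuming a uniform quantizer with L = 2^{B/M} levels over a fixed range [-y_max, y_max]; the range and rounding convention are illustrative:

import numpy as np

def uniform_scalar_quantize(y, B, M, y_max=1.0):
    L = int(round(2 ** (B / M)))                 # levels per coefficient
    delta = 2 * y_max / L                        # cell width
    idx = np.clip(np.floor((y + y_max) / delta), 0, L - 1)
    return -y_max + (idx + 0.5) * delta          # map each coefficient to its cell center

# Example: M = 2 coefficients, B = 4 bits total -> L = 4 levels each
# yq = uniform_scalar_quantize(np.array([0.3, -0.7]), B=4, M=2)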

41 Quantization Error. The quantization levels define the quantization cells; a vector y falling inside a cell is quantized to the cell's center. Define the error:
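
For reference, for a uniform scalar quantizer with step \Delta the standard definitions are

\varepsilon = y - Q(y), \qquad \|\varepsilon\|_\infty \le \Delta / 2 ,

with Q(y) mapping y to the center of its cell; the slide's own equation is an image and may differ.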

42 Example of Uniform Scalar Quantization, M = 2, L = 4 (figure).

43 Recovery of a Sparse Signal. x is K-sparse; the dictionary satisfies the RIP with high probability if random measurements are used. If the RIP is satisfied, reconstruction (ℓ1 minimization) recovers x; if the RIP holds and the measurement error is bounded, the reconstruction error is bounded as well (a recovery sketch follows).
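
A minimal sketch of the ℓ1 (basis-pursuit) reconstruction referred to here, using the standard LP split x = u - v with u, v ≥ 0; in practice the equality constraint is relaxed to a norm bound to account for the quantization error, and all names are illustrative:

import numpy as np
from scipy.optimize import linprog

def l1_recover(Phi, y):
    # min ||x||_1  subject to  Phi @ x = y,  via x = u - v with u, v >= 0
    N = Phi.shape[1]
    cost = np.ones(2 * N)                       # sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])               # Phi @ (u - v) = y
    res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
    return res.x[:N] - res.x[N:]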

44 Quantization of Subspaces. A K-dimensional subspace of W (dim(W) = M) intersects a number of cells:

45 A 1-dimensional subspace H intersecting the L^M = 8² quantization cells (figure).

46 Quantization of a Sparse Signal. For K-sparse signals in S, the sampling produces at most C(N, K) subspaces W_i with dim(W_i) ≤ K, which intersect a number of cells:

47 Quantization of a Sparse Signal – Assumption. The K components of the sparse signal are uniformly chosen from the N possible components. To minimize e_max, distribute the cells equally among the subspaces; I_0 points are used to encode each subspace (a counting example follows).
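
Under that assumption, the number of K-sparse supports and the bits needed just to encode which support occurred can be checked directly; the sizes below are illustrative, not taken from the slides:

from math import comb, log2

N, K = 1024, 16
n_supports = comb(N, K)              # number of possible K-sparse subspaces
support_bits = log2(n_supports)      # bits needed to encode the chosen support
print(f"{n_supports} supports, about {support_bits:.1f} bits for the support alone")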

48 Rate Penalty for CS. B bits are sent, but only log₂(I_N) bits are used. Quantization efficiency:

49 Rate Penalty for CS. Substituting the RIP requirement into (9) shows that the efficiency is small.

50 Rate Penalty at a Fixed Bit-Rate. B – the number of bits sent (held constant).

51 Increasing the Rate Efficiency to 1. Task: B = log₂(I_N). Theoretical solution: subsequent lossless entropy coding, which is not currently available for CS measurements. Problem: the randomness of the sampling dictionary, which gives the universality property.

52 Error Bounds. Signal assumptions: K-sparse, power limited. Use a vector quantizer that partitions the space into P cells. Worst-case error and minimum MSE follow. Note: the K sparse coefficients are assumed to be uniformly distributed among the N positions.

53 Lower Bound on the Quantization Error. Equally distribute the quantization points across all the subspaces. At a constant bit-rate B, a number of quantization points is allocated per subspace, and the remaining bits are used to encode the subspace. Substitute P into (3) and apply Stirling's formula.

54 Vector vs. Scalar Quantizer Error Bounds. Lower error bound, vector quantizer; lower error bound, scalar quantizer.

55 Scalar Quantizer Efficiency. Reduced sampling, followed by scalar quantization with L = 2^{B/M} levels, intersects only I_0(M, K, L) cells. Applying the RIP: M = cK log₂(N/K) (a worked example follows).
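
As a worked example of that substitution with illustrative values; the constant c below is an assumption, not taken from the slides:

from math import ceil, log2

N, K, c = 1024, 16, 4                # illustrative values; c is an assumed RIP constant
M = ceil(c * K * log2(N / K))        # RIP-motivated number of measurements
print(f"M = {M} measurements for N = {N}, K = {K}")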

56 CS Linear-Program Reconstruction Error Bounds. This is worse than a uniform quantization interval at rate B.

57 Comparing the Terms. 2^{B/M} – encoding the M-dimensional space instead of the K-dimensional one; N/K – CS does not have to encode the locations of the sparse components; √M – the vector vs. scalar space advantage; log₂(N/K) – inefficient recovery due to universality.

58 Compressible Signals. Signal x from S^N and K-sparse: x lies inside an ℓ_p ball. The quantization error from the previous article [5]: as p→0 the error →0, so a tighter bound is (14):

59 Error Bound at Small p. As p→0, the signal x converges to a 1-sparse signal. Property: the error bound is non-decreasing in p; but as B increases, bound (19) becomes tighter.

60 Conclusions. CS of sparse signals followed by scalar quantization is inefficient in both rate utilization and error performance. Reason: universality requires the quantization cells to uniformly cover the sampling space, while the signal occupies only a subspace of it; reconstruction uses linear programming. Possible solutions: vector quantization; reconstruction that is consistent with the quantized measurements.

61 Thank you. Questions?

