Andrea Goldsmith: Fundamental Communication Limits in Non-Asymptotic Regimes. Thanks to collaborators Chen, Eldar, Grover, Mirghaderi, Weissman.
Information Theory and Asymptopia. Capacity with asymptotically small error is achieved by asymptotically long codes. Defining capacity in terms of asymptotically small error and infinite delay is brilliant! It has also been limiting: a cause of the unconsummated union between networks and information theory. Optimal compression is based on properties of asymptotically long sequences, which leads to the optimality of separation. Other forms of asymptopia: infinite SNR, energy, sampling, precision, feedback, …
Theory vs. practice:
- Infinite blocklength codes vs. uncoded to LDPC
- Infinite SNR vs. down to -7 dB in LTE
- Infinite energy vs. finite battery life
- Infinite feedback vs. 1-bit ARQ
- Infinite sampling rates vs. 50-500 Msps
- Infinite (free) processing vs. 200 MFLOPs to 1B FLOPs
- Infinite-precision ADCs vs. 8-16 bits
What else lives in asymptopia?
Backing off from: infinite blocklength. Recent developments on finite blocklength: channel codes (capacity C as n → ∞); source codes (entropy H or rate-distortion R(D)) [Ingber, Kochman '11; Kostina, Verdu '11]. Separation is not optimal [Wang et al. '11; Kostina, Verdu '12].
Grand Challenges Workshop: CTW Maui. From the perspective of the cellular industry, the Shannon bounds evaluated by Slepian are within 0.5 dB for a packet size of 30 bits or more for the real AWGN channel at 0.5 bits/sym, for BLER = 1e-4. In this perhaps narrow context there is not much uncertainty in performance evaluations. For cellular and general wireless channels, finite-blocklength bounds for practical fading models are needed, and there is very little work along those lines. Even for the AWGN channel, the computational effort of evaluating the Shannon bounds is formidable. This indicates a need for accurate approximations, such as those recently developed based on the idea of channel dispersion.
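The channel-dispersion approximation mentioned above can be made concrete. Below is a minimal sketch, assuming the real-AWGN dispersion formula of Polyanskiy, Poor, and Verdu '10; the function names and the bisection-based Q-inverse are our own implementation choices, not part of the talk:

```python
import math

def Qinv(eps):
    """Inverse Gaussian Q-function, Q(x) = 0.5*erfc(x/sqrt(2)),
    computed by bisection to stay stdlib-only."""
    def Q(x):
        return 0.5 * math.erfc(x / math.sqrt(2))
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if Q(mid) > eps:          # Q is decreasing: need a larger x
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def awgn_normal_approx(snr, n, eps):
    """Normal approximation R ~ C - sqrt(V/n)*Qinv(eps) + log2(n)/(2n)
    for the real AWGN channel (rate in bits per channel use)."""
    C = 0.5 * math.log2(1 + snr)
    # channel dispersion in bits^2 per channel use
    V = (snr * (snr + 2)) / (2 * (snr + 1) ** 2) * math.log2(math.e) ** 2
    return C - math.sqrt(V / n) * Qinv(eps) + math.log2(n) / (2 * n)
```

At SNR = 0 dB and BLER = 1e-4, the approximation shows the sizable rate penalty at n = 100 shrinking as n grows toward capacity, which is exactly the finite-blocklength backoff discussed above.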
Diversity vs. Multiplexing Tradeoff. Use antennas for multiplexing (error-prone, high rate) or for diversity (low P_e). Diversity/multiplexing tradeoff (Zheng/Tse). What here is infinite? (The tradeoff is derived in the limit of infinite SNR.)
Backing off from: infinite SNR. High-SNR myth: use some spatial dimensions for multiplexing and others for diversity. Reality: use all spatial dimensions for one or the other*; diversity is wasteful of spatial dimensions with HARQ; adapt modulation/coding to the channel SNR. (*Transmit Diversity vs. Spatial Multiplexing in Modern MIMO Systems, Lozano/Jindal.)
Diversity-Multiplexing-ARQ Tradeoff. Suppose we allow ARQ with incremental redundancy. ARQ is a form of diversity [Caire/El Gamal 2005]. (Figure: tradeoff curves for ARQ window sizes L = 1, 2, 3, 4.)
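The two tradeoffs above can be sketched numerically. This is a minimal sketch assuming the Zheng/Tse piecewise-linear DMT through the points (k, (M-k)(N-k)) and the rule d_L(r) = d(r/L) for L ARQ rounds from the Caire/El Gamal line of work; the function names are ours:

```python
def dmt(M, N, r):
    """Zheng-Tse diversity-multiplexing tradeoff for an M x N MIMO channel:
    the piecewise-linear curve through (k, (M-k)(N-k)), k = 0..min(M, N)."""
    kmax = min(M, N)
    if not 0 <= r <= kmax:
        raise ValueError("multiplexing gain r must lie in [0, min(M, N)]")
    k = min(int(r), kmax - 1)              # segment containing r
    d_k = (M - k) * (N - k)
    d_k1 = (M - k - 1) * (N - k - 1)
    return d_k + (r - k) * (d_k1 - d_k)    # linear interpolation

def dmt_arq(M, N, r, L):
    """Diversity with L ARQ rounds (incremental redundancy): d_L(r) = d(r/L),
    so ARQ stretches the DMT curve and acts as an extra form of diversity."""
    return dmt(M, N, r / L)
```

For a 2x2 channel at multiplexing gain r = 1, the plain DMT gives diversity 1, while two ARQ rounds lift it to 2.5, matching the qualitative effect of the L = 2 curve on the slide.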
Joint Source/Channel Coding. Use antennas for multiplexing: high-rate quantizer, high-rate ST code, decoder (error-prone). Use antennas for diversity: low-rate quantizer, high-diversity ST code, decoder (low P_e). How should antennas be used? It depends on the end-to-end metric.
Joint Source-Channel Coding with MIMO. Chain: source encoder, index assignment i(·), channel encoder (s bits), MIMO channel, channel decoder, inverse index assignment j(·), source decoder. Increased rate here decreases source distortion, but permits less diversity, resulting in more errors, and maybe higher total distortion. A joint design is needed.
Relaying in wireless networks. Intermediate nodes (relays) in a route help forward a packet to its final destination (source, relay, destination). Decode-and-forward (store-and-forward) is most common: the packet is decoded, then re-encoded for transmission; this removes noise at the expense of complexity. Amplify-and-forward: the relay just amplifies the received packet, but it also amplifies noise, so it works poorly for long routes and at low SNR. Compress-and-forward: the relay compresses the received packet; used when the source-relay link is good and the relay-destination link is weak. The capacity of the relay channel is unknown: we only have bounds.
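The noise-amplification penalty of amplify-and-forward on long routes can be quantified with the standard cascaded-SNR expression (two hops combine as g1*g2/(g1+g2+1), applied recursively); a minimal sketch with hypothetical per-hop SNRs:

```python
def af_end_to_end_snr(hop_snrs):
    """End-to-end SNR of an amplify-and-forward chain.
    Two hops combine as g1*g2/(g1 + g2 + 1); applying this recursively is
    equivalent to 1 + 1/snr = prod_i (1 + 1/g_i), so each extra hop adds
    amplified noise and degrades long routes."""
    snr = hop_snrs[0]
    for g in hop_snrs[1:]:
        snr = snr * g / (snr + g + 1)
    return snr
```

With a per-hop SNR of 10, two AF hops already drop the end-to-end SNR to about 4.8 and six hops to about 1.3, while decode-and-forward is limited only by the worst single hop, consistent with the slide's point that AF works poorly on long routes.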
Cooperation in Wireless Networks. Relaying is a simple form of cooperation. There are many more complex ways to cooperate: virtual MIMO, generalized relaying, interference forwarding, and one-shot/iterative conferencing. Many theoretical and practical issues: overhead, forming groups, dynamics, full-duplex, synchronization, …
Generalized Relaying and Interference Forwarding. The relay can forward the message and/or the interference, in whole or in part; forwarded interference helps the receivers subtract it out. Two-transmitter example with a relay: Y3 = X1 + X2 + Z3 at the relay, X3 = f(Y3), Y4 = X1 + X2 + X3 + Z4 at RX1, Y5 = X1 + X2 + X3 + Z5 at RX2. Analog network coding. Much room for innovation.
Beneficial to forward both interference and message
In fact, it can achieve capacity. For large powers P_s, P_1, P_2, …, analog network coding (amplify-and-forward) approaches capacity [Maric/Goldsmith '12]. Asymptopia?
Interference Alignment. Addresses the number of interference-free signaling dimensions in an interference channel. Based on our earlier orthogonal analysis, it would appear that resources must be divided evenly, so only 2BT/N dimensions are available per user. Jafar and Cadambe showed that by aligning interference, 2BT/2 dimensions are available to each user. Everyone gets half the cake! Except at finite SNRs.
Backing off from: infinite SNR. High-SNR myth: decode-and-forward is equivalent to amplify-and-forward, which is optimal at high SNR. The noise-amplification drawback of AF diminishes at high SNR; amplify-and-forward achieves full degrees of freedom in MIMO systems (Borade/Zheng/Gallager '07); at high SNR, amplify-and-forward is within a constant gap of the capacity upper bound as the received powers increase (Maric/Goldsmith '07). Reality: optimal relaying is unknown at most SNRs. Amplify-and-forward is highly suboptimal outside the high per-node-SNR regime, which is not always the high-power or high-channel-gain regime; it has an unbounded gap from capacity in the high-channel-gain regime (Avestimehr/Diggavi/Tse '11). The relay strategy should depend on the worst link; decode-and-forward is used in practice.
Capacity and Feedback. Capacity under feedback is largely unknown: channels with memory, finite-rate and/or noisy feedback, multiuser channels, multihop networks. ARQ is ubiquitous in practice: it works well over finite-rate noisy feedback channels and reduces end-to-end delay. Why hasn't theory met practice when it comes to feedback?
PtP Memoryless Channels: Perfect Feedback. Shannon: feedback does not increase the capacity of DMCs. Schalkwijk-Kailath scheme for AWGN channels: a low-complexity linear recursive scheme that achieves capacity with double-exponential decay of the error probability.
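A minimal simulation sketch of the Schalkwijk-Kailath idea (our own simplified formulation, not the talk's: via noiseless feedback the transmitter knows the receiver's current estimate and sends the rescaled estimation error; variable names are ours):

```python
import math, random

def sk_feedback(theta, n, P, N0, rng):
    """One run of a Schalkwijk-Kailath-style scheme over an AWGN channel
    with noiseless feedback: each channel use sends the receiver's current
    estimation error rescaled to power P; the error variance contracts by
    N0/(P+N0) per use, the source of the double-exponential error decay."""
    theta_hat, var = 0.0, 1.0        # receiver estimate and its variance
    for _ in range(n):
        # transmitter knows theta_hat via the (noiseless) feedback link
        x = math.sqrt(P / var) * (theta - theta_hat)
        y = x + rng.gauss(0.0, math.sqrt(N0))
        # linear MMSE update of the estimate
        theta_hat += math.sqrt(var / P) * (P / (P + N0)) * y
        var *= N0 / (P + N0)
    return theta_hat
```

With P/N0 = 10, thirty channel uses shrink the error variance by a factor of 11^30, so the receiver's estimate pins down the message point essentially exactly; mapping a finite message set onto theta then gives the double-exponential error probability quoted above.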
Backing off from: perfect feedback. [Shannon '59]: no feedback. [Pinsker, Gallager et al.]: perfect feedback (infinite rate, no noise). [Kim et al. '07/'10]: feedback over an AWGN channel. [Polyanskiy et al. '10]: noiseless feedback reduces the minimum energy per bit when nR is fixed and n → ∞.
Gaussian Channel with Rate-Limited Feedback. Objective: choose the encoder and the feedback module to maximize the decay rate of the error probability. Constraint: feedback is rate-limited but noiseless.
A super-exponential error probability is achievable if and only if …. Otherwise the error exponent is finite but higher than the no-feedback error exponent. …: double-exponential error probability. …: L-fold exponential error probability.
Feedback under Energy/Delay Constraint. An m-bit encoder and decoder communicate over the forward channel, with an m-bit encoder/decoder pair on the feedback channel. The decoder sends back … with energy …. If …, the encoder sends a termination alarm; otherwise, it resends with energy …. If the termination alarm is received, the decoder reports … as the decoded message. Objective: choose … to minimize the overall probability of error, subject to the energy/delay constraints.
Feedback Gain under Energy/Delay Constraint. The gain depends on the error-probability model ε(·). Exponential error model, ε(x) = βe^(−αx): applicable when transmit energy dominates; feedback gain is high if the total energy is large enough; no feedback gain for energy budgets below a threshold. Super-exponential error model, ε(x) = βe^(−αx²): applicable when transmit and coding energy are comparable; no feedback gain for energy budgets above a threshold.
Backing off from: perfect feedback. Memoryless point-to-point channels: capacity is unchanged by perfect feedback; a simple linear scheme improves the error exponent (Schalkwijk-Kailath: double-exponential error decay); feedback reduces energy consumption. The capacity of feedback channels is largely unknown: unknown for general channels with memory and perfect feedback; unknown under finite-rate and/or noisy feedback; unknown in general for multiuser channels; unknown in general for multihop networks. ARQ is ubiquitous in practice: it assumes channel errors, works well over finite-rate noisy feedback channels, and reduces end-to-end delay.
How to use feedback in wireless networks? Output feedback, channel state information (CSI), acknowledgements, something else? Noisy/compressed feedback. Interesting applications to neuroscience.
Backing off from: infinite sampling. A sampling mechanism (rate f_s) creates a new channel. For a given sampling mechanism (i.e., a new channel): What is the optimal input signal? What is the tradeoff between capacity and sampling rate? Which known sampling methods lead to the highest capacity? What is the optimal sampling mechanism among all possible (known and unknown) sampling schemes?
Capacity under Sampling with a Prefilter. Theorem: the channel capacity is given by the folded SNR filtered by S(f), determined by water-filling, which suppresses aliasing.
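The water-filling step can be illustrated over discrete parallel subchannels, as a stand-in for the frequency-domain allocation in the theorem; the noise levels below are hypothetical:

```python
import math

def waterfill(noise_levels, total_power):
    """Classic water-filling over parallel Gaussian subchannels: allocate
    p_i = max(mu - n_i, 0) with sum p_i = total_power, then
    capacity = sum 0.5*log2(1 + p_i/n_i) bits per (real) channel use."""
    levels = sorted(noise_levels)
    # find the water level mu by trying k = number of active subchannels
    mu = 0.0
    for k in range(1, len(levels) + 1):
        mu_try = (total_power + sum(levels[:k])) / k
        if k == len(levels) or mu_try <= levels[k]:
            mu = mu_try              # next-cheapest subchannel stays dry
            break
    powers = [max(mu - n, 0.0) for n in noise_levels]
    cap = sum(0.5 * math.log2(1 + p / n)
              for p, n in zip(powers, noise_levels))
    return powers, cap
```

For noise levels [1, 2, 4] and total power 3, the water level settles at 3: the noisiest subchannel gets no power at all, which is the same mechanism by which water-filling suppresses badly aliased bands in the sampled channel.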
Capacity not monotonic in f_s. Consider a sparse channel: capacity is not monotonic in f_s! Single-branch sampling fails to exploit the channel structure.
Filter Bank Sampling. Theorem: capacity of the sampled channel using a bank of m filters with aggregate rate f_s. Similar to MIMO; no combining!
Equivalent MIMO Channel Model. Theorem 3: the channel capacity of the sampled channel using a bank of m filters with aggregate rate f_s is obtained, for each f, by water-filling over the singular values (MIMO decoupling after pre-whitening).
Joint Optimization of Input and Filter Bank. Select the m branches with the m highest SNRs. Example (bank of 2 branches): keep the highest-SNR and second-highest-SNR branches; discard the low-SNR branches. Capacity is then monotonic in f_s. Can we do better?
Sampling with Modulator + Filter (1 or more branches). Theorem: sampling with a bank of modulators + filters equals filter-bank sampling equals single-branch sampling (modulator + filter) in achievable capacity. Theorem: optimal among all time-preserving nonuniform sampling techniques of rate f_s.
Power consumption via a network graph: power is consumed in nodes and wires. Extends early work of El Gamal et al. '84 and Thompson '80.
Fundamental area-time-performance tradeoffs. For encoding/decoding good codes, stay away from capacity! Close to capacity we need: large chip area (area occupied by wires), more time (encoding/decoding clock cycles), and more power.
Total power diverges to infinity! Regular LDPCs come closer to the bound than capacity-approaching LDPCs! We need novel code designs with short wires and good performance.
Conclusions. Information-theory asymptopia has provided much insight and decades of sublime delight to researchers. Backing off from infinity is required for some problems, to gain insight and fundamental bounds. New mathematical tools, and new ways of applying conventional tools, are needed for these problems. Many interesting applications in finance, biology, neuroscience, …