Multipoll, FEC, Persistence, Portals


1 Multipoll, FEC, Persistence, Portals
July 2001
Greg Chesson and Aman Singla, Atheros

2 Multipoll: poll description
CF-Poll
  Both downlink and uplink data movement are provided
  Consider only the uplink, because Multipoll does not provide downlink
Uplink sequences (Pn denotes the nth poll, Ui the ith uplink frame):
  P1 <SIFS> U1 <SIFS> P2 <SIFS> U2 <SIFS> CF-End
  P1 <SIFS> U1 <SIFS> P2 <no response> <PIFS> P3 <SIFS> U3 <SIFS> CF-End
  Pi contains the ACK for Ui-1
Airtime for N polls and uplink frames:
  N * (frame header + SIFS) + N * (uplink frame + SIFS) + CF-End
  Assume headers are sent at the low PHY rate
  Assume uplink frames are sent at a higher PHY rate when possible
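The airtime formula above can be sketched as a small Python model. The SIFS, poll, and CF-End durations below are placeholder assumptions for illustration, not values from the slides.

```python
# Illustrative airtime model for the CF-Poll uplink sequence.
# All durations are in microseconds; the constants are assumed values.

SIFS_US = 16      # 802.11a SIFS
POLL_US = 52      # poll frame airtime at the low PHY rate (assumed)
CF_END_US = 52    # CF-End airtime (assumed)

def cf_poll_airtime(n_stations, uplink_frame_us):
    """N * (poll + SIFS) + N * (uplink frame + SIFS) + CF-End."""
    return (n_stations * (POLL_US + SIFS_US)
            + n_stations * (uplink_frame_us + SIFS_US)
            + CF_END_US)

print(cf_poll_airtime(8, 450))  # 8 polled stations, ~450 us uplink frames
```

Because the coordinator recovers after PIFS when a station does not respond, a non-response costs only one poll plus a PIFS rather than a full slot.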

3 Multipoll: multipoll description
CF-Multipoll
  Uplink or sidelink only: assigns N fixed-duration TxOPs to stations
  Each station transmits after the previous station's TxOPLimit (+SIFS)
  The entire TxOPLimit period is always used, even for short uplink frames
    ("use all the time, all the time")
  Non-response by a station consumes the entire TxOPLimit slot
    (not so for CF-Poll)
Multipoll sequences:
  MP[1-N] <SIFS> U1 <SIFS> U2 <SIFS> <no response> <SIFS> U4   // U3 damaged
  MP[1-N] *********************************************        // damaged MP
Airtime for N polls:
  MP header + NStations*4 + N * (TxOPLimit + SIFS)
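The multipoll airtime formula on this slide can be sketched the same way. The header airtime and per-octet cost are placeholder assumptions; only the formula's shape comes from the slide.

```python
# Illustrative airtime model for CF-Multipoll:
#   MP header + NStations * 4 octets of per-station fields
#   + N * (TxOPLimit + SIFS)
# Durations in microseconds; constants are assumed values.

SIFS_US = 16
MP_HEADER_US = 52   # Multipoll frame header airtime (assumed)
OCTET_US = 1.33     # airtime per octet at 6 Mb/s (assumed)

def multipoll_airtime(n_stations, txop_limit_us):
    per_station_fields = n_stations * 4 * OCTET_US  # 4 octets per station
    return (MP_HEADER_US + per_station_fields
            + n_stations * (txop_limit_us + SIFS_US))

print(multipoll_airtime(8, 500))
```

Note that the cost scales with TxOPLimit rather than with the actual uplink frame length: a short frame, or no response at all, still consumes the whole allocated slot.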

4 Multipoll Timing at 6 Mb/s
[Figure: airtime in seconds vs. number of polled stations at 6 Mb/s,
comparing Poll and Multipoll for large (2 KB) and small (100 B) frames]

5 Multipoll Timing at 36 Mb/s
[Figure: airtime in seconds vs. number of polled stations at 36 Mb/s,
comparing CF-Poll and Multipoll for large (2 KB) and small (100 B) frames]

6 Multipoll comparison
Airtime improvement (reduction) from using multipoll instead of polls
  Assumes no errors and no wasted airtime
  Improvement is real, but small
Penalties for using multipoll
  The entire allocated timeslot is wasted on non-responding stations
    Expect this to happen on channel errors where the MP is not received by all
    Not a problem with poll: the coordinator recovers after PIFS if there is no response
  A TxOPLimit allocation must be set and processed for each timeslot
    Ongoing complexity for the HC and stations (an ESTA must parse the entire MP frame)
    No variable timing or state is needed with poll (one poll rather than a variable N)
  The unused portion of a timeslot is dead airtime (not reclaimed)
    Creates vulnerability to OBSS, DCF, and ad hoc stations
    Can become less efficient than per-station polling (which has no similar dead time)
    Not a problem with poll: the uplink uses airtime only as needed
  ACK policy
    Not part of the multipoll protocol; must become part of the uplink handshake or
    come from a different ACK mechanism, introducing inefficiency and delay either way
    Not a problem with poll: ACKs are built into the sequence of poll messages

7 Multipoll Resolution
Multipoll as proposed does not provide broad efficiency improvements
Multipoll as proposed has numerous penalties
Multipoll, even without channel errors, can be less efficient than poll
Proposal:
  Delete Multipoll and all references to it
  Use CF-Poll instead of Multipoll
  Allow CF-Poll syntax for use by the HC at any time,
  subject to channel access rules for the HC (when they are finalized)

8 FEC observations
Raw channel data: packet error rate (PER) as a function of (range, PHY rate, frame size)
  Single transmitter, no interference; packet losses caused by the channel
  PER increases with higher PHY rates, distance, and larger frames
  11a and 11b systems show similar behavior
  Knee of the PER curve at approximately 10% PER, except for the lowest PHY rate
[Figure: PER vs. distance, rising from roughly 10% at the knee to 90%]

9 FEC successful application
FEC in satellite/broadcast downlinks
  PER of 10^-6 is needed for high-quality MPEG-2 video
  Retransmission is not possible: it is a broadcast-only downlink
  FEC must succeed (reduce PER to 10^-5 or better)
  Per-frame FEC reduces PER by approximately 10^-2 (only)
  Therefore, conservative (slow) PHY rates must be chosen
    The raw downlink PHY must have approximately 1% PER (before FEC)
    FEC then reduces the packet loss rate to 10^-4 or 10^-5
Note: high-quality video
  8 Mb/s base rate; may burst to 12 Mb/s or so
  Needs 500 frames/sec if 2 KB frames are used
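The satellite budget above is simple arithmetic and can be checked directly; the numbers are the slide's own, nothing new is introduced.

```python
# Back-of-the-envelope check of the satellite FEC budget.
raw_per = 0.01                   # ~1% raw PER from a conservative PHY rate
fec_gain = 1e-2                  # per-frame FEC improves PER by ~10^-2
post_fec_per = raw_per * fec_gain
print(post_fec_per)              # ~1e-4, inside the 10^-4 .. 10^-5 target

frames_per_sec = 8e6 / (2 * 1024 * 8)  # 8 Mb/s stream carried in 2 KB frames
print(frames_per_sec)            # ~488, close to the quoted 500 frames/sec
```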

10 FEC TGe draft proposal
FEC in the 802.11 MAC
  Expect 10^-1 (10%) or higher packet error rates (PER) for 11a and 11b
    Based on measurement and expected practice (except at the lowest PHY rates)
    Much higher expected PER than in satellite applications
  Expect 10^-2 to 10^-3 PER after FEC
    May need 5 retransmissions/sec for an 8 Mb/s video stream (!)
    Retransmissions will contribute to increased jitter
  FEC overhead is 10% of payload (redundancy bytes added to header and payload)
    Plus extra TxOP and airtime for the Delayed Ack and its Ack
    Plus the residual retransmissions
    Unless a slow PHY rate (with 1% PER) is selected (unlikely)
  This is a lot of overhead when compared to retransmission
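The same arithmetic applied to the 802.11 case reproduces the retransmission rate quoted above; again, all inputs come from the slide.

```python
# With 10^-2 .. 10^-3 residual PER after FEC, an 8 Mb/s stream of
# 2 KB frames still generates visible retransmission traffic.
frames_per_sec = 8e6 / (2 * 1024 * 8)     # ~488 frames/sec
for residual_per in (1e-3, 1e-2):
    retx_per_sec = frames_per_sec * residual_per
    print(residual_per, retx_per_sec)     # ~0.5/sec up to ~5/sec
```

The upper end of the range matches the "5 retransmissions/sec" figure, and each of those retransmissions still pays the Delayed-Ack and jitter penalties described above.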

11 FEC implementation issues
Optimistic/pipelined decode
  The header can be validated in pipelined fashion
  The number of errored blocks can be determined in near real time
  An ACK can be generated in SIFS time, knowing that correction will succeed
Delayed ACKs
  When do you know that a Delayed Ack is not forthcoming? I.e., when do we retransmit?
    Adds even more delay jitter
  The sending MAC must maintain state for each non-Acked packet
    How many such packets? For how long?
  The receiving MAC needs out-of-order assembly buffers
    Size is related to the delay-bandwidth product of Delayed Acks
    Size is related to the number of Delayed-Ack streams
  What is a minimum interoperable implementation?
    Needs TCP-like sophistication to select good parameters
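The delay-bandwidth point above can be made concrete with a rough sizing sketch. The stream rate and ACK turnaround below are illustrative assumptions, not values from the draft.

```python
# Rough sizing of the out-of-order assembly buffer: it scales with the
# delay-bandwidth product of the Delayed-Ack loop (per stream).
stream_rate_bps = 8e6   # one 8 Mb/s video stream (assumed)
ack_delay_s = 0.100     # assumed worst-case Delayed-Ack turnaround
buffer_bytes = stream_rate_bps * ack_delay_s / 8
print(buffer_bytes)     # ~100 KB of assembly buffer per Delayed-Ack stream
```

Multiply by the number of concurrent Delayed-Ack streams and the receiver-side memory requirement grows quickly, which is why a minimum interoperable implementation is hard to pin down.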

12 FEC compared to retransmission
FEC
  >10% channel overhead for redundant information
  Will also need retransmission (unless a low PHY rate is used)
  Will need TCP-like out-of-order assembly and controls
  Does not improve jitter
Retransmission
  Channel overhead is a function of PER, not a constant 10%
  In-order delivery, no complex state
  Has less overhead than FEC for practical PERs
Conclusion
  FEC may not always be the best procedure
  FEC may be useful at low PHY rates for broadcast/multicast
  Delayed ACKs have significant unresolved issues
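A minimal model makes the "overhead is a function of PER" point quantitative. It assumes independent frame errors and per-frame retransmission (a simplification, not the draft's exact mechanism), and ignores FEC's own residual retransmissions and Delayed-Ack traffic, which would only widen the gap in retransmission's favor.

```python
# Expected extra transmissions per delivered frame under independent
# errors with retransmit-until-success: p / (1 - p) for PER p,
# versus FEC's constant ~10% redundancy overhead.
def retx_overhead(per):
    """Expected retransmissions per successfully delivered frame."""
    return per / (1.0 - per)

for per in (0.01, 0.05, 0.10):
    print(per, retx_overhead(per))
# Retransmission overhead stays below FEC's 10% floor until PER
# exceeds roughly 9% (p / (1 - p) = 0.1 at p ~= 0.091).
```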

13 FEC resolution
Retain FEC
  Define a Capability bit and an FEC-encoded bit
Remove Delayed Acks
Proceed with two motions and editing instructions from document 01/458

14 Persistence Factor simulation models
Must include a channel error model
  Channel measurements show that PER will be significant
  Simulated latency-jitter-bandwidth numbers are meaningless without PER
    Especially jitter
Should incorporate a 10% channel error rate for realism
  Based on 11a/11b measurements
  Based on expectations of rate adaptation implementations
Demonstration: observe media simulations with and without 10% PER

15 Persistence Simulation
Scenario: 4 phones, 2 video, 10 background, 36 Mb/s (packet loss only from collisions)
[Figure: per-flow bandwidth for video, phone, and background flows,
with the background loads removed partway through the run]

16 Persistence
Same scenario, adding 10% PER
[Figure: bandwidth glitches caused by retransmission backoff delay
when the background load is applied]

17 Persistence (showing latency distribution per flow)
Not good:
  Phones: mean = 1.5 ms, deviation = 5+ ms
  Videos: mean = 15+ ms, deviation = 30 ms

18 Persistence
Backoff-related jitter demonstrated with PER
  An "optimistic" (no PER) simulation suggests that life is wonderful
  A "realistic" (PER) simulation shows unacceptable jitter for media streams
Proposed solutions (both in the TGe draft proposal)
  Persistence Factor
    Defines a fractional exponent per TC instead of binary exponential backoff
    High-priority traffic would increase its backoff window at a slower (fractional) rate
    Implementation complexity: must compute uniform distributions over
    non-power-of-two windows (considered costly)
    Good for the stream (slow rate of backoff increase)
    Bad for the channel (can increase congestion because of slow backoff increases)
  aCWMax[TC]
    Already present in the MIB
    Provides an upper bound for the backoff window based on traffic category
    Retains the simple power-of-two computations and scaling from existing MACs
    Good for the stream (limits the backoff window to an upper bound)
    Good for the channel (fast backoff increase)
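The two growth policies can be sketched side by side. CWMIN, the per-TC cap, and the persistence factor below are illustrative values, not numbers from the TGe draft.

```python
# Sketch of the two backoff-window growth policies. Windows in slots.
CWMIN = 15
CWMAX_TC = 63   # assumed aCWMax[TC] for a high-priority traffic category
PF = 1.25       # assumed fractional persistence factor (< 2)

def capped_binary_backoff(retry):
    """Standard power-of-two doubling, bounded by aCWMax[TC]."""
    return min((CWMIN + 1) * 2 ** retry - 1, CWMAX_TC)

def persistence_backoff(retry):
    """Window grows by a fractional factor per retry instead of doubling."""
    cw = CWMIN
    for _ in range(retry):
        cw = int((cw + 1) * PF) - 1
    return cw

for r in range(5):
    print(r, capped_binary_backoff(r), persistence_backoff(r))
```

With these values, doubling reaches the 63-slot cap by the second retry and stays there (fast backoff to a limit), while the fractional window grows slowly (15, 19, 24, 30, ...), which favors the stream but keeps contention pressure on the channel.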

19 Persistence
Same scenario, using CWMax[TC] to bound the retransmission window for video and voice traffic
[Figure: bandwidth aberrations removed; lower background bandwidth peaks]

20 Persistence
Improved latency and jitter for media flows: retransmission jitter becomes negligible
  Phones: before: mean = 1.5 ms, deviation = 5+ ms
          after:  mean = [0.95, 1.6] ms, deviation = [0.7, 1.98] ms
  Videos: before: mean = 15+ ms, deviation = 30 ms
          after:  mean = [1.8, 2.3] ms, deviation = [1.7, 2.2] ms

21 Persistence Resolution
Both mechanisms are in the draft standard
Both mechanisms solve the problem (reduce backoff-related jitter)
aCWMax[TC]
  Is better from a channel perspective (fast backoff to a limit)
  Is less complex (uses the existing CWMin/CWMax and random generator)
Persistence Factor
  Not as good from a channel perspective (can increase contention)
  Is more complex (adds fractional scaling, requires a new random generator)
Proposal
  Remove the redundant mechanism (Persistence Factor)
  Move aCWMax[TC] from the MIB to the QoS Parameter Set element as CWMax[TC]

22 Bridge Portals: state of the art
Useful concept
Non-802.11 bridges define unique protocols for discovery, routing, announcement, etc.
  CableLabs-defined QoS bridges invoke RSVP/SBM
  IPv6 provides the Neighbor Discovery protocol and messages
  Bluetooth has the Service Discovery Protocol
  Universal Plug and Play (UPnP) is a candidate discovery layer
Should 802.11 define Yet Another Protocol (YAP) for discovery? No compelling reason
  Unlikely to improve on existing art
  Very unlikely to replace existing art
Should 802.11 be usable with existing protocols? Absolutely
  Required for both DS and non-DS configurations

23 Bridge Portals: simple approach
Implement multiple station addresses in the MAC
  "Multi-homing", but at Layer 2
  Use one address to associate with the BSS
  Use the others to communicate with BPs
  Use BP-unique protocols to locate and acquire services
  No need to provide non-802.11 discovery services in 802.11
Notes:
  Technique mentioned by Michael Fischer and others
  Some implementations may already exist
  Useful technique for mobility protocols
  Equivalent to simultaneous membership in both a BSS and an IBSS
  Requires no new protocol

