
1 A Machine Learning Approach to Loss Differentiation in Wireless Networks
Sofia Pediaditaki and Mahesh Marina, University of Edinburgh

2 Introduction
Causes of packet losses: channel errors; interference (collisions or hidden terminals); mobility, handoffs, queue overflows, etc.
How can a sender infer the actual cause of a loss with:
No or little receiver feedback
A lot of uncertainty (time-varying channels, interference, traffic patterns, etc.)?
Use machine learning algorithms!

3 Do We Need Loss Differentiation?
Rate adaptation: after a channel error, a lower rate is more robust at low SNR; after a collision, a lower rate worsens the problem (longer transmissions).
DCF mechanism: in 802.11, the cause of loss is assumed to be collision by default; doubling the contention window hurts performance if the cause is actually channel error (see the backoff sketch below).
Various other applications (e.g., carrier sensing threshold adaptation [Ma et al., ICC '07]).
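To make the DCF point concrete, here is a minimal sketch (not from the slides) of 802.11 binary exponential backoff, assuming the standard CW_MIN = 15 and CW_MAX = 1023 constants of the OFDM PHY:

```java
// Minimal sketch of 802.11 DCF binary exponential backoff.
// CW_MIN/CW_MAX are the 802.11a/g OFDM PHY defaults (an assumption here).
public final class DcfBackoff {
    static final int CW_MIN = 15;
    static final int CW_MAX = 1023;
    private int cw = CW_MIN;
    private final java.util.Random rng = new java.util.Random();

    // Called on every failed transmission: DCF doubles the window,
    // which is the right reaction to a collision but only adds delay
    // when the real cause of loss is a channel error.
    void onLoss() { cw = Math.min(2 * (cw + 1) - 1, CW_MAX); }

    // Called on success: the window resets to the minimum.
    void onSuccess() { cw = CW_MIN; }

    // Number of idle slots to wait before the next attempt.
    int drawBackoffSlots() { return rng.nextInt(cw + 1); }
}
```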

4 State of the Art
Rate adaptation algorithms [CARA, Infocom '06; RRAA, MobiCom '06] use RTS/CTS to infer the cause of loss: small RTS/CTS frames are resilient to channel errors, so if the medium is captured but the data packet is still lost, the loss is attributed to channel error.
Drawbacks:
RTS/CTS is rarely used in practice
Extra overhead
Hidden terminal issue not fully resolved
Potential unfairness
(Figure: hidden-terminal topology with nodes A, B, C and D.)

5 Our Aim
A general-purpose loss differentiator that:
Is accurate and efficient: responsive and robust to the operational environment
Is supported by commodity hardware: fully implementable in the device driver without, e.g., MAC changes
Has acceptable computational cost and low overhead
Requires no (or little) information from the receiver

6 The Proposed Approach
Loss differentiation can be seen as a “classification” problem:
Class labels: types of losses
Features: observable data
Goal: assign each error to a class
The classification process:
Training phase: <attributes, class> pairs as training data
Operational phase: classify new “unlabeled” data (test data)

7 The Classification Process
Training phase: the input dataset is pre-processed into training data <attributes, class>, which a learning algorithm (Naive Bayes, Bayesian nets, decision trees, etc.) turns into a trained model.
Operational phase: “unlabeled” data <attributes, ?> are fed to the trained model, which outputs a prediction <attributes, class>.
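A minimal sketch of the two phases using the Weka Java API (the slides use Weka for evaluation; the ARFF file names and the last-attribute-is-class layout are assumptions):

```java
import weka.classifiers.Classifier;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class LossClassifierPipeline {
    public static void main(String[] args) throws Exception {
        // Training phase: load <attributes, class> pairs and build a model.
        Instances train = new DataSource("losses-train.arff").getDataSet();
        train.setClassIndex(train.numAttributes() - 1); // last attribute = loss class
        Classifier model = new NaiveBayes();
        model.buildClassifier(train);

        // Operational phase: classify new "unlabeled" instances.
        Instances unlabeled = new DataSource("losses-new.arff").getDataSet();
        unlabeled.setClassIndex(unlabeled.numAttributes() - 1);
        for (Instance inst : unlabeled) {
            double cls = model.classifyInstance(inst);
            System.out.println(inst + " -> " + unlabeled.classAttribute().value((int) cls));
        }
    }
}
```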

8 Performance Evaluation (1/2)
Training data generated using the Qualnet simulator:
Single-hop random topologies (WLANs): varying number of rates and flows, with or without fading
Multi-hop random topologies: one-hop traffic, multiple rates, with or without fading
Learning algorithms run using the Weka workbench (University of Waikato, New Zealand)
Classes of interest: channel errors, interference

9 Performance Evaluation (2/2)
Classification features (all easily obtained at the sender; see the encoding sketch below):
Rate: the higher the rate, the higher the channel error probability
Retransmission number: due to backoff, collision probability decreases across retransmissions
Channel busy time: observed busy time helps distinguish channel errors from collisions
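A minimal sketch of encoding these three features as a Weka dataset (attribute names, value ranges, and the example values are assumptions for illustration):

```java
import java.util.ArrayList;
import weka.core.Attribute;
import weka.core.DenseInstance;
import weka.core.Instances;

public class LossFeatures {
    public static Instances makeDataset() {
        ArrayList<String> classes = new ArrayList<>();
        classes.add("channel_error");
        classes.add("interference");

        ArrayList<Attribute> attrs = new ArrayList<>();
        attrs.add(new Attribute("rate_mbps"));          // PHY rate of the lost frame
        attrs.add(new Attribute("retx_no"));            // retransmission number
        attrs.add(new Attribute("channel_busy_time"));  // fraction of time channel sensed busy
        attrs.add(new Attribute("loss_class", classes));

        Instances data = new Instances("losses", attrs, 0);
        data.setClassIndex(data.numAttributes() - 1);
        return data;
    }

    public static void main(String[] args) {
        Instances data = makeDataset();
        // One labeled training example: 54 Mbps, 2nd retry, 40% busy, channel error.
        double[] v = {54.0, 2.0, 0.4, data.classAttribute().indexOfValue("channel_error")};
        data.add(new DenseInstance(1.0, v));
        System.out.println(data);
    }
}
```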

10 Preliminary Results: No Fading
Try the simple things first (the K.I.S.S. rule)!

Bayes Method    Prediction Accuracy (%)       Training Time (sec)
Naive Bayes     WLAN: 99.5 / WLAN-MH: 95.9    0.01

29303 WLAN and WLAN-MH instances, 10-fold cross validation.
Almost a perfect predictor. But things are not that simple!

11 Preliminary Results: All Together
A small step for man ...

Bayes Method    Prediction Accuracy (%)    Training Time (sec)
Naive Bayes     87                         0.06
Bayesian Net    87.7                       0.15

10-fold cross validation.
Naive Bayes assumes attributes are independent; Bayesian networks make Naive Bayes less “naive”.
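A minimal sketch of how such a head-to-head comparison could be run in Weka (default BayesNet search settings; the dataset file name is an assumption; the measured time includes the whole cross-validation, not training alone):

```java
import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.BayesNet;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CompareBayes {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("all-losses.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        for (Classifier c : new Classifier[]{new NaiveBayes(), new BayesNet()}) {
            long start = System.nanoTime();
            Evaluation eval = new Evaluation(data);
            // 10-fold cross validation, as in the slides' tables.
            eval.crossValidateModel(c, data, 10, new Random(1));
            double secs = (System.nanoTime() - start) / 1e9;
            System.out.printf("%s: %.1f%% accuracy, %.2f s%n",
                    c.getClass().getSimpleName(), eval.pctCorrect(), secs);
        }
    }
}
```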

12 Discussion
Which machine learning algorithm is the most appropriate to use?
Which features are the most representative?
Is this solution generalizable?
Can we use the solution as is on real hardware?
How much training is required?
What if we use semi-supervised learning?

13 Summary
Why we need a loss differentiator: rate adaptation algorithms, the DCF mechanism, ...
We propose a machine learning-based predictor that handles loss differentiation as a “classification” problem.
There are still many things we should consider...
So, can we use such a solution? Yes, we can [Obama '08]. Preliminary results show we could :-)

14 Thank You
Questions?

15 Naive Bayes
Bayes' theorem: posterior probability = prior probability x likelihood / evidence
p(C | F1, ..., Fn) = p(C) . p(F1, ..., Fn | C) / p(F1, ..., Fn)
Bayesian classifier
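To make the connection to classification explicit, here is a worked form of the rule; the naive independence assumption is the one slide 11 refers to:

```latex
% Bayes' theorem for class C and features F_1,...,F_n:
p(C \mid F_1, \dots, F_n) = \frac{p(C)\, p(F_1, \dots, F_n \mid C)}{p(F_1, \dots, F_n)}
% Naive assumption: features are conditionally independent given C:
p(F_1, \dots, F_n \mid C) = \prod_{i=1}^{n} p(F_i \mid C)
% Decision rule (the evidence term is constant across classes):
\hat{C} = \arg\max_{C}\; p(C) \prod_{i=1}^{n} p(F_i \mid C)
```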

16 Bayesian Networks
A Bayes net (also called a belief network) is an augmented directed acyclic graph, represented by the pair (V, E), where:
V is a set of vertices
E is a set of directed edges joining vertices
Each vertex in V carries:
The name of a random variable
A probability distribution table: how the probability of this variable's values depends on all possible combinations of its parents' values
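The per-vertex tables define a joint distribution through the standard factorization; Naive Bayes is then the special case in which the class is the only parent of every feature:

```latex
% Joint distribution encoded by a Bayes net over X_1,...,X_n:
p(X_1, \dots, X_n) = \prod_{i=1}^{n} p\big(X_i \mid \mathrm{Parents}(X_i)\big)
% Naive Bayes as a Bayes net: C is the sole parent of each feature F_i:
p(C, F_1, \dots, F_n) = p(C) \prod_{i=1}^{n} p(F_i \mid C)
```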

