Brett Higgins Balancing Interactive Performance and Budgeted Resources in Mobile Applications

The Most Precious Computational Resource: “The most precious resource in a computer system is no longer its processor, memory, disk, or network, but rather human attention.” – Garlan et al. (12 years ago)

User attention vs. energy/data. [Figure: battery energy, 3 GB data allotment, $$$.]

Balancing these tradeoffs is difficult! Mobile applications must:
- Select the right network for the right task
- Understand complex interactions between: type of network technology, bandwidth/latency, timing/scheduling of network activity, performance, and energy/data resource usage
- Cope with uncertainty in predictions
Opportunity: system support.

Spending Principles: spend resources effectively; live within your means; use it or lose it.

Thesis Statement: By providing abstractions to simplify multinetworking, tailoring network use to application needs, and spending resources to purchase reductions in delay, mobile systems can help applications significantly improve user-visible performance without exhausting limited energy & data resources.

Roadmap: Introduction; Application-aware multinetworking (Intentional Networking); Purchasing performance with limited resources (Informed Mobile Prefetching); Coping with cloudy predictions (Meatballs); Conclusion.

The Challenge: diversity of networks and diversity of behavior (fetching email messages vs. uploading YouTube videos). Match traffic to available networks.

Current approaches sit at two extremes. All details hidden: result, mismatched traffic. All details exposed (“please insert packets”): result, hard for applications.

Solution: Intentional Networking. The system measures the available networks; applications describe their traffic; the system matches traffic to networks.

Abstraction: Multi-socket. A multi-socket is a virtual connection (analogue: a task scheduler). It measures the performance of each alternative network and encapsulates transient network failure.

Abstraction: Label. A label is a qualitative description of network traffic (analogue: task priority), e.g., interactivity: foreground vs. background.
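To make the multi-socket/label pairing concrete, here is a minimal, self-contained Python sketch. Every name in it is illustrative, not the real Intentional Networking API: the app tags each send with a label, and the system routes the send to whichever measured network best matches it.

```python
# Illustrative sketch only -- these names are invented, not the real
# Intentional Networking API. A multi-socket measures each physical
# network; send() matches the traffic's label to the best network.
class MultiSocket:
    def __init__(self, networks):
        # networks: name -> measured characteristics
        self.networks = networks

    def best_network(self, label):
        # Foreground (interactive) traffic wants low latency;
        # background (bulk) traffic wants high bandwidth.
        if label == "foreground":
            return min(self.networks, key=lambda n: self.networks[n]["latency_ms"])
        return max(self.networks, key=lambda n: self.networks[n]["bandwidth_mbps"])

    def send(self, data, label):
        net = self.best_network(label)
        print(f"sending {len(data)} bytes ({label}) on {net}")

msock = MultiSocket({"wifi":     {"latency_ms": 80, "bandwidth_mbps": 20.0},
                     "cellular": {"latency_ms": 40, "bandwidth_mbps": 2.0}})
msock.send(b"chat message", "foreground")    # picks the low-latency network
msock.send(b"x" * 10_000_000, "background")  # picks the high-bandwidth network
```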

Evaluation results, vehicular network trace (Ypsilanti, MI). [Chart: roughly 7x reduction in foreground delay at ~3% background throughput cost.]

Intentional Networking: Summary. In the multinetworking state of the art, hiding all details is far from optimal, and exposing all details is far too complex. Our solution is application-aware system support: a small amount of information from the application yields a significant reduction in user-visible delay with only minimal background throughput overhead, and additional services can be built atop this.

Roadmap: Introduction; Application-aware multinetworking (Intentional Networking); Purchasing performance with limited resources (Informed Mobile Prefetching); Coping with cloudy predictions (Meatballs); Conclusion.

Mobile networks can be slow. Without prefetching, data fetching is slow and the user is angry; with prefetching, the fetch time is hidden and the user is happy.

Mobile prefetching is complex, with lots of challenges to overcome: How do I balance performance, energy, and cellular data? Should I prefetch now or later? Am I prefetching data that the user actually wants? Does my prefetching interfere with interactive traffic? How do I use cellular networks efficiently? But the potential benefits are large!

Who should deal with the complexity? Users? Developers?

What apps end up doing: the Data Miser or the Data Hog.

Informed Mobile Prefetching: prefetching as a system service that handles complexity on behalf of users and apps. Apps specify what and how to prefetch; the system decides when to prefetch. [Figure: application calls prefetch() on IMP.]

Informed Mobile Prefetching tackles the challenges of mobile prefetching: it balances multiple resources via cost-benefit analysis, estimates future cost and decides whether to defer, tracks the accuracy of prefetch hints, keeps prefetching from interfering with interactive traffic, and considers batching prefetches on cellular networks.

Balancing multiple resources: weighing benefit (performance) against cost (energy, cellular data). [Figure: battery energy, 3 GB data allotment, $$$.]

Balancing multiple resources:
- Performance (user time saved): future demand fetch time; network bandwidth/latency
- Battery energy (spend or save): energy spent sending/receiving data; network bandwidth/latency; wireless radio power models (powertutor.org)
- Cellular data (spend or save): monthly allotment; straightforward to track

Weighing benefit and cost: IMP maintains exchange rates, one value for each resource, expressing the importance of that resource. Costs (Joules, bytes) are combined in a common currency (seconds) for a meaningful comparison to benefit, and the rates adjust over time via feedback: goal-directed adaptation.
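A minimal sketch of this scheme, assuming invented names and rate values (not IMP's actual code): each resource's exchange rate converts its units into seconds, and feedback nudges a rate upward when projected demand outruns the remaining budget.

```python
# Illustrative sketch (invented rates, not IMP's code): exchange rates turn
# Joules and bytes into seconds, making cost directly comparable to benefit.
class ExchangeRates:
    def __init__(self):
        self.sec_per_joule = 0.01  # importance of battery energy
        self.sec_per_byte = 1e-8   # importance of cellular data

    def cost_seconds(self, joules, cell_bytes):
        return joules * self.sec_per_joule + cell_bytes * self.sec_per_byte

    def adapt(self, budget_left, projected_demand):
        # Goal-directed feedback: if projected demand exceeds the remaining
        # budget, energy becomes more "expensive", discouraging spending.
        self.sec_per_joule *= projected_demand / budget_left

def should_prefetch(rates, time_saved, joules, cell_bytes):
    return time_saved > rates.cost_seconds(joules, cell_bytes)

rates = ExchangeRates()
# 0.3 s saved vs. 0.05 + 0.02 = 0.07 s of cost -> True
print(should_prefetch(rates, time_saved=0.3, joules=5.0, cell_bytes=2_000_000))
```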

Users don’t always want what apps ask for: some messages may never be read (low-priority, spam). IMP should consider the accuracy of hints, but it doesn't require the app to specify it; it just learns it through the API. The app tells IMP when it uses the data (or decides not to use it), and IMP tracks accuracy over time.
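A sketch of how such hint-accuracy learning could look, with invented names: the app reports outcomes through the API, and the system discounts the expected benefit of future prefetches by the observed hit rate.

```python
# Illustrative sketch: learn hint accuracy from app feedback rather than
# asking the app to declare it up front.
class HintAccuracy:
    def __init__(self):
        self.used = 0
        self.total = 0

    def data_used(self):        # app consumed the prefetched item
        self.used += 1
        self.total += 1

    def data_discarded(self):   # app decided the item was never needed
        self.total += 1

    def hit_rate(self):
        # With no history yet, optimistically assume hints are accurate.
        return self.used / self.total if self.total else 1.0

acc = HintAccuracy()
acc.data_used(); acc.data_used(); acc.data_discarded()
expected_benefit = 0.5 * acc.hit_rate()  # discount 0.5 s of saved time
print(f"hit rate {acc.hit_rate():.2f}, expected benefit {expected_benefit:.2f} s")
```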

Evaluation results. [Charts: average fetch time (seconds), energy usage (J), and 3G data usage (MB), each against a budget marker. Average fetch time is ~300 ms, a 2-8x improvement; less energy than all other strategies, including WiFi-only; only WiFi-only used less 3G data (but…). IMP meets all resource goals; “optimal” assumes 100% hint hits.]

IMP: Summary. Mobile prefetching is a complex decision, so applications choose simple, suboptimal strategies. Purchasing performance with resources is a powerful mechanism. With prefetching as a system service, the application provides the needed information and the system manages the tradeoffs (spending vs. performance), yielding a significant reduction in user-visible delay while meeting specified resource budgets. What about other performance/resource tradeoffs?

Roadmap: Introduction; Application-aware multinetworking; Purchasing performance with limited resources; Coping with cloudy predictions (motivation; uncertainty-aware decision-making with Meatballs; decision methods; reevaluation from new information; evaluation); Conclusion.

Mobile Apps & Prediction. Mobile apps rely on predictions of networks (bandwidth, latency, availability) and of computation time. These predictions are often wrong: networks are highly variable, load changes quickly, and a wrong prediction causes user-visible delay. But mobile apps treat predictions as oracles!

Example: code offload. [Figure: likelihood of a 100 sec response time: 0.1% vs. 0.0001%.] [Table, built over three slides: elapsed time (0 sec, 9 sec, 11 sec) vs. expected response time, with and without redundant execution; by 11 sec elapsed, the conditional expectation of the offload's response time has jumped to 89 sec.]
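The jump is the behavior of a conditional expectation over a heavy-tailed distribution; the short sketch below reproduces it with illustrative numbers, not the measured distribution behind the slides.

```python
# Sketch of the conditional-expectation idea from the code-offload example
# (numbers are illustrative): given a response-time distribution and the time
# already elapsed without a response, compute the expected response time
# conditioned on it exceeding the elapsed time. Heavy tails make this
# conditional expectation grow sharply once the typical case has passed.
def conditional_expected_response(samples, elapsed):
    remaining = [s for s in samples if s > elapsed]
    if not remaining:
        return elapsed  # no probability mass beyond 'elapsed'
    return sum(remaining) / len(remaining)

# 999 fast responses around 1 s, plus a rare 100 s straggler.
samples = [1.0] * 999 + [100.0]
print(conditional_expected_response(samples, elapsed=0.0))  # ~1.1 s
print(conditional_expected_response(samples, elapsed=2.0))  # 100.0 s: hedge now
```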

Spending for a good cause: it’s okay to spend extra resources, if we think it will benefit the user. Spend resources to cope with uncertainty, via redundant operation. Quantify uncertainty and use it to make decisions, weighing benefit (time saved) against cost (energy + data).

Meatballs: a library for uncertainty-aware decision-making. The application specifies its strategies (different means of accomplishing a single task, with functions to estimate time, energy, and cellular data usage) and provides predictions and measurements, which lets the library capture error and quantify uncertainty. The library then helps the application choose the best strategy, hiding the complexity of the decision mechanism and balancing the cost & benefit of redundancy.
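A toy sketch of the strategy interface, with invented names and numbers (not the real Meatballs API): each strategy registers estimator callbacks, and the chooser compares every single strategy against the redundant "do both" option in a common currency.

```python
# Toy sketch, invented names/numbers: strategies expose time and (already
# currency-converted) cost estimators; choose() also evaluates "redundant".
SIZE = 5e6  # bytes to transfer

strategies = {
    "wifi":     {"time": lambda bw: SIZE / bw["wifi"],     "cost": lambda bw: 2.0},
    "cellular": {"time": lambda bw: SIZE / bw["cellular"], "cost": lambda bw: 8.0},
}

def choose(strategies, predicted_bw):
    totals = {name: s["time"](predicted_bw) + s["cost"](predicted_bw)
              for name, s in strategies.items()}
    # Redundant option: finishes when the fastest finishes, pays both costs.
    totals["redundant"] = (min(s["time"](predicted_bw) for s in strategies.values())
                           + sum(s["cost"](predicted_bw) for s in strategies.values()))
    return min(totals, key=totals.get)

# Bandwidths in bytes/sec; WiFi is fast here, so it wins outright.
print(choose(strategies, {"wifi": 1e6, "cellular": 4e5}))  # -> "wifi"
```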

Roadmap: Introduction; Application-aware multinetworking; Purchasing performance with limited resources; Coping with cloudy predictions (motivation; uncertainty-aware decision-making with Meatballs; decision methods; reevaluation from new information; evaluation); Conclusion.

Deciding to operate redundantly: the benefit of redundancy is time savings; the cost is additional resources. Both benefit and cost are expectations: consider predictions as distributions, not spot values. Approaches: empirical error tracking, error bounds, Bayesian estimation.

Empirical error tracking: compute the error ε(B_P) = observed / predicted upon each new measurement, then take a weighted sum over the joint error distribution (ε(B_P1), ε(B_P2)). For multiple networks: time is the min across all networks; cost is the sum across all networks.
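A small sketch of the brute-force computation, under the assumption that errors are tracked as observed/predicted ratios (all values illustrative): expectations are weighted sums over the joint error distribution, with min across networks for redundant time.

```python
# Illustrative sketch of empirical error tracking: a distribution of past
# prediction-error ratios per network; any quantity's expectation is a
# weighted sum over the joint error distribution.
from itertools import product

errors = {  # (error ratio, weight) samples per network
    "net1": [(0.8, 0.5), (1.3, 0.5)],
    "net2": [(0.9, 0.5), (1.1, 0.5)],
}

def expected(fn, predictions):
    nets = list(predictions)
    total = 0.0
    for combo in product(*(errors[n] for n in nets)):
        weight = 1.0
        corrected = {}
        for net, (ratio, w) in zip(nets, combo):
            corrected[net] = predictions[net] * ratio  # error-adjusted prediction
            weight *= w
        total += weight * fn(corrected)
    return total

size = 10e6                          # bits to send
pred = {"net1": 5e6, "net2": 4e6}    # predicted bandwidths (bps)
# Redundant transfer: time is the min across networks...
t_red = expected(lambda bw: min(size / bw["net1"], size / bw["net2"]), pred)
# ...while a single-network transfer uses one corrected prediction.
t_net1 = expected(lambda bw: size / bw["net1"], pred)
print(t_red, t_net1)
```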

Error bounds: a range plus p(next value is in range), via a Student’s-t prediction interval; calculate a bound on the time savings of redundancy. [Figure: predicted bandwidths B_P1, B_P2 with intervals (the predictor says: use network 2), and the corresponding times T_1, T_2 to send 10 Mb, whose overlap gives the max time savings from redundancy.]

Error bounds: calculate a bound on the net gain of redundancy: max(benefit) − min(cost) = max(net gain); use redundancy if max(net gain) > 0. [Figure: bandwidth intervals B_P1, B_P2 (predictor says: use network 2); times T_1, T_2 to send 10 Mb with the max time savings from redundancy; energies E_1, E_2, and E_both, the min energy with redundancy.]
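A sketch of the bounds computation, assuming a standard Student's-t prediction interval and invented sample values: the redundancy test compares an optimistic bound on time savings against a pessimistic bound on the extra cost.

```python
# Illustrative sketch: Student's-t prediction interval per network, then
# use redundancy only if max(benefit) - min(cost) > 0.
import math
from scipy import stats

def prediction_interval(samples, alpha=0.05):
    n = len(samples)
    mean = sum(samples) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    half = stats.t.ppf(1 - alpha / 2, n - 1) * s * math.sqrt(1 + 1 / n)
    return mean - half, mean + half

size = 10e6                                        # bits to send
bw1 = prediction_interval([4e6, 5e6, 6e6, 5e6])    # network 1 samples (bps)
bw2 = prediction_interval([5e6, 7e6, 6e6, 8e6])    # network 2 (predictor's pick)
# Best case for redundancy: chosen network at its lower bound, the
# alternative at its upper bound.
max_savings = max(0.0, size / max(bw2[0], 1.0) - size / bw1[1])
min_extra_cost = 1.0   # lower bound on added cost, in seconds-equivalent (assumed)
print("redundant" if max_savings - min_extra_cost > 0 else "network 2 only")
```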

Bayesian Estimation. Basic idea: given a prior belief about the world and some new evidence, update our beliefs to account for the evidence, i.e., obtain the posterior distribution using the likelihood of the evidence. Via Bayes’ theorem: posterior = likelihood × prior / p(evidence), where p(evidence) is a normalization factor that ensures the posterior sums to 1.

Bayesian Estimation example: “will it rain tomorrow?” Prior: historical rain frequency. Evidence: a weather forecast (simple yes/no). Posterior: believed rain probability given the forecast. Likelihood: when it will rain, how often has the forecast agreed? When it won’t rain, how often has the forecast agreed? Via Bayes’ theorem: posterior = likelihood × prior / p(evidence).
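The rain example, worked numerically with assumed frequencies:

```python
# Worked "will it rain tomorrow?" Bayesian update (illustrative numbers):
# prior from historical rain frequency, likelihood from how often the
# forecast has agreed with each outcome, posterior via Bayes' theorem.
p_rain = 0.3                    # prior: historical rain frequency
p_forecast_given_rain = 0.8     # forecast said "rain" when it did rain
p_forecast_given_dry = 0.1      # forecast said "rain" when it stayed dry

# Normalization factor: total probability of a "rain" forecast.
p_forecast = (p_forecast_given_rain * p_rain
              + p_forecast_given_dry * (1 - p_rain))

posterior = p_forecast_given_rain * p_rain / p_forecast
print(f"P(rain | forecast says rain) = {posterior:.2f}")  # 0.77
```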

Bayesian Estimation applied to Intentional Networking. Prior: bandwidth measurements. Evidence: a bandwidth prediction plus the decision it implies. Posterior: new belief about bandwidths. Likelihood: when network 1 wins, how often has the prediction agreed? When network 2 wins, how often has the prediction agreed? Via Bayes’ theorem: posterior = likelihood × prior / p(evidence).

Decision methods: summary. Empirical error tracking captures the error distribution accurately but is computationally intensive. Error bounds are computationally cheap but prone to overestimating uncertainty. Bayesian estimation is computationally cheap(er than brute force) but appears more reliant on history.

Reevaluation: conditional distributions. [Figure: decision vs. elapsed time, one server vs. two servers.]

Evaluation methodology: network trace replay with energy & data measurement, same as IMP. Metric: weighted cost function, time + c_energy × energy + c_data × data.

  Config    | c_energy (energy per 1 s delay reduction) | Battery life reduction under average use (normally 20 hours)
  Low-cost  | 100 J                                     | 6 min
  Mid-cost  | 10 J                                      | 36 sec
  High-cost | 1 J                                       | 3.6 sec
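The metric as code, with the table's low-cost configuration plugged in (note that c_energy in the formula is seconds per Joule, the reciprocal of the "Joules per second of delay reduction" in the table):

```python
# The weighted cost metric. "100 J per 1 s of delay reduction" means
# c_energy = 1/100 seconds per Joule (the low-cost configuration).
def weighted_cost(time_s, energy_j, data_bytes, c_energy, c_data):
    return time_s + c_energy * energy_j + c_data * data_bytes

print(weighted_cost(time_s=2.0, energy_j=50.0, data_bytes=0.0,
                    c_energy=1 / 100, c_data=0.0))  # -> 2.5
```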

Evaluation: IntNW, walking trace. [Chart: weighted cost (normalized), simple strategies vs. Meatballs. Low-resource strategies improve (2x, 24%); Meatballs matches the best strategy; error bounds leans towards redundancy.]

Evaluation: PocketSphinx, server load. [Chart: weighted cost (normalized), simple strategies vs. Meatballs; highlighted: 23%. Meatballs matches the best strategy; error bounds leans towards redundancy.]

Meatballs: Summary. Uncertainty in predictions causes user-visible delay, and its impact is significant; considering uncertainty can improve performance. A library can help applications consider uncertainty: with a small amount of information from the application, it hides the complex decision process. Redundant operation mitigates uncertainty’s impact, and sufficient resources are commonly available, so spend resources to purchase delay reduction.

Conclusion. Human attention is the most precious resource. The tradeoffs between performance, energy, and cellular data are difficult for applications to get right, which necessitates system support with the proper abstractions. We provide abstractions for using multiple networks effectively, budgeted resource spending, and hedging against uncertainty with redundancy. Overall result: improved performance without overspending resources.

Future work. Intentional Networking: automatic label inference; avoid modifying the server side (generic proxies). Predictor uncertainty: better handling of idle periods; increase the uncertainty estimate as measurements age; add periodic active measurements?

Abstraction: Atomicity & Ordering Constraints. The app specifies boundaries and a partial ordering; the receiving end enforces delivery atomicity and order. [Figure: chunks 1-3 arriving at the server, which calls use_data() in the constrained order.]
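A self-contained sketch of receiver-side enforcement (invented names, not the actual implementation): a chunk is released to the application only when it is complete and all of its declared predecessors have been released.

```python
# Illustrative sketch of delivery atomicity + partial ordering: hold back
# any complete chunk whose declared dependencies have not yet been released.
class OrderedReceiver:
    def __init__(self):
        self.complete = {}    # chunk id -> data
        self.deps = {}        # chunk id -> ids it must follow
        self.released = set()

    def chunk_done(self, cid, data, deps):
        self.complete[cid] = data
        self.deps[cid] = set(deps)
        self.try_release()

    def try_release(self):
        progress = True
        while progress:
            progress = False
            for cid, data in self.complete.items():
                if cid not in self.released and self.deps[cid] <= self.released:
                    use_data(data)           # app callback, in constrained order
                    self.released.add(cid)
                    progress = True

def use_data(data):
    print("app got", data)

rx = OrderedReceiver()
rx.chunk_done(2, b"chunk 2", deps=[1])   # arrives early; held back
rx.chunk_done(1, b"chunk 1", deps=[])    # releases chunk 1, then chunk 2
```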

Evaluation results, vehicular sensing trace #2 (Ypsilanti, MI). [Chart: highlighted values of 48% and 6%.]

IMP Interface. Data fetching is app-specific: the app passes a fetch callback and receives a handle for the data. The app provides the needed info: when a prefetch is desired, when a demand fetch occurs, and when the data is no longer wanted. [Figure: application calls prefetch() on IMP.]
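A minimal sketch of this callback-and-handle pattern with invented names (the real IMP interface differs in detail):

```python
# Illustrative sketch: the app supplies the fetch logic; the system owns
# the decision of when (or whether) to run it ahead of demand.
class IMP:
    def __init__(self):
        self.hints = {}

    def prefetch(self, fetch_cb):
        handle = len(self.hints)
        self.hints[handle] = fetch_cb
        # The system, not the app, decides *when* to invoke fetch_cb.
        return handle

    def demand_fetch(self, handle):   # the user asked for the data now
        return self.hints[handle]()

    def cancel(self, handle):         # data no longer wanted
        del self.hints[handle]

imp = IMP()
h = imp.prefetch(lambda: b"email body")
print(imp.demand_fetch(h))
```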

How to estimate the future? The current cost is straightforward; the future cost is trickier: it's not just predicting network conditions, but also when the user will request the data. Simplify: use average network conditions (average bandwidth/latency of each network, average availability of WiFi). Future benefit: same method as future cost.

Balancing multiple resources: cost/benefit analysis. How much value can the resources buy? This was used in disk prefetching (TIP; SOSP ’95). Prefetch benefit: user time saved. Prefetch cost: energy and cellular data spent. Prefetch if benefit > cost. But how to meaningfully weigh benefit against cost?

Error bounds: how to calculate the bounds? Chebyshev’s inequality is simple and has been used before, but its bounds turned out to be too loose. A confidence interval bounds the true value of the underlying distribution, but quantities such as bandwidth are neither known nor fixed. A prediction interval, a range containing the next value (with some probability), fits: we use the Student’s-t distribution with a fixed alpha.

How to decide which network(s) to use? First, compute the expected time on the “best” network:
  T_1 = Σ (s / B_1) · posterior(B_1 | B_P1 > B_P2)
Next, compute the expected time using multiple networks:
  T_all = Σ min(s / B_1, s / B_2) · posterior(B_1, B_2 | B_P1 > B_P2)
Finally, do the same with the resource costs. If cost < benefit, use redundancy.
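A sketch of this computation over a small assumed posterior grid (bandwidths and probabilities are invented):

```python
# Illustrative sketch: expected send time on the predicted-best network vs.
# on both, weighting each grid point by the posterior given that the
# predictor favored network 1.
s = 10e6  # bits to send

# Discrete joint posterior over (B1, B2) in bps, conditioned on B_P1 > B_P2.
posterior = {
    (8e6, 4e6): 0.5,
    (9e6, 5e6): 0.3,
    (1e6, 6e6): 0.2,   # the predictor can still be wrong
}

t1 = sum(s / b1 * p for (b1, _), p in posterior.items())
t_all = sum(min(s / b1, s / b2) * p for (b1, b2), p in posterior.items())

benefit = t1 - t_all   # expected time saved by redundancy
extra_cost = 0.5       # seconds-equivalent of the extra energy/data (assumed)
print("redundant" if benefit > extra_cost else "single network")
```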

Here is where each component comes from:
  T_all = Σ min(s/B_1, s/B_2) · posterior(B_1, B_2 | B_P1 > B_P2)
        = Σ (s / max(B_1, B_2)) · likelihood(B_P1 > B_P2 | B_1, B_2) · prior(B_1, B_2) / p(B_P1 > B_P2)
[Figure: example prior grids for B_1 (values 1, 8, 9) and B_2 (values 4, 5, 6), annotated with likelihood(B_P1 > B_P2 | B_1, B_2), p(B_P1 > B_P2), p(B_P1 < B_P2), prior(B_1), and prior(B_2).]

The weight of history: a measurement’s age decreases its usefulness, since uncertainty may have increased or decreased. Meatballs gives greater weight to new observations: brute-force and error bounds decay the weight on old samples; Bayesian decays the weight in the prior distributions.
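A sketch of sample-weight decay, assuming a simple exponential scheme (the decay factor is invented):

```python
# Illustrative exponential decay: each new observation scales all existing
# weights down, so newer measurements dominate the tracked distribution.
GAMMA = 0.9  # decay per new observation (assumed value)

samples = []  # (value, weight) pairs

def observe(value):
    global samples
    samples = [(v, w * GAMMA) for v, w in samples]
    samples.append((value, 1.0))

for bw in [5e6, 6e6, 4e6]:
    observe(bw)

total = sum(w for _, w in samples)
mean = sum(v * w for v, w in samples) / total  # weighted toward recent samples
print(mean)
```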

Reevaluating redundancy decisions: confidence in a decision changes with new information. Suppose we decided not to transmit redundantly; the lack of a response is itself new information, so consider the conditional distributions of the predictions. Meatballs provides an interface for deferred re-evaluation and for “tipping point” calculation: when will the decision change?

Applications. Intentional Networking: bandwidth and latency of multiple networks; energy consumption of network usage; reevaluation (network delay/change). Speech recognition (PocketSphinx): local/remote execution time; local CPU energy model; for remote execution, network effects; reevaluation (network delay/change).

Evaluation: IntNW, driving trace. [Chart: weighted cost (normalized), simple strategies vs. Meatballs; not much benefit from using WiFi.]

Evaluation: PocketSphinx, walking trace. [Chart: weighted cost (normalized), simple strategies vs. Meatballs; the benefit of redundancy persists (>2x); Meatballs matches the best strategy.]