
1 A Quest for an Internet Video Quality-of-Experience Metric
A. Balachandran, V. Sekar, A. Akella, S. Seshan, I. Stoica and H. Zhang
In Proceedings of the 11th ACM Workshop on Hot Topics in Networks, Seattle, WA, USA, October 29-30, 2012

2 Introduction
Content delivery costs are down and subscription-based services are up, so Internet video traffic is predicted to increase – possibly surpassing television-based viewership in the future
Many players: content providers, content delivery networks (CDNs), video player designers, and users
All face the same challenge: the lack of a standardized approach to measuring Quality of Experience (QoE)
One is needed to allow objective comparison of competing designs

3 New Notions of Quality for Internet Video
Measuring quality
– Internet video is delivered over HTTP via CDNs
– Delivery is largely reliable, so loss-based metrics (e.g., PSNR) are not very relevant
– Instead: buffering, bitrate, frame rate, bitrate switching, startup delay (see the sketch below)
Measuring experience
– With ads and subscriptions, opinion scores from controlled studies != the engagement that matters to the business
– Instead: fraction of video played, number of visits to the provider
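A minimal sketch (in Python, not from the paper) of how the player-side quality metrics listed above could be computed from a per-session log; the Session record and its field names are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class Session:
    play_time_s: float               # total time spent actually playing video
    buffering_time_s: float          # total time stalled waiting for data
    join_time_s: float               # startup delay before the first frame
    bitrate_samples_kbps: List[int]  # bitrate observed per downloaded chunk

def buffering_ratio(s: Session) -> float:
    # Fraction of the session spent rebuffering instead of playing.
    total = s.play_time_s + s.buffering_time_s
    return s.buffering_time_s / total if total > 0 else 0.0

def average_bitrate_kbps(s: Session) -> float:
    return sum(s.bitrate_samples_kbps) / len(s.bitrate_samples_kbps)

def switch_rate_per_min(s: Session) -> float:
    # Bitrate switches per minute of play time.
    switches = sum(1 for a, b in zip(s.bitrate_samples_kbps,
                                     s.bitrate_samples_kbps[1:]) if a != b)
    return switches / (s.play_time_s / 60.0) if s.play_time_s > 0 else 0.0

demo = Session(play_time_s=600.0, buffering_time_s=12.0, join_time_s=2.5,
               bitrate_samples_kbps=[800, 800, 1200, 1200, 800])
print(buffering_ratio(demo), average_bitrate_kbps(demo), switch_rate_per_min(demo))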

4 Today, Metrics Fall Short
Adaptive video players make ad hoc tradeoffs between bitrate, startup delay, and buffering [16, 20, 32]
Frameworks for multi-CDN optimization use primitive QoE metrics that capture only buffering, not bitrate switching [28, 29]
Content providers have no systematic way to evaluate the cost-performance tradeoffs of different CDNs [1]

5 Robust QoE Measure
Need a unified understanding
– How the set of quality metrics together affects engagement, rather than each metric in isolation
– Natural, since individual metrics involve tradeoffs; e.g., a lower bitrate means less buffering but reduces quality
Need a quantitative understanding
– Beyond simple "metric M impacts engagement"
– Instead: "changing metric M from x to y changes engagement from a to b"

6 Key Factors for Internet Video QoE
Complex relationships
– The relationship between metrics and user experience is complex, even counterintuitive
– E.g., a higher bitrate does not always mean higher quality
Metric dependencies
– Metrics have subtle interdependencies and tradeoffs
– E.g., switching bitrates reduces buffering, but can annoy users
Impact of content
– The nature of the content can introduce confounding factors
– E.g., live and video-on-demand (VoD) content have different viewing patterns
– E.g., users' interest in the content affects their tolerance for poor quality

7 Goal
Identify a feasible roadmap to a robust, unified, quantitative QoE metric
Cast the QoE measure as a machine learning problem
– Build an appropriate model to predict engagement (e.g., play time) as a function of the quality metrics
Content-induced effects are addressed using domain-specific measurements

8 Preliminary Results
A decision tree-based classifier provides 50% accuracy in predicting engagement
Carefully setting up the inputs and features could lead to a 25% gain in accuracy

9 Outline
Introduction (done)
Use Cases for Video QoE (next)
Challenges in Measuring QoE
Predictive Model for QoE Inference
Preliminary Results
Discussion
Conclusion

10 Use Cases for Video QoE
Content providers (e.g., Netflix) objectively evaluating CDNs – also multi-CDN optimizers
CDNs efficiently distributing resources across users
Video players making tradeoffs (e.g., bitrate vs. buffering)
Users making choices beyond content; also, some ISPs have bandwidth quotas
Industry agreement?
– On a set of quality metrics
– Requires "in the wild" data, not controlled studies
(Figure: the Internet video ecosystem)

11 Outline
Introduction (done)
Use Cases for Video QoE (done)
Challenges in Measuring QoE (next)
Predictive Model for QoE Inference
Preliminary Results
Discussion
Conclusion

12 Challenges in Measuring QoE
Approach: examples from 2 large content providers
– One serves TV episodes
– One serves live sports events
Industry-standard QoE metrics [6]
– Errors, delays, video quality
Mini-outline
– Complex relationships
– Interaction between metrics
– Externalities

13 Complex Relationships
Counter-intuitive effects
– Higher quality should yield higher engagement
– But: lower quality led to longer play times
– Why? For live sports playing in the background, low quality meant low CPU load; when quality (and thus CPU load) was high, users terminated the player
Non-monotone effects
– A higher average bitrate does not always mean higher quality
– Bitrate values come in discrete steps; an average that falls "between" steps implies switching, which annoys users (a toy illustration follows)
Threshold effects
– Rates up to 0.5 switches/minute had no effect on engagement
– At higher rates, users quit early
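A toy illustration of the non-monotone effect above, assuming a hypothetical three-level encoding ladder: an average bitrate that falls between the discrete levels can only arise from switching.

LEVELS = [400, 800, 1200]  # kbps; a hypothetical encoding ladder (assumption)

def implies_switching(avg_kbps: float, levels=LEVELS, tol=1.0) -> bool:
    # Steady playback at one level yields an average equal to that level;
    # an average in between means the player alternated between levels.
    return all(abs(avg_kbps - level) > tol for level in levels)

print(implies_switching(800.0))   # False: steady playback at 800 kbps
print(implies_switching(1000.0))  # True: average "between" steps -> switching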

14 Interaction Between Metrics
Switching versus buffering
– Players should switch bitrates proactively to avoid buffering
– But switching can annoy users (see the previous slide)
Join time versus bitrate
– A higher bitrate implies higher quality
– But playback takes longer to start (to fill the buffer)

15 Externalities (1 of 2)
Confounding external factors that affect user engagement
Genre
– Live is similar to VoD in terms of quality (right graph)
– But user engagement differs (left graph)

16 Externalities (2 of 2)
User interest
– Users sample videos and quit, independent of quality issues
– Regional interest matters for live sports
Quality is the same inside and outside the region (right graph)
But local viewers watch on average 10 minutes longer (left graph)

17 Outline
Introduction (done)
Use Cases for Video QoE (done)
Challenges in Measuring QoE (done)
Predictive Model for QoE Inference (next)
Preliminary Results
Discussion
Conclusion

18 Towards a Predictive Model for QoE Inference
Engagement = f({QualityMetric_i}) (a modeling sketch follows)
– Engagement: e.g., play time, visits to the website
– QualityMetric_i: e.g., buffering ratio, bitrate
Dependencies and hidden relationships are handled through machine learning
– As long as there are sufficiently large datasets; fortunately, content providers (e.g., Netflix) gather them
Confounding effects are tackled with domain-specific insights
– Either select the input data and feed it in
– Or identify the confounding features and let the algorithm handle them
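A minimal sketch of this framing as a supervised learning problem. The scikit-learn usage and the synthetic data are assumptions; the paper trains on real provider traces.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 10_000
# Per-session quality metrics: buffering ratio, avg bitrate (kbps), join time (s).
X = np.column_stack([rng.uniform(0.0, 0.2, n),
                     rng.uniform(400, 2000, n),
                     rng.uniform(0.0, 10.0, n)])
# Synthetic engagement: viewed fraction drops with buffering and join time.
frac = np.clip(1.0 - 3.0 * X[:, 0] - X[:, 2] / 30.0
               + rng.normal(0.0, 0.05, n), 0.0, 1.0)
y = np.digitize(frac, [0.2, 0.4, 0.6, 0.8])  # 5 engagement classes, 0..4

model = DecisionTreeClassifier(max_depth=5).fit(X, y)
print(model.predict(X[:3]))  # predicted engagement class per session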

19 Outline
Introduction (done)
Use Cases for Video QoE (done)
Challenges in Measuring QoE (done)
Predictive Model for QoE Inference (done)
Preliminary Results (next)
Discussion
Conclusion

20 Confirm Intuition of Approach
Dataset: 1 month of video viewership, 10 million video sessions
10-fold cross-validation (sketched below)
– Divide the data into 10 equal-sized pieces
– Train on 9 pieces, test on the remaining piece
– Repeat 10 times
Two solutions
– Strawman
– Domain-specific refinement
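A minimal sketch of the 10-fold cross-validation loop described above; the scikit-learn usage is an assumption (the paper does not name a toolkit).

import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

def ten_fold_accuracy(X: np.ndarray, y: np.ndarray) -> float:
    kf = KFold(n_splits=10, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in kf.split(X):
        # Train on 9 pieces, test on the held-out tenth.
        clf = DecisionTreeClassifier().fit(X[train_idx], y[train_idx])
        scores.append(clf.score(X[test_idx], y[test_idx]))
    return float(np.mean(scores))  # mean accuracy over the 10 folds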

21 Strawman Solutions (1 of 2)
Use "play time" as the engagement measure
Classes based on the fraction of video viewed (see the binning sketch below)
– E.g., 5 classes: [0-20%, 20-40%, 40-60%, 60-80%, 80-100%]
Tried several standard learning algorithms
– Naïve Bayes
– Simple regression
– Classic binary decision tree
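A minimal sketch of the strawman setup: bin each session's viewed fraction into the 5 classes above, then fit the three learners the slide lists. The concrete scikit-learn model choices (GaussianNB, LinearRegression rounded to the nearest class, DecisionTreeClassifier) are assumptions.

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

def to_classes(viewed_fraction: np.ndarray) -> np.ndarray:
    # [0-20%) -> 0, [20-40%) -> 1, ..., [80-100%] -> 4
    return np.minimum((viewed_fraction * 5).astype(int), 4)

def strawman_accuracies(X_tr, y_tr, X_te, y_te) -> dict:
    acc = {}
    acc["naive_bayes"] = GaussianNB().fit(X_tr, y_tr).score(X_te, y_te)
    acc["decision_tree"] = DecisionTreeClassifier().fit(X_tr, y_tr).score(X_te, y_te)
    # "Simple regression": fit a linear model, round predictions to a class.
    reg = LinearRegression().fit(X_tr, y_tr)
    pred = np.clip(np.rint(reg.predict(X_te)), 0, 4).astype(int)
    acc["regression"] = float((pred == y_te).mean())
    return acc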

22 Strawman Solutions (2 of 2)
Decision trees performed best
– Naïve Bayes tends to do better when features are independent
– Simple regression does poorly when relationships are not linear, e.g., the non-monotonic effects seen earlier
All do worse with more classes (finer granularity)

23 Domain-specific Solutions (1 of 3)
Decision trees can capture some of the complexity, but not confounding effects
– Refine using domain-specific measurements (sketched below)
Genre-specific refinement: live and VoD differ, so segment the data into two parts and train separately
User interest-based refinement: since users tend to "sample" videos, ignore early quitters (those watching < 5 minutes)
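A minimal sketch of the two refinements, under assumed column names (genre, play_time_s, engagement_class, and the feature columns): drop early quitters, then train one model per genre.

import pandas as pd
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["buffering_ratio", "avg_bitrate_kbps", "join_time_s"]  # assumed names

def train_refined(sessions: pd.DataFrame) -> dict:
    # User interest-based refinement: ignore samplers who quit within 5 minutes.
    kept = sessions[sessions["play_time_s"] >= 5 * 60]
    models = {}
    # Genre-specific refinement: live and VoD differ, so train separate models.
    for genre, part in kept.groupby("genre"):  # e.g., "live" vs. "vod"
        models[genre] = DecisionTreeClassifier(max_depth=5).fit(
            part[FEATURES], part["engagement_class"])
    return models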

24 Domain-specific Solutions (2 of 3) About a 20% increase in accuracy

25 Domain-specific Solutions (3 of 3) About an additional 5% increase in accuracy

26 Outline
Introduction (done)
Use Cases for Video QoE (done)
Challenges in Measuring QoE (done)
Predictive Model for QoE Inference (done)
Preliminary Results (done)
Discussion (next)
Conclusion

27 Discussion (1 of 2)
Metrics – for engagement, we need more than play time
– Ad impressions, user loyalty (likelihood of returning), total number of videos viewed
– Past work [19] suggests quality affects engagement differently depending on the metric; e.g., delay may not affect a specific viewing, but may hurt the likelihood of returning
– May need to weigh different engagement metrics
Externalities – is everything covered?
– Might the user's ISP or viewing device have an impact?
– Do individual user preferences have an impact?
– Motivates more measurement studies
– May need more feature selection
– May need user profile information

28 Discussion (2 of 2)
Intuitive models
– Inferring the cause of lower engagement is tough given confounding factors (user interest, tolerance for low quality)
– But designers and practitioners need an intuitive model to make sense of the tradeoffs
– And machine learning models can be black boxes (e.g., Principal Component Analysis – PCA)
– Fortunately, there are techniques to turn decision trees into more intuitive explanations [27] and equations [25]
Validation – how do we validate that the metric is useful?
– Could have a test group with a system driven by the metric

29 Conclusions
Many industries suffer when a lack of understanding leads to deceptive marketing
– Vendors quote individual metrics to look good, without explaining how they fit the grander scheme (e.g., clock speed for CPUs, or megapixels for cameras)
With a proliferation of quality factors, Internet video could suffer a similar fate
Goal: a robust, unified, quantitative QoE metric
Preliminary results give reason to be hopeful

30 Future Work?

31 Future Work
Additional measures of engagement – e.g., return visits
Explanations of decision trees
Accounting for end devices – e.g., PC versus tablet versus phone
Accounting for the last-mile connection – e.g., WiFi versus 4G versus fiber

