
1 Automated Testing of Massively Multi-Player Games: Lessons Learned from The Sims Online. Larry Mellon, Spring 2003

2 Context: What Is Automated Testing?

3 Classes Of Testing: System Stress, Load, Random Input, Feature Regression, Developer, QA

4 Automation Components: Startup & Control, Repeatable & Synchronized Test Inputs, and Collection & Analysis, all wrapped around the System Under Test

5 What Was Not Automated? Startup & control, repeatable synchronized inputs, and results analysis were automated; visual effects were not

6 Lessons Learned: Automated Testing. Talk outline (60 minutes, roughly in thirds): 1/3 Design & Initial Implementation (architecture, scripting tests, test client, initial results); 1/3 Fielding (analysis & adaptations); 1/3 Wrap-up & Questions (what worked best, what didn't; Tabula Rasa: MMP / SPG)

7 Requirements: load drives load testing; a high code "churn rate" drives regression testing

8 Design Constraints: load, regression, and churn rate dictate automation (repeatable, synchronized input; data management) and strong abstraction

9 Test Client: a single, data-driven test client serves both regression and load, through a single API with reusable scripts & data

10 Test Client: the data-driven test client offers a single API, reusable scripts & data, and configurable logs & metrics. For regression it tracks key game states and pass/fail ("testing feature correctness"); for load, responsiveness ("testing system performance")

11 Problem: Testing Accuracy. Load and regression inputs must be accurate and repeatable, yet the churn rate keeps logic and data in constant motion: how do you keep the test client accurate? Solution: the game client becomes the test client, giving exact mimicry and lower maintenance costs

12 Test Client == Game Client. Test control (in the test client) and the game GUI (in the game client) drive the same client-side game logic through the Presentation Layer, exchanging commands and state

13 Game Client: How Much To Keep? The client splits into view and logic, with the Presentation Layer between them

14 What Level To Test At? Driving the game client with raw mouse clicks, above the view: too brittle for regression (a pixel shift breaks tests) and too bulky for load

15 What Level To Test At? Driving internal events, below the view: still too brittle for regression (churn rate vs. logic & data)

16 Gameplay: Semantic Abstractions. The NullView client replaces the view and drives the Presentation Layer with gameplay verbs (Buy Lot, Enter Lot, Use Object, Buy Object, …), keeping roughly ¾ of the client and replacing about ¼. Basic gameplay changes less frequently than UI or protocol implementations
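
To make the abstraction concrete, here is a minimal Python sketch of such a presentation layer (illustrative only: TSO's client was not written in Python, and LoggingLogic and every other name below are hypothetical):

    class LoggingLogic:
        """Stand-in for the real client-side game logic: records each command."""
        def execute(self, verb, *args):
            print("logic:", verb, args)
            return True

    class PresentationLayer:
        """Gameplay-level verbs: this surface stays stable while the UI above
        it and the wire protocols below it churn."""
        def __init__(self, logic):
            self.logic = logic

        def buy_lot(self, lot_id):
            return self.logic.execute("buy_lot", lot_id)

        def enter_lot(self, lot_id):
            return self.logic.execute("enter_lot", lot_id)

        def buy_object(self, kind, x, y):
            return self.logic.execute("buy_object", kind, x, y)

        def use_object(self, kind, interaction):
            return self.logic.execute("use_object", kind, interaction)

    # The GUI, a script runner, and a load generator can all drive this same
    # interface, which is what lets one client serve both regression and load.
    api = PresentationLayer(LoggingLogic())
    api.enter_lot("alpha_chimp")
    api.buy_object("chair", 10, 10)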

17 Scriptable User Play Sessions. SimScript offers: a collection of Presentation Layer "primitives"; synchronization (wait_until, remote_command); and state probes into arbitrary game state (an avatar's body skill, a lamp on/off, …). Test scripts are specific, ordered inputs forming single-user or multi-user play sessions

18 Scriptable User Play Sessions. Scriptable play sessions were a big win: load could be tuned based on actual play, and regression could constantly repeat hundreds of play sessions, validating correctness. Gameplay semantics proved very stable: the UI and protocols shifted constantly, but game play remained (about) the same

19 SimScript: Abstract User Actions
include_script setup_for_test.txt
enter_lot $alpha_chimp
wait_until game_state inlot
chat I'm an Alpha Chimp, in a Lot.
log_message Testing object purchase.
log_objects
buy_object chair 10 10
log_objects

20 SimScript: Control & Sync
# Have a remote client use the chair
remote_cmd $monkey_bot use_object chair sit
set_data avatar reading_skill 80
set_data book unlock
use_object book read
wait_until avatar reading_skill 100
set_recording on
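
The SimScript engine itself is not shown in the talk. Purely as an illustration of how small such a dispatcher can be, here is a hypothetical Python sketch (EchoClient and all identifiers are invented) that runs the command style above:

    class EchoClient:
        """Stub presentation layer so the sketch runs standalone."""
        def __getattr__(self, verb):
            return lambda *args: print(verb, *args)

    class ScriptRunner:
        """Each non-blank, non-comment line is 'command arg arg ...';
        $names are expanded from a variable table, then the command is
        forwarded to the presentation layer as a method call."""
        def __init__(self, client, variables=None):
            self.client = client
            self.vars = variables or {}

        def run(self, script):
            for raw in script.splitlines():
                line = raw.split("#", 1)[0].strip()   # drop comments/blanks
                if not line:
                    continue
                name, *args = line.split()
                args = [self.vars.get(a[1:], a) if a.startswith("$") else a
                        for a in args]                # expand $variables
                getattr(self.client, name)(*args)     # e.g. client.enter_lot(...)

    runner = ScriptRunner(EchoClient(), {"alpha_chimp": "lot_0451"})
    runner.run("enter_lot $alpha_chimp\nbuy_object chair 10 10")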

21 Client Implementation

22 Composable Client: event generators (scripts, cheat console, GUI) feed the game logic through the Presentation Layer

23 Composable Client: event generators (scripts, console, GUI) on one side, viewing systems (console, lurker, GUI) on the other, around the game logic and Presentation Layer; any / all components may be loaded per instance
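
A minimal sketch of the per-instance composition idea, assuming a Python-style component model (the real client composed native modules; ComposableClient and its parts are invented for illustration):

    class ComposableClient:
        """Any mix of event generators and viewing systems per instance."""
        def __init__(self, viewers=()):
            self.viewers = list(viewers)   # may be empty: the NullView case
            self.state = {}

        def apply(self, command, value):
            self.state[command] = value    # stand-in for real game logic
            for view in self.viewers:      # render only if a view is loaded
                view(command, value)

    # Developer build: a view attached.  Load build: no viewers at all
    # (the "NullView" configuration), so many clients fit on one driver CPU.
    dev_client = ComposableClient(viewers=[lambda c, v: print("view:", c, v)])
    load_client = ComposableClient()
    dev_client.apply("enter_lot", "ok")
    load_client.apply("enter_lot", "ok")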

24 Lesson: View & Logic Entangled in the original game client

25 Few Clean Separation Points existed between the game client's view and logic for a Presentation Layer to slot into

26 Solution: Refactored for Isolation, giving the client a clean view / Presentation Layer / logic split

27 Lesson: NullView Debugging. Without the (legacy) view system attached, tracing problems in the logic below the Presentation Layer was "difficult"

28 Solution: Embedded Diagnostics. Diagnostics and timeout handlers were built into the logic side, below the Presentation Layer
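
One plausible shape for such an embedded diagnostic, sketched in Python with invented names: a watchdog that reports when an expected state transition never arrives, since with no view attached a stall is otherwise invisible:

    import threading, time

    def expect_within(description, predicate, seconds, report=print):
        """Embedded timeout handler: emit a diagnostic if the expected
        state change never happens, instead of hanging silently."""
        def watch():
            deadline = time.time() + seconds
            while time.time() < deadline:
                if predicate():
                    return                 # transition observed, all good
                time.sleep(0.1)
            report(f"TIMEOUT: {description} not reached within {seconds}s")
        threading.Thread(target=watch, daemon=True).start()

    state = {}
    expect_within("game_state == inlot",
                  lambda: state.get("game_state") == "inlot", seconds=2)
    time.sleep(3)   # demo: the transition never happens, so the timeout fires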

29 Talk Outline: Automated Testing (60 minutes, in thirds): Design & Initial Implementation (architecture & design, test client, initial results); Lessons Learned: Fielding; Wrap-up & Questions

30 Mean Time Between Failure. Random events: log & execute; record client lifetime / RAM. It worked, just not relevant in the early stages of development: most failures / leaks found were not high-priority at that time, weighed against server crashes
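
A hedged sketch of the random-input idea (Python, invented names): log each seeded event before executing it, so any crash can be replayed exactly from the seed plus the surviving log:

    import random

    def mtbf_run(actions, seed, steps=10_000):
        """Random-input MTBF sketch: a seeded stream of events, logged
        before execution so the log survives a crash."""
        rng = random.Random(seed)
        log = []
        for step in range(steps):
            action = rng.choice(actions)
            log.append((step, action.__name__))   # log first...
            action()                              # ...then execute
        return log

    def login():  pass
    def logout(): pass
    print(len(mtbf_run([login, logout], seed=42, steps=100)))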

31 Monkey Tests: constant repetition of simple, isolated actions against servers. Very useful: direct observation of servers while under constant, simple input; server processes "aged" all day. Examples: Login / Logout, Enter House / Leave House
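
A minimal sketch of a monkey test loop, in Python with hypothetical names (the real tests ran against live TSO servers):

    import time

    def monkey(action, undo, duration_s=8 * 3600):
        """Monkey test sketch: one simple action pair repeated against the
        servers all day, both to watch them under steady input and to
        'age' the server processes."""
        passed = failed = 0
        end = time.time() + duration_s
        while time.time() < end:
            try:
                action()
                undo()
                passed += 1
            except Exception as exc:    # record the failure, keep pounding
                failed += 1
                print("monkey failure:", exc)
        return passed, failed

    # e.g. monkey(login, logout) or monkey(enter_house, leave_house)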

32 QA Test Suite Regression: high false-positive rate and high maintenance (new bugs vs. old bugs, shifting game design, "unknown" failures). Not helping in day-to-day work

33 Talk Outline: Automated Testing (60 minutes: ¼ / ½ / ¼): Design & Initial Implementation; Fielding: Analysis & Adaptations (non-determinism, maintenance overhead, solutions & results: monkey / sniff / load / harness); Wrap-up & Questions

34 Analysis: Testing Isolated Features

35 Analysis: Critical Path. Failures on the Critical Path block access to much of the game. Test case: can an avatar sit in a chair? That one check depends on the whole chain: login(), create_avatar(), buy_house(), enter_house(), buy_object(), use_object()

36 Solution: Monkey Tests. Critical-path primitives were placed in monkey tests: isolate as much as possible, repeat 400x, report only aggregate results (e.g. Create Avatar: 93% pass, 375 of 400). A "poor man's" unit test: feature-based rather than class-based, limited isolation, easy failure analysis / reporting
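
A sketch of the aggregate-only reporting, assuming Python and invented names; the output format mirrors the slide's example line:

    def monkey_report(name, primitive, repetitions=400):
        """'Poor man's unit test' reporting: repeat one isolated
        critical-path primitive and emit a single aggregate line rather
        than hundreds of individual failure reports."""
        passes = 0
        for _ in range(repetitions):
            try:
                primitive()
                passes += 1
            except Exception:
                pass                      # failures only dent the percentage
        pct = 100 * passes // repetitions
        return f"{name}: {pct}% pass ({passes} of {repetitions})"

    # With 375 of 400 passing: "Create Avatar: 93% pass (375 of 400)"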

37 Talk Outline: Automated Testing (60 minutes, in thirds): Design & Initial Implementation; Lessons Learned: Fielding (non-determinism, maintenance costs, solution approaches: monkey / sniff / load / harness); Wrap-up & Questions

38 Analysis: Maintenance Cost. High defect rate in game code (code coupling causing "side effects"; churn rate from frequent changes). Critical Path: fatal dependencies. High debugging cost: non-deterministic, distributed logic

39 Turnaround Time: tests ran too far removed from the introduction of defects

40 Critical Path Defects Were Very Costly

41 Solution: Sniff Test Pre-Checkin Regression: don’t let broken code into Mainline.

42 Solution: Hourly Diagnostics. The SniffTest stability checker emulates a developer: every hour, sync / build / test. Critical Path monkeys ran non-stop, providing a constant "baseline". Traffic generation kept the pipes full, the servers aging, and the DB growing
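
A sketch of the hourly checker, assuming Python; the sync / build / test commands below are placeholders, since the slides don't name the real SCM or build system:

    import subprocess, time

    # Hypothetical stand-ins for the real sync, build, and test commands.
    STEPS = [["p4", "sync"], ["make", "client"], ["python", "sniff_test.py"]]

    def hourly_stability_check():
        """Emulate a developer once an hour: sync, build, run the sniff
        test, and shout only when the baseline breaks."""
        while True:
            for step in STEPS:
                if subprocess.call(step) != 0:
                    print("BASELINE BROKEN at:", " ".join(step))
                    break                  # stop at the first failing step
            time.sleep(3600)               # repeat every hour, all day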

43 Analysis: CONSTANT SHOUTING IS REALLY IRRITATING. Bugs spawned many, many emails. Solution: report managers that aggregate / correlate across tests, filter known defects, and translate common failure reports to their root causes. Solution: data managers; against this information overload, automated workflow tools are mandatory
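
A sketch of the report-manager idea in Python; the known-defect table and its signatures are invented for illustration:

    # Hypothetical signature -> root-cause table, maintained by hand.
    KNOWN_DEFECTS = {
        "timeout in buy_object": "known: object-server defect",
        "lost connection":       "known: test-lab network flake",
    }

    def triage(failure_messages):
        """Report-manager sketch: correlate failures across tests, fold
        known signatures into their root causes, and surface only novel
        failures: one summary instead of an email per test."""
        novel, known = {}, {}
        for msg in failure_messages:
            for signature, cause in KNOWN_DEFECTS.items():
                if signature in msg:
                    known[cause] = known.get(cause, 0) + 1
                    break
            else:
                novel[msg] = novel.get(msg, 0) + 1
        return novel, known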

44 ToolKit Usability. Workflow automation; information management; developer / tester "push button" ease of use. XP flavour: make tests increasingly easy to run. They must be easier to run than to avoid running, and must solve problems "on the ground now"

45 Sample Testing Harness Views

46 Load Testing: Goals. Expose issues that only occur at scale; establish hardware requirements; establish that response is playable at scale; emulate user behaviour, using server-side metrics to tune test scripts against observed Beta behaviour; run full-scale load tests daily
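
One way to tune scripts against observed behaviour, sketched in Python; the action weights below are invented placeholders for numbers that would come from server-side Beta metrics:

    import random

    # Hypothetical action mix; in practice these weights are measured.
    OBSERVED_MIX = {"chat": 0.40, "use_object": 0.30, "buy_object": 0.15,
                    "enter_lot": 0.10, "buy_lot": 0.05}

    def next_action(rng=random):
        """Pick each scripted action with the frequency real players show,
        so generated load has a realistic shape."""
        actions, weights = zip(*OBSERVED_MIX.items())
        return rng.choices(actions, weights=weights, k=1)[0]

    print([next_action() for _ in range(5)])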

47 Load Testing: Data Flow. A load-control rig drives test-driver CPUs, each running banks of test clients against the server cluster. Client metrics, game traffic, resource metrics, and debugging data flow back through system monitors and internal probes to the load testing team
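
A sketch of one test-driver CPU, in Python with invented names; the control rig would tell each driver machine how many headless clients to spawn:

    import multiprocessing

    def run_client(client_id):
        """Stand-in for one headless (NullView) test client playing a
        scripted session; metrics would flow back out-of-band."""
        print(f"client {client_id}: running play session")

    def test_driver(first_id, count):
        """One test-driver CPU: spawn a bank of client processes against
        the server cluster and wait for them to finish."""
        procs = [multiprocessing.Process(target=run_client, args=(first_id + i,))
                 for i in range(count)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()

    if __name__ == "__main__":
        test_driver(first_id=0, count=8)   # one of many driver machines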

48 Load Testing: Lessons Learned. Very successful: "scale & break" runs reached up to 4,000 clients. Some requirements conflicted with regression: continue-on-fail behaviour, transaction tracking, and a NullView client that was a little "chunky" for load

49 Current Work. QA test suite automation; workflow tools; integrating testing into the design/development process for new features. Planned work: extend the Esper Toolkit for general use; port it to other Maxis projects

50 Talk Outline: Automated Testing (60 minutes, in thirds): Design & Initial Implementation; Lessons Learned: Fielding; Wrap-up & Questions (biggest wins / losses, reuse, Tabula Rasa: MMP & SSP)

51 Biggest Wins. Presentation Layer abstraction: the NullView client and scripted play sessions, powerful for both regression and load. Pre-checkin sniff test. Load testing. Continual usability enhancements. Team: upper-management commitment and a focused group of senior developers

52 Biggest Issues. Order of testing: the MTBF / QA test suites should have come last; they were not relevant while the game was still too early and unstable, and their find/fix lag sat too far from development. Changing TSO's development process: tool adoption was slow unless mandated. Noise: a constant flood of test results, the sheer number of game defects and testing defects, and non-determinism / false positives

53 Tabula Rasa How Would I Start The Next Project?

54 Tabula Rasa PreCheckin Sniff Test There’s just no reason to let code break.

55 Tabula Rasa. PreCheckin SniffTest: keep Mainline working. Hourly Monkey Tests: a useful baseline that also keeps the servers aging

56 Tabula Rasa. PreCheckin SniffTest: keep Mainline working. Hourly stability checkers: baseline for developers. Dedicated tools group: continual usability enhancements adapted the tools to meet "on the ground" conditions

57 Tabula Rasa. PreCheckin SniffTest: keep Mainline working. Hourly stability checkers: baseline for developers. Dedicated tools group: easy to use == used. Executive-level support: mandates were required to shift how entire teams operated

58 Tabula Rasa. PreCheckin SniffTest: keep Mainline working. Hourly stability checkers: baseline for developers. Dedicated tools group: easy to use == used. Executive support: radical shifts in process. Load test: early & often

59 Tabula Rasa. PreCheckin SniffTest: keep Mainline working. Hourly stability checkers: baseline for developers. Dedicated tools group: easy to use == used. Executive support: radical shifts in process. Load test early & often: break it before Live. Distribute test development & ownership across the full team

60 Next Project: Basic Infrastructure. Control harness for clients & components; reference client; self test; reference feature; regression engine; living doc

61 Building Features: NullView First. The NullView client is built first, on top of the basic infrastructure: control harness, reference client, self test, reference feature, regression engine, living doc

62 Build The Tests With The Code. Each feature (e.g. Login) lands in the NullView client together with its self test and monkey test: nothing gets checked in without a working monkey test

63 Conclusion. Estimated impact on an MMP: high. The sniff test kept developers working; load testing identified critical failures pre-launch; the Presentation Layer made play sessions scriptable. Cost to implement: medium, and much lower for SSP games. Repeatable, coordinated inputs at scale and pre-checkin regression were very significant schedule accelerators

64 Conclusion Go For It…

65 Talk Outline: Automated Testing (60 minutes, in thirds): Design & Initial Implementation; Lessons Learned: Fielding; Wrap-up & Questions

