Bottlenecks Stress Test Demo 20170504.

1 Bottlenecks Stress Test Demo

2 Contents
- Some Considerations on Stress Testing
- Highlights of Discussion in Testperf: Stress Tests for Danube
  - Test Cases in Discussion
- Preliminary Works
  - Determine baseline for throughput
  - Life-cycle events for ping
- Stress Test Demo
  - Testing Contents
  - Video Demo

3 Some Considerations on Stress Testing
Stress testing is a kind of testing that determines the robustness of software by testing beyond the limits of normal operation. Stress testing does not break the system for fun: it provides users with a level of confidence in the system, and it allows observation of how the system reacts to failures. An additional purpose behind this madness is to make sure that the system fails and recovers gracefully, a quality known as recoverability.
[Diagram: Load Testing vs. Stress Testing]

4 Stress Testing for Danube – Highlights Summary
Stress testing principles and test cases were discussed in Testperf.
Test Requirements: from a user perspective, the stress test should be:
- Easy to understand (what the test does and how the system is being stressed)
- Easy to run (i.e. "out of the box": having deployed the OPNFV platform, it should be possible to run the test with minimal effort)
- Built, where possible, on existing proven methods, test cases and tools
- Able to work with all OPNFV release scenarios
Additional points:
- For Danube the stress test result is not part of the release criteria; for future releases a stress test threshold (metric TBD) should be part of the release criteria
- It should be possible to increase and decrease the load ("stress") and monitor the effect on the system with a simple performance metric
- The application running on the SUT must be highly optimized to avoid being the bottleneck

5 Test Cases in Discussion - Highlights Summary
[Diagram: five test cases grouped by load category. Data-plane traffic (determine baseline test cases): TC1 (throughput) and TC2 (CPU limit). Life-cycle events (perform VM pairs/stacks testing): TC3 (ping), TC4 (throughput) and TC5 (CPU limit), each spawning and destroying VM pairs (VM1, VM2). Goals: availability and robustness confidence, a possible release criterion, general heavy load within a 1-hour maximum, easy to run and easy to understand.]

6 Test Cases in Discussion - Highlights Summary
Data-plane traffic for a virtual or bare-metal POD:
- TC1 – Determine baseline for throughput: initiate one virtual/bare-metal POD and generate traffic; increase the packet size; measure throughput, latency and packet loss up to x%
- TC2 – Determine baseline for CPU limit: decrease the packet size; measure CPU usage up to x%
Life-cycle events for VM pairs/stacks:
- TC3 – Perform life-cycle events for ping: spawn VM pairs, do pings and destroy the VM pairs; increase the number of simultaneous live VMs; measure latency and packet loss up to x%; record testing time, count of failures, OOM killer?
- TC4 – Perform life-cycle events for throughput: spawn VM pairs, generate traffic and destroy the VM pairs; serially or in parallel increase the packet size; measure max load and packet loss vs. load; throughput, latency and packet loss up to x% for either pair
- TC5 – Perform life-cycle events for CPU limit: serially or in parallel decrease the packet size; measure CPU usage up to x%
A conceptual sketch of TC3 follows below.
Former update: Discussion address:
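To make TC3 concrete, here is a minimal conceptual sketch in shell. It assumes an OpenStack cloud reachable through the openstack CLI and a hypothetical Heat template vm_pair.yaml that creates one VM pair and exposes the second VM's address as the output vm2_ip; the actual test is driven by Bottlenecks and Yardstick (posca_factor_ping), not by this script.

```bash
#!/bin/bash
# Conceptual sketch of TC3 (life-cycle events for ping), not the actual
# Bottlenecks implementation. Assumes: an OpenStack cloud reachable via
# the `openstack` CLI and a hypothetical Heat template vm_pair.yaml that
# spawns two VMs and exposes the second VM's IP as the output "vm2_ip".

for NUM_PAIRS in 5 10 20 50 100; do           # increase simultaneous live VMs
    echo "Spawning ${NUM_PAIRS} VM pairs..."
    for i in $(seq 1 "${NUM_PAIRS}"); do      # spawn the pairs in parallel
        openstack stack create --wait -t vm_pair.yaml "pair-${i}" &
    done
    wait                                      # block until all stacks are built

    for i in $(seq 1 "${NUM_PAIRS}"); do      # ping across each pair
        VM2_IP=$(openstack stack output show "pair-${i}" vm2_ip -f value -c output_value)
        ping -c 5 "${VM2_IP}" || echo "pair-${i}: ping failed"
    done

    for i in $(seq 1 "${NUM_PAIRS}"); do      # destroy the pairs
        openstack stack delete -y "pair-${i}"
    done
done
```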

7 Preliminary Works in Danube
[Diagram: preliminary work from Bottlenecks & Yardstick on TC1 (determine baseline for throughput): a Load Manager drives the "Data-Plane Traffic" load category.]

8 Preliminary Works in Danube
[Diagram: type 1 testing flow for TC3 (perform life-cycle events for ping). The Load Manager starts an initial stress test in which Bottlenecks drives Yardstick to create VM pairs, run pings in parallel, and destroy the pairs (timestamps t0 through t5, T6?). After each round a criteria check either increases the load (5, 10, 20, 50, 100, 200? stacks, each step gated on whether the VMs were successfully built) and iterates, or ends when the time runs out or the test fails. Results feed a visualization component. Known issues: quotas, and that the exact time needed for the expected number of VMs cannot be determined.]
A sketch of this flow follows below.
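Here is a minimal sketch of that iterative flow, assuming a hypothetical run_stress_test helper that stands in for one Bottlenecks/Yardstick round at a given load and returns non-zero on failure; the load steps and the one-hour cap come from the slides.

```bash
#!/bin/bash
# Sketch of the testing flow above: start at a small load, run one round,
# check the pass/fail criteria, and increase the load until the criteria
# fail or the time budget runs out. run_stress_test is a hypothetical
# helper standing in for one Bottlenecks/Yardstick round.

START=$(date +%s)
MAX_SECONDS=3600                       # "1 hour max" from the discussion

for LOAD in 5 10 20 50 100 200; do     # load steps from the flow diagram
    ELAPSED=$(( $(date +%s) - START ))
    if [ "${ELAPSED}" -ge "${MAX_SECONDS}" ]; then
        echo "Time budget exhausted before load ${LOAD}"
        break
    fi
    if ! run_stress_test "${LOAD}"; then   # criteria check
        echo "Criteria failed at load ${LOAD}"
        break
    fi
    echo "Load ${LOAD} passed; increasing load"
done
```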

9 Stress Testing Demo for Ping
Testing Contents
- Execute the stress test and compare results for two different installers (Installer A and Installer B):
  - Up to 100 stacks for Installer A (completes the test)
  - Up to 40 stacks for Installer B (the system fails to complete the test)
Testing Steps (combined into a single session below)
1. Enter the Bottlenecks repo: cd /home/opnfv/bottlenecks
2. Prepare the virtual environment: . pre_virt_env.sh
3. Execute the ping test case: bottlenecks testcase run posca_factor_ping
4. Clean up the virtual environment: . rm_virt_env.sh
Video Demo
- The link will be provided on the official OPNFV YouTube channel
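The four steps above, collected into one shell session exactly as listed on the slide:

```bash
cd /home/opnfv/bottlenecks                    # enter the Bottlenecks repo
. pre_virt_env.sh                             # prepare the virtual environment
bottlenecks testcase run posca_factor_ping    # run the ping stress test case
. rm_virt_env.sh                              # clean up the virtual environment
```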

10 Testing Results for the Demo
Testing for Installer A
- Up to 100 stacks configured in the configuration file
- One stack hit an SSH error when the stack count was raised to 50
- As the stack count approached 100, most of the errors were Heat response timeouts
- All 100 stacks were established successfully in the end
Testing for Installer B
- Up to 40 stacks configured in the configuration file
- When the stack count reached 30, the system failed to create all the stacks: 21 stacks either failed to create or remained stuck in creation
- To verify the system performance, we cleaned up and ran the test again; at a stack count of 20 the same situation occurred, i.e. the system performance degrades
- Unlike the test for Installer A, we performed this verification step because the system clearly malfunctioned
- Not shown in the demo: after 3 rounds of the stress test, the system failed to create even 5 stacks

11 Thank You

