Scalability Testing Results and Conclusions
Scope of Testing
- User Load
- Concurrent connections
- Transaction Rates
- Scalability
Approach
- Establish performance benchmarks
- Prepare the test environment and data
- Progressively load the system, and fine-tune
- Results
- Conclusions
Performance Goals
Performance benchmarking:
- Use baseline data available from Tanzania and Zambia
- Goal is to support a country on the scale of Nigeria: 4 times the workload of Tanzania: 4x population, 4x facilities, 4x system users, 12x Requisitions, based on modeling activity with monthly rather than quarterly replenishment cycles, as are currently run in Tanzania. (Detailed test metrics are listed in the appendix.)
Define an extreme-case hypothetical test scenario:
- Requisitions submitted for all Programs every month
- 25% of all monthly user activity occurs on the last day of the month
- Historical data preloaded in the DB
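The scale-up arithmetic behind these goals can be sketched as follows. The Tanzania baseline figures used here are placeholders for illustration, not the actual baseline data:

```python
# Illustrative arithmetic for the Nigeria scale-up targets.
# Tanzania baseline values below are hypothetical placeholders.
TZ_FACILITIES = 5_000       # hypothetical Tanzania facility count
TZ_USERS = 10_000           # hypothetical Tanzania named-user count
POPULATION_FACTOR = 4       # Nigeria is roughly 4x Tanzania
CYCLES_PER_QUARTER = 3      # monthly instead of quarterly replenishment

ng_facilities = TZ_FACILITIES * POPULATION_FACTOR
ng_users = TZ_USERS * POPULATION_FACTOR
# 4x facilities submitting 3x as often yields the 12x requisition load
requisition_factor = POPULATION_FACTOR * CYCLES_PER_QUARTER

print(ng_facilities, ng_users, requisition_factor)  # → 20000 40000 12
```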
Steps
Environment Setup
- App Server and Web Server deployed on a shared VM
- Production Database Engine deployed on a VM
- Reporting Database Engine deployed on a separate VM
- Nagios used for system monitoring
- JMeter running on multiple machines to generate simulated user activity
Steps, continued
Preparation
- Reference data (used to populate new Requisitions, etc.)
- Historical transaction data
- JMeter scripts to synthesize all user activities
Execution
- Merge reference data into the JMeter scripts
- Execute the JMeter scripts
- Profile the system with YourKit to identify any memory leaks
- Analyze system logs to identify performance bottlenecks
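The "merge reference data into the JMeter scripts" step can be sketched as generating a CSV file that a JMeter CSV Data Set Config element reads to parameterize each simulated user. The field names and values here are hypothetical:

```python
# Sketch: emit reference data as a CSV for JMeter's CSV Data Set Config.
# Field names and sample rows are hypothetical illustrations.
import csv

facilities = [
    {"facility_code": "F0001", "program": "ART", "period": "2014-01"},
    {"facility_code": "F0002", "program": "TB",  "period": "2014-01"},
]

with open("requisition_refdata.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["facility_code", "program", "period"])
    writer.writeheader()
    writer.writerows(facilities)
```

Each JMeter thread would then pull one row per iteration, so every simulated user submits a requisition for a distinct facility/program/period combination.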
First Round of Testing
Tests run with the three-VM system configuration (Production Database Server, Replication/Reporting Database Server, App+Web Server).

Target Country | Duration of Run | Number of Users | Total Number of User Requests | Time-out Rate %*
(preliminary stress testing) | 5 min | | |
(preliminary stress testing) | 10 min | | |
Nigeria | 30 min | | |

* Percentage of cumulative timed-out requests
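The "Time-out Rate %" column is the cumulative share of simulated requests that timed out. A minimal sketch of that calculation, with illustrative counts:

```python
# Cumulative time-out rate, as reported in the test-run tables.
def timeout_rate(timed_out: int, total_requests: int) -> float:
    """Percentage of cumulative timed-out requests."""
    return 100.0 * timed_out / total_requests

# e.g. 1,500 timeouts across 300,000 requests (illustrative numbers)
print(round(timeout_rate(1_500, 300_000), 2))  # → 0.5
```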
Server Configuration, first round of tests
Performance Tuning & System Refinements
Application tuning:
- JSON payload optimization for the Save-Requisition and Approve-Requisition workflows
- Non-Full-Supply product data selectively loaded while viewing a requisition
- Database indexes created to improve query response time
- Bug fixing
Environment modifications & tuning:
- Added an additional VM to separate the App Server and the Web Server
- Apache configuration optimized to support higher user load
- c3p0 (connection pooling) tuned to maximize usage of the database connection pool
- Tomcat configuration optimized to maximize the number of concurrent requests
- Distributed JMeter instances across multiple workstations to generate a simulated load of 10,000 parallel users
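The c3p0 tuning mentioned above is typically done through a `c3p0.properties` file on the application's classpath. The values below are illustrative examples, not the figures used in these tests:

```properties
# c3p0.properties — illustrative pool sizing (example values only)
c3p0.initialPoolSize=20
c3p0.minPoolSize=20
c3p0.maxPoolSize=100
c3p0.acquireIncrement=5
c3p0.maxIdleTime=300
```

The Tomcat side of the tuning would involve raising the HTTP Connector's `maxThreads` (and `acceptCount`) in `server.xml` so the servlet container can service more concurrent requests than its defaults allow.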
Second Round of Testing
Tests run with the four-VM system configuration (Production Database Server, Replication/Reporting Database Server, App Server, Web Server).

Target Load | Duration of Run | Number of Users | Total Number of User Requests | Time-out Rate %*
Tanzania | 30 min | | |
Nigeria | 30 min | | |

* Percentage of cumulative timed-out requests
Server Configuration, second round of tests
Summary of Results The projected workloads for Tanzania and Zambia were covered by the system running on a three-server environment. The system scales to support substantially larger workloads by running the Application Server and the Web Server on individual dedicated machines.
Conclusions
A three-server environment would be the minimum configuration for supporting the workloads of Tanzania or Zambia. System performance can be improved by:
- Using individual dedicated machines for the Application Server and the Web Server
- Adding additional application servers and a load balancer
Considerations In retrospect, our worst-case testing scenario was excessive. No organization would allow all their health centers to wait until the last minute to submit their requisitions. In order to maintain an even workload at the warehouses and for the delivery fleet, the organization would instead divide the health centers into groups, and schedule their replenishment-cycle activities (including deadlines for submitting their requisitions) uniformly throughout the month.
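The staggered schedule described above can be sketched as a round-robin assignment of facilities to submission-deadline days. The facility IDs and the 20-working-day month are illustrative assumptions:

```python
# Sketch: spread requisition deadlines evenly across the working days
# of the month. Facility IDs are synthetic; 20 working days assumed.
def stagger_deadlines(facility_ids, working_days=20):
    """Assign each facility a submission-deadline day, round-robin."""
    return {fid: (i % working_days) + 1 for i, fid in enumerate(facility_ids)}

facilities = [f"F{n:04d}" for n in range(100)]
schedule = stagger_deadlines(facilities)

# each deadline day now carries an even share of the facilities
per_day = {}
for day in schedule.values():
    per_day[day] = per_day.get(day, 0) + 1
print(min(per_day.values()), max(per_day.values()))  # → 5 5
```

With deadlines spread this way, the end-of-month spike modeled in the worst-case scenario flattens into a steady daily load.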
Considerations, continued Our testing tools (i.e., the set of computers running JMeter to simulate user activity) had a stable internet connection to the VMs hosted in the AWS cloud. The absence of a stable internet connection could render a cloud-hosted production system totally inaccessible at random times throughout the work day.
Test criteria: Scalability Test Data
MoH organizational infrastructure and user-base metrics:

Metric | Zambia | Tanzania | Nigeria
Number of Facilities | | |
Number of Named Users | | |
Average Number of Programs, and associated Requisitions per Month, per Facility | 4 | 4 | 5
Total Number of Requisitions submitted per month | | |
Total Products Available | 2000 | 2000 | 2000
Average Number of Full Supply products per R&R | 35 for 85% of R&Rs (e.g. TB, Mal, ART); 200 for 10% of R&Rs (Regional Hosp.); 400 for 5% of R&Rs (National Hosp.)
Average Number of Non-Full-Supply products to be loaded on the R&R per Program | 100
Average Number of Non-Full-Supply products to be added to the R&R per Program | 10

Other supply-chain operating parameters:
- Number of Facility Types: 5
- Product-to-Facility-Type mappings: 20% mapped to all facility types; 20% mapped to only one facility type; 60% mapped to 3 facility types
- Number of levels in the approval hierarchy: 2
- Number of requisition groups: 240 (avg 100 facilities per group)
- Number of geographic zone levels: 3 (country / province / district)
- Number of geographic zones: 35 provinces/states, 25 districts each
- Replenishment-cycle schedules: All Facilities submit Requisitions for all their Programs on a common monthly schedule; all full-supply Products must be ordered, reviewed and approved.
Test criteria, continued: Concurrent Users and Transaction Volumes on the end-of-month busiest day
Assume users wait until the very last day of the month to complete and submit 25% of all the Requisitions that are due for the month.

Transaction | Nigeria Concurrent Users during the peak hour | Nigeria Concurrent actions during the peak hour | Nigeria average actions per minute
Load for R&Rs initiated during the peak hour on the last day of the period | | | (Assume 15% of these "last-minute" R&Rs are initiated during the peak hour of the last day)
Load for R&Rs submitted during the peak hour on the last day of the period | | | (Assume 50% of these "last-minute" R&Rs are submitted during the peak hour of the last day)
Load for R&Rs authorized during the peak hour on the last day of the period | | | (Assume 10% of these "last-minute" submitted R&Rs are separately authorized during the peak hour of the last day)
Load for viewing R&Rs waiting for my Approval | | | (Assume 10% of these "last-minute" submitted R&Rs are retrieved for review and approval during the peak hour of the last day)
Load for Reviewing and Approving R&Rs | | | (Assume 10% of these "last-minute" submitted R&Rs are approved during the peak hour of the last day)
Load for Converting R&Rs to Orders | 5 | 10 | 2 (Assume 10 batches of approved R&Rs (avg 300 R&Rs per batch) are converted to orders during the peak hour, and 2 of these happen during the same minute)
Load for viewing my R&R | | | (Assume 25% of the R&Rs for the month are currently in the pipeline, and 10% of their owners are checking the status of their R&R during the peak hour of the last day)
Total Load during the peak hour of the last day of the month | | |
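The peak-hour arithmetic implied by these assumptions can be sketched as follows. The monthly requisition total is a placeholder value, not a figure from the tests:

```python
# Peak-hour load derived from the stated assumptions.
# MONTHLY_REQUISITIONS is a hypothetical placeholder.
MONTHLY_REQUISITIONS = 12_000
last_day = 0.25 * MONTHLY_REQUISITIONS  # 25% left to the last day

peak_hour_actions = {
    "initiate":  0.15 * last_day,  # 15% initiated in the peak hour
    "submit":    0.50 * last_day,  # 50% submitted in the peak hour
    "authorize": 0.10 * last_day,  # 10% separately authorized
    "approve":   0.10 * last_day,  # 10% approved
}

total = sum(peak_hour_actions.values())
print(int(last_day), int(total))
```

Dividing each peak-hour figure by 60 gives the average-actions-per-minute column of the table.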