
Slide 1: Performance Measurement of a Prosspero-Based Trouble Ticket System
Nagendra Nagarajayya, Staff Engineer, Sun Microsystems, Inc.
Vincent Perrot, OSS/J Architect, Sun Microsystems, Inc.

Slide 2: Business Problem
Everyone is familiar with the OSS/J promise:
> Significantly reduce integration and maintenance costs
> Significantly improve business agility
> Put Service Providers back in the driver's seat
> A safe and robust foundation
What is far less understood: how does my OSS perform once OSS/J APIs are adopted?
> No methodology to measure performance
> No benchmarks

Slide 3: Why a Methodology/Specification?
Measure the performance of OSS/J applications in a standard, predictable way
Establish bases for comparison:
> Metrics and measures – operations per second
> Costs – $ per operation
Measurements reflect the performance of the different OSS components
Model the environment constraints

Slide 4: Working Towards a Solution: Trouble Ticket Scenario
[Architecture diagram: application server and communication layers in a provider/consumer model, exercised through the OSS/J APIs by customer-facing and network-facing workloads; estimated vs. actual performance]

Slide 5: Benchmark Design: Load Generation
Uses an open source business delegate named ossj-clients, providing:
> Support for generic operations such as create, update, etc.
> Support for all profiles:
– Java (Java EE/JMS)
– XML over JMS
– Web Services (WS)
> Multi-threading, to generate load and scale
> Support for customization (see the sketch below):
– Sequences of operations read from property files
– Implementations of entities read from property files
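Because operation sequences and entity implementations come from property files, a client can be pointed at a new workload without recompiling. Below is a minimal sketch of what such a property-driven, multi-threaded generator could look like; the property keys (ops.sequence, client.threads) and the invoke() dispatch are illustrative assumptions, not the actual ossj-clients API.

    import java.io.FileInputStream;
    import java.util.Properties;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class LoadGenerator {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.load(new FileInputStream("benchmark.properties"));

            // e.g. ops.sequence=createTicket,getByKey,updateTicket
            String[] sequence = props.getProperty("ops.sequence").split(",");
            int threads = Integer.parseInt(props.getProperty("client.threads", "100"));

            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int i = 0; i < threads; i++) {
                pool.submit(() -> {
                    for (String op : sequence) {
                        invoke(op); // dispatch over the configured profile
                    }
                });
            }
            pool.shutdown();
        }

        // Placeholder: the real client would call the OSS/J TT API over
        // the chosen profile (Java EE/JMS, XML over JMS, or Web Services).
        static void invoke(String op) {
        }
    }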

Slide 6: Benchmark Design: Load Models
Typical customer-facing load (a weighted-selection sketch follows the lists):
> Create ticket operations (18%)
> Update ticket (28%)
> GetByKey (36%)
> Cancel ticket (8%)
> Close ticket (10%)
> GetAll operations (0%)
Typical network-facing load:
> Update ticket operations (40%)
> Create ticket (10%)
> GetByKey (30%)
> Cancel ticket (10%)
> Close ticket (12%)
> GetAll operations
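One simple way to realize such a mix is weighted random selection over the operation table. The sketch below encodes the customer-facing weights from this slide; the structure is an assumption for illustration, not taken from the benchmark code (GetAll is omitted since its weight is 0%).

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Random;

    public class WorkloadMix {
        // Customer-facing operation mix; weights sum to 100.
        private static final Map<String, Integer> CUSTOMER_FACING = new LinkedHashMap<>();
        static {
            CUSTOMER_FACING.put("createTicket", 18);
            CUSTOMER_FACING.put("updateTicket", 28);
            CUSTOMER_FACING.put("getByKey", 36);
            CUSTOMER_FACING.put("cancelTicket", 8);
            CUSTOMER_FACING.put("closeTicket", 10);
        }

        private static final Random RNG = new Random();

        // Pick an operation with probability proportional to its weight.
        static String nextOperation() {
            int r = RNG.nextInt(100);
            int cumulative = 0;
            for (Map.Entry<String, Integer> e : CUSTOMER_FACING.entrySet()) {
                cumulative += e.getValue();
                if (r < cumulative) {
                    return e.getKey();
                }
            }
            throw new IllegalStateException("weights must sum to 100");
        }
    }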

Slide 7: Benchmark Design: Data (Entities)
A create trouble ticket operation uses the following attributes (a value-holder illustration follows):
> TroubleState
> TroubleStatus
> TroubledObject
> PreferredPriority
> TroubleDetectionTime
> TroubleDescription
> Originator
> TroubleFound
> TroubleType
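For illustration only, the create payload can be pictured as a plain value holder. The class and sample values below are hypothetical; the OSS/J TT API defines its own value types, whose names and accessors may differ.

    import java.util.Date;

    public class CreateTicketData {
        // All sample values are invented for illustration.
        String troubleState = "OPEN_ACTIVE";
        String troubleStatus = "SCREENED";
        String troubledObject = "circuit-42";
        int preferredPriority = 1;
        Date troubleDetectionTime = new Date();
        String troubleDescription = "Loss of signal on access circuit";
        String originator = "customer-portal";
        String troubleFound = "LOSS_OF_SIGNAL";
        String troubleType = "SERVICE_AFFECTING";
    }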

Slide 8: Scenario 1: 100 Clients / 1 Hour
Metric: number of operations via the OSS/J interface
The TT system is pre-loaded with 10,000 tickets; client groups then start in phases (a scheduling sketch follows):
> Create TT clients start immediately and create tickets for 5 minutes
> GetByKey and Update clients start after 5 minutes
– Primary keys of the created tickets are used
– The benchmark monitor communicates the keys
> The GetAll operation starts after 15 minutes
> Cancel clients start after 20 minutes
– They cancel pre-created tickets in this version
> Close clients start after 25 minutes
– They close pre-created tickets in this version
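The phase offsets map naturally onto a scheduler. A minimal sketch, assuming hypothetical start methods for the client groups (not the benchmark harness itself):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import static java.util.concurrent.TimeUnit.MINUTES;

    public class Scenario1 {
        public static void main(String[] args) {
            ScheduledExecutorService timer = Executors.newScheduledThreadPool(4);

            startCreateClients();                                         // t = 0
            timer.schedule(Scenario1::startGetByKeyAndUpdateClients, 5, MINUTES);
            timer.schedule(Scenario1::startGetAll, 15, MINUTES);
            timer.schedule(Scenario1::startCancelClients, 20, MINUTES);
            timer.schedule(Scenario1::startCloseClients, 25, MINUTES);
            timer.schedule(timer::shutdown, 60, MINUTES);                 // end of the 1-hour run
        }

        // Placeholders for the ossj-clients worker groups.
        static void startCreateClients() {}
        static void startGetByKeyAndUpdateClients() {}
        static void startGetAll() {}
        static void startCancelClients() {}
        static void startCloseClients() {}
    }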

Slide 9: Model: Expected Metric
The number of trouble ticket operations is calculated as:
> TT Ops = (Create TT ops + (Create TT ops − Cancel ops) + (Create TT ops − Close TT ops) + (GetByKey ops − Create TT ops) + (Update ops − Create TT ops)) / total types of operation
Example: assuming 100K tickets are created in 1 hour in the customer-facing workload, ~91K TT ops (a worked computation follows):
> Create TT (18%) = 100,000
> getByKey (36%) = 36 × 100,000 / 18 = 200,000 (achieved metric should be within ±5%)
> cancelTicket (8%) = 5,000, voluntarily limited (±5%)
> closeTicket (10%) = 5,000, voluntarily limited (±5%)
> updateTicket (28%) = 28 × 100,000 / 18 ≈ 155,000 (±5%)
> TT Ops = (100,000 + (100,000 − 5,000) + (100,000 − 5,000) + (200,000 − 100,000) + (155,000 − 100,000)) / 5
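Plugging the modeled counts into the formula gives the expected metric directly. A minimal sketch in Java (note the arithmetic as written comes to 89,000, in the same ballpark as the slide's ~91K estimate):

    public class ExpectedMetric {
        public static void main(String[] args) {
            long create = 100_000, getByKey = 200_000, update = 155_000;
            long cancel = 5_000, close = 5_000;

            // TT Ops = (Create + (Create - Cancel) + (Create - Close)
            //           + (GetByKey - Create) + (Update - Create)) / 5
            long ttOps = (create
                    + (create - cancel)
                    + (create - close)
                    + (getByKey - create)
                    + (update - create)) / 5;

            System.out.println("Expected TT ops: " + ttOps); // 89000
        }
    }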

Slide 10: Calculate Achieved and Cost Metric
Calculate the achieved metric:
> TT Ops = sum of ticket operations / number of operation types
> Should be within ±5% of the modeled expected metric (a tolerance-check sketch follows)
– For example:
– TTs created = 93,937
– TTs retrieved by GetByKey = (not within ±5%)
– UpdateTicket = (limited to +5%)
– CancelTicket = 4,948 (voluntarily limited to 5,000)
– CloseTicket = 4,948 (voluntarily limited to 5,000)
– TT Ops = (93,937 + (93,937 − 4,948) + (93,937 − 4,948) + (GetByKey − 93,937) + (Update − 93,937)) / 5
Cost metric (cost of operations):
> Modeled metric vs. actual achieved metric
> $ per TT op/sec = $ cost of hardware and software / (total operations / number of profiles)
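The ±5% tolerance check is simple to express in code. A sketch with illustrative numbers only, not the benchmark's actual counts:

    public class ToleranceCheck {
        // True when the achieved count is within 5% of the modeled count.
        static boolean withinTolerance(long achieved, long expected) {
            return Math.abs(achieved - expected) <= 0.05 * expected;
        }

        public static void main(String[] args) {
            System.out.println(withinTolerance(95_000, 100_000)); // true
            System.out.println(withinTolerance(93_000, 100_000)); // false
        }
    }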

Slide 11: Monitoring Requirements
A real OSS environment has monitoring enabled
Monitoring is therefore a requirement in the benchmark, to measure the cost of monitoring itself:
> System monitoring must be enabled
– e.g. CPU usage through vmstat (see the sampling sketch below)
> The application must be monitored
– Container, TT component, JVM
> The messaging system (topics and queues) must be monitored
– Number of messages (in/out), rate, and size
> The network must be monitored
– Number of packets (in/out), size of packets
> Storage must be monitored
– Read/write requests per second, %busy, %wait, response time
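As one concrete example of the system-level piece, the sketch below samples CPU statistics by running vmstat from Java during a run (Solaris/Unix only). The 5-second interval is an arbitrary choice, and a real harness would parse and archive the samples rather than print them.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class VmstatSampler {
        public static void main(String[] args) throws Exception {
            // "vmstat 5" emits one sample line every 5 seconds.
            Process p = new ProcessBuilder("vmstat", "5").start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println(line); // log the raw sample
                }
            }
        }
    }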

Slide 12: Reporting Requirements (2)
Reports are generated automatically, with tools, to standardize reporting
Some of the sections:
> Diagrams of measured and priced systems
> Measured configuration
– Hardware, software, network, storage
> Metrics
– Expected
– Achieved
> Commercial off-the-shelf software
> Tuning options
> Driver section
> Pricing

Slide 13: Report: Results from an Actual Benchmark, premioss-tt v2.23 on a Sun T2000
Diagrams of measured and priced systems
> Measured system is a T2000
Measured configuration:
> Sun T2000 (2 P, 2 cores, 2 threads), 1 GHz, 16 GB of memory, 2 internal hard disks
> OS: Sun Solaris 10
> OSS/J TT component:
– FROX premioss-tt v2.23
> Middleware:
– Application server: Sun JES FY05Q4
– JMS server: Sun JES FY05Q4
> Database server: none

Slide 14: Report (2): Results from an Actual Benchmark
[Diagrams of the measured and priced systems]

Slide 15: Reporting Requirements (3): Results from an Actual Benchmark
ACHIEVED METRIC
> JVT TT Create OPS =
> JVT TT Update OPS =
> JVT TT GetByKey OPS =
> JVT TT Cancel OPS = 4,948
> JVT TT Close OPS = 4,948
> Total JVT TT OPS =
ACHIEVED METRIC OPS =
COST METRIC
> Achieved $ per TT OPS/hr = $0.34 (assuming the cost of the system is about $20K)

Slide 16: Conclusion
The first version of the benchmark specification achieves measuring the cost of operating a TT system
More is needed:
> Specifying the ticket life cycle with state transitions
> Response times
> The sequence of operations with expected behavior
> Linking to:
– The Inventory API
– Customers
– Products and services
> Additional scenarios

Slide 17: Recap – TTPerf Specification 1.1
Measure the current behavior of your certified TT implementation
Improve and compare the performance of different OSS/J TT profiles
Improve performance by changing:
> Hardware (e.g. system, CPU, memory, or disk)
> The OS, the Java virtual machine, or the middleware stack
> Your application itself, or another certified implementation
Finally, measure in terms of operations and cost:
> $ per TT op/sec

Slide 18: More Information
TTPerf specification
TTPerf results
TTPerf case study
> TTPerf project
> Open source (CDDL license)
– Generic OSS/J client code
– https://ossj-clients.dev.java.net
OSS Trouble Ticket API

Slide 19: TTPerf 1.1
Nagendra Nagarajayya
Vincent Perrot

