SONIC-7: Tuning and Scalability for Sonic Enterprise Messaging
Analyzing, testing and tuning ESB/JMS performance
David Hentchel, Principal Solution Engineer
Agenda
Methodology: Review the recommended approach to project and procedures
Analysis: Understand how to characterize performance requirements and platform capacity
Testing: Learn how to simulate performance scenarios using the Sonic Test Harness
Tuning: Know the top ten tuning techniques for the Sonic Enterprise Messaging backbone
Setting Performance Expectations
D I S C L A I M E R
System performance is highly dependent on machine, application and product version. Performance levels described here may or may not be achievable in a specific deployment. Tuning techniques often interact with specific features, operating environment characteristics and load factors. You should test every option carefully and make your own decision regarding the relative advantages of each approach. Note that Performance Engineering is an imprecise science, sometimes referred to as a Black Art.
Methodology: Agenda
Methodology: Review the recommended approach to project and procedures
Analysis: Understand how to characterize performance requirements and platform capacity
Testing: Learn how to simulate performance scenarios using the Sonic Test Harness
Tuning: Know the top ten tuning techniques for the Sonic Enterprise Messaging backbone
“Performance Engineers need a deep knowledge of the architecture and a common sense approach to experimentation.”
Performance Concepts and Methodology
Terms and definitions
Performance engineering concepts
Managing a performance analysis project
Skills needed for the project
Performance tools
Project timeline
Main Point: In this section we cover general concepts that apply to almost any performance project; later we get to ESB specifics.
Outline: I. Performance Concepts and Methodology [10 minutes]
Performance Engineering Terms
“Load” = “Sessions” × “Delivery Rate”
“Latency” = ReceiveTime − SendTime
“Variable” = client param, app param, system param
Other terms: “Platform”, “System Metric”, “System Under Test”, “Test Harness”, “Test Components”, “External Components”
Main Point: Be able to identify these components very early in the project, because they allow you to scope the effort.
Outline: I.A. Terms and definitions
Script: In standard QA tests, we apply a known input to the system and determine whether the output is correct. In performance tests, we aren’t concerned with the input or output content, but rather the pattern of the flow and the rate of arrival. We will use the techniques of the scientific experimental method to control, run and measure each performance test. Don’t get too ambitious – it’s easy to expand scope later, but starting with a very complex System Under Test can jeopardize your ability to get in enough test iterations to draw meaningful conclusions. But never lose track of the thread that connects the tests you are running with the real business costs, benefits and risks you are there to evaluate.
Expert Tip: Limit scope to those test components that are critical to performance and under your control
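The two defining formulas above (Load = Sessions × Delivery Rate; Latency = ReceiveTime − SendTime) can be sketched as a small metrics helper. This is illustrative only; the class and field names are invented, not part of any Sonic API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TestRun:
    sessions: int            # concurrent client sessions
    delivery_rate: float     # messages/sec sent by each session
    send_times: List[float]  # per-message send timestamps (seconds)
    recv_times: List[float]  # per-message receive timestamps (seconds)

    @property
    def load(self) -> float:
        # "Load" = "Sessions" * "Delivery Rate" (aggregate offered msg/sec)
        return self.sessions * self.delivery_rate

    def latencies(self) -> List[float]:
        # "Latency" = ReceiveTime - SendTime, computed per message
        return [r - s for s, r in zip(self.send_times, self.recv_times)]

run = TestRun(sessions=10, delivery_rate=50.0,
              send_times=[0.000, 0.010], recv_times=[0.004, 0.017])
print(run.load)              # 500.0 msg/sec offered load
print(max(run.latencies()))  # worst-case per-message latency, seconds
```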
Concepts: Partitioning Resource Usage
Partitionable resources can be broken down as the sum of the contributions of each test component on the system, e.g.: Load (writes/sec) = ∑ over services of [ Message rate (msg/sec) × Overhead (writes/msg) ], and similarly for % CPU
Total resource usage is limited by system capacity; it becomes the bottleneck as it nears 100%
Goal is linear scalability as additional resource is added (vertical versus horizontal scalability)
Total latency is the sum across all resource latencies, i.e.: Latency = CPU_time + Disk_time + Socket_time + wait_sleep
Main Point: This is a reality check that will help you triage which areas will require attention.
Outline: I.B.1. Partitioning resource usage
Script: Sequential operations that use basic system resources in a predictable, scalable way tend to be cumulative. While extrapolation is never safe, interpolation can be very accurate in these cases. Another useful exercise is to execute very simple, single-dimensional tests that help you derive the theoretical best-possible performance for some aspect of your performance equation. For sequentially structured processes, it is extremely useful to analyze the time of individual components. This lets you triage those areas most amenable to improvement and determine which parts contribute to any differences you see.
Bottom-Up Rule: Test and tune each component before you test and tune the aggregate.
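The partitioning arithmetic above is simple to sketch: per-service contributions sum to total load, and per-resource latencies sum to total latency. The service names and figures below are invented for illustration.

```python
# Each service contributes rate (msg/sec) * overhead (writes/msg) to total disk load.
services = {
    "transform": {"rate": 200.0, "writes_per_msg": 2.0},
    "routing":   {"rate": 500.0, "writes_per_msg": 0.5},
}
disk_load = sum(s["rate"] * s["writes_per_msg"] for s in services.values())

# Total latency is the sum across resource latencies (all in seconds):
# Latency = CPU_time + Disk_time + Socket_time + wait_sleep
latency = {"cpu": 0.002, "disk": 0.008, "socket": 0.001, "wait_sleep": 0.004}
total_latency = sum(latency.values())

print(disk_load)      # 650.0 writes/sec
print(total_latency)  # ~0.015 sec
```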
Concepts: Computer Platform Resources
CPU time
Memory (in use, swap)
# Threads
Network I/O (send/receive)
Disk I/O (read/write)
Favorite tools: task mgr, perfmon, top, ping -s, traceroute
Main Point: In the end, all latency and throughput numbers result from the combined effect of these system resources.
Outline: I.D. General computer performance concepts
Script: How you monitor and track depends on which performance question you are asking. Throughput: identify bottlenecks, i.e. resources nearing their limits – often CPU, disk or memory, but it can also be some higher-level resource such as thread limits, JVM heap specs, data locks or a specific disk subsystem. Latency: look for the pieces of the system that are traditionally the slowest – any disk activity is slow relative to computing; network activity is faster than disk, but still slow; and certain components are CPU intensive, such as XML parsing. Use the level of detail appropriate to the question being asked. Watch for machine-resource (such as %CPU) artifacts: side effects from uncontrolled applications, the timing of the refresh interval, and correlation with test intervals. Also consider scaling across different platforms and resource types.
The Performance Engineering Project
Startup tasks: define project completion goals; staff benchmarking skills; acquire a test harness
For each iteration: Analyze → Test → Tune; test performance vs. goal; identify the likeliest area for gain
The Project is Goal Driven
Main Point: Avoid the classic waterfall approach and plan on managing many rapid-prototyping iterations instead.
Outline: I.C. Managing a performance analysis project
Script: The simple instructions for a winning benchmark are “go through these three steps, then spin until dry”. In essence, you keep questioning and experimenting, zeroing in on the best answer you can get in the time you have. The value of your work depends on: the quality of the analysis and hypotheses your experts can come up with; the precision of the tests your engineers design to evaluate them; and the relevance of the requirements you use to set priorities as you go. Daily meetings are a chance to clear the air, identify dead ends, reset priorities, escalate technical questions and track the project.
Expert Tip: Schedule daily meetings to share results and reprioritize the test plan.
Performance Project Skills
Requirements Expert: SLA/QoS levels – minimal & optimal; predicted load – current & future; distribution topology
Integration Expert: allowable design options; cost to deploy; cost to maintain
Testing Expert: load generation tool; bottleneck diagnosis; tuning and optimization
(Diagram: the Requirements, Integration and Testing Experts converge on the solution via load/distribution, cost/benefit and design options.)
Main Point: Identify the key players who can bring these skills to the table, and arrange for their time in your schedule.
Outline: I.C.1 Skills
Script: Sonic consulting can provide the Integration Expert and/or Testing Expert, but not the Requirements Expert.
Tools for a Messaging Benchmark
Test Configurator – creates conditions to bring the System Under Test into a known state
Test Harness – drives load against the System Under Test, the platforms and components whose performance response is being measured
Test Analyzer – tools and procedures to draw meaningful conclusions from the result data
Main Point: Select tools that balance functionality with productivity. Avoid tools with a long learning curve or an overly complex setup.
Outline: I.C.2 Tools
Performance Project Timeline
(Timeline: design → service development → process development → system test → deployment plan, with the performance project running alongside: performance prototype, application tuning and sizing, through launch week.)
Main Point: Results are best when you coordinate the performance project with the general development project.
Outline: I.C.3. Timeline
Script: Performance testing helps make better decisions at three critical points in the process: building a design/architecture that will scale; improving the cost/performance of individual modules; and estimating the hardware configuration needed to go live. Finding design issues in scalability or QoS early on can greatly reduce risk and cost. Application tuning should be somewhat opportunistic – look for the biggest potential payoff and don’t worry about less important or low-load services. Also, prioritize those application decisions that will be harder to back out of later. The key to system sizing is an accurate portrayal of the production system and incoming load; normally you must have much of the application built to do this well. Beware of compounded “fudge factors”.
Analysis: Agenda
Methodology: Review the recommended approach to project and procedures
Analysis: Understand how to characterize performance requirements and platform capacity
Testing: Learn how to simulate performance scenarios using the Sonic Test Harness
Tuning: Know the top ten tuning techniques for the Sonic Enterprise Messaging backbone
“Performance analysis begins by understanding business needs, then going on to characterize system load and scalability factors”
Performance Analysis
Utilization (% total) = Efficiency (units/msg) × Load (msg/sec) ÷ Capacity (units/sec)
Performance scenarios: requirements and goals
Some generic performance scenarios
System characterization: platforms and architecture
Test cases: specification for benchmark
Main Point: Because throughput and capacity determine the relationship between hardware capacity and performance, we need a really good definition of our particular requirements.
Outline: II. Performance analysis for the ESB
Script: You reason backwards: you have specific performance goals for delivered scalability (load) and Quality of Service (performance). You then need to test your ability to achieve these based on different potential hardware configurations (capacity) and design/tuning decisions (efficiency).
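The utilization identity above is easy to check numerically; the figures below are invented for illustration, not measured values.

```python
def utilization(efficiency, load, capacity):
    """Fraction of a resource's capacity consumed.

    efficiency: resource units per message (e.g. KB written per msg)
    load:       messages per second
    capacity:   resource units per second the platform can sustain
    """
    return efficiency * load / capacity

# e.g. 4 KB of disk writes per message at 500 msg/sec on a 5,000 KB/sec disk
u = utilization(efficiency=4.0, load=500.0, capacity=5000.0)
print(f"{u:.0%}")  # 40%
```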
Performance Scenario Specification
First, triage performance-sensitive processes: substantial messaging load and latency requirements; impact of resource-intensive services
Document only process messaging & services; leave out application-specific logic – this is a prototype
Set specific messaging performance goals: message rate and size; minimum and average latency required
Try to quantify actual value and risk: why this use case matters
Main Point: This document is key to ensuring that your work will have a concrete business benefit; it codifies the business requirements.
Outline: II.A. Defining performance requirements and goals
Script: There are good reasons we focus on only the messaging aspects of performance: it keeps the focus on architectural components critical to scalability; it enables you to find the bottlenecks in the total system; and it provides a framework for identifying candidates for tuning among the actual services, with a cost/benefit curve to guide the value of each option. In other words, we recommend you start out treating business-logic components as black boxes you cannot change. If you find things in them you want to change (tune or debug) as the project proceeds, think of those tasks as belonging to a separate project. Changes in the core application code modify the baseline of the benchmark, so they may require revalidation of previous results. Note: we will discuss three generic examples of scenarios.
Rule of Thumb: Focus on broker loads over 10 msg/sec or 1 MByte/sec, and service loads over 10,000 per hour.
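The triage rule of thumb above is easy to apply mechanically over an inventory of candidate flows. A minimal sketch, with a made-up parameter shape:

```python
def performance_sensitive(broker_msgs_per_sec=0.0, broker_mb_per_sec=0.0,
                          service_invocations_per_hour=0):
    """Rule of thumb: flag broker loads over 10 msg/sec or 1 MByte/sec,
    and service loads over 10,000 invocations per hour."""
    return (broker_msgs_per_sec > 10
            or broker_mb_per_sec > 1
            or service_invocations_per_hour > 10_000)

print(performance_sensitive(broker_msgs_per_sec=25))            # True
print(performance_sensitive(service_invocations_per_hour=500))  # False
```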
Generic Scenario: Decoupled process
Asynchronous, loosely coupled, distributed services
Assumptions: services allow concurrent, parallel distribution; messaging is lightweight pub/sub; end-to-end process completes in real time; may be part of a Batch-To-Real-Time pattern
Factors to analyze: speed and scalability of invoked services; distributed topology; Quality of Service; aggregate latency over time across batched invocations
Main Point: This is classic ESB asynchronous processing, optimized beyond any other technology through the use of process itineraries.
Outline: II.A.1.a. asynchronous message stream
Script: Asynchronous means that the application program is not hung while waiting for a reply. Different programs or threads can be sending requests and managing result messages concurrently and independently. Aggregate latency builds up over a temporal window of execution, say 100,000 messages over 2 hours. Since a true process is involved, the pattern of accumulating data as the ESB steps from service to service is critical, and latency is measured as the end result of many steps.
Expert Tip: The easiest way to manage decoupled SOA processes is the ESB Itinerary.
Generic Scenario: Real time data feed
High-speed distribution of real-time events
Assumptions: read-only data pushed dynamically to users; messages are small; service mediation is simple and fast; latency is very important, but QoS needs are modest
Factors to analyze: Quality of Service, especially worst case for outage and message loss; message rate and fanout (pub/sub); scalability of consumers
Main Point: This is typical of financial applications and real-time control applications where the most important thing is to have the most recent readings of some shared, distributed variables.
Outline: II.A.2.a. distributed high speed data feed
Script: These kinds of ESB interactions are seen from the business level as point-to-point event streams, rather than multi-step processes. In fact, you may avoid an itinerary altogether and implement using services directly via their endpoints. Important sub-patterns include real-time data synchronization, event notification and monitoring, and data feed analysis.
Generic Scenario: Simple request reply
Typical web service call that waits for a response
Assumptions: client is blocked, pending response; small request message, response is larger; latency is critical
Factors to analyze: latency of each component service; load balancing of key services; recovery strategy if the loop is interrupted; client network, protocol and security specs
Main Point: This is the most typical scenario in the App Server and Web Service worlds, but it actually doesn’t make the best use of the ESB.
Outline: II.A.2.b. synchronous request reply
Script: This always runs slower than the asynchronous process scenario, because of the resources tied up in the reply/acknowledgment cycles. The increase in service-level latency, in turn, requires that you work harder on a scaling model, to increase the number of service instances running in parallel.
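The scaling pressure described here follows Little's Law: the number of in-flight (blocked) requests, and hence the service instances needed, grows with both arrival rate and per-call latency. A small illustration with invented numbers:

```python
import math

def instances_needed(msgs_per_sec, latency_ms):
    """Little's Law: concurrent in-flight requests = arrival rate * latency.
    Each blocked request/reply slot needs a service instance (or thread)."""
    return math.ceil(msgs_per_sec * latency_ms / 1000)

print(instances_needed(200, 50))   # fast 50 ms calls  -> 10 instances
print(instances_needed(200, 500))  # slow 500 ms calls -> 100 instances
```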
Example: Performance scenario specification
Overall project scope: project goals and process; deployment environment; system architecture
For each scenario: description of the business process; operational constraints (QoS, recovery, availability); performance goals, including business benefits
Main Point: The scenario documentation is a tool for communicating the values and costs of the solution between the performance team and the business application stakeholders.
Outline: II.A.3.
Script: The specification must be relevant, i.e. have a provable connection to business value. It must also be concrete, so that the performance team can evaluate which aspects of the solution may have performance issues, and determine how to test that impact.
Characterizing platforms and architecture
Scope current and future hardware options available to the project
Identify geographical locations, firewalls and predefined service hosting restrictions
Define predefined Endpoints and Services
Define data models and identify required conversions and transformations
Outline: II.B. Characterizing system platforms and architecture
Script: Typically the hardware question is approached in one of two ways: the hardware is already selected, and you need to determine whether there is a risk of serious performance degradation; or the hardware is not yet selected, and you want to determine the most cost-effective alternative to purchase. The distributed architecture is very important, as it is a common area of performance failure, as well as a key reason for moving to an ESB. Predetermined endpoints (which imply some commitment to an implementation) and services (often Web Services they are looking to leverage) help you determine the fixed part of the software equation. Ideally, at this time you make some evaluation of how large the impact would have to be before the company would reconsider its commitment to this infrastructure.
Platform configuration specification
Define system topology and capacity for each element:
Network: bandwidth, latency, speed (including Field and DMZ segments)
Firewall: cryptos, latency
CPU: number, type, speed
Memory: size, speed
Disk: type, speed
How to measure – System: “uname -X”, Control Panel; Disk: “df -g”, drive properties, Java2Disk; Network: “ping”, “tracert”, “traceroute”, NetPing
Main Point: This is a tool to help the team develop an intuitive feel for the scope and reach of the performing system.
Outline: II.B.1.
Script: Don’t take excessive time on this; you can always come back for more details later. It is mainly important if you are doing capacity planning or comparing hardware alternatives. Computation of the rules of thumb: network is 100 Mbit or 1 Gbit, divided by 8 bits per byte; disk figures come from measurements with our Java2Disk program.
Rule of Thumb: LAN 15 – 150 MBytes / second; Disk: 2 – 10 MBytes / second; XSLT: 200 – 300 KBytes / second
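A minimal stand-in for the kind of measurement the Java2Disk program mentioned above performs: time synchronous writes to estimate disk throughput. The file sizes and fsync-per-chunk policy are assumptions, chosen to roughly mimic a persistent message store.

```python
import os
import tempfile
import time

def disk_mb_per_sec(total_mb=16, chunk_kb=64):
    """Write total_mb of data in chunk_kb chunks, fsync'ing each chunk
    (roughly what a persistent message store does), and report MB/sec."""
    chunk = b"x" * (chunk_kb * 1024)
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(total_mb * 1024 // chunk_kb):
                f.write(chunk)
                f.flush()
                os.fsync(f.fileno())
        return total_mb / (time.perf_counter() - start)
    finally:
        os.remove(path)

print(f"{disk_mb_per_sec():.1f} MB/sec")
```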
Platform Profile: Real-time messaging
System resource limitations (chart: relative % capacity consumed by each system resource)
Main Point: Real-time messaging means basic guaranteed messages, always-available consumers and a system tuned for throughput.
Script: Typical Ethernet Cat-5 segments use CSMA/CD (Carrier Sense Multiple Access with Collision Detection), where only one node can transmit at a time, so there is a very hard physical limit on packet throughput. FDDI, a dual-looped fiber alternative, will give you more bang for the buck, but the broker then starts to get disk-bound at about 8–9,000 msg/sec in ASYNC Disk I/O Mode.
Rule of Thumb: Real-time, 1 KB messages – broker performance is about 1,000 to 10,000 msg/sec for each 1 GHz of CPU power.
Platform Profile: Queued requests
System resource limitations (chart: relative % capacity consumed by each system resource)
Main Point: Queued requests is a pattern where the messaging system is expected to buffer and manage peak loads that require time to service.
Script: The slower run rate is a direct result of the increased disk I/O required to manage this messaging pattern. Areas that contribute to latency include broker overhead writing to the log and message store, and client thread contention.
Rule of Thumb: Queued, 1 KB messages – broker performance is about 100 to 1,000 msg/sec for each 1 GHz of CPU power.
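The two rules of thumb (real-time vs. queued messaging) can be combined into a rough sizing helper. The per-GHz ranges are exactly the ones quoted above; the result is an estimate for planning, not a guarantee.

```python
# msg/sec per 1 GHz of CPU power, for 1 KB messages (ranges from the rules of thumb)
RATE_PER_GHZ = {
    "real_time": (1_000, 10_000),  # lightweight guaranteed messaging
    "queued":    (100, 1_000),     # broker buffers and persists peak load
}

def broker_msg_rate(pattern, cpu_ghz):
    """Estimated (low, high) msg/sec for a broker with cpu_ghz of CPU power."""
    low, high = RATE_PER_GHZ[pattern]
    return low * cpu_ghz, high * cpu_ghz

print(broker_msg_rate("queued", 2.4))     # e.g. a single 2.4 GHz CPU
print(broker_msg_rate("real_time", 2.4))
```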
Architecture Spec: Service distribution
(Diagram: ESB segments spanning Field, Front Office, Back Office and Partner zones, connecting PoS, CRM, SFA, Finance, Tracking Service, SCM, an Integration Broker and ERP/SCM adapters.)
Identify services’ performance characteristics
Identify high-bandwidth message channels
Decide which components can be modified
Annotate with message load estimates
Main Point: You need a first-cut model of service distribution in order to configure the most realistic possible test setup.
Script: This description will enable you to configure and deploy the brokers and service containers into the test environment. In fact, you can simply perform this deployment using Sonic admin tools, rather than write up a spec document. Remember the bottom-up rule: plan to do baseline testing with simple, single-broker ESB segments and unique service instances, then scale up once you understand the unit behavior.
Architecture Spec: Data Integration
(Diagram: transformation between data schemas 1…n and data schemas 2…m.)
Approximate the complexity of data schemas
Identify performance-critical transformation events
Estimate message size
Identify acceptable potential services
Main Point: While data integration can be a key performance component, you want to avoid the complexities.
Script: There is some debate whether to analyze data integration services, or to consider them black boxes similar to application services. We recommend you include analysis of this aspect if you have leeway to change the transformation services, schemas or other key components.
Expert Tip: Transform tools vary in efficiency – XSLT: slowest (but most standard); semantic modeler: generally faster (e.g. Sonic DXSI); custom text service: fastest, but not as flexible.
DEMO: Test lab setup
Test hardware: guidelines for lab computers; setting up the lab network
Test architecture: location of test components; installation of brokers; configuration of service containers
Test design assets: sample service definitions & WSDLs; sample test documents
Main Point: Look for the most efficient setup for doing rapid prototyping, which is all performance testing really is.
Script – some expert tips: use the Activation Daemon or nohup to launch ESB containers. Open command windows for test harness clients, using putty or a similar telnet tool. Create multiple user names in the Sonic Domain and assign a different one to each ESB connection and test harness client. Keep the console window open, and check connections, queues and durables regularly. Always use the actual host name for connections, rather than localhost. NEVER use NFS mounts for messaging files.
Specifying Test Cases
Factors to include: load, sizes, complexity of messaging; range of scalability to try (e.g. min/max msg size); basic ESB process model; basic distribution architecture
Details to avoid: application code (unless readily available); detailed transformation maps
Define relevant variables: fixed factors; test variables; dependent measures
Main Point: You only need to scope the first test iteration – after that you’ll have plenty of clues to lead you on to further investigation.
Outline: II.C.
Script: An ESB is driven by the nature of the message events, and ESB performance depends mainly on the number and size of these messages and the complexity of the services manipulating them. It is also critical to understand the impact of various distribution, reliability and security schemes. Since this is rapid prototyping, avoid getting too wedded to your Test Case definition document: you will change the actual tests implemented based on new knowledge as you go along. It may make sense to roll major changes in technique back into the original document, so that it can be used in the final report.
Typical test variables
JMS client variables: connection/session usage; Quality of Service; interaction mode; message size and shape
ESB container variables: service provisioning and parameters; endpoint/connection parameters; process implementation and design; routing branch or key field for lookup
Main Point: Test variables dictate what knowledge you will gain.
Outline: II.C.3.
Script – details of each variable group:
Connection/session: connection topology; number of connections and sessions; fanout of pub/sub
QoS: send/receive mode (persistent, non-persistent, durable, etc.); ack mode (Auto, Dups OK, Client); JMS transactions
Interaction: fire and forget; Batch-To-Real-Time; request/reply
Messages: message structure (multipart, XML, text, etc.); message size and complexity
Services: container location, number of listeners
Endpoints: all client JMS factors; JMS vs. web service vs. HTTP
Processes: endpoint/CBR routing; itineraries; persistent process (BPEL)
Expert Tip: External JMS client variables are easily managed with the Test Harness.
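One common way to keep a set of test variables like these under control is to enumerate their combinations programmatically, so every run is labeled by its settings. The variable names below are illustrative, not Test Harness parameters.

```python
from itertools import product

test_variables = {
    "qos":         ["NON_PERSISTENT", "PERSISTENT"],
    "ack_mode":    ["AUTO_ACKNOWLEDGE", "CLIENT_ACKNOWLEDGE"],
    "msg_size_kb": [1, 10, 100],
}

# Full factorial design: every combination of values becomes one test run.
runs = [dict(zip(test_variables, combo))
        for combo in product(*test_variables.values())]

print(len(runs))  # 12 runs (2 * 2 * 3)
print(runs[0])
```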
Example: Test Case Specification
For each identified Test Case there is a section specifying the following:
Overview of test: how this use case relates to the scenario; key decision points being examined
Functional definition: how to simulate the performance impact; description of ESB processes and services; sample messages; design alternatives that will be compared
Test definition: variables manipulated; variables measured
Completion criteria: throughput and latency goals; issues and options that may merit investigation
Main Point: It doesn’t have to be in writing, but you have to think about what you need to test before you start doing it!
Outline: II.C.4.
Script: The level of detail depends on the size and cohesiveness of the stakeholder group that needs to understand the performance testing. Ideally, the spec should include a sample of how the results will be shown. This is a good time to create a skeleton of the final report, with dummy placeholders for the information you hope to accumulate.
Testing: Agenda
Methodology: Review the recommended approach to project and procedures
Analysis: Understand how to characterize performance requirements and platform capacity
Testing: Learn how to simulate performance scenarios using the Sonic Test Harness
Tuning: Know the top ten tuning techniques for the Sonic Enterprise Messaging backbone
Testing Performance
Staging test services in the test bed
Staging brokers and containers
Configuring the Sonic Test Harness
Running performance tests and gathering data
Evaluating results for each test case
Outline: III.
“The key to performance testing is to accurately simulate your use cases and environment”
Staging Test Services: Deploying existing services
Appropriate to use the actual implementation of a service IF: a robust implementation exists; minimal effort is needed to set it up in the test environment; there are no side effects with other test components
Production-ready services merit special treatment: perform unit load tests to get a baseline; document possible tuning/scaling options
Main Point: You only need to understand the performance aspect – don’t lose time over-engineering the prototype.
Outline: III.A.1.
Script: Obviously, the actual service is the best implementation for simulation. The key is not to waste time cleaning up outstanding problems. Just visualize your best guess of what the production implementation will be and take the shortest route to simulating the performance characteristics of that.
Staging Test Services: Prototyping proposed new services
Prototype should include: correct routing logic for the use case process; approximately correct resource usage; generic data
Prototype does not need: detailed business logic; exception handling code; invocation of non-critical library calls
It’s a prototype – just keep it simple
Main Point: Plan to iterate over several successive approximations of the service.
Outline: III.A.2.
Script: Concentrate on usage of the ESB API, and on logic that impacts other services, especially routing.
Staging Test Services: Simulating non-essential services
Use a ‘stub’ service as a placeholder for service steps that are not performance-sensitive: it can return generic data and ensures the ESB process for the target use case will run correctly
Useful stub services: Transform service; GetFile service; PassThrough service; Enrichment service; Prototype service (version or later)
Main Point: The Sonic code share library has a number of useful prototype services that help simulate the impact of real ones.
Outline: III.A.3.
Script: A fully-supported Prototype service is planned to be available in the next release (or as a patch to 7.5.0).
Demo: Provisioning test services
(Diagram: a business use case – a Web Portal status request flowing through a Build-query transform, WSI Address service, DB server query and mainframe adapter callout – mapped to a performance test case driven by the Test Harness, with PassThrough and Sleep stubs substituting for non-essential steps.)
Main Point: We will show how a performance prototype typically looks.
Outline: III.A.4.
Script: Go into the sample test itinerary and each step’s configuration. Show test messages and how you set up unit and process test/debug. Show container deployment and discuss how you manipulate that within the benchmark.
Provisioning test brokers
Test broker must be similar to production: correct Sonic release and JVM; expected deployment heap size and settings
CAA configuration: optimize the network for replication channels; locate on a separate host to avoid bottlenecks; if failover testing is part of the plan, define fault-tolerant (JNDI) connections
DRA configuration: set up a subset of clusters and WAN simulations; measure local broker configs first, then expand
Main Point: Accuracy of the broker setup is often important because that is the basis for software pricing with Sonic ESB.
Outline: III.B.1.
Script: Also use a unique user name for the connection, so you can easily track it in the console. Make certain you enable ‘fault tolerant client’ in the connection if the tests include failover.
Expert Tip: For each client connection scenario define a named JNDI Connection Factory and document its characteristics.
37
Provisioning ESB containers
Use an ESB Container to manage service groups:
- name according to the service group role
- plan to reshuffle services during the tuning phase
- provision jar files out of sonicfs
Use an MF Container to control distribution:
- name according to domain/host location
- configure the Java heap appropriately; for the IBM JDK, set -Xms equal to -Xmx
- for caching services (e.g. Transform, XML Server), add extra memory for locally-cached data
Main Point(s): Be especially meticulous in naming, so you can easily communicate what is tested and why it is relevant.
Outline: III. B. 3.
Script: If a service is problematic, you can enable multiple levels of trace and then track the container's log file using tail -f in a window. You can also view the logs of any containers launched with an activation daemon using the management console.
Rule of Thumb: The default heap size is fine for most ESB containers; if memory is limited, reduce it, but no smaller than: 8MB + ∑(service jar size) + (#Listeners) × 200KB
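As an illustrative sketch, the rule-of-thumb minimum heap above can be computed directly; the example jar sizes and listener count are assumptions, not Sonic defaults:

```java
public class HeapEstimate {
    // Rule of thumb from the slide: 8 MB base, plus the total size of the
    // deployed service jars, plus 200 KB per listener. All values in bytes.
    static long minHeapBytes(long[] serviceJarSizes, int listeners) {
        long base = 8L * 1024 * 1024;          // 8 MB fixed overhead
        long jars = 0;
        for (long size : serviceJarSizes) {
            jars += size;                      // sum of service jar sizes
        }
        long perListener = 200L * 1024;        // 200 KB per listener thread
        return base + jars + listeners * perListener;
    }

    public static void main(String[] args) {
        // e.g. two jars (1 MB and 512 KB) and 10 listeners
        long est = minHeapBytes(new long[]{1_048_576, 524_288}, 10);
        System.out.println("Minimum heap ~ " + est / 1024 + " KB");
    }
}
```

In practice you would round this up to the next sensible -Xmx increment rather than set the heap to the raw estimate.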
38
Demo: Setting up containers for test
- Workbench view of containers: coding and debugging the prototype
- Runtime view of containers: managing the distributed environment; reinitializing back to a known state
Main Point(s): No new ESB skills are required for the performance benchmark; it's just routine ESB development.
Outline: III. B. 4.
Script: Container reload is adequate for almost all configuration and code changes.
39
Simulating endpoint producers and consumers
[Diagram: Test Harness driving the System Under Test]
- Endpoint protocols and performance
- Test Driver options for various protocols
- Simulating process/thread configuration
- Implementing endpoint interaction modes
- Configuring client Quality of Service (QoS)
- Generating message content
- Demo of client/endpoint simulation
Main Point(s): You must accurately simulate the input load to the system, or your measurements are meaningless.
Outline: III. C.
Script: Try to avoid letting the benchmark become limited by the client. This means:
- use Test Harness or a similar lightweight driver tool
- be prepared to deploy on multiple, fast-CPU machines
- monitor CPU usage on client machines, and scale up as necessary
In the real world there may be hundreds or thousands of client machines, so you don't expect a bottleneck in capacity on that side.
40
Endpoint protocols and performance
JMS:
- fastest client protocol
- strongest range of QoS and failover
HTTP:
- moderate performance and QoS
- rigid connection model (requires client or router reconnect logic)
Web Services:
- slowest performance
- QoS and recovery depend on WS-* extensions
File-based:
- flat file pickup/dropoff, FTP
- limited to disk speeds (i.e. 1 to 5 MB/sec)
- appropriate for batch processing scenarios
JCA:
- appropriate for EJB server scenarios
- limited to EJB transaction speeds (i.e. 100 to 1000 msg/sec)
Main Point(s): These are listed in order of performance; there may be possible compromises to balance performance with existing environmental needs.
Outline: III. C. 1.
Script: There is significant advantage in HTTP because it uses a technique similar to Adaptive Pacing (HTTP chunking) to enhance performance. Each HTTP connection still uses 3 sockets, and they are not "tagged", meaning there is no way to group them. So a load balancer, which works at the socket level, will accidentally create a situation where portions of a connection point to broker A and portions to broker B.
Rule of Thumb: HTTP acceptors typically achieve ½ to ¼ the performance of comparable JMS/TCP acceptors.
41
Client session configuration
Broker performance depends on the scalability of connections and sessions:
- JMS best practice is one thread per session
- JMS sessions can efficiently share a connection
- Use a session pool for clients and app servers
For test simulation:
- determine the allowable range of client threads
- test connection/session numbers up to the max # of threads
- distribute client processes/drivers across multiple machines, if needed, to avoid a client-side bottleneck
Main Point(s): Misconfiguration of sessions and threads is the #1 reason for unexpected, horrible performance.
Outline: III. C. 2.
Script: One of the most common 'naïve' mistakes is to under-configure client threads and sessions. The next most common is to repeatedly destroy and create the sessions and/or connections.
Rule of Thumb: Create one connection for every 10 to 20 sessions.
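When provisioning a test driver, the rule of thumb above translates into a simple ceiling division. This sketch uses plain Java (no JMS dependency); the value 15 sessions per connection is an assumed mid-range choice, not a Sonic-mandated constant:

```java
public class SessionPlan {
    // Rule of thumb: one connection per 10-20 sessions.
    static int connectionsFor(int sessions, int sessionsPerConnection) {
        if (sessions <= 0) return 0;
        // ceiling division: the last connection may carry fewer sessions
        return (sessions + sessionsPerConnection - 1) / sessionsPerConnection;
    }

    public static void main(String[] args) {
        // 100 simulated client sessions at 15 sessions per connection
        System.out.println(connectionsFor(100, 15) + " connections"); // 7 connections
    }
}
```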
42
Configuring client Quality of Service (QoS)
HTTP and Web Services clients:
- best possible QoS is at-least-once, even with WS-Reliability
JMS client:
- CAA with NonPersistentReplicated → exactly once
- Many shared subscribers versus one durable subscriber
- NonPersistent Request/Reply → at least once
- Discardable send to avoid queue backup
- Flow to disk to prevent blocked senders
ESB service:
- Exactly once uses a JMS transaction
- At least once uses client ack
- Best effort uses DUPS_OK ack
Broker:
- Sync (default) versus async disk I/O
Main Point(s): You should experiment a little even with modes you don't think you'll use, in order to build understanding of the underpinnings.
Outline: III. C. 3.
Script: Total QoS is a combination of factors:
- Broker fault tolerance
- Guaranteed messaging protocol, reinforced via synchronous ack and persistent message logging
- Service redundancy, resilience and transaction recovery
Sonic CAA ranks at the top in QoS price/performance:
- Network-based replication is cheaper than hardware cluster solutions
- Ease of setup and management reduces operational cost
- Dynamic failover reduces message loss on the client and service side
Notes:
- Discardable is also an option
- In general, real-time messaging provides exactly-once at endpoints, but this translates to at-least-once within service scope
- You can use XA connections, but this violates real-time processing by introducing sync points and in-doubt transactions
- Setting async disk I/O is discussed later (Tuning Tip #5)
Expert Tip: The best compromise is at-least-once QoS based on CAA, Persistent, async disk and DUPS_OK.
43
Generating message content
[Diagram: a Status Request message (priority=2, org=1) with generated content (random int, transform rule, sample XML) driving an Addr svc lookup against Cust Info]
Simulate message size and distribution for accurate results.
Message content may trigger ESB routing rules.
Some services depend on message content:
- key values must match existing data/rules
- a duplicate key value could cause errors
- services that cache content require an accurate key distribution
Simulating content in the client/driver:
- file-based message pool
- message template generation
- Java/object message generator
- message properties
Main Point(s): Mainly an issue when you need a mixed sample of valid messages to trigger existing services.
Outline: III. C. 4.
Script: Test Harness lets you select among a group of previously generated files, generate text based on a dynamic template that substitutes string values, or invoke a user Java object that generates object content. Sonic Test Harness supports all of these.
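A template-based message generator of the kind described can be sketched in a few lines of plain Java; the `${...}` placeholder syntax and the field names are illustrative, not Sonic Test Harness's actual template format:

```java
import java.util.Map;
import java.util.Random;

public class TemplateGenerator {
    // Substitute ${key} placeholders in the template with supplied values.
    static String render(String template, Map<String, String> values) {
        String out = template;
        for (Map.Entry<String, String> e : values.entrySet()) {
            out = out.replace("${" + e.getKey() + "}", e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        String template = "<statusRequest org='${org}' custId='${custId}'/>";
        // Randomized key values keep cached services honest about key distribution.
        String msg = render(template, Map.of(
                "org", "1",
                "custId", Integer.toString(rnd.nextInt(10_000))));
        System.out.println(msg);
    }
}
```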
44
Demo: Simulating clients with Test Harness
[Diagram: Test Harness (JNDI, Msg Pool, Message Generation) exchanging Request/Reply over a Connection with the Broker in the System Under Test]
Main Point(s): A quick step-through of a real Test Harness run.
Outline: III. C. 5.
Script: Important Test Harness features:
- Control over connections, sessions, destinations
- Management of the message request/reply loop
- Generation of messages
- Dynamic prompt versus specified properties versus command-line args
- Logging to comma-delimited output for spreadsheet consumption
- JNDI connection configuration
- Producer/Consumer parameters
- Message generation
45
Running performance test iterations
Logistics of test orchestration:
- managing multiple Test Harness clients
- configuring test intervals
- test warm-up and cool-down
Data collection correlation
Ensuring repeatability of results
Demo of Test Harness iterations
Main Point(s): Keep the process simple and low-tech and you won't regret it.
Outline: III. D.
Script: We will cover each of these topics.
46
Logistics of test orchestration
Managing multiple Test Harness clients:
- The simplest option is multiple command windows: use telnet sessions for remote hosts, initiate the test and warmup, then hit the <enter> key in each window
- Advanced environments can use a distributed driver: Grinder, SLAMD, JMeter, LoadRunner, ...
Configuring test intervals:
- long enough to detect trend effects
- short enough to allow fast iteration across tests
Test warm-up and cool-down:
- helps eliminate first-time test artifacts
- ensures steady-state performance numbers
Main Point(s): You need to make your own decision on the tradeoff between efficiency, effectiveness and accuracy.
Outline: III. D. 1. and 2.
Script: Warm-up iterations include messages sent to initialize brokers and services but are not measured in the test. Cool-down iterations are additional messages sent to ensure all test clients complete under full load. Without these, you normally see lower throughput rates and artificially slow latencies in the first test iteration.
Rule of Thumb: Start with intervals of 30 secs; if steady state is reached quickly, reduce to 10 intervals of 10 seconds, plus warmup.
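The warm-up/cool-down trimming described above amounts to dropping the ramp intervals before averaging; a minimal sketch, with assumed per-interval readings:

```java
import java.util.Arrays;

public class SteadyState {
    // Drop the first `warmup` and last `cooldown` interval samples,
    // then average the remaining steady-state throughput readings.
    static double steadyAverage(double[] perInterval, int warmup, int cooldown) {
        double[] kept = Arrays.copyOfRange(perInterval, warmup,
                perInterval.length - cooldown);
        double sum = 0;
        for (double v : kept) sum += v;
        return sum / kept.length;
    }

    public static void main(String[] args) {
        // msg/sec per 10-second interval; first and last intervals show ramp effects
        double[] rates = {480, 1010, 1005, 995, 990, 610};
        System.out.println(steadyAverage(rates, 1, 1)); // averages the middle four
    }
}
```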
47
Data collection correlation
Test Harness output:
- Throughput (msg/sec)
- Latency (msecs per round trip)
- Message size (bytes)
System measures:
- CPU usage (%usr, %sys)
- Disk usage (writes/sec)
Broker metrics:
- Messaging rate (bytes/second)
- Peak queue size (bytes)
Main Point(s): Since most of these measures quickly hit steady state, you can get accurate readings without jumping through hoops.
Outline: III. D. 3.
Script: Tracking system-level resources has two advantages: it helps identify which resource is the limiting factor, and whether there are discontinuities that could explain some artifact 'stealing cycles' from your test.
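The Test Harness reports these measures for you, but the definitions are worth pinning down; a minimal sketch of how throughput and round-trip latency derive from raw timestamps (the sample values are made up):

```java
public class Metrics {
    // Throughput in msg/sec over a measured interval.
    static double throughput(long messages, long elapsedMillis) {
        return messages * 1000.0 / elapsedMillis;
    }

    // Mean round-trip latency in ms, given matching send and reply timestamps.
    static double meanLatencyMs(long[] sentAt, long[] repliedAt) {
        long total = 0;
        for (int i = 0; i < sentAt.length; i++) {
            total += repliedAt[i] - sentAt[i];
        }
        return (double) total / sentAt.length;
    }

    public static void main(String[] args) {
        System.out.println(throughput(5000, 10_000));   // 5000 msgs in 10s
        System.out.println(meanLatencyMs(
                new long[]{0, 10, 20}, new long[]{8, 22, 29}));
    }
}
```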
48
Ensuring repeatability of results
Experimental method requirement:
- critical in measuring the impact of a change
- validate by rerunning an identical test
Most common artifacts impacting repeatability:
- messages left in a queue
- duplicate files dropped in the file system
- growing database size / duplicate keys
- disconnected durable subscribers
- cached service artifacts (ESB default)
Main Point(s): Determine the expected steady-state conditions in production and reliably reproduce that state before every test.
Outline: III. D. 4.
Script: Common rewind techniques:
- shell script to truncate or delete user files
- use the Sonic Management Console (or a JMX script) to delete messages from all queues and durable subscribers
- use SMC or a script to reload all ESB containers
- take a save of the database and restore it after each test sequence; alternatively, have SQL scripts that undo critical database operations, e.g. delete rows created in the test
- recycle external app servers, etc.
- in extreme cases you could reboot, but verify this actually makes a difference before doing it; also be sure to run warmup tests after a reboot to prevent first-time testing effects
Expert Tip: Run initial tests twice in succession; if results differ by more than 3% or so, investigate why.
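The 3% repeatability check in the tip above is just a relative-difference comparison; a sketch, with assumed throughput numbers from two back-to-back runs:

```java
public class Repeatability {
    // Relative difference between two runs, as a fraction of the first run.
    static double relativeDiff(double run1, double run2) {
        return Math.abs(run1 - run2) / run1;
    }

    public static void main(String[] args) {
        double r1 = 1000.0, r2 = 1042.0;       // msg/sec from back-to-back runs
        if (relativeDiff(r1, r2) > 0.03) {     // 3% threshold from the tip above
            System.out.println("Non-repeatable: investigate leftover state");
        }
    }
}
```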
49
Demo of Test Harness iterations
- Baseline test
- Change Test Harness properties
- Rerun the test
- Show a spreadsheet across tests
Main Point(s): Simple show-and-tell of the Test Harness running in a command window.
Outline: III. D. 5.
Script: Show the producer Test Harness started in prompt mode, and the receiver Test Harness launched with a pre-built properties file. Open other consoles to monitor queue depth, CPU, etc. Note the issue of monitoring tools that can themselves impair overall performance.
50
Evaluating performance Measurement against goal
Short of goal:
- Perform bottleneck analysis / attempt tuning
- Review the option of scaling up resources
- Review design change options
- Give up and re-think the goal
Meet or exceed goal:
- Continue scaling up and tuning 'til it breaks
- Give up and declare success
Main Point(s): Here is where you fine-tune the performance project itself to keep it on track.
Outline: III. E.
Script: This is the most intuitive part of benchmarking. Rely on your own common sense and the advice of collaborating technical experts.
51
Evaluating performance Bottleneck analysis
Review resource consumption:
- Determine whether the system is CPU-bound, disk-bound or net-bound
- Identify the components using the resource
- Consider offloading to other hosts
Examine trends in scalability tests:
- Possibility of improving throughput by adding more client sessions, ESB listeners, clustered brokers, etc.
- Option of rebalancing (# threads, Java heap, priority)
Go through the Top Ten tuning tips and others...
Last resort: recode or redesign to save cycles
Main Point(s): This is more or less an art, but it helps to have a strong visualization of the entire system architecture.
Outline: III. E. 2.
Script: Before you make major decisions based on bottleneck analysis, try to perform one or two tests to validate the hypothesis. For example, set sleep times to run at exactly ½ of the achieved maximum, and verify that the critical resource usage is approximately 50%.
52
Evaluating performance Compare across test runs
Carefully planned test runs yield fertile comparisons:
- estimate the cost/benefit of a feature or option
- estimate the incremental overhead of a tunable parameter
- narrow the field of concerns and alternatives
Advice on collating and analyzing test runs:
- collect test summary results in a spreadsheet
- save raw data and logs in a separate place
- save the test config, so you can replicate later
- schedule an ad hoc review after each test sequence
Main Point(s): The scientific method.
Outline: III. E. 3.
Script: Some additional things to keep in mind at this stage of work:
- Give thought to packaging and archiving test runs, so you can re-run important tests at some later time
- Give thought to how these results will be presented to higher-level decision makers; if you draft that presentation as you go along, you're likelier to collect all the information you need
- Put some time and thought into naming and organizing the directories, files and documents you are collecting, to avoid chaos later
Expert Tip: Verify key conclusions by replicating the test runs that led to them.
53
Demo: Example test result matrix
[Table: test result matrix for Tests 1-6 crossing the Update and Query scenarios, message sizes (1 KB, 10 KB, 100 KB) and DB service (ORX vs XSVR), with columns for throughput (kbytes/second) and latency (milliseconds); e.g. Test 1, ORX, 1 KB: 112 kbytes/sec at 331 ms; 10 KB: 146 at 460; 100 KB: 152 at 1197]
Main Point(s): You need a good written summary to leverage what you've learned.
Outline: III. E. 4.
Script: Spreadsheets are especially important if you are deciding on infrastructure hardware; in that case you'll want to draft the load performance curves for your most representative tests. Most performance measures show a U-shaped curve as you manipulate a particular variable; the results table can help you find the optimal point on the graph.
54
Tuning: Agenda Analyzing, testing and tuning ESB/JMS performance
Methodology: Review the recommended approach to project and procedures
Analysis: Understand how to characterize performance requirements and platform capacity
Testing: Learn how to simulate performance scenarios using the Sonic Test Harness
Tuning: Know the top ten tuning techniques for the Sonic Enterprise Messaging backbone
A word about the topic and level of technical detail.
55
Performance Tuning with Sonic ESB®
Diagnostics:
- Review of ESB architecture
- Factors influencing message throughput
- Factors influencing message latency
- Factors influencing scalability
Top Ten Tuning Tips
Other tuning issues:
- Broker parameters
- Java tuning
- ESB tuning
- Specialized ESB services
Main Point(s): Tuning is an art and a science; bring in experts from Progress to help.
Outline: IV.
Script: "Success in performance tuning requires balancing tradeoffs among speed, scalability and functionality"
56
Review ESB Architecture
[Diagram: the SOA view (router/switch, services, communication backbone, service container, ESB control) side by side with the system view (network, network interface, memory, I/O controller, threads, VM, cache/swap, message log, DBMS)]
Main Point(s): The system view helps you understand the absolute limitations and the reason bottlenecks exist, but the SOA view is where you test and tune.
Outline: IV. A. 1.
Script: While the SOA components shown correlate loosely to the system components, there is overlap, and you need to review any simplifying assumptions that you make.
57
ESB System Usage: CPU
Sources of CPU cost:
- Application code
- XML parsing
- CPU cost of I/O: network sockets; log/data disks
- Web Service protocol
- Web Service security
- Security: authorization; message or channel encryption
Main Point(s): Usually if you've tuned the system until it's CPU-bound, you've done the best possible.
Outline: IV. A. 2.
Script: Sonic ESB v7.5 has done a lot to reduce CPU usage through smarter use of caching and reduced internal message processing.
58
ESB System Usage: Disk
Sources of disk overhead:
- Database services
- Large File / Batch services
- Message recovery log (might not be used if other guarantee mechanisms work)
- Message data store: disconnected consumers; queue save threshold overflow; flow-to-disk overflow
Message storage overhead depends on disk sync behavior:
- Explicit file synchronization ensures data is retained on a crash
- Tuning options exist at the disk, filesystem, JVM and Broker levels
Main Point(s): Disk is a core resource for transactional and logging primitive operations; the actual impact depends on complex causality.
Outline: IV. A. 2.
Script: diskperf or Java2Disk often shows numbers that are at odds with achievable performance. You should be prepared to analyze WHY. Is there extra disk I/O? Thread contention? Otherwise you will never reach the theoretical maximum predicted from disk performance alone.
Expert Tip: To estimate best-case message log performance, use the DiskPerf utility bundled with Test Harness.
59
ESB System Usage: Network
Sources of network overhead:
- Client application messages and replies
- Service-to-service steps within the ESB (except intra-container)
- ESB Web Service callouts
- CAA broker replication messages
- Metadata (JMX, cluster, DRA) messages (normally < 1%)
Computing network bandwidth:
- Network card: 100 mbit ~= 12 MB/sec, 1 gbit ~= 120 MB/sec
- Network switches are individually rated
Computing network load:
- message rate × message size
- include response messages and intermediate steps
- add ack packets (very small) for each message send
Main Point(s): Network is normally the cheapest resource to scale up, especially at the NIC level.
Outline: IV. A. 2.
Script: Intra-container messaging is a good way to reduce network hops. Configure separate, high-performance network links for CAA replication channels (e.g. a second gbit NIC linked by crossover cable). Network performance to remote sites over unreliable connections can be improved using a remote broker connected to the main bus via DRA.
Expert Tip: To estimate best-case network performance, use the DiskPerf utility bundled with Test Harness.
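The network-load arithmetic above can be sketched directly; the 64-byte ack size and the traffic figures are assumptions for illustration, not measured Sonic values:

```java
public class NetworkLoad {
    // Estimated load in bytes/sec: each request implies a response, plus
    // one small ack packet per send (request and response each get acked).
    static double bytesPerSecond(double msgPerSec, int requestBytes,
                                 int responseBytes, int ackBytes) {
        return msgPerSec * (requestBytes + responseBytes + 2.0 * ackBytes);
    }

    public static void main(String[] args) {
        // 1000 msg/sec of 2 KB requests with 1 KB replies and ~64-byte acks
        double load = bytesPerSecond(1000, 2048, 1024, 64);
        double nicCapacity = 12_000_000;   // ~12 MB/sec on a 100 mbit NIC
        System.out.printf("%.1f%% of NIC capacity%n", 100 * load / nicCapacity);
    }
}
```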
60
Tip #1: Increase sender and listener threads
Tip #1: Increase sender and listener threads to make services and clients scale
[Diagram: two containers, each hosting multiple instances of ServiceX]
- Increase # of Listeners for key entry endpoints
- Add more Service/Process instances
Warning: Intra-Container messaging ignores endpoint settings:
- split a scalable service into a separate container
- turn intra-container messaging off
- note: a sub-Process is always intra-container
Main Point(s): Sonic ESB was built to scale, so you should take advantage of that in your design.
Outline: IV. B. 1.
Script: Carefully analyze any way in which the service itself may limit scalability: is it multi-threaded? process concurrency? There is an artificial limit of 100 Listeners per component built into the ESB, but in the rare cases where a greater number is workable, you can launch additional containers on the same host.
Expert Tip: Stop increasing Listeners when CPU usage nears the maximum acceptable.
61
Tip #2: Implement optimal QoS to balance speed versus precision
ESB QoS | MQ Setting | Message Loss Events | Duplicate Msg Events
(N/A) | Discardable | Buffer overflow, any crash | Never
Best Effort | NonPersistent, DupsOK ack | Broker crash, client crash | Possible
At Least Once | Persistent, client ack | Never | Broker or client crash
Exactly Once | Transacted | Never | Never
(Based on CAA brokers and fault-tolerant connections)
Main Point(s): QoS implementation involves expensive disk, network and thread operations, so it's fertile ground for tuning.
Outline: IV. B. 2.
Script: Without CAA your biggest improvement is in going from exactly-once QoS to at-least-once. With CAA the biggest improvement tends to come from turning off disk sync operations, using advanced broker parameters, and relying on CAA replication rather than disk log backup.
Expert Tip: With CAA configured, Best Effort service is equivalent to At Least Once, with substantially lower overhead.
62
Tip #3: Re-use JMS objects to reduce setup costs
Objects with client and broker footprint:
- Connection
- Session
- Sender/Receiver/Publisher/Subscriber
- Temporary destination
Tuning strategies:
- Reuse JMS objects in client code
- Share each Connection across Sessions
- Share Sessions across Producers and Consumers, but not across JVM threads
- For low-load topics/queues: use an anonymous Producer; use a wildcard or multi-topic subscription
Main Point(s): SonicMQ is built to be massively multi-threaded, but that depends on applications using Sessions effectively.
Outline: IV. B. 3.
Script: Repeated creation/destruction of connections and sessions is the #1 biggest beginner mistake with JMS. There is substantial overhead to creating/removing temp topics or queues, so you should reuse them once created, or maybe use a separate persistent topic/queue instead.
Why fewer queues than topics?
- Topics share broker buffer/hash space more effectively
- Topics allow more dynamic low-level ack protocols
- Queues require more space and work for replication
Rule of Thumb: Up to 500 queues per broker and 10,000 topics per broker.
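The reuse strategy can be sketched with a tiny pool. `SessionFactory` here is a stand-in interface, not a JMS or Sonic API, so the sketch stays library-free; the shape matches sharing one connection-like resource across many session-like objects:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SessionPool<S> {
    // Stand-in for "create a session on an already-open connection".
    public interface SessionFactory<T> { T create(); }

    private final Deque<S> idle = new ArrayDeque<>();
    private final SessionFactory<S> factory;

    public SessionPool(SessionFactory<S> factory) { this.factory = factory; }

    // Borrow an existing session if one is idle; only create on a miss.
    public synchronized S borrow() {
        return idle.isEmpty() ? factory.create() : idle.pop();
    }

    // Return the session for reuse instead of closing it per message.
    public synchronized void release(S session) { idle.push(session); }

    public static void main(String[] args) {
        SessionPool<Object> pool = new SessionPool<>(Object::new);
        Object s = pool.borrow();   // created on first use
        pool.release(s);            // returned to the pool...
        System.out.println(pool.borrow() == s); // ...and reused, not re-created
    }
}
```

Per message, `borrow()`/`release()` replaces the costly create/close cycle the script calls out as the #1 beginner mistake.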
63
Tip #4: Use intra-container service calls to avoid broker hops
[Diagram: the inter-container flow (Broker → Receive Msg → Unmarshall Msg → Call onMessage → Instantiate Proc → Dispatch Outbox → Marshall Msg → Send Msg → Broker) versus the intra-container flow, which dispatches the outbox, marshalls/unmarshalls the message and calls onMessage without a broker hop; v7.5: better! faster!]
Main Point(s): Get all the benefits of in-memory procedure calls without the lock-in to an app server stack.
Outline: IV. B. 4.
Script: Intra-container messaging reduces message overhead and latency:
- no JMS send/receive, no network call
- ESB message marshalling still occurs
- metadata is cached to improve performance
Execution is sequential within the current ESB listener thread; it may work better to scale across containers.
Intra-container performance is greatly optimized in version 7.5: up to 10X faster for a simple pass-through test.
64
Tip #5: Use NonPersistentReplicated mode to reduce disk overhead
Normal broker mechanisms require disk sync:
- contributes to latency across the board
- interferes with batching of packets
- limits bandwidth
Disabling disk sync eliminates this overhead:
- Send mode NonPersistentReplicated
- Optional broker params to disable it entirely
WARNING: Log-based recovery will lose recent messages, BUT CAA failover will not.
Main Point(s): QoS is quite manageable in these scenarios, but apps with absolute message guarantee requirements should be cautious.
Outline: IV. B. 5.
Script: In v7.0.1 and beyond, NPR is the default if you send NonPersistent, but you can modify this with advanced broker parameters. ESB QoS best-effort uses a NonPersistent send, so you get NPR implicitly. However, transaction logic and total performance are usually better if you use at-least-once QoS (and Persistent/DUPS_OK ack for JMS clients) while disabling log/store disk sync.
Rule of Thumb: Network overhead for the replication channel is about ½ the Persistent message load of the broker.
65
Tip #6: Use XCBR instead of CBR to eliminate Javascript overhead
CBR rules are implemented via JavaScript:
- dynamic change with complex rules
- very high overhead for the runtime engine
XCBR rules extract data fields for comparison:
- only simple comparisons supported
- no script engine overhead
- use a message property as the data key for best effect
Main Point(s): Avoid the JavaScript engine wherever you can.
Outline: IV. B. 6.
Script: Sonic uses the Rhino JavaScript engine for CBR rules, but this requires massive instantiation of the interpreter for each rule set executed. Note that any service can be configured to be a logical Content-Based Routing step, and the ESB API provides a rich capability for performing dynamic routing.
Rule of Thumb: Invocation of the Rhino JavaScript engine requires about 100 milliseconds of CPU time on a typical platform.
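The XCBR idea, routing on a pre-extracted field instead of evaluating a script, can be sketched without any Sonic API; the property key `region` and the route names are illustrative:

```java
import java.util.Map;

public class PropertyRouter {
    // Route purely on a pre-extracted message property: a constant-time
    // comparison with no script engine in the path.
    static String route(Map<String, String> msgProps) {
        String region = msgProps.getOrDefault("region", "");
        switch (region) {
            case "EMEA": return "emeaQueue";
            case "APAC": return "apacQueue";
            default:     return "defaultQueue";
        }
    }

    public static void main(String[] args) {
        System.out.println(route(Map.of("region", "EMEA"))); // emeaQueue
    }
}
```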
66
Tip #7: Use message batching to accelerate message streams
[Diagram: Producer → Broker → Consumer, with hidden Ack messages flowing back at each hop]
Message transfer overhead is generally fixed. Hidden ack messages are amenable to tuning:
- AsyncNonPersistent mode decouples ack latency
- Transaction commit allows 1 ack per N messages
- DupsOK ack mode allows 'lazy' acks from the consumer
- Pre-Fetch Count allows batched transmit to the consumer
ESB design option:
- send one multi-part message instead of N individual messages
- XML transforms and other services handle multi-record data efficiently
Main Point(s): Client and broker options let you amortize some expensive ops across multiple message events.
Outline: IV. B. 7.
Script: Message overhead consists of the send/receive, but also the broker's ack message. Normal transactions don't reduce net message overhead, but will batch the ack to 1 per transaction. Non-persistent async buffers messages in the client and streamlines the ack, improving latency and reducing net packets, but risking message loss if the client crashes. Advanced tuning techniques include Adaptive Pacing of producers, using a time/size algorithm to optimize the message batch.
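A back-of-the-envelope model of why batched acks help: this simplified count (one data packet per message plus one ack per batch) is an illustration, not Sonic's exact wire protocol:

```java
public class AckAmortization {
    // Packets per message: one data packet plus the ack cost amortized
    // over a batch (e.g. one commit ack per N transacted messages).
    static double packetsPerMessage(int batchSize) {
        return 1.0 + 1.0 / batchSize;
    }

    public static void main(String[] args) {
        System.out.println(packetsPerMessage(1));   // ack every message
        System.out.println(packetsPerMessage(100)); // 1 ack per 100 messages
    }
}
```

The fixed per-packet cost does not shrink, but the ack share falls from half the packets to roughly 1%.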
67
Tip #8: Minimize XML/SOAP operations to avoid parsing overhead
[Diagram: an input message with propX=1, propY=9 flowing through XML Transform and XCBR steps, with a custom JAXB service producing a JAXB object message part]
- Bypass SOAP and Web Services processing: use HTTP Direct Basic instead of SOAP or WS (risk of invalid XML if the source is unreliable)
- Combine multiple XML parsing steps into one
- Save target XPath results as message properties (also relevant for BPEL correlation IDs)
Main Point(s): XML DOM is slow, so bypass it when you can.
Outline: IV. B. 8.
Script: Any custom service that uses pull-parser technology will outperform the default XPath parsing within ESB.
68
Tip #9: Use high-speed encryption to reduce security overhead
Default SSL encryption uses the old RSA stack:
- At least 2X slower than more modern options
Change to any JSSE-compliant stack:
- set the client -DSSL_PROVIDER_CLASS to: progress.message.net.ssl.jsse.jsseSSLImpl
- change the broker SSL provider from RSA to JSSE
Use more efficient cipher suites:
- RSA_With_Null_MD5 is the smallest and fastest
- Reduce broker memory overhead by deleting any unused ciphers
Main Point(s): Match your encryption strategy to firewall architecture and company policy.
Outline: IV. B. 9.
Script: At some level, the CPU cost of encryption is proportional to the strength of the encryption. However, modern encryption ciphers are much more efficient than older ones. The most efficient option is to not encrypt at all; this may be acceptable for message channels that stay entirely behind the firewall, for insecure data, or where the data content is passed through as an encrypted bytes message that is never opened.
69
Tip #10: Use stream API's to improve large message performance
SonicMQ Recoverable File Channels:
- Uses the JMS layer to manage large file transfer
- Queue-based initiation of transfer
- High-speed JMS pipeline for blocks of data
- Recovery continues at the point interrupted
Sonic ESB open-source Large Message Service:
- Provides dynamic provisioning
- Interacts with ESB processes
SonicStream API (version 7.5 or later):
- Topic-based pipeline into the Java Stream API
- No recovery
Main Point(s): Sonic is very strong at managing very large files.
Outline: IV. B. 10.
Script: Recoverable files are essential in the batch-to-real-time pattern. Stream-based transfer is perfect for generic clients where a light footprint and continuous processing are key. Note the default message size limit for a broker is 10 MBytes, but that can easily be expanded to 100 MBytes or more if you carefully resize queues, buffers and JVM heap.
70
Broker Tuning Parameters
Core resources:
- JVM heap size
- JVM thread and stack limits
- DRA, HTTP and Transaction threads
- TCP settings
Message storage:
- Queue size and save threshold
- Pub/sub buffers
- Message log and store
Message management:
- Encryption
- Flow control and flow-to-disk
- Dead message queue management
Connections:
- Mapping to NICs
- Timeouts
- Load balancing
Main Point(s): Very little tuning of the broker is normally needed.
Outline: IV. C. 1.
Script: Usually the tuning change that makes the biggest performance difference is to disable disk sync on the recovery log and SonicMQStore files. More detailed tuning becomes important when you have backlog in the messaging system, i.e. the consumers are unable to keep up with the producers.
Rule of Thumb: For non-trivial queues, multiply default settings by 10 to 100.
71
Java Tuning Options
- The 'fastest' JVM depends a little on the application and a lot on the platform
- The VM heap needs to be large enough to process the load, but small enough to avoid system swapping
- Garbage collection: default settings are good for optimal throughput; use advanced (JDK 1.4 or later) GC options to optimize worst-case latency
Main Point(s): Standard Java tuning applies here, using a server-oriented configuration.
Outline: IV. C. 2.
Script: Sonic ESB is pure Java, so tuning hints you get from the Java community are always relevant. In general, the source code for the message broker is tuned to avoid substantial thread and memory overheads, but of course a large percentage of allocation, garbage collection, etc. is unavoidable. Unless your system is hitting system limits, don't expect JVM tuning to make more than a 5% to 10% difference.
Rule of Thumb: On Windows platforms, the Sun JVM is 10% to 50% slower than the default IBM JVM.
72
ESB Tuning Options Load balancing and scalability of services:
number of distributed service instances
number of service listener threads
Container Java VM heap size
Intra-container messaging
Endpoint and connection parameters: same principles as for a JMS client
Main Point(s): Mostly a matter of achieving reasonable scalability for each service.
Outline: IV. C. 3.
Script: You scale services up by increasing the number of service instances (which implies distribution across multiple JVMs) and the number of listeners (which map to operating-system threads). Quickly review how intra-container messaging works.
Expert Tip: Start with a small Java heap and only increase the -Xms size if it improves performance.
SONIC-7: Tuning and Scalability for Sonic Enterprise Messaging
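The "number of service listener threads" knob corresponds to concurrent consumers draining the same endpoint. A minimal model of that concurrency, using plain `java.util.concurrent` rather than the Sonic container API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Model of N listener threads concurrently draining one endpoint.
public class ListenerPool {
    public static int drain(BlockingQueue<String> endpoint, int listeners) {
        AtomicInteger processed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(listeners);
        for (int i = 0; i < listeners; i++) {
            pool.execute(() -> {
                String msg;
                // poll() returns null once the endpoint is empty
                while ((msg = endpoint.poll()) != null) {
                    processed.incrementAndGet();   // "service" the message
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed.get();
    }
}
```

As with real listener threads, adding workers helps only while the service work is the bottleneck; past the point where CPU or downstream resources saturate, extra listeners just add contention.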
73
Discussion of Service tuning
Transformations
XML Server
BPEL Server
Database Service
DXSI Service
Main Point(s): Look carefully at the key resources used by the service; that will provide hints on what tuning may be useful.
Outline: IV. C. 4.
Script: Indexing is the most important generic technique. Caching is very important to performance, but is mostly built in (although you can set the global cache size for XML Server).
SONIC-7: Tuning and Scalability for Sonic Enterprise Messaging
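Since caching dominates service performance, the bounded least-recently-used behavior that sits behind a "global cache size" setting looks roughly like this. It is a generic Java sketch, not the XML Server implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Bounded LRU cache: the least-recently-used entry is evicted once the
// configured capacity (think "global cache size") is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true);   // access-order iteration gives LRU semantics
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict when over capacity
    }
}
```

The tuning lesson carries over directly: a cache sized below the working set of hot documents or query plans churns constantly, so capacity should be sized against the observed hit rate, not guessed.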
74
Other fun things you can tune
Database: indexing, query optimization
SOA patterns: federated query, temporal aggregation, split/join, caching
XML: DOM, SAX, XStream
Disk: device balancing, RAID, mount parameters
Network: Nagle algorithm, timeouts
Main Point(s): The triage process should bring various components to the top, based on progress made in each iteration.
Outline: IV. C. 4.
Script: You can search the internet for sources of expertise in all these areas. In general, you can incorporate these ideas into an ESB service and host it on the bus. Network tuning should concentrate on optimizing throughput and latency for the medium-sized packets (normally 100 bytes to 16 kilobytes) typically transferred between broker and client.
Expert Tip: If you save service resources in sonicfs, the ESB container will dynamically load and cache them.
SONIC-7: Tuning and Scalability for Sonic Enterprise Messaging
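On the XML point: for large documents, streaming SAX avoids materializing a full DOM tree in heap. A minimal element counter using the JDK's built-in SAX parser shows the pattern (the example document is our own):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Count elements with SAX: constant memory as the document streams past,
// unlike DOM, which builds the whole tree before you can touch it.
public class SaxCount {
    public static int countElements(String xml) {
        final int[] count = {0};
        try {
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            parser.parse(
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)),
                new DefaultHandler() {
                    @Override
                    public void startElement(String uri, String local,
                                             String qName, Attributes atts) {
                        count[0]++;   // callback fires once per open tag
                    }
                });
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return count[0];
    }
}
```

The same trade-off drives the DOM/SAX/XStream choice on the slide: use DOM when you need random access to the tree, SAX (or a streaming binding) when the document is large and you only need a single pass.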
75
Other Performance Engineering Resources
No “magic bullet”, but there are plenty of places you can go for info:
SonicMQ Performance Tuning Guide
Benchmarking Enterprise Messaging Systems white paper
Sonic Test Harness User Guide
Progress Professional Services
Developer training courses
Sonic Performance Analysis package
Main Point(s): This is a constantly moving target; make an effort to keep informed.
Outline:
Script: Other potential resources:
SonicMQ Broker Memory Usage Guide
System Performance Tuning by Mike Loukides
SONIC-7: Tuning and Scalability for Sonic Enterprise Messaging
76
Questions? SONIC-7: Tuning and Scalability for Sonic Enterprise Messaging
77
Thank you for your time SONIC-7: Tuning and Scalability for Sonic Enterprise Messaging
78