Mark Nesson June, 2008 Fine Tuning WebFOCUS for the IBM Mainframe (zSeries, System z9)


Mark Nesson June, 2008 Fine Tuning WebFOCUS for the IBM Mainframe (zSeries, System z9)

Copyright 2007, Information Builders. Slide 2 Why WebFOCUS for z  Runs natively on MVS and Linux  IBM's new specialty engines can be taken advantage of  Partitions on z make it possible to centralize business intelligence on a single server – where the databases and applications reside

Copyright 2007, Information Builders. Slide 3 Information Builders products used in benchmark  WebFOCUS  iWay Software  iWay Service Manager, a unique and powerful Enterprise Service Bus (ESB) that can be invoked as Web services to provide event-driven integration and B2B interaction management.

Copyright 2007, Information Builders. Slide 4 There’s an easier way! Architecture diagram: HTTP Clients, Web Server, App Server/Servlet Container, Reporting Server, ReportCaster (RC) Repository, RC Distribution Server, and RDBMS/DB Servers; file areas include ibi_html, ibi_bid, approot, ibi_apps, rcaster, basedir, worp, adapters, focexecs, synonym, data, and reports; the tiers communicate over HTTP/HTTPS, HTTP/HTTPS/proprietary TCP, TCP/JDBC, and TCP via a client driver or JDBC.

Copyright 2007, Information Builders. Slide 5 Benchmark Objectives  Test WebFOCUS on the proven strengths of System z hardware, running the open-source Linux OS with UDB and z/OS with DB2.  Evaluate the scalability and performance of WebFOCUS and iWay Service Manager in each operating system environment on the IBM z server, and the benefit of using the various specialty engines on the IBM z server.  All test configurations accurately and faithfully replicated prior benchmarks run on other UNIX and Windows platforms.  Test results therefore represent the true performance of the WebFOCUS workload on each vendor's hardware.  Testing was done at IBM Gaithersburg (Washington Systems Center) in November 2006 by a combined team from IBM and Information Builders.

Copyright 2007, Information Builders. Slide 6 UDB and DB2 database size  Linux system IBILIN03 under z/VM and z/OS system IBI1 are used in the various benchmark configurations to host the test databases.  Two databases are defined on IBILIN03 and IBI1  With multiple tables defined in each database  2 million rows of data  7 million rows of data  Each row is 256 bytes long.
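The transcript does not size the tables explicitly, but the row counts and the 256-byte row length allow a rough estimate of the raw data volume. A minimal sketch of that arithmetic follows (the calculation and the class name are ours, not from the benchmark, and it ignores DB2/UDB page overhead, free space, and indexes):

import java.util.Locale;

public class TableSizeEstimate {
    public static void main(String[] args) {
        long rowLength = 256;                         // bytes per row, from the slide
        long[] rowCounts = {2_000_000L, 7_000_000L};  // the two test tables
        for (long rows : rowCounts) {
            double mb = rows * rowLength / (1024.0 * 1024.0);
            // Raw data only; page overhead, free space, and indexes add to this.
            System.out.printf(Locale.US, "%,d rows x %d bytes = ~%.0f MB raw data%n",
                    rows, rowLength, mb);
        }
    }
}

This works out to roughly 490 MB for the 2-million-row table and about 1.7 GB for the 7-million-row table, a useful reference point against the 2 GB of memory configured on the Linux guests.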

Copyright 2007, Information Builders. Slide 7 Benchmark Test Workload used  Workload-small: query retrieving 61 rows of data  Workload-large: query retrieving 3000 rows of data  Workload-complex: CPU-intensive query involving a 4-table join that retrieves 5118 rows (the SQL for the workloads is in Appendix A)

Copyright 2007, Information Builders. Slide 8 How IBI tests were measured  For each test configuration, Information Builders used the same parameter settings (e.g. interval time and keep-alive time) to run the small, large, and complex workloads while varying the number of concurrent active users, and then measured the end-to-end user response time.
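The slides describe the measurement method but not the driver program itself. The sketch below shows the general shape of such a driver, assuming a plain HTTP entry point to the report: the concurrent-user counts and the 0.05-second pacing interval echo the Filters and Specs slides, while the URL, the 60-second run length, the class name, and the rest of the code are hypothetical and only illustrate end-to-end response-time measurement.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.*;

// Hypothetical driver: N concurrent "users" each submit the same report request
// repeatedly, pausing 50 ms (interval 0.05) between requests; the average
// end-to-end response time over a fixed run length is reported at the end.
public class LoadDriver {
    public static void main(String[] args) throws Exception {
        int users = Integer.parseInt(args[0]);      // e.g. 10, 25, 50, 75, 100, 200, 500
        String url = args[1];                       // report request URL (placeholder)
        long runMillis = 60_000;                    // run length per test point (assumed)
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();

        ExecutorService pool = Executors.newFixedThreadPool(users);
        ConcurrentLinkedQueue<Long> samples = new ConcurrentLinkedQueue<>();
        long end = System.currentTimeMillis() + runMillis;

        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                while (System.currentTimeMillis() < end) {
                    long t0 = System.nanoTime();
                    try {
                        client.send(req, HttpResponse.BodyHandlers.ofString());
                        samples.add(System.nanoTime() - t0);   // end-to-end time for one request
                    } catch (Exception e) { /* a real driver would count errors */ }
                    try { Thread.sleep(50); } catch (InterruptedException e) { return; }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(runMillis + 30_000, TimeUnit.MILLISECONDS);
        double avgSec = samples.stream().mapToLong(Long::longValue).average().orElse(0) / 1e9;
        System.out.printf("users=%d  requests=%d  avg response=%.3f s%n",
                users, samples.size(), avgSec);
    }
}

Run once per user count and per workload, a driver of this shape produces the kind of average response-time figures charted on the following slides.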

Copyright 2007, Information Builders. Slide 9 Test Environment – 1 (all on Linux)

Copyright 2007, Information Builders. Slide 10 Test Environment – 1 (Linux) Scenarios  Scenario1:  IBILIN02: 2/4/8 CP, 2 GB, WebFOCUS 7.6 Reporting Server, DB2 Connect  IBILIN03: 2/4 CP, 2 GB, UDB 8.2  Scenario2:  IBILIN01: 2/4 CP, 2/4 GB, WAS , WebFOCUS Client 7.6  IBILIN02: 2/4/8 CP, 2 GB, WebFOCUS 7.6 Reporting Server, DB2 Connect  IBILIN03: 2/4 CP, 2 GB, UDB 8.2  Scenario3:  IBILIN02: 2/4/8 CP, 2 GB, iWay Service Manager, DB2 JDBC type 4  IBILIN03: 2/4 CP, 2 GB, UDB 8.2
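Scenarios 1 and 2 reach UDB through DB2 Connect (CLI), while Scenario 3 uses the DB2 JDBC Type 4 (pure Java) driver from iWay Service Manager. A minimal Type 4 connection-and-query sketch follows, assuming the standard IBM JCC driver class and URL form; the host, port, database name, and credentials are placeholders, and the query is only an illustrative stand-in for Workload-small against the TST2MLN table named in Appendix A.

import java.sql.*;

// Hypothetical probe: connect to UDB with the IBM JCC Type 4 driver and time a
// small fetch roughly shaped like Workload-small (61 rows from TST2MLN).
public class Db2Type4Probe {
    public static void main(String[] args) throws Exception {
        Class.forName("com.ibm.db2.jcc.DB2Driver");   // requires the db2jcc driver JAR on the classpath
        String url = "jdbc:db2://ibilin03.example.com:50000/TESTDB";  // placeholder host, port, database
        String sql = "SELECT F3SSN, F3ALPHA10, F3INTEGER5, F3FLOAT9X8, F3DBL15X2 "
                   + "FROM TST2MLN FETCH FIRST 61 ROWS ONLY";         // stand-in for Workload-small
        long t0 = System.nanoTime();
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            int rows = 0;
            while (rs.next()) {
                rows++;                               // fetch every row so transfer time is included
            }
            System.out.printf("fetched %d rows in %.3f s%n", rows, (System.nanoTime() - t0) / 1e9);
        }
    }
}

In Scenario 3 the same Type 4 driver runs inside the iWay Service Manager JVM (the later result slides note a 1024 MB JVM size) rather than in a standalone program.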

Copyright 2007, Information Builders. Slide 11 Test Env. – 1 (Linux), Scenario – 1 (WebFOCUS Reporting Server, DB2 Connect, Workload-small) Table: response time in seconds by number of users, for 2, 4, and 8 CPs; workload type: Small.

Copyright 2007, Information Builders. Slide 12 Filters and Specs: Operating System:SUSE SLES 9 z/VM 5.2 Memory:2 GB RDBMS DB2 Rows Returned: 61 Number of CPUS: 2, 4, 8 Protocol: TCP Access Method: CLI Concurrent Users: 10, 25, 50, 75, 100, 200, 500 Keep Alive:60 Interval:.05 WebFOCUS 7.6 Performance Statistics for z-Linux Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 13 Test Env. – 1 (all on Linux), Scenario – 1 (WebFOCUS Reporting Server, DB2 Connect, Workload-small)

Copyright 2007, Information Builders. Slide 14 Test Env.– 1 (all on Linux), Scenario – 1 (WebFOCUS Reporting Server, DB2 Connect, WL-large) Table: response time in seconds by number of users, for 2, 4, and 8 CPs; workload type: Large. ** Linux system swap occurred

Copyright 2007, Information Builders. Slide 15 Filters and Specs: Operating System:SUSE SLES 9 z/VM 5.2 Memory:2 GB RDBMS DB2 Rows Returned: 3000 Number of CPUS: 2, 4, 8 Protocol: TCP Access Method: CLI Concurrent Users: 10, 25, 50, 75, 100, 200, 500 Keep Alive:60 Interval:.05 WebFOCUS 7.6 Performance Statistics for z-Linux Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 16 Test Env.– 1 (all on Linux), Scenario – 1 (WebFOCUS Reporting Server, DB2 Connect, Workload-Large)

Copyright 2007, Information Builders. Slide 17 Test Env. – 1 (all on Linux), Scenario – 1 (WebFOCUS Reporting Server, DB2 Connect, WL-Complex) Table: response time in seconds by number of users, for 2, 4, and 8 CPs; workload type: Complex.

Copyright 2007, Information Builders. Slide 18 Filters and Specs: Operating System:SUSE SLES 9 z/VM 5.2 Memory:2 GB CPU Speed:550 Mips Rows Returned: 5118 Number of CPUS: 2, 4, 8 Protocol: TCP Access Method: CLI Concurrent Users: 10, 25, 50, 75, 100, 200, 500 Keep Alive:60 Interval:.05 WebFOCUS 7.6 Performance Statistics for z-Linux Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 19 Test Env. – 1 (all on Linux), Scenario – 1 (WebFOCUS Reporting Server, DB2 Connect, WL-Complex )

Copyright 2007, Information Builders. Slide 20 Test Env. – 1 (all on Linux), Scenario – 2 (WAS, WebFOCUS Reporting Server, DB2 Connect, WL-small) # users | Response time in seconds for 2 CP / 4 CP / 8 CP | Workload type: Small 3.659** Small ** Switched to use JLINK

Copyright 2007, Information Builders. Slide 21 Filters and Specs: Operating System:SUSE SLES 9 z/VM 5.2 Memory:2 GB Rows Returned: 61 Number of CPUS: 2, 4, 8 Protocol: SERVLET Access Method: CLI Concurrent Users: 10, 25, 50, 75, 100, 200, 500 Keep Alive:60 Interval:.05 Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 22 Test Env. – 1 (all on Linux), Scenario – 2 (WAS, WebFOCUS Reporting Server, DB2 Connect, WL-small)

Copyright 2007, Information Builders. Slide 23 Test Env. – 1 (all on Linux), Scenario – 2 (WAS, WebFOCUS Reporting Server, DB2 Connect, WL-large) Table: response time in seconds by number of users, for 2, 4, and 8 CPs; workload type: Large. ** Switched to use JLINK

Copyright 2007, Information Builders. Slide 24 Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 25 Test Env. – 1 (all on Linux), Scenario – 2 (WAS, WebFOCUS Reporting Server, DB2 Connect, WL-large)

Copyright 2007, Information Builders. Slide 26 Test Env. – 1 (all on Linux), Scenario – 2 (WAS, WebFOCUS Reporting Server, DB2 Connect, WL-Complex) Table: response time in seconds by number of users, for 2, 4, and 8 CPs; workload type: Complex.

Copyright 2007, Information Builders. Slide 27 Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 28 Test Env. – 1 (all on Linux), Scenario – 2 (WAS, WebFOCUS Reporting Server, DB2 Connect, WL-Complex)

Copyright 2007, Information Builders. Slide 29 Test Env. – 1 (all on Linux), Scenario – 3 (iWay Service Manager, DB2 JDBC Type 4, WL-Small) # users | Response time in seconds for 2 CP / 4 CP / 8 CP | Workload type: * 1.796* 3.451* Small 6.255* 0.734* 1.221* 4.71* 2.62* Small *7.34* Small * JVM size 1024 MB

Copyright 2007, Information Builders. Slide 30 Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 31 Test Env. – 1 (all on Linux), Scenario – 3 (iWay Service Manager, DB2 JDBC type 4, WL-Small)

Copyright 2007, Information Builders. Slide 32 Test Env. – 1 (all on Linux), Scenario – 3 (iWay Service Manager, DB2 JDBC type 4, WL-Large) # users | Response time in seconds for 2 CP / 4 CP / 8 CP | Workload type: * * * * Large * 74.42* * Large Large * JVM size 1024 MB.

Copyright 2007, Information Builders. Slide 33 Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 34 Test Env. – 1 (all on Linux), Scenario – 3 (iWay Service Manager, DB2 JDBC type 4, WL-Large )

Copyright 2007, Information Builders. Slide 35 Benchmark Test Environment – 2 (App and driver on Linux, DB on z/OS )

Copyright 2007, Information Builders. Slide 36 Benchmark Test Environment – 2 Scenarios  Scenario1:  IBILIN02: 2/4/8 CP, 2 GB, WebFOCUS Reporting Server 7.6, DB2 Connect, Native Data Driver (CLI)  (z/OS) IBI1: 8 CP, 8 GB, DB2, 2 tables

Copyright 2007, Information Builders. Slide 37 Test Env. – 2, Scenario – 1 (App on Linux, DB on z/OS) (WebFOCUS Reporting Server, DB2, WL=Small) Table: response time in seconds by number of users, for 2, 4, and 8 CPs; workload type: Small.

Copyright 2007, Information Builders. Slide 38 Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 39 Test Env. – 2, Scenario – 1 (App on Linux, DB on z/OS) (WebFOCUS Reporting Server, DB2, WL=Small)

Copyright 2007, Information Builders. Slide 40 Test Env. – 2, Scenario – 1 (App on Linux, DB on z/OS) (WebFOCUS Reporting Server, DB2, WL=Large) Table: response time in seconds by number of users, for 2, 4, and 8 CPs; workload type: Large.

Copyright 2007, Information Builders. Slide 41 Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 42 Test Env. – 2, Scenario – 1 (App on Linux, DB on z/OS) ( WebFOCUS Reporting Server, DB2, WL=Large)

Copyright 2007, Information Builders. Slide 43 Benchmark Test Environment – 3 (WAS, WebFOCUS Reporting Server, iWay Service Manager – IBI2, DB on z/OS –IBI1)

Copyright 2007, Information Builders. Slide 44 Benchmark Test Environment – 3 Scenarios  Scenario 1:  IBI2: 2/4/8 CP, 8 GB, ISM 5.5, JDBC Type-4 Driver.  IBI1: 2/4/8 CP, 8 GB, DB2, 2 tables  Scenario 2:  IBI2: 2/4/8 CP, 8 GB, WAS 6.1, WebFOCUS Reporting Server 7.6, WF Client, CLI.  IBI1: 2/4/8 CP, 8 GB, DB2, 6 tables  IBI2 communicates with IBI1 via HiperSockets  1 zIIP engine  Scenario 3:  IBI2: 2/4/8 CP, 8 GB, WebFOCUS Reporting Server 7.6, CLI  IBI1: 2/4/8 CP, 8 GB, DB2, 6 tables  IBI2 communicates with IBI1 via HiperSockets  1 zIIP engine

Copyright 2007, Information Builders. Slide 45 Test Env – 3 (2 separate z/OS), Scenario – 1 (ISM, JDBC Type4, WL=Small, 1 zIIP) Table: response time in seconds by number of users, for 2, 4, and 8 CPs; workload type: Small.

Copyright 2007, Information Builders. Slide 46 Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 47 Test Env – 3 (2 separate z/OS), Scenario – 1 (ISM, JDBC Type4, WL=Small, 1 zIIP)

Copyright 2007, Information Builders. Slide 48 Test Env – 3 (2 separate z/OS), Scenario – 1 (ISM, JDBC Type4, WL=Large, 1 zIIP) Table: response time in seconds by number of users, for 2, 4, and 8 CPs; workload type: Large.

Copyright 2007, Information Builders. Slide 49 Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 50 Test Env – 3 (2 separate z/OS), Scenario – 1 (ISM, JDBC Type4, WL=Large, 1 zIIP)

Copyright 2007, Information Builders. Slide 51 Test Env – 3 (2 separate z/OS), Scenario – 2 (WAS, WebFOCUS Reporting Server, CLI, WL=Small, 1 zIIP) Table: response time in seconds by number of users, for 2, 4, and 8 CPs; workload type: Small. * 4 Clustered WAS Application Servers

Copyright 2007, Information Builders. Slide 52 Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 53 Test Env – 3 (2 separate z/OS), Scenario – 2 (WAS, WebFOCUS Reporting Server, CLI, WL=Small, 1 zIIP)

Copyright 2007, Information Builders. Slide 54 Test Env – 3 (2 separate z/OS), Scenario – 2 (WAS, WebFOCUS Reporting Server, CLI, WL=Large, 1 zIIP) Table: response time in seconds by number of users, for 2, 4, and 8 CPs; workload type: Large.

Copyright 2007, Information Builders. Slide 55 Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 56 Test Env – 3 (2 separate z/OS), Scenario – 2 (WAS, WebFOCUS Reporting Server, CLI, WL=Large, 1 zIIP)

Copyright 2007, Information Builders. Slide 57 Test Env – 3 (2 separate z/OS), Scenario – 3 (WebFOCUS Reporting Server, CLI, WL=Small, 1 zIIP) Table: response time in seconds by number of users, for 2, 4, and 8 CPs; workload type: Small.

Copyright 2007, Information Builders. Slide 58 Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 59 Test Env – 3 (2 separate z/OS), Scenario – 3 (WebFOCUS Reporting Server, CLI, WL=Small, 1 zIIP)

Copyright 2007, Information Builders. Slide 60 Test Env – 3 (2 separate z/OS), Scenario – 3 (WebFOCUS Reporting Server, CLI, WL=Large, 1 zIIP) Table: response time in seconds by number of users, for 2, 4, and 8 CPs; workload type: Large.

Copyright 2007, Information Builders. Slide 61 Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 62 Test Env – 3 (2 separate z/OS), Scenario – 3 (WebFOCUS Reporting Server, CLI, WL=Large, 1 zIIP)

Copyright 2007, Information Builders. Slide 63 Test Env – 3 (2 separate z/OS), Scenario – 3 (WebFOCUS Reporting Server, CLI, WL=Complex, 1 zIIP) Table: response time in seconds by number of users, for 2, 4, and 8 CPs; workload type: Complex. * RMF report indicated that the 1 zIIP engine's utilization was at 100%

Copyright 2007, Information Builders. Slide 64 Average Request Processing Time (in seconds) Across Number of CPUs by Concurrent Users

Copyright 2007, Information Builders. Slide 65 Test Env – 3 (2 separate z/OS), Scenario – 3 (WebFOCUS Reporting Server, CLI, WL=Complex, 1 zIIP )

Copyright 2007, Information Builders. Slide 66 Proven Value of zIIP Specialty Engine Two SDSF DA (Display Active) thread summaries for the DB2 address spaces (jobnames DSN8MSTR, DSN8DIST, DSN8IRLM, DSN8DBM1, DSN8SPAS, all under Racf Id DB2USER), captured on 06/11/17 at 01:04 and 01:14, showing Real storage, Paging, SPAG, SCPU%, ECPU-Time, ECPU%, and SzIIP% for each address space; the SzIIP% column shows the share of the DB2 work being redirected to the zIIP engine.

Copyright 2007, Information Builders. Slide 67 zIIP Actual Versus Projected  CP configuration: IBI1 - 8 CPs, 1 zIIP  IBI2 - 8 CPs, 0 zIIP  Two tables (WebFOCUS Large and Complex Query) list, for 50 and more users, the elapsed Time, CPU%, zIIP%, IIPCP, and I/O Rate on IBI1 and the CPU% and I/O Rate on IBI2.

Copyright 2007, Information Builders. Slide 68 Test Env – 3 (2 separate z/OS), Scenario – 3b (WebFOCUS Reporting Server, CLI, WL=Complex, 8 CP, Vary zIIPs) Table: response time in seconds by number of users with no zIIP, 1 zIIP, 3 zIIPs, and 6 zIIPs (8 CPs); workload type: Complex.

Copyright 2007, Information Builders. Slide 69 Test Env – 3 (2 separate z/OS), Scenario – 3b (WebFOCUS Reporting Server, CLI, WL=Complex, 8 CP, Vary zIIPs)

Copyright 2007, Information Builders. Slide 70 WSC Benchmark Team  Mary Hu  John Bishop  Richard Lewis  Joe Consorti  Jennie Liang  John Goodyear  Kenneth Hain  Glenn Materia  Dennis McDonald

Copyright 2007, Information Builders. Slide 71 Questions ?

Copyright 2007, Information Builders. Slide 72 Appendix A Workload-Small Query
SELECT T1."F3SSN", T1."F3ALPHA10", T1."F3INTEGER5",
       T1."F3FLOAT9X8", T1."F3DBL15X2"
FROM TST2MLN T1
WHERE (T1."F3SSN" <= ' ')
FOR FETCH ONLY;

Copyright 2007, Information Builders. Slide 73 Appendix A Workload-Large Query
SELECT T1."F3SSN", T1."F3ALPHA10", T1."F3INTEGER5",
       T1."F3FLOAT9X8", T1."F3DBL15X2"
FROM TST2MLN T1
WHERE (T1."F3SSN" <= ' ')
FOR FETCH ONLY;

Copyright 2007, Information Builders. Slide 74 Appendix A Workload-Complex Query
SELECT T1."M_MKTING_NBR", T1."M_TRIP", T1."M_STAT",
       T1."M_VISIT_POINTER", T4."M_C_USE", T4."M_C_CONC_CD",
       T4."EFFECT_DATE", T4."EXPIRE_DATE",
       MAX(T1."TRIP_CODE"), MAX(T3."TRIP_NAME"),
       MAX(T1."M_DELVRY_DATE"), MAX(T1."DELVRY_YEAR"),
       SUM(T4."CUSTOMER_COUNT")
FROM ( ( ( MKTING_GLOBAL T1
           INNER JOIN TRIP_GLOBAL T2
             ON T2."VYD_TRIP" = T1."M_TRIP" )
         LEFT OUTER JOIN TRANSPORT_NAME_GLOBAL T3
           ON T3."TRANSPORT_CODE" = T1."TRANSPORT_CODE" )
       INNER JOIN MKTING_CUSTMR_SURVEY T4
         ON T4."M_MKTING_NBR" = T1."M_MKTING_NBR"
        AND T4."M_TRIP" = T1."M_TRIP" )
WHERE (T1."BUSS_CODE" = 'A')
  AND (T2."VYD_TRIP_STAT" IN ('A', 'H'))
  AND (T2."REPORT_YEAR" = '2006')
  AND (T4."EXPIRE_DATE" > ' ')
  AND (T4."EFFECT_DATE" <= ' ')
  AND (T4."M_C_CUSTMR_STAT" = 'C')
  AND (((T1."M_GROUP_TYPE" = 'H') AND (T4."M_C_CUSTMR_ID" = '2'))
       OR ((T1."M_GROUP_TYPE" <> 'H') OR T1."M_GROUP_TYPE" IS NULL))
GROUP BY T1."M_MKTING_NBR", T1."M_TRIP", T1."M_STAT", T1."M_VISIT_POINTER",
         T4."M_C_USE", T4."M_C_CONC_CD", T4."EFFECT_DATE", T4."EXPIRE_DATE"
ORDER BY T1."M_MKTING_NBR", T1."M_TRIP", T1."M_STAT",
         T1."M_VISIT_POINTER", T4."M_C_USE", T4."M_C_CONC_CD",
         T4."EFFECT_DATE", T4."EXPIRE_DATE"
FOR FETCH ONLY;