1 © 2008 Quest Software, Inc. ALL RIGHTS RESERVED. Benchmarking Advice & Recommendations August 2008

2 Agenda
This is meant to be more of an open discussion
No set time per topic – each topic has just enough info to spur questions and/or open a dialog, so talk
Feel free to ask questions about other topics
No cell phones with ringer turned on (use vibrate)
Email only during breaks under penalty of death
Customers generally fail with BMF due to:
–Lack of preparation (70%)
–Unreasonable expectations (30%)

3 Benchmarks Require Preparation

4 Database Benchmarking Prep Checklist – Pg 1
Basics
Architectural diagram of the database server, network and IO setup
Reviewed the official benchmark specification to fully understand the test – critical step!!!
Defined goals for satisfactory benchmark performance: TPS, average transaction time, IO throughput, CPU utilization, memory consumption, network utilization, swapping level, etc.
Verified the assumed capacity of each and every hardware component
Select a database benchmarking tool (i.e. load generator) – BMF
Select a database monitoring/diagnostic tool – TOAD DBA, PA & Spotlight
Select an operating system monitoring/diagnostic tool – TOAD DBA, Spotlight & Foglight
Select a database performance resolution/corrective tool – TOAD DBA
Storage
Number of storage arrays being used
Are the storage arrays virtualized or shared
Storage array nature (i.e. SAN, NAS, iSCSI, NFS, etc.)
Storage array connectivity bandwidth per storage array and total
Amount of cache memory per storage array and total
How many spindles per storage array and total
Number of processors per storage array and total
Amount of memory cache per storage array and total
Storage array caching allocation settings: read vs. write
Storage array caching size/algorithm for read-ahead settings
Nature, size, speed and cache of disks per storage array and total
Number of LUNs available for use per storage array and total
RAID level, stripe width and stripe size/length of the LUNs

5 Database Benchmarking Prep Checklist – Pg 2
Servers
Number of database servers being used (usually one, unless clustering or replicating)
Are the database servers virtualized or shared
Database server architecture/nature (i.e. uni-processor, SMP, DSM, NUMA, ccNUMA, etc.)
Database server CPU word-size and architecture/nature (i.e. RISC vs. CISC)
Database server CPU physical count (slots) per database server and total
Database server CPU logical count (cores) per database server and total
Database server CPU speed and cache per logical unit (core)
Hyper-threading turned off if it is available – critical, otherwise results will be negatively skewed
Amount, type and speed of RAM per database server and total
Number and throughput of HBAs per database server and total
HBA interconnect nature and speed (i.e. fiber, InfiniBand, 1Gb Ethernet, 10Gb Ethernet, etc.)
Number and throughput of NICs per database server and total
Database server interconnect nature and speed (if clustering or replicating)
Operating System
Operating system word-size
Operating system basic optimization parameters set or tuned
Operating system database optimization parameters set or tuned
Disk array and inter-node Ethernet NICs set to utilize jumbo frames
Network
Matching cabling and switch/router throughput to fully leverage the NICs
Disk array and inter-node Ethernet switches set to utilize jumbo frames
Disk array and inter-node paths on private networks or private VLANs

6 Database Benchmarking Prep Checklist – Pg 3
Database
Database version (e.g. partition differently under 10g vs. 11g)
Database word-size
Database basic optimization parameters set or tuned
Database specific optimization parameters set or tuned for the given benchmark
Benchmark Factory
Using the most recent Benchmark Factory software version available (i.e. currently 5.7.0)
Using the best available database driver for that database platform (native vs. ODBC)
Starting a number of agents = max total concurrent users for the benchmark / 900
Place no more than four concurrent agents for the same test on a single app server
Customize the Benchmark Factory project meta-data for your specific needs:
Partition or cluster tables
Partition or cluster indexes
Collect optimizer statistics
Collect performance snapshot (e.g. Oracle Stats Pack or AWR snapshot)
Run the workload
Collect performance snapshot (e.g. Oracle Stats Pack or AWR snapshot)
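To make the agent arithmetic concrete: a 2,700-user TPC-C run needs three agents (2,700 / 900), placed so that no single app server hosts more than four of them. The statistics and snapshot items are plain Oracle calls made outside of BMF; a minimal sketch, assuming an Oracle 10g/11g target, a hypothetical benchmark schema named BMF_USER, and that AWR is licensed (Stats Pack users would call statspack.snap instead):
-- Gather optimizer statistics for the benchmark schema (BMF_USER is a hypothetical name)
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'BMF_USER', cascade => TRUE);
-- Take one AWR snapshot immediately before the workload...
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
-- ...run the Benchmark Factory workload, then take a second snapshot for the after picture
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;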

7 Standard Benchmark Options?
TPC-A measures performance in update-intensive database environments typical in on-line transaction processing applications. (Obsolete as of 6/6/95)
TPC-B measures throughput in terms of how many transactions per second a system can perform. (Obsolete as of 6/6/95)
TPC-D represents a broad range of decision support (DS) applications that require complex, long-running queries against large complex data structures. (Obsolete as of 4/6/99)
TPC-R is a business reporting, decision support benchmark. (Obsolete as of 1/1/2005)
TPC-W is a transactional web e-Commerce benchmark. (Obsolete as of 4/28/05)
TPC-C is an on-line transaction processing benchmark. (Showing its age – soon to be replaced by TPC-E)
TPC-E is a new On-Line Transaction Processing (OLTP) workload.
TPC-H is an ad-hoc, decision support benchmark.

8 Know Thy Test – Read The Spec!!!
If you don't know this info, how can you set BMF parameters???
http://tpc.org/tpcc/spec/tpcc_current.pdf

9 Understand Database Design – TPC-C

10 Understand Database Design – TPC-H

11 Data Model if Unsure (TPC-H)

12 Say Goodbye to Simple Designs – TPC-E

13 Know Some of the Workload
If you don't know what the database is being asked to do, then how can you tune the database instance parameters?
2.6 Shipping Priority Query (Q3)
This query retrieves the 10 unshipped orders with the highest value.
2.6.1 Business Question
The Shipping Priority Query retrieves the shipping priority and potential revenue, defined as the sum of l_extendedprice * (1-l_discount), of the orders having the largest revenue among those that had not been shipped as of a given date. Orders are listed in decreasing order of revenue. If more than 10 unshipped orders exist, only the 10 orders with the largest revenue are listed.
2.6.2 Functional Query Definition
Return the first 10 selected rows
select l_orderkey, sum(l_extendedprice*(1-l_discount)) as revenue, o_orderdate, o_shippriority
from customer, orders, lineitem
where c_mktsegment = '[SEGMENT]'
and c_custkey = o_custkey
and l_orderkey = o_orderkey
and o_orderdate < date '[DATE]'
and l_shipdate > date '[DATE]'
group by l_orderkey, o_orderdate, o_shippriority
order by revenue desc, o_orderdate;
2.6.3 Substitution Parameters
Values for the following substitution parameters must be generated and used to build the executable query text:
1. SEGMENT is randomly selected within the list of values defined for Segments in Clause 4.2.2.13;
2. DATE is a randomly selected day within [1995-03-01 .. 1995-03-31].
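Purely as an illustration (the spec requires the substitution values to be generated randomly), the executable query text with one possible pair of substitutions – the BUILDING market segment and 1995-03-15 as the date – would read:
select l_orderkey, sum(l_extendedprice*(1-l_discount)) as revenue, o_orderdate, o_shippriority
from customer, orders, lineitem
where c_mktsegment = 'BUILDING'
and c_custkey = o_custkey
and l_orderkey = o_orderkey
and o_orderdate < date '1995-03-15'
and l_shipdate > date '1995-03-15'
group by l_orderkey, o_orderdate, o_shippriority
order by revenue desc, o_orderdate;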

14 Time to Introduce BMF to the Equation
BMF does three things (all per spec):
–Creates ANSI SQL standard database objects (tables, indexes, views)
–Loads those objects with the appropriate amount of data for the scale factor
–Creates the concurrent user workload or stream workload on the db server
What BMF does NOT do:
–Does not partition, cluster, or apply any other advanced storage parameters
–Does not know about the storage arrays & LUNs – so does not spread IO (see the sketch after this slide)
–Does not know about db optimization techniques: e.g. "gather stats"
–Does not monitor the benchmark workload – other than to show progress
–Does not diagnose the database tuning/optimization required to improve
–Does not diagnose the operating system tuning parms required to improve
–Does not diagnose the hardware configuration tuning required to improve
–Does not offer a "push one single button" benchmarking solution
It's a basic tool required to do the job, but it does not do the job for you – the user has to own & drive the process
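For instance, spreading IO across LUNs is something the DBA has to do outside of BMF before the load runs. A minimal sketch, assuming hypothetical mount points /u01 through /u04 that each map to a different LUN, is to build the benchmark tablespace from one datafile per LUN and then point BMF's objects at it:
-- Hypothetical example: one datafile per LUN so benchmark IO is spread across the storage
CREATE TABLESPACE bmf_data
  DATAFILE '/u01/oradata/bmf/bmf_data01.dbf' SIZE 10G,
           '/u02/oradata/bmf/bmf_data02.dbf' SIZE 10G,
           '/u03/oradata/bmf/bmf_data03.dbf' SIZE 10G,
           '/u04/oradata/bmf/bmf_data04.dbf' SIZE 10G
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;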

15 Step 1 – Create Static Hold Schema & Load
That way a refresh of the data can occur in a few minutes using CTAS, rather than waiting on a BMF client load!!!
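A minimal sketch of the idea, assuming the BMF-loaded copy is kept in a hypothetical hold schema named BMF_HOLD and the run schema is the current user:
-- Refresh one benchmark table from the static hold copy instead of re-running the BMF client load
CREATE TABLE C_CUSTOMER
  TABLESPACE USERS NOLOGGING PARALLEL
  AS SELECT * FROM BMF_HOLD.C_CUSTOMER;
The same pattern repeats for every table in the schema; the actual copy scripts are shown on the next slides.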

16 Step 2 – Copy Schema, Gather Stats, Run, etc.

17 Copy User Script
set verify off
drop user &1 cascade;
CREATE USER &1 IDENTIFIED BY "&1"
  DEFAULT TABLESPACE "USERS"
  TEMPORARY TABLESPACE "TEMP"
  PROFILE DEFAULT
  QUOTA UNLIMITED ON "USERS";
GRANT "CONNECT" TO &1;
GRANT "RESOURCE" TO &1;
grant select any table to &1;
ALTER USER &1 DEFAULT ROLE "CONNECT", "RESOURCE";
purge recyclebin;
exit
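Assuming the script above is saved under a hypothetical name such as copy_user.sql, SQL*Plus passes the run schema name in as the &1 positional substitution variable, for example:
sqlplus system/<password> @copy_user.sql TPCC_RUN1
This would drop and recreate a run user named TPCC_RUN1 (a hypothetical name), ready to receive the copied tables in the next step.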

18 Copy/Create Tables
set verify off
CREATE TABLE C_WAREHOUSE
  TABLESPACE USERS
  NOLOGGING NOCOMPRESS NOCACHE
  PARALLEL (DEGREE &1)
  NOMONITORING
  AS SELECT * FROM &4..C_WAREHOUSE;
create cluster c_district_cluster
  ( d_id number, d_w_id number )
  TABLESPACE USERS
  single table
  hashkeys 1008000
  hash is ( ((d_w_id * 10) + d_id) )
  size 1448;
CREATE TABLE C_DISTRICT
  cluster c_district_cluster (d_id, d_w_id)
  NOCOMPRESS NOMONITORING
  AS SELECT * FROM &4..C_DISTRICT;
…
Why did I create a cluster? Why this table in a cluster?
BMF does not really (or easily) support doing these kinds of things
-Spec
-Disclosure reports
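If you want to confirm what the script actually built, the standard Oracle dictionary views show the hash cluster definition and which tables landed in it; a quick sketch using only the object names above:
-- Check the hash cluster definition (hashkeys, key size) and its member tables
SELECT cluster_name, cluster_type, hashkeys, key_size FROM user_clusters;
SELECT table_name, cluster_name FROM user_tables WHERE cluster_name IS NOT NULL;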

19
CREATE TABLE C_ORDER
  TABLESPACE USERS
  PARTITION BY HASH (O_W_ID, O_D_ID, O_C_ID, O_ID) PARTITIONS &3
  NOLOGGING NOCOMPRESS NOCACHE
  PARALLEL (DEGREE &1)
  NOMONITORING
  AS SELECT * FROM &4..C_ORDER;
CREATE TABLE C_ORDER_LINE
  TABLESPACE USERS
  PARTITION BY HASH (OL_W_ID, OL_D_ID, OL_O_ID, OL_NUMBER) PARTITIONS &3
  NOLOGGING NOCOMPRESS NOCACHE
  PARALLEL (DEGREE &1)
  NOMONITORING
  AS SELECT * FROM &4..C_ORDER_LINE;
CREATE TABLE C_ITEM
  TABLESPACE USERS
  PARTITION BY HASH (I_ID) PARTITIONS &3
  NOLOGGING NOCOMPRESS NOCACHE
  PARALLEL (DEGREE &1)
  NOMONITORING
  AS SELECT * FROM &4..C_ITEM;
Why did I create this partitioning scheme???
-Spec
-Disclosure reports
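A quick way to verify that the hash partitioning came out as intended, again using only the standard dictionary views and the table names above:
-- Count the hash partitions created for each benchmark table
SELECT table_name, COUNT(*) AS partition_count
FROM user_tab_partitions
WHERE table_name IN ('C_ORDER', 'C_ORDER_LINE', 'C_ITEM')
GROUP BY table_name;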

20 Disclosure Reports
http://tpc.org/tpcc/results/tpcc_perf_results.asp

21 Disclosure Report – Lots of Info
This is where people document exactly what advanced database features and storage parameters they used – this info is invaluable

22 Disclosure Report – Appendix B

23 Try That with BMF
Lessons people have learned: e.g. TPC-H results depend entirely on the number of disks and nothing else – you need well over 100 spindles for just a 300 GB test

24 Top 10 Benchmarking Misconceptions

25

26

27

28

29

30

31 Results – Average Response Time Sub-Second

32

33 Apply Top-Down Analysis & Revision
Performance Testing Process (using tools):
1. Benchmark Factory – Industry standard benchmark: TPC-C & trace files; key metric = avg response time
2. Spotlight on RAC – Record before & after results; confirm improvements
3. TOAD with DBA Module – AWR/ADDM & Stats Pack; again record before & after for improvements confirmation
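The before/after comparison in step 3 can be produced straight from SQL*Plus with Oracle's bundled report scripts (a sketch; AWR/ADDM assume the Diagnostics Pack is licensed, and Stats Pack users would run spreport.sql instead):
-- Generate a text AWR report between the before and after snapshot IDs (the script prompts for them)
@?/rdbms/admin/awrrpt.sql
-- Or run an ADDM analysis over the same snapshot interval
@?/rdbms/admin/addmrpt.sql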

34 TOAD® with DBA Module
Toad to the Rescue (as usual)
Expedite typical DBA management & tuning tasks
Great productivity-enhancing features:
–Database Health Check
–Database Probe
–Database Monitor
–AWR/ADDM Reports
–UNIX Monitor
–Stats Pack Reports
Lets the DBA concentrate on the task at hand – correcting (i.e. fixing)
See the Toad World paper:
–Title: "Maximize Database Performance Via Toad for Oracle"
–http://www.toadworld.com/Education/ToadWorldPapersandPodcasts/tabid/82/Default.aspx

35

36

37 Config – So Toad Has That Performance Data

38 Wrap Up
I can email everyone the slides, scripts, projects, etc.
Time permitting – show the new BMF TOAD integration
BMF has a new product management directive:
–# concurrent users sweet spot = 1000
–# concurrent users max we'll support = 2500
This limit is not because of BMF, but rather because people don't prepare or have the right expectations
We cannot afford to keep doing their benchmarking projects for them when they attempt tens of thousands of users

