© 2008 Quest Software, Inc. ALL RIGHTS RESERVED. Benchmarking Advice & Recommendations August 2008.


Agenda
This is meant to be more of an open discussion:
– No set time per topic – each topic has just enough info to spur questions and/or open a dialog, so talk
– Feel free to ask questions about other topics
– No cell phones with the ringer turned on (use vibrate); calls only during breaks, under penalty of death
Customers generally fail with BMF due to:
– Lack of preparation (70%)
– Unreasonable expectations (30%)

Benchmarks Require Preparation

Database Benchmarking Prep Checklist – Pg 1

Basics
– Architectural diagram of the database server, network and IO setup
– Reviewed the official benchmark specification to fully understand the test – critical step!!!
– Defined goals for satisfactory benchmark performance: TPS, average transaction time, IO throughput, CPU utilization, memory consumption, network utilization, swapping level, etc.
– Verified the assumed capacity of each and every hardware component
– Select a database benchmarking tool (i.e. load generator) – BMF
– Select a database monitoring/diagnostic tool – TOAD DBA, PA & Spotlight
– Select an operating system monitoring/diagnostic tool – TOAD DBA, Spotlight & Foglight
– Select a database performance resolution/corrective tool – TOAD DBA

Storage
– Number of storage arrays being used
– Are the storage arrays virtualized or shared?
– Storage array nature (i.e. SAN, NAS, iSCSI, NFS, etc.)
– Storage array connection bandwidth, per storage array and total
– Amount of cache memory per storage array and total
– Number of spindles per storage array and total
– Number of processors per storage array and total
– Storage array caching allocation settings, read vs. write
– Storage array caching size/algorithm for read-ahead settings
– Nature, size, speed and cache of disks per storage array and total
– Number of LUNs available for use per storage array and total
– RAID level, stripe width and stripe size/length of the LUNs

Database Benchmarking Prep Checklist – Pg 2

Servers
– Number of database servers being used (usually one, unless clustering or replicating)
– Are the database servers virtualized or shared?
– Database server architecture/nature (i.e. uni-processor, SMP, DSM, NUMA, ccNUMA, etc.)
– Database server CPU word-size and architecture/nature (i.e. RISC vs. CISC)
– Database server CPU physical count (slots) per database server and total
– Database server CPU logical count (cores) per database server and total
– Database server CPU speed and cache per logical unit (core)
– Hyper-threading turned off if it is available – critical, otherwise it will negatively skew results
– Amount, type and speed of RAM per database server and total
– Number and throughput of HBAs per database server and total
– HBA interconnect nature and speed (i.e. fiber, InfiniBand, 1Gb Ethernet, 10Gb Ethernet, etc.)
– Number and throughput of NICs per database server and total
– Database server interconnect nature and speed (if clustering or replicating)

Operating System
– Operating system word-size
– Operating system basic optimization parameters set or tuned
– Operating system database optimization parameters set or tuned
– Disk array and inter-node Ethernet NICs set to utilize jumbo frames

Network
– Matching cabling and switch/router throughput to fully leverage the NICs
– Disk array and inter-node Ethernet switches set to utilize jumbo frames
– Disk array and inter-node paths on private networks or private VLANs

Database Benchmarking Prep Checklist – Pg 3

Database
– Database version (e.g. partition differently under 10g vs. 11g)
– Database word-size
– Database basic optimization parameters set or tuned
– Database-specific optimization parameters set or tuned for the given benchmark

Benchmark Factory
– Use the most recent Benchmark Factory software version available (i.e. currently 5.7.0)
– Use the best available database driver for that database platform (native vs. ODBC)
– Start a number of agents = max total concurrent users for the benchmark / 900
– Place no more than four concurrent agents for the same test on a single app server
– Customize the Benchmark Factory project meta-data for your specific needs:
  1. Partition or cluster tables
  2. Partition or cluster indexes
  3. Collect optimizer statistics
  4. Collect a performance snapshot (e.g. Oracle Stats Pack or AWR snapshot)
  5. Run the workload
  6. Collect a performance snapshot (e.g. Oracle Stats Pack or AWR snapshot)
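The agent-count rule of thumb above (one agent per 900 concurrent users, at most four agents per app server) can be sketched as a quick calculation. This is a hypothetical helper, not part of BMF; the two constants come straight from the slide:

```python
import math

# Rule-of-thumb figures from the checklist slide
USERS_PER_AGENT = 900          # agents = max total concurrent users / 900
MAX_AGENTS_PER_APP_SERVER = 4  # no more than four agents per app server

def bmf_agent_plan(max_concurrent_users):
    """Return (agents, app_servers) needed for a benchmark run."""
    agents = math.ceil(max_concurrent_users / USERS_PER_AGENT)
    app_servers = math.ceil(agents / MAX_AGENTS_PER_APP_SERVER)
    return agents, app_servers

# Example: a 5,000-user run needs 6 agents spread across 2 app servers
print(bmf_agent_plan(5000))  # (6, 2)
```

For the 2,500-user maximum mentioned at the end of the deck, this works out to 3 agents on a single app server.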

Standard Benchmark Options?
– TPC-A measures performance in update-intensive database environments typical of on-line transaction processing applications. (Obsolete as of 6/6/95)
– TPC-B measures throughput in terms of how many transactions per second a system can perform. (Obsolete as of 6/6/95)
– TPC-D represents a broad range of decision support (DS) applications that require complex, long-running queries against large, complex data structures. (Obsolete as of 4/6/99)
– TPC-R is a business reporting, decision support benchmark. (Obsolete as of 1/1/2005)
– TPC-W is a transactional web e-commerce benchmark. (Obsolete as of 4/28/05)
– TPC-C is an on-line transaction processing benchmark. (Showing its age – soon to be replaced by TPC-E)
– TPC-E is a new On-Line Transaction Processing (OLTP) workload.
– TPC-H is an ad-hoc, decision support benchmark.

Know Thy Test – Read The Spec!!!
If you don't know this info, how can you set BMF parameters?

Understand Database Design – TPC-C

Understand Database Design – TPC-H

Data Model if Unsure (TPC-H)

Say Goodbye to Simple Designs – TPC-E

Know Some of the Workload
If you don't know what the database is being asked to do, then how can you tune the database instance parameters?

2.6 Shipping Priority Query (Q3)
This query retrieves the 10 unshipped orders with the highest value.

Business Question: The Shipping Priority Query retrieves the shipping priority and potential revenue, defined as the sum of l_extendedprice * (1 - l_discount), of the orders having the largest revenue among those that had not been shipped as of a given date. Orders are listed in decreasing order of revenue. If more than 10 unshipped orders exist, only the 10 orders with the largest revenue are listed.

Functional Query Definition: Return the first 10 selected rows.

select l_orderkey,
       sum(l_extendedprice * (1 - l_discount)) as revenue,
       o_orderdate,
       o_shippriority
from customer, orders, lineitem
where c_mktsegment = '[SEGMENT]'
  and c_custkey = o_custkey
  and l_orderkey = o_orderkey
  and o_orderdate < date '[DATE]'
  and l_shipdate > date '[DATE]'
group by l_orderkey, o_orderdate, o_shippriority
order by revenue desc, o_orderdate;

Substitution Parameters: Values for the following substitution parameters must be generated and used to build the executable query text:
1. SEGMENT is randomly selected within the list of values defined for Segments in Clause ;
2. DATE is a randomly selected day within [ ].
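The revenue aggregate in Q3 can be checked with a tiny in-memory sketch. The sample rows below are hypothetical; the real query runs against the TPC-H lineitem table at scale:

```python
# Hypothetical lineitem rows: (l_orderkey, l_extendedprice, l_discount)
lineitems = [
    (1, 1000.0, 0.05),
    (1,  500.0, 0.10),
    (2, 2000.0, 0.00),
]

# Revenue per order = sum of l_extendedprice * (1 - l_discount),
# the same aggregate Q3 computes per group
revenue = {}
for okey, price, disc in lineitems:
    revenue[okey] = revenue.get(okey, 0.0) + price * (1 - disc)

# Orders listed in decreasing order of revenue, top 10 only
top10 = sorted(revenue.items(), key=lambda kv: -kv[1])[:10]
print(top10)  # order 2 first (2000.0), then order 1 (950 + 450 = 1400.0)
```

Knowing the shape of this query – a three-table join, a group-by, and a sort on a computed aggregate – is exactly the kind of spec detail that tells you which instance parameters (sort/hash memory, parallelism) matter for the run.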

Time to Introduce BMF to the Equation
BMF does three things (all per spec):
– Creates ANSI SQL standard database objects (tables, indexes, views)
– Loads those objects with the appropriate amount of data for the scale factor
– Creates the concurrent-user or stream workload on the db server
What BMF does NOT do:
– Does not partition, cluster, or set any other advanced storage parameters
– Does not know about the storage array & LUNs – so does not spread IO
– Does not know about db optimization techniques: e.g. "gather stats"
– Does not monitor the benchmark workload – other than to show progress
– Does not diagnose the database tuning/optimization required to improve
– Does not diagnose the operating system tuning parms required to improve
– Does not diagnose the hardware configuration tuning required to improve
– Does not offer a "push one single button" benchmarking solution
It's a basic tool required to do the job, but it does not do the job for you – the user has to own & drive the process.

Step 1 – Create Static Hold Schema & Load
That way a refresh of the data can occur in a few minutes using CTAS, rather than waiting on a BMF client load!!!

Step 2 – Copy Schema, Gather Stats, Run, etc.

Copy User Script

set verify off
drop user &1 cascade;
CREATE USER &1 IDENTIFIED BY "&1"
  DEFAULT TABLESPACE "USERS"
  TEMPORARY TABLESPACE "TEMP"
  PROFILE DEFAULT
  QUOTA UNLIMITED ON "USERS";
GRANT "CONNECT" TO &1;
GRANT "RESOURCE" TO &1;
grant select any table to &1;
ALTER USER &1 DEFAULT ROLE "CONNECT", "RESOURCE";
purge recyclebin;
exit

Copy/Create Tables

set verify off
CREATE TABLE C_WAREHOUSE
  TABLESPACE USERS
  NOLOGGING NOCOMPRESS NOCACHE
  PARALLEL (DEGREE &1)
  NOMONITORING
  AS SELECT * FROM &4..C_WAREHOUSE;

create cluster c_district_cluster (
  d_id number,
  d_w_id number )
  TABLESPACE USERS
  single table
  hashkeys
  hash is ( ((d_w_id * 10) + d_id) )
  size 1448;

CREATE TABLE C_DISTRICT
  cluster c_district_cluster (d_id, d_w_id)
  NOCOMPRESS NOMONITORING
  AS SELECT * FROM &4..C_DISTRICT;
…

Why did I create a cluster? Why this table in a cluster? BMF does not really or easily support doing these kinds of things – the answers come from:
– Spec
– Disclosure reports
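Why the hash cluster works for C_DISTRICT: TPC-C defines exactly 10 districts per warehouse, so the cluster key expression ((d_w_id * 10) + d_id) maps every district row to a distinct integer – effectively a perfect hash, giving a single-block probe per district lookup. A small sketch (hypothetical helper, illustrating the key math only, not Oracle's internal hashing):

```python
# Cluster key from the script: ((d_w_id * 10) + d_id).
# TPC-C has d_id in 1..10 per warehouse, so the mapping is injective:
# warehouse 1 yields keys 11..20, warehouse 2 yields 21..30, and so on.

def district_hash_key(d_w_id, d_id):
    assert 1 <= d_id <= 10, "TPC-C: 10 districts per warehouse"
    return d_w_id * 10 + d_id

# 100 warehouses x 10 districts -> 1000 distinct keys, no collisions
keys = {district_hash_key(w, d) for w in range(1, 101) for d in range(1, 11)}
print(len(keys))  # 1000
```

This is the kind of schema decision the spec permits and the disclosure reports document, but which BMF will not make for you.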

CREATE TABLE C_ORDER
  TABLESPACE USERS
  PARTITION BY HASH (O_W_ID, O_D_ID, O_C_ID, O_ID) PARTITIONS &3
  NOLOGGING NOCOMPRESS NOCACHE
  PARALLEL (DEGREE &1)
  NOMONITORING
  AS SELECT * FROM &4..C_ORDER;

CREATE TABLE C_ORDER_LINE
  TABLESPACE USERS
  PARTITION BY HASH (OL_W_ID, OL_D_ID, OL_O_ID, OL_NUMBER) PARTITIONS &3
  NOLOGGING NOCOMPRESS NOCACHE
  PARALLEL (DEGREE &1)
  NOMONITORING
  AS SELECT * FROM &4..C_ORDER_LINE;

CREATE TABLE C_ITEM
  TABLESPACE USERS
  PARTITION BY HASH (I_ID) PARTITIONS &3
  NOLOGGING NOCOMPRESS NOCACHE
  PARALLEL (DEGREE &1)
  NOMONITORING
  AS SELECT * FROM &4..C_ITEM;

Why did I create this partitioning scheme?
– Spec
– Disclosure reports
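The point of hash partitioning on the full key is to spread rows (and therefore IO) evenly across the &3 partitions. Oracle's internal hash function is not public; the sketch below uses Python's tuple hash purely as a stand-in to illustrate the even-spread idea:

```python
# Illustration only: how hash partitioning spreads C_ORDER rows.
# Python's hash() stands in for Oracle's internal partition hash.
from collections import Counter

PARTITIONS = 8  # stand-in for the &3 substitution variable

def partition_of(o_w_id, o_d_id, o_c_id, o_id):
    return hash((o_w_id, o_d_id, o_c_id, o_id)) % PARTITIONS

# 4 warehouses x 10 districts x 30 customers x 10 orders = 12,000 rows
counts = Counter(
    partition_of(w, d, c, o)
    for w in range(1, 5) for d in range(1, 11)
    for c in range(1, 31) for o in range(1, 11)
)
print(sorted(counts.values()))  # 12,000 rows spread over 8 partitions
```

Because the hash key includes every column of the composite primary key, consecutive orders for the same warehouse land in different partitions, avoiding hot spots under the TPC-C workload.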

Disclosure Reports

Disclosure Report – Lots of Info
This is where people document exactly which advanced database features and storage parameters they used – this info is invaluable.

Disclosure Report – Appendix B

Try that with BMF. Lessons people have learned: e.g. TPC-H results depend almost entirely on the number of disks and nothing else – you need well over 100 spindles for just a 300 GB test.

Top 10 Benchmarking Misconceptions

Results – Average Response Time Sub-Second

Apply Top-Down Analysis & Revision

Performance Testing Process (using tools):
1. Benchmark Factory – industry-standard benchmark: TPC-C & trace files; key metric = average response time
2. Spotlight on RAC – record before & after results; confirm improvements
3. TOAD with DBA Module – AWR/ADDM & Stats Pack; again record before & after for confirmation of improvements

TOAD® with DBA Module
Expedite typical DBA management & tuning tasks. Great productivity-enhancing features:
– Database Health Check
– Database Probe
– Database Monitor
– AWR/ADDM Reports
– UNIX Monitor
– Stats Pack Reports
See the Toad World paper titled "Maximize Database Performance Via Toad for Oracle". Lets the DBA concentrate on the task at hand – correcting (i.e. fixing). Toad to the rescue (as usual).


Configure So Toad Has That Performance Data

Wrap Up
– I can send everyone the slides, scripts, projects, etc.
– Time permitting – show the new BMF/TOAD integration
– BMF has a new product management directive:
  – # concurrent users sweet spot = 1000
  – # concurrent users max we'll support = 2500
– This limit is not because of BMF, but rather that people don't prepare or have the right expectations – we cannot afford to keep doing their benchmarking projects for them when they attempt tens of thousands of users