Performance Testing and OBIEE


1 Performance Testing and OBIEE
Robin Moffatt, WM Morrisons plc

2 Introduction Oracle BI specialist at Morrisons plc
Big IT development programme at its early stages, implementing OBIEE, OBIA, and ORDM, all on Oracle 11g and HP-UX.

At Morrisons we're on the latest (current) version of OBIEE 10g, and use Oracle 11g on HP. We're using OBIA and ORDM, and we've built our own ODS using Oracle Data Integrator. The performance work that I've done so far has been with OBIA, but what I'll be talking about today should be applicable to any OBIEE installation. I'm interested to hear other people's experience with OBIA and performance. Come and speak to me afterwards!

We did the performance work for an existing problem with OBIA; the methodology is for future projects.

3 The aim of this presentation
A performance testing methodology
OBIEE techie stuff
Learn from my mistakes!
Three things to take away
Questions: quick ones as we go, and a Q&A discussion at the end
NB – this is about testing, not tuning

4 What is performance testing all about?
Response times: report, ETL batch, OLTP transaction. System impact: resource usage. Scalability. Quantified and empirical.

So, what is performance testing? I want to go through this pretty briefly, as it's a huge area of theory that I can't do proper justice to. I'll try and cover off what I see as some of the basics. It's important to really understand this when you're doing it, otherwise you're not going to get valid test results and you'll probably end up wasting a lot of time. For a proper understanding of it I'd really recommend reading papers written by Cary Millsap.

Performance testing is a term used to cover quite a few different things. The way I define it (and this may or may not be industry standard, so apologies) is this. Performance testing itself asks: does it go fast enough? Does my report run in a time that meets user specification, or expectation? (This is within the context of OBIEE; for ETL we'd be talking about job runtimes and batch windows.) Performance testing is the repeatable running of a request to obtain metrics (primarily response time), to determine whether its performance is acceptable or not. Acceptable is a subjective term, but would typically be driven by user requirements. In the context of OBIEE we're basically asking: does an Answers report or dashboard run fast enough to keep the users happy?

The next stage after performance testing is generally load testing, which asks: will it still go fast enough once everyone's using it? Load testing is about quantifying the effect an application is going to have on a system, and whether the application is going to perform acceptably when the system is under load. One report run on its own might be fine, but what happens on a Monday morning when a thousand users all log on at the same time and all run the report? The other big consideration with load testing is: will it break my server? Or, to be a bit more precise, what kind of impact is a new system going to have on an existing one? When you put your new reporting system live, what will happen to the one that's been running happily for a year?

A logical extension of load testing is stress testing: how far can it scale? This applies both to the reporting system you're putting in and to the servers themselves, which therefore feeds into capacity planning.

I've heard these terms used interchangeably, and there's quite a difference between them in my mind. Why does it matter? Because it dictates how you design and execute your testing. When you move from performance testing to load testing you generally take a step back in terms of level of detail. My definition: load testing is not performance testing. It may be an area within it, but it is most definitely not one and the same thing.

Finally, this presentation is not about performance tuning. Of course, testing feeds into tuning, which in turn feeds into testing, so the two are inextricably linked. But to keep things focussed I will try to avoid discussing tuning specifics, otherwise we'll be here all day.

5 Why performance test? (Isn’t testing just for wimps?)
Check that your system performs: are the users going to be happy? Baseline: how fast is fast? How slow is slow? Validate system design: do it right, first time. Capacity planning.

This may be stating the obvious, but there are some pretty hefty reasons why you should incorporate multiple iterations of performance testing in any new development.

To test whether the new system performs well. Is it going to return the data to the users in the time that they're expecting? If you don't test for this reason alone then you're either brave, or foolhardy! What are BI systems about, if not providing the best experience to the end user?

To provide baselines. How do you know if a system is performing worse if you don't know how it performed before? How often have you had the problem reported to you, "my report's running slow"? If it took 10 seconds to run when you put the system live, then you've got a problem. But what's slow? 9 minutes? If it took 9 minutes to run when you put the system live, then you've possibly just got an impatient user with unrealistic expectations. When you do get a performance problem, assuming you built your performance test packs beforehand, you'll be all set to diagnose where the problem lies; by their definition, performance tests generate a lot of lovely metrics. Once you think you've fixed the performance problem, you need to validate two things: have you fixed it, and have you broken anything else? Your performance test packs will give you the basis on which to prove it.

To validate the way you are building the system, for example partitioning or indexing methods. How often have you heard "it depends" from a DBA? The optimal parallelism setting or partitioning strategy this time round may be different from what was optimal on the last project you did. An index may help one report, but how do you know it doesn't hinder another? Unless you have a pack of repeatable tests with timings, you can't quickly tell what the impact is. In effect, you do your performance tuning up front, as part of the build, rather than with a thousand angry users furious that their reports have stopped working.

6 Why performance test? It’s never too late
“You’ll never catch all your problems in pre-production testing. That’s why you need a reliable and efficient method for solving the problems that leak through your pre-production testing processes.” — Cary Millsap, Thinking Clearly About Performance

It's not too late to put in place a solid performance test methodology around an existing system. If you set up your performance tests now, you will have a set of baselines and a full picture of how your system behaves normally. Then, when it breaks or someone complains, you're already set to deal with it. If you don't, then when you do have problems you have to start from scratch, finding your way through the process. Which is better: at your leisure, or with the proverbial gun of unhappy users held to your head?

Performance testing isn't optional; it's mandatory. It's just up to you when you do it. Even if you're running Exadata or similar and are so confident that you don't have performance problems and never will, how are you going to capacity plan? Do you know how your system currently behaves in terms of CPU, IO, etc? How many more users can you run? Which is easier: simulate and calculate up front, or wait until it starts creaking?

7 Why performance test? Because it makes you better at your job
“At the very least, your performance test plan will make you a more competent diagnostician (and clearer thinker) when it comes time to fix the performance problems that will inevitably occur during production operation.” — Cary Millsap, Thinking Clearly About Performance

Performance testing requires an extremely thorough understanding of the system. If you do it properly, there is no doubt you will come out of it better equipped to support and develop the system further.

8 Performance Testing – What & Why
Quantifying response times
System impact
User expectations
Problem diagnosis
Design validation
Any questions so far?

9 Performance Testing - How?
Iterative approach: Define, Measure, Analyse, Review, Implement. Timebox! Do it right; don't "fudge it". Evaluate design / config options, do more testing; redefine the test, do more testing. Be methodical.

This is the methodology we've developed. It's a high-level view; subsequent slides will give the detail.

10 Define & build your test
Define – what are you going to test: the aim of the test, its scope, assumptions, and specifics (data, environment, etc). Build – how are you going to test it (OBIEE specific).

Example aims: check that the system performs; baseline performance; prove system capacity; validate system design. Make sure you have a specific aim: it's easier to have two clear tests than to try to cover everything in one. A loose analogy: the 10,000 ft view vs the low-level flyby. Don't forget predicates: one report may have many filters and behave very differently depending on how they're set. Write it all down. Think about breadth of test (number of reports) vs depth (number of metrics).

If your test aim doesn't define clearly enough how you're going to run it, consider these two options.

Option 1: top down. Start big, see what breaks, and follow standard troubleshooting to isolate the problem. This is the shortest time to initial results, but it's difficult to isolate any problems, so it's imprecise. It's better for firefighting an existing problem where time is limited or there are obvious quick wins. For example: look in Usage Tracking for all reports that run for more than five minutes, and test only those reports.

Option 2: bottom up. Start small: define each test component, record behaviour for each test run, combine test components into bigger test runs, and scale up to load testing. This is ultimately more precise: more metrics means more precision, which means issues and resolutions are identified more quickly. But it's boring! Where are my gigs-of-throughput bragging rights, or my smoking server groaning under the load? Well, what's the point of running ten thousand users through your system just to prove it goes bang (or doesn't)? It's a longer process and needs more accuracy: for example, report response time requirements from users, and representative workloads.

Your testing may give rise to some tuning, so your tests must be isolatable and repeatable; otherwise, how do you validate that what you've changed has fixed the problem and not made it worse? Repeatable: what's the complete set of things you'd need to run the same test somewhere else? E.g. schema definition, DB config parameters, OBIEE NQSConfig, the OBIEE RPD, the OBIEE reports, plus Presentation Services config and the web catalogue. Isolatable: do you have a dedicated performance environment? What else is running at the same time?

11 Consider your test scope
Fewer components = easier to manage = more precise = more efficient. More components = more complex = more variables = larger margin of error. Reduce complexity intelligently: don't cut corners, but distil the problem to its essence. Pick the closest point upstream from the bottleneck.

So you've seen the different places in which you can test OBIEE. But how do you choose the one most applicable to the testing you're doing? Should you just run everything against the database directly? Basically, it's about keeping it simple and avoiding unnecessary complexity.

Say someone gives you a Ducati engine to fix, and you've already identified the area of the problem. Would you still spend all your time looking at the whole engine, or isolate where you know the problem lies and work on it from there? Now clearly this isn't an absolute, because there could be more than one problem with the engine, and so on, but the principle is sound: reduce the complexity, intelligently.

The same principle applies to testing OBIEE. Depending on the kind of reports, your RPD, your data model, and your database, your performance bottlenecks could be anywhere across the stack. You shouldn't be looking to cut corners, but look at it as distilling what you're testing down to its essence and nothing more. Hopefully this is pretty obvious, as it's going to be a more efficient use of your time, but here are the two reasons why.

1) Your tests show a slow response time and you need to track down and diagnose it. Would you rather be considering three elements or three hundred? When I was working on some performance testing, it was clear that very little time was spent after the BI Server passed data back up to Presentation Services. So I cut out Presentation Services entirely, because the aim of my testing was to resolve reports that were taking five or ten minutes to run, and those minutes were always downstream of Presentation Services. Bear in mind where your dependencies lie and where the stack is coupled. The time itself was always in the database, but you can't just edit the SQL, because that comes from the BI Server. So you intelligently pick the closest point upstream from the bottleneck.

2) The other reason to reduce complexity is the impact that a change will have on your test plans. If you decide you want to implement a change, maybe based on the results of your testing, how many stages in the test would you like to have to change and re-configure your monitoring for: two, or twenty?

12 OBIEE stack
Request flow down the stack: report / dashboard → Presentation Services → Logical SQL → BI Server → physical SQL statement(s) → database. Results flow back up: data set(s) → BI Server → data set → Presentation Services → rendered report. (This excludes the app/web server and the Presentation Services plug-in.)

Hopefully everyone's familiar with this picture, but here's a quick refresher. A rather grumpy-looking user runs a request in Answers. Presentation Services sends the Logical SQL to the BI Server. The BI Server sends physical SQL to the database server. The database server returns results to the BI Server. The BI Server processes the data (aggregates, stitches, etc). The BI Server returns the data to Presentation Services, which renders the report and returns it through the web/app server to the user, who's hopefully now happy.

13 OBIEE testing options
From the top of the stack down, the points you can drive a test from:
Rendered report / dashboard, via Presentation Services: a load testing tool (e.g. LoadRunner, OATS), or a user and a stopwatch
Logical SQL, via the BI Server: nqcmd, or another ODBC-capable SQL client
Physical SQL, via the database: a SQL client against the database directly

So, how can we do our testing? I'm going to cover all the options first, and then discuss why you'd choose one rather than another. The whole point of testing is that it is repeatable, which will normally mean automated. From the top down:

To simulate the complete end-to-end system, you've two ways: a user and a stopwatch :-), or a web-capable testing tool such as LoadRunner or Oracle Application Testing Suite, which simulates a user interacting with OBIEE dashboards or Answers. Maybe something clever with web services too?

If you want to test from the BI Server onwards only: nqcmd, a utility that comes with OBIEE and interfaces with the BI Server using ODBC. I'm going to spend a lot of this presentation talking about it, and will come back to it shortly. You could also use any other ODBC-capable tool to generate the workload.

Finally, you could run the SQL on the database only. This isn't as simple as it sounds, because remember that the BI Server generates the SQL. How often have you seen the SQL being run on the database and winced, or had a DBA shout at you for it? The BI Server is a black box when it comes to generating the SQL; all you can do is encourage it through good data modelling, both in the database schema and the RPD. However, if the focus of your performance testing (remember I talked about defining why you're doing it) leans towards load testing, and you're happy that the big wins are not in the BI Server but in tuning the database itself, then you could consider running the SQL directly against the database. If all that will change is the execution plan, then you can save yourself a lot of time by effectively cutting out OBIEE entirely and treating it purely as a database tuning exercise. Candidates for this approach would be evaluating new indexes, or parallelism or compression settings.

If you're just interested in the database, the tooling is vendor-specific. For Oracle I'd consider: a SQL file run through SQL*Plus from the command line (lends itself to scripting); SQL Tuning Sets, which you can feed into SQL Performance Analyzer to run on another database; or Oracle RAT, Real Application Testing (made up of Database Replay and SQL Performance Analyzer).

14 OBIEE testing options
This is what we opted for: nqcmd sends Logical SQL to the BI Server; the BI Server sends physical SQL to the database, which returns the data set(s).

15 nqcmd nqcmd is part of the OBIEE installation on both unix and windows
Command: nqcmd – a command line client which can issue SQL statements against either the Oracle BI Server or a variety of ODBC-compliant backend databases.

SYNOPSIS
    nqcmd [OPTION]...

DESCRIPTION
    -d<data source name>
    -u<user name>
    -p<password>
    -s<sql input file name>
    -o<output result file name>
    -D<Delimiter>
    -C<# number of fetched rows by column-wise binding>
    -R<# number of fetched rows by row-wise binding>
    -a (a flag to enable async processing)
    -f (a flag to enable to flush output file for each write)
    -H (a flag to enable to open/close a request handle for each query)
    -z (a flag to enable UTF8 instead of ACP)
    -utf16 (a flag to enable UTF16 instead of ACP)
    -q (a flag to turn off row output)
    -NoFetch (a flag to disable data fetch with query execution)
    -NotForwardCursor (a flag to disable forwardonly cursor)
    -SessionVar <SessionVarName>=<SessionVarValue>

You can use nqcmd interactively, or from a script.

16 nqcmd
    setup]$ . ./sa-init.sh
    setup]$ nqcmd

    Oracle BI Server
    Copyright (c) Oracle Corporation, All rights reserved

    Give data source name: RNMVM01
    Give user name: Administrator
    Give password: Administrator

    [T]able info
    [C]olumn info
    [D]ata type info
    [F]oreign keys info
    [P]rimary key info
    [K]ey statistics info
    [S]pecial columns info
    [Q]uery statement
    Select Option: C

    Give catalog pattern:
    Give user pattern:
    Give table pattern: Time
    Give column type pattern:

    TABLE_QUALIFIER       TABLE_NAME  COLUMN_NAME  A_TYPE  TYPE_NAME
    Sample Sales Reduced  Time        Day Date     9       DATE
    Sample Sales Reduced  Time        Week         12      VARCHAR
    Sample Sales Reduced  Time        Month        12      VARCHAR
    Sample Sales Reduced  Time        Quarter      12      VARCHAR
    Sample Sales Reduced  Time        Year         12      VARCHAR
    Row count: 5

Interactively, you can use nqcmd to query the logical data model that the RPD exposes. NB when you use nqcmd on Unix you need to make sure you've set the environment variables for OBIEE first, by dot-sourcing sa-init.sh.

17 nqcmd
    perftest]$ cat /data/perftest/lsql/test01.lsql
    SELECT "D0 Time"."T01 Per Name Week" saw_0
    FROM "Sample Sales"
    WHERE ("D01 More Time Objects"."T31 Cal Week" BETWEEN 40 AND 53)
      AND ("D01 More Time Objects"."T35 Cal Year" = 2007)
    ORDER BY saw_0

    perftest]$ . /app/oracle/product/obiee/setup/sa-init.sh
    perftest]$ nqcmd -d AnalyticsWeb -u Administrator -p Administrator -s /data/perftest/lsql/test01.lsql

    Oracle BI Server
    Copyright (c) Oracle Corporation, All rights reserved

    Connection open with info: [0][State: 01000] [DataDirect][ODBC lib] Application's WCHAR type must be UTF16, because odbc driver's unicode type is UTF16

    saw_0
    Week 40
    Week 41
    Week 42
    Week 43
    Week 44
    Week 45
    Week 46
    Week 47
    Week 48
    Week 49
    Week 50
    Week 51
    Week 52
    Week 53
    Row count: 14
    Processed: 1 queries

Using nqcmd to execute a given Logical SQL script is where its real power lies. Here we take a simple Logical SQL statement in the file test01.lsql, and run it as an input to nqcmd using the -s flag.

18 nqcmd load test demo
The good bit about being able to run nqcmd from the command line is that you can then call it from scripts. I've populated a directory with Logical SQL statements that were generated when I ran some dashboards on the SH repository. Using a script I've written, I can simulate running each of these reports and dashboards. I've disabled caching in the BI Server so as not to complicate things. The script scans the directory specified and runs everything in it through nqcmd.

Setup:
Run Oracle OS Watcher (snapshot interval in seconds, hours to archive): ~/osw/OSWatcher.sh 60 1
Run the Oracle OS Watcher grapher: java -jar ~/osw/oswg.jar -i ~/osw/archive/
Run SQL*Plus to show Usage Tracking: sqlplus / as sysdba @ut1
Tail NQQuery.log: tail -f /app/oracle/product/obiee/server/Log/NQQuery.log

First, check what's in Usage Tracking and NQQuery.log. Then run the replay script for a handful of lsql files:

    ./obiee_replay.sh /data/perftest/lsql_test1/

This runs a fifth of the scripts (there are 25 in total).

Once we've successfully set up the generation of the work, we need to monitor it. Here's what I'd do:
NQQuery.log – useful for diagnostics; parse it with grep (get_nq_stats.sh)
Usage Tracking – there's so much you can do with this; I'll come back to it in a minute
SQL Tuning Set capture
DB server stats – e.g. kc_io, Oracle OS Watcher, sar

What I'd then do is go to Usage Tracking and extract the run times from there. I've built my own table, OBIEE_REPLAY_STATS, which I populate with the response times from Usage Tracking (more on this later).
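The replay script itself isn't reproduced in the deck, so here's a minimal sketch of what an obiee_replay.sh could look like. It's an assumption of the original's behaviour, reusing the AnalyticsWeb DSN and Administrator credentials from the earlier nqcmd example; adjust paths and credentials for your install.

    #!/bin/bash
    # obiee_replay.sh - run every Logical SQL file in a directory through nqcmd.
    # Usage: ./obiee_replay.sh /data/perftest/lsql_test1/
    # Assumes sa-init.sh has already been dot-sourced to set the OBIEE environment.

    LSQL_DIR=$1
    DSN=AnalyticsWeb
    BIUSER=Administrator
    BIPASS=Administrator
    LOG=replay_$(date +%Y%m%d%H%M%S).log

    for f in "$LSQL_DIR"/*.lsql; do
        echo "$(date +%H:%M:%S) running $f" | tee -a "$LOG"
        # -q turns off row output: we want timings, not data
        nqcmd -d "$DSN" -u "$BIUSER" -p "$BIPASS" -s "$f" -q >> "$LOG" 2>&1
    done

Running the set twice, as suggested on the next slide, gives you a cold and a warm pass to compare.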

19 Usage Tracking or NQQuery.log
nqcmd is very versatile; here's how you can use it. With Unix shell scripting (or PowerShell on Windows), generate a set of Logical SQL files, then have a test script run them sequentially through nqcmd (twice). nqcmd sends the Logical SQL to the BI Server, and the response times are recorded in Usage Tracking or NQQuery.log.

20 Master test script
Aim: a script that encompasses the whole test. Press a button and it does it all; less interaction means less effort and less error. A master test script kicks off multiple test scripts, each of which drives nqcmd against the BI Server.

To run nqcmd in parallel: it's just a script, so invoke it twice, three times, four times. Write "user" scripts that simulate users, with random report choice and "sleeping". Each script should include: random report choice; sleeping; logging to file; SQL interaction, e.g. triggering SQL Tuning Set collection. Automate the metric collection too. The target is to press a button to run the script and get response time numbers out. A sketch of such a user script follows.
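Here's a sketch of what a simulated "user" script and its master invocation might look like, building on the hypothetical obiee_replay.sh above; the sleep range, report count, and user count are arbitrary illustrations.

    #!/bin/bash
    # simulated_user.sh - one "user": run randomly chosen reports with think time.
    # Usage: ./simulated_user.sh <user_id> <lsql_dir> <num_reports>

    USERID=$1; LSQL_DIR=$2; NUM=$3
    DSN=AnalyticsWeb; BIUSER=Administrator; BIPASS=Administrator
    LOG=user_${USERID}.log

    files=("$LSQL_DIR"/*.lsql)
    for i in $(seq 1 "$NUM"); do
        # Random report choice
        f=${files[RANDOM % ${#files[@]}]}
        echo "$(date +%H:%M:%S) user $USERID runs $f" >> "$LOG"
        nqcmd -d "$DSN" -u "$BIUSER" -p "$BIPASS" -s "$f" -q >> "$LOG" 2>&1
        # Think time: sleep between 5 and 30 seconds
        sleep $(( (RANDOM % 26) + 5 ))
    done

    # master_test.sh - the press-a-button bit: e.g. ten concurrent users
    # for u in $(seq 1 10); do ./simulated_user.sh "$u" /data/perftest/lsql/ 20 & done
    # wait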

21 LoadRunner a.k.a. HP Performance Centre
Simulates user interaction as HTTP traffic. Powerful, but can be difficult to set up, and Ajax complicates things. Do you really need to use it? Useful tools for inspecting the traffic: Fiddler2, FireBug. Reference: My Oracle Support – Doc ID

22 Defining your test - summary
Be very clear what the aim of your test is. You probably need to define multiple tests. There are different points on the OBIEE stack to interface with: pick the most appropriate one. Write everything down! Any questions?

23 Measure
Once you've defined your test you need to execute it, and measure the results. Here are the ways you should consider measuring the different parts of the stack.

24 OBIEE measuring & monitoring
Tools per layer:
Server metrics (e.g. IO, CPU, memory): PerfMon (Windows), Oracle OS Watcher (Unix), Enterprise Manager (Oracle)
Web server: Apache log
App server: OAS log
Presentation Services plug-in: Analytics log
Presentation Services: sawserver.log
BI Server: Usage Tracking, NQServer.log, NQQuery.log, systems management (PerfMon on Windows, jConsole etc via JMX), Enterprise Manager BI Management Pack
Database: Enterprise Manager, ASH, AWR, SQL Monitor

Once you've decided how much of the stack you're going to test, you need to set about designing the test and how you're going to capture your metrics. Performance tests are all about collecting metrics that allow you to make statistically valid and quantifiable conclusions about your system. The primary metric of interest is time. What's the end-to-end response time, from request to answer, and where is the time in between spent? If a user complains that a report takes five minutes to run, but the DBA says they don't see the query hit the database for the first two, and then it executes in 30 seconds, what's happened to the other two and a half minutes? Other metrics of interest are environmental statistics like CPU, memory, and IO, and diagnostic statistics such as the execution plan on the database and lower-level information like buffer gets.

So, from the top down. The web server, e.g. the Apache log: the first log of the user request coming in. The app server, e.g. OAS. The Presentation Services plug-in, Analytics: this is where you see the error logs when you get a 500 Internal Server Error from analytics. sawserver.log: by default this doesn't record much, but by changing the logconfig.xml file you can enable extremely detailed logging. This is useful for diagnosing lots of problems, but also if you're looking to do an accurate profile of where the time in an Answers request is spent: you can see when it receives the user request, when it sends the Logical SQL on to the BI Server, and when it receives the data back.

The BI Server: spoilt for choice here. For a production environment I strongly recommend enabling Usage Tracking. For performance work you should also be using NQQuery.log, where the variable levels of logging show you logical and physical SQL, BI Server execution plans, response times for each database query run, etc. As well as these two features there is the systems management functionality, which exposes some very detailed counters through Windows PerfMon or the BI Management Pack for OEM. You can also use the JMX protocol to access the data through clients like jConsole or jManage.

For the database, all the standard monitoring practices apply, depending on what your database is. For Oracle you should be using OEM, ASH, SQL Monitor, etc.

And finally, for a complete picture of the stack's performance: speak to your users! Maybe not as empirically valid as the other components, but just as important. An example query against Usage Tracking follows.
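As an example of the Usage Tracking route, a query like this pulls response times straight out of S_NQ_ACCT (the standard Usage Tracking table; the owning schema shown here and the one-day filter are assumptions):

    sqlplus obiee_ut/password <<'EOF'
    -- Slowest requests in the last day, from Usage Tracking
    SELECT start_ts, saw_dashboard, saw_src_path,
           row_count, num_db_query, total_time_sec
    FROM   s_nq_acct
    WHERE  start_ts > SYSDATE - 1
    ORDER  BY total_time_sec DESC;
    EOF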

25 NQQuery.log
    Query Status: Successful Completion
    Rows 1, bytes 96 retrieved from database query id: <<10172>>
    Physical query response time 1 (seconds), id <<10172>>
    Rows 621, bytes 9246 retrieved from database query id: <<10188>>
    Physical query response time 10 (seconds), id <<10188>>
    Physical Query Summary Stats: Number of physical queries 2, Cumulative time 11, DB-connect time 0 (seconds)
    Rows returned to Client 50
    Logical Query Summary Stats: Elapsed time 14, Response time 12, Compilation time 2 (seconds)

NQQuery.log shows database query times and row counts. It's useful when a problem is suspected on the database, and it's the only place that individual physical SQL query response times are kept.
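The get_nq_stats.sh parser mentioned earlier isn't shown, but a minimal version is little more than a grep; this sketch matches the log line format in the excerpt above:

    # All physical query response times recorded in NQQuery.log
    grep "Physical query response time" /app/oracle/product/obiee/server/Log/NQQuery.log

    # Tabulate just the seconds and the query id with awk
    grep "Physical query response time" NQQuery.log | awk '{print $(NF-3), $NF}'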

26 Database metrics Oracle
Enterprise Manager's Performance functionality is fantastic, but for pure testing metrics capture you need to go to the underlying tables: V$SQL_MONITOR, etc.

SQL Tuning Sets are good for capturing the behaviour of a set of SQL over time: longest running, most IO, etc. They're less good for focussing on individual queries, because the stats are aggregated. SQL Monitor reports can be exported from EM (next slide).

For SQL Server: DMVs, SQL Profiler, PerfMon counters, etc. Other RDBMSs: pass!
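Triggering SQL Tuning Set collection from a test script (as mentioned earlier) can be done with DBMS_SQLTUNE; a sketch, where the set name and capture window are illustrative:

    sqlplus / as sysdba <<'EOF'
    -- Create a SQL Tuning Set, then poll the cursor cache while the
    -- nqcmd workload runs, capturing the statements it executes
    BEGIN
      DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'OBIEE_PERFTEST_01');
      DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(
        sqlset_name     => 'OBIEE_PERFTEST_01',
        time_limit      => 600,   -- total capture window, in seconds
        repeat_interval => 10);   -- poll the cursor cache every 10 seconds
    END;
    /
    EOF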

27 Oracle SQL Monitor
Got to mention this: from EM you can export a standalone SQL Monitor HTML file that renders the execution report in a browser. Brilliant.

28 Measure - summary Lots of different ways to measure
There are lots of different ways to measure; decide which metrics are relevant to your testing. For load testing, system metrics are very important; for performance testing an individual report, maybe just response time. Build measurement into your test plan: plan your measurements as part of the test, trigger collection scripts automagically (a sketch of such a wrapper follows), and include any manual collection steps in the test instructions. Automate where possible: it's easier, and less error-prone.
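A sketch of such a wrapper, tying together the OS Watcher commands from the demo earlier with the hypothetical replay script (stopOSW.sh is OSWatcher's stop script; verify the name in your version):

    #!/bin/bash
    # run_measured_test.sh - start collection, run the workload, stop collection
    TESTID=$1
    echo "=== test $TESTID started $(date) ==="

    # Start OS-level metric capture in the background
    ~/osw/OSWatcher.sh 60 1 &

    # Run the workload
    ./obiee_replay.sh /data/perftest/lsql_test1/

    # Stop OSWatcher and graph what it collected
    ~/osw/stopOSW.sh
    java -jar ~/osw/oswg.jar -i ~/osw/archive/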

29 Analyse
The analysis step: collate the data and store it in a sensible way. Keep the raw data, and label your tests; it's better to use a non-meaningful label (such as test01) than a descriptive one that can become misleading. Then analyse it, with visualisation. The analysis will depend on the aim of the test; e.g. for load testing, identify bottlenecks.

30 Analysing the data
Examples of analysing the data: the raw data itself; a comparison to a previous baseline, to illustrate variance; host metrics, such as an IO graph; and response time plotted over time.

31 Analysing the data
Illustrating the data: colours! Use Excel; conditional formatting is great.

32 Analysing the data
    Response time (as run):   1  9  3  2  10
    Response time (sorted):   1  2  3  9  10

    Average (mean):           3.4
    50th percentile (median): 2
    90th percentile:          9.1

Think about the data statistically: what do the numbers represent? The average (mean) is often used, but it ignores variance. A percentile is more representative. Standard deviation gives an indication of the variance. Sample quantity determines whether the result is statistically valid.
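Oracle can compute these statistics directly over captured response times; a sketch, assuming the OBIEE_REPLAY_STATS table described on the Extending Usage Tracking slide:

    sqlplus perftest/password <<'EOF'
    -- Mean, median, 90th percentile and spread of response time, per test run
    SELECT testid,
           COUNT(*)                                                    AS samples,
           ROUND(AVG(response_time), 1)                                AS mean_sec,
           PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY response_time)  AS median_sec,
           PERCENTILE_CONT(0.9) WITHIN GROUP (ORDER BY response_time)  AS pct90_sec,
           ROUND(STDDEV(response_time), 1)                             AS stddev_sec
    FROM   obiee_replay_stats
    GROUP  BY testid
    ORDER  BY testid;
    EOF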

33 Recording data about the test
As well as metrics, record data about your tests. What are you recording "response time" against: the report? The physical SQL? The levels, and a way to capture an identifier for each:
Dashboard / Requests
Logical SQL – ORA_HASH(QUERY_TEXT)
Physical SQL – SQL ID
Execution plan – execution plan hash id

For each test execution, aim to record how each level relates to the next, e.g. Logical SQL to SQL IDs, and SQL IDs to execution plan ids. These links might seem constant, but they can change between tests: changing the RPD could change the Logical SQL, and therefore the physical SQL, SQL ID, and execution plan; a new index wouldn't change the physical SQL or SQL ID, but the execution plan might. Recording the links helps with retrospective analysis: when analysing test results it's useful to be able to identify a SQL ID in Oracle AWR, etc. A sketch of capturing these follows.
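A sketch of capturing those identifiers on the Oracle side; ORA_HASH and V$SQL's SQL_ID / PLAN_HASH_VALUE columns are standard, but the MODULE filter is a hypothetical way of spotting BI Server sessions and depends on your connection settings:

    sqlplus / as sysdba <<'EOF'
    -- Hash the Logical SQL consistently, as a key for cross-referencing
    SELECT ORA_HASH(query_text) AS qt_ora_hash, total_time_sec
    FROM   s_nq_acct
    WHERE  start_ts > SYSDATE - 1/24;

    -- Record which physical SQL and execution plans the workload produced
    SELECT sql_id, plan_hash_value, executions,
           ROUND(elapsed_time / 1e6, 1) AS elapsed_sec
    FROM   v$sql
    WHERE  module LIKE 'nqsserver%'   -- hypothetical filter for BI Server sessions
    ORDER  BY elapsed_time DESC;
    EOF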

34 Extending Usage Tracking
Three tables, linked by ORA_HASH(QUERY_TEXT):
S_NQ_ACCT (standard Usage Tracking): START_TS, ROW_COUNT, TOTAL_TIME_SEC, NUM_DB_QUERY, QUERY_TEXT, QUERY_SRC_CD, SAW_SRC_PATH, SAW_DASHBOARD
OBIEE_REPLAY_STATEMENTS (new lookup table): qt_ora_hash, query_text, saw_path, dashboard
OBIEE_REPLAY_STATS (new fact table): testid, testenv, qt_ora_hash, start_ts, response_time, row_count, db_query_cnt

This follows on from how to capture data. It's something I put together to help with analysing statement performance across systems, with ORA_HASH(query_text) as the common link. Database tables are the obvious place to store raw performance test data, but keeping track of Logical SQL statements, often 2-3k in size, is difficult, so I used ORA_HASH to encode them, and built a new lookup table and a new fact table. Running queries on different systems, I could compare equal statements this way.
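A sketch of the DDL these tables imply; the column names come from the slide, but the datatypes and the populating INSERTs are assumptions:

    sqlplus perftest/password <<'EOF'
    -- Lookup: one row per distinct Logical SQL statement
    CREATE TABLE obiee_replay_statements (
      qt_ora_hash  NUMBER PRIMARY KEY,    -- ORA_HASH(query_text)
      query_text   VARCHAR2(4000),
      saw_path     VARCHAR2(1024),
      dashboard    VARCHAR2(1024)
    );

    -- Fact: one row per statement execution per test run
    CREATE TABLE obiee_replay_stats (
      testid        VARCHAR2(30),
      testenv       VARCHAR2(30),
      qt_ora_hash   NUMBER REFERENCES obiee_replay_statements,
      start_ts      DATE,
      response_time NUMBER,
      row_count     NUMBER,
      db_query_cnt  NUMBER
    );

    -- Populate the lookup with any statements not yet seen
    INSERT INTO obiee_replay_statements
    SELECT DISTINCT ORA_HASH(query_text), query_text, saw_src_path, saw_dashboard
    FROM   s_nq_acct
    WHERE  ORA_HASH(query_text) NOT IN
           (SELECT qt_ora_hash FROM obiee_replay_statements);

    -- Populate the fact table from Usage Tracking after a test run
    INSERT INTO obiee_replay_stats
    SELECT 'TEST01', 'PERF_ENV', ORA_HASH(query_text),
           start_ts, total_time_sec, row_count, num_db_query
    FROM   s_nq_acct
    WHERE  start_ts > SYSDATE - 1/24;
    EOF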

35 Analyse
At the end of the analyse step you have options. Most likely you'll change something and re-measure (partitioning, system config, etc). Choose what to do: more tests? Indexes? Config settings? Is the test itself wrong? Then redefine it. When you've completed all the tests, or reached the end of the timebox, move on to review. Do it right, don't "fudge it", and timebox!

36 Review
Summarise the analysis and compare it to the test aim. What did the tests show? Have you proved anything? Have you disproved anything? Do you need to test some more, and have you got time to? Do you need to define a new set of tests? Consider the implementation options: effort vs benefit, and what's practical to implement. You can also branch: implement and continue testing at the same time. The outputs of review are: redefine the test, continue testing, or implement.

37 Review
An example from one of the review stages: evaluating the IO profile from four different test iterations. With default parallelism there was a bottleneck at 800MB/s. The output: implement a reduced DOP, and branch to look at auto-DOP in 11gR2.

38 Implement
Implement the chosen option, then baseline and test it using your perf test scripts. Don't forget to validate your implementation. Branch the code line and do more testing if you want (for us, e.g. parallelism and compression). When you hit performance problems in production, you can use your pre-defined perf tests to assess the scope and nature of the problem.

39 Lessons Learnt You won’t get your testing right first time
There's no shame in that. Don't cook the books: it's better to redefine your test than to invalidate its results. Stick to the methodology. Don't move the goalposts. It's very tempting to pick off the "low-hanging fruit"; if you do, make sure you don't get indigestion. Timebox. Test your implementation!

You understand more about the system as the testing goes on, so you'll probably want to redefine the test; that's part of the process! Performance testing is an iterative process. I can't stress this enough: you will not get it right the first time you do it. Whatever you do, you'll probably miss something or invalidate your tests. Remember that an iterative approach is entirely valid; don't feel you "got it wrong" and have to fudge the results to cover your mistake. It's better to abandon a test and learn from the mistake than to produce a "perfect" test that's complete rubbish.

Sticking to the method also enforces justification for changes, and avoids "we've always done it that way". On moving the goalposts: you might find some horrible queries, and as you dig into them you notice some obvious quick wins. If you rush the fix in without completing your first round of testing, you risk invalidating it. Be methodical!

Timebox the execute/measure/analyse iterations, and have regular review points; don't get lost in diminishing returns.

Test your implementation! In our case a parallelism configuration wasn't tested properly after being implemented in the test environment, and it nearly got to production before we realised.

Don't get so bogged down in the detail that you miss the wood for the trees: you can end up focussing on perfecting one element of the system at the expense of all the others.

40 How to approach performance testing
Think clearly. This presentation has shown you how to run big workloads against your OBIEE system. But resist the temptation to dash off and see what happens when you run a thousand users against your system at once. It'll be fun, but ultimately a waste of time. You have to define what you're going to do, and what the ultimate aim is.

Are you proving a system performs to specific user requirements? In that case your test definition is almost written for you; you just have to fill in the gaps. If you're building a performance test for best practice and all the good reasons I spoke about before, then you need to think carefully about what you'll test. What's a representative sample of the system's workload? For example, analyse existing usage and pick the most frequently run reports. Speak to your users! Which reports do they care about? Be wary of only analysing the reports that users complain about, though; you want to be collecting lots and lots of good metrics. What happens when you fix the slow reports? The old "fast" reports will now appear slow in comparison, so you want baselines for them too.

I can't stress this strongly enough: Cary Millsap writes excellently on the whole subject of performance. I can't recommend highly enough his paper "Thinking Clearly About Performance", as well as many of the articles on his blog. There are books and books written on how you should approach performance testing and tuning; people like Mr Millsap have built their whole careers around it. It's way outside the scope of this presentation, but I believe it's essential to understand the approach to follow, otherwise all your testing can be in vain. It's not the same as dashing off an OBIEE report that you can bin and recreate next week. Imagine designing your DW schema without good modelling knowledge, or think of ones you've worked with where the person who created them didn't understand what they were doing. The wasted time and misleading results can be potentially disastrous if you don't get it right up front. Take my word for it: time invested up front in reading and understanding will repay itself ten-fold. Preaching over.

41 Performance Testing OBIEE
Iterative approach: Define, Measure, Analyse, Review, Implement. Do it right; don't "fudge it". Evaluate design / config options, do more testing; redefine the test, do more testing. Be methodical. Questions & discussion.

42 #EOF
Performance tuning links:
Rittman's article
OBIEE EMG thread
Oracle® Database Performance Tuning Guide 11g Release 1 (11.1): Performance Improvement Methods; I/O Configuration and Design
Oracle® Database 2 Day + Data Warehousing Guide 11g Release 1 (11.1): Optimizing Data Warehouse Operations; Eliminating Performance Bottlenecks
Oracle® Database VLDB and Partitioning Guide 11g Release 1 (11.1)
Oracle® Database Data Warehousing Guide 11g Release 1 (11.1): Part V, Data Warehouse Performance
DBMS_SQLTUNE

