1 Session ID: BI-206 Performance tuning SAP NetWeaver BW 7.x Dr. Bjarne Berg
2 Agenda
- Background
- The Right Design: InfoCubes, DSOs and MultiProviders
- Performance at All Levels
- Query Design and BOBJ Tips
- Building and Using Aggregates
- Using MDX and OLAP Cache Correctly
- Some Hardware Settings
- BW Accelerator
- What Is New in BW 7.3
- EarlyWatch Reports
3 User's Reaction to Poor Performance
Is this your user community? How can you avoid this BEFORE it happens?
4 The Key to BI Success
Performance is more important than price for senior management. 1,873 managers and business leaders were asked what factor was most important for their BI application. Even in a recession, the keys to BI success were functionality, ease of use, integration, and performance. Price, standards, product reputation, and architecture were of lesser importance. Source: Business Research Center Survey
7 An Example of a Real System
On the surface, this company appears to have a typical BW implementation with a set of InfoCubes, MultiProviders, and DSOs. The system supported five different departments and had been "live" for about one year.
8 A Quick View of the Technical Design
Just because an area is colored "red" does not mean it is wrong. However, an InfoProvider with many red areas is worth taking a close look at. Key question: How well do you know your own system?
9 Characteristics
In general, a common BW configuration contains a set of characteristics that are used for analysis purposes. The number of these characteristics varies from implementation to implementation; typical configurations range from 1 to 40 characteristics. The design depends largely on the requirements of the business, but there are technical tradeoffs in load times when adding a very high number of characteristics. This is particularly true when these contain large text fields that are loaded at high frequency and in a high number of records.
10 Navigational Attributes and Hierarchies
Navigational attributes lend flexibility to the way users can access data. Common configurations contain 1 to 30 attributes. While technically not incorrect, an InfoCube that does not contain any navigational attributes should be reviewed, as should any InfoCube that contains more than thirty; the latter may be an indicator that too much information is being placed in a single InfoCube.
Hierarchies are a way for users to "drill down" into the data and are commonly used for analysis purposes. Typical configurations tend to have one to eight. Review any InfoCube with no hierarchy to validate the design against end-user navigation, and question any design that contains a very high number of these.
Developer quote: "The consultants told me I could not cram all this stuff into one InfoCube. I told them – you just watch me!!"
11 Dimensions and Key Figures
BW allows up to 13 dimensions to be created by a developer on a single InfoCube. However, using all of these in a first implementation severely limits future extensions without a major redesign of the system. Review any InfoCube that is approaching this limit.
Key figures: While BW imposes no limit on the number of key figures (measures), typical implementations contain 1 to 20 of these. A higher number may be required, but there are significant load-performance tradeoffs when a high number of records is loaded (key figures are loaded with each transaction).
Lessons learned: Don't "paint" yourself into a corner on day one!!
12 Record Length
In general, as the record length of an InfoSource increases, more data may be populated to the InfoCube. Since an InfoCube might have more than one InfoSource, the length of each may be an indicator of how the InfoCube will grow as the company rolls out BW to other divisions. Review the design of InfoSources with large record lengths to determine the true need for including all the fields in the InfoCube versus using alternate fields (e.g., short texts or codes) or removing them from the system.
Lessons learned: Don't throw in the "kitchen sink" because it might come in handy one day...
13 Data Loads
When fixing data load problems, narrow the problem down quickly and focus on those areas. Most data loads can be decoupled from each other in the process chains, which may reduce the time needed for activation.
Lessons learned: Spend your time and effort in a focused manner!!
14 Database Performance (RSA1 --> Manage InfoCubes --> Performance)
Database statistics are used by the database optimizer to determine query execution plans. Outdated statistics lead to performance degradation. Outdated indexes can lead to very poor search performance in all queries where conditioning is used (e.g., mandatory prompts).
15 Use of Line Item Dimensions and Monitoring Tools
Line item dimensions are basically fields that are transaction oriented. Once flagged as a 'line item dimension', the field is actually stored in the fact table and requires no table joins. The result is a significant improvement in query speed (10%-15%).
Programs that can help you monitor the system design: SAP_ANALYZE_ALL_INFOCUBES, ANALYZE_RSZ_TABLES, SAP_INFOCUBE_DESIGNS.
Explore the use of line item dimensions for fields that are frequently conditioned in queries. This model change can yield faster queries.
16 B-tree vs. Bitmap Indexes
When you flag a dimension as "high cardinality", SAP BI will use a b-tree index instead of a bitmap index. This can be substantially slower if the high cardinality does not actually exist in the data (star joins cannot be used with b-trees). Validate the high cardinality of the data and reset the flag if needed – this will give a better index type and better performance.
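The decision behind the flag can be sketched as a simple rule of thumb. This is an illustrative Python sketch, not SAP code; the 20% threshold is an assumption based on the common guideline that a dimension counts as "high cardinality" when its row count approaches a substantial fraction of the fact table's.

```python
# Illustrative sketch (not SAP code): the rule of thumb behind the
# "high cardinality" flag. A dimension is typically worth a b-tree
# index only when its row count is a large fraction of the fact
# table's row count; otherwise a bitmap index (star-join capable)
# is the better choice.

def recommended_index(dimension_rows: int, fact_rows: int,
                      threshold: float = 0.20) -> str:
    """Suggest an index type for a dimension table."""
    if fact_rows == 0:
        return "bitmap"  # empty cube: keep the default index type
    ratio = dimension_rows / fact_rows
    return "b-tree" if ratio > threshold else "bitmap"

# A 50,000-row dimension on a 10M-row fact table is low cardinality:
print(recommended_index(50_000, 10_000_000))   # bitmap
```

Running the check against the real row counts (e.g., from the SAP_INFOCUBE_DESIGNS output) shows whether a flagged dimension actually deserves the b-tree.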
17 MultiProviders and Hints
Problem: To reduce data volume in each InfoCube, data is partitioned by time period. A query now has to search all InfoProviders to find the data, which is very slow.
Solution: We can add "hints" to guide query execution. In the RRKMULTIPROVHINT table, you can specify one or several characteristics for each MultiProvider, which are then used to partition the MultiProvider into BasicCubes. If a query has a restriction on such a characteristic, the OLAP processor checks in advance which part cubes can return data for the query; the data manager can then completely ignore the remaining cubes. An entry in RRKMULTIPROVHINT only makes sense if a few attribute values of this characteristic (that is, only a few data slices) are affected in the majority of, or the most important, queries (see the related SAP Notes).
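The pruning effect of such a hint can be illustrated with a short sketch. This is conceptual Python, not SAP code; the cube names and the 0FISCYEAR characteristic are hypothetical examples.

```python
# Conceptual sketch (not SAP code) of what an RRKMULTIPROVHINT entry
# achieves: when a query restricts the hinted characteristic, only
# part cubes whose data slice matches the restriction are read, and
# the rest are skipped entirely.

def prune_part_cubes(part_cubes: dict, query_filter: dict) -> list:
    """part_cubes maps cube name -> {characteristic: slice value}.
    query_filter maps characteristic -> restricted value (may be empty)."""
    survivors = []
    for cube, slices in part_cubes.items():
        # keep the cube unless a filtered characteristic rules it out
        if all(slices.get(char) == value
               for char, value in query_filter.items()
               if char in slices):
            survivors.append(cube)
    return survivors

# Hypothetical yearly part cubes behind one MultiProvider:
cubes = {"SALES_2009": {"0FISCYEAR": "2009"},
         "SALES_2010": {"0FISCYEAR": "2010"}}

print(prune_part_cubes(cubes, {"0FISCYEAR": "2010"}))  # ['SALES_2010']
```

With no restriction on the hinted characteristic, every part cube must still be read, which is why the hint only pays off for queries that actually filter on it.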
18 MultiProviders and Parallel Processing
To avoid a memory overflow, parallel processing is cancelled as soon as the collected result contains 30,000 rows or more and at least one sub-process is still incomplete. The MultiProvider query is then restarted automatically and processed sequentially. What appears to be parallel processing is actually sequential processing plus the startup phase of parallel processing. Generally, it is recommended that you keep the number of InfoProviders in a MultiProvider to no more than 10; however, even at 4-5 large InfoProviders you may experience performance degradation.
19 More on MultiProviders and Parallel Processing
Consider deactivating parallel processing for MultiProvider queries that have large result sets (and where "hints" cannot be used). Since SAP BW 3.0B SP14, you can change the default value of 30,000 rows; refer to the related SAP Notes. A larger number of base InfoProviders is likely to result in a scenario where there are many more base InfoProviders than available dialog processes, which results in limited parallel processing and many pipelined sub-queries. You can also increase the use of parallel processing by changing the QUERY_MAX_WP_DIAG setting in RSADMIN.
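The cancel-and-restart behaviour described on these two slides can be sketched as follows. This is a simplified Python model, not SAP code: sub-queries are represented as pre-computed result lists, and real parallel execution is simulated by iterating over them.

```python
# Sketch (not SAP code) of the fallback behaviour: sub-queries run in
# parallel, but once the collected result reaches the row threshold
# (default 30,000) while at least one sub-process is still open, the
# whole query is restarted and processed sequentially.

DEFAULT_THRESHOLD = 30_000  # changeable since BW 3.0B SP14 (see SAP Notes)

def run_multiprovider_query(sub_results, threshold=DEFAULT_THRESHOLD):
    """sub_results: one result list per base InfoProvider.
    Returns (mode, combined rows)."""
    collected = []
    for i, rows in enumerate(sub_results):
        collected.extend(rows)
        unfinished = i < len(sub_results) - 1
        if len(collected) >= threshold and unfinished:
            # memory-overflow guard: abandon parallel mode and
            # reprocess everything sequentially from the start
            combined = [row for rows in sub_results for row in rows]
            return "sequential", combined
    return "parallel", collected

mode, rows = run_multiprovider_query([[0] * 40_000, [1] * 10])
print(mode)   # sequential: the first sub-result already broke the limit
```

The sketch also shows why the fallback hurts: all the startup work of the parallel attempt is wasted before the sequential pass even begins.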
20 SAP BW 7.0 Performance - Data Activation
With BW 7.01, the delta consistency check for write-optimized DataStore objects can be disabled. This check protects delta requests that have already been propagated per delta mode from deletion, and it can be switched on or off – e.g., for write-optimized DataStore objects used as an initial staging layer. When the check is disabled, significant load performance benefits can be achieved (10-30%). The highest benefits are obtained on very large InfoProviders with thousands of requests.
21 Semantically Partitioned Objects in SAP BW 7.3
In BW 7.3, the semantically partitioned object (SPO) is introduced to help partition InfoCubes for query performance and DSOs for load performance. BW 7.3 provides wizards to help you partition objects by year, business unit, or product. BW also automatically generates all the needed DTPs, transformation rules, and filters to load the correct InfoProvider. SAP suggests that this makes maintenance easier, since any remodeling only needs to change the reference structure. SPOs can be added to MultiProviders for simpler query administration and to mask complexity. Source: SAP AG, 2010
23 Query Read Modes
There are three query read modes that determine the amount of data to be fetched from the database and sent to the application server:
1. Read all data: all data is read from the database and stored in user memory space
2. Read data during navigation: data is read from the database only on demand during navigation
3. Read data during navigation and when expanding the hierarchy: data is read when requested by users during navigation
Key feature: Reading data during navigation minimizes the impact on application server resources, because only the data that the user requires is retrieved.
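The difference between the three modes is simply how much of the result is transferred up front. The sketch below is illustrative Python, not SAP code; the single-letter mode codes (A, X, H) follow the values commonly shown in the RSRT Query Monitor, and the row counts are hypothetical.

```python
# Illustrative sketch (not SAP code) of the data volume each read
# mode transfers from the database on a given navigation step.
# "A" reads everything up front; "X" reads only what the current
# navigation state needs; "H" additionally defers data for
# unexpanded hierarchy nodes.

def rows_fetched(mode: str, all_rows: int, navigation_rows: int,
                 expanded_node_rows: int) -> int:
    if mode == "A":   # read all data
        return all_rows
    if mode == "X":   # read data during navigation
        return navigation_rows
    if mode == "H":   # read during navigation and hierarchy expansion
        return expanded_node_rows
    raise ValueError(f"unknown read mode: {mode}")

# For a large hierarchy, mode "H" transfers the least:
print(rows_fetched("H", 1_000_000, 50_000, 2_000))   # 2000
```

The ordering expanded_node_rows <= navigation_rows <= all_rows is what makes mode "H" the usual recommendation for large hierarchies on the next slide.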
24 Recommendation: Query Read Mode for Large Hierarchies
For queries involving large hierarchies, it is smart to select the Read data during navigation and when expanding the hierarchy option, to avoid reading data for hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, e.g., when a majority of users need a given query to slice and dice against all dimensions, or for data mining; this mode places heavy demands on database and memory resources and may impact other BW processes. A query read mode can be defined for an individual query or as a default for new queries (transaction RSRT).
SAP's recommendations for OLAP universes and ad-hoc analysis (formerly 'WebI'):
1. Use of a hierarchy variable is recommended
2. Hierarchy support in SAP Web Intelligence for SAP BW is limited
3. The Use Query Drill option significantly improves drilldown performance
4. Look at the 'Query Stripping' option for power users.
25 Reduce the Use of Conditions and Exceptions in Reporting
Conditions and exceptions are usually processed by the application server, which generates additional data transfer between the database and application servers. If conditions and exceptions have to be used, minimize the amount of data to be processed by using filters. When multiple drilldowns are required, separate the drilldown steps by using free characteristics rather than rows and columns. Benefit: this results in a smaller initial result set, and therefore faster query processing and data transport, compared to a query where all characteristics are in rows. In addition to accelerating query processing, separating the drilldown steps provides the user with more manageable portions of data.
26 Performance Settings for Query Execution
Settings available in the Query Monitor include:
- How many records are read during navigation
- Delta caching (in 7.x BI, the OLAP engine can read deltas into the cache without invalidating the existing query cache)
- Which request status to examine when reading the InfoProvider
- Turning parallel processing on or off
- When the query program is regenerated based on database statistics
- The level of statistics collected
27 Filters in Queries
Using filters reduces the number of database reads and the size of the result set, thereby significantly improving query runtimes. Filters are especially valuable on large dimensions with a large number of characteristic values, such as customers and document numbers. If large reports have to be produced, leverage the BEx Broadcaster to generate batch reports and pre-deliver them each morning to users' e-mail, PDF files, or printers.
28 The RSRT Transaction to Examine Slow Queries (P1 of 4)
The RSRT transaction is one of the most beneficial transactions for examining query performance and conducting 'diagnostics' on slow queries.
29 Do You Need an Aggregate? Some Hints (P2 of 4)
This suggests that an aggregate would have been beneficial.
30 Get Database Info (P3 of 4)
In this example, the Basis team should be involved to research why the Oracle settings are not per SAP's recommendations. The RSRT and RSRV transaction codes are key for debugging and analyzing slow queries.
31 Get Design Feedback in RSRT (P4 of 4)
We can see that the system creates a yellow flag for the 6 base cubes in the MultiProvider and a yellow flag for the 14 free characteristics.
Hint: Track front-end data transfers and OLAP performance by using RSTT in SAP BI 7.0 (RSRTRACE in BW 3.5).
32 Debug Queries Using the RSRT Transaction
Using RSRT, you can execute the query and stop at each breakpoint, thereby debugging the query and seeing where execution is slow. Try running slow queries in debug mode with parallel processing deactivated to see if they run faster.
33 The Performance Killers - Restricted Key Figures
When restricted key figures (RKFs) are included in a query, conditioning is done for each of them during query execution. This is very time consuming, and a high number of RKFs can seriously hurt query performance.
My recommendation: Reduce the RKFs in the query to as few as possible. Also, define calculated and restricted key figures at the InfoProvider level instead of locally within the query. Why? Formulas defined at the InfoProvider level are reused at runtime and held in cache, while local formulas and selections are recalculated with each navigation step.
34 SAP's Recommendation for Key Figures in OLAP Universes
"A large number of key figures in the BEx query will incur a significant performance penalty when running queries, regardless of whether the key figures are included in the universe or used in the SAP BusinessObjects ad-hoc (WebI) query. Only include key figures used for reporting in the BEx query. This performance impact is due to time spent loading metadata for units, executed for all measures in the query."
After SAP BusinessObjects Enterprise XI 3.1 FP 1.1, the impact of a large number of key figures was somewhat reduced by retrieving metadata information only when the unit/currency metadata is selected in the WebI query.
35 The Performance Killers - Calculated Key Figures
Calculated key figures (CKFs) are computed at runtime, and many CKFs can slow down query performance.
How to fix this: Many CKF calculations can be done during data loads and physically stored in the InfoProvider. This reduces the number of computations, and the query can use simple table reads instead. Do not use total rows when not required (these require additional processing on the OLAP side).
SAP's recommendation for OLAP universes: "RKFs and CKFs should be built as part of the underlying BEx query to use the SAP BW back-end processing for better performance. Queries with a larger set of such KFs should use the 'Use Selection of Structure Members' option in the Query Monitor (RSRT) to leverage the OLAP engine."
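The "compute at load, not at query" trade-off can be shown with a minimal sketch. This is illustrative Python, not SAP transformation code; the margin = revenue - cost formula and the field names are hypothetical.

```python
# Sketch (not SAP code) of the trade-off above: a derived key figure
# such as "margin = revenue - cost" can be computed once per record
# at load time and stored in the InfoProvider, instead of being
# re-evaluated by the OLAP engine on every query navigation step.

def transform_on_load(records):
    """Load-time transformation: compute and store the derived figure."""
    for rec in records:
        rec["margin"] = rec["revenue"] - rec["cost"]
    return records

def query_stored_margin(records):
    # Query time becomes a plain aggregation: no formula evaluation.
    return sum(rec["margin"] for rec in records)

data = transform_on_load([
    {"revenue": 120.0, "cost": 80.0},
    {"revenue": 200.0, "cost": 150.0},
])
print(query_stored_margin(data))   # 90.0
```

The cost is paid once per load rather than on every navigation step, which is exactly why the slide recommends moving heavy CKF logic into the data load.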
36 The BI Analytical Engine and Sorting
Sorting is done by the BI Analytical Engine. As in all computer systems, sorting data in reports with large result sets can be time consuming. Try reducing the number of sorts in the 'default view'. This may improve report execution and provide users with data faster; users can then choose to sort the data themselves. Reducing the amount of text in the query will also speed up processing somewhat.
38 Correct Aggregates Are Easy to Build
We can create proposals from the query, from the last navigation by users, or from BW statistics. Create aggregate proposals based on queries that are performing poorly, or based on BW statistics. For example: select the run time of queries to be analyzed and the time period to be analyzed; only queries executed in this time period will be reviewed to create the proposal.
39 Activate the aggregate The process of turning 'on' the aggregates is simple
42 Different Uses of the MDX and OLAP Caches
The OLAP Cache is used by BW as the core in-memory data set; it retrieves the data from the server if the data set is available. The cache is displaced on a first-in, first-out basis. This means that a query result set accessed by one user at 8:00 am may no longer be available in memory when another user accesses it at 1:00 pm; therefore, queries may appear to run slower at times. The MDX cache is used by MDX-based interfaces, including the OLAP Universe.
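The displacement behaviour can be modelled with a tiny bounded cache. This is a conceptual Python sketch, not the actual OLAP Cache implementation; the entry limit and query keys are hypothetical.

```python
# Sketch (not SAP code) of the displacement behaviour described
# above: a bounded cache evicts the oldest entry when full, so a
# result set cached at 8:00 may be gone by 13:00, and the same
# query then runs "slow" again because it must hit the database.
from collections import OrderedDict

class BoundedResultCache:
    def __init__(self, max_entries: int):
        self.max_entries = max_entries
        self._store = OrderedDict()

    def put(self, query_key: str, result) -> None:
        if query_key not in self._store and len(self._store) >= self.max_entries:
            self._store.popitem(last=False)   # displace the oldest entry
        self._store[query_key] = result

    def get(self, query_key: str):
        return self._store.get(query_key)     # None = cache miss -> DB read

cache = BoundedResultCache(max_entries=2)
cache.put("q_8am", ["result A"])
cache.put("q_9am", ["result B"])
cache.put("q_1pm", ["result C"])              # displaces q_8am
print(cache.get("q_8am"))                     # None
```

This is also the logic behind the next slide's tip: broadcasting popular queries into the cache re-inserts them before users arrive, so they are not the oldest entries when displacement occurs.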
43 Use the BEx Broadcaster to Pre-Fill the Cache
You can increase query speed by broadcasting the query results of commonly used queries to the cache. Users do not need to execute the query against the database; instead, the result is already in system memory (much faster).
45 The Memory Cache Size
The OLAP Cache is by default 100 MB for local and 200 MB for global use. This may be too low. Look at the available hardware and work with your Basis team to see if you can increase it. If you decide to increase the cache, use transaction code RSCUSTV14.
Warning: The cache is not used when a query contains a virtual key figure or virtual characteristic, or when the query accesses a transactional DSO or a virtual InfoProvider.
46 Monitor Application Servers and Adjust Cache Size
To monitor the usage of the cache on each of the application servers, use transaction code RSRCACHE, and periodically review the analysis of load distribution using ST03N (Expert Mode). PS! The size of the OLAP Cache is physically limited by the amount of memory set in the system parameter rsdb/esm/buffersize_kb. The settings are available in RSPFPAR and RZ11.
47 The Four Options for OLAP Cache Persistence Settings
- Flat file (default): change the logical file BW_OLAP_CACHE when installing the system (not a valid name); transaction FILE
- Cluster table (optional): for medium and small result sets; tables RSR_CACHE_DBS_IX and RSR_CACHE_DB_IX
- Binary large objects (BLOB): best for large result sets; tables RSR_CACHE_DBS_BL and RSR_CACHE_DB_BL
- Blob/cluster enhanced (available since BW 7.0 SP14): no central cache directory or lock concept (enqueue); not available by default – set RSADMIN parameter RSR_CACHE_ACTIVATE_NEW, VALUE=X
Source: SAP AG 2010.
48 Application Server Memory Usage
In this real example, be the judge. Do we: (a) need another application server, (b) need to upgrade the application server with more hardware, or (c) need to performance tune the application? Roll memory was never maxed out in the period; paging memory was never maxed out; extended memory was never maxed out; and only 3 GB of the 9 GB of heap memory was ever used.
50 Why In-Memory Processing?
Disk speed is growing more slowly than all other hardware components.
Technology drivers (1990 vs. 2010):
- CPU: 0.05 MIPS/$ to 253.31 MIPS/$ (5,066x)
- Memory: 0.02 MB/$ to 50.15 MB/$ (2,502x)
- Addressable memory: 2^16 to 2^64 (2^48x)
- Network speed: 100 Mbps to 100 Gbps (1,000x)
- Disk data transfer: 5 MBps to 600 MBps (120x)
Architectural drivers:
- From disk-based data storage to in-memory data stores
- From simple consumption of apps (fat client UI, EDI) to multi-channel UI, high event volume, and cross-industry value chains
- From general-purpose, application-agnostic databases to application-aware and intelligent data management
Physical hard drive speeds only grew by 120 times since 1990; all other hardware components grew faster.
Source: 1990 numbers, SAP AG; 2010 numbers, Dr. Berg
51 In Memory Processing - General Highlights - BWA
52 BI Analytical Engine's Query Execution Priorities
Query execution without SAP NetWeaver BW Accelerator: Information Broadcasting/Precalculation --> Query Cache --> Aggregates --> InfoProvider.
Query execution with SAP NetWeaver BW Accelerator: Information Broadcasting/Precalculation --> Query Cache --> SAP BW Accelerator.
Aggregates can be replaced with the SAP BW Accelerator, while the memory cache is still useful.
53 BW Accelerator Performance Increases - Real Example
The major improvement is to make query execution more predictable and overall faster.
[Charts: number of queries vs. runtime in seconds, before and after BWA]
54 SAP BW Accelerator Administration Is Minimal and Simple
The admin work is done through a single interface, available under transaction code RSDDBWAMON. Health checks for the SAP BW Accelerator are available under transaction code RSRV. Plan for 2-5 days of SAP BW Accelerator training; you need at most 1-2 administrators (one as backup).
55 Health-Checks and Reconciliation The SAP BW Accelerator interface allows you to compare the data in SAP BW vs. the indexes. This means that you can easily check if they are outdated.Other tools include the ability to run queries to see if the numbers in the two databases match.
56 Proposals and Estimations
The Analysis and Repair options include many proposal and time-estimation tools that you should leverage. The interface can propose delta indexes for periodic updates (instead of complete rebuilds). You can estimate the runtime of indexing the fact table of an InfoCube before you place it into a process chain or a manual job, and you can estimate the memory you need before you add new records into memory.
57 The SAP BW Accelerator “Reset Button” The simple way to fix most issues is to delete all indexes and rebuild them during a weekendThink of this as the ultimate “reset” buttonYou can also rebuild master data indexes
59 SAP BW 7.3 Performance - Data Movement and Activation
BW version 7.3 has significant performance benefits:
- Semantically partitioned objects (SPOs), as already covered
- Improved data activation, due to a new package fetch of the active table instead of single lookups; the new 7.3 runtime option "new, unique data records only" prevents all lookups during activation
- A new monitor in the BW Administration Cockpit, so that database usage can be tracked
61 EarlyWatch Reports in Solution Manager 4.0
EarlyWatch reports provide a simple way to confirm how your system is running and to catch problems; they are a "goldmine" for system recommendations. EarlyWatch reports have been available since Solution Manager 3.2 SP8. The more statistics cubes you have activated in BW, the better the usage information you will get. Depending on your version of SAP BW, you can activate additional statistics InfoCubes; also make sure you capture statistics at the query level (set it to 'all'). System issues can be hard to pin down without access to EarlyWatch reports; monitoring reports allows you to tune the system before users complain.
62 Information About a Pending 'Disaster'
This system is about to 'crash': it is growing by 400+ GB per month, the application server is 100% utilized, and the database server is at 92%. This customer needed to improve the hardware to get query performance to an acceptable level.
63 Inconsistent Patches May Be Caught
In this example, we see that the EarlyWatch report found many known issues at the Oracle level that should be fixed before the performance tuning effort started. Before the patches were applied, some queries took 24 to 26 minutes to execute; after the fixes, the same queries ran in less than two minutes.
64 More at: Performance Tuning Presentations, Tutorials, and Articles
- SAP SDN Community web page for Business Intelligence Performance Tuning: https://www.sdn.sap.com/irj/sdn/bi-performance-tuning
- ASUG407 - SAP BW Query Performance Tuning with Aggregates, by Ron Silberstein (requires SDN or Marketplace log-on; 54-minute movie): https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/media/uuid/d9fd84ad d9a5-ba726caa585d
- Large-scale testing of SAP BI Accelerator on a NetWeaver platform: https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b00e7bb5-3add-2a e8582df5c70f