Configuring Global Payroll for Optimal Performance


Configuring Global Payroll for Optimal Performance nalin.n.patel@abbey.com david.kurtz@go-faster.co.uk

Abbey key facts 1 Sixth largest bank by assets in the UK Founded in 1944 Currently have approximately 18m customers 741 branches across UK

Abbey key facts 2 Abbey's main offices are in London, Milton Keynes, Bradford, Glasgow and Belfast. We have around 26,000 people (full time equivalent) We have about 1.8 million shareholders Assets at 30 June 2004 - £171 billion Personal Financial Services trading profit before tax for 6 months to 30 June 2004 - £340 million

History of PeopleSoft at Abbey PeopleSoft HRMS acquired for recruitment in 1994 Implemented PeopleSoft HRMS in 1997 Recruitment, Personnel & Training Paylink used to send data from HRMS to payroll Workflow and self-service with v7.5 in 2000 JAVA HTML Clients PeopleSoft HRMS upgraded to 8 SP1 in 2001 Implemented PeopleSoft Payroll in August 2003 Project initiated to upgrade to HCM 8.8

Current Platform AppServer runs on SUN E4500 Database runs on SUN E10000 Both boxes are shared with other applications Tier 1 mirrored disks Oracle 9.2.0.4

Why PeopleSoft Payroll ? Integrated HRMS Common infrastructure Web enabled Automate administrative functions Manager Self-Service Absence and maternity input Employee Self-Service Overtime input On-line payslips Real-time data input Increase system availability

PeopleSoft Payroll Implementation Development commenced in January 2002 with an in-house IT project team. Project delays due to re-scoping and internal re-structure. Streamed payroll during parallel run tests. Went live with payroll and absence in August 2003: 30,000 staff and 7,000 pensioners. 12 streams introduced in February 2004. Introduced hash partitioning in July 2004 due to increased run times, with identify and calculate taking 2.5 hours. But we had to tune it.

Resources If you can’t hear me, say so now. Please feel free to ask questions as we go along. The presentation will be available from Customer Connection –> Conference Website, and from www.go-faster.co.uk

Who am I? Independent consultant: Abbey, DoD, Unilever, UBS… System performance tuning: Oracle databases, Unix, Tuxedo, PeopleSoft applications inc. Global Payroll. Book: www.psftdba.com

Who are You? Technical? DBA, Developer, HR Functional, familiar with PeopleSoft infrastructure. Non-Technical? HR Functional, HR/P Administrator, Project Manager.

Configuring Global Payroll Physical database considerations: parallel processing, increased concurrency, reduced contention, reduced I/O, permitting CPU usage – some Oracle-specific GP changes. Efficient GP ‘rules’: reduce the CPU consumption of the rules engine. Data migration. This presentation will discuss the physical implications of using an Oracle database. The principle of read consistency is fundamental to a great deal of what Oracle does behind the scenes. Read consistency means that the data returned by a query is consistent during the life of that query. If data being returned by a long-running query is updated AND committed after that query starts, but before it can be fetched, then the value returned will be the value as at the point in time when the query began. Physically, the before-update value is reconstructed from the rollback segment. All this is also done without locking the entire table, or the entire block of data. This reconstruction process is slow, and CPU as well as disk intensive: it is to be avoided. GP can process very significant quantities of data, so it is important that the SQL in the identify stage executes efficiently. On the GP side it is important that the rules are written efficiently, in order to minimise accesses to the PIN manager.

This is not theory! This has been done for real: UBS – 32,000 payees; Abbey – 36,000 payees*; DoD – 640,000 payees (benchmark); Unilever – 12,500 payees (weekly & monthly); 3 other installations in UK, France & Japan. And it works! The figures in this presentation come (with permission) from Abbey National.

Overview Payroll is calculated by a Cobol program, GPPDPRUN: a single non-threaded process. Four stages: Cancel, Identify, (Re-)Calculate, Finalise. The payroll calculation is performed by a Cobol process. It is a single process and can only execute on one CPU at any one time: if you have 10 CPUs, only one will ever be consumed by a single Cobol process. The Oracle shadow process will not be active at the same time as the Cobol. To utilise more than one CPU you need to run more than one Cobol in parallel; this is termed ‘streaming’. The Cancel phase essentially deletes rows from the result tables that were inserted by previous calculations. The Identify phase determines which employees have to be calculated; it populates two result tables, GP_PYE_PRC_STAT and GP_PYE_SEG_STAT. GP_PYE_SEG_STAT has one row per employee per period per process type (calc, absence). The Calculate phase is the CPU-intensive part, when the rule engine performs the calculation. Finalise closes off a pay calendar.

Three stages with different behaviours. Cancellation: monolithic SQL to delete results. Identify: populating temporary work tables; database-intensive SQL; set processing; ~10-20 minutes. Calculation: opening cursors, loading data into memory, evaluation of rules (Cobol only), batch insert of results into the database; Cobol (CPU) intensive; ~6500 segments / hour / stream (was 400). The identify stage populates some temporary tables and opens a number of cursors that feed data into the calculate phase. The calculate phase takes that information, caching some of it in memory as it goes. The results are inserted into the result tables in batches, by default every 500 rows (this is configurable). The identify phase is basically a series of SQL insert/update statements and is very database intensive; the work is done by the Oracle shadow process. Calculation was initially 400 segs/hr/stream; tuning got it to 6500 segs, mainly achieved by eliminating contention. You will hear me use this concept of segments per stream per hour. Each employee has a segment to calculate; if there are changes effective in a prior pay period there will be extra segments. We generally find that the average rate of calculation is very linear if we look at processing time per segment.

What is Streaming? Employees are split into groups defined by ranges of employee ID. Each group/range can be processed by a different instance/stream of GPPDPRUN, and the streams can then be run in parallel. This is vanilla PeopleSoft functionality, not a customisation: streaming is delivered by PeopleSoft, with streams defined as ranges of employee IDs.

Why is Streaming Necessary? GPPDPRUN is a standard Cobol program. It is a single-threaded process: one Cobol process can only run on one CPU at any one time. 36,000 payees at 2,700 payees/stream/hour; 97,000 segments at 7,350 segments/stream/hour; 49m – 1h11m with 12 streams; 13h12m if run in one stream. On a multi-processor server, streaming enables consumption of extra CPU.
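As a rough sanity check on those figures (using only the rates quoted above): 97,000 segments at 7,350 segments/stream/hour is 97,000 / 7,350 ≈ 13.2 hours in a single stream, matching the 13h12m quoted. Split evenly across 12 streams that would be 97,000 / (12 × 7,350) ≈ 1.1 hours; the observed 49m – 1h11m spread reflects the uneven distribution of segments across the streams.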

Calculation of Stream Definitions. The objective is roughly equal processing time for all streams. PS_GP_PYE_SEG_STAT indicates the work to be done by payroll, so calculate ranges with roughly equal numbers of rows in this table. A script using Oracle’s analytic functions directly populates PS_GP_STRM, as sketched below. Equal processing time does NOT correspond to equal volumes of result data.
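A minimal sketch of such a script, assuming 12 streams and that PS_GP_STRM carries the columns STRM_NUM, EMPLID_FROM and EMPLID_TO (check the record definition on your system):

-- Split the rows of PS_GP_PYE_SEG_STAT into 12 buckets of roughly
-- equal size with the NTILE analytic function, then take the first
-- and last EMPLID of each bucket as the stream range.  In practice
-- the boundaries must then be adjusted so that no single EMPLID
-- straddles two streams.
INSERT INTO ps_gp_strm (strm_num, emplid_from, emplid_to)
SELECT strm_num, MIN(emplid), MAX(emplid)
FROM  (SELECT emplid,
              NTILE(12) OVER (ORDER BY emplid) AS strm_num
       FROM   ps_gp_pye_seg_stat)
GROUP BY strm_num;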

GP Calculation Times

Employee Distribution Creep. As new employees are hired, their EMPLIDs are allocated into the same stream, and that stream starts to run longer. Effective execution time is the maximum execution time across all streams. You therefore need to periodically recalculate the stream ranges, and to reflect this in physical changes. There are a number of implications of using streams.

Employee Distribution Creep. Company merger/divestment. Pensioners: Abbey has 30,000 employees (avg 3.03 segments per employee) and 6,000 pensioners (1 segment per pensioner) across 12 streams. Employee IDs are allocated sequentially, so the earlier streams are richer in pensioners and the later streams richer in employees.

Database Contention: rollback contention, snapshot too old, insert contention, I/O volume (datafile I/O, redo/archive log activity). It is not only possible, but highly likely, that streams running in parallel will contend with one another; read consistency means Oracle must do extra work to reconstruct blocks changed by other streams.

Rollback Contention. The working storage tables are shared by all streams, with rows inserted/deleted during the run. Different streams never create locks that block each other, but they do update different rows in the same block during processing: 1 interested transaction per stream in many blocks. There is an additional rollback overhead of 16 bytes per row when two rows in the same block are updated, versus different blocks, for updates of ~<100 bytes/row.

Read Consistency. Oracle guarantees that data is consistent throughout the life of a query. If a block has been updated by another transaction since a long-running query started, it must be possible to reconstruct the state of that block at the time the query started, using the rollback segment. If that information cannot be found in the rollback segment, the long-running query fails with ORA-01555.

ORA-01555 Snapshot Too Old. Rollback segments are not extended for read consistency, and the additional rollback overhead can cause rollback segments to spin and wrap. The error message also describes the problem as a rollback segment being ‘too small’; in this case, simply extending the segments is the wrong response. There is also a CPU overhead to navigate the rollback segment header chain.
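For reference, the error appears in a form like this (the segment number and name here are illustrative):

ORA-01555: snapshot too old: rollback segment number 4 with name "_SYSSMU4$" too small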

Insert Contention. During the calculation phase, results are written to the result tables. A number of streams can simultaneously insert into the same result tables, which increases the chance that one block will contain rows relating to more than one stream. This in turn causes rollback problems during the cancel phase of the next calculation.

Another cause of ORA-1555. If a calendar is not being processed for the first time, the previous results are cancelled: the result tables are deleted with monolithic deletes from each table. If the streams start together, they tend to delete from the same table at the same time in each stream. A long-running delete is also a query for the purposes of read consistency: it is necessary to reconstruct a block as at the time the long-running delete started in order to delete a row from it. Reconstruction occurs during ‘consistent read’. The deletes are by primary key columns, so Oracle tends to look each row up via the index; thus the index reads must also be ‘consistent’.

Datafile and Log Physical I/O Activity. During the identify phase, data is shuffled from table to table. This generates datafile and redo log I/O. Rollback activity is also written to disk, and undo information is also written to the redo log. All the data placed in the temporary working tables by a stream is of no use to any other instance of the calculation process; it will be deleted by a future process. Dirty blocks are written to disk before the rollback segment wraps.

High Water Marks. The working storage tables tend to be used to drive processing, so the SQL tends to use full table scans. In Oracle, the High Water Mark is the highest block that has ever contained data, and full scans scan the table up to the high water mark. The temporary tables contain data for ALL streams, so every stream can end up scanning blocks containing data for all streams.

How to avoid inter-stream contention? Keep rows from different streams in different blocks: each block should contain rows for one and only one stream. This needs two Oracle features: Partitioning and Global Temporary Tables.

What is Partitioning? Logically, a partitioned table is still a single table. Physically, each partition is a separate table. In a partitioned table, the partition in which a row is placed is determined by the value of one or more columns. A local index is partitioned on the same logical basis as the table.

What is Partitioning? Typically used in DSS, but can also be effective in OLTP. (From Oracle documentation.)

What sort of Partitioning? Range: streams are defined in terms of ranges, and the queries specify a range of employees. This fits well with range partitioning and ensures partition elimination: range partition on EMPLID. Hash: a pseudo-random hash function (the same input always gets the same output) is good for single-value lookup, such as a single pay period (calendar group ID): hash partition on CAL_RUN_ID.

How should Range Partitioning be used in GP? The largest result tables are each range partitioned on EMPLID to match GP streaming, 1 stream : 1 partition. Thus each stream references one partition in each result table, and there is only 1 interested transaction per block. Indexes are ‘locally’ partitioned. Partitioning was really designed for DSS systems and is most efficient for large tables: GP_RSLT_ACUM, GP_RSLT_ERN_DED, GP_RSLT_PIN, GP_RSLT_PI_DATA. It is effective on smaller ones too: GP_PYE_PRC_STAT, GP_PYE_SEG_STAT.
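A minimal sketch of the idea (the column list is abbreviated and the boundary values are illustrative; in practice the boundaries come from PS_GP_STRM and the full record definition from Application Designer):

-- Range partition a result table on EMPLID so that each payroll
-- stream maps to exactly one partition; partition the index on the
-- same basis with the LOCAL keyword.
CREATE TABLE ps_gp_rslt_acum
( emplid        VARCHAR2(11) NOT NULL
, cal_run_id    VARCHAR2(18) NOT NULL
, pin_num       NUMBER       NOT NULL
, calc_rslt_val NUMBER
)
PARTITION BY RANGE (emplid)
( PARTITION strm01 VALUES LESS THAN ('003000')
, PARTITION strm02 VALUES LESS THAN ('006000')
, PARTITION strm12 VALUES LESS THAN (MAXVALUE)
);

CREATE INDEX gp_rslt_acum_str ON ps_gp_rslt_acum (emplid, cal_run_id)
LOCAL;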

How should Hash Partitioning be used in GP? Partition by CAL_RUN_ID, because the SQL contains CAL_RUN_ID = … It is only worthwhile on the very largest tables: GP_RSLT_ACUM, GP_RSLT_ERN_DED, GP_RSLT_PIN. Adjust CAL_RUN_IDs to control which partition is used and to balance the hash partition volumes.
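Combining the two techniques on one of the largest tables, the sketch above becomes composite range–hash (again with illustrative columns and boundaries):

-- Range partition by EMPLID for streaming; hash sub-partition by
-- CAL_RUN_ID so that queries on a single pay period visit only one
-- sub-partition per stream.
CREATE TABLE ps_gp_rslt_ern_ded
( emplid     VARCHAR2(11) NOT NULL
, cal_run_id VARCHAR2(18) NOT NULL
, pin_num    NUMBER       NOT NULL
)
PARTITION BY RANGE (emplid)
SUBPARTITION BY HASH (cal_run_id) SUBPARTITIONS 32
( PARTITION strm01 VALUES LESS THAN ('003000')
, PARTITION strm02 VALUES LESS THAN ('006000')
, PARTITION strm12 VALUES LESS THAN (MAXVALUE)
);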

Predicting Hash Values. Use the Oracle PL/SQL function: SELECT sys.dbms_utility.get_hash_value(CAL_RUN_ID, 1, 16) … The number of partitions should be a power of 2 (16, 32, 64; not 12, 53, 61, 106, 118) due to the mathematics of the hash function. Abbey use 32: they want to hold 18 months of data, 18 > 16, so 32.
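For example, to see which of 32 partitions a candidate calendar group ID would hash to (1 is the base value, 32 the number of partitions; the ID is one from the tables that follow):

SELECT sys.dbms_utility.get_hash_value('AN2004/10E', 1, 32) AS hashvaluex
FROM   dual;

Candidate suffixes can be tried until the function returns the number of the emptiest partition.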

Calendar Group ID Suffixes. The original calendar group ID AN2004/10 has hash value 15, but partition 15 is already used and 14 is the least full; AN2004/10E has hash value 14. Putting data into the hash partition with the least data improves performance. If you run only a monthly payroll, you could arrange for one month per partition; that would make archiving easier later!

Calendar Group ID Suffixes (i)

Original                    With suffixes
CAL_RUN_IDX  HASHVALUEX     CAL_RUN_IDX  HASHVALUEX
AN2004/01             8     AN2004/01             8
AN2004/02             7     AN2004/02             7
AN2004/03            15     AN2004/03            15
AN2004/04             6     AN2004/04             6
AN2004/05            10     AN2004/05B           20
AN2004/06            18     AN2004/06A           11
AN2004/07            31     AN2004/07B           20
AN2004/08             3     AN2004/08B           30
AN2004/09             4     AN2004/09A           21
AN2004/10            15     AN2004/10E           14
AN2004/11             5     AN2004/11B           16
AN2004/12            30     AN2004/12D           22

Calendar Group ID Suffixes (ii)

Original                    With suffixes
CAL_RUN_IDX  HASHVALUEX     CAL_RUN_IDX  HASHVALUEX
AN2004/01             8     AN2004/01BE           1
AN2004/02             7     AN2004/02AL           2
AN2004/03            15     AN2004/03AT           3
AN2004/04             6     AN2004/04AJ           4
AN2004/05            10     AN2004/05AF           5
AN2004/06            18     AN2004/06AC           6
AN2004/07            31     AN2004/07BC           7
AN2004/08             3     AN2004/08AI           8
AN2004/09             4     AN2004/09BJ           9
AN2004/10            15     AN2004/10AW          10
AN2004/11             5     AN2004/11BR          11
AN2004/12            30     AN2004/12EB          12

Here the suffixes have been chosen so that month n lands in partition n: one month per partition.

Partitioning on other platforms. DB2 does range partitioning, and the latest version will do multi-dimensional range partitioning, which could be more effective. Only Oracle does range partitioning with hash sub-partitioning.

Global Temporary Tables. An Oracle-specific feature that is appearing in other DB platforms. The definition is permanently recorded in the database catalogue, but the table is physically created on demand by the database in the temporary tablespace, for the duration of the session/transaction, and then dropped. Each session has its own copy of each referenced GT table, so each physical instance of each GT table only contains data for one stream. The working storage tables, PS_GP_%_WRK, are converted to GT tables.
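A minimal sketch of the conversion (two illustrative columns only; the real PS_GP_%_WRK records are much wider):

-- Re-create a working storage table as a Global Temporary Table.
-- ON COMMIT PRESERVE ROWS keeps the rows for the life of the
-- session, so the data survives the commits issued during a run
-- and disappears when the process disconnects.
CREATE GLOBAL TEMPORARY TABLE ps_gp_cancel_wrk
( emplid     VARCHAR2(11) NOT NULL
, cal_run_id VARCHAR2(18) NOT NULL
)
ON COMMIT PRESERVE ROWS;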

Global Temporary Tables. Advantages: not recoverable, therefore no redo/archive logging and only some undo information – improved performance and reduced rollback; no high water mark problems – a smaller object to scan; no permanent tablespace overhead. Disadvantages: does consume temporary tablespace, but only during payroll; can’t ANALYZE in Oracle 8i (there are workarounds, and you can in Oracle 9i); can hamper debugging. GT tables were new in Oracle 8.1 and had some bugs, but GP is not affected.

How many streams should be run? Cobol run on the database server: either the Cobol is active or the database is active, so run no more than one stream per CPU – perhaps CPUs - 1. Be careful not to starve the database of CPU; run the process scheduler at a lower OS priority. Cobol and database on different servers: the Cobol is active for 2/3 of the execution time, so run up to 1.5 streams per CPU on the Cobol server and up to 3 streams per CPU on the database server. (Timings measured with Hotsos Profiler.)

Other Streamable Processes. Application Engine: GP_PMT_PREP (OK in CH; bug in UK extensions), GP_GL_PREP, GPGB_PSLIP (bug fixed). Additional partitioned and GT tables are required.

Abbey Production Payroll Configuration. 2 nodes: a database node (12 CPUs, shared with other services) and an application server/process scheduler node (8 CPUs). 12 streams: 2/3 of 12 is 8, so all 8 application server node CPUs are active during the calculate phase (‘nice’ the Cobol processes, by nicing the process scheduler); 1/3 of 12 is 4, so 4 of the 12 DB CPUs are active. It is important to leave some free CPU for the database, else spins escalate to sleeps, generating latch contention.

Unilever Production Payroll Configuration. 1 node, 4 CPUs, dedicated to GP only. 4 streams, 1 per CPU, for the monthly payroll only (10,000 payees); the weekly payroll is not streamed.

UBS Production Payroll Configuration. 2 nodes: a database node and an application server/process scheduler node, 20 CPUs each, dedicated to HR & GP. 30 streams: 2/3 of 30 is 20, so all 20 application server node CPUs are active during the calculate phase; 1/3 of 30 is 10, so 10 of the 20 database CPUs are active.

UBS QA Payroll Configuration. 2 nodes: a database node and an application server/process scheduler node, 10 CPUs each. Still 30 streams, but only 15 run concurrently. A full production volume payroll takes < 1 hour.

GP Development Goals. How to create and test efficient rules that work without adversely affecting performance. How best to identify problems, particularly distinguishing a problem in system setup/data from a problem in a rule or underlying program. How to use the GP payroll debugging tools.

Efficient Rules. The process involves detailed functional and technical analysis of the definition of the payroll rules. While this is responsible for two thirds of the execution time, and so could produce the greatest saving, it will also require the greatest effort. The tuning of rules can be as simple as using literals instead of variable elements, and as complex as redesigning them. The process ideally starts during the design stage, when various implementation schemes are analysed, intermediate tests are performed and the most efficient scheme is chosen. Likewise, all aspects of Global Payroll must be considered, since creating rules to simplify calculation can adversely affect reporting or other online and batch areas, and vice versa. This is an on-going process that does not stop with the rule’s implementation, since changes in the size of the employee population, the number of records on the underlying tables, etc. can produce an unexpectedly substandard result for an initially efficient rule.

Efficient Rules. The rules can be broken down into two groups: PeopleSoft-delivered rules and customer developments. So far, most of the tuning effort has focused on the rules delivered by PeopleSoft. The choice of rules to be examined has been determined by running payroll for a small subset of employees with auditing enabled. From this we can determine how much time has been spent in each rule, and then examine the rules that take the greatest time. In principle it should be perfectly possible for PeopleSoft to tune the rules that they have delivered. However, different sets of data will exercise the rules to different degrees, so if PeopleSoft use their own data set they may choose to tune different rules.

Efficient Rules. Arrays; re-calculate?; store / don’t store; formulas; proration and count; historical rules; generation control versus conditional section.

Re-calculation = Yes or No? Be careful when using this functionality: each time you use an element with “re-calc” = Yes, the process will call the program to resolve it. Set this switch to ‘No’ unless you are sure that you want a recalculation.

Store / Do Not store? You only need to store elements if you want to use them in a historical rule, or if you need them for retro, reporting or auditing purposes. If you need certain supporting elements for reporting or audit, it might be better to create a Write Array that writes a row with all of the necessary results.

Store if zero? If you decided to store an element, do you want to store it if its value is zero or blank? Definitely do not store accumulators if they are zero.

Formulas: 1. Use literals like 'Y' or 'N' instead of variables. For 56 employees and 10 formulas, the difference in processing with variables vs. literals was close to 700%. 2. Use Exit in nested IFs. 3. When you have multiple conditions, put the most ‘popular’ at the top, followed by the second most ‘popular’, etc. 4. Use Min/Max.

Arrays: the most important thing is to reduce the number of times you call the lookup formula.

Proration and Count: when you need multiple proration rules, such as calendar days, workdays and work hours for the same slice periods, it is better to have one count element ‘resolve’ all the proration rules. The goal is to minimise the reading of work schedules.

Generation control versus conditional section: if a conditional formula resolves to 0, all elements in that section are skipped. That means that some Positive Input records and adjustments may remain unprocessed. However, it is much better for performance to use a conditional section.

Efficient Rules. Keyed by Employee – 1 select, multiple fetches, small result set to search. User Defined – 1 select, multiple fetches, all searches in memory. User Defined with the Reload Option – multiple selects, multiple fetches, small result set to search.

Efficient Rules- Keys

Efficient Rules – Fields Retrieved

Efficient Rules – Processing Formulae

Efficient Rules - Formula

Efficient Rules - Keys Now, some keys have been added to the query

Efficient Rules – Fields Retrieved

Efficient Rules – Processing Formulae

Efficient Rules

Efficient Rules

Migration/Customization: PI v. Array. PI can be used during identification; PI has special considerations during eligibility checking; PI allows easy override of components on the element definition such as Unit, Rate, Percent or Base; the Array cannot handle multiple instances of an earning/deduction.

PI vs. Array approach: using Arrays to drive payroll calculations is a very complicated process, and some functionality that is available to PI cannot be duplicated any other way, including by using arrays. The following are some of the major advantages of using PI (in no particular order):

1) PI can be used during identification. Since arrays are available in the calculation step only, the customer would have to come up with a User Exit to add the appropriate payees to the process. The User Exit must be smart enough to do the Cancel logic as well (or another User Exit must be created) in order to cancel employees out of calculation if data is removed from the table. PeopleSoft does not encourage the use of user exits; they should be considered a last resort.

2) PI has special considerations during eligibility checking. It supersedes any Payee Override information. If there is a PI for an element that is not in the eligibility group but is in the process list, it will be processed if the PI override switch on the Pay Entity is on. This functionality is completely outside of Array abilities.

3) PI allows easy override of components on the element definition, such as Unit, Rate, Percent or Base. In the absence of PI, the default process ensues. In other words, a unit can be defined as a formula, amount, bracket, etc. If there is no PI, the process will resolve that formula, bracket, etc.; if there is a PI, it overrides the calculation for that payee and calendar. Arrays can only populate variables, so either an earning/deduction component that is fed by the array must be defined as a variable, or the whole logic must be duplicated in formulae. Most likely such formulae would have to be created for each earning/deduction component: the number of elements * the number of components.

4) The Array cannot handle multiple instances of an earning/deduction. The table that is read by the Array must hold the sum of all instances for a payee/element/period.

5) There is no way to override the prorate option on an earning/deduction definition using Arrays.

6) PI also overrides Generation Control. So if generation control says not to process an earning or deduction but a PI exists, the earning or deduction will be processed if the PI override checkbox on the Pay Entity is on. This cannot be done with an Array.

7) PI is automatically directed to the proper segment/slice based upon the begin/end PI dates. A special, non-trivial process would have to be devised to enable use of Arrays during a segmentation/slicing event; it is hard even to imagine what such a process might look like.

8) Using an Array precludes the customer from using an element on multiple calendars without an additional non-GP procedure (SQR?) to mark processed instances.

9) The RATE AS OF DATE would have to be somehow controlled for every Rate Code element, since it may be different for various earnings and deductions. PI provides an easy way of doing so per each instance of an element.

10) PI allows the customer to override rate code, rounding rule, currency, etc. for each instance of an earning/deduction. There is no easy way to duplicate this using Arrays.

11) Provision must be made to resolve a conflict between a PI instance generated from the Absence calculation and data read by an Array for the same element: which one wins? It can't be both. This is not a problem when using the PI approach.

12) GP automatically keeps track of all the changes to PI over time. This not only allows for a proper calculation, but makes researching the changes a snap. The customer would have to create a process to duplicate this functionality.

13) Using PI allows a GL cost centre to be overridden for a specific instance of an element. During the GL calculation, the process looks through the PI tables to get these overrides. This functionality cannot be duplicated by the Array approach; the customer would have to create a process to duplicate it.

14) The same concept also applies to User Keys on Accumulators. If PI SOVRs are used, each instance of PI will update the appropriate accumulator instance (based on User Keys).

15) The issue of Retro, Segmentation or Iterative triggers is the same for either approach, but for PI it can be solved with the use of a Component Interface.

Debugging Tools. Audit Trace: Trace All or Trace Errors – a large number of records, with potential rollback segment size problems. View on-line, or query with SQL. Hotsos Profiler.

Debugging Tools

Debugging Tools

Debugging Tools
-- Query the audit table for one employee, calendar group and element:
select * from sysadm.ps_gp_audit_tbl
where  emplid = '884324'
and    cal_run_id = 'ErrMigr'
and    pin_num = 40811
order by audit_sort_key, audit_seq_num;

-- Or, to drill into a single resolution chain:
select * from sysadm.ps_gp_audit_tbl
where  emplid = '884324'
and    cal_run_id = 'ErrMigr'
and    pin_num = 40811
and    audit_sort_key = 229
order by audit_seq_num;

Debugging Tools
-- Show the resolution chains that include a named element:
select emplid, audit_sort_key as key
,      audit_seq_num as seq, pin_chain_rslt_num as rslt_num
,      b.pin_nm, a.pin_num
,      pin_status_ind as status, c.pin_nm
,      a.pin_parent_num as parent, a.fld_fmt as fmt
,      calc_rslt_val as num, date_pin_val as dateval
,      chr_pin_val as chr, pin_val_num as pin
from   ps_gp_audit_tbl a, ps_gp_pin b, ps_gp_pin c
where  cal_run_id = 'U_22_CI0101'
and    (emplid, audit_sort_key) in
       (select emplid, audit_sort_key
        from   ps_gp_audit_tbl
        where  pin_num = (select pin_num from ps_gp_pin
                          where  pin_nm = 'CH_EP_CHK_1002FF'))
and    a.pin_num = b.pin_num
and    a.pin_parent_num = c.pin_num
order by emplid, audit_sort_key, audit_seq_num;

Debugging Tools

Debugging Tools: columns of PS_GP_AUDIT_TBL
AUDIT_SORT_KEY, AUDIT_SEQ_NUM, INSTANCE_NUM, SLICE_BGN_DT, SLICE_END_DT, PIN_CHAIN_LVL_NUM, PIN_CHAIN_RSLT_NUM, PIN_NUM, PIN_STATUS_IND, PIN_PARENT_NUM, FLD_FMT, OLD_VALUE_IND, CALC_RSLT_VAL, CALC_ADJ_VAL, CALC_RAW_VAL, DATE_PIN_VAL, CHR_PIN_VAL, PIN_VAL_NUM, DIFF_SECONDS, BAD_TRACE_IND, SUM_INSTANCE_IND

Acknowledgements The efficient rules section was written by Gene Pirogovsky, Omnia Solutions Inc. Gene.pirogovsky@omniasolutions.com www.omnisolutions.com

Conclusion Use of Partitioning and Global Temporary Tables almost completely eliminates inter-stream contention – almost 100% scalability, until the I/O subsystem becomes the bottleneck. This permits use of streaming to utilise all available CPUs. GP will always be a CPU-bound process. Rule tuning will reduce CPU overhead; it is an on-going process.

Questions

Configuring Global Payroll for Optimal Performance nalin.n.patel@abbey.com david.kurtz@go-faster.co.uk