
1 Agenda: 1 Overview, 2 Software Maintenance, 3 Sizing & Platform, 4 Interface, 5 APO & BW, 6 CIF, 7 liveCache, 8 Backup & Recovery, 9 Optimizer, 10 Performance, 11 Others

2 APO Core Interface (CIF)
[Diagram: SAP R/3 application components (LO, SD, HR) connect to SAP APO through the CIF (Core Interface); non-R/3 systems connect to SAP APO through BAPIs.]

3 Master Data Objects of the CIF
The APO Core Interface is a real-time interface. From the much larger dataset in R/3, only the data objects needed for the particular planning and optimization processes in APO are transferred. Both the initial data transfer (initial transfer) and the subsequent transfer of data changes into APO take place via the APO Core Interface. The master data objects in APO are not identical to those in R/3; during the master data transfer, the relevant R/3 master data is mapped onto the corresponding planning master data in APO. The R/3 System remains the dominant system for the master data. Only specific APO master data that does not exist in R/3 is maintained directly in APO.

4 Transaction Data Objects of the CIF
The initial data transfer of transaction data also takes place via the APO Core Interface. For transaction data objects that belong to an active integration model, the change transfer between R/3 and APO then usually follows automatically: new transaction data or changes to existing transaction data are transferred without further action. (For transaction data of the APO component SNP, you can define in Customizing whether publication is real-time or periodic.) The transaction data objects in APO are not identical to those in R/3; all transaction data from R/3 is transferred to APO as orders that can be distinguished by ATP category. In the standard system, planned independent requirements are only transferred from R/3 into APO. The retransfer of planned independent requirements to R/3, which you may require if you perform only Demand Planning in APO, must be triggered from Demand Planning in APO with a specific transaction. For planned orders and purchase requisitions, you can specify in APO that they are only transferred from APO to the R/3 System once the conversion indicator has been set.

5 Data Consistency
Tools: COMPARE/RECONCILE report /SAPAPO/CIF_DELTAREPORT3 (external consistency), transaction /SAPAPO/OM17 (internal consistency), report RAPOKZFX (material master vs. integration models).
In an APO System, the APO database and the liveCache must have a consistent status (internal data consistency). If the APO System is connected to an R/3 or other OLTP system that exchanges data with it, the connected OLTP system, the APO database, and the liveCache must have a consistent status (external data consistency). Data consistency must be guaranteed both during normal operation and after a recovery of one of the components. The following features are available to detect and correct inconsistent data:
Inconsistencies between APO and R/3 (external data consistency): report /SAPAPO/CIF_DELTAREPORT3 (transaction /SAPAPO/CCR). The compare/reconcile function helps to identify and correct data inconsistencies between APO and R/3. The compare function returns a list of inconsistent objects (orders, stocks, and so on); if required, the reconcile function can then be executed on this result list. Inconsistencies between R/3 and APO can be resolved by manual correction or by replanning, which triggers a resend of the incorrect orders to the R/3 system.
Inconsistencies between APO and liveCache (internal data consistency): report /SAPAPO/OM17. This report compares the APO database and the liveCache. If the liveCache has crashed and a recovery was performed (to the last checkpoint), the data consistency between the APO database and the liveCache must be checked. If inconsistencies exist, eliminate them with transaction /SAPAPO/OM17. However, this generally causes data loss, because inconsistent objects are deleted.
Inconsistencies between the material master and data in integration models: these concern the APOKZ flag in the material master versus the active integration models and are not very likely. To detect and correct such inconsistencies, use report RAPOKZFX. Refer to the corresponding SAPNet Note.
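A common way to run the compare/reconcile check regularly is to schedule the delta report in the background. The following ABAP wrapper is only a minimal sketch, assuming a saved selection variant named ZCIF_DELTA (hypothetical) that restricts the check to the relevant integration models:

REPORT zcif_delta_daily.

* Minimal sketch: run the CIF delta report with a saved variant so the
* check can be scheduled as a periodic background job (e.g. via SM36).
* The variant name ZCIF_DELTA is an assumption and must exist.
SUBMIT /sapapo/cif_deltareport3
  USING SELECTION-SET 'ZCIF_DELTA'
  AND RETURN.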

6 Core Interface with Help of qRFC
[Diagram: outbound queues (Queue 1 to Queue 5) in the OLTP system hold LUWs that are transferred via tRFC/qRFC to inbound queues in the target system.]
With qRFC or tRFC, data is first buffered in the OLTP system and then transferred to the target APO system asynchronously, using either background or dialog work processes. The OLTP system does not have to wait for the update to be completed in the target APO system, so adding CIF to the system has minimal impact on OLTP functionality. Often there is one (named) queue (data channel) for every logical unit of business objects, such as material changes, so objects that logically belong together are in the same queue and are processed serially, one after the other. If there is a problem in transferring or processing the queue, the whole queue is stopped. In other cases, due to the complex logic of the applications that create the CIF queues, there may be interdependencies between different queues, so objects that logically belong together can be distributed over more than one queue. A collection of queues with interdependencies between them is called a thread of queues.
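For illustration, the outbound side of this mechanism follows the standard qRFC programming pattern: assign a queue name, make the RFC call in background task mode, and commit. This is a minimal sketch only; the queue name CFMAT_4711, the destination APOCLNT800 (taken from later slides), and the function module ZRFC_SEND_MATERIAL are placeholders, since the real CIF function modules are application-specific:

REPORT zcif_qrfc_demo.

DATA lv_qname TYPE trfcqout-qname VALUE 'CFMAT_4711'.  " hypothetical CIF queue name

* Turn the following tRFC call into a qRFC LUW by assigning a queue name.
CALL FUNCTION 'TRFC_SET_QUEUE_NAME'
  EXPORTING
    qname = lv_qname.

* Placeholder for the actual CIF transfer function module.
CALL FUNCTION 'ZRFC_SEND_MATERIAL'
  IN BACKGROUND TASK
  DESTINATION 'APOCLNT800'.

* The LUW is only written to the outbound queue (table TRFCQOUT) on commit.
COMMIT WORK.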

7 Core Interface with Help of qRFC
[Diagram: outbound queues with LUWs; an error in one queue blocks the transfer of dependent queues to the inbound side.]
To maintain transactional logic between related CIF queues, an error in one queue can potentially block a large number of related queues. For example, if an object in Queues 3 and 4 depends on an object in Queue 1 and can only be processed after it, an error in processing Queue 1 also blocks Queues 3 and 4. An error in one CIF queue can block the related queues in the same thread, but not CIF queues in other threads. In general, multiple threads of stacked CIF queues can arise within the data channels. Because return parameters cannot be delivered back to the OLTP system for qRFC activities, potential error messages cannot be returned to OLTP directly. Even if no CIF-related error is shown in the qRFC monitor on the OLTP side, errors may still be recorded in the application log on the APO side.

8 R/3 Plug-In & qRFC Version
CHECK: Is the R/3 Plug-In installed? Is the R/3 Plug-In release compatible with the current R/3 release? Is the R/3 Plug-In up to date? Is the qRFC version up to date? Implement the most recent qRFC version and R/3 Plug-In release (see table AVERS; SAPNet alias: R3-PLUG-IN).
Recommendation: Check whether an R/3 Plug-In is installed and up to date. It is essential that the customer use the latest Plug-In release.
Background: Two Plug-In releases are shipped each year, each with a version for every supported R/3 release. The field ADDONREL (the current R/3 Plug-In release) therefore has the format <year of release>_<'1' or '2'>_<related R/3 release>, for example 2000_1_40B, where 2000_1 corresponds to the release and 40B to R/3 Release 4.0B. Only the latest Plug-In release is supported by Support Packages. Refer to SAPNet; for the qRFC version, refer to the corresponding SAP Notes.
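As a rough illustration of this check, the installed release can be read from the table named on the slide. The sketch below assumes, per the slide, that table AVERS carries the Plug-In release in field ADDONREL; verify the table structure in SE11 before relying on it:

REPORT zcheck_plugin_release.

* Sketch of the Plug-In release check. Assumption (from the slide):
* table AVERS holds the installed R/3 Plug-In release in field ADDONREL
* in the format <year>_<1|2>_<R/3 release>, e.g. 2000_1_40B.
DATA lv_release(20) TYPE c.

SELECT SINGLE addonrel FROM avers INTO lv_release.
IF sy-subrc = 0.
  WRITE: / 'Installed R/3 Plug-In release:', lv_release.
ELSE.
  WRITE: / 'No Plug-In release found - check whether the R/3 Plug-In is installed.'.
ENDIF.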

9 CIF Data Channel Monitoring
qRFC monitor: display transfer queues, display waiting qRFC calls, restart waiting calls. Causes of qRFC problems: communication errors (network problems, no dialog work processes available, missing RFC destination entry) and application errors (program errors, non-posting of data to APO, locking of objects, missing master data).
A qRFC monitor is available in both the OLTP and the APO system. It displays all transfer channels (queues) for all target systems, including waiting qRFC calls, and these calls can be restarted from the monitor. Use the qRFC monitor to track the errors connected with data transfer through the Core Interface, including program errors (bugs), non-posting of data to the target system, locking of objects, missing master data for transaction data, and communication or network problems. CIF queues are client-dependent. Application problems must be solved by a system administrator in cooperation with an application manager.

10 Outbound Queue Overview
Transaction SMQ1. In both OLTP and APO, you can start the qRFC monitor for outbound queues with transaction SMQ1 (report RSTRFCM1). Alternatively, in the OLTP system you can call transaction CFQ1, which only shows queues within the current client. The qRFC monitor presents an overview of the queues that are not empty, the number of LUWs in each one, and the target system. For more detailed information (status, date/time of the first and last LUW written into the queue, and possibly the name of a queue that must be processed first), choose a queue and select Display selection. In the next screen, double-clicking the queue displays the individual calls. Queue names are generated by the application programs. The qRFC monitor only displays waiting calls. Because of message serialization, if an error occurs, the topmost entry in the queue blocks all other entries. For any qRFC error, a detailed error log is saved in the application log of the system. To find this entry in the application log, copy the value in field TID (transaction ID) for the call with the qRFC error, enter this value in the field External ID on the selection screen of transaction /SAPAPO/C3 (APO application log) or CFG1 (OLTP application log), select a time period, and execute. The next screen displays all messages related to the erroneous qRFC call. Note that an error can appear in the APO application log without appearing in the qRFC monitor. In OLTP, you can also monitor CIF channels with transaction CFP2 (report RCPQUEUE): choose Logistics >> Central functions >> Supply Chain Planning Interface >> Core Interface Advanced Planner and Optimizer >> Integration Model >> Change Transfer >> Transaction Data.
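A custom overview along the lines of SMQ1 can be built by reading the qRFC outbound queue table directly. The sketch below assumes the standard outbound queue table TRFCQOUT with fields QNAME and DEST (check the exact field names in SE11) and uses the CF* naming convention listed later in this presentation:

REPORT zcif_queue_overview.

TYPES: BEGIN OF ty_queue,
         qname TYPE trfcqout-qname,
         dest  TYPE trfcqout-dest,
         luws  TYPE i,
       END OF ty_queue.

DATA: lt_queues TYPE STANDARD TABLE OF ty_queue,
      ls_queue  TYPE ty_queue.

* Count waiting LUWs per CIF queue and destination.
SELECT qname dest COUNT( * )
  FROM trfcqout
  INTO TABLE lt_queues
  WHERE qname LIKE 'CF%'
  GROUP BY qname dest.

LOOP AT lt_queues INTO ls_queue.
  WRITE: / ls_queue-qname, ls_queue-dest, ls_queue-luws.
ENDLOOP.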

11 Inbound Queue Overview
Transaction SMQ2. In the standard APO CIF delivery, only outbound queues are used. In both OLTP and APO, the qRFC monitor for inbound queues is started with transaction SMQ2 (report RSTRFCM3); however, inbound queues are not used in the default implementation of APO CIF. As a consequence, poor qRFC load distribution can cause heavy load on the system during mass transactions such as planning, which can result in a capacity overload and system downtime. To implement the use of inbound queues, apply the corresponding SAP Notes. Caution: this is an advanced development. As of the corresponding Plug-In release and APO Support Package 14, inbound queues can be activated in Customizing.

12 Common Statuses of Outbound Queues
The most common statuses displayed in SMQ1 are:
READY: The queue is ready for transmission. This should only be a temporary status. If a queue was locked manually and then unlocked without being activated, it stays in READY until it is activated explicitly.
RUNNING: The first LUW of the queue is currently being processed. If a queue hangs in this status for more than 30 minutes, activate the queue again; the status can mean that the work process that sent this LUW has terminated. Activating a queue in this status can cause a LUW to be executed several times, so always wait at least 30 minutes before activating the queue again.
EXECUTED: The first LUW of the queue has been processed. The system waits for a qRFC-internal confirmation from the target system before further LUWs are processed. If a queue hangs in this status for more than 30 minutes, the work process responsible for sending this LUW may have terminated. However, the current LUW has been executed successfully and you can activate the queue; the qRFC Manager automatically deletes the executed LUW from the queue and sends the next LUW.
STOP: The queue was locked explicitly; qRFC never locks a queue during its own processing. Unlock and activate the queue using SMQ1.
SYSLOAD: At the time of the qRFC call, no dialog work processes were free in the sending system for sending the LUW asynchronously. The system automatically retries sending the queue object by creating and scheduling a batch job; the number and frequency of retries depend on the chosen tRFC options. Check the number of dialog work processes that can be used by tRFC/qRFC: it is determined for each application server by the number of existing dialog work processes and by the profile parameters rdisp/rfc* described in the corresponding SAP Note. Also check the gateway parameters; see the related SAP Notes for details.

13 Common Statuses of Outbound Queues
SYSFAIL: A serious error occurred in the target system while the first LUW of the queue was being executed, and execution was interrupted. When you double-click the status field in SMQ1, the system displays an error text; you can find additional information in the corresponding short dump (ST22) or system log (SM21) in the target system. For an explanation of the error text 'Connection closed' and a list of situations that can prompt it, see the corresponding SAP Note.
CPICERR: A network or communication error occurred during transmission or processing of the first LUW in the target system. Depending on the definition in SM59 for the destination used, a batch job is scheduled to send the queue object later. Double-click the status field in SMQ1 to display the corresponding error text; for more information, see the syslog (SM21) and the trace files dev_rd and dev_rfc*. In general, check the network and the user authorizations in the target system.
WAITSTOP: The first LUW of this queue has dependencies on other queues, and at least one of these queues is currently locked.
WAITING: The first LUW of this queue has dependencies on other queues, and at least one of these queues contains other LUWs with higher priority. If one queue has status SYSFAIL, all the queues that depend on it get status WAITING.
For a complete list of statuses of both outbound and inbound queues, see the corresponding SAP Note.
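Condensed into decision logic, the recommendations from these two slides look roughly as follows. This is only an illustrative sketch for a custom monitoring report; lv_status stands for the status value read from the queue display:

REPORT zcif_status_actions.

DATA lv_status(8) TYPE c.

CASE lv_status.
  WHEN 'READY'.
    " Temporary status; if it persists after a manual unlock, activate the queue explicitly.
  WHEN 'RUNNING'.
    " Wait at least 30 minutes before reactivating, otherwise a LUW may be executed twice.
  WHEN 'EXECUTED'.
    " First LUW already executed successfully; the queue can be activated.
  WHEN 'STOP'.
    " Explicit lock; unlock and activate the queue in SMQ1.
  WHEN 'SYSLOAD'.
    " No free dialog work processes; check rdisp/rfc* parameters and gateway settings.
  WHEN 'SYSFAIL'.
    " Serious error in the target system; check ST22 / SM21 there before retrying.
  WHEN 'CPICERR'.
    " Network or communication error; check the SM59 destination, SM21 and dev_rfc* traces.
  WHEN 'WAITSTOP' OR 'WAITING'.
    " Dependencies on other queues; resolve the blocking queue first.
ENDCASE.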

14 SMQ1: Outbound Queue Management
In transaction SMQ1, the outbound queue overview lets you perform the following actions on selected queues (in addition to Choose and Delete, from the buttons or from the Edit menu):
Activate: activates the queue. Once the cause of an error state has been removed, use this action to start the qRFC Manager; the LUWs in the queue are sent immediately.
Lock: locks the queue. A stop mark is set at the end of the existing queue; all further LUWs are written behind the stop mark, and all previously recorded LUWs are processed up to the stop mark.
Unlock: unlocks the queue. The first stop mark in the queue is removed (if more than one stop mark is set, they are removed one by one from the top down). The qRFC Manager is started immediately and executes the LUWs until the next stop mark, if one is set, or otherwise until the end of the queue.
Lock immediately: locks the queue immediately. The stop mark is set at the very first line in the queue, so the complete queue, including the LUWs already recorded, is stopped.
Unlock without activation: unlocks the queue without activating it. The stop mark is removed without starting the qRFC Manager.

15 Application Logging
Display the application log in the OLTP system: transaction CFG1
Display the application log in the APO system: transaction /SAPAPO/C3
Maintain the logging level: CFC2 in OLTP, /SAPAPO/C4 in APO

16 Clearing Application Log
Delete entries in the application log regularly; report SBAL_DELETE is available for both R/3 and APO. Schedule the following as background jobs: in the OLTP system, report RDELALOG; in the APO system, report /SAPAPO/RDELLOG. Alternatively, in both OLTP and APO you can schedule report SBAL_DELETE, which gives you more control over what is deleted. To delete entries manually: in OLTP, choose Logistics >> Central functions >> Supply Chain Planning Interface >> Core Interface Advanced Planner and Optimizer >> Monitoring >> Application Log >> Delete Entries (transaction CFGD); in APO, choose Tools >> APO Administration >> Integration >> Monitor >> Application Log >> Delete Entries (transaction /SAPAPO/C6).

17 CIF Application Log
Log data is stored in table BALHDR (log header) and table BALDAT (log messages). CHECK: Are the log tables full? How many log entries are in table BALHDR? How many of them are CIF log entries? How many log entries are in table BALDAT?
The application log is a tool for collecting messages, exceptions, and errors in a log and displaying them. An application log comprises a log header and a set of messages; the log header contains general data (type, created by/on, and so on). This information is stored in database tables (BALHDR: header, BALDAT: messages). For the data transfer from R/3 to APO through the CIF, application logs are recorded if logging is switched on. You must delete logs regularly to avoid an overflow of the application log database tables; for that reason the number of entries is checked.

18 CIF Application Log: Tables BALHDR and BALDAT
The number of entries is checked using SE16. As of R/3 releases above 4.6B, table BALM is no longer used; the messages are stored in table BALDAT instead, which reduces the storage volume by a factor of 5 to 10. A GREEN rating is set if the total number of logs in table BALHDR (the slide shows a threshold of 500,000), the number of CIF logs, and (on older releases) the number of entries in table BALM are all below their thresholds; otherwise a YELLOW rating is set.
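A quick version of this check can be run with a small report instead of SE16. This is only a sketch; the 500,000 threshold is the one shown on the slide, and filtering CIF logs via the OBJECT field of BALHDR is an assumption that should be verified (for example in CFG1 or SLG0):

REPORT zcif_check_applog_size.

CONSTANTS c_max_logs TYPE i VALUE 500000.

DATA: lv_total TYPE i,
      lv_cif   TYPE i,
      lv_msgs  TYPE i.

SELECT COUNT( * ) FROM balhdr INTO lv_total.
* Assumption: CIF logs are identified by the log object 'CIF'.
SELECT COUNT( * ) FROM balhdr INTO lv_cif WHERE object = 'CIF'.
SELECT COUNT( * ) FROM baldat INTO lv_msgs.

WRITE: / 'Log headers (BALHDR):  ', lv_total,
       / 'CIF log headers:       ', lv_cif,
       / 'Log messages (BALDAT): ', lv_msgs.

IF lv_total > c_max_logs.
  WRITE: / 'Schedule log deletion (RDELALOG / SBAL_DELETE).'.
ENDIF.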

19 Note 195157 Application log - Deletion of logs
The CIF application log tables BALHDR and BALDAT can overflow. To prevent the database from overflowing, delete the records at regular intervals and, if required, schedule an appropriate background job. Set NO LOGGING in the user parameters.
Recommendation (Note 195157, Application log - Deletion of logs): Use report RDELALOG to delete records from the application log in R/3. For releases 3.1I to 4.5B, use the CIF area menu Monitoring -> Application Log -> Delete Entries. From release 4.6B on, choose Logistics -> Central Functions -> Supply Chain Planning Interface -> Core Interface Advanced Planner and Optimizer -> Monitoring -> Application Log -> Delete Entries in the SAP Easy Access menu.
Background: The application log is used by various applications. You must delete logs at regular intervals to avoid an overflow of the application log database tables. A log can only be deleted once it has reached its expiry date. Any kind of logging slows down performance significantly, so check the user settings in your R/3 System: call transaction CFC2 and use the recommended standard setting Logging = 'No Logging' for all users, unless you have important reasons for other settings.

20 Handling Change Pointers
Change pointers are stored in tables BDCP and BDCPS (view BDCPV). The global switch is checked in transaction BD61, and the CIF message types in transaction BD50: CIFCUS (customer), CIFMAT (material master), CIFSRC (sources of supply), CIFVEN (vendor), CIFPPM (PPM), CIFRES (resource), CIFPLT (plant, not used). The check looks at the total number of processed change pointers, the number of processed CIF change pointers, and the number of obsolete change pointers (older than two weeks), both in total and for CIF.
This check gives the customer a general recommendation on how to avoid problems with change pointers. The number of CIF change pointers as well as the total number of change pointers (processed and obsolete) is determined using SE16.
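The counts behind this check can also be collected with a short report instead of SE16. The sketch below reads the change pointer view BDCPV; the field names MESTYPE and PROCESS are assumptions based on the standard change pointer tables and should be verified in SE11, and "obsolete" is interpreted as processed pointers older than two weeks:

REPORT zcif_check_change_pointers.

DATA: lv_total_proc TYPE i,
      lv_cif_proc   TYPE i,
      lv_cutoff     TYPE d.

* Pointers created before this date count as obsolete (two weeks).
lv_cutoff = sy-datum - 14.

SELECT COUNT( * ) FROM bdcpv INTO lv_total_proc
  WHERE process = 'X'.

SELECT COUNT( * ) FROM bdcpv INTO lv_cif_proc
  WHERE process = 'X'
    AND mestype LIKE 'CIF%'.

WRITE: / 'Processed change pointers (all): ', lv_total_proc,
       / 'Processed change pointers (CIF*):', lv_cif_proc,
       / 'Delete processed pointers created before', lv_cutoff,
         'with report RBDCPCLR (transaction BD22).'.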

21 Handling Change Pointers
Transfer the changes to APO regularly (transaction CFP1). If you do not require these changes, deactivate the update of the change pointers for the corresponding message types (transaction BD50). Avoid a rapidly growing view BDCPV: check the size of tables BDCP and BDCPS frequently, delete change pointers that are no longer needed by scheduling report RBDCPCLR (transaction BD22), and reorganize the indexes on tables BDCP and BDCPS (Oracle only).
Recommendation: A large number of change pointers (obsolete and/or processed) can slow down the data transfer to the APO System. Processed change pointers must be deleted periodically to free up space in the corresponding tables; if obsolete and processed change pointers are not deleted, performance can degrade significantly. The changes must be transferred to the APO system regularly. If you do not require these changes, deactivate the update of the change pointers for the corresponding message types. CIF* change pointers are set to 'processed' when transaction CFP1 is executed. The general recommendation contains a procedure for avoiding problems with the view BDCPV (including long access times on Oracle database systems).

22 OLTP: CIF Data Channel Control
Transaction CFP2 or report RCPQUEUE: start and stop data channels without losing data changes; monitor and display details. In an OLTP system, you can start and stop CIF data channels using transaction CFP2: choose Logistics >> Central functions >> Supply Chain Planning Interface >> Core Interface Advanced Planner and Optimizer >> Integration Model >> Change Transfer, select the symbol in the last column, and choose Execute. Even if a data channel is stopped, the corresponding data changes are saved for later processing. Alternatively, you can start and stop CIF queues by activating and deactivating the integration model (transaction CFM2). However, while the model is deactivated, no incremental transfer is performed and the data changes are not stored in the CIF queues.

23 Assignment of Objects to Queue Names
Object: Queue name
Stock: CFSTK...
Sales order: CFSLS...
Reservation: CFRSV...
Purchase order: CFPO...
Planned independent requirements: CFPIR...
Planned / production order: CFPLO...
Materials: CFMAT...
Confirmation: CFCNF...
Delivery: CFDL...
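Purely as an illustration of this naming convention, the queue prefix can be used to classify a queue name, for example in a custom monitoring list. The sample queue name below is made up:

REPORT zcif_queue_prefix.

DATA: lv_qname(24)  TYPE c VALUE 'CFSLS0000004711',   " hypothetical queue name
      lv_object(40) TYPE c.

IF lv_qname CP 'CFSTK*'.
  lv_object = 'Stock'.
ELSEIF lv_qname CP 'CFSLS*'.
  lv_object = 'Sales order'.
ELSEIF lv_qname CP 'CFRSV*'.
  lv_object = 'Reservation'.
ELSEIF lv_qname CP 'CFPIR*'.
  lv_object = 'Planned independent requirements'.
ELSEIF lv_qname CP 'CFPLO*'.
  lv_object = 'Planned / production order'.
ELSEIF lv_qname CP 'CFPO*'.
  lv_object = 'Purchase order'.
ELSEIF lv_qname CP 'CFMAT*'.
  lv_object = 'Material'.
ELSEIF lv_qname CP 'CFCNF*'.
  lv_object = 'Confirmation'.
ELSEIF lv_qname CP 'CFDL*'.
  lv_object = 'Delivery'.
ELSE.
  lv_object = 'Unknown / non-CIF queue'.
ENDIF.

WRITE: / lv_qname, '->', lv_object.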

24 Business System Groups - Motivation
Define business system group(s) and assign logical systems to business system group(s).
[Diagram: material number A is a hammer in two R/3 systems but a different material in a third; all three R/3 systems are integrated with the same APO system.]
An APO System can be integrated with several R/3 Systems (logical systems). The number assignment for certain master data objects, for example material masters, may therefore not be the same in all of these logical systems. In the example above, material number A in logical systems 1 and 2 signifies the same hammer, whereas the same material number in logical system 3 (perhaps a subsequently acquired plant) is a different material. If logical systems 1, 2, and 3 are planned with the same APO System, the system cannot simply transfer the R/3 material numbers into APO, as they may be ambiguous. The opposite case, that the same material has different material numbers in different logical systems, is not discussed on this or the following pages; if it does occur, you can use a customer exit in APO inbound processing to map the relevant material numbers from the different logical systems onto a unique material number in APO.

25 Business System Groups
To guarantee that the naming of master data in distributed system landscapes is unique, business system groups are defined as areas that share the same naming convention. A Business System Group (BSG) groups different physical systems into a higher-level logical unit. If, for example, two logical systems (LS1 and LS3) exist in a system landscape in which APO is integrated and contain different materials with the same name (material A is a hammer in LS1 but a different material in LS3), this conflict must be resolved in APO integration: the two logical systems are assigned to different business system groups (BSG1 and BSG2). The APO System, as an independent logical system, must also be assigned to a business system group. Each source system (R/3) is assigned to a BSG; one BSG can consist of one or several source systems, but must contain at least one R/3 source system.

26 Business System Groups
If two different logical systems with identical number ranges are operated in a distributed system landscape, a unique mapping must be set up. First, define the business system groups in the APO Customizing step Maintain business system groups (in the basic settings in APO). In the next step (Customizing step Assign logical system), assign the various logical systems of your system landscape to a BSG. Be aware that, within a business system group, the same naming convention must apply, i.e. the different master data objects must have unique names within the group. Finally, master data objects from different BSGs (whose names do not have to be unique across BSGs) can be given unique names when they are transferred into APO via a user exit; these customer exits exist for all master data (see below). You must maintain a BSG even if only one R/3 System is linked with one APO System, or if several R/3 Systems are connected without any risk of ambiguity. You only need more than one BSG when there is no unique naming convention.

27 Business System Groups
If two different logical systems with identical number ranges are operated in a distributed system landscape with APO, the identical names must be made unique with a customer exit. The following enhancements are available as customer exits for inbound processing in the APO System (transactions SMOD and CMOD):
APO_CIAPOCF001 EXIT_/SAPAPO/SAPLCIF_LOC_001: Location
APO_CIAPOCF002 EXIT_/SAPAPO/SAPLCIF_ATP_001: Maintain ATP check control
APO_CIAPOCF003 EXIT_/SAPAPO/SAPLCIF_IRQ_001: Reduction of planned independent requirements
APO_CIAPOCF004 EXIT_/SAPAPO/SAPLCIF_ORD_001: Production and planned orders
APO_CIAPOCF005 EXIT_/SAPAPO/SAPLCIF_PROD_001: Products
APO_CIAPOCF006 EXIT_/SAPAPO/SAPLCIF_PU_001: Purchase order documents
APO_CIAPOCF007 EXIT_/SAPAPO/SAPLCIF_QUOT_001: Quotas and their schedules; Customizing settings for quotas
APO_CIAPOCF008 EXIT_/SAPAPO/SAPLCIF_RES_001: Resource
APO_CIAPOCF009 EXIT_/SAPAPO/SAPLCIF_RSV_001: Reservation requirement
APO_CIAPOCF010 EXIT_/SAPAPO/SAPLCIF_SLS_001: Sales order
APO_CIAPOCF011 EXIT_/SAPAPO/SAPLCIF_STOCK_001: Stock
For further customer exits, see the online help. A sketch of the mapping idea follows below.
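The core of such an exit is a simple mapping that makes names unique per business system group. The FORM below is only a sketch of that idea; it does not use the real parameter interface of the exits above (look that up in SMOD), and the logical system name R3CLNT300 and the LS3_ prefix are made up:

PROGRAM zcif_bsg_mapping.

* Illustrative only: prefix material numbers from one logical system so
* that they are unique in APO. The real exit interfaces differ.
FORM map_matnr_to_apo
  USING    iv_logsys    TYPE logsys
           iv_matnr_r3  TYPE matnr
  CHANGING cv_matnr_apo TYPE matnr.

  CASE iv_logsys.
    WHEN 'R3CLNT300'.                      " hypothetical system with overlapping numbers
      CONCATENATE 'LS3_' iv_matnr_r3 INTO cv_matnr_apo.
    WHEN OTHERS.
      cv_matnr_apo = iv_matnr_r3.          " already unique, keep the R/3 number
  ENDCASE.

ENDFORM.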

28 Creating/Generating Integration Model
General strategy:
Define separate integration models for master data and transaction data.
Use unique combinations of integration model name and application to transfer different parts of data.
Do not create integration models with large data pools.
Don't forget to activate the business transaction event for APO integration!

29 Generate Integration Model for Initial Data Transfer
The master data that the system transfers for the first time (initial transfer) from the R/3 System into the APO System is defined in an integration model. The R/3 System generates this integration model (transaction CFM1). An integration model is uniquely identified by its name and application; it is possible, and makes sense, to create several integration models with the same name but with different applications. As a rule, make sure that the data pools of your integration models are not too big, as this makes error handling easier. The target system that you specify in the integration model determines which APO System the master data is transferred into; it is a logical (APO) system for which an RFC connection must exist. Finally, you specify which master data the system should transfer with a particular integration model: first you specify the master data types that flow into the integration model, and in a second step you specify the selection criteria used to select the individual master data documents in the R/3 System. You complete the generation of the integration model by "performing" the model (which means that the data objects of the model are compiled) and then saving it. For the initial transfer there is only one queue (up to PI 2001.2; as of later Plug-In releases there is no restriction).

30 Activate or Deactivate Integration Models
Checks by model: CFM1 checks whether all relevant objects are included in the generated model; CFM2 checks all models to verify whether the relevant objects are included in active models.

31 Master Data: Initial Data Transfer
[Screenshot: integration model with name PUMPS, target system APOCLNT800, application MATERIALS is activated and started at 10:00:00; material masters A and B in SAP OLTP appear as products A and B in APO.]
To transfer data into APO initially, simply activate the corresponding integration model using transaction CFM2. When you choose Start, the system triggers the data transfer into APO. Only one data channel is currently available for initial data loads; this ensures data consistency and prevents two users from creating the same data object. It also means that only one initial-data-load integration model can actively pass data at a time. Incremental loads are not restricted in this way.

32 Master Data: Transfer of New Data (1)
[Screenshot: the existing integration model PUMPS (target system APOCLNT800, application MATERIALS, selection MRP type X0) is regenerated at 11:00:00 and activated; only the difference, new material master Q, is transferred to APO as product Q.]
New master data that matches the selection criteria of an existing integration model can be transferred into APO by regenerating the existing model and activating it. In the example, materials with MRP type X0 are selected in the integration model. Two models with the same name are then temporarily active, differing only in date and time. During the data transfer, the system compares the new model with the old one and transfers just the new data that is not included in the old model. After the data transfer, the system deactivates the old model, leaving the new, complete model as the active one. The system does not allow two versions of an integration model with the same name to remain active side by side. If you want the system to retransfer all the master data of an existing integration model, you must first deactivate the old model and then activate the new one; all active models are always compared, so if the model with the old timestamp is not active, all data is transferred again. Another important special case concerns integration models with different names: if you have activated model 1 with material masters A and B and then create model 2 with material masters B and C, only material master C is transferred when you activate model 2. If you later deactivate model 1, the integration for material B remains valid. To ensure that the system has transferred all the APO-relevant master data, you can periodically regenerate and activate the existing integration models.

33 Master Data: Transfer of New Data (2)
Execute the integration model periodically. [Screenshot: integration model PUMPS, target system APOCLNT800, application MATERIALS; step 1 generates the model with report RIMODGEN and variant PUMP_MAT, step 2 activates it with report RIMODAC2; the two steps can run as separate jobs (JOB_1, JOB_2) or as one combined job (JOB_1_AND_2).]
As an SAP OLTP system continually creates new APO-relevant master data, you should regenerate and activate the integration models at regular intervals by defining appropriate jobs. Executing an integration model consists of two steps: generation and activation. The system generates an integration model with report RIMODGEN; define a variant of this report and schedule the variant as a job. The system activates an integration model with report RIMODAC2; define a variant of this report too, and schedule it as a job. You can also schedule both steps with a single job that runs the two variants as consecutive steps, as sketched below.
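A minimal sketch of such a combined job, created programmatically: the RIMODGEN variant PUMP_MAT is the one named on the slide, while the activation variant PUMP_ACT is an assumption and would have to exist for RIMODAC2. In practice you would typically define the same two steps interactively in SM36 with a periodic start condition:

REPORT zschedule_im_refresh.

DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'CIF_IM_REFRESH',
      lv_jobcount TYPE tbtcjob-jobcount.

CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount.

* Step 1: regenerate the integration model (variant PUMP_MAT from the slide).
SUBMIT rimodgen USING SELECTION-SET 'PUMP_MAT'
  VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

* Step 2: activate the regenerated model (variant name PUMP_ACT is an assumption).
SUBMIT rimodac2 USING SELECTION-SET 'PUMP_ACT'
  VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobname   = lv_jobname
    jobcount  = lv_jobcount
    strtimmed = 'X'.       " start immediately; use SM36 for periodic scheduling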

34 Master Data: Incremental Data Transfer
Any changes you make to the master data in the R/3 System that are also APO-relevant must be transferred into the APO System. The system does not usually carry out a new initial data transfer, but just transfers the individual changes to the master data. In the same way, the system also transfers a deletion flag for a material into APO. An incremental data transfer assumes that an active integration model is available for the relevant materials and that the materials are APO-relevant. When master data changes are transferred, remember that the system transfers the complete data record: if, for example, you change a single field in the material master, the entire material master is retransferred within the incremental data transfer.

35 Master Data: Incremental Data Transfer
Configuration procedure for the transfer of master data changes: transaction CFC5. For material masters, customers, and vendors you can choose between Business Transaction Event (immediate: changed SAP master data objects are transferred into APO in real time when the changes are saved), ALE change pointer (periodic: changes to SAP master data objects are recorded and the transfer of the changes is triggered, for example, periodically), and no incremental data transfer.
Any changes made to the master data in the R/3 System that are also APO-relevant (data contained in an active integration model) must be transferred into the APO system. There is no need to carry out a new initial data transfer; only the individual changes to the relevant master data are transferred. Similarly, a deletion flag for a material must also be transferred into APO. This is done by an incremental data transfer, which you control with transaction CFC5: you decide whether changes to material masters, customers, and vendors are transferred to the APO system immediately (in real time), periodically, or not at all. Depending on the extent of the changes, immediate data transfer may affect system performance, so in many cases periodic data transfer is preferable. However, if you choose periodic data transfer, you must also maintain the ALE change pointer settings. In future releases, choosing periodic incremental data transfer in transaction CFC5 will activate the change pointers automatically. When master data changes are transferred, the system always transfers the complete data record: if only one field in the material master is changed, the entire material master is retransferred within the incremental data transfer.

36 Master Data: Periodic Incremental Transfer
[Diagram: a change to material master A (planned delivery time 10 to 11 days) writes a change pointer if change pointers are generally active (BD61) and the relevant message type is active (BD50); the incremental data transfer (CFP1 or report RCPTRANS4, variant DELTA_MAT, target system APOCLNT800) then updates product A in APO. Delete change pointers regularly!]
Periodic incremental data transfer uses ALE change pointers; the change pointers select the master data that the system retransfers. If you select periodic incremental data transfer, you must configure the R/3 System so that ALE change pointers are written for master data changes. Customize ALE change pointers as follows: activate the change pointers in transaction BD61 (Customizing in ALE >> Activate change pointers), and determine which master data objects should have change pointers in transaction BD50 (Customizing >> Change pointer per message type). The relevant message types are CIFMAT for material masters, CIFVEN for vendors, CIFCUS for customers, and CIFSRC for info records; the availability of CIFPPM for BOMs and routings depends on the installed Support Package level. You can initiate the incremental data transfer of master data manually in transaction CFP1: specify the logical target system and the master data objects (material masters, vendors, sources of supply, customers) whose changes are to be transferred. You cannot choose integration models in CFP1; the transfer includes changes to all master data specified in CFP1 that belong to an active integration model. To schedule the incremental data transfer as a job, save the settings as a variant of report RCPTRANS4 (the report used by CFP1). For performance reasons, delete change pointers regularly (approximately once a week), either manually with BD22 or by scheduling report RBDCPCLR. Always delete all processed change pointers that are more than two weeks old; see the sketch after this paragraph.
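For completeness, both periodic activities can be triggered from a small wrapper report; in practice they would be scheduled as separate jobs with different frequencies (transfer daily, cleanup weekly). This is a sketch: DELTA_MAT is the variant named on the slide, while ZCP_CLEANUP is a hypothetical variant for RBDCPCLR that would have to be created first:

REPORT zcif_periodic_tasks.

* Incremental master data transfer (the report behind CFP1), variant DELTA_MAT.
SUBMIT rcptrans4 USING SELECTION-SET 'DELTA_MAT' AND RETURN.

* Cleanup of processed change pointers (transaction BD22); ZCP_CLEANUP is a
* hypothetical variant restricting deletion to pointers older than two weeks.
SUBMIT rbdcpclr USING SELECTION-SET 'ZCP_CLEANUP' AND RETURN.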

37 Transactional Data Transfer
[Screenshot: integration model PUMPS, target system APOCLNT800, application BEW_DATA containing material master A, storage location stock B, and planned order A is activated and started at 10:00:00; afterwards the incremental data transfer runs automatically.]
To transfer transactional data between OLTP and APO, simply activate the corresponding integration model using transaction CFM2. To trigger the initial data transfer, choose Start; the transactional data that you selected is transferred into APO for the first time. After this initial transfer, a real-time link is usually set up automatically between the OLTP system and the APO system for the selected transactional data. Whenever the storage location stock of a selected material changes due to a goods movement posting, the new stock is transferred into APO. In the same way, production orders that are generated in APO, for example, are immediately transferred to the OLTP system. In other words, the incremental transfer of transaction data is processed automatically; no explicit action is needed to initiate it.

38 Integration Monitoring
qRFC monitor (SMQ1), application log (CFG1), APO application log (/SAPAPO/C3); monitoring both R/3 and APO from within APO: SCM Queue Manager (/SAPAPO/CQ) and qRFC alert (/SAPAPO/CW).
The APO Core Interface (CIF) offers two monitoring functions for the data transfer from R/3 to APO and from APO to R/3: application log evaluation and the display of the transfer queue(s). In addition, the qRFC alert is used to monitor the outbound queues in APO and R/3 that are relevant to APO-R/3 integration; if queue blocks are detected, a message is sent to a predefined destination. Instead of evaluating the application logs in both systems (R/3 and APO) separately when errors occur, you can use the central SCM Queue Manager in APO to monitor the queues and application logs of both systems. You can find the Troubleshooting Guidelines Integration R/3 - APO on the R/3 Plug-In homepage in SAPNet (alias: R/3 Plug-in) under Media Center -> Literature. This document helps you localize and solve problems that arise when integrating R/3 and APO: it describes the transfer technology, the prerequisites that a data transfer must fulfill, systematic troubleshooting, the steps to take when a particular error occurs, as well as special cases and how to deal with them (see the corresponding SAP Note).

39 Integration Model: Other Functions
Deactivate integration model: the connection between R/3 and APO for the relevant master and movement data is cancelled. Delete integration model: deactivated models can be deleted. Filter object search: check whether data objects are already contained in an integration model. Consistency check: check the consistency of the selected data in the integration model.
Deactivate integration model: for example, after you deactivate the integration model for transaction data (which contains sales and planned orders), relevant sales orders created in OLTP are no longer transmitted to APO, and planned orders created in APO after the deactivation are no longer transmitted to the OLTP system. After you deactivate the integration model for master data, no data changes are transmitted. Delete integration model: before you can delete an integration model, you must deactivate it. Deleting an integration model does not delete the previously transmitted data in APO.

40 Integration Model: Other Functions
Two consecutive steps: generate the models with report RIMODGEN, then activate them with report RIMODAC2. Recommended: also schedule report RIMODDEL on a regular basis (for example weekly) to delete the old, inactive model versions.
Defining jobs enables the system to regenerate and then activate the integration models at regular intervals, as the R/3 System is constantly creating new APO-relevant master data. The system generates an integration model with report RIMODGEN; schedule this report as a job with a variant that you have defined previously. The system activates an integration model with report RIMODAC2; you also need a variant for this report in order to schedule it as a job. "Executing" an integration model consists of these two steps, generation and activation, and you can schedule them as two consecutive steps of a single job, using the relevant variant for each step. Schedule report RIMODDEL regularly as well to delete the old, inactive versions; refer to the corresponding SAPNet Note.

41 Appendix Example of Integration Models
[Diagram: integration model A contains materials 4711, 9011, 6661, 3311, 7711, 2222, 5000; integration model B contains materials 8911, 8944, 8933, 8922, 2222, 5000. Step 1: activate model A. Step 2: activate model B; materials 2222 and 5000 are not transferred again because they are already active.]
In this example there are two integration models, A and B, that share the materials 2222 and 5000. In the first step, model A is activated and all materials belonging to integration model A are transferred. In the second step, when integration model B is activated, only the delta is transferred: materials 2222 and 5000 are not transferred again, as they are already active.

42 Appendix Example of Integration Models (continued)
[Diagram: integration model A (materials 4711, 9011, 6661, 3311, 7711, 2222, 5000) is deactivated while integration model B (materials 8911, 8944, 8933, 8922, 2222, 5000) remains active.]
In the third step, integration model A is deactivated. Materials 2222 and 5000 remain active because they are still part of model B.

43 Appendix Example of Integration Models - Online Transfer
[Diagram: model A selects materials by the patterns 6*, 7*, and 9*; materials 9011, 6661, and 7711 are active and changes to them are transferred online without performance issues.]
Online transfer: as of PI 2000.2, online transfer is available for material masters, customers, and vendors. Changes to materials can therefore be transferred to APO immediately. In addition, newly created materials that correspond to the selection options of a stored integration model are transferred to APO immediately.

44 Appendix Example of Integration Models - New Material
[Diagram: a new material 6000 matches the selection patterns 6*, 7*, 9* of model A; a new version of model A is generated and only the new material 6000 is transferred from SAP R/3 to SAP APO.]
In the second example, a new material 6000 fits the selection criteria of model A, so a new version of model A is created. Generating the new model version causes only a minor performance impact, because only the delta (material 6000) is transferred.

45 Appendix Outbound Queue

46 Appendix Inbound Queue

