SMIGR_CREATE_DDL
You want to copy an SAP NetWeaver system homogeneously or heterogeneously using R3load. Note that a heterogeneous database migration to INFORMIX is not supported. To carry out the copy, proceed as follows:
Contents
1. General prerequisites for SAP kernel 6.40
2. Tasks before the export
3. Tasks after the export
4. Tasks after the import
5. Special explanations for Database Platform DB2
6. Special explanations for Database Platform DB4
7. Special explanations for Database Platform DB6
8. Special explanations for Database Platform MSSQL
9. Special explanations for Database Platform MAXDB
10. Special explanations for the ORACLE Database Platform
1. General prerequisites for SAP kernel 6.40
------------------------------------------
Prerequisites for the homogeneous or heterogeneous system copy without a change of the database platform:
Support Package/Tool    Patch level
R3load                  >= 17
BW 3.5                  >= 10
Basis 6.40              >= 16
Prerequisites for the heterogeneous system copy when you change the database platform:
Support Package/Tool    Patch level
R3load                  >= 17
BW 3.5                  >= 10
Basis 6.40              >= 16
Independent of these minimum requirements, we recommend that you carry out the migration with a current Support Package because we are continuously making further developments.
Ensure that you use the new tools (R3load, R3szchk) both in the source system when you export the data and in the target system when you import the data.
2. Tasks before the export
-----------------------
Start the SMIGR_CREATE_DDL program before you export the data from the source system. This program allows you to copy database objects that do not correspond to SAP standards. These objects include partitioned (fragmented) tables and bitmap indexes. Special '.SQL' files are generated for these objects. These files contain 'native DDL' (create) statements and can be analyzed by R3load. Proceed as follows:
* Log on to the live BW client as a user with 'SAP_ALL' rights.
* Go to transaction SE38 and call program 'SMIGR_CREATE_DDL'.
* Select the database platform and the database version of your target system.
* Set the 'Unicode migration' indicator if your target system is an SAP Unicode system.
* Specify a directory for the '.SQL' files to be generated.
* Execute program 'SMIGR_CREATE_DDL'. The system now generates the DDL statements and writes them into the directory specified.
* You must ensure that no more changes, such as activations or field changes, are made in the BW system after you call the SMIGR_CREATE_DDL report and before you export the data. The safest thing to do is to call this report directly before the export while the system is still locked for the user.
* The report SMIGR_CREATE_DDL creates files with the extension '.SQL'. If no such files are created, BW-specific processing is not required because tables of this type do not exist in the system. This is the case, for example, if BW is not operated actively in the system. In that case, the special steps described here no longer apply, and the migration should be carried out in accordance with the standard guidelines.
You can run report 'SAP_DROP_TMPTABLES' (see Note 449891) before the export if you experienced problems with temporary tables during a test import. This report deletes all temporary tables. Note, however, that this has a short-term impact on query performance in the source system.
3. Tasks after the export
-----------------------
Copy the generated '.SQL' DDL files into the DB/<target database> subdirectory of the export directory before you import the data into the target system. This is the only way that R3load can evaluate these special files.
4. Tasks after the import
------------------------
After the successful import, you must call report RS_BW_POST_MIGRATION.
For more information, see Note 948780.
Before you execute the report, implement the corrections contained in Note 901592. It is absolutely imperative that you implement Note 901592 for Support Package Stack BW 3.50 from patch level 2 to patch level 15. Connect the source systems to the migrated BW system before you start the RS_BW_POST_MIGRATION program; this is required so that new versions of all PSA tables can be used. Ensure that all connected SAP systems are also active while program 'RS_BW_POST_MIGRATION' is running. Depending on whether you changed the database platform, you must run the RS_BW_POST_MIGRATION report with a different variant in the background. Call the program in the background so that the spool log is retained. In this context, DB2 for zOS, DB2 for AS400, and DB2 for UNIX/NT count as three different database platforms.
If you changed the DB platform, use the &POSTMGRDB variant; if you did NOT change the DB platform, use the &POSTMGR variant. The RS_BW_POST_MIGRATION program carries out the necessary adjustments after the system copy: it adjusts database-specific DDIC entries (modified, deleted, expanded), invalidates database-specific (generated) programs, deletes temporary objects from certain LRU buffers, transfers new versions of the PSA tables, and adjusts table DBDIFF. The runtime of the program may be a few hours.
5. Special explanations for Database Platform DB2
-------------------------------------------------
* SAPinst and the kernel must be based on at least SAP NetWeaver 04 SR1. You can use the following CDs, for example:
o SAP ERP Central Component 4.0 SR1 Installation Master, CD51030944
o SAP NetWeaver 04 SR1 Kernel DB2 UDB for z/OS, CD51030958
o SAP NetWeaver 04 SR1 Unicode Kernel DB2 for z/OS, CD51030959
* Program SMIGR_CREATE_DDL
You do not have to specify a database version.
* Necessary corrections
Implement the correction from Note 797398 if necessary.
VERY IMPORTANT: Implement the corrections described in Note 849857. The HOTNEWS for this was not included in BW 3.50 patch 13. For further information, see that note.
6. Special explanations for Database Platform DB4
-------------------------------------------------
* Also use the "Installation Guide" specific to the platform and product. You can find the Installation Guide on the SAP Service Marketplace.
* For the migration you can use any Installation Master DVD for 6.40 on IBM iSeries.
* See Note 744735, which contains an overview of the special features of the migration with target platform DB2 UDB for iSeries (DB2/400).
Due to its considerable performance advantages, pay particular attention to the "iSeries InPlace Unicode conversion" for homogeneous Latin-1 migrations.
* Note 782993 describes how, when you execute a homogeneous migration of an SAP product (Basis 6.20 and higher) to DB2/400, you can change the storage parameters of database objects (such as tables and indexes) without causing any additional downtime.
* Program SMIGR_CREATE_DDL: The "Database version" field is not available at present.
* Heterogeneous system copy with target DB2/400: platform-specific actions after the import:
Secondary indexes of BW fact tables are not created automatically during a heterogeneous system copy with the target platform IBM iSeries (DB4). After you have executed program 'RS_BW_POST_MIGRATION' as described above, proceed as follows if you want to migrate a BW system or a system based on BW (SEM, SCM/APO):
o For performance reasons in later BW operation, we strongly recommend that you set up the EVI stage 2 support as outlined in Note 501572. However, do not start the programs 'SAP_INFOCUBE_INDEXES_CHG_DB4' and 'SAP_INFOCUBE_INDEXES_REPAIR' that are described in that note.
o The next step is to create all secondary database indexes of the BW fact tables. This procedure can run in parallel if you use Symmetric Multi Processing (SMP), an additional product for DB2/400. If you operate SMP, implement Note 601110 in your system so that the BW indexes can be created in parallel.
o You can now use program 'SAP_SANITY_CHECK_DB4' to check the settings for the EVI stage 2 support and the usage of SMP. See Note 541508 for more detailed information.
o If you decided to activate SMP, you must now set up the missing indexes. To do this, start program 'SAP_INFOCUBE_INDEXES_REPAIR' in the background.
7. Special explanations for Database Platform DB6
-------------------------------------------------
* If you want to migrate to DB6, the system should run on SAP BW 3.5 Support Package 23 or higher. It is important that Notes 1240076 and 1335017 are implemented in your system.
* Possible data loss when loading F fact tables with an incorrect primary index:
If R3load cannot read the .SQL files generated by the report SMIGR_CREATE_DDL, the F fact tables are created with the standard primary index ~0. This may occur, for example, if SMIGR_CREATE_DDL was not called at all, if the .SQL files are not in the DB/DB6 subdirectory of the export directory, or if the .SQL files have incorrect access authorizations. In this case, data records that are duplicates with regard to the incorrect primary key may disappear unnoticed when you load with R3load. You must avoid this situation at all costs.
When you implement the correction from Note 1376507, R3load terminates if duplicate data records exist. Note 1376507 also describes how you can recognize whether data records have disappeared.
* Possible performance problem when loading data to MDC tables with the LOAD utility:
Under certain circumstances, loading data to tables that use MDC may take longer than a "LOAD" to non-MDC tables.
This problem occurs under the following conditions:
o You have defined MDC on the target table.
o You have chosen "LOAD" as the method for loading data to the MDC table.
o The input data for the table is not sorted according to MDC columns, for example, if the data is sorted according to the primary key but the MDC columns are not the first columns in the primary key.
Solution: To improve the LOAD performance of unsorted data in MDC tables, increase the value of the database configuration parameter UTIL_HEAP_SZ to approximately 250,000 4-KB pages.
Also set the value of the "DATA BUFFER" parameter for the LOAD utility to at least 200,000 4-KB pages. Note that you cannot start multiple R3load processes with these settings because the available utility heap is already completely used by the "DATA BUFFER" of the first process.
If you use R3load (which calls a CLI version of the LOAD), you can set the shell environment variable DB6LOAD_DATA_BUFFER_SIZE to the required "DATA BUFFER" value (approximately 200,000 pages). R3load then passes the value from the environment variable to the LOAD utility. See Note 454173 for more information about the environment variable DB6LOAD_DATA_BUFFER_SIZE; a command sketch follows below.
The performance problem mentioned above does not occur if the input data for an MDC table is sorted according to the MDC columns or if the target table does not use MDC.
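The following is only an illustration of the two settings described above, not an official procedure; <SID> is a placeholder for your database name, and the values are the recommendations from this section:
   db2 update db cfg for <SID> using UTIL_HEAP_SZ 250000
   export DB6LOAD_DATA_BUFFER_SIZE=200000
The first line is a DB2 CLP command that enlarges the utility heap; the second line sets the environment variable in the shell from which R3load is started, so that R3load can pass the value on to the LOAD utility.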
* Important information about multidimensional clustering (MDC) tables: If you use DB2 Version 9.1 up to and including FixPak 4, you cannot use DB2 LOAD to load MDC tables while using the row compression feature. In this case, data may be corrupted as described in DB2 APAR IZ21943. FixPak 5 solves the problem. We recommend that you use INSERT instead of LOAD.
* For DB6-specific information about the migration, see Note 884981.
* You must also read Note 1156514 (DB6: Possible data loss during system copy of SAP BI) if you want to migrate to a DB6 database with several DPF partitions and you want to compress tables directly when you use R3load to load the data.
* Necessary corrections
After you migrate range-partitioned InfoCubes, the SQL error SQL0401N ("Data types not compatible") may occur. Implement the corrections described in Note 799465.
To avoid problems when you migrate ODS objects with several unique indexes, implement Note 822819. See that note for more detailed information about the problem.
To avoid errors when you generate the SQL files, implement Note 833924. See that note for more detailed information about the problem.
When you implement the correction in Note 858220, clustered indexes on InfoCube fact tables are no longer automatically created together with the table. With the relevant R3load settings (where the secondary indexes are created after the data is loaded), clustered indexes are only created after the data has been loaded. This can considerably improve the performance of the data import.
Note 880219 adjusts the creation of the DDL for secondary indexes on the fact tables to the changes described in Note 865667. The DDL is generated so that no secondary indexes are created on the unit dimension; in addition, for cumulative InfoCubes, no indexes are created on the package dimension of the E fact table. Implement this note if your source system has Support Package level 14 or contains the corrections from Note 865667.
Before running the RS_BW_POST_MIGRATION report on the DB6 target system, you must implement Notes 884124, 933610 and 948780 so that indexes created on the database are entered correctly in the DDIC.
Implement Note 894290 before you execute SMIGR_CREATE_DDL. The note eliminates an error that affects BW tables containing column names such as "TABNAME", "INDEXNAME", "DBSYSABBR" or "DDSTORAGE".
Implement Note 919530 before you execute SMIGR_CREATE_DDL. This prevents DB6 entries that were created in table DDSTORAGE in the source system from being transferred to the target system.
If you are using BW 3.5 Support Package 18, you must set the DB6 RSADMIN parameter DB6_MDC_FOR_PSA to 'No' before you execute report SMIGR_CREATE_DDL in the source system. Use report SAP_RSADMIN_MAINTAIN for this: enter DB6_MDC_FOR_PSA in the 'Object' field and the value 'No' in the 'Value' field, select 'Insert', and choose 'Execute'. See Notes 971135 and 1000382 for more information.
Note 1167138 corrects the error whereby program SMIGR_CREATE_DDL does not add COMPRESS YES to CREATE TABLE statements even if the RSADMIN parameter DB6_ROW_COMPRESSION is set to YES. The corrections comprise a section for SAP BW and a section for SAP Basis; they are contained in BW 3.5 Support Package 23 and SAP Basis 6.40 Support Package 23.
* R3load
Implement Note 799745 if R3load issues the following error message when loading into the target database:
(DDL) ERROR: statement text too long (size=4)
* Parameterization of R3load:
Due to bottlenecks with the size of the logical log, we now recommend the following R3load parameterization.
By observing the following three rules, you eliminate the otherwise very high risk of running into an "out of log space" condition when you load large databases, since only very few log entries are written.
a) Only migrate to DB2 V8.2 or higher; you should no longer migrate to V8.1. The reason is that in V8.1, CREATE INDEX logs all index entries separately. As of V8.2, this logging strategy was changed so that a CREATE INDEX creates only minimal log entries.
b) To load large tables, call R3load with the FAST option (for DB2 FAST LOAD). See Note 454173 for more information. The DB2 FAST LOAD writes only minimal entries to the log.
c) Always call R3load with the -nolog option. This is not the default, and you must enter it manually. To do this, copy the installation CD locally and then change the keydb.xml file. This is described in detail below.
Adjusting the keydb.xml file for the R3load parameterization:
In the keydb.xml file, locate the DB6 block. Within this block, the variable "loadOptions" defines the parameters for the R3load call. Append the additional R3load option "-nolog" to the value of this variable, as sketched below.
Important:
Place the changed keydb.xml file in a copy of the CD. The installation must run from the copied directory so that the changed keydb.xml is used.
In the NW04SR1 installation, the keydb.xml file is in the following directory:
IM04_AIX_64/SAPINST/UNIX/AIX_64/NW04SR1/WEBAS/ABAP/DB6/
In other installation CDs, the directory path differs accordingly.
* Program SMIGR_CREATE_DDL
You do not have to specify a database version.
8. Special explanations for Database Platform MSSQL
---------------------------------------------------
* Necessary corrections:
Implement the corrections contained in the following notes:
Note 960504 (only if you are using NW04 < Support Package 18), 961591, 962124, 965695, and 996263 (only if you are using NW04 < Support Package 20).
If you migrate to SQL 2005: after you implement Note 1010854, the system checks while report SMIGR_CREATE_DDL runs in the source system whether more than 1,000 partitions would be created in the target system (this would cause an error). This check is strongly recommended if you are migrating from SQL 2000 or ORACLE.
* If you migrate to SQL Server 2005, after the successful migration, you must activate partitioning in the target system before you work actively with BW (load data, compress and so on).
You can do this by activating the MSS_FORCE_PARTITIONING_ON parameter. Note 869407 describes how you can activate the parameter and what effects it has.
* If you come from the ORACLE database platform, where more than 999 partitions are allowed per table, you must ensure that your F fact tables have no more than 999 partitions. If a table has more than 999 partitions, compress as many requests as necessary to get below the limit of 1,000 partitions.
You can use the following query in sqlplus to find all partitioned tables with more than 999 partitions:
select table_name from user_part_tables
where partition_count >= 1000 and table_name like '/%';
* As of Basis Support Package 14 (SAPKB64014), this restriction is obsolete. Theoretically, you can now process 1,000 or more partitions per database object during the migration.
* If you want to migrate from SQL Server 2005 to SQL Server 2005 (homogeneous migration), we recommend that you carry out a system copy using sp_attach_db and sp_detach_db (see Note 151603).
However, if you want to carry out the migration using the migration procedure, you must have at least Basis Support Package 18 for NW04 and at least Basis Support Package 9 for NW04s. Otherwise your tables will not be partitioned in the target system.
* If no target database version is specified for the SQL Server in the SMIGR_CREATE_DDL report, SQL 2000 is taken as the target.
If you want to migrate to SQL Server 2005, you must use the F4 help to choose the target version explicitly.
* Note:
If you migrate to SQL Server 2005 but do not want partitioning to be active, you must specify SQL 2000 as the target when you execute the SMIGR_CREATE_DDL report. However, this type of migration should be the EXCEPTION.
9. Special explanations for Database Platform MAXDB
---------------------------------------------------
* Program SMIGR_CREATE_DDL
You do not have to specify a database version.
* Necessary corrections
Implement the corrections contained in Notes 809907, 896023 and 1077911.
* To support MaxDB clustering, implement Note 1020006 in the source system before SMIGR_CREATE_DDL is executed.
* Note 1014782 contains general questions and answers about the MaxDB/liveCache system copy.
10. Special explanations for the ORACLE Database Platform
---------------------------------------------------------
* Program SMIGR_CREATE_DDL
You do not have to specify a database version.
When you make a homogeneous or heterogeneous system copy without changing the database platform, the information necessary for generating the DDL statements is read directly from the system catalog. For this purpose, all DB objects are checked to determine whether their storage parameters differ from those of the SAP standard system. This affects, for example, partitioned tables and bitmap indexes. To speed up processing during DDL generation, see Note 519407.
Ensure that there are no more than 500 to 600 compressed requests in the F fact tables of your InfoCubes; otherwise, 'CREATE TABLE' statements larger than 64 KB are generated for the F fact tables. Statements larger than 64 KB are not executed during the import due to DB-specific restrictions, and this causes the import phase to terminate. A query sketch for checking the partition counts follows below.
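As an illustration only, you can adapt the idea of the sqlplus query from the MSSQL section to check the partition counts of the F fact tables on the ORACLE source system; the name pattern '/BI%/F%' is an assumption about the fact table naming in your namespace and may need to be adapted:
   select table_name, partition_count
   from user_part_tables
   where table_name like '/BI%/F%'
   and partition_count > 500;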
* Necessary corrections
Implement the corrections described in Note 793253. You only need this correction if your source system runs on INFORMIX and you use BW Support Package 10.
If you use Basis Support Package 10 or Support Package 11, implement the corrections from Note 813562.
If you use ASSM on your Oracle database, also see Note 813562.
Implement the corrections from Note 899966 if your system was installed with version 4.6D or lower. In this case, the names of some primary keys differ from the naming convention in 6.20. This may cause incorrect DDL statements and therefore missing primary keys in the target system.
VERY IMPORTANT: Implement the corrections described in Note 849857. The HOTNEWS for this was not included in BW 3.50 patch 13. For further information, see that note.