stagingcopy utility
The stagingcopy utility copies data from the production database to the production-ready data on a staging or authoring environment.
Important:
- Your staging and production environments must be at the same maintenance level to successfully run the stagingcopy utility.
- If the site uses the web feed utility, export the contents of the CMFEEDLOG table, then import the contents after the stagingcopy utility completes.
- (DB2)
export to /backupdata/cmfbackup.del of del select * from cmfeedlog
import from /backupdata/cmfbackup.del of del insert into cmfeedlog
For more information, see EXPORT command and IMPORT command.
- (Oracle)
exp userid/password FILE=cmfeedlog.dmp TABLES=cmfeedlog
imp userid/password FILE=cmfeedlog.dmp FROMUSER=userid TABLES=cmfeedlog IGNORE=Y
See your Oracle product documentation.
(Oracle) If Archive Logging is enabled, running the stagingcopy utility can result in a non-linear increase in the amount of disk space used by the destination database. The amount of disk space used by the Archive Logs can be up to 4 - 8 times the final size of the destination database. Ensure that you monitor your file system size to avoid running out of disk space for the destination database.
Parameter values
- dbtype
- Optional.
- (DB2) Specify DB2. DB2 is the default database type and you can omit the dbtype parameter from the command.
- (Oracle) Specify Oracle.
- scope
- Required: The level of scope for the copy to the staging or authoring environment. Specify one of the following levels:
- _all_
- Copies both the records related to the site and the records related to all merchants. Records are copied in the following order:
- Site records are copied from STGSITETAB
- Site records are copied from STGMRSTTAB
- Merchant records are copied from STGMERTAB
- Merchant records are copied from STGMRSTTAB
- _site_
- Copies only site-related records. These records are data that is common to all merchants. For example, the language and country or region code used by the system. This data comes from the STGSITETAB table.
- _merchant_
- Copies only the records related to individual merchants. For example, store information is customized for individual merchants, and rows from the store tables can be specific to each merchant. All data for all merchants is copied; you cannot copy data for only one merchant. This data comes from the STGMERTAB table.
If you do not set your scope to _all_:
- Copy site data before merchant data, since the site data is used by all merchants. Otherwise, your copy fails due to a mismatch between the foreign and primary keys.
- When you use the cleanup_stage_db parameter to clean the site data, merchant data can also be deleted because of the cascade delete. Clean the merchant data followed by the site data, then copy the site data followed by the merchant data, as shown in the sketch below.
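For example, a minimal command-line sketch of this ordering. It assumes the utility is started as stagingcopy.sh with -parameter value arguments, and the host names, database names, and credentials are placeholders:
# placeholder connection parameters shared by all four invocations
CONN="-sourcedb prodhost:50000/PRODDB -sourcedb_user produser -sourcedb_passwd prodpass -destdb staghost:50000/STAGDB -destdb_user staguser -destdb_passwd stagpass"
# clean merchant data, then site data, without copying
./stagingcopy.sh -scope _merchant_ -cleanup_stage_db only $CONN
./stagingcopy.sh -scope _site_ -cleanup_stage_db only $CONN
# copy site data, then merchant data, without cleaning again
./stagingcopy.sh -scope _site_ -cleanup_stage_db no $CONN
./stagingcopy.sh -scope _merchant_ -cleanup_stage_db no $CONN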
Note:
- These tables are referenced from only the staging or authoring environment, not from production.
- MEMBER table data is not cleaned for user members: MEMBER table records with TYPE column value of 'U'. However, if there are CASCADE DELETE foreign key constraints, cleanup of non-user members can result in the loss of data in tables that retain relationships with user members. For example, MBRREL, MBRGRPMBR, and MBRROLE. If information in these relationship tables must be retained, make sure that a backup of the destination database (staging or authoring) exists.
- dbtable
- The name of any specific table to be copied. All records in this table are copied, provided the records are within the scope specified by the scope parameter; otherwise, no records are copied.
- sourcedb
- Required: The name of the database on the production environment. This value must be a valid, full JDBC URL or follow the Type 4 database name convention:
- (DB2) db_server:db_port/db_name.
If you use script_file, consider setting script_sourcedb as well.
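For example, the following are placeholder values for sourcedb; the host name, port, database name, and Oracle service identifier are illustrative only, and the JDBC URL forms shown are assumptions:
(DB2) prodhost.example.com:50000/PRODDB
(DB2) jdbc:db2://prodhost.example.com:50000/PRODDB
(Oracle) jdbc:oracle:thin:@prodhost.example.com:1521:ORCL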
- sourcedb_user
- Required: The logon ID of the database administrator who created the source database schema.
- sourcedb_passwd
- Required: The password of the logon ID specified by the sourcedb_user parameter.
- sourcedb_schema
- Optional: Specifies the schema on the source database where all operations are conducted. Specifically, this schema should have all database objects required by an active WebSphere Commerce instance. When not specified, this value defaults to the schema active on the source database when a connection is established.
- destdb
- Required: The name of the database on the staging environment. This value must be a valid, full JDBC URL or follow the Type 4 database name convention:
- (DB2) db_server:db_port/db_name.
If you use script_file, consider setting script_destdb as well.
- destdb_user
- Required: The logon ID of the database administrator who created the staging or authoring database schema. This parameter is mandatory when you use a remote database.
- destdb_passwd
- Required: The password of the logon ID specified by the destdb_user parameter. If not specified, the system prompts you to enter the password. This parameter is mandatory when you use a remote database.
- destdb_schema
- Optional: Specifies the schema on the destination database where all operations are conducted. Specifically, this schema should have all database objects required by an active WebSphere Commerce instance. When not specified, this value defaults to the schema active on the destination database when a connection is established.
- script_file
- Optional. The name of the SQL script file that is generated by the stagingcopy utility when you use export and import to copy the production database to the production-ready data at the specified scope. The script file also contains the delete statements to clean the production-ready data if you use the default value or specify cleanup_stage_db yes. There are several additional prerequisites for using the script_file parameter:
- Before running the script, verify that you have enough disk space to hold the exported tables. The script file is generated in the directory from which you invoke the stagingcopy utility.
- Before running the script, copy the following files to the database server:
- stage_copy.sql
- utilities_root/schema/9.0.0.0/dbtype/wcs.stage.trigger.sql
- utilities_root/schema/9.0.0.0/dbtype/wcs.droptrigger.sql
Where dbtype is db2 or oracle.
- Consider setting the script_sourcedb and script_destdb options with script_file.
(DB2) Also, before you run the script, you must set the correct code page for your DB2 command-line session. Otherwise, you might lose data during the character conversion. Since the WebSphere Commerce database uses Unicode characters, you must also set the DB2 client session with a Unicode code page. This setting can be done by setting the operating system environment variable DB2CODEPAGE.
Note: You must terminate the current session and start a new command-line session for the new environment variable to take effect. For example,
export DB2CODEPAGE=1208
db2 terminate
Issue the following commands to run the script file:
db2 -vtd# -f wcs.droptrigger.sql
db2 -vtd# -f script_file_name
db2 -vtd# -f wcs.stage.trigger.sql
(Oracle) Issue the following commands to run the script file:
sqlplus /nolog @wcs.droptrigger.sql
sqlplus /nolog @script_file_name
sqlplus /nolog @wcs.stage.trigger.sql
- script_sourcedb
- Optional. Database specification to be used for the source database when database commands and statements are output to the file specified by script_file. The default value is the same as the value specified for sourcedb. If script_file is not specified, script_sourcedb and its value are ignored.
You can set this option if the sourcedb value is not of a type that can be understood by the command-line or file interpreter of the underlying database. For example, use the sourcedb parameter to specify the full JDBC URL. Use the script_sourcedb parameter to use a locally cataloged source database identifier or alias in the file that is generated by script_file.
- script_destdb
- Optional. Database specification to be used for the destination database when database commands and statements are output to the file specified by script_file. The default value is the same as the value specified for destdb. If script_file is not specified, script_destdb and its value are ignored.
You can set this option if the destdb value is not of a type that can be understood by the command-line or file interpreter of the underlying database. For example, use the destdb parameter to specify the full JDBC URL. Use the script_destdb parameter to use a locally cataloged destination database identifier or alias in the file that is generated by script_file.
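For example, a sketch of generating a script file while supplying JDBC URLs for the live connections and locally cataloged aliases for the generated script; the script name, host names, aliases, and credentials are placeholder assumptions:
./stagingcopy.sh -scope _all_ -sourcedb jdbc:db2://prodhost:50000/PRODDB -sourcedb_user produser -sourcedb_passwd prodpass -destdb jdbc:db2://staghost:50000/STAGDB -destdb_user staguser -destdb_passwd stagpass -script_file stage_copy.sql -script_sourcedb PRODDB -script_destdb STAGDB
The generated stage_copy.sql then refers to the cataloged aliases PRODDB and STAGDB, which the database command line can interpret, rather than to the JDBC URLs.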
- script_prtsourcepswd
- Optional. Specifies whether the password for the source database user is to be printed with database connection statements to the file generated by script_file. Default is false. If script_file is not specified, script_prtsourcepswd and its value are ignored.
- script_prtdestpswd
- Optional. Specifies whether the password for the destination database user is to be printed with database connection statements to the file generated by script_file. Default is false. If script_file is not specified, script_prtdestpswd and its value are ignored.
- instance
- Optional: The instance name that is local to the machine where you intend to run the stagingcopy utility.
- cleanup_stage_db
- Optional. Specify whether to clean the staging tables before the data is copied. If you do not specify this parameter, staging tables are automatically cleaned before copying data. Specify one of the following values:
- yes
- Cleans the staging tables before copying data. Merchant data can be deleted because of the cascade delete.
- no
- Do not clean the staging tables before copying data. Nothing is deleted from the staging tables. Your copy might fail if the copied data causes conflicts or duplicate keys on primary keys or unique indexes.
- only
- Cleans the staging tables but no data is copied from the production database.
If you specify the scope parameter, clean and copy the merchant data after you clean and copy the site data.
- batchsize
- Optional. Turns on or off SQL batch updates and specifies the number of consolidated change log records to include in one SQL batch. If not specified, the batchsize parameter is set to a value of 100.
Setting the batchsize parameter to a value of 0 (zero) turns off SQL batch updates. Turn off SQL batch updates if you are publishing any of the following changes from the production-ready data to the production environment:
- Use a workspace to delete a WebSphere Commerce object that involves the MEMBER table. This table includes objects such as users, organizations, customer segments, member groups, customer territory groups, or customer price groups.
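For example, a sketch that turns off SQL batch updates for a full copy; the script name, host names, and credentials are placeholders:
./stagingcopy.sh -scope _all_ -batchsize 0 -sourcedb prodhost:50000/PRODDB -sourcedb_user produser -sourcedb_passwd prodpass -destdb staghost:50000/STAGDB -destdb_user staguser -destdb_passwd stagpass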
When SQL batch update is turned on, change log records are sorted by change type (insert, update, or delete). Each batch contains changes of one type only. For example, if you have 102 insert changes and the batchsize parameter is set to 100, 2 SQL batches are created: one batch has 100 insert operations and the other has two insert operations.
Using SQL batch updates improves the speed with which the stagingcopy utility updates the destination database.
- log
- Optional. The path and name of the file in which the stagingcopy utility records its activities and errors. The timestamp is appended to the file name, for example, myLog_yyyy.mm.dd_hh.mm.ss.zzz.log. If this parameter is not specified, a log file called stagingcopy_yyyy.mm.dd_hh.mm.ss.zzz.log is created in the utilities_root/logs directory.
- transaction
- Optional. Specifies the number of records (n) after which a commit is issued. This number applies to records that are modified during both the cleanup and copy steps of the stagingcopy utility. Note: When used with script_file, transaction requires invocation of an SQL statement to estimate the number of DELETE statements to output to the file that is being generated. For example:
select count(*) num_recs from <tablename> [<scope-filtering-predicate>]
The number of DELETE statements that are generated is 1+(num_recs/n). A commit is appended after each DELETE statement.
By default, for each table, a commit is issued once after the cleanup step and once after the copy step. Values of n less than or equal to 0 revert functionality to default behavior.
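For example, assuming transaction is set to 1000 and a table contains 2,500 rows within the specified scope, the generated script contains 1+(2500/1000) = 3 DELETE statements, each followed by a commit.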
- paramfile
- Optional. Specifies the path to the parameter file that includes command-line arguments and values. Each argument and value needs to be in the format argument=value, with a single argument and value on each line in the file. Any passwords within this parameter file must be encrypted.
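For example, a minimal sketch of a parameter file and its use; the file name, connection values, and the stagingcopy.sh invocation are assumptions, and the encrypted password placeholders stand in for values that must be encrypted as noted above:
scope=_all_
sourcedb=prodhost:50000/PRODDB
sourcedb_user=produser
sourcedb_passwd=<encrypted_password>
destdb=staghost:50000/STAGDB
destdb_user=staguser
destdb_passwd=<encrypted_password>
The utility is then started with the path to the file:
./stagingcopy.sh -paramfile /path/to/stagingcopy_params.txt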
- The stagingcopy utility and the production database
The stagingcopy utility allows an administrator to copy data from the production database to the staging database. Use the stagingcopy utility for specific administrative situations, such as setting up a new staging environment or recovering a corrupted staging environment database.
- Examples of copying data to the production-ready database
The following examples illustrate how you can copy tables from the production database to the production-ready data. It is important to remember that you cannot use the stagingcopy utility if requests for quotes (RFQs) are on your production system.
Related concepts
Staging environment utilities
Staging environment
Related tasks
Running utilities from the Utility server Docker container
Configure the Oracle database connection for utilities to authenticate users with Oracle Wallet
Copying data to the staging database
Publishing data to the production database
Testing the site in a staging environment