Troubleshoot migration

Select the smallest timeout value that will meet our needs. Be prepared to wait for at least three times the timeout that we select: once to download files to the backup directory, once to upload the migrated files to the deployment manager, and once to synchronize the deployment manager with the migrated node agent.

  • Go to the following location in the backup directory created by the WASPreUpgrade command:

    backupDirectory/profiles/profile/properties

  • Open the soap.client.props file in that directory and find the value for the com.ibm.SOAP.requestTimeout property.

  • Change the value of com.ibm.SOAP.requestTimeout to the same value that we used in the v9.0 file.
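
    For example, if we decide on a 30-minute timeout, the entry in soap.client.props might look like the following (the value is in seconds, and the number shown here is only an assumption; choose the value that fits our environment):

      com.ibm.SOAP.requestTimeout=1800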

    Alternatively, we might want to consider specifying -includeApps script on the WASPostUpgrade command when we migrate the deployment manager to v9.0 if one or both of the following are true for our situation:

    • We want to quickly migrate all of the nodes in the cell. After the entire cell is migrated, however, we are willing to manually run the application installation script for every application in the deployment manager backup directory and then synchronize the configuration with all migrated nodes.

    • We are able to run without any applications installed.

    Follow these steps to perform this alternative procedure:

    1. Specify -includeApps script in the WASPostUpgrade command when we migrate the deployment manager to v9.0.
    2. Migrate the entire cell to v9.0 before installing any applications.

    3. Run the wsadmin command to install each application.

      • Install the applications in the v9.0 configuration during normal operations or in applicable maintenance windows.

      • Specify -conntype NONE. For example:

        wsadmin -f application_script -conntype NONE

    4. Synchronize the configuration with all of the migrated nodes.

    Read Migrate a large WAS ND configuration with a large number of applications for more information on this alternative procedure.

  • You receive the "Unable to copy document to temp file" error message. Here is an example:

    MIGR0304I: The previous WebSphere environment is being restored. com.ibm.websphere.management.exception.DocumentIOException: Unable to copy document to temp file: cells/sunblade1Network/applications/LARGEApp.ear/LARGEApp.ear

    The file system might be full. Clear some space and rerun the WASPostUpgrade command.

  • You receive the following message:

      MIGR0108E: The specified WebSphere directory does not contain a WebSphere version that can be upgraded.

    The following possible reasons for this error exist:

    • If WAS Version 6.1 is installed, we might not have run the WASPostUpgrade tool from the bin directory of the v9.0 installation root.

      1. Look for a message similar to the following when the WASPostUpgrade tool runs: IBM WAS, Release 6.1.

        This message indicates that we are running a migration utility from a previous release, not the v9.0 migration utility.

      2. Alter the environment path or change the current directory so that we can launch the v9.0 WASPostUpgrade tool.
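
        For example, we might switch to the v9.0 bin directory and launch the tool from there (the installation path, backup directory, and profile names shown here are assumptions):

          cd /opt/IBM/WebSphere/AppServer/bin
          ./WASPostUpgrade.sh /migration_backup -profileName AppSrv01 -oldProfile AppSrv01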

    • An invalid directory might have been specified when launching the WASPreUpgrade tool or the WASPostUpgrade tool.

    • The WASPreUpgrade tool was not run.

  • You receive the following error message:

      MIGR0253E: The backup directory migration_backup_directory does not exist.

    The following possible reasons for this error exist:

    • The WASPreUpgrade tool was not run before the WASPostUpgrade tool.

      1. Check to see if the backup directory specified in the error message exists.

      2. If not, run the WASPreUpgrade tool.

        Read WASPreUpgrade command for more information.
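
        The basic syntax takes the migration backup directory followed by the existing installation root; for example (the paths shown here are assumptions):

          ./WASPreUpgrade.sh /migration_backup /opt/WebSphere/AppServer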

      3. Retry the WASPostUpgrade tool.

    • An invalid backup directory might be specified.

      For example, the directory might have been a subdirectory of the v7.0 or later tree that was deleted after the WASPreUpgrade tool was run and the older version of the product was uninstalled but before the WASPostUpgrade tool was run.

      1. Determine whether or not the full directory structure specified in the error message exists.

      2. If possible, rerun the WASPreUpgrade tool, specifying the correct full migration backup directory.

      3. If the backup directory does not exist and the older version it came from is gone, rebuild the older version from a backup repository or XML configuration file.

      4. Rerun the WASPreUpgrade tool.

  • We decide to run WASPreUpgrade again after we have already run the WASPostUpgrade command.

    During the course of a deployment manager or a federated node migration, WASPostUpgrade might disable the old environment. If after running WASPostUpgrade we want to run WASPreUpgrade again against the old installation, run the migrationDisablementReversal.jacl script located in the old app_server_root/bin directory. After running this JACL script, our v7.0 or later environment will be in a valid state again, allowing us to run WASPreUpgrade to produce valid results.
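
    For example, on a Linux system we might run something similar to the following from the old release's bin directory (the exact invocation can vary by release, so treat this as a sketch):

      cd /opt/WebSphere/AppServer/bin
      ./wsadmin.sh -f migrationDisablementReversal.jacl -conntype NONE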

  • A federated migration fails with message MIGR0405E.

    The migration that took place on the deployment manager as part of our federated migration has failed. For a more detailed explanation of why this error occurred, open the your_node_migration_temp folder located on the deployment manager node under the ...DeploymentManagerProfile/temp directory. For example:

      /websphere80/appserver/profiles/dm_profile/temp/nodeX_migration_temp

    The logs and everything else involved in the migration of this node on the deployment manager node are located in this folder. This folder will also be required if we contact IBM support about this scenario.

  • v9.0 applications are lost during migration.

    If any of the v9.0 applications fail to install during a federated migration, they will be lost during the synchronizing of the configurations. This happens because one of the final steps of WASPostUpgrade runs a syncNode command, which downloads the configuration from the deployment manager node and overwrites the configuration on the federated node. If the applications fail to install, they will not be in the configuration located on the deployment manager node. To resolve this issue, manually install the applications after migration. If they are standard v9.0 applications, they will be located in the app_server_root/installableApps directory.

    To manually install an application that was lost during migration, use the wsadmin command to run the install_application_name.jacl script that the migration tools created in the backup directory.

    In a Linux environment, for example, use the following parameters:

      ./wsadmin.sh -f migration_backup_directory/install_application_name.jacl -conntype NONE

  • v9.0 applications fail to install.

    Manually install the applications using the wsadmin command after WASPostUpgrade has completed.

    To manually install an application that failed to install during migration, use the wsadmin command to run the install_application_name.jacl script that the migration tools created in the backup directory.

    In a Linux environment, for example, use the following parameters:

      ./wsadmin.sh -f migration_backup_directory/install_application_name.jacl -conntype NONE

    Read WASPostUpgrade command for more information.

  • The trace file exceeds its 400-megabyte allocation while WASPostUpgrade is still running. If additional disk space is not available, the migration fails.

    If we think we might encounter this problem during our migration:

    1. Stop the Migration wizard before the WASPostUpgrade command is issued.

    2. Run the WASPostUpgrade command from the command line for each profile we are migrating.

      When we run the WASPostUpgrade command from the command line:

      • Include the -oldProfile and -profileName parameters to indicate the profile we want to migrate.

      • Add the com.ibm.ejs.ras.TraceNLS* parameter to the trace string to reduce the size of our trace log. For example, we might want to specify the following trace setting:

          com.ibm.ejs.ras.TraceNLS*=info
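
      As a rough sketch, a WASPostUpgrade invocation that combines these options might look like the following (the backup directory, profile names, trace string, and trace file location are assumptions; see the WASPostUpgrade command reference for the exact parameters in our release):

        ./WASPostUpgrade.sh /migration_backup -oldProfile AppSrv01 -profileName AppSrv01 -traceString "com.ibm.ejs.ras.TraceNLS*=info" -traceFile /tmp/WASPostUpgrade.trace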

  • (Solaris) When we use the Migration wizard to migrate a profile from WAS Version 6.0.2 to v9.0 on a Solaris x64 processor-based system, the migration might fail during the WASPostUpgrade step.

    We might see messages similar to the following in migration_backup_dir/logs/WASPostUpgrade.time_stamp.log:

      MIGR0327E: A failure occurred with stopNode.
      MIGR0272E: The migration function cannot complete the command.

    WAS v6.0.2 uses a Java Virtual Machine (JVM) in 32-bit mode. The Migration wizard for v9.0 calls the WASPostUpgrade.sh script, which attempts to run the JVM for v6.0.2 in 64-bit mode when it stops the v6.0.2 node.

    Remove the incomplete profile and enable WAS to correctly migrate the v6.0.2 profile:

    1. From a command line, change to app_server_root/bin.

      For example, type the following command:

        cd /opt/IBM/WebSphere/AppServer/bin

    2. Locate the WASPostUpgrade.sh script in the app_server_root/bin directory, and make a backup copy.

    3. Open the WASPostUpgrade.sh script in an editor, and perform the following actions:

      1. Locate the following line of code:

          . "$binDir"/setupCmdLine.sh

      2. Insert the following line of code after the code identified in the previous step:

          JVM_EXTRA_CMD_ARGS=""

      3. Save the changes.

    4. Delete the incomplete v9.0 profile created during the migration process:

        app_server_root/bin/manageprofiles.sh -delete -profileName profile

    5. Delete the profile_root directory of the v9.0 profile that was removed in the previous step.

    6. Rerun the Migration wizard.

  • If we select the option for the migration process to install the enterprise applications that exist in the v7.0 or later configuration into the new v9.0 configuration, we might encounter some error messages during the application-installation phase of migration.

    The applications that exist in the v7.0 or later configuration might have incorrect deployment information; typically, these are invalid XML documents that were not validated sufficiently in previous WAS runtimes. The runtime now has an improved application-installation validation process and will fail to install these malformed EAR files. This results in a failure during the application-installation phase of WASPostUpgrade and produces an "E" error message. This is considered a "fatal" migration error.

    If migration fails in this way during application installation, we can do one of the following:

    • Fix the problems in the v7.0 or later applications, and then remigrate.
    • Proceed with the migration and ignore these errors.

      In this case, the migration process does not install the failing applications but does complete all of the other migration steps.

      Later, we can fix the problems in the applications and then manually install them in the new v9.0 configuration using the administrative console or an install script.
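
      As a minimal sketch, a scripted installation through wsadmin might look like the following JACL fragment (the EAR path, node name, and server name are assumptions):

        $AdminApp install /tmp/MyFixedApp.ear {-node myNode -server server1}
        $AdminConfig save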

  • After migrating a federated node to v9.0, the application server might not start.

    When we try to start the application server, we might see errors similar to those in the following example:

      [5/11/06 15:41:23:190 CDT] 0000000a SystemErr R com.ibm.ws.exception.RuntimeError: com.ibm.ws.exception.RuntimeError: org.omg.CORBA.INTERNAL: CREATE_LISTENER_FAILED_4 vmcid: 0x49421000 minor code: 56 completed: No
      [5/11/06 15:41:23:196 CDT] 0000000a SystemErr R at com.ibm.ws.runtime.WsServerImpl.bootServerContainer(WsServerImpl.java:198)
      [5/11/06 15:41:23:196 CDT] 0000000a SystemErr R at com.ibm.ws.runtime.WsServerImpl.start(WsServerImpl.java:139)
      [5/11/06 15:41:23:196 CDT] 0000000a SystemErr R at com.ibm.ws.runtime.WsServerImpl.main(WsServerImpl.java:460)
      [5/11/06 15:41:23:196 CDT] 0000000a SystemErr R at com.ibm.ws.runtime.WsServer.main(WsServer.java:59)
      [5/11/06 15:41:23:196 CDT] 0000000a SystemErr R at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      [5/11/06 15:41:23:196 CDT] 0000000a SystemErr R at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
      [5/11/06 15:41:23:197 CDT] 0000000a SystemErr R at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

    Change the port number at which the federated node's application server is listening. If the deployment manager is listening at port 9101 for ORB_LISTENER_ADDRESS, for example, the application server of the federated node should not be listening at port 9101 for its ORB_LISTENER_ADDRESS. To resolve the problem in this example:

    1. From the administrative console, click Application servers > server > Ports > ORB_LISTENER_ADDRESS.

    2. Change the ORB_LISTENER_ADDRESS port number to one that is not used.
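
    Alternatively, if we prefer scripting, a wsadmin JACL sketch along the following lines could make the same change (this assumes the AdminTask modifyServerPort command is available in our release; the node, server, and port values are placeholders):

      $AdminTask modifyServerPort server1 {-nodeName myNode -endPointName ORB_LISTENER_ADDRESS -port 9102}
      $AdminConfig save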

  • If synchronization fails when we migrate a federated node to v9.0, the application server might not start.

    We might receive messages similar to the following when we migrate a federated node to v9.0:

      ADMU0016I: Synchronizing configuration between node and cell.
      ADMU0111E: Program exiting with error:
      com.ibm.websphere.management.exception.AdminException: ADMU0005E:
      Error synchronizing repositories
      ADMU0211I: Error details may be seen in the file:
      /opt/WebSphere/80AppServer/profiles/AppSrv02/logs/syncNode.log
      MIGR0350W: Synchronization with the deployment manager using the SOAP protocol failed.
      MIGR0307I: The restoration of the previous WAS environment is complete.
      MIGR0271W: Migration completed successfully, with one or more warnings.

    These messages indicate the following:

    • Your deployment manager is at a v9.0 configuration level.

    • The federated node we are trying to migrate is at a v9.0 configuration level on the deployment manager's repository (including applications).

    • The federated node itself is not fully migrated because the syncNode operation did not complete.

    Perform the following actions to resolve this issue:

    1. Rerun the syncNode command on the node to synchronize it with the deployment manager.

    2. Run the GenPluginCfg command.
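
    For example, from the federated node's profile bin directory we might run commands similar to the following (the host name, port, and credentials are placeholders):

      ./syncNode.sh dmgr_host 8879 -username MyAdminUser -password MyAdminPassword
      ./GenPluginCfg.sh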

  • If we migrate a deployment manager to v9.0 from a v6.1 configuration that was migrated from a v5.1 deployment manager, the syncNode command might fail on any v5.1 federated nodes in the cell.

    For example, we might see messages similar to the following text when running the syncNode command on a v5.1 node:

      bash-3.00# ./syncNode.sh dmgrhostname 8879 -username MyAdminUser -password MyAdminPassword

      ADMU0116I: Tool information is being logged in file /usr/WebSphere/AppServer/logs/syncNode.log
      ADMU0401I: Begin syncNode operation for node My511Node with Deployment Manager dmgrhostname: 8879
      ADMU0111E: Program exiting with error: com.ibm.websphere.management.exception.AdminException:
      ADMU2092E: The node and Deployment Manager must have the same product extensions, but they do not match. The node product extension is BASE and the Deployment Manager product extension is PME.
      ADMU0211I: Error details may be seen in the file: /usr/WebSphere/AppServer/logs/syncNode.log
      ADMU1211I: To obtain a full trace of the failure, use the -trace option.

  • Because the EJB 3.0 specification introduces the javax.ejb.Remote annotation, certain EJB 2.1 beans might fail to compile if the enterprise beans import the entire javax.ejb and java.rmi packages, which makes the Remote type ambiguous. Compilation errors similar to those in the following example might occur:

      ejbModule/com/ibm/websphere/samples/trade/ejb/QuoteHome.java(17): The type Remote is ambiguous

    To resolve the ambiguity, change the beans to import only the specific classes that they need (for example, java.rmi.Remote or javax.ejb.Remote) rather than the entire packages.

  • When we install WAS v6.1 and federate a node to a v9.0 deployment manager, we might experience unexpected and continuous security exception messages.

    The SystemOut.log files of the node agent contain the following exceptions:

      [7/8/08 16:41:31:416 EDT] 0000001c DefaultTokenP E HMGR0149E: An attempt to open a connection
      to core group DefaultCoreGroup has been rejected. The sending process has a name of
      wasinst101Cell01\ndrack104Node08\server1 and an IP address of /9.42.92.86. Global security in
      the local process is Enabled. Global security in the sending process is Enabled. The received
      token starts with x2>W 9 Sv?. The exception is
      com.ibm.websphere.security.auth.WSLoginFailedException: Validation of LTPA token failed due to
      invalid keys or token type.
        at com.ibm.ws.security.ltpa.LTPAServerObject.validateToken(LTPAServerObject.java:876)
        at com.ibm.ws.security.token.WSCredentialTokenMapper.validateLTPAToken(WSCredentialTokenMapper.java:1178)
        at com.ibm.ws.hamanager.runtime.DefaultTokenProvider.authenticateMember(DefaultTokenProvider.java:214)
        at com.ibm.ws.hamanager.coordinator.impl.DCSPluginImpl.authenticateMember(DCSPluginImpl.java:723)
        at com.ibm.ws.dcs.vri.transportAdapter.rmmImpl.ptpDiscovery.DiscoveryRcv.acceptStream(DiscoveryRcv.java:266)
        at com.ibm.rmm.ptl.tchan.receiver.PacketProcessor.fetchStream(PacketProcessor.java:470)
        at com.ibm.rmm.ptl.tchan.receiver.PacketProcessor.run(PacketProcessor.java:917)

    The deployment manager uses v9.0 and all of the nodes and alias nodes are using v6.1. To resolve this problem, upgrade all v6.1 nodes to v6.1.0.17 or later.
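
    To check the fix pack level of a node before and after the upgrade, we can run the versionInfo command from that node's app_server_root/bin directory; for example:

      ./versionInfo.sh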

    New ports that are registered on a migrated v9.0 node agent include: WC_defaulthost, WC_defaulthost_secure, WC_adminhost, WC_adminhost_secure, SIB_ENDPOINT_ADDRESS, SIB_ENDPOINT_SECURE_ADDRESS, SIB_MQ_ENDPOINT_ADDRESS, and SIB_MQ_ENDPOINT_SECURE_ADDRESS. These ports are not needed by the node agent and can be safely deleted.


What to do next

If we did not find our problem listed, contact IBM support.