Troubleshoot migration
- While we are using the v9.0 Migration wizard to create a profile before migrating a configuration, we might see the following profile-creation error messages.
profileName: profileName cannot be empty
profilePath: Insufficient disk space
These error messages might be displayed if we enter a profile name that contains an invalid character such as a space. Rerun the Migration wizard, and verify that the profile name contains no spaces, quotation marks, or other special characters.
- If we encounter a problem when we are migrating from a previous version of WAS to v9.0, check the log files and other available information.
- Look for the log files, and browse them for clues.
- migration_backup_dir/logs/WASPreUpgrade.time_stamp.log
- migration_backup_dir/logs/WASPostUpgrade.time_stamp.log
- app_server_root/logs/clientupgrade.time_stamp.log
If these log files reference additional support commands, such as the syncNode command or wsadmin commands, additional trace and log files for these commands can be found at the following sample locations:
- migration_backup_dir/logs/backupConfig.log
- migration_backup_dir/logs/wsadmin/applications/app_a.ear.log
- migration_backup_dir/logs/wsadmin/applications/app_a.ear.trace
We can define where these trace and log files are created in a migration properties file as described in Define our migration through properties.
- Look for MIGR0259I: The migration has successfully completed or MIGR0271W: The migration completed with warnings in one of the log files.
- migration_backup_dir/logs/WASPreUpgrade.time_stamp.log
- migration_backup_dir/logs/WASPostUpgrade.time_stamp.log
- app_server_root/logs/clientupgrade.time_stamp.log
If MIGR0286E: The migration failed to complete is displayed, attempt to correct any problems based on the error messages that appear in the log file. After correcting any errors, rerun the command from the bin directory of the product installation root.
- Open the service log of the server hosting the resource we are trying to access, and browse error and warning messages.
- With WAS running, run the dumpNameSpace command and pipe, redirect, or "more" the output so that it can be easily viewed.
This command results in a display of all objects in WAS's namespace, including the directory path and object name.
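For example, on Linux or UNIX systems we can page through or capture the output as follows (a sketch; the profile location is an example):
profile_root/bin/dumpNameSpace.sh | more
profile_root/bin/dumpNameSpace.sh > /tmp/namespace.out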
- If the object that a client needs to access does not appear, use the administrative console to verify the following conditions.
- The server hosting the target resource is started.
- The web module or enterprise Java bean container hosting the target resource is running.
- The JNDI name of the target resource is properly specified.
- Analyze the trace data from the migration tools or forward the data to the appropriate organization for analysis.
When we use the WASPreUpgrade command or the WASPostUpgrade command, we can specify the following parameters for tracing:
- -traceString
- Optional. The value trace_spec specifies the trace information to collect.
- Specify "*=all=enabled" (with quotation marks) to gather all possible trace information.
This produces a very large trace file; for example, it can exceed 1 GB for the WASPostUpgrade command.
- Specify "Migration.*=all" to gather only the migration information
- Specify "Migration.Flow=all:Migration.*=finer" to gather most of the migration information.
- Specify "Migration.Flow=finer:Migration.*=fine" to gather the minimum amount of migration data needed by support teams.
This is the default.
- -traceFile
- Optional. The value file_name specifies the name of the output file for trace information.
If we do not specify the -traceString or -traceFile parameters, the command creates a trace file by default and places it in the backup_directory/logs directory.
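For example, a WASPreUpgrade invocation that names the trace specification and output file explicitly might look like the following sketch; the backup and installation paths are examples only:
./WASPreUpgrade.sh /opt/migration_backup /opt/WebSphere61/AppServer -traceString "Migration.Flow=finer:Migration.*=fine" -traceFile /opt/migration_backup/logs/WASPreUpgrade.trace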
For current information from IBM Support on known problems and their resolutions, read the IBM Support page. IBM Support also has documents that can save you time gathering the information needed to resolve a problem; read them before opening a PMR.
- During the migration process, problems might occur while we are using the WASPreUpgrade tool or the WASPostUpgrade tool.
- Problems can occur when we are using the WASPreUpgrade tool.
- A "Not found" or "No such file or directory" message is returned.
This problem can occur if we are trying to run the WASPreUpgrade tool from a directory other than the v9.0 app_server_root\bin. Verify that the WASPreUpgrade script resides in the v9.0 app_server_root\bin directory, and launch the file from that location.
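For example, on a Linux system (a sketch; both paths are examples):
cd /opt/IBM/WebSphere/AppServer/bin
./WASPreUpgrade.sh /opt/migration_backup /opt/WebSphere61/AppServer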
- The DB2 JDBC driver and DB2 JDBC driver (XA) cannot be found in the drop-down list of supported JDBC providers in the administrative console.
The administrative console no longer displays deprecated JDBC provider names. The new JDBC provider names used in the administrative console are more descriptive and less confusing. The new providers will differ only by name from the deprecated ones.
The deprecated names will continue to exist in the jdbc-resource-provider-templates.xml file for migration reasons (for existing JACL scripts for example); however, we are encouraged to use the new JDBC provider names in your JACL scripts.
- You receive the following message:
MIGR0108E: The specified WebSphere directory does not contain a WebSphere version that can be upgraded.
The following possible reasons for this error exist:
- If WAS v7.0 or later is installed, we might not have run the WASPreUpgrade tool from the bin directory of the v9.0 installation root.
- Look for something like the following message to display when the WASPreUpgrade tool runs: IBM WAS, Release 6.x.
This message indicates that we are running a migration utility from a previous release, not the v9.0 migration utility.
- Alter the environment path or change the current directory so that we can launch the v9.0 WASPreUpgrade tool.
- An invalid directory might have been specified when launching the WASPreUpgrade tool.
- The WASPreUpgrade tool might exit without backing up the previous environment.
The tool might appear to run successfully as in the following example:
MIGR0201I: The migration function initialized log file WASPreUpgrade.log.
MIGR0300I: The migration function is starting to save the existing Application Server environment.
MIGR0302I: The existing files are being saved.
MIGR0303I: The existing Application Server environment is saved.
MIGR0420I: The first step of migration completed successfully.
We might also see a message similar to the following example in the migration trace file:
[10/9/08 18:26:40:363 CDT] 00000000 Save 1 Skipped instance dmgr01 because user root /opt/migration_backup/profiles/dmgr01 does not exist.
The WASPreUpgrade tool writes out a copy of a profileList.ser file containing pointers to the backup directory to be used by the WASPostUpgrade tool. If that file is not subsequently deleted by migration for any reason, the old paths are used instead of the real paths when running the WASPreUpgrade tool in later migrations. To resolve this issue, we can safely delete the profileList.ser file and rerun the WASPreUpgrade tool.
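For example, the cleanup might look like the following sketch, assuming the profileList.ser file was written to the root of the migration backup directory; both paths are placeholders:
rm migration_backup_dir/profileList.ser
./WASPreUpgrade.sh migration_backup_dir /opt/WebSphere61/AppServer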
- When migrating a v6.1 federated node to v9.0, the WASPreUpgrade command might fail. We might receive an error similar to the following example:
[07/16/2011 11:07:10:357 CDT] MIGR0344I: Processing configuration file /opt/WAS61fep/profiles/v6109node74_01/config/cells/ndcell/clusters/Station1EJBCluster/resources.xml.
[07/16/2011 11:07:10:436 CDT] org.eclipse.emf.ecore.resource.Resource$IOWrappedException: Unresolved reference 'DataSource_1310769433958'. (file:/opt/WAS61fep/profiles/v6109node74_01/config/cells/ndcell/clusters/Station1EJBCluster/resources.xml, 9, 323)
java.lang.Exception: org.eclipse.emf.ecore.resource.Resource$IOWrappedException: Unresolved reference 'DataSource_1310769433958'. (file:/opt/WAS61fep/profiles/v6109node74_01/config/cells/ndcell/clusters/Station1EJBCluster/resources.xml, 9, 323)
at com.ibm.wsspi.migration.document.wccm.WCCMDocument.setInputStream(WCCMDocument.java:162)
We might encounter this problem on a WebSphere v6.1 node when a DB2 database using the IBM JCC Provider Driver has been created, and the WebSphere v6.1 node is synchronized to the v9.0 deployment manager. The v6.1 node does not support the v7.0 or later driver level. The node synchronization process fails to remove all of the driver definitions.
To resolve this problem, back up any resources.xml files that are to be modified. Stop the v6.1 node agent process. Edit the WebSphere v6.1 node resources.xml files and remove the orphaned resources.jdbc:CMPConnectorFactory entries before running the WASPreUpgrade command. Do not edit the deployment manager copy.
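For example, the following sketch backs up one affected file and locates the orphaned entries to remove; the path is taken from the error message above:
cd /opt/WAS61fep/profiles/v6109node74_01/config/cells/ndcell/clusters/Station1EJBCluster
cp resources.xml resources.xml.bak
grep -n "CMPConnectorFactory" resources.xml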
- Problems can occur when we are using the WASPostUpgrade tool.
- We might see an exception in the WASPostUpgrade logs after migrating a federated node that is similar to the exception that is highlighted in the following text:
MIGR0304I: The previous WebSphere environment is being restored.
MIGR0367I: Backing up the current Application Server environment.
CEIMI0006I Starting the migration of Common Event Infrastructure.
MIGR0486I: The Transports setting in file server.xml is deprecated.
MIGR0486I: The PMIService:initialSpecLevel setting in file server.xml is deprecated.
MIGR0486I: The PMIService:initialSpecLevel setting in file server.xml is deprecated.
MIGR0404W: Do not use the node agent in the old configuration. It has been disabled.
MIGR0351I: The migration function is attempting to synchronize with the deployment
manager using the SOAP protocol.
MIGR0241I: Output of syncNode.
ADMU0116I: Tool information is being logged in file
/usr/WAS80/profiles/AppSrv01/logs/syncNode.log
ADMU0128I: Starting tool with the AppSrv01 profile
ADMU0401I: Begin syncNode operation for node aaixae15aNode01 with Deployment
Manager packppc.rtp.raleigh.ibm.com:8879
ADMU0016I: Synchronizing configuration between node and cell.
AWXJR0006E The file, /usr/WAS80/java/jre/PdPerm.properties, was not found.
ArchiveUtil.toLocalURLs
ArchiveUtil.toLocalURLs
ArchiveUtil.toLocalURLs
ADMU0402I: The configuration for node aaixae15aNode01 has been synchronized
with Deployment Manager packppc.rtp.raleigh.ibm.com:8879
MIGR0352I: The synchronization with the deployment manager is successful.
CEIMI0007I The Common Event Infrastructure migration is complete.
MIGR0307I: The restoration of the previous Application Server environment is complete.
MIGR0271W: Migration completed successfully, with one or more warnings.
This exception (the AWXJR0006E message about the missing PdPerm.properties file) occurs during the syncNode operation, and it is listed as an error, but it does not cause any failures. The overall action completes successfully, and the message does not recur. After the server on the migrated federated node is started, the file in question is regenerated. We can ignore this message.
- A "Not found" or "No such file or directory" message is returned.
This problem can occur if we are trying to run the WASPostUpgrade tool from a directory other than the v9.0 app_server_root\bin. Verify that the WASPostUpgrade script resides in the v9.0 app_server_root\bin directory, and launch the file from that location.
- You receive the following message:
MIGR0102E: Invalid Command Line. MIGR0105E: Specify the primary node name.
The most likely cause of this error is that WAS v7.0 or later is installed and the WASPostUpgrade tool was not run from the bin directory of the v9.0 installation root.
To correct this problem, run the WASPostUpgrade command from the bin directory of the v9.0 installation root.
- When we migrate the federated nodes in a cell, we receive the following error messages:
MIGR0304I: The previous WebSphere environment is being restored.
com.ibm.websphere.management.exception.RepositoryException:
com.ibm.websphere.management.exception.ConnectorException: ADMC0009E:
The system failed to make the SOAP RPC call: invoke
MIGR0286E: The migration failed to complete.
A connection timeout occurs when the federated node tries to retrieve configuration updates from the deployment manager during the WASPostUpgrade migration step for the federated node. Copying the entire configuration might take longer than the connection timeout allows if the configuration that we are migrating to v9.0 contains any of the following elements:
- Many small applications
- A few large applications
- One very large application
The best practice is to modify the timeout value before running the WASPostUpgrade command to migrate a federated node.
- Go to the following location in the v9.0 directory for the profile to which we are migrating the federated node:
profile_root/properties
- Open the soap.client.props file in that directory and find the value for the com.ibm.SOAP.requestTimeout property. This is the timeout value in seconds. The default is 180 seconds.
- Change the value of com.ibm.SOAP.requestTimeout to make it large enough to migrate the configuration. For example, the following entry would give you a timeout value of half an hour:
com.ibm.SOAP.requestTimeout=1800
Select the smallest timeout value that will meet our needs. Be prepared to wait for at least three times the timeout that we select: once to download files to the backup directory, once to upload the migrated files to the deployment manager, and once to synchronize the deployment manager with the migrated node agent.
- Go to the following location in the backup directory created by the WASPreUpgrade command:
backupDirectory/profiles/profile/properties
- Open the soap.client.props file in that directory and find the value for the com.ibm.SOAP.requestTimeout property. Change the value of com.ibm.SOAP.requestTimeout to the same value that we used in the v9.0 file.
Alternatively, we might want to consider a solution in which we specify -includeApps script in the WASPostUpgrade command when we migrate the deployment manager to v9.0 if one or both of the following are true for our situation:
- You want to quickly migrate all nodes in the cell. After the entire cell is migrated, however, we are willing to manually run the application installation script for every application in the deployment manager backup directory and then synchronize the configuration with all migrated nodes.
- We are able to run without any applications installed.
Follow these steps to perform this alternative procedure:
- Specify -includeApps script in the WASPostUpgrade command when we migrate the deployment manager to v9.0 (see the example command after these steps).
- Migrate the entire cell to v9.0 before installing any applications.
- Run the wsadmin command to install each application.
- Install the applications in the v9.0 configuration during normal operations or in applicable maintenance windows.
- Specify -conntype NONE. For example:
wsadmin -f application_script -conntype NONE
- Synchronize the configuration with all of the migrated nodes.
Read Migrate a large WAS ND configuration with a large number of applications for more information on this alternative procedure.
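For example, the deployment manager migration command in the first step of this alternative procedure might look like the following sketch; the backup directory and profile names are examples only:
./WASPostUpgrade.sh /opt/migration_backup -oldProfile Dmgr01 -profileName Dmgr01 -includeApps script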
You receive the "Unable to copy document to temp file" error message. Here is an example: ul>MIGR0304I: The previous WebSphere environment is being restored. com.ibm.websphere.management.exception.DocumentIOException: Unable to copy document to temp file: cells/sunblade1Network/applications/LARGEApp.ear/LARGEApp.ear
The file system might be full. Clear some space and rerun the WASPostUpgrade command.
- You receive the following message:
MIGR0108E: The specified WebSphere directory does not contain a WebSphere version that can be upgraded.
The following possible reasons for this error exist:
- If WAS v6.1 is installed, we might not have run the WASPostUpgrade tool from the bin directory of the v9.0 installation root.
- Look for something like the following message to display when the WASPostUpgrade tool runs: IBM WAS, Release 6.1.
This message indicates that we are running a migration utility from a previous release, not the v9.0 migration utility.
- Alter the environment path or change the current directory so that we can launch the v9.0 WASPostUpgrade tool.
- An invalid directory might have been specified when launching the WASPreUpgrade tool or the WASPostUpgrade tool.
- The WASPreUpgrade tool was not run.
- You receive the following error message:
MIGR0253E: The backup directory migration_backup_directory does not exist.
The following possible reasons for this error exist:
- The WASPreUpgrade tool was not run before the WASPostUpgrade tool.
- Check to see if the backup directory specified in the error message exists.
- If not, run the WASPreUpgrade tool.
Read WASPreUpgrade command for more information.
- Retry the WASPostUpgrade tool.
- An invalid backup directory might be specified.
For example, the directory might have been a subdirectory of the v7.0 or later tree that was deleted after the WASPreUpgrade tool was run and the older version of the product was uninstalled but before the WASPostUpgrade tool was run.
- Determine whether or not the full directory structure specified in the error message exists.
- If possible, rerun the WASPreUpgrade tool, specifying the correct full migration backup directory.
- If the backup directory does not exist and the older version it came from is gone, rebuild the older version from a backup repository or XML configuration file.
- Rerun the WASPreUpgrade tool.
- We decide to run WASPreUpgrade again after we have already run the WASPostUpgrade command. During the course of a deployment manager or a federated node migration, WASPostUpgrade might disable the old environment. If after running WASPostUpgrade we want to run WASPreUpgrade again against the old installation, run the migrationDisablementReversal.jacl script located in the old app_server_root/bin directory. After running this JACL script, our v7.0 or later environment will be in a valid state again, allowing us to run WASPreUpgrade to produce valid results.
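Because this is a wsadmin script, a typical invocation is a sketch like the following, run from the old app_server_root/bin directory; running in local mode with -conntype NONE is an assumption that matches the other jacl invocations in this topic:
./wsadmin.sh -f migrationDisablementReversal.jacl -conntype NONE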
- A federated migration fails with message MIGR0405E. The migration that has taken place on the deployment manager as part of our federated migration has failed. For a more detailed reason why this error occurred, open the folder your_node_migration_temp located on the deployment manager node under the ...DeploymentManagerProfile/temp directory. For example:
/websphere80/appserver/profiles/dm_profile/temp/nodeX_migration_temp
The logs and everything else involved in the migration for this node on the deployment manager node are located in this folder. This folder will also be required for IBM support related to this scenario.
- v9.0 applications are lost during migration. If any of the v9.0 applications fail to install during a federated migration, they are lost during the synchronization of the configurations. This happens because one of the final steps of WASPostUpgrade is to run a syncNode command, which downloads the configuration from the deployment manager node and overwrites the configuration on the federated node. If the applications fail to install, they are not in the configuration located on the deployment manager node. To resolve this issue, manually install the applications after migration. If they are standard v9.0 applications, they are located in the app_server_root/installableApps directory.
To manually install an application that was lost during migration, use the wsadmin command to run the install_application_name.jacl script that the migration tools created in the backup directory.
In a Linux environment, for example, use the following parameters:
./wsadmin.sh -f migration_backup_directory/install_application_name.jacl -conntype NONE
- v9.0 applications fail to install. Manually install the applications using the wsadmin command after WASPostUpgrade has completed.
To manually install an application that failed to install during migration, use the wsadmin command to run the install_application_name.jacl script that the migration tools created in the backup directory.
In a Linux environment, for example, use the following parameters:
./wsadmin.sh -f migration_backup_directory/install_application_name.jacl -conntype NONE
Read WASPostUpgrade command for more information.
- The trace file exceeds its 400-megabyte allocation, but WASPostUpgrade is still running. If additional disk space is not available, the migration fails. If we think we might encounter this problem during our migration:
- Stop the Migration wizard before the WASPostUpgrade command is issued.
- Run the WASPostUpgrade command from the command line for each profile we are migrating.
When we run the WASPostUpgrade command from the command line:
- Include the -oldProfile and -profileName parameters to indicate the profile we want to migrate.
- Add the com.ibm.ejs.ras.TraceNLS* parameter to the trace string to reduce the size of our trace log. For example, we might want to specify the following trace setting:
com.ibm.ejs.ras.TraceNLS*=info
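Putting these parameters together, a command-line invocation for one profile might look like the following sketch; the backup directory and profile names are examples, and the trace string appends the TraceNLS setting to the default migration specification:
./WASPostUpgrade.sh /opt/migration_backup -oldProfile AppSrv01 -profileName AppSrv01 -traceString "Migration.Flow=finer:Migration.*=fine:com.ibm.ejs.ras.TraceNLS*=info"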
- (Solaris) When we use the Migration wizard to migrate a profile from WAS v6.0.2 to v9.0 on a Solaris x64 processor-based system, the migration might fail during the WASPostUpgrade step. We might see messages similar to the following in migration_backup_dir/logs/WASPostUpgrade.time_stamp.log:
MIGR0327E: A failure occurred with stopNode.
MIGR0272E: The migration function cannot complete the command.
WAS v6.0.2 uses a Java Virtual Machine (JVM) in 32-bit mode. The Migration wizard for v9.0 calls the WASPostUpgrade.sh script, which attempts to run the JVM for v6.0.2 in the 64-bit mode when the server stops the v6.0.2 node.
Remove the incomplete profile and enable WAS to correctly migrate the v6.0.2 profile:
- From a command line, change to app_server_root/bin.
For example, type the following command:
cd /opt/IBM/WebSphere/AppServer/bin
- Locate the WASPostUpgrade.sh script in the app_server_root/bin directory, and make a backup copy.
- Open the WASPostUpgrade.sh script in an editor, and perform the following actions:
- Locate the following line of code:
. "$binDir" /setupCmdLine.sh
- Insert the following line of code after the code identified in the previous step:
JVM_EXTRA_CMD_ARGS=""
- Save the changes.
- Delete the incomplete v9.0 profile created during the migration process:
app_server_root/bin/manageprofiles.sh -delete -profileName profile
- Delete the profile_root directory of the v9.0 profile that was removed in the previous step.
- Rerun the Migration wizard.
- If we select the option for the migration process to install the enterprise applications that exist in the v7.0 or later configuration into the new v9.0 configuration, we might encounter some error messages during the application-installation phase of migration. The applications that exist in the v7.0 or later configuration might have incorrect deployment information; typically, these are invalid XML documents that were not validated sufficiently in previous WAS runtimes. The runtime now has an improved application-installation validation process and fails to install these malformed EAR files. This results in a failure during the application-installation phase of WASPostUpgrade and produces an "E" error message. This is considered a "fatal" migration error.
If migration fails in this way during application installation, we can do one of the following:
- Fix the problems in the v7.0 or later applications, and then remigrate.
- Proceed with the migration and ignore these errors.
In this case, the migration process does not install the failing applications but does complete all of the other migration steps.
Later, we can fix the problems in the applications and then manually install them in the new v9.0 configuration using the administrative console or an install script.
- After migrating a federated node to v9.0, the application server might not start. When we try to start the application server, we might see errors similar to those in the following example:
[5/11/06 15:41:23:190 CDT] 0000000a SystemErr R com.ibm.ws.exception.RuntimeError: com.ibm.ws.exception.RuntimeError: org.omg.CORBA.INTERNAL: CREATE_LISTENER_FAILED_4 vmcid: 0x49421000 minor code: 56 completed: No
[5/11/06 15:41:23:196 CDT] 0000000a SystemErr R at com.ibm.ws.runtime.WsServerImpl.bootServerContainer(WsServerImpl.java:198)
[5/11/06 15:41:23:196 CDT] 0000000a SystemErr R at com.ibm.ws.runtime.WsServerImpl.start(WsServerImpl.java:139)
[5/11/06 15:41:23:196 CDT] 0000000a SystemErr R at com.ibm.ws.runtime.WsServerImpl.main(WsServerImpl.java:460)
[5/11/06 15:41:23:196 CDT] 0000000a SystemErr R at com.ibm.ws.runtime.WsServer.main(WsServer.java:59)
[5/11/06 15:41:23:196 CDT] 0000000a SystemErr R at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[5/11/06 15:41:23:196 CDT] 0000000a SystemErr R at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
[5/11/06 15:41:23:197 CDT] 0000000a SystemErr R at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Change the port number at which the federated node's application server is listening. If the deployment manager is listening at port 9101 for ORB_LISTENER_ADDRESS, for example, the application server of the federated node should not be listening at port 9101 for its ORB_LISTENER_ADDRESS. To resolve the problem in this example:
- From the administrative console, click Application servers > server > Ports > ORB_LISTENER_ADDRESS.
- Change the ORB_LISTENER_ADDRESS port number to one that is not used; a quick way to check a candidate port follows this list.
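Before selecting the new port number, we can verify that it is free on the node; for example, port 9105 here is an arbitrary candidate:
netstat -an | grep 9105
If the command returns no output, no process is listening on that port.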
- If synchronization fails when we migrate a federated node to v9.0, the application server might not start. We might receive messages similar to the following when we migrate a federated node to v9.0:
ADMU0016I: Synchronizing configuration between node and cell.
ADMU0111E: Program exiting with error:
com.ibm.websphere.management.exception.AdminException: ADMU0005E:
Error synchronizing repositories
ADMU0211I: Error details may be seen in the file:
/opt/WebSphere/80AppServer/profiles/AppSrv02/logs/syncNode.log
MIGR0350W: Synchronization with the deployment manager using the SOAP protocol failed.
MIGR0307I: The restoration of the previous WAS environment is complete.
MIGR0271W: Migration completed successfully, with one or more warnings.
These messages indicate the following:
- Your deployment manager is at a v9.0 configuration level.
- The federated node we are trying to migrate is at a v9.0 configuration level on the deployment manager's repository (including applications).
- The federated node itself is not quite complete given that we did not complete the syncNode operation.
Perform the following actions to resolve this issue:
- Rerun the syncNode command on the node to synchronize it with the deployment manager.
- Run the GenPluginCfg command. (Sample invocations of both commands follow this list.)
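For example, from the bin directory of the migrated node's profile (host name, port, and credentials are examples only):
./syncNode.sh dmgr_hostname 8879 -username admin_user -password admin_password
./GenPluginCfg.sh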
- If we migrate a deployment manager to v9.0 from a v6.1 configuration that was migrated from a v5.1 deployment manager, the syncNode command might fail on any v5.1 federated nodes in the cell. For example, we might see messages similar to the following text when running the syncNode command on a v5.1 node:
bash-3.00# ./syncNode.sh dmgrhostname 8879 -username MyAdminUser -password MyAdminPassword
ADMU0116I: Tool information is being logged in file /usr/WebSphere/AppServer/logs/syncNode.log
ADMU0401I: Begin syncNode operation for node My511Node with Deployment Manager dmgrhostname:8879
ADMU0111E: Program exiting with error: com.ibm.websphere.management.exception.AdminException:
ADMU2092E: The node and Deployment Manager must have the same product extensions, but they do not match. The node product extension is BASE and the Deployment Manager product extension is PME.
ADMU0211I: Error details may be seen in the file: /usr/WebSphere/AppServer/logs/syncNode.log
ADMU1211I: To obtain a full trace of the failure, use the -trace option.
- Because of the inclusion of the javax.ejb.Remote annotation in the EJB 3.0 specification, certain EJB 2.1 beans might fail to compile if Enterprise Java Beans are written to import the entire javax.ejb and java.rmi packages. Compilation errors similar to those in the following example might occur:
ejbModule/com/ibm/websphere/samples/trade/ejb/QuoteHome.java(17): The type Remote is ambiguous
Both packages contain a type named Remote, so the unqualified name can no longer be resolved. To correct this problem, replace the wildcard imports with explicit imports for the classes that the bean actually uses (for example, import java.rmi.Remote).
- When we install WAS v6.1 and federate a node to a v9.0 deployment manager, we might experience unexpected and continuous security exception messages. The SystemOut.log files of the node agent contain the following exceptions:
[7/8/08 16:41:31:416 EDT] 0000001c DefaultTokenP E HMGR0149E: An attempt to open a connection to core group DefaultCoreGroup has been rejected. The sending process has a name of wasinst101Cell01\ndrack104Node08\server1 and an IP address of /9.42.92.86. Global security in the local process is Enabled. Global security in the sending process is Enabled. The received token starts with x2>W 9 Sv?. The exception is com.ibm.websphere.security.auth.WSLoginFailedException: Validation of LTPA token failed due to invalid keys or token type.
at com.ibm.ws.security.ltpa.LTPAServerObject.validateToken(LTPAServerObject.java:876)
at com.ibm.ws.security.token.WSCredentialTokenMapper.validateLTPAToken(WSCredentialTokenMapper.java:1178)
at com.ibm.ws.hamanager.runtime.DefaultTokenProvider.authenticateMember(DefaultTokenProvider.java:214)
at com.ibm.ws.hamanager.coordinator.impl.DCSPluginImpl.authenticateMember(DCSPluginImpl.java:723)
at com.ibm.ws.dcs.vri.transportAdapter.rmmImpl.ptpDiscovery.DiscoveryRcv.acceptStream(DiscoveryRcv.java:266)
at com.ibm.rmm.ptl.tchan.receiver.PacketProcessor.fetchStream(PacketProcessor.java:470)
at com.ibm.rmm.ptl.tchan.receiver.PacketProcessor.run(PacketProcessor.java:917)
The deployment manager uses v9.0 and all of the nodes and alias nodes are using v6.1. To resolve this problem, upgrade all v6.1 nodes to v6.1.0.17 or later.
- New ports that are registered on a migrated v9.0 node agent include WC_defaulthost, WC_defaulthost_secure, WC_adminhost, WC_adminhost_secure, SIB_ENDPOINT_ADDRESS, SIB_ENDPOINT_SECURE_ADDRESS, SIB_MQ_ENDPOINT_ADDRESS, and SIB_MQ_ENDPOINT_SECURE_ADDRESS. These ports are not needed by the node agent and can be safely deleted.
What to do next
If we did not find our problem listed, contact IBM support.