Clustering and WebSphere Portal
Contents
- Overview
- Guidelines
- Preliminary information and procedures
- Install Network Deployment
- Install WebSphere Portal
- Set up the cluster
- Configure for HTTP session failover
- Cleaning up Network Deployment after a failed deployment
- Configure search in a cluster
- Using an external security manager with the cluster
Overview
Clusters are sets of servers that are managed together and participate in workload management.
In WAS, a cluster is composed of multiple identical copies of an appserver. A cluster member is a single application server in the cluster. WebSphere Portal is installed as an enterprise application server within the WAS infrastructure. WebSphere Portal can be installed directly on a federated WAS node.
All of the clustering features available within the WAS infrastructure are also available and apply to WebSphere Portal. Thus, a WebSphere Portal cluster is simply a collection of multiple WebSphere Portal servers that are identically configured.
WebSphere Portal configuration tasks are cell aware, in that the tasks can determine whether the node is federated or standalone, and then act accordingly.
WebSphere Portal nodes running on different operating systems are supported in the same cluster, enabling you to install WebSphere Portal using different install paths if you so choose.
The activate-portlets task can be used to activate the WebSphere Portal portlets across all cluster members at one time.
Guidelines
- Apply appropriate interim fixes to each node.
- The deployment manager node must be installed separately before the cells and clusters can be configured.
- Because of the size of WebSphere Portal and the number of enterprise applications it contains, update the addNode and removeNode script or batch files to increase the maximum heap size for the java command that these scripts invoke.
Edit...
$WAS_HOME/bin/addNode.sh
$WAS_HOME/bin/removeNode.sh
...and change the maximum heap size to at least 512 MB by adding the -Xmx512m option to the java command.
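For example, if the script contains a line of the form (an illustrative sketch; the exact java command line varies by platform and release)...
java existing_options existing_arguments
...change it to...
java -Xmx512m existing_options existing_arguments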
If the addNode command fails due to an OutOfMemoryError exception, you can remove the node from the cell by using the -force option on the removeNode command. The -force option is required in this case because the node is only partially federated.
- If required, configure database session persistence.
- If you add a node to a cell or change a node's configuration after it has been federated to the deployment manager, synchronize the node's configuration...
Administrative console | System Administration | Nodes | nodename | Full Resynchronize
This helps ensure that there is no mismatch between the node and cell configurations after the synchronization is complete.
- Network Deployment must be configured with the same security settings as WebSphere Portal before adding a secured WebSphere Portal node to the cell.
- If you are planning to configure an external security manager to perform authentication or authorization for WebSphere Portal in a cluster environment, install and configure the WebSphere Portal cluster first. Verify that the WebSphere Portal cluster is working properly before proceeding with the configuration of any external security managers.
- If you intend to enable security after creating the cluster by using the enable-security-ldap, enable-security-wmmur-ldap, or enable-security-wmmur-db tasks, stop all cluster members before doing so.
- The Lotus Workplace Web Content Management function included with WebSphere Portal does not support vertical scaling in a cluster environment.
- Install and configure WebSphere Portal separately on each node. In addition, there can only be one full WebSphere Portal installation per cell. That is, there cannot be more than one WebSphere Portal EAR in the cell at any given time.
- Cloudscape, the default database installed with WebSphere Portal, cannot be operated remotely and therefore cannot be used in a cluster environment. When installing WebSphere Portal, you will first install to Cloudscape, then reconfigure WebSphere Portal to point to the common database used for the cluster.
- The global settings portlet modifies configuration properties of the local WebSphere Portal node. These changes are not reflected on every node in the cluster unless the affected property is copied to the appropriate location on each node. The portlet's view displays the node name, so you can identify which server is affected by a change made through this portlet.
- To support search in a clustered environment, install and configure search for remote search service on a WAS node that is not part of the WebSphere Portal cluster.
- Uninstallation of the primary node, when there are secondary nodes in the cluster, is not supported and should not be attempted.
- WebSphere Portal administrative actions are immediately visible for the user who performs them. However, another user can be assured of seeing the changes only if the user logs out of WebSphere Portal and then logs back in. This limitation applies to both cluster and non-cluster environments.
Preliminary information and procedures
The cluster environment setup in the following sections is described according to the following topology:
- DM01: Network Deployment node
- WP01 on NODE01: First WebSphere Portal in the cell
- WP02 on NODE02: Second WebSphere Portal in the cell (horizontal scaling only)
You will install each WebSphere Portal instance on a dedicated WAS instance, bring it under deployment manager control if it is not already federated into a cell, and finally add it to the cluster after it is completely configured.
The instructions in this topic can be used to set up a cluster in either a horizontal or vertical scaling topology. If you are installing WebSphere Portal in a vertical cluster, you can disregard the instructions for installing node WP02 and proceed to the creation of the cluster after completing the installation of node WP01. Where appropriate, differences between the steps required for horizontal scaling and vertical scaling are noted.
Install Network Deployment
WAS Network Deployment must be installed to the same level as the WAS levels used in the cluster. Therefore, Network Deployment and all WAS nodes must be at the same product revision level and have the same corrective service levels applied.
Network Deployment must be installed before WAS Base, if both products are installed on the same machine. Therefore, if you intend to install Network Deployment and WebSphere Portal on the same machine, install Network Deployment before installing WebSphere Portal, because the WebSphere Portal installation also automatically installs WAS Base.
It is recommended that you choose a dedicated system to serve as the Network Deployment machine.
Install Network Deployment on Windows/UNIX
- Locate the WAS Network Deployment disc for your operating system and install the V5.1 Network Deployment support by invoking...
cd_root/operating_system/install
- Update the Network Deployment product by installing WebSphere Business Integration Server Foundation on the Network Deployment machine. Locate the disc with the WebSphere Business Integration Server Foundation V5.1 installation and install it over Network Deployment by invoking...
cd_root/operating_system/install
WebSphere Business Integration Server Foundation 5.1 installation will automatically detect that the Network Deployment installation exists on this system and will give you the option to extend Network Deployment.
- Update the V5.1 Network Deployment installation to the first fix pack level (V5.1.1). Copy the files located in the WAS Network Deployment Fix pack disc's was51nd_fp1/operating_system directory to the nd_root/update directory, where operating_system corresponds to the appropriate operating system. You should then invoke the updateWizard command from the nd_root/update directory...
updateWizard.sh
More detailed instructions are located in the nd_root/update/doc directory that you copied from the disc.
- Update the V5.1 WebSphere Business Integration Server Foundation installation to the first fix pack level (V5.1.1). Copy the files located in the WAS Fix pack 1 disc's wbisf51_fp1/operating_system directory to the nd_root/update directory, where operating_system corresponds to the appropriate operating system. You should then invoke the updateWizard command from the nd_root/update directory...
updateWizard.sh
More detailed instructions are located in the nd_root/update/doc directory that you copied from the disc.
- Update the V5.1.1 Network Deployment installation to the first cumulative fix pack level (V5.1.1.1). Copy the files located in the WAS Network Deployment Fix pack disc's was511nd_cf1/operating_system directory to the nd_root/update directory, where operating_system corresponds to the appropriate operating system. You should then invoke the updateWizard command from the nd_root/update directory
updateWizard.sh
More detailed instructions are located in the nd_root/update/doc directory that you copied from the disc.
- Ensure Deployment Manager is operating properly. On the Network Deployment node, start the deployment manager by running the startManager script...
$WAS_HOME/DeploymentManager/bin/startManager.sh
After the deployment manager starts, launch the administrative console by pointing your browser to...
http://DM01:9090/admin
...where DM01 is the name of the Network Deployment node.
- If you plan to add a secured WebSphere Portal node to the deployment manager cell, manually configure the deployment manager security settings to match the node security before adding the node to the cell.
Install WebSphere Portal
When installing WebSphere Portal into a cluster environment, you can approach the installation process in two ways:
- Install WebSphere Portal on a federated WAS node.
- Install WebSphere Portal on a separate WAS node that is not managed by a deployment manager.
Using the addNode command
After the addNode command completes successfully, the cell name of the configuration on the node that you added is changed from the base cell name to the deployment manager cell name.
To add a node to the deployment manager cell, enter the addNode command on one line on the server system to be added:
$WAS_HOME/bin/addNode.sh deployment_manager_host \
deployment_manager_port \
-username admin_user_id \
-password admin_password \
[-includeapps] \
[-trace]
The deployment_manager_port is the port of the Deployment Manager SOAP connector address. The default value is 8879.
Parameters:
-includeapps
- Optional. Use this parameter only if there are enterprise applications already installed on this node that need to be added to the deployment manager's master configuration. If you do not specify this flag, any appservers that are defined on this node will be included in the deployment manager's configuration, but they might not be functional without their corresponding enterprise applications. If an application already exists in the cell, a warning is generated and the application is not installed into the cell.
-trace
- Optional. Generates trace information, which is stored in the addNode.log file, for debugging purposes.
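For example, to federate a node into the sample topology's cell using the default SOAP port (wpsadmin and its password are hypothetical administrative credentials):
$WAS_HOME/bin/addNode.sh DM01 8879 -username wpsadmin -password wpspassword -includeapps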
Approach 1. Install WebSphere Portal on a federated WAS node
This section describes how to federate two separate WAS nodes into a cell that is managed by the deployment manager on the Network Deployment system (DM01) and then install an instance of WebSphere Portal (WP01 and WP02) on each of those nodes.
When you remove a node from a managed cell, the appserver on that node reverts to the state it had at the time it was originally federated. Consequently, if you install WebSphere Portal on a node that has already been federated into a cell and then later remove that node from the cell, WebSphere Portal will not be usable on the resulting stand-alone server.
Federating the WAS nodes
- Prerequisites for federating the WAS nodes (NODE01 and NODE02):
- WAS must be installed and operational on each node.
- Network Deployment must be installed on DM01 to the same release level as the instances of WAS.
- If security is enabled, the same security settings must be specified for the Network Deployment system and for each WAS node.
- Add NODE01 to the deployment manager (DM01) cell by entering the addNode command.
Specify the -includeapps parameter only if there are enterprise applications already installed on this node that need to be added to the deployment manager's master configuration.
- Repeat the previous step for NODE02 if you are setting up a horizontal cluster. This step is not required if you are using vertical scaling.
- If any appservers and enterprise applications were already installed before you added the node to the cell, verify that they still function properly.
Install WP01
- Prerequisites for installation of first WebSphere Portal instance (WP01):
- The WAS nodes must be installed with any fix packs or interim fixes required by WebSphere Portal. The nodes must also be operational and federated into the deployment manager cell.
- Network Deployment must be installed on DM01 to the same release level as the instances of WAS.
- The remote database server must be installed and operational.
- Install WebSphere Portal.
- If you install on a WAS node that has security enabled, ensure that you complete all manual steps specified to complete the security configuration.
- The installation program detects that you are installing to a federated node and displays a panel that enables you to specify that this is the primary node in the cell.
- Select Primary Node to indicate that this is the primary instance of WebSphere Portal in the cell. This will cause the installation program to update the deployment manager's master configuration with the server's enterprise applications.
- Ensure that all portlets deployed during installation are active.
Run the following command from the $WP_ROOT/config directory:
./WPSconfig.sh activate-portlets
- Transfer the database from Cloudscape to another database.
- Configure the node for security.
- If you installed WebSphere Portal on a WAS node with security already enabled, you should have completed the security configuration as part of the installation process. Continue to the security verification step.
- If you installed WebSphere Portal on a WAS that did not have security enabled, configure the node for security at this time.
Because this process also enables security on the deployment manager, restart the deployment manager to ensure that the security changes are activated.
If you are using a custom user registry for security, ensure that the required WMM JAR files are present on the Network Deployment machine.
- Configure the node to use an external Web server. By default, WebSphere Portal uses its own HTTP service for client requests. However, an external Web server is required to take advantage of features such as workload management.
- Once complete, the federated servers will be visible in the Servers > Application Servers view. Start and verify the operability of the WebSphere Portal instance.
- Ensure that you can use the Portal Scripting Interface with the cluster.
- Verify whether the wp.wire.jar file is present in the nd_root/lib/ext directory on the Network Deployment machine.
- If the file is not present, copy the file from the was_root/lib/ext directory on any WebSphere Portal node to the nd_root/lib/ext directory on the Network Deployment machine.
- Restart the deployment manager.
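For example, on UNIX systems the copy could be made with a command like the following (a sketch assuming remote shell access to the Network Deployment machine; user is a hypothetical login):
scp was_root/lib/ext/wp.wire.jar user@DM01:nd_root/lib/ext/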
Install WP02 (horizontal scaling only)
If you intend to use WebSphere Portal in a vertical scaling topology, you can disregard the steps in this section and skip to Set up the cluster.
- Install the second WebSphere Portal instance.
- Because you are installing on a WAS node that has security enabled, ensure that you complete all manual steps specified to complete the security configuration.
- The installation program detects that you are installing to a federated node and displays a panel that enables you to specify that this is the secondary node in the cell.
- Because this is not the first installation of WebSphere Portal in the cell, ensure that you select Secondary Node, which will cause the installation program to install the WebSphere_Portal appserver without any of its enterprise applications. After the cluster is created, this node will share the enterprise applications that were installed with WP01.
- Do not attempt to verify that WebSphere Portal is operational after installation.
Because you installed as a secondary node without any enterprise applications or portlets, the WebSphere Portal instance on WP02 is not operational at this point.
- The database must now be transferred and reconfigured from Cloudscape to the database being used by WP01.
- The WP02 datasources must be made to point to the datasources used by WP01. The wpconfig.properties file must be updated to specify which remote database will be used.
- Reconfigure WebSphere Portal to use the remote databases by running the appropriate database transfer task from the $WP_ROOT/config directory.
At this point, do not access WebSphere Portal and do not log in to WebSphere Portal as any user; doing so could result in database corruption.
- Continue to Set up the cluster.
Approach 2. Install WebSphere Portal on a separate WAS node not managed by the deployment manager
This section describes how to set up two separate WebSphere Portal instances (WP01 and WP02) on two dedicated systems and add them to the Deployment Manager (DM01).
Install WP01
- Prerequisites for installation of first WebSphere Portal instance (WP01):
- Deployment Manager must be installed on DM01 to the same release level as the instances of WAS.
- The remote database server must be installed and operational.
- Install WebSphere Portal.
If you install on a WAS node that has security enabled, ensure that you complete all manual steps specified to complete the security configuration.
- Transfer the database from Cloudscape to another database.
- Configure the node to use an external Web server.
- By default, WebSphere Portal uses its own HTTP service for client requests. However, an external Web server is required to take advantage of features such as workload management.
- On some machine configurations, WebSphere Portal components might cause a timeout exception when the node is being added to the Network Deployment cell. To eliminate the risk of this happening, increase the ConnectionIOTimeOut value for the WebSphere_Portal appserver by using the following steps:
- Open the server.xml file, located in...
$WAS_HOME/config/cells/cell/nodes/node/servers/WebSphere_Portal
- Within the opening and closing tags of each HTTP transport section...
xmi:id="HTTPTransport_<x>"
...where x is a unique number, add the following...
<properties xmi:id="Property_10" name="ConnectionIOTimeOut" value="180" required="false"/>
The following is an example of an HTTP transport section with the line added:
<transports xmi:type="applicationserver.webcontainer:HTTPTransport" xmi:id="HTTPTransport_1" sslEnabled="false">
  <address xmi:id="EndPoint_1" host="" port="13975"/>
  <properties xmi:id="Property_10" name="ConnectionIOTimeOut" value="180" required="false"/>
</transports>
- After you have added this line to each HTTP transport section, save the file.
- Restart the WebSphere_Portal appserver.
- Change the request timeout for the Simple Object Access Protocol (SOAP) client. The default, in seconds, is 180. Edit the soap.client.props file, located in...
$WAS_HOME/properties/
...and change the line to:
com.ibm.SOAP.requestTimeout=6000
- If you installed WebSphere Portal on a WAS node with security already enabled, ensure that the Network Deployment machine is configured with the same security settings as WebSphere Portal before adding the secured WebSphere Portal node to the deployment manager cell.
If you are using a custom user registry for security, ensure that the required WMM JAR files are present on the Network Deployment machine.
- Add WP01 to the deployment manager cell by entering the addNode command. Ensure that you include the -includeapps parameter.
- After federation, the WP01 node is part of the deployment manager cell.
Use a text editor to open...
$WP_ROOT/config/wpconfig.properties
...and ensure that the CellName property specifies the name of the cell to which the WebSphere Portal node belongs. The cell name can be identified by...
$WAS_HOME/config/cells/cell...directory on the node, where cell indicates the cell to which the node belongs.
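For example, if that directory on the node is $WAS_HOME/config/cells/DM01Cell (a hypothetical cell name), the property would read:
CellName=DM01Cell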
- Update the deployment manager configuration for the new WebSphere Portal.
./WPSconfig.sh post-portal-node-federation-configuration
- Regenerate the Web server plug-in.
- Go to...
Administrative console | Environment | Update Web Server Plug-in | OK
- Edit...
$ND_ROOT/config/cells/plugin-cfg.xml
...and change any directory structure occurrences specific to the Network Deployment machine to match the directory structure used on the Web server. For example, references to...
$WAS_HOME/DeploymentManager
...would be replaced with...
$WAS_HOME/AppServer
- If you are using a remote Web server, copy the updated plug-in configuration file to the Web server's plug-in configuration directory.
- Stop and start the Web server.
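For example, with IBM HTTP Server on UNIX (a sketch; the command depends on the Web server in use, and ihs_root is its installation directory):
ihs_root/bin/apachectl stop
ihs_root/bin/apachectl start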
- Edit the wpconfig.properties file, and ensure that WebSphere Portal can be accessed using the values specified for the WpsHostName and WpsHostPort properties. For detailed information on setting these values, refer to Access WebSphere Portal through another HTTP port.
- If you installed WebSphere Portal on a WAS that did not have security enabled, configure the node for security at this time.
Because this process also enables security on the deployment manager, restart the deployment manager to ensure that the security changes are activated.
If you are using a custom user registry for security, ensure that the required WMM JAR files are present on the Network Deployment machine.
- Once complete, the federated servers will be visible in the Servers > Application Servers view. Start and verify the operability of the WebSphere Portal instance.
- Ensure that you can use the Portal Scripting Interface with the cluster.
- Verify whether the wp.wire.jar file is present in the nd_root/lib/ext directory on the Network Deployment machine.
- If the file is not present, copy the file from the was_root/lib/ext directory on any WebSphere Portal node to the nd_root/lib/ext directory on the Network Deployment machine.
- Restart the deployment manager.
Install WP02 on a separate WAS node not managed by the deployment manager (horizontal scaling only)
Vertical cluster note: If you intend to use WebSphere Portal in a vertical scaling topology, you can disregard the steps in this section and skip to Set up the cluster.
The procedure for installing WP02 is nearly identical to that for installing WP01, except that WP02 is federated into the cell without including any of its applications. This is because the cell can contain only one set of these applications, and the applications have already been included when WP01 was federated.
- Install the second WebSphere Portal instance.
If you install on a WAS node that has security enabled, ensure that you complete all manual steps specified to complete the security configuration.
- Verify that WebSphere Portal is operational after installation.
- The database must now be transferred and reconfigured from Cloudscape to the database being used by WP01.
- The WP02 datasources must be made to point to the datasources used by WP01. Edit wpconfig.properties to specify which remote database will be used.
- Reconfigure WebSphere Portal to use the remote databases by running the appropriate database transfer task from the $WP_ROOT/config directory.
At this point, do not access WebSphere Portal and do not log in to WebSphere Portal as any user; doing so could result in database corruption.
- Configure the node for security.
- If you installed WebSphere Portal on a WAS node with security already enabled, you should have completed the security configuration as part of the installation process. Continue to the security verification step.
- If you installed WebSphere Portal on a WAS that did not have security enabled, configure the node for security at this time.
If you are performing the security configuration at this point, follow these steps:
- Set the DbSafeMode property to true in wpconfig.properties to prevent the configuration program from modifying the database.
- Configure WebSphere Portal for security.
- Set the DbSafeMode property to false in wpconfig.properties.
- Verify that WebSphere Portal is operational with the new security configuration.
- Change the request timeout for the Simple Object Access Protocol (SOAP) client. The default, in seconds, is 180. Within the was_home/properties/ directory, edit the soap.client.props file. Change the line to:
com.ibm.SOAP.requestTimeout=6000
- Add WP02 to the deployment manager (DM01) cell by entering the addNode command. Do not include the -includeapps parameter.
Note the following information:
- It is important not to use the -includeapps option. The applications will not be transferred as they are already in the deployment manager's master configuration.
- Applications stored in the master configuration are still only assigned to WP01. In order to share them with WP02 and any additional nodes, a cluster must be created.
- After federation, the WP02 node is part of the deployment manager cell. The default cell name in wpconfig.properties must be updated manually to account for this change. Use a text editor to open the wpconfig.properties file:
$WP_ROOT/config/wpconfig.properties
Ensure that the following properties are uncommented and specify appropriate values:
- CellName property: Specify the name of the cell to which the WebSphere Portal node belongs. The cell name can be identified by the was_root/config/cells/cell directory on the node, where cell indicates the cell to which the node belongs.
- PrimaryNode property: Because this node is a secondary node, set this property to false.
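For example (DM01Cell is a hypothetical cell name):
CellName=DM01Cell
PrimaryNode=false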
- Update the deployment manager configuration for the new WebSphere Portal.
- Run the following command from the $WP_ROOT/config directory:
./WPSconfig.sh post-portal-node-federation-configuration
Set up the cluster
Once the WebSphere Portal appservers reside on the federated WAS nodes, you can create a cluster and add the WebSphere Portal instances (WP01 and WP02) as cluster members. This section also describes how to add additional cluster members after the cluster has been created.
Create the cluster
- To create the cluster, use the administrative console on the deployment manager. Start the administrative console by opening a browser and entering http://DM01:9090/admin in the address bar, where DM01 is the deployment manager node or host name.
- Begin creating the cluster by going to the Servers > Cluster view and adding a new cluster.
- Enter the basic cluster information:
- Define the cluster name.
- Check the box Prefer local enabled.
- Check the box Create Replication Domain for this cluster.
- Select the option Select an existing server to add to this cluster and then choose server WebSphere_Portal on node WP01 from the list.
- Check the box Create Replication Entry in this Server.
- Click Next.
- Create the second cluster member.
- Define the name of cluster member.
- Select the appropriate node, depending on the cluster topology you are using:
- Horizontal: WP02
- Vertical: WP01
- Select the appropriate HTTP port setting, depending on the cluster topology you are using:
- Horizontal: Uncheck the box Generate Unique HTTP Ports.
- Vertical: Check the box Generate Unique HTTP Ports.
- Check the box Create Replication Entry in this Server.
- Click Apply and then click Next to view the summary.
- Finally, create the new cluster and save the changes.
- When you have completed the steps above, the cluster topology can be viewed from the Servers > Cluster Topology view.
- The Servers > Application Servers view will list the new cluster members.
- To enable portlet deployment in the cluster, modify the DeploymentService.properties file on each WebSphere Portal member node and set the wps.appserver.name property to the name of the cluster you defined.
$WP_ROOT/shared/app/config/services/DeploymentService.properties
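For example, if you named the cluster PortalCluster (a hypothetical name), the property would read:
wps.appserver.name=PortalCluster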
Each node in the cluster should have the same synchronization settings to ensure consistency between WebSphere Portal server configurations. By default, automatic synchronization occurs on every node when a change is made to the cell configuration; each node checks for updates once per minute.
- The configuration on each WebSphere Portal node must be updated with appropriate cluster member information. Use a text editor to open the wpconfig.properties file:
$WP_ROOT/config/wpconfig.properties
Ensure that the following properties are uncommented and specify appropriate values:
- ServerName property: Update this property to match the cluster member name used to identify the node to the deployment manager. To determine the cluster member name, click Servers > Cluster Topology in the administrative console for the deployment manager and expand the cluster you are working with to view the cluster members.
- Enable dynamic caching on the cluster member nodes to correctly validate the portal caches. If dynamic caching is not enabled, situations could arise where users have different views or different access rights, depending on which cluster node handles the user's request.
- Enable dynamic caching on the first cluster member node (WP01).
Run the following command in the $WP_ROOT/config directory:
./WPSconfig.sh action-set-dynacache -DServerName=ClusterMemberName -DReplicatorName=ReplicatorName
ClusterMemberName is the name of the cluster member you want to update with the replicator setting, and ReplicatorName is the name of the cluster member to be used as the replicator.
- Enable dynamic caching for the second cluster member by repeating the previous step.
- Horizontal scaling: When running the action-set-dynacache task, ensure that you run the task on the WP02 node.
- Vertical scaling: When running the action-set-dynacache task, ensure that you specify the name of the second cluster member with the ServerName property and run the task on the WP01 node.
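For example, in the horizontal topology (the cluster member names here are hypothetical; determine the actual names from the Servers > Cluster Topology view):
./WPSconfig.sh action-set-dynacache -DServerName=WebSphere_Portal_2 -DReplicatorName=WebSphere_Portal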
- Save your changes and resynchronize the nodes:
- In the administrative console for the deployment manager, click Save on the taskbar, and save your administrative configuration.
- Select System Administration > Nodes, select the node from the list, and click Full Resynchronize.
- Regenerate the Web server plug-in.
- Select Environment > Update Web Server Plug-in in the deployment manager administrative console, and click OK.
- Edit the nd_root/config/cells/plugin-cfg.xml file and change any directory structure occurrences specific to the Network Deployment machine to match the directory structure used on the Web server. For example, references to install_dir/WebSphere/DeploymentManager would be replaced with install_dir/WebSphere/AppServer.
- If you are using a remote Web server, copy the updated plug-in configuration file (nd_root/config/cells/plugin-cfg.xml) to the Web server's plug-in configuration directory.
- Stop and start the Web server.
- Restart all nodes in the cluster.
Create additional cluster members
Although it is possible to define all cluster members at the same time you initially create the cluster, you might also need to add cluster members at some later time, after the cluster has already been configured. This section provides instructions for adding cluster members in both horizontal and vertical scaling topologies for an existing cluster.
Creating horizontal cluster members
Cluster members in a horizontal scaling topology reside on different nodes in the cell. To create an additional cluster member in a horizontal cluster, complete the following steps:
- Install WebSphere Portal on the new node in the cell. Follow the instructions in Installing WebSphere Portal, depending on whether the node is already federated into the cell.
- Access the administrative console on the deployment manager by opening a browser and entering http://DM01:9090/admin in the address bar, where DM01 is the deployment manager node or host name. The port number might differ based on your installation.
- Click Servers > Clusters in the console navigation tree, click the name of your cluster, and then click Cluster members in the list of additional properties.
- Click New to create the cluster member.
- Define the name of cluster member.
- Select the node where you installed the new instance of WebSphere Portal.
- Uncheck the box Generate Unique HTTP Ports.
- Click Apply and then click Next to view the summary.
- Click Finish, and save the changes.
- The new cluster topology can be viewed from the Servers > Cluster Topology view.
- The Servers > Application Servers view will list the new server cluster members.
- To enable portlet deployment in the cluster, modify the DeploymentService.properties file on the new WebSphere Portal member node and set the wps.appserver.name property to the name of the cluster you defined.
$WP_ROOT/shared/app/config/services/DeploymentService.properties
- The configuration on each WebSphere Portal node must be updated with appropriate cluster member information. Use a text editor to open the wpconfig.properties file:
$WP_ROOT/config/wpconfig.properties
Ensure that the following properties are uncommented and specify appropriate values:
- ServerName property: Update this property to match the cluster member name used to identify the node to the deployment manager. To determine the cluster member name, click Servers > Cluster Topology in the administrative console for the deployment manager and expand the cluster you are working with to view the cluster members.
- Enable dynamic caching on the new cluster member node.
Run the following command in the $WP_ROOT/config directory:
./WPSconfig.sh action-set-dynacache -DServerName=ClusterMemberName -DReplicatorName=ReplicatorName
ClusterMemberName is the name of the cluster member you want to update with the replicator setting, and ReplicatorName is the name of the cluster member to be used as the replicator.
- Save your changes and resynchronize the nodes:
- In the administrative console for the deployment manager, click Save on the taskbar, and save your administrative configuration.
- Select System Administration > Nodes, select the node from the list, and click Full Resynchronize.
- Regenerate the Web server plug-in.
- Select Environment > Update Web Server Plug-in in the deployment manager administrative console, and click OK.
- Edit the nd_root/config/cells/plugin-cfg.xml file and change any directory structure occurrences specific to the Network Deployment machine to match the directory structure used on the Web server. For example, references to install_dir/WebSphere/DeploymentManager would be replaced with install_dir/WebSphere/AppServer.
- If you are using a remote Web server, copy the updated plug-in configuration file (nd_root/config/cells/plugin-cfg.xml) to the Web server's plug-in configuration directory.
- Stop and start the Web server.
- Restart all nodes in the cluster.
Creating vertical cluster members
Cluster members in a vertical scaling topology reside on the same node in the cell. To create an additional cluster member in a vertical cluster, complete the following steps:
- Access the administrative console on the deployment manager by opening a browser and entering http://DM01:9090/admin in the address bar, where DM01 is the deployment manager node or host name. The port number might differ based on your installation.
- Click Servers > Clusters in the console navigation tree, click the name of your cluster, and then click Cluster members in the list of additional properties.
- Click New to create the cluster member.
- Define the name of cluster member.
- Select an existing node where WebSphere Portal is installed.
- Check the box Generate Unique HTTP Ports.
- Click Apply and then click Next to view the summary.
- Click Finish, and save the changes.
- The new cluster topology can be viewed from the Servers > Cluster Topology view.
- The Servers > Application Servers view will list the new server cluster members.
- Regenerate the Web server plug-in.
- Select Environment > Update Web Server Plug-in in the deployment manager administrative console, and click OK.
- Edit the nd_root/config/cells/plugin-cfg.xml file and change any directory structure occurrences specific to the Network Deployment machine to match the directory structure used on the Web server. For example, references to install_dir/WebSphere/DeploymentManager would be replaced with install_dir/WebSphere/AppServer.
- If you are using a remote Web server, copy the updated plug-in configuration file (nd_root/config/cells/plugin-cfg.xml) to the Web server's plug-in configuration directory.
- Stop and start the Web server.
Updating Personalization properties in the cluster
WebSphere Portal provides two property files that you can modify to customize the Personalization feature. These files are not managed by the WAS deployment manager. This means that if you make any changes to these files on a node in the cluster, those changes are not transferred to other nodes when you perform a synchronization of the cluster members. Instead, manually copy the following properties files to each node in the cluster:
- $WP_ROOT/shared/app/config/services/PersonalizationService.properties
- $WP_ROOT/shared/app/config/services/FeedbackService.properties
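For example, after changing the files on WP01, they could be copied to another node with commands like the following (a sketch assuming UNIX nodes and remote shell access; user is a hypothetical login, and NODE02 is from the sample topology):
scp $WP_ROOT/shared/app/config/services/PersonalizationService.properties user@NODE02:$WP_ROOT/shared/app/config/services/
scp $WP_ROOT/shared/app/config/services/FeedbackService.properties user@NODE02:$WP_ROOT/shared/app/config/services/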
Configure for HTTP session failover
In a clustered environment, all requests for a particular session are directed to the same WebSphere Portal server instance in the cluster. In other words, after a user establishes a session with a portal (for example, by logging in to the portal), the user is served by the same WebSphere Portal server instance for the duration of the session. To verify which server is handling user requests for a session, you can view the global settings portlet in WebSphere Portal, which displays the node name of the WebSphere Portal server handling requests.
If one of the WebSphere Portal servers in the cluster fails, the request is rerouted to another WebSphere Portal server in the cluster. If distributed sessions support is enabled (either by persistent sessions or memory-to-memory session replication), the new server can access session data from the database or another WebSphere Portal server instance.
Distributed session support is not enabled by default and must be configured separately in WAS.
By default, failover support is available for WebSphere Portal and any portlets that are installed with the product. To take advantage of failover support with portlets that you develop yourself, ensure that they are implemented according to best practices.
Failover and lost data
Data that is stored within the JVM memory and not managed by the appserver or the WebSphere Portal server for replication might be lost in the case of failover. Even with distributed session support, users will not be able to recover any uncommitted information that is not stored in sessions or other replicated data areas (such as a distributed Map or render parameters) during a failure. In such cases, users might have to restart a transaction after a failover occurs. For example, if you are in the process of working with a portlet and have navigated between several different screens when a failover occurs, you might be placed back at the initial screen, where you would need to repeat your previous steps. Similarly, if you are attempting to deploy a portlet when a failover occurs, the deployment might not be successful, in which case you would need to redeploy the portlet. Note, however, that with distributed session support enabled, the validity of user login sessions is maintained despite node failures.
In cases where a portlet does not support failover, a "Portlet Unavailable" message is displayed for the portlet after a failover occurs. If a portlet supports partial or incomplete failover, some data displayed by the portlet before the failover could disappear after the failover occurs, or the portlet might not work as expected. In such extreme cases, the user should log out and log back in to resume normal operation.
After a failover occurs, the request is redirected to another cluster member by the Web server plug-in. Most browsers will issue a GET request as a response to a redirect after submitting a POST request. This ensures that the browser does not send the same data multiple times without the user's knowledge. However, this means that after failover, users might have to refresh the page in the browser or go back and resubmit the form to recover the POST data.
Document Manager portlets supplied with WebSphere Portal and any other portlets that use POST data are affected by this behavior.
Cleaning up Network Deployment after a failed deployment
In the event of a problem during deployment of WebSphere Portal in a cell or cluster, use the steps outlined in the section Uninstalling WebSphere Portal from a cluster to clean up the deployment manager configuration.
Configure search in a cluster
To support search in a clustered environment, install and configure search for remote search service on a WAS node that is not part of the WebSphere Portal cluster. For instructions on setting up search and configuring search in a cluster, refer to the Search topic.
Using an external security manager with the cluster
Configure any external security manager AFTER you have completed all other setup, including ensuring that the WebSphere Portal cluster is functional.
The following external security managers will work with WebSphere Portal:
- Tivoli Access Manager
- Netegrity SiteMinder
Perform security configuration on each node in the cluster.
Edit wpconfig.properties on each node, and set WpsHostName and WpsHostPort equal to the host name and port number used for the Web server.
The wpconfig.properties file is not propagated with cluster synchronization. You must manually keep the wpconfig.properties files in sync on each node in the cluster.
Tivoli Access Manager considerations
- For Tivoli Access Manager, ensure that you run the validate-pdadmin-connection task on each node in the cluster.
- If the validate-pdadmin-connection task fails, run the run-svrssl-config task before attempting to run validate-pdadmin-connection again. Note that the PDServerName property represents an individual configured AMJRTE connection to Tivoli Access Manager, and each node in the cluster must have a unique value for PDServerName before running the run-svrssl-config task.
- Ensure that the WebSEAL TAI parameters are the same on each node in the cluster. If you run a configuration task at a later time that overwrites the WebSEAL junction, the WAS TAI properties are not automatically updated, so manually ensure that all nodes are using the same parameters.
Note the file location specified by the PDPermPath property in the wpconfig.properties file. This property indicates the location of the Tivoli Access Manager AMJRTE properties file (PdPerm.properties). In a cluster composed of nodes with different operating systems, the location of the PdPerm.properties file might differ, depending on the node. Ensure that the value of the PDPermPath property on each node corresponds to the location of the PdPerm.properties file.
This value can be set globally for all cluster members by using the configURLName property, accessed in the deployment manager administrative console by clicking...
Security | JAAS Configuration | Application Logins | Portal_Login | JAAS Login Modules | com.tivoli.mts.PDLoginModule | Custom Properties
To ensure that the location of the PdPerm.properties file is properly specified, use one of the following approaches:
- If your nodes are all on UNIX platforms, use the UNIX link command (ln) to ensure that the value for configURLName resolves on each node.
- If the configURLName property is not set in the administrative console, the default location is relative to the JAVA_HOME system property under the following path: java_home/jre/PdPerm.properties. Make sure the PDPermPath property for each node is set to this relative location before running the run-svrssl-config task, and completely remove the configURLName property from the PDLoginModule custom properties.
- If you are creating an SSL junction in Tivoli Access Manager, set the value of WpsHostPort in the wpconfig.properties file to the SSL port (default of 443) before running either of the following Tivoli Access Manager configuration tasks: enable-tam-all, enable-tam-tai. After you have successfully run the task, change the value of WpsHostPort back to its initial value. If you do not reset the property after running the Tivoli Access Manager tasks, subsequent attempts to access the portal by running the xmlaccess command (or configuration tasks that invoke it) will fail.
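For example, the sequence might look like this (a sketch; 443 is the default SSL port, and the initial WpsHostPort value to restore depends on your configuration):
# in wpconfig.properties, before running the task:
WpsHostPort=443
# then, from the $WP_ROOT/config directory:
./WPSconfig.sh enable-tam-all
# afterward, restore WpsHostPort in wpconfig.properties to its initial value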
SiteMinder considerations
For SiteMinder, the SMConfigFile property in the wpconfig.properties file is used to specify the location of the SiteMinder TAI configuration file. There are two approaches you can take when specifying the location of this file in a clustered environment:
- Ensure that each node in the cluster has the same path to the TAI configuration file.
- Note: SiteMinder TAI V1.1 must be installed for the steps in the following approach to work; V1.0 does not support the use of a System property to specify the location of the SiteMinder TAI's WebAgent.conf file.
- Define a WAS variable in the deployment manager administrative console. This variable can then be referenced in the custom System properties to be passed to each appserver in the cluster. For example:
- Click Environment > Manage WebSphere Variables.
- Specify WP01 as the node in the scope field, click Apply, and then click New.
- Enter SITEMINDER_TAI_LOCATION as the variable name and the directory path where the configuration file is located on node WP01 (such as c:\netegrity\smwastai\).
- Click OK.
- Specify WP02 as the node in the scope field, click Apply, and then click New.
- Enter SITEMINDER_TAI_LOCATION as the variable name and the directory path where the configuration file is located on node WP02 (such as /usr/netegrity/smwastai/).
- Click OK.
- Repeat these steps for any other nodes in the cluster.
- Save the changes to update the deployment manager master configuration.
- Edit the wpconfig.properties file on each node, and update the SMConfigFile property with the variable information:
SMConfigFile=WebAgent.conf
- For each appserver, create a new custom property by completing the following steps:
- Click Servers > Application Server in the administrative console.
- Click on the name of the WebSphere Portal appserver.
- Click Process Definition > Java Virtual Machine > Custom properties.
- Click New, and enter smasa.home for the name of the new property and ${SITEMINDER_TAI_LOCATION} for the property value.
- Click OK, and save your changes.
- Repeat these steps for each appserver.
When WebSphere Portal configuration tasks are run with this configuration, a server variable is generated in the custom properties of the JVM, which is evaluated and resolved at runtime on each node.
See also:
- Plan information
- Managing the cluster
- Maintaining the cluster
- Uninstalling WebSphere Portal or Personalization Server from a cluster
- Troubleshooting
- Installing and configuring a Personalization Server cluster
- WebSphere Portal Quick Installation
- Required on the portal machine: WebSphere BISF
- Information Center for Network Deployment
Workplace Web Content Management is a trademark of the IBM Corporation in the United States, other countries, or both.
WebSphere is a trademark of the IBM Corporation in the United States, other countries, or both.
IBM is a trademark of the IBM Corporation in the United States, other countries, or both.
Tivoli is a trademark of the IBM Corporation in the United States, other countries, or both.