Install WebSphere Portal into a cluster

 



Overview

Use clustering to create identical copies of an application server. A cluster is created based on an existing application server, and then you can add additional cluster members as needed. It is recommended that you convert the original application server into a cluster member. When you create a new cluster member, it is identical to the original application server.

Vertical scaling refers to setting up multiple application server processes on a single physical machine; use clusters to create these additional processes. This topology requires WAS Network Deployment with Enterprise Extensions.

 

Guidelines and limitations for implementing cluster environments

A cluster is a logical collection of application server processes. Use clusters to group servers for workload balancing. Application servers within a cluster are called cluster members. All cluster members in a single cluster must have identical application components deployed on them. Other than the applications configured to run on them, cluster members do not have to share any configuration data. One cluster member can run on a large multi-processor enterprise server system while another member of the same cluster can run on a small laptop.

 

Network Deployment installation

WAS Network Deployment (Network Deployment) provides a distributed management environment for managing multiple WAS V5.0 instances (also known as nodes or base application server instances).

The Network Deployment product provides the deployment manager administrative application, which runs in a specialized application server that is also provided by the Network Deployment installation. The Network Deployment installation does not install the WAS product or create WAS instances, and you do not install enterprise applications into a Network Deployment instance. The only functions supported in the Network Deployment installation are the deployment manager and its associated administration programs, which you use to manage and configure the WAS nodes that have been added to the Network Deployment domain (also known as a cell). Choose one computer system on which to install the Network Deployment product, and then choose the WAS nodes you wish to add to the Network Deployment cell. The deployment manager and the nodes it manages comprise a cell. On iSeries, multiple Network Deployment instances are fully supported, enabling you to run multiple deployment managers, each managing a unique set of nodes.

Once you have installed Network Deployment, also install IBM Enterprise Enablement for WAS for iSeries, which is included with WebSphere Portal. When installing, be sure to select the WAS ND Extensions option.

 

Install WebSphere Portal on a separate WAS instance not managed by Network Deployment

This section describes how to set up a WebSphere Portal instance (WP01), add it to the Network Deployment (ND01), and finally create a cluster. Below are detailed steps for implementing this scenario.

 

Install WP01

  1. Prerequisites for installation of the first WebSphere Portal instance (WP01):

    • WAS must be installed and operational.

    • Network Deployment must be installed on ND01 and be the same version and release as WAS.

  2. Verify that WebSphere Portal is operational after installation.

  3. Configure the node to use an external Web server. By default, WebSphere Portal uses its own HTTP service for client requests.

  4. Verify that WebSphere Portal is operational with the new external Web server configuration.

  5. Configure the node for security.

  6. Verify that WebSphere Portal is operational with the new security configuration.

  7. On some machine configurations, WebSphere Portal components may cause a timeout exception when the node is being added to the Network Deployment. To eliminate the risk of this happening, increase the Network Deployment's ConnectionIOTimeOut value by doing the following:

    Open the server.xml file, located in...

    /QIBM/UserData/WebAS5/ND/<instance>/config/cells/<cell>/nodes/<node>/servers/<server>

    ...where <instance> is the Network Deployment instance name, <cell> is the cell name, <node> is the node name, and <server> is the application server name. Within the opening and closing tags of each HTTP transport section (xmi:id="HTTPTransport_<x>", where x is a unique number), add the following line:

    <properties xmi:id="Property_10" name="ConnectionIOTimeOut" value="180" required="false"/>

    The following is an example of an HTTP transport section with the line added:

    <transports xmi:type="applicationserver.webcontainer:HTTPTransport" xmi:id="HTTPTransport_1" sslEnabled="false">
    <address xmi:id="EndPoint_1" host="" port="13975"/>
    <properties xmi:id="Property_10" name="ConnectionIOTimeOut" value="180" required="false"/>
    </transports>
    

    After you have added this line to each HTTP transport section, save the file.

  8. Add WP01 to Network Deployment by entering the addNode command on one line:

    /qibm/proddata/webas5/pme/bin/addNode <deployment_manager_host> <deployment_manager_port> -instance <instance> -username <admin_user_id> -password <admin_password> -startingport <startingport> -includeapps

    Note: The -includeapps parameter transfers the complete set of applications installed on this node into the Network Deployment master configuration. It is important to specify -includeapps when adding the first node.

    where:

    • <deployment_manager_host> is the Network Deployment host name.

    • <deployment_manager_port> is the Network Deployment SOAP connector-address.

    • <instance> is the name of the WAS instance.

    • <admin_user_id> is the WAS administrative user name. This parameter is required if security is enabled.

    • <admin_password> is the WAS administrative password. This parameter is required if security is enabled.

    • <startingport> is the first port used by the node.
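
    For example, a complete invocation might look like the following (the host name dmgr01, SOAP port 8879, instance name, credentials, and starting port are hypothetical values; substitute those for your environment):

    /qibm/proddata/webas5/pme/bin/addNode dmgr01 8879 -instance default -username wpsadmin -password wpspass -startingport 11000 -includeapps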

  9. If WP01 was installed using the default HTTP port values, the port used to access WebSphere Portal may not be recognized as a virtual host in the Network Deployment and must be added. For example, if you access WebSphere Portal using the URL http://wp01:9081/wps/portal, ensure that Network Deployment has a virtual host definition for port 9081 as well. In the Administration Console, open the Environment > Virtual Hosts view and click the default_host entry (or the entry for the virtual host used to access the WebSphere Portal application). Select the Host Aliases link and, on the resulting page, ensure there is a host name and port entry that matches the values used to access WebSphere Portal (for example, "*:9081").
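
    The alias can also be added from the wsadmin command line instead of the console. The following Jacl lines are a minimal sketch, assuming the cell name placeholder <cell> and the default_host virtual host; adjust both for your environment:

    # Look up the default_host virtual host and add a *:9081 alias to it
    set vhost [$AdminConfig getid /Cell:<cell>/VirtualHost:default_host/]
    $AdminConfig create HostAlias $vhost {{hostname *} {port 9081}}
    $AdminConfig save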

  10. Once complete, the federated servers will be visible in the Servers > Application Servers view. Start the WebSphere Portal instance and verify that it is operational.

 

Create the cluster

  1. To create the cluster, use the Administration Console on the Deployment Manager.

    Start the Administration Console by opening a browser and entering...

    http://<DM01>:<admin_port>/admin

    ...in the address bar, where <DM01> is the deployment manager node or host name and <admin_port> is the port assigned to the Administration Console of the WAS Network Deployment instance. Note: the instance referred to here was created from the directory...

    /qibm/proddata/webas5/pmend/bin
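
    For example, assuming a deployment manager host named dmgr01 and an Administration Console port of 9090 (both hypothetical; your instance's port assignments may differ):

    http://dmgr01:9090/admin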

  2. Begin creating the WebSphere Portal cluster by going to the Servers > Clusters view and adding a new cluster.

  3. Enter the basic cluster information:

    1. Define the cluster name.

    2. Check the box Prefer local enabled.

    3. Check the box Create Replication Domain for this cluster.

    4. Select the option Select an existing server to add to this cluster and then choose server WebSphere_Portal on node WP01 from the list.

    5. Check the box Create Replication Entry in this Server.

    6. Click Next.

  4. Create the second cluster member.

    1. Define the name of the cluster member.

    2. Select the node WP01.

    3. Check the box Generate Unique HTTP Ports.

    4. Check the box Create Replication Entry in this Server.

    5. Click Apply and then click Next to view the summary.

  5. Finally, create the new cluster and save the changes.

  6. When you have completed the steps above, the cluster topology can be viewed from the Servers > Cluster Topology view.

  7. The application servers view will list the new cluster member(s).

  8. Regenerate the Web server plug-in through the Administrative Console by navigating to Environment > Update Web Server Plug-in. Copy the plug-in configuration file (plugin-cfg.xml) to the external Web server's plug-in configuration directory.

  9. Stop and start the Web server.

  10. Start the cluster and verify proper operation.

  11. Each WebSphere Portal node must be told the name of the cluster in order for portlet deployment to work correctly. On each node, edit the /qibm/UserData/webas5/base/<instance>/portalserver5/shared/app/config/services/DeploymentService.properties file and set the wps.appserver.name property to the name of the cluster you defined, as in the example below.
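
    For example, if the cluster was named PortalCluster (a hypothetical name), the property would read:

    wps.appserver.name=PortalCluster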

 

Create vertical cluster members

Use the WebSphere Network Deployment documentation to set up a vertical cluster member within a cluster. When defining vertical cluster members, be sure to check the Generate unique HTTP ports option to ensure there are no port conflicts between vertical cluster members running on the same node.

 

Deploying portlets

When WebSphere Portal installs a portlet, it stores the portlet configuration data in the WebSphere Portal database and then forwards the portlet application's Web module and associated configuration to the Network Deployment. Network Deployment then pushes the new Web module to each node.

Install the portlet into WebSphere Portal using either the Install Portlets portlet in WebSphere Portal Administration or the XML configuration interface utility; an example invocation of the latter is sketched below. Any WebSphere Portal server may be used to deploy a portlet into the cluster, and cluster nodes do not need to be stopped during this operation. Auto-synchronization of the portlet application to each node in the cluster may not happen immediately, or at all, depending on how the administrator has configured auto-synchronization in the Network Deployment. For this reason, WebSphere Portal cannot guarantee that the portlet has been successfully synchronized to each node in the cluster and thus cannot automatically activate the portlet.
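
The XML configuration interface is invoked from the command line with an input file that describes the deployment request. A minimal sketch follows; the input file name DeployPortlet.xml, the credentials, and the host are placeholders, and the exact flag names may vary by release, so consult the XML configuration interface documentation:

    xmlaccess -in DeployPortlet.xml -user wpsadmin -pwd wpspass -url http://wp01:9081/wps/config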

Complete the following steps to ensure that the new portlet application has been successfully synchronized in the node:

  1. Access the Network Deployment Administration Console.

  2. From the System Administration > Nodes view, select the node and click the Synchronize button.

  3. Select the portlets under Enterprise Applications and use the Start button to start the application.

  4. The portlets can be activated after they have been successfully synchronized. Log in to WebSphere Portal and navigate to Administration > Portlets > Manage Portlet Applications.

  5. Select the portlet application you wish to activate and click the Activate/Deactivate button.

 

Deploying themes and skins

Theme and skin JSPs are managed as part of the main WebSphere Portal enterprise application and are thus part of the WebSphere Portal EAR file. When customized themes and skins are deployed into the cluster, they must be updated in the cluster's master configuration, which is managed on Network Deployment.

WebSphere Portal's EAR file itself must be updated and re-deployed when adding new themes and skins, using the following steps:

  1. Export the WebSphere Portal EAR file from Network Deployment into a temporary directory.

    1. On the Network Deployment node, ND01, change directories to the Network Deployment bin directory.

    2. Invoke the wsadmin command to export the wps EAR file to a temporary directory:

      wsadmin -instance <instance> -user <admin_user_id> -password <admin_password> -c '$AdminApp export wps <directory>/wps.ear'

      where:

      • <instance> is the WAS instance

      • <admin_user_id> is the administrator's user id

      • <admin_password> is the administrator's password

      • <directory> is the temporary directory
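
      For example, with hypothetical values (instance default, administrator wpsadmin, temporary directory /tmp/wpsear):

      wsadmin -instance default -user wpsadmin -password wpspass -c '$AdminApp export wps /tmp/wpsear/wps.ear'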


  2. Create the wps_expanded directory. Use the EARExpander tool to expand the contents of the exported EAR file (make sure all commands are entered on one line):

    EARExpander -ear <directory>/wps.ear -operationDir <directory>/wps_expanded -operation expand

  3. Place the updated theme and skin JSPs into the correct directory within the expanded EAR; see the example paths below.
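
    For example, custom theme and skin JSPs typically belong under the wps.war Web module inside the expanded EAR; <your_theme> and <your_skin> are placeholders, and the exact layout should be verified against your installation:

    <directory>/wps_expanded/wps.war/themes/html/<your_theme>
    <directory>/wps_expanded/wps.war/skins/html/<your_skin>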

  4. Use the EARExpander command to collapse the EAR directory back into an EAR file:

    EARExpander -ear <directory>/wps.ear -operationDir <directory>/wps_expanded -operation collapse

  5. Use the wsadmin command to update the WebSphere Portal EAR in Network Deployment. This action will automatically cause the application to be synchronized across each node in the cluster.

    wsadmin -instance <instance> -user <admin_user_id> -password <admin_password> -c '$AdminApp install <directory>/wps.ear {-update -appname wps}'

    where:

    • <instance> is the WAS instance

    • <admin_user_id> is the administrator's user id

    • <admin_password> is the administrator's password

    • <directory> is the temporary directory


    Updates to the configuration of a WebSphere Portal cluster MUST occur on Network Deployment and then be resynchronized with the other nodes in the cluster. If updates are made directly to a node, the updates will be lost when the master configuration on the Network Deployment resynchronizes with that node again.

 

Configure cluster-aware dynacache object caching

You must enable dynamic caching in the cluster environment in order to correctly invalidate the portal caches. If dynamic caching is not enabled, situations could arise where users have different views or different access rights, depending on which cluster node handles the user's request.

  1. Select one of the server members in the cluster for which the object cache will be configured, and select Dynamic Cache Service from the Additional Properties section (Servers > Application Servers > WebSphere_Portal > Dynamic Cache Service).

  2. Select Enable service at server startup and Enable cache replication.

  3. Click the Enable cache replication link to go to the Internal Messaging view.

  4. Select the WebSphere Portal cluster as the domain for the replicator and one of the server members as the replicator.
    Note: All cluster members must use these same replication settings:

    • Select Runtime mode = Push only.

    • Set the Push frequency to 1 second.


    Apply the changes and click OK.

  5. Repeat the above steps for all server members in the WebSphere Portal cluster.

  6. Save the changes to the Network Deployment master configuration by clicking the Save link after the last modification.

 

Uninstalling WebSphere Portal from a cluster

Use the administrative console or the removeNode script to remove application server nodes from a Network Deployment cell; a sketch of the script invocation follows. When you remove a node, it becomes a stand-alone application server instance and its original, pre-federation configuration is restored. Any configuration changes made to the application server while it was part of the Network Deployment cell are therefore lost. The applications that were part of the node remain in the Network Deployment cell.
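
For example, a minimal sketch of the script invocation on iSeries, assuming the instance name default (a placeholder) and the same bin directory used for addNode; available options may vary by release:

    /qibm/proddata/webas5/pme/bin/removeNode -instance default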

 

Cleaning up Network Deployment after a failed deployment

In the event of a problem during deployment of WebSphere Portal in a cell or cluster, use the steps outlined in the section Uninstalling WebSphere Portal from a cluster to clean up Network Deployment.

 

Cluster maintenance

Maintaining WebSphere Portal in a cluster typically means applying corrective service or updating the software release level on each node in the cluster. Instructions for applying corrective service (fix packs, interim fixes) to a WebSphere Portal cluster are provided with the corrective service package. Updating the software release level will most likely involve temporarily removing each node from the cell to upgrade the software.

Once WebSphere Portal has been added to a cell, take extreme care when removing that node from the cell. If any portlets are deployed to the cluster while the node is a member, and the node is then removed from the cell, those portlets are not available to the node as a stand-alone server, yet their configuration remains in the database shared by the rest of the nodes, including this one. As a result, do not attempt to start WebSphere Portal on this node or to update its configuration using the XML configuration interface. These actions will introduce conflicts between this node and the cell.

It is recommended that after a WebSphere Portal node is removed from a cell, and before it is started or modified, it be reconnected to another database schema that represents the portlet and page configuration of that node before it was added to the cell. If you intend to use WebSphere Portal again as a stand-alone server, use the same procedure to reconnect WebSphere Portal to the database.

 
