Manage the cluster - WebSphere Portal

 


 


  1. Manage portlets
  2. Manage themes and skins
  3. Install additional WAS features
  4. Manage WAS replicators
  5. Work with the Portal Scripting Interface in a cluster
  6. Using custom user registry security with a WAS node
  7. Manually editing Member Manager files on a federated node

 

Manage portlets

Because all WebSphere Portal servers in the cluster share the same database, any WebSphere Portal node can be used to manage portlets. Cluster nodes do not need to be stopped when managing portlets.

 

Deploying portlets (installing or updating)

When you deploy a portlet, WebSphere Portal stores the portlet configuration data in the WebSphere Portal database and then forwards the portlet application's Web module and associated configuration to the deployment manager. The deployment manager is responsible for pushing the Web module to each node in the cluster.

The deployed portlets must be activated before they can be used, which cannot be accomplished until the deployment manager has synchronized the associated Web modules to each node in the cluster. Auto-synchronization of the Web modules to each node in the cluster might not happen immediately, or at all, depending on how the administrator has auto-synchronization configured in the deployment manager. For this reason, WebSphere Portal cannot guarantee that the portlet has been successfully synchronized to each node in the cluster and thus cannot automatically activate the portlet during deployment.

To deploy and activate portlets in the cluster, perform the following steps from any WebSphere Portal node in the cell:

  1. Deploy the portlets using either the WebSphere Portal Administration page or the XML configuration interface utility (xmlaccess command).

  2. Activate the deployed portlets in the cluster by running the command below. In addition to activating the portlets, this step causes the deployment manager to synchronize the changes across the cluster members.

    cd wp_root/config
    ./WPSconfig.sh activate-portlets

    If you are logged in to WebSphere Portal when you run the activate-portlets task, log out and log back in to see the updated status of the portlets.
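The two steps above can be sketched as a short script. WP_ROOT, the XML input file name, the credentials, and the configuration URL are all assumptions for illustration; substitute the values for your installation.

```shell
#!/bin/sh
# Sketch of the deploy-then-activate flow in a cluster.
# All paths and credentials below are illustrative assumptions.
WP_ROOT=${WP_ROOT:-/usr/WebSphere/PortalServer}
RUN=${RUN:-echo}   # leave as "echo" to preview the commands; set RUN= to execute

# Step 1: deploy the portlet with the XML configuration interface (xmlaccess).
DEPLOY_CMD="$WP_ROOT/bin/xmlaccess.sh -in DeployPortlet.xml -user wpsadmin -password wpsadmin -url http://localhost:9081/wps/config"

# Step 2: activate the deployed portlets across the cluster.
ACTIVATE_CMD="$WP_ROOT/config/WPSconfig.sh activate-portlets"

$RUN $DEPLOY_CMD
$RUN $ACTIVATE_CMD
```

The `RUN` guard lets you preview the exact commands before executing them on a live node.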

 

Providing portlets as WSRP services

By providing a portlet as a WSRP service, a Producer makes the deployed portlet available remotely to Consumers. The WebSphere Portal database stores information about whether a portlet deployed in the cluster is provided as a WSRP service. Because the WebSphere Portal database is shared among the nodes in a cluster, all nodes are updated when a portlet is provided as a WSRP service.

The URLs of the Producer service definitions in the WSDL document always point automatically to the Web server performing load balancing in the cluster. This ensures that all Consumer requests invoking WSRP services of the Producer are correctly load balanced.

The Producer's URLs are generated by first checking the settings of the WSRP SOAP ports. If the SOAP port values are not set, the values of the host.name and host.port properties in the ConfigService.properties file are used; these typically point to the load-balancing traffic dispatcher. If neither the SOAP ports nor the ConfigService.properties values are specified, the host name and port of the request used to reference the Producer WSDL document are used.
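For example, in a cluster fronted by a load-balancing dispatcher, the fallback properties might be set as follows. The file path follows the usual WebSphere Portal layout, and the host values are illustrative assumptions:

```properties
# wp_root/shared/app/config/services/ConfigService.properties
host.name = dispatcher.example.com
host.port = 80
```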

 

Uninstalling portlets

Uninstalling portlets in a cluster environment is the same as in a stand-alone environment. Because uninstalling a portlet removes its configuration from the WebSphere Portal database, and all cluster members share the same database, the uninstalled portlet automatically becomes unavailable to all other members.

 

Manage themes and skins

Theme and skin JSPs are managed as part of the main WebSphere Portal enterprise application and are thus part of the WebSphere Portal EAR file. When customized themes and skins are deployed into the cluster, they must be updated in the cluster's master configuration, which is managed by the deployment manager.

 

Install additional WAS features

In some situations you might need to install additional WAS features on the nodes in the cluster; for example, if a portlet requires a WAS feature that is not currently installed. When installing additional features in a clustered environment, install them on every node in the cluster. Also, the Network Deployment node cannot have fewer features than the nodes it manages. Consequently, if you install a new feature on the managed nodes, review the features on the Network Deployment node to determine whether to install the feature there as well.

For example, if you select the full installation option for WebSphere Portal, WAS will not have the extended messaging feature installed by default. If any of the portlets use the extended messaging feature, you must install this feature on all nodes before you can deploy those portlets.

To verify whether the feature is installed on the node, you can do one of the following:

  • Run the following command from the was_root/bin directory (managed node) or the nd_root/bin directory (Network Deployment node):

    ./versionInfo.sh -components

  • Alternatively, if you know any of the file names of the binaries associated with the feature, you can determine whether the feature is installed by verifying whether the files are present in the was_root/lib directory (managed nodes) or the nd_root/lib directory (Network Deployment node).
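The file-based check can be scripted. In the sketch below, the function name and the example jar name are hypothetical; substitute the lib directory (was_root/lib or nd_root/lib) and the binaries that belong to the feature you are checking.

```shell
#!/bin/sh
# Succeed only if every named binary exists in the given lib directory.
feature_files_present() {
    libdir=$1
    shift
    for f in "$@"; do
        [ -e "$libdir/$f" ] || return 1   # a missing binary means not installed
    done
    return 0
}

# Example (hypothetical jar name):
# feature_files_present /usr/WebSphere/AppServer/lib extmsg.jar && echo "installed"
```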

The following example instructions are for installing the extended messaging component on a single node. In a clustered environment, you would need to perform these steps on each node.

  1. Run the WebSphere Business Integration Server Foundation installation program, located on the WebSphere Business Integration Server Foundation disc for the platform. During installation, select to add to the existing copy of WAS, and follow the Custom install path.

  2. On the features list panel, ensure that only Extended Messaging is selected, and complete the installation.

  3. Before updating the extended messaging feature to the required level, make a list of any interim fixes that are currently installed on the node. Use the versionInfo command in the was_root/bin directory to display currently installed fixes.

  4. Install Fix Pack 1 and Cumulative Fix 1 (5.1.1.1) for WebSphere Business Integration Server Foundation to update the extended messaging feature. The fix pack files are located on the WBISF - WAS Fix Pack disc for the platform.

  5. Reapply any required interim fixes.

 

Manage WAS replicators

WAS provides a service called Internal Replication that transfers data or events among WAS servers.

The replication service transfers both J2EE application data and any internal data used to maintain the application data among WebSphere run-time processes in a cluster of appservers.

A replicator is a WAS run-time component that handles the transfer of internal WAS data. Replicators operate within a running appserver process. Define replicators as needed as part of cluster management.

 

WebSphere Portal implications

WebSphere Portal uses replicators for dynamic caching and memory-to-memory session replication. Enabling replication for dynamic caching in a WebSphere Portal cluster environment is absolutely necessary to maintain data integrity between various WebSphere Portal nodes in the cluster. Replication also helps improve performance by generating data once and then replicating it to other servers in the cluster. Therefore, a replication domain with at least one replicator entry needs to exist for WebSphere Portal.

The action-set-dynacache task supplied with WebSphere Portal is limited to creating the replicator entry on an existing WebSphere Portal cluster member. However, you can also define a replicator on individual servers within the cell, as long as the replicator belongs to the same replication domain.

 

Performance and failover considerations when using replicators

Because a replicator runs within an existing appserver instance process, there is a performance impact from defining the replicator on a WebSphere Portal server instance. All replicators within a replication domain connect with each other, forming a network of replicators. WAS processes can connect to any replicator within a domain to receive data from other processes connected to any other replicator in the same domain. If a WAS process is connected to a replicator and the replicator goes down, the process automatically attempts to reconnect to another replicator in the domain and recover any data missed while unconnected.

The more replicators you have defined in the environment, the better the replication failover capability will be. In the event that an appserver process on which the replicator is defined is unavailable or goes down, there will be other replicators available to fill the gap. However, because additional replicators will impact the overall performance of the environment, carefully plan the total number of replicators needed.

For best performance, you can also provide a completely separate system running a dedicated appserver instance as the replicator host. This dedicated appserver instance need not have WebSphere Portal installed on it, although it must be in the same cell and in the same replication domain as the WebSphere Portal cluster.

For more information about using replicators, refer to the WAS documentation.

 

Work with the Portal Scripting Interface in a cluster

 

Prerequisite

Before you can use the Portal Scripting Interface in a clustered environment, manually copy the wp.wire.jar file supplied with WebSphere Portal to the Network Deployment machine:

  1. Verify whether the wp.wire.jar file is present in the nd_root/lib/ext directory on the Network Deployment machine.

  2. If the file is not present, copy the file from the was_root/lib/ext directory on any WebSphere Portal node to the nd_root/lib/ext directory on the Network Deployment machine.

  3. Restart the deployment manager.
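The check-and-copy in steps 1 and 2 can be sketched as a small shell helper. The paths in the commented example are assumptions for a typical installation.

```shell
#!/bin/sh
# Copy a jar into a destination directory only if it is not already there.
copy_if_missing() {
    src=$1
    dest_dir=$2
    if [ ! -e "$dest_dir/$(basename "$src")" ]; then
        cp "$src" "$dest_dir/"
        echo "copied $(basename "$src") - restart the deployment manager"
    fi
}

# Example (hypothetical paths):
# copy_if_missing /usr/WebSphere/AppServer/lib/ext/wp.wire.jar \
#                 /usr/WebSphere/DeploymentManager/lib/ext
```

Running the helper twice is safe: the second call finds the file already present and does nothing, so no restart is needed.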

 

Run the scripting client in a cluster

To start the Portal Scripting Client in a clustered environment, connect the scripting client to the SOAP port of the Network Deployment machine, as in the following examples.

When WAS security is disabled:

./wpscript.sh -port <ND SOAP Port>

When WAS security is enabled:

./wpscript.sh -port <ND SOAP Port> -user <user> -password <password>

where:

  • <ND SOAP Port> is the Network Deployment machine's SOAP connector-address. The default value is 8879.

  • <user> is the WAS administrative user name.

  • <password> is the administrative user password.

For more information about the Portal Scripting Client, go to the Portal Scripting Interface section.

 

Using custom user registry security with a WAS node

If you are using a custom user registry for security with a WebSphere Portal cluster, complete the following steps to ensure that the security configuration is set up properly:

  1. Ensure that you have copied the WMM binary files from one of the WebSphere Portal nodes in the cluster to the Network Deployment machine. Instructions for doing this are provided in Enable WAS security for WebSphere Portal.

  2. Because all nodes being managed by a deployment manager must have the same security settings, perform additional configuration for any nodes in the cell that are running WAS without WebSphere Portal, so that they are also using the custom user registry for security.

    1. Copy the wasextarchive.jar file created as part of the previous step to the installation root directory of the WAS node.

    2. Stop WAS.

    3. Unjar the contents of the wasextarchive.jar file in the WAS installation directory.

    4. Verify that the WMM binary files (wmm*.jar) are in the was_root/lib/ext directory.

    5. Restart WAS.
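The verification in step 2.4 can be scripted. The directory in the commented example is an assumption for a typical installation.

```shell
#!/bin/sh
# Succeed if at least one wmm*.jar is present in the given directory.
wmm_files_present() {
    set -- "$1"/wmm*.jar
    [ -e "$1" ]   # an unexpanded glob means no wmm jars were found
}

# Example (hypothetical path):
# wmm_files_present /usr/WebSphere/AppServer/lib/ext && echo "WMM binaries present"
```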

 

Manually editing Member Manager files on a federated node

If WebSphere Portal is installed on a federated node and you want to manually edit the Member Manager configuration, first check out the Member Manager files from the deployment manager configuration repository, according to the following instructions:

  1. On one of the WebSphere Portal nodes in the cluster, check out the files:

    cd wp_root/config

    ./WPSconfig.sh check-out-wmm-cfg-files-from-dmgr

  2. Make any changes to the Member Manager files. The files can be edited in the wp_root/wmm directory on the WebSphere Portal node.

  3. When you have completed the changes, check the files back in:

    cd wp_root/config
    ./WPSconfig.sh check-in-wmm-cfg-files-to-dmgr

  4. The Member Manager changes will be replicated to the other cluster nodes the next time the cluster is synchronized.

 


WebSphere is a trademark of the IBM Corporation in the United States, other countries, or both.

 

IBM is a trademark of the IBM Corporation in the United States, other countries, or both.