WebSphere Portal - Manage clusters

 


 

  1. Manage portlets
  2. Manage themes and skins
  3. Install additional WAS features
  4. Manage WAS replicators
  5. Work with the Portal Scripting Interface in a cluster
  6. Change single sign-on (SSO) settings in a cluster
  7. Use custom user registry security with a WAS node
  8. Check out Member Manager files on a federated node

 

Overview

Any portal node can be used to manage portlets, because all portal servers in the cluster share the same database.

Cluster nodes do not need to be stopped when managing portlets.

 

Deploy portlets

During portlet deployment, portal stores the portlet configuration data in the database and then forwards the portlet's Web module and configuration files to the deployment manager, which pushes the Web module to each node in the cluster.

Deployed portlets are activated after the deployment manager has synchronized the associated Web modules to each node in the cluster.

Note that auto-synchronization of the Web modules to each node might not happen immediately, or at all, depending on how auto-synchronization has been configured in the deployment manager.
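
If you do not want to wait for auto-synchronization, a node synchronization can be triggered manually from wsadmin. A minimal sketch, assuming a node named NodeA and the default deployment manager SOAP port of 8879 (both placeholders):

    # Run from app_server_root/bin on the deployment manager machine; invokes
    # the sync operation on the NodeSync MBean of node NodeA.
    ./wsadmin.sh -lang jython -conntype SOAP -port 8879 -c "AdminControl.invoke(AdminControl.completeObjectName('type=NodeSync,node=NodeA,*'), 'sync')"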

To deploy and activate portlets in the cluster, from any portal node in the cell...

  1. Deploy the portlets using either... (one command-line option is sketched after this list)

  2. Activate and synchronize deployed portlets in the cluster...

  3. To see changes, log out and back in to Portal.
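
As an illustration of step 1, portlets can be deployed from the command line through the XML configuration interface. A minimal sketch; the request file, credentials, and URL are placeholders, and option names can vary between portal releases:

    # Run from portal_server_root/bin on any portal node in the cell;
    # DeployPortlet.xml is a hypothetical XMLAccess request file for the portlet WAR.
    ./xmlaccess.sh -in DeployPortlet.xml -user wpsadmin -password wpsadmin-password -url http://localhost:10038/wps/config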

 

Provide portlets as WSRP services

By providing a portlet as a WSRP service, a Producer makes the deployed portlet available remotely to Consumers. The portal database stores information about whether a portlet deployed in the cluster is provided as a WSRP service. Because the portal database is shared between the nodes in a cluster, all nodes are updated when a portlet is provided as a WSRP service.

The URLs of the Producer service definitions in the WSDL document always automatically point to the Web server performing load balancing in the cluster. This ensures that all requests from Consumers invoking WSRP services of the Producer are correctly load balanced.

The Producer URLs are generated by first checking the settings of the WSRP SOAP ports. If the SOAP port values are not set, the values of the host.name and host.port properties in ConfigService are used; these values typically point to the load-balancing traffic dispatcher. If no values are specified for either the SOAP ports or ConfigService, the host name and port of the request used to reference the Producer WSDL document are used.
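
For example, pointing the generated Producer URLs at the load balancer could be done with ConfigService entries along these lines (the host name and port are placeholders):

    # WP ConfigService properties; lb.example.com stands for the
    # load-balancing HTTP server in front of the cluster.
    host.name = lb.example.com
    host.port = 80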

 

Uninstall portlets

Uninstalling portlets in a cluster environment is the same as in a stand-alone environment. Because uninstalling a portlet removes its configuration from the portal database, and all cluster members share the same database, the uninstalled portlet automatically becomes unavailable to all other members.

 

Manage themes and skins

Theme and skin JSPs are managed as part of the main portal enterprise application and are thus part of the portal EAR file. When customized themes and skins are deployed into the cluster, they must be updated in the cluster's master configuration, which is managed by the deployment manager.
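
A rough outline of that update, run with wsadmin on the deployment manager machine; the application name wps, the SOAP port, and the temporary path are assumptions, and the steps for expanding the EAR and copying the theme and skin files into it are omitted:

    # Export the portal EAR from the master configuration.
    ./wsadmin.sh -lang jython -port 8879 -c "AdminApp.export('wps', '/tmp/wps.ear')"
    # ...expand the EAR, add the customized themes and skins, collapse it again...
    # Redeploy the updated EAR and save the change to the master configuration.
    ./wsadmin.sh -lang jython -port 8879 -c "AdminApp.update('wps', 'app', '[-operation update -contents /tmp/wps.ear]'); AdminConfig.save()"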

 

Install additional WAS features

When installing additional features in a clustered environment, install the features on every node in the cluster.

The WAS Network Deployment node cannot have fewer features than the nodes that it manages. Consequently, if you install a new feature on the managed nodes, review the features on the WAS ND node to determine whether the feature must also be installed there. For example, if you select the full installation option for portal, extended messaging is installed on the portal nodes, but the WAS ND node will not have the extended messaging feature installed by default.

To verify whether the feature is installed on the node, do one of the following...

  • Go to the correct directory...

    Managed node: was_profile_root/bin
    WAS ND node: nd_root/bin

    ...and execute...

      ./versionInfo.sh -components

  • Alternatively, if you know any of the file names of the binaries associated with the feature, you can determine whether the feature is installed by verifying whether the files are present in...

    Managed node: was_profile_root/bin
    WAS ND node: nd_root/bin

The following example instructions are for installing the extended messaging component on a single node. In a clustered environment, you would need to perform these steps on each node.

  1. Run the WebSphere Process Server (WPS) installation program. During installation, select to add to the existing copy of WAS and follow the Custom install path.

  2. On the features list panel, ensure that only...

    Extended Messaging

    ...is selected, and complete the installation.

  3. Before updating the extended messaging feature to the required level, make a list of any interim fixes that are currently installed on the node.

    To display currently installed fixes...

    cd was_profile_root/bin
    versionInfo.sh

  4. Install the required fix packs and cumulative fixes for WPS to update the extended messaging feature. The fix pack files are located on the WPS - WAS Fix Pack disc for the platform.

  5. Reapply any required interim fixes.

 

Manage WAS replicators

Replication is a service provided by WAS that transfers data, objects, or events among application servers. Data replication can be used to make data for the session manager, dynamic cache, and stateful session beans available across many application servers in a cluster.

 

Manage WAS distributed sessions

Portal can make use of the WAS capabilities to support HTTP session failover, which enables one node in a cluster to access information from an existing HTTP session if the cluster node originally handling that session fails. This capability to have session information accessible to other application servers is referred to as distributed sessions. WAS provides two techniques for distributed sessions, either of which can be used in a portal cluster: memory-to-memory session replication and database persistent sessions. For further information on distributed session support, refer to...


Distributed session support is not enabled by default, so you will have to determine whether to provide this capability in the cluster, and, if so, which of the two techniques you will use. For details on these two techniques, refer to the WAS information center.

Memory-to-memory session replication

Database session persistence
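
As an illustration only, not the documented procedure: memory-to-memory replication can be selected by setting the session manager's persistence mode with wsadmin. The cell, node, and server names here are placeholders:

    # Interactive wsadmin (Jython) session against the deployment manager:
    ./wsadmin.sh -lang jython -port 8879
    wsadmin> server = AdminConfig.getid('/Cell:myCell/Node:NodeA/Server:WebSphere_Portal/')
    wsadmin> sm = AdminConfig.list('SessionManager', server)
    wsadmin> AdminConfig.modify(sm, [['sessionPersistenceMode', 'DATA_REPLICATION']])
    wsadmin> AdminConfig.save()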

 

Work with the Portal Scripting Interface in a cluster

Before you can use the Portal Scripting Interface in a clustered environment, manually copy the wp.wire.jar file supplied with portal to the WAS ND machine:

  1. Verify whether the wp.wire.jar file is present in...

    app_server_root/lib

    ...on the deployment manager machine.

  2. If the file is not present, copy the file from...

    app_server_root/lib

    ...on any portal node to...

    app_server_root/lib

    ...on the deployment manager machine.

  3. Restart the deployment manager.
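
For example, the copy in step 2 might look like the following, assuming app_server_root is /opt/WebSphere/AppServer on both machines and dmgrhost is the deployment manager machine (all hypothetical):

    # Run on any portal node:
    scp /opt/WebSphere/AppServer/lib/wp.wire.jar dmgrhost:/opt/WebSphere/AppServer/lib/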

 

Run the scripting client in a cluster

To start the Portal Scripting Client in a clustered environment, the scripting client must connect to the SOAP port of the WAS ND machine, as in the following examples.

When WAS security is disabled:

./wpscript.sh -port <ND SOAP Port>

When WAS security is enabled:

./wpscript.sh -port <ND SOAP Port> -user <user> -password <password>

...where...

<ND SOAP Port> The SOAP connector address of the WAS ND machine. The default value is 8879.
<user> The WAS administrative user name.
<password> The WAS administrative user password.
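
Once connected, commands are issued through the portal scripting beans; for example, a session typically begins by logging in through the Portal bean (credentials are placeholders):

    ./wpscript.sh -port 8879 -user wpsadmin -password wpsadmin-password
    # Authenticate the scripting session before issuing further commands:
    wsadmin> $Portal login wpsadmin wpsadmin-password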

 

Change single sign-on (SSO) settings in a cluster

If you have configured the nodes in the cluster to use single sign-on (SSO), you can change the SSO settings for the cluster by completing the following steps:

  1. In the deployment manager administrative console, click...

    Security | Global security | Authentication mechanisms | LTPA | Single signon (SSO)

  2. Update the SSO settings, and click OK.

  3. Ensure that...

    Synchronize changes with Nodes

    ...is selected, and click Save to update the deployment manager configuration.

  4. Restart all servers in the cell, including the node agents and the deployment manager.
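
One possible restart sequence, sketched for a single managed node; WebSphere_Portal is the typical portal server name, and the -user and -password options are needed only while security is enabled:

    # On each managed node:
    cd was_profile_root/bin
    ./stopServer.sh WebSphere_Portal -user admin -password admin-password
    ./stopNode.sh -user admin -password admin-password
    # On the deployment manager machine:
    cd nd_root/bin
    ./stopManager.sh -user admin -password admin-password
    ./startManager.sh
    # Back on each managed node:
    cd was_profile_root/bin
    ./startNode.sh
    ./startServer.sh WebSphere_Portal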

 

Use custom user registry security with a WAS node

If you are using a custom user registry for security with a portal cluster, complete the following steps to ensure that the security configuration is set up properly:

  1. Ensure that you have copied the WMM binary files from one of the portal nodes in the cluster to the deployment manager machine.

  2. Because all nodes being managed by a deployment manager must have the same security settings, perform additional configuration for any nodes in the cell that are running WAS without portal, so that they are also using the custom user registry for security.

    1. Copy the wasextarchive.jar file created as part of the previous step to the installation root directory of the WAS node.

    2. Stop WAS.

    3. Unjar the contents of the wasextarchive.jar file in the WAS installation directory.

    4. Verify that the WMM binary files (wmm*.jar) are in the app_server_root/lib directory.

    5. Restart WAS.
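
A command-line sketch of sub-steps 2 through 5 above; the server name server1 and the use of the bundled JDK's jar tool are illustrative:

    cd app_server_root
    ./bin/stopServer.sh server1
    # Unpack the WMM binaries from the archive into the WAS installation root.
    ./java/bin/jar -xvf wasextarchive.jar
    # Verify that the Member Manager jars are present.
    ls lib/wmm*.jar
    ./bin/startServer.sh server1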

 

Check out Member Manager files on a federated node

For portal on a federated node, check out the Member Manager files before editing them.

  1. On the primary node of the portal cluster, check out the Member Manager files:

    cd portal_server_root/config
    ./WPSconfig.sh check-out-wmm-cfg-files-from-dmgr

  2. Edit files in...

    portal_server_root/wmm

  3. There is a special procedure to change the LDAP administrator password that involves encrypting the password.

  4. If wmm.xml references files not in the following list, scp the referenced files to the dmgr machine and to each node in the cluster.

  5. When you have completed the changes, check the files back in:

    cd portal_server_root/config
    ./WPSconfig.sh check-in-wmm-cfg-files-to-dmgr

  6. The Member Manager changes will be replicated to the other cluster nodes the next time the cluster is synchronized.

 
