Install multiple clusters in a single cell on Linux


Create a new, independent IBM WebSphere Portal cluster in a cell where a WebSphere Portal cluster already exists.

In the following steps, Cluster A is used to describe the existing cluster. Portal B is used to describe the new server profile that is the basis for the new cluster definition, Cluster B.

To install multiple clusters in a single cell:

  1. If necessary, upgrade Cluster A, including the dmgr node, to the current, supported hardware and software levels and to the current version of IBM WebSphere Portal. Upgrade only if Cluster B will be at a higher version than the existing Cluster A.

  2. Complete the following tasks to install and configure Portal B; see the links in Related:

    Important: Maintain the same number of data sources, with names identical to the Cluster A data sources, so that data source bindings in the applications can be resolved on every cluster in which they run. If you are implementing database sharing across the clusters, this applies to both the shared and non-shared domains; all domains should use the same names.

    • Install WebSphere Portal on the primary node

    • Configure Portal to use a database

  3. Optional: Using the same database user ID and password for each identically named domain and data source allows the existing JAAS Authentication Aliases to remain functional. If unique database user IDs and passwords are required, additional manual configuration is needed to create new JAAS Authentication Aliases for each data source and map them accordingly. On the primary node of Cluster A, run the ./ConfigEngine.sh create-alias-multiple-cluster -DauthDomainList=release,jcr -DWasPassword=dmgr_password task to create the new JAAS Authentication Aliases.

    ...where... authDomainList is set to a list of domains that use unique database user IDs and passwords; those domain properties, including user ID and password, must be set correctly in wkplc_comp.properties. A hedged sketch follows.
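    A minimal sketch of this step, assuming the release and jcr domains carry unique credentials that are already set in wkplc_comp.properties (the profile path and password shown are placeholders):

      # Sketch: create JAAS Authentication Aliases for the domains that
      # use unique database credentials (values are placeholders).
      cd /opt/IBM/WebSphere/wp_profile/ConfigEngine
      ./ConfigEngine.sh create-alias-multiple-cluster \
          -DauthDomainList=release,jcr \
          -DWasPassword=dmgr_password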

  4. If necessary, upgrade Portal B to the current cumulative fix.

  5. If you are adding a dynamic cluster and IBM WebSphere Virtual Enterprise is not already installed on Portal B, install it, apply all required ifixes, and augment the wp_profile profile to make it compatible with WebSphere Extended Deployment Operations Optimization.

    See the "Planning the product installation" link in Related section for information about installing WebSphere Virtual Enterprise. Profile augmentation is completed using the pmt.sh (PMT) GUI on systems that support it or through manageprofiles.sh available on all systems. When using the GUI, make sure we select type of augmentation..

      Operations Optimization

    to complete. When using manageprofiles.sh, be sure to follow the instructions to augment a stand-alone application server profile.
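    As a sketch only (the installation root and, in particular, the template path are assumptions; confirm the template name that WebSphere Virtual Enterprise installed on your system):

      # Sketch: augment the Portal profile for Operations Optimization.
      # The -templatePath value is an assumption; verify it against the
      # WebSphere Virtual Enterprise documentation.
      /opt/IBM/WebSphere/AppServer/bin/manageprofiles.sh -augment \
          -profileName wp_profile \
          -templatePath /opt/IBM/WebSphere/AppServer/profileTemplates/xd_augment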

  6. Run the ./ConfigEngine.sh mapped-app-list-create -DWasPassword=foo task to build an inventory list of Portal B enterprise applications and portlets.

  7. If the dmgr is configured to use a stand-alone LDAP user registry, update wkplc.properties, located in...

      WP_PROFILE/ConfigEngine/properties

    on the primary node with the stand-alone LDAP user registry property values from the dmgr. You can find these settings under the heading...

      VMM Standalone LDAP configuration


    Set WasUserid and WasPassword to the dmgr user ID and password. An illustrative sketch of these settings follows.
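    The block in wkplc.properties might look like the following sketch; the property names are illustrative of the stand-alone LDAP section and every value is a placeholder, so copy the actual keys and values from the dmgr configuration:

      # VMM Standalone LDAP configuration (illustrative; mirror the
      # values that the dmgr actually uses)
      standalone.ldap.host=ldap.example.com
      standalone.ldap.port=389
      standalone.ldap.bindDN=cn=root
      standalone.ldap.bindPassword=ldap_password
      standalone.ldap.baseDN=dc=example,dc=com
      WasUserid=uid=dmgradmin,cn=users,dc=example,dc=com
      WasPassword=dmgr_password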

  8. Run the following command from the WP_PROFILE/bin directory to federate Portal B:

      ./addNode.sh dmgr_hostname dmgr_port -includeapps -username wasadmin -password foo

    The variables are defined as:

    • dmgr_hostname is the TCP/IP host name of the dmgr server

    • dmgr_port is the SOAP port number of the dmgr server

    • wasadmin and foo are the user ID and password for the dmgr administrator

    If the WAS administrator user ID and password for the local node are different from the dmgr administrator user ID and password, add the following parameters, as in the sketch after this list:

    • -localusername local_wasadmin
    • -localpassword local_foo
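    A sketch of the full command with local credentials (all values are placeholders):

      # Sketch: federate when the local WAS admin credentials differ
      # from the dmgr administrator's credentials.
      ./addNode.sh dmgr.example.com 8879 -includeapps \
          -username wasadmin -password foo \
          -localusername local_wasadmin -localpassword local_foo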

    See the addNode command documentation for more information about the addNode command and other optional parameters.

    If addNode.sh fails for any reason, complete the following steps before rerunning the task:

    1. Remove the node if the addNode task succeeded in creating it; a sketch using removeNode.sh follows this list.

    2. If items exist, log on to the dmgr and complete the following steps:

      1. Remove all enterprise applications.

      2. Remove the WebSphere_Portal server definition.

      3. Remove the WebSphere Portal JDBC Provider.
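    To back out a partially created node (step 1 above), the standard WebSphere Application Server removeNode command can be used; a minimal sketch, run from the same WP_PROFILE/bin directory (credentials are placeholders):

      # Sketch: remove the partially federated node before retrying addNode.
      ./removeNode.sh -username wasadmin -password foo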

  9. Stop the WebSphere_Portal server on the primary node and verify that the following parameters are set correctly in wkplc.properties:

    Although you can pass these parameters (particularly passwords) directly to any task while creating the cluster, you might want to add them to the properties file temporarily. You can then remove them when you are finished to keep the environment secure. An illustrative sketch follows the list.

    1. Set WasSoapPort to the port used to connect remotely to the dmgr.

    2. Set WasRemoteHostName to the full host name of the server used to remotely connect to the dmgr.

    3. Verify that WasUserid is set to the dmgr administrator user ID.

    4. Verify that WasPassword is set to the dmgr administrator password.

    5. Verify that PortalAdminPwd is set to the WebSphere Portal administrator password.

    6. Verify that ClusterName is set.

    7. Verify that PrimaryNode is set to true.
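    Taken together, the relevant lines in wkplc.properties might look like this sketch (every value is a placeholder; substitute your own environment's settings):

      # Illustrative values only
      WasSoapPort=8879
      WasRemoteHostName=dmgr.example.com
      WasUserid=uid=dmgradmin,cn=users,dc=example,dc=com
      WasPassword=dmgr_password
      PortalAdminPwd=portal_admin_password
      ClusterName=PortalClusterB
      PrimaryNode=true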

  10. Run the ./ConfigEngine.sh map-apps-to-server -DWasPassword=foo task to determine which applications from the inventory list are no longer mapped to Portal B. The task uses the application profiles already in the cell to restore the mappings. Wait 30 minutes after running this task to allow all EAR files to expand before proceeding to the next step; an optional check is sketched below.
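    While waiting, one hedged way to watch the expansion progress is to list the expanded application directories under the profile (the path is typical of a default installation; substitute your own profile root and cell name):

      # Optional check: count the expanded EAR directories as they appear.
      ls /opt/IBM/WebSphere/wp_profile/installedApps/your_cell_name | wc -l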

  11. Run...

      ./ConfigEngine.sh cluster-node-config-post-federation -DWasPassword=dmgr_password

  12. The node is now federated and uses the dmgr cell and its user registry. If the admin user ID and group name differ between the WebSphere Portal and dmgr configurations, choose one of the following options, depending on your security policies:

    • Add the existing admin user ID and group to the dmgr security configuration

    • To change the values in the WebSphere Portal configuration to match the dmgr values, complete the following steps:

    If the dmgr cell is using a stand-alone LDAP user registry, complete these steps after the cluster-node-config-cluster-setup (static cluster) or cluster-node-config-dynamic-cluster-setup (dynamic cluster) task completes.

    1. Start the WebSphere_Portal server.

    2. Verify the required WebSphere Portal admin user ID and group ID are defined in the dmgr user registry that provides security for the cell.

    3. Run...

        ./ConfigEngine.sh wp-change-portal-admin-user -DWasPassword=foo -DnewAdminId=newadminid -DnewAdminPw=newpassword -DnewAdminGroupId=newadmingroupid

      If the value for newAdminGroupId contains a space (for example, Software Group), open wkplc.properties and add the values for newAdminId, newAdminPw, and newAdminGroupId. Save the changes and then run...

        ./ConfigEngine.sh wp-change-portal-admin-user -DWasPassword=dmgr_password

      ...where...

      • WasPassword is set to the administrative password for the dmgr cell
      • newAdminId is set to the fully qualified DN of the WebSphere Portal admin user ID in the cell
      • newAdminGroupId is set to the fully qualified DN of the group for the WebSphere Portal admin user ID in the cell
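      As a sketch with placeholder DNs (substitute the DNs defined in your own user registry):

        # Sketch: align the Portal admin identity with the cell registry.
        ./ConfigEngine.sh wp-change-portal-admin-user \
            -DWasPassword=dmgr_password \
            -DnewAdminId=uid=wpsadmin,cn=users,dc=example,dc=com \
            -DnewAdminPw=new_password \
            -DnewAdminGroupId=cn=wpsadmins,cn=groups,dc=example,dc=com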

    4. After the task completes, stop the WebSphere_Portal server.

  13. Restart the WebSphere_Portal server on Cluster A. Verify that Cluster A is functionally intact by spot-checking pages and portlets, and then verify that Portal B is functionally intact by spot-checking the pages and portlets that you deployed into Portal B before it was federated. Correct any discrepancies or errors before continuing.

    If Portal B is using a nondefault Portal administrative ID (that is, not wpsadmin), the server is not functional until the cluster configuration is complete and the Portal administrative ID has been configured to match the cell's security settings.

  14. Choose one of the following options to define a cluster using Portal B as the basis:

    • To define a static cluster using Portal B as the basis:

      1. Run the ./ConfigEngine.sh cluster-node-config-cluster-setup -DWasPassword=dmgr_password task.

      2. Configure the cluster to use an external web server to take advantage of features such as workload management. Follow the web server configuration instructions for your environment, starting with the step about launching the Plug-ins installation wizard.

      3. To access the WCM content through an external web server:

        1. Log on to the dmgr console.

        2. Select Environment > WebSphere Variables.

        3. From the Scope drop-down menu, select the option...

            Node=nodename, Server=servername

          ...where Node=nodename is the node containing the portal application server.

        4. Update the WCM_HOST variable with the fully qualified host name used to access the portal server through the web server or On Demand Router.

        5. Update the WCM_PORT variable with the port number used to access the portal server through the web server or On Demand Router.

      4. Save the changes and then restart the dmgr, the node agent(s), and the WebSphere_Portal servers.

    • To define a dynamic cluster using Portal B as the basis:

      1. Log on to the dmgr console.

      2. To create a node group:

        1. Click New.

        2. Type the node group Name.

        3. Type any information about the node group in the Description text box.

        4. Click OK.

        5. Click the Save link to save the changes to the master configuration.

      3. To add members to the node group:

        1. Click System administration > Node groups.

        2. Click the name of the node group to add members to.

        3. Click Node group members under Additional Properties.

        4. Click Add.

        5. Select the primary node and then click Add.

        6. Click the Save link to save the changes to the master configuration.

      4. To create a dynamic cluster in the node group:

        1. Click Servers > Clusters > Dynamic clusters.

        2. Click New.

        3. Select WAS from the Server Type pull-down menu and then click Next.

        4. Type the cluster name in the Dynamic cluster name text box and then click Next. Type the same value that you provided for the ClusterName parameter in wkplc.properties of the primary node.

        5. Remove all default membership policies and then click Subexpression builder.

        6. Enter the following information in the Subexpression builder window:

          1. Select and from the Logical operator pull-down menu.

          2. Select Nodegroup from the Select operand pull-down menu.

          3. Select Equals (=) from the Operator pull-down menu.

          4. Type the node group name created in the previous step in the Value text box.

          5. Click Generate subexpression.

          6. Click Append.

        7. Click Preview membership to verify that all nodes included in the node group are displayed and then click Next.

        8. Click the Create the cluster member using an existing server as a template radio button and then select the WebSphere_Portal server for the primary node from the pull-down menu.

        9. Click Next.

        10. Specify the required dynamic cluster properties for minimum and maximum number of server instances.

        11. Review the summary page to verify your actions and then click Finish.

      5. Set the following parameters in wkplc.properties:

        1. Verify that CellName is set to the dmgr cell.

        2. Verify that NodeName is set to the local node.

        3. Set ServerName to the server used for the dynamic cluster member on this node.

        4. Verify that PrimaryNode is set to true.

      6. Create the dynamic cluster...

          ./ConfigEngine.sh cluster-node-config-dynamic-cluster-setup -DWasPassword=dmgr_password

      7. To access the WCM content through an external web server:

        1. Log on to the dmgr console.

        2. Select Environment > WebSphere Variables.

        3. From the Scope drop-down menu, select the option...

            Node=nodename, Server=servername

          ...where Node=nodename is the node containing the portal application server.

        4. Update the WCM_HOST variable with the fully qualified host name used to access the portal server through the web server or On Demand Router.

        5. Update the WCM_PORT variable with the port number used to access the portal server through the web server or On Demand Router.

      8. Save the changes and then restart the dmgr, the node agent(s), and the WebSphere_Portal servers.
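    Whether you defined the cluster with the static or the dynamic option, the WCM_HOST and WCM_PORT updates can also be scripted instead of made in the console. A minimal wsadmin (Jython) sketch, with placeholder node, server, host, and port values:

      # wsadmin -lang jython sketch: update WCM_HOST and WCM_PORT at the
      # server scope. The node, server, host, and port are placeholders.
      scope = AdminConfig.getid('/Node:portal_node/Server:WebSphere_Portal/')
      for entry in AdminConfig.list('VariableSubstitutionEntry', scope).splitlines():
          name = AdminConfig.showAttribute(entry, 'symbolicName')
          if name == 'WCM_HOST':
              AdminConfig.modify(entry, [['value', 'www.example.com']])
          elif name == 'WCM_PORT':
              AdminConfig.modify(entry, [['value', '80']])
      AdminConfig.save()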

  15. Install any additional nodes in the cell to support additional cluster members for Cluster B, configured identically to the primary node; then federate them as secondary nodes and define cluster members on those nodes. For information about adding nodes, navigate to Installing WebSphere Portal > Setting up a cluster, select the appropriate operating system, and navigate to Prepare additional nodes. You can add nodes to a static or dynamic cluster, and you can also add vertical cluster members to an existing node in a static or dynamic cluster to provide vertical scaling.

    Remember: If you are creating multiple dynamic clusters, remember to install WebSphere Virtual Enterprise on each additional node and augment the Portal profile with WebSphere Virtual Enterprise.

  16. Restart the WebSphere_Portal servers on Cluster A and Cluster B.

  17. After setting up the multiple clusters, you can complete additional tasks to ensure a balanced workload and failover support:

    • Update the web server configuration to enable user requests to be routed to the new cluster. Refer to "Routing requests across clusters" for information about using a web server with multiple clusters in a cell.

    • Update the database configuration to share database domains between clusters. Refer to "Sharing database domains between clusters" for information about redundancy and failover support.

  18. If you entered passwords in any of the properties files while creating the cluster, remove them now for security purposes. See "Delete passwords from properties files" under Related for information; a minimal sketch follows.
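    A minimal shell sketch of the cleanup, assuming the passwords were added to wkplc.properties (adjust the file list and property names to whatever you actually set):

      # Sketch: blank out password properties left behind after cluster setup.
      cd /opt/IBM/WebSphere/wp_profile/ConfigEngine/properties
      sed -i -e 's/^\(WasPassword\)=.*/\1=/' \
             -e 's/^\(PortalAdminPwd\)=.*/\1=/' wkplc.properties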

Installation of Cluster B is complete. It is now a cluster independent of Cluster A, which means that Cluster B can have its own configuration, set of user portlets, and target community. Any applications that are common between Cluster A and Cluster B are most likely infrastructure or administration related; take special care to preserve their commonality, and their maintenance levels, across the clusters.


Parent: Set up multiple clusters on Linux
Related:
Linux clustered server: Configure the portal to use a database

Recommended fixes for WebSphere Extended Deployment

Middleware nodes and servers

Plan the product installation

Delete passwords from properties files