Install multiple clusters in a single cell on Solaris

Create a new, independent WebSphere Portal cluster in a cell where a WebSphere Portal cluster already exists.

In the following steps, Cluster A will be used to describe the existing cluster. Portal B will be used to describe the new server profile that will be the basis for the new cluster definition, Cluster B.

To install multiple clusters in a single cell:

  1. Upgrade Cluster A, including the Deployment Manager node, to the current, supported hardware and software levels and to the current version of IBM WebSphere Portal.

  2. Install and configure Portal B; see the "Preparing the primary node" topic for your operating system for details. You can skip the steps in the "Preparing the WAS Deployment Manager" topic because the Deployment Manager is already operational.

    Maintain the same number of data sources, with names identical to the Cluster A data sources, so that data source bindings in the applications can be resolved on every cluster in which they run. If you implement database sharing across the clusters, this requirement applies to both the shared and non-shared domains; all domains should use the same names.
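    For illustration, each database domain keeps the same data source name on both clusters. The values below are hypothetical examples of the DataSourceName entries in wkplc_dbdomain.properties:

```
# Identical entries on Cluster A and Cluster B (values are examples):
release.DataSourceName=wpdbDS_release
jcr.DataSourceName=wpdbDS_jcr
```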

  3. Optional: Using the same database user ID and password for each identically named domain and data source allows the existing JAAS Authentication Aliases to remain functional. If unique database user IDs and passwords are required, additional manual configuration is needed to create new JAAS Authentication Aliases for each data source and map them accordingly. On the primary node of Cluster A, run ./ConfigEngine.sh create-alias-multiple-cluster -DauthDomainList=release,jcr -DWasPassword=dmgr_password from the WP_PROFILE/ConfigEngine directory to create the new JAAS Authentication Aliases.

      where authDomainList is set to a list of domains that use unique database user IDs and passwords, and those domain properties are set correctly in wkplc_comp.properties, including the user ID and password.

  4. Optional. If necessary, upgrade Portal B to the current cumulative fix.

  5. If you are adding a dynamic cluster and IBM WebSphere Virtual Enterprise is not already installed on Portal B, install it, apply all required ifixes, and augment the wp_profile profile to make it compatible with WebSphere Extended Deployment Operations Optimization. See the "Planning the product installation" link below for information about installing WebSphere Virtual Enterprise. Profile augmentation is performed using the Profile Management Tool (PMT) graphical interface on systems that support it or through the manageprofiles command that is available on all systems. When using the graphical interface as discussed in the link below, make sure you select Operations Optimization when choosing the type of augmentation to perform. When using the manageprofiles command as discussed in the link below, be sure to follow the instructions to augment a stand-alone application server profile.
    Then run ./ConfigEngine.sh mapped-app-list-create -DWasPassword=foo from the WP_PROFILE/ConfigEngine directory to build an inventory list of Portal B enterprise applications and portlets.

  6. If the Deployment Manager is configured to use a stand-alone LDAP user registry, update ...

      WP_PROFILE/ConfigEngine/properties/wkplc.properties
    on the primary node with the stand-alone LDAP user registry property values from the Deployment Manager. You can find these settings under the VMM Stand-alone LDAP configuration heading.

    Ensure that you set WasUserid and WasPassword to the Deployment Manager user ID and password.

  7. Optional. Run the following command from the WP_PROFILE/bin directory to federate Portal B:

    If you chose the option to automatically federate this new profile to a Deployment Manager during profile creation, this step is not necessary. If you are not sure, run the addNode command as documented below; the task fails if the node is already federated.

    addNode.sh dmgr_hostname dmgr_port -includeapps
    	-username was_admin_user
    	-password was_foo

    The above variables are defined as:

    • dmgr_hostname is the TCP/IP host name of the Deployment Manager server

    • dmgr_port is the SOAP port number of the Deployment Manager server

    • was_admin_user and was_foo are the user ID and password for the Deployment Manager administrator

    If the WAS administrator user ID and password for the local node are different from the Deployment Manager administrator user ID and password, add the following parameters to the addNode task:

    • -localusername local_was_admin_user

    • -localpassword local_was_foo

    See the addNode command file for more information about the addNode command and other optional parameters.
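    As a worked example, the following sketch assembles the addNode call. The host name, SOAP port, and credentials are placeholders, and the script only prints the command rather than running it; run the real command from WP_PROFILE/bin on the node being federated.

```shell
#!/bin/sh
# Placeholder values -- substitute your own deployment manager host,
# SOAP port, and administrator credentials.
DMGR_HOST=dmgr.example.com   # hypothetical Deployment Manager host
DMGR_PORT=8879               # hypothetical SOAP port

# Print the federation command, including the local-credential options
# used when the local WAS administrator differs from the dmgr administrator.
echo "./addNode.sh ${DMGR_HOST} ${DMGR_PORT} -includeapps \
-username was_admin_user -password was_password \
-localusername local_was_admin_user -localpassword local_was_password"
```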
    Warning: If the addNode task fails for any reason, perform the following steps before rerunning the task:

    1. Remove the node if the addNode task succeeded in creating the node.

    2. Log on to the dmgr and perform the following steps if the items exist:

      1. Remove all enterprise applications.

      2. Remove the WebSphere_Portal server definition.

      3. Remove the WebSphere Portal JDBC Provider.

  8. Stop the WebSphere_Portal server on the primary node and ensure that the following parameters are set correctly in ...

      WP_PROFILE/ConfigEngine/properties/wkplc.properties

      Although you can add these parameters directly to any task that you run while creating the cluster, you may want to add them to the properties file for the duration of cluster creation and then remove them when you are finished, to keep the environment secure.

      1. Set WasSoapPort to the port used to connect remotely to the dmgr.

      2. Set WasRemoteHostName to the full host name of the server used to remotely connect to the dmgr.

      3. Verify that WasUserid is set to Deployment Manager administrator user ID.

      4. Verify that WasPassword is set to Deployment Manager administrator password.

      5. Verify that PortalAdminPwd is set to WebSphere Portal administrator password.

      6. Verify that ClusterName is set.

      7. Verify that PrimaryNode is set to true.
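      The checks above can be sketched as a small script. This is a sanity-check sketch only; the sample file created here stands in for WP_PROFILE/ConfigEngine/properties/wkplc.properties, and the values are placeholders:

```shell
#!/bin/sh
# Sketch: verify the wkplc.properties settings listed above before running
# the cluster tasks. The sample file below stands in for the real
# WP_PROFILE/ConfigEngine/properties/wkplc.properties (values are examples).
WKPLC=wkplc.properties
cat > "$WKPLC" <<'EOF'
WasSoapPort=8879
WasRemoteHostName=dmgr.example.com
WasUserid=wasadmin
WasPassword=dmgr_password
PortalAdminPwd=portal_password
ClusterName=ClusterB
PrimaryNode=true
EOF

# Report any parameter that is missing entirely.
missing=0
for key in WasSoapPort WasRemoteHostName WasUserid WasPassword \
           PortalAdminPwd ClusterName PrimaryNode; do
  grep -q "^${key}=" "$WKPLC" || { echo "missing: $key"; missing=1; }
done
# On the primary node, PrimaryNode must be true.
grep -q '^PrimaryNode=true$' "$WKPLC" || { echo "PrimaryNode must be true"; missing=1; }
[ "$missing" -eq 0 ] && echo "wkplc.properties checks passed"
```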


    Run ./ConfigEngine.sh map-apps-to-server -DWasPassword=foo from the WP_PROFILE/ConfigEngine directory to determine which applications from the inventory list are no longer mapped to Portal B. The task uses the application profiles already in the cell to restore the mappings. Wait 30 minutes after running this task to allow all EAR files to expand before proceeding to the next step.

  9. Run the ./ConfigEngine.sh cluster-node-config-post-federation -DWasPassword=dmgr_password task from the WP_PROFILE/ConfigEngine directory.

  10. Since the WebSphere Portal node now uses the security settings of the Deployment Manager cell, you need to update the WebSphere Portal administrative user ID and password to match an administrative user defined in the cell's user registry. If the Deployment Manager cell uses a different user registry, run ./ConfigEngine.sh wp-change-portal-admin-user -DWasPassword=foo -DnewAdminId=newadminid -DnewAdminPw=newpassword -DnewAdminGroupId=newadmingroupid from the WP_PROFILE/ConfigEngine directory to update the WebSphere Portal administrative user ID.

    The password value for -DWasPassword is the Deployment Manager administrative password.

    You must provide the full distinguished name (DN) for the newAdminId and newAdminGroupId parameters.
    Additional parameter for stopped servers: This task verifies the user against a running server instance. If the server is stopped, add the -Dskip.ldap.validation=true parameter to the task to skip the validation.
    Fastpath: If the value for newAdminGroupId contains a space, for example Software Group, open wkplc.properties and add the values for newAdminId, newAdminPw, and newAdminGroupId. Save your changes and then run ./ConfigEngine.sh wp-change-portal-admin-user -DWasPassword=dmgr_password from the WP_PROFILE/ConfigEngine directory.

    If stand-alone LDAP security is already enabled on the Deployment Manager cell, delay running the wp-change-portal-admin-user task until after the cluster-node-config-cluster-setup (static cluster) or cluster-node-config-dynamic-cluster-setup (dynamic cluster) task completes. After running the wp-change-portal-admin-user task, start or restart the WebSphere_Portal server to use the updated administrator user ID. The WebSphere Portal administrative user ID and administrative group must exist in the Deployment Manager cell before you run the wp-change-portal-admin-user task. -DnewAdminPw is an optional parameter to update the administrative password in wkplc.properties if required.
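    A sketch of the full invocation with distinguished names follows. The DNs and passwords are hypothetical, and the script only assembles and prints the command:

```shell
#!/bin/sh
# Sketch of the wp-change-portal-admin-user call with full DNs.
# The DNs below are hypothetical; use entries that actually exist in the
# cell's user registry. If WebSphere_Portal is stopped, also append
# -Dskip.ldap.validation=true as described in the step above.
NEW_ADMIN_ID='uid=wpadmin,cn=users,dc=example,dc=com'
NEW_ADMIN_GROUP='cn=wpsadmins,cn=groups,dc=example,dc=com'

CMD="./ConfigEngine.sh wp-change-portal-admin-user \
-DWasPassword=dmgr_password \
-DnewAdminId=${NEW_ADMIN_ID} \
-DnewAdminPw=newpassword \
-DnewAdminGroupId=${NEW_ADMIN_GROUP}"
echo "$CMD"
```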

  11. Restart the WebSphere_Portal server on Cluster A. Verify that Cluster A is functionally intact by spot checking pages and portlets and then verify that Portal B is functionally intact by spot checking pages and portlets that you deployed into Portal B before it was federated. Any discrepancies or errors should be corrected now before continuing.

      If Portal B uses a non-default Portal administrative ID (that is, not wpsadmin), the server will not be functional until the cluster configuration is complete and the Portal administrative ID has been configured to match the cell's security settings.

  12. Choose one of the following options to define a cluster using Portal B as the basis:

    • Define a static cluster using Portal B as the basis:


        ./ConfigEngine.sh cluster-node-config-cluster-setup -DWasPassword=dmgr_password

      1. Configure the cluster to use an external Web server to take advantage of features such as workload management. Follow the Web server configuration instructions for your environment, starting with the step about launching the Plug-ins installation wizard.

      2. Access the Web Content Manager content through an external Web server:

        1. Log on to the dmgr console.

        2. Select Environment -> WebSphere Variables.

        3. From the Scope drop-down menu, select the Node=nodename, Server=servername option to narrow the scope of the listed variables, where Node=nodename is the node that contains the application server.

        4. Update the WCM_HOST variable with the fully qualified host name used to access the WebSphere Portal server through the Web server or On Demand Router.

        5. Update the WCM_PORT variable with the port number used to access the WebSphere Portal server through the Web server or On Demand Router.

      3. Add a new JCRSeedBus cluster bus member and remove the previous server bus member:

        1. Log on to the Deployment Manager Administrative Console.

        2. Navigate to Service Integration -> Buses and then click JCRSeedBus.

        3. Under the Topology heading, click Bus members.

        4. Click Add.

        5. Click the Cluster radio button to define the type of new bus member and then select the name of this newly created Portal cluster from the drop-down menu. Click Next to continue.

        6. Ensure that the Enable Messaging Engine Policy Assistance checkbox is checked. Then choose the required messaging engine policy setting; refer to Messaging engine policy assistance for information.

        7. Click Next.

        8. Select the Data store radio button and then click Next.

        9. Click the name of the first messaging engine listed in the table.

        10. Enter the following information on the Specify data store properties panel:

          Enter values in the following data store fields:

          • Data source JNDI name: jdbc/data_source_name, where data_source_name is the value of the jcr.DataSourceName property in the wkplc_dbdomain.properties file, located in the WP_PROFILE/ConfigEngine/properties directory of the primary node.

          • Schema name: the value of the jcr.DbSchema property in wkplc_dbdomain.properties.

          • Authentication alias: the entry in the drop-down list whose name contains the JCR data source name.
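          For example, the JNDI name can be derived from the jcr.DataSourceName property. This sketch creates a sample properties file in place of the real one on the primary node (property values are hypothetical):

```shell
#!/bin/sh
# Sketch: derive the data store's JNDI name and schema from
# wkplc_dbdomain.properties. A sample file is created here for
# illustration; the real file lives under
# WP_PROFILE/ConfigEngine/properties on the primary node.
cat > wkplc_dbdomain.properties <<'EOF'
jcr.DataSourceName=wpdbDS_jcr
jcr.DbSchema=jcr
EOF

DS=$(sed -n 's/^jcr\.DataSourceName=//p' wkplc_dbdomain.properties)
SCHEMA=$(sed -n 's/^jcr\.DbSchema=//p' wkplc_dbdomain.properties)
echo "Data source JNDI name: jdbc/${DS}"
echo "Schema name: ${SCHEMA}"
```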

        11. Ensure that the Create tables checkbox is checked and then click Next to continue.

        12. The Configure messaging engines panel displays the list of messaging engines. If any additional messaging engines are displayed, repeat the steps to configure each additional messaging engine in the table. Then click Next to continue.

        13. Review the heap sizes on the Tune performance parameters panel. Click Next to continue.

            To change the values, click the Change heap sizes checkbox and enter new values in the Proposed heap sizes text fields.

        14. The Summary panel displays, which allows you to review the messaging engine configuration changes. Click Previous if you need to go back and make additional changes, or click Finish to complete the configuration process.

        15. Click Save to save the changes to the master configuration.

        16. Synchronize changes to the primary node.

      4. Save changes and then restart the dmgr, the node agent(s), server1, and the WebSphere_Portal servers.

  • Define a dynamic cluster using Portal B as the basis:

    1. Log on to the dmgr console.

    2. Create a node group:

      1. Click New.

      2. Type the node group Name.

      3. Type any information about the node group in the Description text box.

      4. Click OK.

      5. Click the Save link to save changes to the master configuration.

    3. Add members to the node group:

      1. Click System administration -> Node groups.

      2. Click on the name of the node group that you want to add members to.

      3. Click Node group members under Additional Properties.

      4. Click Add.

      5. Select the primary node and then click Add.

      6. Click the Save link to save changes to the master configuration.

    4. Create a dynamic cluster in the node group:

      1. Click Servers -> Clusters -> Dynamic clusters.

      2. Click New.

      3. Select WAS from the Server Type pull-down menu and then click Next.

      4. Type the cluster name in the Dynamic cluster name text box and then click Next. Type the same value that you provided for the ClusterName parameter in wkplc.properties of primary node.

      5. Remove all default membership policies and then click Subexpression builder.

      6. Enter the following information in the Subexpression builder window:

        1. Select and from the Logical operator pull-down menu.

        2. Select Nodegroup from the Select operand pull-down menu.

        3. Select Equals (=) from the Operator pull-down menu.

        4. Type the nodegroup name you created in the previous step in the Value text box.

        5. Click Generate subexpression.

        6. Click Append.
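        The generated membership policy typically contains a subexpression like the following (the node group name is illustrative, and the exact operand label can vary slightly by product version):

```
node_nodegroup = 'PortalNodeGroup'
```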

      7. Click Preview membership to verify that all nodes included in the nodegroup display and then click Next.

      8. Click the Create the cluster member using an existing server as a template radio button and then select the WebSphere_Portal server for the primary node from the pull-down menu.

      9. Click Next.

      10. Specify the required dynamic cluster properties for minimum and maximum number of server instances.

      11. Review the summary page to verify actions and then click Finish.

    5. Define or verify the following parameters in ...

        WP_PROFILE/ConfigEngine/properties/wkplc.properties

      1. Verify that CellName is set to the dmgr cell.

      2. Verify that NodeName is set to the local WebSphere Portal node.

      3. Set ServerName to the server that will be used for the dynamic cluster member on this node.

      4. Verify that PrimaryNode is set to true.


      Run ./ConfigEngine.sh cluster-node-config-dynamic-cluster-setup -DWasPassword=dmgr_password from the WP_PROFILE/ConfigEngine directory to create the dynamic cluster.

    6. Access the Web Content Manager content through an external Web server:

      1. Log on to the dmgr console.

      2. Select Environment -> WebSphere Variables.

      3. From the Scope drop-down menu, select the Node=nodename, Server=servername option to narrow the scope of the listed variables, where Node=nodename is the node that contains the application server.

      4. Update the WCM_HOST variable with the fully qualified host name used to access the WebSphere Portal server through the Web server or On Demand Router.

      5. Update the WCM_PORT variable with the port number used to access the WebSphere Portal server through the Web server or On Demand Router.

    7. Add a new JCRSeedBus cluster bus member and remove the previous server bus member:

      1. Log on to the Deployment Manager Administrative Console.

      2. Navigate to Service Integration -> Buses and then click JCRSeedBus.

      3. Under the Topology heading, click Bus members.

      4. Click Add.

      5. Click the Cluster radio button to define the type of new bus member and then select the name of this newly created Portal cluster from the drop-down menu. Click Next to continue.

      6. Ensure that the Enable Messaging Engine Policy Assistance checkbox is checked. Then choose the required messaging engine policy setting; refer to Messaging engine policy assistance for information.

      7. Click Next.

      8. Select the Data store radio button and then click Next.

      9. Click the name of the first messaging engine listed in the table.

      10. Enter the following information on the Specify data store properties panel:

        Enter values in the following data store fields:

        • Data source JNDI name: jdbc/data_source_name, where data_source_name is the value of the jcr.DataSourceName property in the wkplc_dbdomain.properties file, located in the WP_PROFILE/ConfigEngine/properties directory of the primary node.

        • Schema name: the value of the jcr.DbSchema property in wkplc_dbdomain.properties.

        • Authentication alias: the entry in the drop-down list whose name contains the JCR data source name.

      11. Ensure that the Create tables checkbox is checked and then click Next to continue.

      12. The Configure messaging engines panel displays containing the list of messaging engines. If any additional messaging engines are displayed, repeat the steps to configure each additional messaging engine in the table. Then click Next to continue.

      13. Review the heap sizes on the Tune performance parameters panel. Click Next to continue.

          To change the values, click the Change heap sizes checkbox and enter new values in the Proposed heap sizes text fields.

      14. The Summary panel displays, which allows you to review the messaging engine configuration changes. Click Previous if you need to go back and make additional changes or click Finish to complete the configuration process.

      15. Click Save to save the changes to the master configuration.

      16. Synchronize changes to the primary node.

    8. Save changes and then restart the dmgr, the node agent(s), server1, and the WebSphere_Portal servers.

  13. Install any additional nodes for Cluster B into the cell, configured identically to the primary node, then federate them as secondary nodes and define cluster members on them. For information about adding nodes, navigate to Install WebSphere Portal -> Set up a cluster, select the appropriate OS, and navigate to Prepare additional nodes. You can add nodes to a static or dynamic cluster, and you can also add vertical cluster members to an existing node in a static or dynamic cluster to provide vertical scaling.

    If you are creating multiple dynamic clusters, remember to install WebSphere Virtual Enterprise on each additional node and augment the Portal profile with WebSphere Virtual Enterprise.

  14. Restart the server1 and WebSphere_Portal servers on Cluster A and Cluster B.

  15. After setting up multiple clusters, there are additional tasks that you can perform to ensure a balanced workload and failover support:

    • Update the web server configuration to enable user requests to be routed to the new cluster. Refer to "Routing requests across clusters" for information about using a web server with multiple clusters in a cell.

    • Update the database configuration to share database domains between clusters. Refer to "Sharing database domains between clusters" for information about redundancy and failover support.

  16. If you entered passwords in any of the properties files while creating the cluster, remove them for security purposes. See "Deleting passwords from properties files" under Related tasks for information.

    Installation of Cluster B is complete. It is now an independent cluster from Cluster A, which means that Cluster B can have its own configuration, set of end-user portlets, and target community. Any applications that are common between Cluster A and Cluster B are most likely infrastructure or related to administration, and special care needs to be taken to preserve their commonality between clusters and correct maintenance levels.


    Parent

    Set up multiple clusters on Solaris


    Related tasks

    Recommended fixes for WebSphere Extended Deployment

    Plan the product installation

    Use the graphical user interface to augment profiles

    Use the manageprofiles command to augment and unaugment profiles


    Delete passwords from properties files

     

