

Install multiple clusters in a single cell on i5/OS

Create a new, independent WebSphere Portal cluster in a cell where a WebSphere Portal cluster already exists.

In the following steps, Cluster A will be used to describe the existing cluster. Portal B will be used to describe the new server profile that will be the basis for the new cluster definition, Cluster B.

Perform the following steps to install multiple clusters in a single cell:

  1. Upgrade Cluster A, including the Deployment Manager node, to the current, supported hardware and software levels and to the current version of WebSphere Portal.

  2. Configure Portal B; see the "Preparing the primary node" topic for the appropriate operating system for details.

    Maintain the same number of data sources, with names identical to the Cluster A data sources, so that the data source bindings in the applications can be resolved on every cluster in which they run. If you are implementing database sharing across the clusters, this applies to both the shared and non-shared domains; all domains should use the same data source names.
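    For example, if the Cluster A release domain uses a data source named wpdbDS_release, Portal B should define its release data source with that same name. A minimal sketch of the relevant wkplc_comp.properties entries, using hypothetical data source names (match them to the names Cluster A actually uses, and repeat for every domain):

      release.DataSourceName=wpdbDS_release
      jcr.DataSourceName=wpdbDS_jcr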

  3. Optional: Using the same database user ID and password for each identically named domain and data source allows the existing JAAS Authentication Aliases to remain functional. If unique database user IDs and passwords are required, additional manual configuration is needed to create new JAAS Authentication Aliases for each data source and map them accordingly. Run...

      ConfigEngine.sh create-alias-multiple-cluster -DauthDomainList=release,jcr -DWasPassword=dmgr_password task from...

        profile_root/ConfigEngine

      ...to create the new JAAS Authentication Aliases.

      where authDomainList is set to a comma-separated list of the domains that use unique database user IDs and passwords; the properties for those domains, including user ID and password, must be set correctly in the wkplc_comp.properties file.
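      For example, if the release and jcr domains on Portal B require their own credentials, a sketch with hypothetical values would set the domain properties in wkplc_comp.properties and then run the task from profile_root/ConfigEngine:

        release.DbUser=wpdbusr_rel
        release.DbPassword=rel_password
        jcr.DbUser=wpdbusr_jcr
        jcr.DbPassword=jcr_password

        ConfigEngine.sh create-alias-multiple-cluster -DauthDomainList=release,jcr -DWasPassword=dmgr_password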

    • Optional: If necessary, upgrade Portal B to the current cumulative fix.

    • Run...

        ConfigEngine.sh mapped-app-list-create -DWasPassword=password task from...

          profile_root/ConfigEngine

        ...to build an inventory of Portal B enterprise applications and portlets.
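        For example, on i5/OS the task can be run from QShell; the profile path shown below is a hypothetical default and may differ on your system:

          STRQSH
          cd /QIBM/UserData/WebSphere/AppServer/V7/ND/profiles/wp_profile/ConfigEngine
          ConfigEngine.sh mapped-app-list-create -DWasPassword=password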

      • Stop the server1 and WebSphere_Portal servers on Portal B and ensure that the required parameters are set correctly in wkplc.properties, located in...

          profile_root/ConfigEngine/properties

      • If the Deployment Manager is configured to use a stand-alone LDAP user registry, update wkplc.properties, located in...

          profile_root/ConfigEngine/properties

        on the primary node with the stand-alone LDAP user registry property values from the Deployment Manager. You can find these settings under the VMM Stand-alone LDAP configuration heading.

        Ensure that you set WasUserid and WasPassword to the Deployment Manager user ID and password.
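        A minimal sketch of these wkplc.properties entries, with hypothetical values; the exact property names appear under the VMM Stand-alone LDAP configuration heading in your file and may vary by release:

          WasUserid=uid=dmgradmin,o=yourco
          WasPassword=dmgr_password
          standalone.ldap.id=ldap1
          standalone.ldap.host=ldap.yourco.com
          standalone.ldap.port=389
          standalone.ldap.bindDN=uid=wpsbind,o=yourco
          standalone.ldap.bindPassword=bind_password
          standalone.ldap.ldapServerType=IDS
          standalone.ldap.baseDN=o=yourco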

      • Run...

          ConfigEngine.sh cluster-node-config-pre-federation -DWasPassword=password task, from profile_root/ConfigEngine of the primary node.
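        This task connects to the Deployment Manager, so the connection properties in wkplc.properties must already point at it. A sketch with hypothetical values (confirm the property names against your wkplc.properties):

          WasRemoteHostName=dmgr.yourco.com
          WasSoapPort=8879
          WasUserid=uid=dmgradmin,o=yourco
          WasPassword=dmgr_password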

        • Run...

            ConfigEngine.sh map-apps-to-server -DWasPassword=password task to determine which applications from the inventory list are no longer mapped to Portal B.

            The task uses the application profiles already in the cell to restore the mappings. Wait 30 minutes after running this task to allow all EAR files to expand before proceeding to the next step.

          • To federate Portal B into the deployment manager cell:

            1. Ensure that all database parameters are correctly set, including passwords, in the wkplc_comp.properties and wkplc_dbtype.properties files.

            2. Run...

                ConfigEngine.sh cluster-node-config-post-federation -DWasPassword=dmgr_password task.

              • After running the cluster-node-config-post-federation task, wait at least 30 minutes to allow all EAR files to expand.

              • Since the WebSphere Portal node is now using security settings from the Deployment Manager cell, update the WebSphere Portal administrative user ID and password to match an administrative user defined in the cell's user registry. Run...

                  ConfigEngine.sh wp-change-portal-admin-user -DWasPassword=password -DnewAdminId=newadminid -DnewAdminPw=newpassword -DnewAdminGroupId=newadmingroup task, from...

                    profile_root/ConfigEngine

                  ...to update the WebSphere Portal administrative user ID if the Deployment Manager cell is using a different user registry.

                  This task verifies the user against a running server instance.

                  If the server is stopped, add the -Dskip.ldap.validation=true parameter to the task to skip the validation.

                  If standalone LDAP security is already enabled on the Deployment Manager cell, delay running the wp-change-portal-admin-user task until after the cluster-node-config-cluster-setup (static cluster) or cluster-node-config-dynamic-cluster-setup (dynamic cluster) task completes. After running the wp-change-portal-admin-user task, start or restart the WebSphere_Portal server to use the updated administrator user ID.

                  The WebSphere Portal administrative user ID and administrative group must exist in the Deployment Manager before running the wp-change-portal-admin-user task. -DnewAdminPw is an optional parameter to update the Administrative password in wkplc.properties if required.
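                  A hypothetical invocation, assuming the user uid=wpadmin,o=yourco and the group cn=wpadmins,o=yourco already exist in the cell's user registry:

                    ConfigEngine.sh wp-change-portal-admin-user -DWasPassword=password -DnewAdminId=uid=wpadmin,o=yourco -DnewAdminPw=newpassword -DnewAdminGroupId=cn=wpadmins,o=yourco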

                • From the administrative console, click System Administration > Node Agents.

                • Check the box next to the required node agent and then click Restart.

                • Cycle the deployment manager.

                • Cycle the WebSphere_Portal server on Portal B.

          • Restart the WebSphere_Portal server on Cluster A. Verify that Cluster A is functionally intact by spot checking pages and portlets and then verify that Portal B is functionally intact by spot checking pages and portlets that you deployed into Portal B before it was federated. Any discrepancies or errors should be corrected now before continuing.

            If Portal B uses a non-default portal administrative ID (that is, not wpsadmin), the server will not be functional until the cluster configuration is complete and the portal administrative ID has been configured to match the cell's security settings.

          • To define a cluster using Portal B as the basis:

            1. Run...

                ConfigEngine.sh cluster-node-config-cluster-setup -DWasPassword=dmgr_password task.
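              A hypothetical invocation from profile_root/ConfigEngine on the primary node; the name of the new cluster (for example, ClusterB) is typically taken from the ClusterName property in wkplc.properties, so verify that property before running the task:

                ConfigEngine.sh cluster-node-config-cluster-setup -DWasPassword=dmgr_password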

              • Configure the cluster to use an external Web server to take advantage of features such as workload management. Choose one of the following options:

              • Access the Web Content Management content through an external Web server:

                1. Log on to the deployment manager administrative console.

                2. Select Environment > WebSphere Variables.

                3. From the Scope drop-down menu, select the Node=nodename, Server=servername option to narrow the scope of the listed variables, where Node=nodename is the node that contains the application server.

                4. Update the WCM_HOST variable with the fully qualified host name used to access the WebSphere Portal server through the Web server or On Demand Router.

                5. Update the WCM_PORT variable with the port number used to access the WebSphere Portal server through the Web server or On Demand Router.
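                For example, if an external Web server in front of the cluster answers at www.example.com on the default HTTP port, the variables would be set to hypothetical values such as:

                  WCM_HOST = www.example.com
                  WCM_PORT = 80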

              • Save changes and then restart the deployment manager, the node agent(s), server1, and the WebSphere_Portal servers.

              • Change the task list settings to point to the cluster instead of a single server if using Portal business process integration. See one of the following sections in the Information Center for information:

                • Cross cell: Adding a BPI-enabled portal server to a managed cell in a cross-cell setup

                • Single cell: Adding a BPI-enabled portal server to a managed cell in a single-cell setup

          • Install into the cell any additional nodes needed to support additional cluster members for Cluster B, configured identically to the primary node; then federate them as secondary nodes and define cluster members on those nodes. For information about adding nodes, navigate to Installing WebSphere Portal > Setting up WebSphere Portal > Setting up a clustered production environment > Setting up a production environment on i5/OS > Preparing additional nodes on i5/OS. You can add additional nodes, vertical cluster members, or both.

          • Restart the server1 and WebSphere_Portal servers on Cluster A and Cluster B.

Installation of Cluster B is complete. It is now a cluster independent of Cluster A, which means that Cluster B can have its own configuration, set of end-user portlets, and target community. Any applications that are common to Cluster A and Cluster B are most likely infrastructure or administration related; take special care to preserve their commonality and correct maintenance levels across the clusters.


Parent topic:

Set up multiple clusters on i5/OS


Related tasks


Add a BPI-enabled portal server to a managed cell in a cross-cell setup
Add a BPI-enabled portal server to a managed cell in a single-cell setup