Set up farm instances using a GPFS file shared configuration


Choose this option to set up the portal farm from a shared file system that uses IBM GPFS file sharing. The GPFS configuration is supported only on Windows, Linux, and AIX operating systems. With this option you maintain only one image, which makes farm maintenance easier because multiple copies of this one image can be used to update farm instances on a server-by-server basis. There are fewer resources to manage and the environment is simpler. The disadvantage is that changes cannot be applied to individual servers, and most changes require a server restart.


The first machine where IBM WebSphere Portal is installed is used as a basis for the portal farm and is termed the original server.

  1. Install and configure GPFS on each of the farm clients and farm master servers.

    The GPFS configuration is supported only on Windows, Linux, and AIX operating systems.

  2. On the farm master server, create the following two GPFS file systems, each with a minimum of 40 gigabytes of space and read/write access; for example:

    • /dev/gpfs1nsd mounted on /apps

    • /dev/gpfs3nsd mounted on /profiles
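
    For illustration only, a minimal sketch of creating and mounting these file systems with GPFS commands; the stanza file names and disk layout are assumptions, and the exact options depend on your GPFS release and storage configuration, so check the GPFS documentation for the stanza file format:

      mmcrnsd -F /tmp/gpfs_disks.stanza
      mmcrfs /dev/gpfs1nsd -F /tmp/apps_disks.stanza -T /apps
      mmcrfs /dev/gpfs3nsd -F /tmp/profiles_disks.stanza -T /profiles
      mmmount all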

  3. Ensure that the file system inode limit is set to 250000 or higher; for example:

    • mmchfs /dev/gpfs1nsd -F 250000
    • mmchfs /dev/gpfs3nsd -F 250000
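
    To confirm the new limit, you can typically display the inode information for each file system with the standard GPFS mmdf command; the exact output format depends on your GPFS release:

      mmdf /dev/gpfs1nsd
      mmdf /dev/gpfs3nsd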

  4. Create an empty file system on a central file server with enough capacity for a full installation and then mount it on the original server as a writable file system in the location where WebSphere Portal should be installed; for example:

    • Linux: /opt/IBM
    • AIX: /usr/IBM
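
    As an illustrative check before installing (assuming a Linux system and the example location above), verify that the mount point is writable by the user that will run the installation:

      df -h /opt/IBM
      touch /opt/IBM/.write_test && rm /opt/IBM/.write_test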

  5. Install WebSphere Portal on the original server into the GPFS-mounted file system, for example /apps/Portal_WP8. The installation results in a set of product binaries (the PortalServer directory) and a default configuration profile (wp_profile), for example under /profiles. WebSphere Portal runs off the mounted file system on the original server.

  6. Reconfigure the local instance to use a different TCP/IP host name. This host name should be either localhost or a host name that is aliased to localhost in the local hosts file (see the example after this step). This allows the local instance to always reference itself by using the loopback address.

    1. cd WP_PROFILE/bin

    2. Run the following task, where node_name is the node name assigned to the local node:

        ./wsadmin.sh -c "\$AdminTask changeHostName {-nodeName node_name -hostName localhost}; \$AdminConfig save" -conntype NONE

    3. cd WP_PROFILE/ConfigEngine

    4. Propagate the profile changes to the WebSphere Portal configuration:

        ./ConfigEngine.sh localize-clone -DWasPassword=foo
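
    For illustration, an entry in the local hosts file (/etc/hosts on Linux and AIX) that aliases a host name to the loopback address might look like the following; the alias name portal-local is an assumption, not a required value:

        127.0.0.1   localhost portal-local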

  7. Configure this instance to represent the baseline configuration for the entire farm, including configuring the databases and the user registry.

    See the appropriate database and user registry topics under the Setting up a stand-alone server topic in the Related task section.

  8. For IBM Web Content Manager, run the following task, from WP_PROFILE/ConfigEngine, to set up the local messaging bus and queue:

    To avoid every farm instance receiving all content updates directly from the authoring system, one server outside the farm needs to be identified as the subscriber. This server also requires a message queue where content update messages are posted for members of the farm. All farm servers listen for these messages to update their own content caches.

    This step is performed only once when setting up the portal farm, and only on the server identified as the WCM subscriber; it is not performed on each server in the farm.

      ./ConfigEngine.sh create-wcm-jms-resources -DWasPassword=foo

    If you are using a remote content environment, see the "Work with syndicators and subscribers" link.
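
    If you want to confirm that the messaging resources exist, one possible check (a sketch only, run from WP_PROFILE/bin against the local configuration) is to list the service integration buses with wsadmin:

      ./wsadmin.sh -lang jython -c "print AdminConfig.list('SIBus')" -conntype NONE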

  9. If you are using Web Content Manager, complete the following steps on the original server so that all farm instances listen for content update messages:

    This step is performed only once on the original server.

    1. Edit...

        WP_PROFILE/PortalServer/wcm/confprepreq.wcm.properties

      ...and update the following properties with the appropriate information for the farm servers:

      See the prepreq.wcm.properties file for specific information about the required parameters; example placeholder values are shown after this step.

      • remoteJMSHost

      • remoteJMSBootstrapPort

      • remoteJMSNodeName

      • remoteJMSServerName

    2. Set up the remote messaging bus and queue:

        ./ConfigEngine.sh create-wcm-jms-resources-remote -DWasPassword=foo
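
    For illustration only, the edited properties might look like the following; the host name, port, node name, and server name are placeholders that must be replaced with the values of your own subscriber server (2809 is only the typical WebSphere bootstrap port default):

        remoteJMSHost=subscriber.example.com
        remoteJMSBootstrapPort=2809
        remoteJMSNodeName=subscriberNode
        remoteJMSServerName=WebSphere_Portal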

  10. Enable the server to run in farm mode. The systemTemp parameter specifies where the server-specific directory is located. This directory contains all directories and files that the running portal instance writes to, such as logs and compiled pages:

    1. Create the target directory path (a sketch of creating it follows this step).

      For example:

        /var/log/was_tmp

    2. Run the following task to enable the server to run in farm mode:

        ./ConfigEngine.sh enable-farm-mode -DsystemTemp=/var/log/was_tmp -DWasPassword=foo
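
    A minimal sketch of creating the target directory from the first sub-step and making it writable by the user that runs WebSphere Portal; the user name wpadmin is a placeholder:

        mkdir -p /var/log/was_tmp
        chown wpadmin /var/log/was_tmp
        chmod 750 /var/log/was_tmp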

  11. On each server that you plan to add to the portal farm, mount the network-accessible file system in the same location as on the original server to preserve the installation path configuration.
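
    For illustration (assuming the file systems from step 2 and that the server is already a member of the GPFS cluster), you might mount and verify the file systems on a farm server like this:

      mmmount /dev/gpfs1nsd
      mmmount /dev/gpfs3nsd
      df -h /apps /profiles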

  12. On the farm client...

      cd WP_PROFILE/PortalServer/bin

  13. Start or stop an instance of WebSphere Portal from a farm server:

      ./start_WebSphere_Portal.sh

      ./stop_WebSphere_Portal.sh
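
    To confirm that an instance started, one common check is to watch the server log for the "open for e-business" message; the path assumes the default portal server name WebSphere_Portal:

      tail -f WP_PROFILE/logs/WebSphere_Portal/SystemOut.log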

  14. To use a Web server for load balancing, complete Set up the HTTP server plug-in on a portal farm next.


Parent: Choose the farm instance using a shared configuration