Portal farm topology
A portal farm topology is a set of identically configured, stand-alone server instances.
Use clusters if...
- a central administration point (dmgr) is required
- you have services that require only one instance running in the cell (such as an EJB or other service implemented as a JCA connector)
- you have scheduled tasks that should run only in one server of the cluster instead of in each server of a farm
Use portal farms if...
- you require a dynamic expansion and contraction of capacity by adding or removing machines, such as in a cloud-computing environment
- you have a large deployment (100+ application servers) that would exceed the capacity of a managed cell
For portal farms, rather than relying on the dmgr, we automate administrative actions across the servers, including restarting servers or applications.
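As a minimal sketch of such automation, the following shell loop restarts the portal server on every farm member. The host names and profile path are assumptions; substitute your own, and run the commands over ssh rather than echoing them as this dry run does.

```shell
#!/bin/sh
# Hypothetical sketch: restart the WebSphere_Portal server on every farm member.
# FARM_HOSTS and PROFILE_BIN are assumptions; adjust them for your farm.
FARM_HOSTS="farm1.example.com farm2.example.com farm3.example.com"
PROFILE_BIN=/opt/IBM/WebSphere/wp_profile/bin

for host in $FARM_HOSTS; do
  # Echoed as a dry run; in a real farm these would run over ssh on each member.
  echo "ssh $host $PROFILE_BIN/stopServer.sh WebSphere_Portal"
  echo "ssh $host $PROFILE_BIN/startServer.sh WebSphere_Portal"
done
```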
Considerations
When a shared file system is used, each farm instance, including the original portal installation in the farm, must run the same operating system, including the same Linux variant.
If session persistence and replication are required, configure a distributed session database, then configure all farm instances to use the same database and session table. By default, WebSphere Portal does not require session persistence to be enabled.
Because dynamic cache replication is not supported across a farm of standalone instances, independently maintain caches on each farm instance.
With farm installations, there is no cell to manage the synchronization of application server configuration updates. Updates, such as to enterprise applications or datasource configuration, must be repeated on every server in the farm.
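Because there is no dmgr to synchronize changes, each update must be replayed on every farm member. A minimal sketch, assuming hypothetical host names and a hypothetical wsadmin Jython script (update-app.py) that performs the actual update:

```shell
#!/bin/sh
# Hypothetical sketch: replay the same configuration update on every farm member.
# The host names, profile path, and update-app.py script are assumptions.
FARM_HOSTS="farm1.example.com farm2.example.com"
WSADMIN=/opt/IBM/WebSphere/wp_profile/bin/wsadmin.sh

for host in $FARM_HOSTS; do
  # Echoed as a dry run; in practice this would run over ssh on each member.
  echo "ssh $host $WSADMIN -lang jython -f /tmp/update-app.py"
done
```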
Each portal instance must have its own release database. Portal configuration updates must be made to every portal instance on the farm.
Configure portal search as if it were part of a clustered environment: set up the search server as a separate portal instance outside the farm.
Copy Java classes and JDBC system resources to the shared file system available to all servers in the farm.
Every farm instance should have an identical security configuration, including an identical user repository configuration (LDAP).
Cache management
Because the server instances are independent, there is no coordination of dynamic cache content or invalidations. Changes made on one server may not be seen immediately on another server in the farm. Configure caches to expire within a reasonable amount of time to ensure updates are seen in a timely manner.
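As a hedged illustration, portal cache lifetimes can be tuned through the WP CacheManagerService resource environment provider. The cache instance name and lifetime value below are examples only, not recommendations; check which caches your installation actually uses before changing values.

```properties
# Example only: shorten the lifetime (in seconds) of a portal cache so that
# changes made on one farm member become visible on the others sooner.
cacheinstance.com.ibm.wps.ac.AccessControlUserContextCache.lifetime=600
```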
Portal caches hold navigation, page layout, and access control information, and they are automatically configured to expire. Most updates are seen immediately on a server when administrative changes are made directly on it, or when a user logs out of and back in to WebSphere Portal. In a farm of unique installations, administrative actions must be applied manually to every server, so expiration of the portal caches is not a general concern. For shared configurations, configuration changes generally appear across the farm when the caches expire or when individual servers are restarted.
Content caches are updated when a content item changes, either through authoring activity or syndication. With a farm, one server is designated as the subscriber for content syndication, which updates the cache on the subscriber. Deploy a content cache invalidation message bean on the servers in the farm to consume messages from the subscriber server. By consuming content item update messages from the subscriber, the content caches on all servers can be kept in sync and up to date.
Portal farm setup options
Options:
- Install WebSphere Portal on each server.
To expedite this, we can use virtualization technology, such as VMware, to mass-replicate portal instances. Note that portal cloning does not require virtualization technology.
- Install a single installation and profile, then share across multiple systems.
Configure server-specific mutable files to be outside the shared file system. Keep installation and configuration files on the shared file system of the original server. All farm instances then have the same configuration and use the same database domains.
Database sharing
For a farm with independent servers, all database domains can be shared except for the release domain. For a shared filesystem farm, all database domains can be shared because the application server configuration to which the release data is bound is also shared.
Load balancing
During the login process, several requests must go to the same server; moving between servers mid-login can cause the login to fail. Configure a load balancer that supports session affinity, such as IBM HTTP Server. The application server plug-in assumes an application server cluster is in use, so some extra configuration is required.
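Because the plug-in expects a cluster, the farm members can be presented to it as if they were cluster members. A hypothetical plugin-cfg.xml fragment is sketched below; the host names, ports, and CloneID values are assumptions, and the generated file in a real installation contains more elements than shown here.

```xml
<!-- Hypothetical fragment: two stand-alone farm members presented to the
     plug-in as one server cluster so that session affinity works. -->
<ServerCluster Name="PortalFarm" LoadBalance="Round Robin">
  <Server Name="farm1" CloneID="farm1" LoadBalanceWeight="2">
    <Transport Hostname="farm1.example.com" Port="10039" Protocol="http"/>
  </Server>
  <Server Name="farm2" CloneID="farm2" LoadBalanceWeight="2">
    <Transport Hostname="farm2.example.com" Port="10039" Protocol="http"/>
  </Server>
</ServerCluster>
```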
Set up a portal farm
Creating the farm requires...
- An established content subscriber
- Two or more installed instances of IBM WebSphere Portal
- A Web server configured for load balancing
This documentation covers only the HTTP server plug-in, but we can use any supported Web server.
If you created a farm client with a read-only file system, manually gather logs from the temp directories on the farm members.
Parent: Installing
See also
- Server topologies
- Set up farm instances as unique installations
- Web content cache configuration
- Work with syndicators and subscribers
- Caching
- Choose the type of portal farm to create
- Set up the HTTP server plug-in on a portal farm
- Set up a highly available message bus
- Administer a portal farm
- Maintain a portal farm
- Portal farm considerations