Deprecated feature: The partitioning facility (WPF) feature is deprecated. You can configure partitioning with WebSphere eXtreme Scale.


Manage policies example overview

If the cluster is running, stop it. Also ensure that the deployment manager and the node agent on NodeA are running. Open a command shell and change to the deployment manager \bin directory. Ensure that the policyPK1_startup.properties file created previously is also in the deployment manager \bin directory.
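For example, on a Unix installation this setup might look like the following. The installation path is illustrative only; substitute your own deployment manager profile root:

```
# Path is an example only; use your own deployment manager installation root.
cd /opt/WebSphere/DeploymentManager/bin
ls policyPK1_startup.properties    # confirm the properties file is in place
```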

Run the following command to load the policy created previously:

wpfadmin createpolicy policyPK1_startup.properties

The generated output should look similar to the following. If not, verify the properties file.

wsadmin -lang jython -f wpfadmin.pty createPolicy policyPK1_startup.properties
WASX7209I: Connected to process "Deployment Manager" on node CellManager 
using SOAP connector;  The type of process is: DeploymentManager
The policy PK1StartupPolicy has been created

The coregroup.xml file for the deployment manager now contains the following policy:

<policies xmi:type="coregroup:OneOfNPolicy" xmi:id="OneOfNPolicy_1097944892103" 
name="PK1StartupPolicy" description="WPF Cluster Scoped Partition Policy 
Extended PK000001 Start" policyFactory="com.ibm.ws.hamanager.coordinator.policy.
impl.OneOfNPolicyFactory" isAlivePeriodSec="-1" quorumEnabled="true" 
failback="true" preferredOnly="true" preferredServers="CoreGroupServer_1097678779756
CoreGroupServer_1097678774418">
<MatchCriteria xmi:id="MatchCriteria_1097944898452" name="-gt" value="-p" 
description=",None"/>
<MatchCriteria xmi:id="MatchCriteria_1097944898532" name="-ps" value="-c" 
description=",None"/>
<MatchCriteria xmi:id="MatchCriteria_1097944898582" name="-pn" value="PK000001" 
description=",None"/>
</policies>

If the node agent is running, the core group file is propagated automatically. You can verify this by opening the node's coregroup.xml file and finding the previous entry. Otherwise, synchronize the node so that the policy update in the core group is propagated to the nodes in the cluster.

To do so while the node agent is stopped, use the syncNode command.
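For example, run syncNode from the node's \bin directory. The deployment manager host name and SOAP port shown here are illustrative; substitute the values for your cell:

```
# Run from the node's bin directory while the node agent is stopped.
# Host name (dmgr_host) and SOAP port (8879) are examples; use your own values.
syncNode dmgr_host 8879 -conntype SOAP
```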

Start the cluster from the administrative console.

With this policy, the partitions typically start on cluster_member_2, so PK000001 should be the only partition on cluster_member_1. Your configuration might differ.

To verify the partition placement after cluster startup, run:

wpfadmin listActive

wsadmin -lang jython -f wpfadmin.pty listActive
WASX7209I: Connected to process "Deployment Manager" on node CellManager 
using SOAP connector;  The type of process is: DeploymentManager
WPFC0050I: Application WPFKeyBasedPartitionSample, Partition PK000010: 
Server Cell\NodeA\cluster_member_2
WPFC0050I: Application WPFKeyBasedPartitionSample, Partition PK000009: 
Server Cell\NodeA\cluster_member_2
WPFC0050I: Application WPFKeyBasedPartitionSample, Partition PK000008: 
Server Cell\NodeA\cluster_member_2
WPFC0050I: Application WPFKeyBasedPartitionSample, Partition PK000007: 
Server Cell\NodeA\cluster_member_2
WPFC0050I: Application WPFKeyBasedPartitionSample, Partition PK000006: 
Server Cell\NodeA\cluster_member_2
WPFC0050I: Application WPFKeyBasedPartitionSample, Partition PK000005: 
Server Cell\NodeA\cluster_member_2
WPFC0050I: Application WPFKeyBasedPartitionSample, Partition PK000004: 
Server Cell\NodeA\cluster_member_2
WPFC0050I: Application WPFKeyBasedPartitionSample, Partition PK000003: 
Server Cell\NodeA\cluster_member_2
WPFC0050I: Application WPFKeyBasedPartitionSample, Partition PK000002: 
Server Cell\NodeA\cluster_member_2
WPFC0050I: Application WPFKeyBasedPartitionSample, Partition PK000001: 
Server Cell\NodeA\cluster_member_1
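If you need to check placement in a script, the WPFC0050I lines are easy to parse. The following is a minimal sketch, not a product tool; it assumes only the message format shown in the output above:

```python
import re
from collections import Counter

# Two sample lines in the WPFC0050I format shown above (list abbreviated).
output = r"""
WPFC0050I: Application WPFKeyBasedPartitionSample, Partition PK000002: Server Cell\NodeA\cluster_member_2
WPFC0050I: Application WPFKeyBasedPartitionSample, Partition PK000001: Server Cell\NodeA\cluster_member_1
"""

# Map each partition name to the cell\node\server path that activated it.
pattern = re.compile(r"Partition (\S+): Server (\S+)")
placement = {m.group(1): m.group(2) for m in pattern.finditer(output)}

# Count how many partitions each cluster member is hosting.
counts = Counter(placement.values())
print(placement)
print(counts)
```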

All of the partitions except PK000001 started on cluster_member_2; the policy placed PK000001 on cluster_member_1.

To update the policy, you could change the -pn option so that it applies to PK000002 instead, or you could create a second policy. Alternatively, you could simply stop cluster member 1, for example with the stopServer command (stopServer cluster_member_1). In this case, because failback is set, the high availability (HA) manager attempts to activate the partition on cluster_member_2.

The policy can be updated using the following command:

wpfadmin updatePolicy "PK1StartupPolicy" -failback true -preferredOnly 
false -preferredServers NodeA/cluster_member_2,NodeA/cluster_member_1

The command results in the following output:

wsadmin -lang jython -f wpfadmin.pty updatePolicy PK1StartupPolicy 
-failback true -preferredOnly false -preferredServers NodeA/cluster_member_2,
NodeA/cluster_member_1
WASX7209I: Connected to process "Deployment Manager" on node CellManager 
using SOAP connector;  The type of process is: DeploymentManager
The policy PK1StartupPolicy has been updated

To delete the policy:

wpfadmin deletePolicy PK1StartupPolicy

wsadmin -lang jython -f wpfadmin.pty deletePolicy PK1StartupPolicy
WASX7209I: Connected to process "Deployment Manager" on node CellManager 
using SOAP connector;  The type of process is: DeploymentManager
The policy PK1StartupPolicy has been deleted

If the cluster is restarted after the policy is deleted, partition placement reverts to the default startup placement algorithm. Deleting and recreating policies is not the optimal way to modify the selected startup servers; refer to the HA manager policy section for more information.


Parent topic:

Monitor transaction performance statistics
