Shared services


  1. Deploy shared services
  2. View shared service details
  3. Manage existing shared services
    1. System Monitoring service
      1. Deploy
      2. Restart components
      3. Extend with custom hypervisor images
      4. Extend to middleware
      5. IBM PureApplication System Monitoring Portal
      6. Deprovisioning components
      7. Set trace levels
      8. View log files
      9. Scenarios
      10. Historical data collection
      11. Known restrictions
    2. Caching service
      1. Interacting with the caching service through clients
      2. Use an external caching service
      3. Manage caching grids
    3. Elastic load balancing (ELB) proxy service
      1. Deploy an ELB proxy shared service
      2. Configure a deployed ELB instance
    4. Database Performance Monitor service
      1. Deploy
  4. Use the shared services client for virtual system script packages
    1. Configure the virtual system plug-in to include the shared services client plug-in
    2. Configure script packages to run on demand
    3. Activate script packages
    4. Example: Use a script package to retrieve details about a shared service
    5. Example: Use a script package to store the IP address of a shared service


Work with shared services

A shared service is a predefined virtual application pattern that is deployed and shared by multiple application deployments in the cloud, including virtual applications, virtual systems, and virtual appliances.

To view shared services, you must be assigned either the Workload resources administration role with full permissions or the Cloud Administrator role.

A shared service provides certain runtime services to multiple applications, or provides services to the end user on behalf of multiple applications. Shared services are built on a multi-tenant architecture model: a single deployment can serve multiple tenants or client applications.

Only one instance of a type of shared service can be deployed in a cloud group, which is a physical group of hardware that defines a cloud. This shared service can be used by all application deployments in the cloud group.

Several shared services are offered by default and can be deployed as needed.


Deploy shared services

Only one instance of a type of shared service can be deployed in a cloud group, which is a physical group of hardware that defines a cloud. This shared service can be used by all application deployments in the cloud group.

You can use the workload console or the command line interface to complete this task. For the command line information, see the Related information section.
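
If you prefer the command line, the following minimal Jython sketch shows the general shape of a scripted deployment when it is run inside the command-line shell, which supplies the deployer object. The collection name, method names, and service name used here are assumptions for illustration only; see the Related information section for the exact commands.

  # Run inside the command-line shell, which provides the "deployer" object.
  # The collection and method names below are assumptions, not confirmed API.
  service_name = 'System Monitoring'    # hypothetical shared service pattern name
  cloud_group_name = 'CloudGroup1'      # hypothetical target cloud group

  # Look up the shared service pattern by name (assumed call).
  matches = deployer.virtualapplicationpatterns.list({'app_name': service_name})
  if not matches:
      raise Exception('Shared service pattern not found: %s' % service_name)

  # Deploy the first match into the target cloud group (assumed call).
  instance = matches[0].deploy(cloud_group_name)
  print('Submitted deployment: %s' % instance)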

To deploy a shared service...

Provide information in the fields to configure the shared service. The information that you must provide differs depending on the shared service that you are working with.

Name

Name of the application that is being shared as a service. Do not use more than two consecutive underscore characters in a shared service name.

Profile

Information related to the deployment configuration, such as virtual machine names, IP address assignment, and cloud groups. Deploying patterns with profiles enables deployments across tiers from a single pattern.

Target cloud group

Target cloud group that is associated with the shared service.

SSH Key

SSH public key. To upload a public key for connecting to the deployed virtual machines, select the Advanced check box and complete the SSH section. If you do not have an existing SSH key pair, you can generate one that can be reused with other deployments: click Generate, and the SSH Key field is populated with the generated public key. Select Click here to download the file containing the private key to save the private key to your local system. By downloading and saving the key, you can access the virtual machines even if the appliance loses connectivity or encounters problems.
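
If you prefer to prepare a key pair yourself rather than clicking Generate, the following minimal Python sketch creates one with the standard ssh-keygen tool and prints the public key so that you can paste it into the SSH Key field. The file path is an example only; keep the private key somewhere safe.

  # Create an RSA key pair locally and print the public key for the SSH Key field.
  import subprocess

  key_path = '/tmp/deployment_ssh_key'          # example location for the private key
  subprocess.check_call([
      'ssh-keygen', '-t', 'rsa', '-b', '2048',  # 2048-bit RSA key pair
      '-f', key_path,                           # private key file; public key is written to key_path + '.pub'
      '-N', '',                                 # empty passphrase
  ])

  with open(key_path + '.pub') as public_key_file:
      print(public_key_file.read())             # paste this value into the SSH Key field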

To change the shared service, click the Operation menu and configure settings depending on your requirements. The settings differ depending on the shared service that you are working with. For example, for the ELB proxy service, an Outbound timeout setting section is displayed, where you can configure proxy settings on deployed ODR instances. These changes take effect immediately, without the need to restart the ODR servers. This action is only available for the ELB proxy service.

To stop the shared service, click the Stop the selected shared service icon. The shared service status is displayed as STOPPED in the details.


View shared service details

To view details about your shared service, including ID, description, creation date, pattern type...

Shared services are shipped with the product and are included in the IBM Foundation Pattern type. If you perform operations through a shared service that change the role status, ensure that the environment of the shared service is restored to its original status before you log out. For example, if you change the role status in a web application, the role is not reverted to its original state when the service is stopped.


The System Monitoring service

The System Monitoring service provides the infrastructure that allows monitoring agents to collect performance and availability information. The System Monitoring service does not monitor virtual appliances or images that are based on IBM DataPower.

The System Monitoring service...

All of the preceding operations are automated and require no manual intervention after the service is deployed.


Deploy the System Monitoring service

After you deploy the System Monitoring service, three virtual machines are created to host...

You must be assigned the following roles to perform these steps:

To deploy system monitoring services...

Complete the following fields:

Password for the user ID "sysadmin"

Password for the sysadmin user ID. The sysadmin ID is an initial user ID for logging on to IBM PureApplication System Monitoring Portal and it has full access and complete administrator authority. Re-enter the same password in the following field for verification.

Password for the user ID "itmuser"

Enter a password for the itmuser user ID. The itmuser ID is a user ID for collecting historical data. Re-enter the same password in the following field for verification.

Password for the user ID "db2inst1"

Enter a password for the db2inst1 user ID. The db2inst1 ID is a user ID for creating and maintaining the Tivoli Data Warehouse database. Re-enter the same password in the following field for verification.

Shared Service Sizing

Expected size of the System Monitoring service. You can decide which option to use based on the number of monitoring agents that you plan to connect to the monitoring servers in the System Monitoring service. The system adjusts for sizing by deploying and undeploying instances of the remote monitoring server, as needed. The following list shows the overall limit of active monitoring agents that is supported in each sizing.

  • Tiny - 200 agents
  • Small - 500 agents
  • Medium - 2000 agents
  • Large - 5000 agents
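
As a rough planning aid, the following Python sketch picks the smallest sizing whose documented agent limit covers the number of monitoring agents that you plan to connect. The limits come from the preceding list; the function name is illustrative only.

  # Map a planned number of monitoring agents to the smallest supported sizing.
  SIZING_LIMITS = [
      ('Tiny', 200),
      ('Small', 500),
      ('Medium', 2000),
      ('Large', 5000),
  ]

  def choose_sizing(planned_agents):
      for name, limit in SIZING_LIMITS:
          if planned_agents <= limit:
              return name
      raise ValueError('More than 5000 agents exceeds the largest supported sizing')

  print(choose_sizing(1200))   # prints: Medium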

Click OK.

In the Deploy Virtual Application window, do the following steps:

  1. Select the cloud group to which you will deploy the virtual application from the Cloud group drop-down list.
  2. Select the profile to use from the Profile drop-down list.
  3. Click Generate to generate an SSH Key. The SSH Key field populates with the generated key.
  4. Click Download to save the key to your local system.
  5. Click OK.

An instance of the System Monitoring service is created in the cloud group that you selected. When you view the status of the System Monitoring components, the virtual machines that host a hub monitoring server, a remote monitoring server, a PureApplication System Agent, and a Tivoli Data Warehouse are shown, and they are all in running status.


Restart components of the System Monitoring service

Restart components of the System Monitoring service to perform such actions as allowing configuration changes to take effect.

Select one of the following roles:

Depending on the role that you selected in the previous step, do one of the following steps:

The status and result of the operation are shown in the Operation Execution Results panel. When the value of the Status column changes to done, the selected component of the System Monitoring service is restarted.


Extend the System Monitoring service with custom hypervisor images

By default, system monitoring is supported on the hypervisor images included on the system. Taking the following actions on these images on the virtual system might negatively impact monitoring functionality:

When using the IBM Image Construction and Composition Tool to create custom hypervisor images, note the following naming restrictions for image parts. To gain support from the base System Monitoring service and take advantage of default monitoring capabilities, place the original name of the image part at the end of the new name, for example...

If the original name is not at the end of the new part name, the new image is not recognized as a cloned extended image. If you prefer to create custom images that do not take advantage of the base System Monitoring service, for example when testing images in a nonproduction environment, append the new names to the original names, for example...


Extend the System Monitoring service to middleware

Administrators can extend the System Monitoring service to collect performance and availability data about middleware applications in a cloud group. Monitoring services for middleware are separate shared services deployed in addition to the base System Monitoring service.

Currently, the System Monitoring service does not support custom monitoring agents.

When configuring cloud groups, you can decide whether you want to enable monitoring of middleware applications such as IBM WebSphere Application Server and IBM HTTP Servers. You can make this decision at any time during the lifetime of your cloud group. For example, you can configure a cloud group for middleware monitoring even before you enable overall system monitoring or deploy any middleware applications.

Similar to the base System Monitoring service, the monitoring services for middleware applications have agents that collect data about health status, availability, and performance. Users can access this monitoring data in the System Monitoring Portal.

Administrators can configure monitoring services for middleware applications at deployment time. Configuration settings include options for automatically starting the monitoring agents in one or more deployments, restarting applications to ensure that all monitoring data is collected, and more.

If you notice that deployments that are monitored by services for middleware fail, confirm that the base System Monitoring service is running. The base System Monitoring service must be running before deploying any monitoring services for middleware.


The System Monitoring for WebSphere Application Server service

The System Monitoring for WebSphere Application Server service is a shared service that collects performance and availability data in application server environments.

You can deploy the System Monitoring for WAS service in a cloud group to collect performance and availability data from applications that are running on virtual systems and virtual applications within that cloud group. The shared service uses a monitoring agent and data collector to collect the data from these applications.

The monitoring agent and data collector are dynamically installed and configured on supported platforms if the shared service is deployed in the target cloud group. Pattern authors do not need to take any action to enable the monitoring agent. For example, suppose that your organization has a development cloud group in which the System Monitoring shared services are not deployed and a production cloud group in which the services are deployed. If the same pattern is deployed to each of these cloud groups, only the deployment in the production cloud group has monitoring installed.

You can deploy the System Monitoring for WAS service in a cloud group at any time. For supported patterns deployed after the shared service is deployed, the monitoring agent and data collector are added to those patterns at deployment time. For supported patterns that are already deployed in the cloud group, those patterns can discover the shared service and install the monitoring capabilities later. This discovery of the shared service happens with a background process running in the deployment that periodically checks for any monitoring shared services that were not detected at deployment time. Running this background process can lead to a delay of up to one hour before an existing deployment will react to the deployment of the System Monitoring for WAS service.

The System Monitoring for WAS service does not have any virtual machines associated with it. Without this association, the service cannot be stopped, started, or upgraded. The service can only be deployed or deleted.

The version of the System Monitoring for WAS service deployed determines the version of the monitoring agent and data collector that are running on every deployment in the cloud group. When the service is deployed, the monitoring agent is installed in the appropriate virtual machines. When the service is deleted, the monitoring agent is also deleted from those same virtual machines. To upgrade to a new version of the service, you must delete the existing service and then deploy the new version.

At certain stages in the lifecycle of the deployment, the monitoring agent must reconfigure itself. To complete this configuration, any application servers that are to be monitored must be restarted. Therefore, if monitoring is enabled in a deployment, application server instances are restarted by default in the following situations:

You can control the restarting process of application server instances by selecting an option on the user interface when you deploy the System Monitoring for WAS service. The option is enabled by default so that administrators do not have to restart servers manually every time they perform a deployment or upgrade. If you disable the option when deploying the service, you must manually restart every individual application server instance so that the monitoring agent can reconfigure itself.

You can stop the monitoring agent manually and restart it later. When you manually stop the agent, the state of monitoring is 'stopped'. This is important because, after you manually stop monitoring on a deployment, the IBM PureApplication System service does not automatically restart monitoring on that deployment. This behavior is different from a lifecycle-initiated stop of monitoring, which triggers a state of 'paused'. The service provides a deployment option to force all deployments on the cloud group out of 'stopped' state, returning them to the normal lifecycle management process of system monitoring.

When deploying a WAS pattern that does not have a defined cluster, the System Monitoring for WAS service might not connect as expected. To connect to the service and enable monitoring, complete the following steps to reconfigure and restart the monitoring agent on the nodes in the cluster:

  1. Use the WAS administrative console for the deployment to create the clusters and install your applications.
  2. Start the monitoring agent.


Deploy the System Monitoring for WebSphere Application Server service

Extend the capabilities of the base System Monitoring service by deploying the System Monitoring for WebSphere Application Server service. You must be assigned the following roles to perform these steps:

To deploy the System Monitoring for WAS service...

Complete the following fields:

Auto-start monitoring agent when deployed

Start the monitoring agent for WAS automatically when the application server instances are started.

When you deploy the System Monitoring for WAS service and then deploy an application server pattern, a monitoring agent is installed on each application server instance in the pattern. If this option is enabled, the monitoring agent is also started.

When you deploy an application server pattern and then deploy the System Monitoring for WAS service, application server instances in the pattern discover the shared service and a monitoring agent is installed. If this option is enabled and the Start monitoring agent when existing deployments discover Shared Service for the first time option is enabled, the monitoring agent is also started.

When you delete and then redeploy the System Monitoring for WAS service, any monitoring agents that are stopped when you redeploy are started if this option is enabled and the Auto-start monitoring agent on all deployments option is enabled.

If you disable this option when you deploy the System Monitoring for WAS service, you can start the monitoring agent later.

Restart WAS servers automatically

Enable this option to restart monitored application server instances.

If this option is enabled, application server instances are restarted in the following situations:

  • The virtual machine is deployed.
  • The virtual machine is rebooted.
  • The System Monitoring shared service is started or restarted in the same cloud group as the deployment.
  • The System Monitoring for WAS service is deployed in the same cloud group as the deployment.
  • The monitoring agent is started after it has been manually stopped previously.

Restarting application server instances allows the monitoring agent to complete its configuration and make monitoring data available in the IBM PureApplication System Monitoring Portal. This option is enabled by default so that administrators do not have to restart servers manually every time they perform a deployment or upgrade the shared services. Although the default setting is recommended, certain administrators might choose to disable this option when they are working with sensitive systems and prefer to initiate server starts manually.

Auto-start monitoring agent on all deployments

Enable this option to start all monitoring agents that are currently stopped.

If this option is enabled and the Auto-start monitoring agent when deployed option is enabled, the monitoring agent is started on all deployments in which it is currently stopped. This combination setting is ideal for administrators who want to start all monitoring agents in a cloud group regardless of whether they were stopped by the user or stopped automatically.

If this option is enabled and the Auto-start monitoring agent when deployed option is disabled, only those monitoring agents that are currently stopped automatically are started. Any monitoring agents that are currently stopped by the user are not started at this time but are made ready to start again automatically if you redeploy the System Monitoring for WAS service later with the Auto-start monitoring agent when deployed option enabled.

Start monitoring agent when existing deployments discover Shared Service for the first time

Enable this option to install and start the monitoring agent on existing deployments that were not configured to monitor WAS when they were deployed. This setting has no effect unless the Auto-start monitoring agent when deployed option is also enabled. If this option is disabled, the monitoring agent is installed on existing deployments, but it must be started manually.

Click OK.

In the Deploy Virtual Application window, do the following steps:

  1. Select the cloud group to which you will deploy the virtual application from the Cloud group list.
  2. Select the profile to use from the Profile list.
  3. Click OK.

An instance of the System Monitoring for WebSphere Application Server service is created in the selected cloud group. At this point, the shared service does not have any middleware or virtual machines associated with it.


Manage the System Monitoring for WAS service

In virtual applications, start and stop the monitoring agent for the System Monitoring for WebSphere Application Server service in the Virtual Application Console. In virtual systems, start and stop the monitoring agent by using a helper script that is available on the virtual machine.


Start and stop the monitoring agent for the System Monitoring for WAS service in virtual applications

You can manually stop the monitoring agent from collecting data, for example in virtual application environments that are still in development. Then, when you are ready to start collecting data again, you can manually restart the agent.

To start the monitoring agent, expand Start, and then click Submit. To stop the monitoring agent, expand Stop, and then click Submit.


Start and stop the monitoring agent for the System Monitoring for WAS service in virtual systems

You can manually stop the monitoring agent from collecting data, for example in virtual system environments that are still in development, and restart it later. To start the monitoring agent, use SSH to log in to the virtual machine as root and run...

After approximately ten minutes, open the IBM PureApplication System Monitoring Portal and confirm that the monitoring agent for WebSphere Application Server is connected and running.

To stop the monitoring agent:

Click Submit.


The System Monitoring for HTTP Servers service

The System Monitoring for HTTP Servers service is a shared service that collects performance and availability data in HTTP server environments.

You can deploy the System Monitoring for HTTP Servers service in a cloud group to collect performance and availability data from HTTP servers that are running on virtual systems. The shared service uses a monitoring agent to collect the data from these applications.

The monitoring agent is dynamically installed and configured on supported platforms if the shared service is deployed in the target cloud group. Pattern authors do not need to take any action to enable the monitoring agent.

For example, suppose that your organization has a development cloud group in which the System Monitoring shared services are not deployed and a production cloud group in which the services are deployed. If the same pattern is deployed to each of these cloud groups, only the deployment in the production cloud group has monitoring installed.

You can deploy the System Monitoring for HTTP Servers service in a cloud group at any time. For supported patterns deployed after the shared service is deployed, the monitoring agent is added to those patterns at deployment time. For supported patterns that are already deployed in the cloud group, those patterns can discover the shared service and install the monitoring capabilities later. This discovery of the shared service happens with a background process running in the deployment that periodically checks for any monitoring shared services that were not detected at deployment time. Running this background process can lead to a delay of up to one hour before an existing deployment will react to the deployment of the System Monitoring for HTTP Servers service.

The System Monitoring for HTTP Servers service does not have any virtual machines associated with it. Without this association, the service cannot be stopped, started, or upgraded. The service can only be deployed or deleted.

The version of the System Monitoring for HTTP Servers service deployed determines the version of the monitoring agent running on every deployment in the cloud group. When the service is deployed, the monitoring agent is installed in the appropriate virtual machines. When the service is deleted, the monitoring agent is also deleted from those same virtual machines. To upgrade to a new version of the service, you must delete the existing service and then deploy the new version.

At certain stages in the lifecycle of the deployment, the monitoring agent must reconfigure itself. To complete this configuration, any HTTP servers that are to be monitored must be restarted. Therefore, if monitoring is enabled in a deployment, HTTP server instances are restarted by default in the following situations:

You can control the restarting process of HTTP server instances by selecting an option on the user interface when you deploy the System Monitoring for HTTP Servers service. The option is enabled by default so that administrators do not have to restart servers manually every time they perform a deployment or upgrade. If you disable the option when deploying the service, you must manually restart every individual HTTP server instance so that the monitoring agent can reconfigure itself.

You can stop the monitoring agent manually and restart it later. When you manually stop the agent, the state of monitoring is 'stopped'. This is important because, after you manually stop monitoring on a deployment, the IBM PureApplication System service does not automatically restart monitoring on that deployment. This behavior is different from a lifecycle-initiated stop of monitoring, which triggers a state of 'paused'. The service provides a deployment option to force all deployments on the cloud group out of 'stopped' state, returning them to the normal lifecycle management process of system monitoring.


Deploy the System Monitoring for HTTP Servers service

Extend the capabilities of the base System Monitoring service by deploying the System Monitoring for HTTP Servers service. You must be assigned the following roles to perform these steps:

To deploy the System Monitoring for HTTP Servers service...

Complete the following fields:

Auto-start monitoring agent when deployed

Start the monitoring agent for HTTP Servers automatically on HTTP server instances. When you deploy the System Monitoring for HTTP Servers service and then deploy an HTTP server pattern, a monitoring agent is installed on each HTTP server instance in the pattern. If this option is enabled, the monitoring agent is also started.

When you deploy an HTTP server pattern and then deploy the System Monitoring for HTTP Servers service, HTTP server instances in the pattern discover the shared service and a monitoring agent is installed. If this option is enabled and the Start monitoring agent when existing deployments discover Shared Service for the first time option is enabled, the monitoring agent is also started.

When you delete and then redeploy the System Monitoring for HTTP Servers service, any monitoring agents that are stopped when you redeploy are started if this option is enabled and the Auto-start monitoring agent on all deployments option is enabled.

If you disable this option when you deploy the System Monitoring for HTTP Servers service, you can start the monitoring agent later.

Restart IHS servers automatically

Enable this option to restart monitored HTTP server instances. If this option is enabled, HTTP server instances are restarted in the following situations:

  • The virtual machine is deployed.
  • The virtual machine is rebooted.
  • The System Monitoring shared service is started or restarted in the same cloud group as the deployment.
  • The System Monitoring for HTTP Servers service is deployed in the same cloud group as the deployment.
  • The monitoring agent is started after it has been manually stopped previously.

Restarting HTTP server instances allows the monitoring agent to complete its configuration and make monitoring data available in the IBM PureApplication System Monitoring Portal. This option is enabled by default so that administrators do not have to restart servers manually every time they perform a deployment or upgrade the shared services. Although the default setting is recommended, certain administrators might choose to disable this option when they are working with sensitive systems and prefer to initiate server starts manually.

Auto-start monitoring agent on all deployments

Enable this option to start all monitoring agents that are currently stopped. If this option is enabled and the Auto-start monitoring agent when deployed option is enabled, the monitoring agent is started on all deployments in which it is currently stopped. This combination setting is ideal for administrators who want to start all monitoring agents in a cloud group regardless of whether they were stopped by the user or stopped automatically. If this option is enabled and the Auto-start monitoring agent when deployed option is disabled, only those monitoring agents that are currently stopped automatically are started. Any monitoring agents that are currently stopped by the user are not started at this time but are made ready to start again automatically if you redeploy the System Monitoring for HTTP Servers service later with the Auto-start monitoring agent when deployed option enabled.

Start monitoring agent when existing deployments discover Shared Service for the first time

Enable this option to install and start the monitoring agent on existing deployments that were not configured to monitor HTTP servers when they were deployed. This setting has no effect unless the Auto-start monitoring agent when deployed option is also enabled. If this option is disabled, the monitoring agent is installed on existing deployments, but it must be started manually.

Click OK.

In the Deploy Virtual Application window, do the following steps:

  1. Select the cloud group to which you will deploy the virtual application from the Cloud group list.
  2. Select the profile to use from the Profile list.
  3. Click OK.

An instance of the System Monitoring for HTTP Servers service is created in the selected cloud group. At this point, the shared service does not have any middleware or virtual machines associated with it.


Manage the System Monitoring for HTTP Servers service

In virtual systems, start and stop the monitoring agent by using a helper script that is available on the virtual machine.


Start and stop the monitoring agent for the System Monitoring for HTTP Servers service in virtual systems

You can manually stop the monitoring agent from collecting data, for example in virtual system environments that are still in development, and restart it later. To start the monitoring agent, use SSH to log in to the virtual machine as root and run...

After approximately ten minutes, open the IBM PureApplication System Monitoring Portal and confirm that the monitoring agent for HTTP Servers is connected and running.

To stop the monitoring agent:

Click Submit.


Open the IBM PureApplication System Monitoring Portal

IBM PureApplication System Monitoring Portal gives you access to the data that is collected by the monitoring agents. You can open the portal from the user interface.

Ensure that you have IBM Java 6 or later installed on the computer on which you want to open the monitoring portal. Also, the browser link that opens the monitoring portal does not work unless the system can resolve the host name. You can either configure the DNS server with the IP address of the monitoring portal or add an entry to the hosts file, which is located in either...
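
For example, a hosts file entry pairs the IP address of the monitoring portal host with its host name on a single line. The address and host name below are placeholders only; substitute the values for your deployment.

  192.0.2.10    monitoring-portal.example.com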

The monitoring portal supports only the 32-bit JRE. If you are assigned one of the following roles, when you open the monitoring portal, you will be added to the Monitoring Administrator group, and you will see the physical view and monitoring data for all deployments in the cloud group:

If you are assigned one of the following roles, when you open the monitoring portal, you will be added to the Monitoring Operator group, and you will see monitoring data for all deployments in the cloud group:

If you are assigned the Cloud user role, when you open the monitoring portal, you will be added to the Monitoring User group, and you will be able to see the monitoring data for the deployments that you created and for any deployments to which the owners have granted you access.

System administrator tasks


View monitoring data for cloud groups

If you have deployed an instance of the System Monitoring service to a cloud group, you can open the monitoring portal to access the data collected by the monitoring agents for the cloud group.

  1. Click...

      Instances | Shared Services | instance

  2. Click the plus sign (+) beside Virtual machine perspective.

    A list of the virtual machines in the three roles that are required for the instance of the System Monitoring service is displayed. These roles are Data Warehouse, Hub TEMS, and Remote TEMS.

  3. Click the Monitor link for the virtual machine that hosts the hub Tivoli Enterprise Monitoring Server.

    The 64-bit version of the JRE is not supported for this operation. You must have the 32-bit version of the IBM JRE 6 (SR 12 and later) or IBM JRE 7 installed. If the Monitor link is not displayed, the Monitoring Agent for Virtual Applications in the shared service is stopped. Do the following steps to start the agent. Do not click the Endpoint link; otherwise, you will receive an error message:

      Manage | Operation | Monitoring | Fundamental

    Click the plus sign (+) beside Connect and disconnect. Select Connect and then click Submit. After the operation is completed, the Monitor link will be displayed. The monitoring portal opens and you are presented with the data collected by the monitoring agents for the cloud group.


View monitoring data for virtual application instances

If you have deployed a virtual application instance, you can open IBM PureApplication System Monitoring Portal to access the data collected by the monitoring agents for the virtual application instance.

  1. Click...

      Instances | Virtual Applications | virtual application instance

  2. Click the plus sign (+) beside Virtual machine perspective.

    A list of virtual machines that were created for the virtual application instance is displayed.

  3. Click the Monitor link in the Middleware Status column of a virtual machine.

    The 64-bit version of the JRE is not supported for this operation. You must have the 32-bit version of the IBM JRE 6 (SR 12 and later) or IBM JRE 7 installed.

The IBM PureApplication System Monitoring Portal opens and you are presented with the data collected by the monitoring agents for the virtual application instance.


Deprovisioning components of the System Monitoring service

You can deprovision the components of the System Monitoring service to release system resources, or when you do not want to use the monitoring service.

  1. Click...

      Instances | Shared Services | instance of the System Monitoring service | Stop

    All the components of the System Monitoring service that are listed in the Virtual machine perspective panel are stopped. The VM Status of each component changes from running to terminated.

  2. Click Delete.

    All the components of the System Monitoring service are deprovisioned.

All the components of the System Monitoring service are deprovisioned and the resources used by them are released. The OS agents that connected to the monitoring servers of the System Monitoring service are changed to autonomous state. Other monitoring agents that connected to the monitoring servers are automatically stopped. If at a later stage you want to resume the monitoring service, you need to deploy another instance of the System Monitoring service to this cloud group.


Set trace levels for the components of the System Monitoring service

You can set the trace level for the components of the System Monitoring service to determine the type and amount of information that is added to the log files.

  1. Click...

      Instances | Shared Services | instance | Manage | Operations tab

  2. Select one of the following components:

    • DW, to set trace levels for one of the following components:

      • Summarization and Pruning Agent
      • Warehouse Proxy Agent

    • HTEMS, to set trace levels for one of the following components:

    • RTEMS, to set trace levels for one of the following components:

      • Remote Tivoli Enterprise Monitoring Server
      • Warehouse Proxy Agent

  3. Depending on the role that you selected in the previous step, do one of the following steps:

    • If you selected DW:

      1. Click Set Trace Level for Warehouse Proxy Agent or Set Trace Level for Summarization and Pruning Agent, depending on the service component for which you want to set a trace level.

      2. In the Trace Log Parameters field, enter the trace command that corresponds to the trace information you want to be logged. There are two types of tracing, component level tracing and file level tracing. The following list describes the tracing levels available:

        All Provides all trace levels.
        Flow Provides control flow data describing function entry and exit.
        Error Logs internal error conditions.
        Detail Produces a detailed level of tracing.
        Input Records data created by a particular API, function, or process.
        Metrics Records data for a particular metric.
        Output Records data disseminated by a particular API, function, or process.
        State Records the condition or current setting of flags and variables in the process. If state tracing is enabled, you can see the current state of particular variables or flags as the process is running.

        For example, if you enter the following command in the Trace Log Parameters field:

          (UNIT:kbbacdl Flow State)

        it turns on the flow and state tracing for all files whose names start with kbbacdl in the IBM Tivoli Monitoring component.

        If you enter the following command:

          (COMP:kdh Detail)

        it turns on the detail tracing for all files that are identified as part of the kdh component.

      3. Click Submit to confirm.

    • If you selected HTEMS:

      1. Click Set Trace Level for Hub TEMS, Set Trace Level for TEPS, or Set Trace Level for PureApplication System Agent, depending on the service component for which you want to set a trace level.

      2. In the Trace Log Parameters field, enter the trace command that corresponds to the trace information you want to be logged. There are two types of tracing, component level tracing and file level tracing. The following list describes the tracing levels available:

        All Provides all trace levels.
        Flow Provides control flow data describing function entry and exit.
        Error Logs internal error conditions.
        Detail Produces a detailed level of tracing.
        Input Records data created by a particular API, function, or process.
        Metrics Records data for a particular metric.
        Output Records data disseminated by a particular API, function, or process.
        State Records the condition or current setting of flags and variables in the process. If state tracing is enabled, you can see the current state of particular variables or flags as the process is running.

        For example, if you enter the following command in the Trace Log Parameters field:

          (UNIT:kbbacdl Flow State)

        it turns on the flow and state tracing for all files whose names start with kbbacdl in the IBM Tivoli Monitoring component.

        If you enter the following command:

          (COMP:kdh Detail)

        it turns on the detail tracing for all files that are identified as part of the kdh component.

      3. Click Submit to confirm.

    • If you selected RTEMS:

      1. Click Set Trace Level for Remote TEMS or Set Trace Level for Warehouse Proxy Agent, depending on the service component for which you want to set a trace level.

      2. In the Trace Log Parameters field, enter the trace command that corresponds to the trace information you want to be logged. There are two types of tracing, component level tracing and file level tracing. The following list describes the tracing levels available:

        All Provides all trace levels.
        Flow Provides control flow data describing function entry and exit.
        Error Logs internal error conditions.
        Detail Produces a detailed level of tracing.
        Input Records data created by a particular API, function, or process.
        Metrics Records data for a particular metric.
        Output Records data disseminated by a particular API, function, or process.
        State Records the condition or current setting of flags and variables in the process. If state tracing is enabled, you can see the current state of particular variables or flags as the process is running.

        For example, if you enter the following command in the Trace Log Parameters field:

          (UNIT:kbbacdl Flow State)

        it turns on the flow and state tracing for all files whose names start with kbbacdl in the IBM Tivoli Monitoring component.

        If you enter the following command:

          (COMP:kdh Detail)

        it turns on the detail tracing for all files that are identified as part of the kdh component.

      3. In the Virtual Machine IP Address field, enter the IP Address of the virtual machine on which the instance of remote Tivoli Enterprise Monitoring Server is hosted.

      4. Click Submit to confirm.

The operation modifies trace information on a component of the System Monitoring service without requiring a restart of the component. Settings take effect immediately; however, modifications made this way are not persistent and must be configured again when the component is restarted.


View VM log files

The log files associated with the virtual machine that plays a role in the instance of the System Monitoring service can be found at...

A list of the virtual machines in the three roles that are required for the instance of the System Monitoring service is displayed. These roles are Data Warehouse, Hub TEMS, and Remote TEMS.

Click the Log link for the virtual machine that you want to view. Click the plus sign (+) in the navigator on the left side. The available log files for the virtual machine are displayed. They are grouped under four categories:

Keep clicking the plus sign (+) until you reach the log file that you want to view, and then click the file name to open it on the right side of the window. You can also click Download All to save all the available log files to your file system. The compressed file includes the current error.log and trace.log files and the available archived versions of these logs.


Scenarios

Learn how to use the monitoring service in IBM PureApplication System.

The typical usage scenarios are intended to help different roles understand how to use the monitoring service in IBM PureApplication System for different situations.


Scenario: Enable the System Monitoring service in IBM PureApplication System

The IBM PureApplication System administrator can use this usage scenario to understand how to enable the System Monitoring service in IBM PureApplication System. After you enable the System Monitoring service, users with different roles can view different monitoring information in the IBM PureApplication System Monitoring Portal.

As an administrator of IBM PureApplication System, David has completed the installation of IBM PureApplication System. Now he wants to enable the System Monitoring service in IBM PureApplication System so that users can view monitoring information in IBM PureApplication System Monitoring Portal. Therefore, he performs the following tasks to enable the monitoring service:

  1. Deploy the System Monitoring service.

  2. If you have middleware applications in a cloud group and you want to view their performance and availability data, you can extend the System Monitoring service.

    For example, if you have IBM WebSphere Application Server in a cloud group, you can extend the capabilities of the base System Monitoring service by deploying the System Monitoring service for WAS.

  3. Deploy the virtual application patterns and virtual system patterns.

    After the deployment, the monitoring agents on the virtual machines that are created as a result of the deployment automatically connect to the System Monitoring service and collect performance and availability data for the deployed virtual applications and virtual systems.

  4. Open the IBM PureApplication System Monitoring Portal to view monitoring information.

After the System Monitoring service is enabled, users can view monitoring information according to their roles:

  • Users with the Cloud user role can view monitoring information, including metrics, for the middleware and database virtual system patterns that they deploy.
  • Users with the Operator role can view monitoring information for all of the user deployments within a cloud group.
  • Users with the Administrator role can view all of the monitoring information that is available to operators, and can also view hardware monitoring information.


Scenario: Cloud Administrator

The cloud administrator can use information gathered by the Monitoring Agent for IBM PureApplication System to solve problems that might occur with the system and affect any applications deployed. The administrator Thomas has deployed the System Monitoring service and the agent is running and gathering information about the associated cloud.


Procedure

  1. Thomas logs into the system and opens the IBM PureApplication System Monitoring Portal.

  2. He then uses the Monitoring Agent for IBM PureApplication System to access the PureApplication System Overview workspace.

  3. The Situation Event Console view features a situation alert, denoting that one of the compute nodes is experiencing problems.

  4. Thomas links to the Failure Analysis workspace to further explore the reasons for the situation alert. The Failure Analysis view shows that the compute node is indeed unavailable.

  5. From the Failure Analysis workspace, Thomas links to the Compute Node Performance workspace. The Top 5 CPU Utilizers view shows that the CPU usage on the compute node in question is unusually high.

  6. Thomas decides that maintenance and repair work must be carried out on the compute node. He then uses the Virtual Machine Performance workspace to find the virtual machines associated with the compute node.

  7. Thomas sends an email to all application deployment owners, listing the virtual machines that will be affected by the downtime, which is required to carry out maintenance and repair work on the compute node.

  8. After the maintenance and repair work is completed, Thomas sends another email to all the application deployment owners, informing them that all affected virtual machines are now back online.


Historical data collection

Use historical data collection and reporting to gather useful metrics about your managed network. You can also use historical data with the chart baselining tools for predictive analysis and in situation modeling for key performance indicators.

After the System Monitoring shared service is started, data samples are collected and saved in history files at the monitoring agent for the short term. The data is stored in tables, one for each attribute group for which data is being collected. The Tivoli data warehousing facility is set up by default, and the data is moved to a relational database for longer-term storage every hour.


Create a historical collection

If you want to change the default settings for predefined historical data collections, or create a historical data collection for a specific attribute group, see the detailed information at Historical collection configuration.

When you create a historical data collection, to distribute it to all monitoring servers, including the remote monitoring servers that are provisioned later as a result of scale-out, ensure that on the Distribution tab page, you select Managing System (Agent) for the Distribute To field and then select *agent_type for the Available Managed System Groups field, where agent_type is the type of agents for which you want to collect historical data. For example, to collect historical data for Linux OS agents, select *LINUX_SYSTEM in the Available Managed System Groups field.

For the hub monitoring server and remote monitoring server that are created as a result of the deployment of a System Monitoring service, historical data collection is started when their status changes to running; however, this is not the case for a remote monitoring server that is provisioned later as a result of a scale-out. When the status of that remote monitoring server changes to running, the configuration of historical data collection continues and might take some time (at most an hour). If monitoring agents connect to the remote monitoring server during this configuration period, historical data for these agents is not collected until the configuration of historical data collection completes.


View historical data

After historical data collection is started for an attribute group, historical data for that group can be retrieved and displayed in query-based views (chart views, table view, and the topology view).

By default, historical data is collected every 15 minutes and is moved to the Tivoli Data Warehouse every hour.

You know that historical data collection has been enabled for the attributes in a view when you can see Time Span in the view's toolbar. The view shows current data samplings unless you use this tool to specify a broader time period. The first 24 hours of collected data is retrieved from the short-term history files; beyond 24 hours, the data is pulled from the data warehouse.

Historical data is valuable for use in chart baselining. Applying historical data to chart baselines enables you to perform predictive analysis.

Requests that return large amounts of data can negatively affect the performance of monitoring servers, monitoring agents, and your network. For attributes or views that return considerable amounts of data, try to limit the time span to as short a period as will provide the required information.


Summarizing and pruning historical data

Summarization and pruning is enabled for historical data that is stored in the Tivoli Data Warehouse. It keeps the database from growing to an unwieldy size and minimizes the amount of data that gets retrieved to the IBM PureApplication System Monitoring Portal.

By default, the historical data that is stored in the Tivoli Data Warehouse is aggregated hourly, daily, weekly, and monthly. The summarization tables are named by appending the summarization interval that is chosen for the particular attribute group to the original table name of the detailed data, where x represents the original table name. The detailed data table name and the summarized table name can differ because of database name length restrictions:

  • Hourly: x_H
  • Daily: x_D
  • Weekly: x_W
  • Monthly: x_M
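
For example, if the detailed data for an attribute group is stored in a table named EXAMPLE_TABLE (a placeholder name), the hourly summarization table is EXAMPLE_TABLE_H. The following minimal Python sketch expresses that naming rule, leaving aside the database name length restrictions noted above.

  # Derive the summarization table name by appending the interval suffix
  # to the detailed data table name (x in the list above).
  SUFFIXES = {'hourly': '_H', 'daily': '_D', 'weekly': '_W', 'monthly': '_M'}

  def summarization_table(detailed_table, interval):
      return detailed_table + SUFFIXES[interval]

  print(summarization_table('EXAMPLE_TABLE', 'hourly'))   # prints: EXAMPLE_TABLE_H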

Pruning takes place hourly, daily, weekly, and monthly. See the following table for the length of time that the historical data is kept in the Tivoli Data Warehouse.

Pruning intervals and duration of storage

Data type Time that the data is kept
Original data samples 7 days
Hourly data 7 days
Daily data 30 days
Weekly data 6 months
Monthly data 6 months

If you want to change the default settings for historical data collection, or create a historical data collection for a specific attribute group, see the detailed information at Historical collection configuration.

If you want to change the default settings for summarization and pruning, see the detailed information at Summarization and pruning configuration.

By default, historical data collection, summarization, and pruning are enabled for a set of attribute groups for the following agents.


Historical data collection for the AIX Premium agent

By default, historical data collection is enabled for a set of attribute groups for the AIX Premium agent.

After the System Monitoring service is started, historical data for the following attribute groups is collected for the AIX Premium agent:

For detailed information about these attribute groups, see Attribute groups and attributes for the AIX Premium agent.


Historical data collection for the CEC base agent

By default, historical data collection is enabled for a set of attribute groups for the CEC base agent.

After the System Monitoring service is started, historical data for the following attribute groups is collected for the CEC base agent:

For more information about these attribute groups, see Attribute groups and attributes for the CEC Base agent.


Historical data collection for the HMC base agent

By default, historical data collection is enabled for a set of attribute groups for the HMC base agent.

After the System Monitoring service is started, historical data for the following attribute groups is collected for the HMC base agent:

For more information about these attribute groups, see Attribute groups and attributes for the HMC Base agent.


Historical data collection for the ITCAM Agent for HTTP Servers

By default, historical data collection is enabled for a set of attribute groups for the ITCAM Agent for HTTP Servers.

After the System Monitoring service is started, historical data for the following attribute groups is collected for the ITCAM Agent for HTTP Servers:

For detailed information about these attribute groups, see Attributes of the ITCAM Agent for HTTP Servers.


Historical data collection for the ITCAM agent for J2EE

By default, historical data collection is enabled for a set of attribute groups for the ITCAM agent for J2EE.

After the System Monitoring service is started, historical data for the following attribute groups is collected for the ITCAM agent for J2EE:

For more information about these attribute groups, see Attribute groups for ITCAM agent for J2EE.


Historical data collection for the ITCAM agent for SOA

By default, historical data collection is enabled for a set of attribute groups for the ITCAM agent for SOA.

After the System Monitoring service is started, historical data for the following attribute groups is collected for the ITCAM agent for SOA:

For detailed information about these attribute groups, see Attribute groups for the ITCAM agent for SOA.


Historical data collection for the Linux OS agent

By default, historical data collection is enabled for a set of attribute groups for the Linux OS agent.

After the System Monitoring service is started, historical data for the following attribute groups is collected for the Linux OS agent:

For detailed information about these attribute groups, see Attributes of the Linux OS agent.


Historical data collection for the PureApplication agent

By default, historical data collection is enabled for a set of attribute groups for the PureApplication agent.

After the System Monitoring service is started, historical data for the following attribute groups is collected for the PureApplication agent:


Historical data collection for the UNIX OS agent

By default, historical data collection is enabled for a set of attribute groups for the UNIX OS agent.

After the System Monitoring service is started, historical data for the following attribute groups is collected for the UNIX OS agent:

For detailed information about these attribute groups, see Attributes of the UNIX OS agent.


Historical data collection for the VIOS Premium agent

By default, historical data collection is enabled for a set of attribute groups for the VIOS Premium agent.

After the System Monitoring service is started, historical data for the following attribute groups is collected for the VIOS Premium agent:

For detailed information about these attribute groups, see Attribute groups and attributes for the VIOS Premium agent.


Historical data collection for the WebSphere Applications agent

By default, historical data collection is enabled for a set of attribute groups for the WebSphere Applications agent.

After the System Monitoring service is started, historical data for the following attribute groups is collected for the WebSphere Applications agent:

For detailed information about these attribute groups, see Attributes of the WebSphere Applications agent.


Historical data collection for the WebSphere Message Broker Monitoring agent

By default, historical data collection is enabled for a set of attribute groups for the WebSphere Message Broker Monitoring agent.

After the System Monitoring service is started, historical data for the following attribute groups is collected for the WebSphere Message Broker Monitoring agent:

For detailed information about these attribute groups, see Attributes of the WebSphere Message Broker Monitoring agent.


Historical data collection for the WebSphere MQ Monitoring agent

By default, historical data collection is enabled for a set of attribute groups for the WebSphere MQ Monitoring agent.

After the System Monitoring service is started, historical data for the following attribute groups is collected for the WebSphere MQ Monitoring agent:

For detailed information about these attribute groups, see Attributes of WebSphere MQ Monitoring agent.


Historical data collection for the Workload agent

By default, historical data collection is enabled for a set of attribute groups for the Workload agent.

After the System Monitoring service is started, historical data for the following attribute groups is collected for the Workload agent:

For detailed information about these attribute groups, see Attribute groups for the Workload agent.


Historical data collection for the Windows OS agent

By default, historical data collection is enabled for a set of attribute groups for the Windows OS agent.

After the System Monitoring service is started, historical data for the following attribute groups is collected for the Windows OS agent:

For detailed information about these attribute groups, see Attribute groups and attributes for the Windows OS agent.


Known restrictions for the System Monitoring service

Be aware of the following known restrictions and limitations when using the System Monitoring service.


Monitoring agents and situations cannot be used externally

Monitoring agents that collect and display metrics data for the System Monitoring service cannot be registered with an external IBM Tivoli Monitoring environment. The metrics data can be displayed only in the IBM PureApplication System Monitoring Portal. Similarly, you must set and receive situations for the System Monitoring service on the product itself, not through an external environment.


System Monitoring cannot be managed by an external Hub or TEPS

Currently, the remote Tivoli Enterprise Monitoring Server (RTEMS) or Hub Tivoli Enterprise Monitoring Server (TEMS) in the System Monitoring service cannot be managed by an external Hub or Tivoli Enterprise Portal Server (TEPS). Instead, use the PureApplication System Agent to get an appliance view of the system, or subscribe to the Simple Network Management Protocol (SNMP) events in the event console.


Caching service

The caching service is a shared service that allows virtual deployments in the cloud to store, share, and access caching information among components. The caching service also provides auto-scale capabilities so that new component instances can be created or destroyed without retransmitting information.

The caching service is based on WebSphere eXtreme Scale code and provides highly efficient caching. The caching service is self-managed and highly available, making it simple and quick to use.

Each deployed instance of the caching service contains the following three virtual machines:

Caching-Master

Hosts the shared service administrative API and the WebSphere eXtreme Scale catalog and container processes.

Caching-Catalog

Hosts the WebSphere eXtreme Scale catalog (redundancy) and container processes.

Caching-Container

Hosts the WebSphere eXtreme Scale container processes.

When deploying the caching service, you can specify both the instance size and the number of instances. These settings determine the size and initial number of virtual machines in the cloud that are devoted to the caching service. For example, if you select 8 GB and four instances, the caching service deploys with four virtual machines that can each handle 8 GB of caching information, for a total capacity of 32 GB. The information about each instance is automatically replicated to the other caching virtual machines.

A development version of the caching service is available to reduce the required capacity and the startup time when not in production. The development version of the caching service deploys two virtual machines.

To manage the caching service, click Instances > Virtual Applications in the workload console. Then select the deployed caching service instance in the list and click Manage.

When managing the caching service, you can perform operations that are related to grid caching. Grid caching maintains data that can be accessed from multiple clients, thereby minimizing network latency and reducing bandwidth. You can set the following options when you are using the caching service to configure grid caching:

Create grid

Create a new grid to maintain cached data. Add the following information:

  • The name of the new grid. This name is used by deployments to find and access the cached data from the grid.
  • The type of grid, for example:

    • Simple Data Grid: A simple data grid that can be used for anything.
    • Dynamic Cache: A WebSphere Application Server dynamic cache replication grid.
    • Session Data Cache: A WebSphere Application Server HTTP session replication grid.

  • The unique user ID used to access this grid.
  • The password that is associated with the unique ID used to access the grid.

List grid

Returns a list of all of the grids that currently exist in the caching service.

List grid details

Returns the details of a specific grid.

Delete grid

Deletes the specified grid. If you choose to delete a grid, all of the cached data on that grid is deleted. This action cannot be undone. Deleting a grid also deletes the user ID that is associated to the grid.

Public SSL Caching Service Certificate

Extract the public SSL certificate of the caching service that can be used by external clients when connecting.

Import trusted caching client certificate into truststore

Adds a public client SSL certificate to the caching service truststore so that the service trusts clients that use SSL RMI/IIOP communication for grid connections. This enables manually configured virtual systems to use the caching service over ORB SSL. The certificate that you import must be in a format that is accepted by keytool.

Remove trusted caching client certificate from the caching service truststore

Removes the previously added public client SSL certificate from the caching service truststore and restarts the caching service so that the certificate is no longer trusted.

Update a grid

Updates an existing grid to change its maximum capacity.

Add trace to the caching server

Configures trace settings in the caching service.

Remove trace from the caching server

Removes traces that are enabled in the caching service.

Get current caching service trace level

Returns the current trace settings of the caching service.

When deploying the caching service, you can also provide input such as the initial number of VM instances, the maximum number of VM instances, and the cache size per instance. The cache size defaults to 8 GB, with choices of 4, 8, 16, and 32 GB. You can disable scaling, or tune the scaling parameters, including the usage percentage range outside of which automatic scale-out and scale-in occur. You can also specify how long usage can remain outside that range before scaling occurs.


Interacting with the caching service through clients

Administrators can interact with the caching service through a client that manages the caching grid. The caching service is a shared service that enables multiple clients in a cloud environment to use the service. The caching service supports the client API version 3.0.

The interaction with the caching service occurs in two phases. First, the caching service interacts with the application through the grid that was created for caching. Second, the caching service interacts with the administrator, who manages the grid for the client.

Based on WebSphere eXtreme Scale, the caching service uses the same application interaction model as WebSphere eXtreme Scale. This model includes the WebSphere eXtreme Scale client for session and dynamic caching interaction and the ObjectGrid APIs for simple grid interaction.

Administrative interaction with the caching service uses the shared service infrastructure, so that plug-ins that configure a client for the caching service can automate the interaction. Administrative interaction involves first looking up the registry and then using the caching service APIs to manage the grid.


Registry lookup

After you deploy the caching service, a registry entry is created in the shared service infrastructure. Caching clients read the registry entry to look up the location (IP address or URI) of the service. This registry entry, which is obtained through public registry APIs, also contains details about how to use the caching service public API to interact with the caching service. When calling the registry for the caching service, the caller should use version 3.0 of the client API.

Service provisioners typically call RegistryService.getRegistry(...) from a life cycle script with the following code:

maestro.registry.getRegistry(<service_name>, <client_api_version>)

This script returns the registry information of the shared service (<service_name>), provided that a service with that name is deployed in the cloud group and supports the client version (<client_api_version>). The shared service supplies a registry provider that returns this information based on the shared service deployment information that is passed to it.
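
The following sketch shows how a plug-in life cycle script might perform this lookup for the caching service. It is illustrative only: the service name 'caching', the client API version string, and the keys that are read from the returned entry are assumptions to check against your environment, and the maestro module is available only inside the PureApplication script runtime.

# Sketch of a life cycle script that looks up the caching shared service.
# Assumes the maestro module that is available to plug-in scripts; the service
# name and the entry keys ('host', 'port') are placeholders for illustration.
import maestro

registry = maestro.registry.getRegistry('caching', '3.0')
if registry is None:
    print('No caching shared service with client API 3.0 is deployed in this cloud group')
else:
    host = registry.get('host')   # assumed key name
    port = registry.get('port')   # assumed key name
    print('Caching service endpoint: %s:%s' % (host, port))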

Depending on the type of caching service deployed, a call to the registry service returns different information. The internal caching service registry provides the following information:

The external caching service registry provides the following information:


Internal caching service API calls

The internal caching service is an in-cloud service that exposes client version 3.0 REST APIs. Use PUT methods to create resources, DELETE methods to delete resources, and GET methods to retrieve resources.

The <ip> variable in the examples represents the IP address that is obtained from the registry information.

Examples of using PUT methods

PUT https://<ip>:9999/sharedservice/caching/sessionGrid
Creates a session grid with a specified cap size. Also creates the user who is allowed to access the grid.

PUT https://<ip>:9999/sharedservice/caching/simpleGrid
Creates a simple grid with a specified cap size. Also creates the user who is allowed to access the grid.

PUT https://<ip>:9999/sharedservice/caching/dynamicGrid
Creates a dynamic grid with a specified cap size. Also creates the user who is allowed to access the grid.

The following JSON code is used with PUT methods:

{"user":<user>,"password":<password>,"gridname":<name>,"gridcap":<cap>}

See the following limitations:

As a best practice, when an HTTP error response code is returned, send a DELETE grid call to confirm that any grid that might have been created is completely removed.

Examples of using DELETE methods

DELETE https://<ip>:9999/sharedservice/caching/sessionGrid/<gridname>
Deletes the session grid and the users who can access the grid.

DELETE https://<ip>:9999/sharedservice/caching/simpleGrid/<gridname>
Deletes the simple grid and the users who can access the grid.

DELETE https://<ip>:9999/sharedservice/caching/dynamicGrid/<gridname>
Deletes the dynamic grid and the users who can access the grid.
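
A corresponding sketch of a DELETE call, using the same placeholder IP address and grid name as in the PUT example, might look like this:

# Sketch: delete the simple data grid (and its associated user) created above.
import requests

ip = '198.51.100.10'             # placeholder; obtained from the registry lookup
gridname = 'mySimpleGrid'        # placeholder grid name

response = requests.delete(
    'https://%s:9999/sharedservice/caching/simpleGrid/%s' % (ip, gridname),
    verify=False                 # sketch only: skip certificate verification
)
print(response.status_code, response.text)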


External caching service API calls

The external caching service is built on IBM WebSphere DataPower XC10 Appliance. The service requires the client to use the public APIs of the appliance to manage the grid. For more information, see the Related information section.

The IBM Web Application Pattern caching client performs the following calls when creating the grid. These calls configure security and provide proper grid isolation for the deployment:

IBM WebSphere DataPower XC10 Appliance Information Center


Use an external caching service

You can use IBM WebSphere DataPower XC10 Appliance as an external caching service for your cloud environment. The caching service can be deployed either as an internal service or as a pointer to an external WebSphere DataPower XC10 Appliance.

This task describes setting up the caching service in a cloud environment to point to an external system. In this scenario, a WebSphere DataPower XC10 Appliance is set up outside the cloud environment and provides caching services for applications that run in the cloud. This approach saves cloud resources, and the XC10 Appliance provides more space than the internal caching service. To set up an XC10 Appliance to provide caching for your cloud environment, complete the following procedure.