Configure the visualization data service
The visualization data service logs historic data in text files for reuse with other charting programs. Historic data is logged as comma-separated values, with time stamps written as the standard long value from the java.util.Date class. By using the visualization data service, we can log historical data, calculate chargeback values, or perform capacity planning.
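Because the logged time stamps are plain long values from java.util.Date (milliseconds since the epoch), a consumer can reconstruct the original Date directly. A minimal sketch; the class and method names here are illustrative, not part of the product:

```java
import java.util.Date;

public class TimestampDemo {
    // Convert a logged long time stamp (milliseconds since the epoch,
    // as produced by java.util.Date.getTime()) back into a Date.
    static Date fromLogValue(long millis) {
        return new Date(millis);
    }

    public static void main(String[] args) {
        long logged = 1150000000000L; // hypothetical value from a CSV record
        Date d = fromLogValue(logged);
        // The conversion round-trips: getTime() returns the same long.
        System.out.println(d.getTime());
    }
}
```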
We must use the deployment manager to implement this feature. Ensure that if you are using multiple core groups, they are correctly bridged.
If we are a user with either a monitor or an operator administrative role, then we can only view the visualization data service information. If we have a configurator administrative role, then we can change the configuration. If we have an administrator role, then we have all the privileges for the visualization data service.
We must configure the visualization data service before we enable logging. If we need to make changes to the configuration after logging is enabled, we must restart the deployment manager after making the configuration changes.
- In the administrative console, click System administration > Visualization data service.
- Enter a value in the Time stamp format field. The time stamp format specifies a time and date pattern used when logging the visualization data. Use the SimpleDateFormat Java class to format the time stamp. For example, to output the 12.06.2006 5:26:30:978 PM PDT timestamp, use the following time stamp format value:
MM.dd.yyyy hh:mm:ss:SSS aaa z
If you are using IBM Tivoli Usage and Accounting Manager, then use a format that separates the date and time into different fields:
'MM.dd.yyyy, hh:mm:ss:SSS'
'yyyy.MMMMM.dd, hh:mm:ss'
We can also specify the time stamp format with wsadmin.sh:
wsadmin.sh -lang jython
wsadmin>> vds = AdminConfig.getid("/Cell:OpsManTestCell/VisualizationDataService:/")
wsadmin>> vdl = AdminConfig.showAttribute(vds,"visualizationDataLog")
wsadmin>> AdminConfig.modify(vdl,[["timestampFormat","MM.dd.yyyy hh:mm:ss:SSS aaa z"]])
wsadmin>> print AdminConfig.show(vdl)
wsadmin>> AdminConfig.save()
- In the Maximum file size field, type a whole number for the maximum file size of the logs.
- In the Maximum number of historical files field, type a whole number for the maximum number of logs to generate per historic cache type.
- In the File name field, type the path where the log files are generated. Use a variable in the file name value, for example: ${LOG_ROOT}/visualization.
- In the Data log write interval field, type a whole number between 1 and 365 for the interval in which the logs are generated in seconds, minutes, hours, or days. If we plan to log data for several metrics over a period longer than 1 week, increase the Data log write interval for better performance.
- From the Data transformer action list, select Average or Skip to specify how data is transformed when the interval reaches its maximum value. More data points are provided than you might want to use. The Average option averages the existing data points within the specified interval, and the Skip option discards the intermediate points, keeping only the points that fall exactly on the intervals.
- Select Enable log to start logging historic data.
- If logging was enabled before you configured the visualization data service, restart the deployment manager.
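The time stamp pattern from the steps above can be exercised directly with the SimpleDateFormat class. This is a minimal sketch; the locale, time zone, and sample epoch value are arbitrary choices for illustration, not values from the product:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class TimestampFormatDemo {
    public static void main(String[] args) {
        // Pattern from the configuration example: month.day.year, 12-hour
        // clock with milliseconds, AM/PM marker, and time zone name.
        SimpleDateFormat fmt =
                new SimpleDateFormat("MM.dd.yyyy hh:mm:ss:SSS aaa z", Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone("America/Los_Angeles")); // arbitrary zone
        // Epoch milliseconds, as logged by the service via java.util.Date.
        System.out.println(fmt.format(new Date(1150000000000L)));
    }
}
```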
Results
Operational data is exported to the file name that you specified.
What to do next
Now that we have configured the visualization data service, we can import data into an external charting program.
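As a starting point for such an import, a consumer might split each comma-separated record and decode its long time stamp. The two-column record layout below is an assumption for illustration only; the real column set varies by cache type.

```java
import java.util.Date;

public class LogRecordDemo {
    // Parse one hypothetical CSV record of the form "<timestamp>,<metric>".
    // The actual column layout depends on the historic cache type logged.
    static Object[] parse(String line) {
        String[] cols = line.split(",");
        Date when = new Date(Long.parseLong(cols[0])); // long epoch-millis time stamp
        double value = Double.parseDouble(cols[1]);    // an assumed numeric metric column
        return new Object[] { when, value };
    }

    public static void main(String[] args) {
        Object[] rec = parse("1150000000000,42.5");
        System.out.println(rec[0] + " -> " + rec[1]);
    }
}
```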
Subtopics
- Intelligent Management: performance logs
Use log files to help with performance monitoring, accounting and problem determination.
- BusinessGridStatsCache
This log file describes the business grid statistics cache.
- StrfTime format conversions
The format used when using the %{format}t log parameter is based on the non-extended BSD strftime(3) time conversion functions. The parameters that are specifically supported, and sample output, are listed in the following table.
- DeploymentTargetStatsHistoricCache
This log file contains historic information on the deployment target statistics historic cache.
- NodeStatsHistoricCache
This log file contains historic information on the node statistics cache.
- ServerStatsCache
This log file describes the server statistics cache.
- TCModuleInstanceStatsCache
This log file describes the transaction class module instance cache.
- TierStatsCache
This log file describes the tier statistics cache.
- FineGrainedPowerConsumptionStatsCache
This log file contains fine grained power and work consumption data. A record is written for every transaction class module and server instance. This action creates a record for every middleware application, module, transaction class, and server instance that has had work routed through an on demand router (ODR). Additional fields exist that hold relationship information such as the cluster to which the server belongs, the node group with which the cluster is associated, and the service policy with which the transaction class is associated.
- ServerPowerConsumptionStatsCache
This file is a consolidation of the FineGrainedPowerConsumptionStatsCache at the server level with some additional server data.
Related tasks
Monitor Intelligent Management operations
Intelligent Management: performance logs
Intelligent Management: administrative roles and privileges
SimpleDateFormat class API documentation