Configure the visualization data service
The visualization data service logs historic data in text files for reuse with other charting programs. The historic data is logged as comma-separated values, with time stamps recorded as the standard long values returned by the java.util.Date class. By using the visualization data service, we can log historical data, calculate chargeback values, or perform capacity planning.
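For example, the following Jython sketch (a minimal illustration, not part of the product) shows how one such logged line might be read. The column layout in the sample line is hypothetical, because the actual columns depend on the historic cache type, but the long value is a java.util.Date time stamp:
from java.util import Date
line = "1165454790978,serverA,42.5"    # hypothetical sample: time stamp, member, value
fields = line.split(",")
timeStamp = Date(long(fields[0]))      # convert the long time stamp to a readable date
print timeStamp                        # prints the sample time in readable form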
Use the deployment manager to implement this feature. If we are using multiple core groups, ensure that they are correctly bridged.
If we are a user with either a monitor or an operator administrative role, we can only view the visualization data service information. If we have a configurator administrative role, we can also change the configuration. If we have an administrator role, we have all the privileges for the visualization data service.
Configure the visualization data service before enabling logging. If we change the configuration after logging is enabled, restart the deployment manager for the changes to take effect.
- In the administrative console, click System administration > Visualization data service.
- Enter a value in the Time stamp format field.
The time stamp format specifies the date and time pattern that is used when logging the visualization data. The pattern follows the syntax of the SimpleDateFormat Java class. For example, to output the time stamp 12.06.2006 5:26:30:978 PM PDT, use the following time stamp format value:
MM.dd.yyyy hh:mm:ss:SSS aaa z
If we are using IBM Tivoli Usage and Accounting Manager, then use a format that separates the date and time into different fields:
'MM.dd.yyyy, hh:mm:ss:SSS'
'yyyy.MMMMM.dd, hh:mm:ss'
We can also specify the time stamp format with wsadmin.sh, as in the following example; a short sketch after these steps shows how to preview a pattern before saving it:
wsadmin.sh -lang jython
wsadmin> vds = AdminConfig.getid("/Cell:OpsManTestCell/VisualizationDataService:/")
wsadmin> vdl = AdminConfig.showAttribute(vds, "visualizationDataLog")
wsadmin> AdminConfig.modify(vdl, [["timestampFormat", "MM.dd.yyyy hh:mm:ss:SSS aaa z"]])
wsadmin> print AdminConfig.show(vdl)
wsadmin> AdminConfig.save()
- In the Maximum file size field, type a whole number for the maximum size of each log file.
- In the Maximum number of historical files field, type a whole number for the maximum number of log files to generate for each historic cache type.
- In the File name field, type the path where the log files are generated. We can use a variable in the file name value, for example: ${LOG_ROOT}/visualization.
- In the Data log write interval field, type a whole number between 1 and 365 for how often the logs are written, and specify whether the interval is measured in seconds, minutes, hours, or days. If we plan to log data for several metrics over a period longer than one week, increase the data log write interval for better performance.
- From the Data transformer action list, select Average or Skip to specify how to transform the data when the interval reaches its maximum value. More data points are provided than we might want to use. The AVERAGE option averages the data points that fall within each interval, and the SKIP option discards the intermediate points and logs only the points that fall exactly on the interval boundaries. A conceptual sketch after these steps illustrates the difference.
- Select Enable log to start logging historic data.
- If logging was enabled before configuring the visualization data service, restart the deployment manager.
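Before saving a time stamp format, we can preview how a pattern renders by calling the SimpleDateFormat class directly from a Jython session (for example, wsadmin). This is an optional sketch, not a required step:
from java.text import SimpleDateFormat
from java.util import Date
pattern = "MM.dd.yyyy hh:mm:ss:SSS aaa z"   # the example pattern from the Time stamp format step
formatter = SimpleDateFormat(pattern)
print formatter.format(Date())              # prints the current time rendered with the pattern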
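The following conceptual sketch (plain Jython with hypothetical numbers, not wsadmin commands) illustrates how the AVERAGE and SKIP data transformer actions differ when more data points are available than are logged:
samples = [10, 20, 30, 40, 50, 60]    # hypothetical raw data points
interval = 3                          # hypothetical: three raw points per logged point
averaged = []
skipped = []
for i in range(0, len(samples), interval):
    chunk = samples[i:i + interval]
    total = 0
    for value in chunk:
        total = total + value
    averaged.append(total / float(len(chunk)))   # AVERAGE: mean of the points in the interval
    skipped.append(samples[i])                   # SKIP: only the point at the interval boundary
print averaged    # [20.0, 50.0]
print skipped     # [10, 40]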
Results
Operational data is exported to the file name that we specified.
What to do next
Now that we have configured the visualization data service, we can import the data into an external charting program.
Subtopics
- Intelligent Management: performance logs
- BusinessGridStatsCache
- StrfTime format conversions
- DeploymentTargetStatsHistoricCache
- NodeStatsHistoricCache
- ServerStatsCache - Intelligent Management
- TCModuleInstanceStatsCache
- TierStatsCache
- FineGrainedPowerConsumptionStatsCache
- ServerPowerConsumptionStatsCache
Related tasks
- Monitor Intelligent Management operations
- Intelligent Management: performance logs
- Intelligent Management: administrative roles and privileges
- SimpleDateFormat class API documentation