Observing the Liberty runtime in a Kubernetes environment
We can monitor the Liberty runtime and analyze logs with tools such as Kibana, Prometheus, and Grafana so that we can resolve issues in the environment and manage workflows.
Monitoring our Liberty runtimes
When we enable the Liberty runtime with MicroProfile Metrics, we can track and observe metrics from the JVM and the Liberty server. We can also track metrics when a deployed application is instrumented with MicroProfile Metrics. We can then use Prometheus to scrape the metric data and Grafana to visualize it.

Add MicroProfile metrics to a WebSphere Liberty image
Ensure that the MicroProfile Metrics feature is built as part of our Liberty image.

- Create a server_mpMetrics.xml file and put it in the same directory as your Dockerfile.
We can use the following example.
<?xml version="1.0" encoding="UTF-8"?>
<server>
    <featureManager>
        <feature>mpMetrics-5.1</feature>
        <!-- Include the monitor-1.0 feature if we use an mpMetrics version earlier than 2.3.
        <feature>monitor-1.0</feature>
        -->
    </featureManager>
    <quickStartSecurity userName="${env.username}" userPassword="${env.password}"/>
</server>
Your Liberty image now outputs Prometheus-formatted metrics at the /metrics endpoint. When we use Prometheus to scrape data from the /metrics endpoint, we must update a service monitor to negotiate authentication with the Liberty server.
We can use any version of the MicroProfile Metrics feature. We must include the Performance Monitoring 1.0 feature if we use a version of the MicroProfile Metrics feature that is earlier than version 2.3.
- Set up the username and password environment variables used in the <quickStartSecurity> element.
The example configuration secures access to the server with basic authentication using the <quickStartSecurity> element. We can alternatively secure the server with a basic user registry or an LDAP user registry.
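For example, if the credentials are stored in a Kubernetes secret, we can inject them into the container through the env section of the WebSphereLibertyApplication custom resource. The following sketch assumes a secret named basic-auth with username and password keys.

spec:
  env:
    # The variable names must match the ${env.username} and ${env.password}
    # references in the quickStartSecurity element.
    - name: username
      valueFrom:
        secretKeyRef:
          name: basic-auth
          key: username
    - name: password
      valueFrom:
        secretKeyRef:
          name: basic-auth
          key: password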
- In your Dockerfile, add the following line to copy the server_mpMetrics.xml file into the configDropins/overrides directory.
COPY --chown=1001:0 server_mpMetrics.xml /config/configDropins/overrides/
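Putting the steps together, a minimal Dockerfile might look like the following sketch. The base image tag and application file name are assumptions; adjust them to your environment.

FROM icr.io/appcafe/websphere-liberty:kernel-java17-openj9-ubi

# Merge the metrics configuration into the server configuration at startup.
COPY --chown=1001:0 server_mpMetrics.xml /config/configDropins/overrides/

# Copy the application (hypothetical file name).
COPY --chown=1001:0 myApp.war /config/apps/

# Optional: pre-process the features and configuration for faster startup.
RUN configure.sh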
For more information, see the MicroProfile Metrics feature documentation.
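After the image is built and running, we can confirm that the endpoint works by requesting /metrics with the configured credentials. This sketch assumes the default HTTPS port 9443 and placeholder credentials; the -k flag skips certificate validation for the default self-signed certificate.

curl -k -u adminUser:adminPassword https://localhost:9443/metrics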
Enable Prometheus to scrape data
The Prometheus operator must be installed on our Kubernetes cluster so that Prometheus can scrape data. If the Prometheus operator is not already installed, install it.
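We can check for an existing installation by looking for the operator deployment. The namespace varies by installation method, so the following sketch searches across all namespaces.

kubectl get deployments --all-namespaces | grep -i prometheus-operator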
The WebSphereLibertyApplication custom resource creates a service monitor for you. Configure the service monitor to use the basic authentication credentials that the server.xml file specifies for access to the /metrics endpoint. Add the following basicAuth section to the monitoring definition in your WebSphereLibertyApplication custom resource, and replace the basic-auth name with the name of your secret.
kind: WebSphereLibertyApplication
spec:
  …
  monitoring:
    endpoints:
      - basicAuth:
          username:
            key: username
            name: basic-auth
          password:
            key: password
            name: basic-auth
        interval: 30s
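Prometheus reads the credentials from a Kubernetes secret in the namespace of the service monitor. The following sketch creates the basic-auth secret with placeholder values; the values must match the credentials that the quickStartSecurity element uses.

kubectl create secret generic basic-auth \
  --from-literal=username=adminUser \
  --from-literal=password=adminPassword \
  -n target-namespace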
Visualizing your data with Grafana
We can use IBM-provided Grafana dashboards that include metrics from the JVM and the Liberty runtime. If we are deploying Grafana on Red Hat OpenShift® Container Platform, see Monitoring applications on Red Hat OpenShift Container Platform with Prometheus and Grafana.
The sample Grafana dashboards are available for Liberty servers that use mpMetrics-1.x or mpMetrics-2.x. Use the Grafana dashboard that matches the version of the MicroProfile Metrics feature that we configured earlier.
| Umbrella Feature | mpMetrics Feature | Grafana dashboard |
|---|---|---|
| microProfile-1.2 - microProfile-2.2 | mpMetrics-1.x | ibm-websphere-liberty-grafana-dashboard.json |
| microProfile-3.0 - microProfile-6.1 | mpMetrics-2.x - mpMetrics-5.1 | ibm-websphere-liberty-grafana-dashboard-metrics-2.0.json |
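How a dashboard JSON file is loaded depends on your Grafana deployment. If Grafana runs with the dashboard sidecar that some community stacks, such as kube-prometheus-stack, configure, one option is to wrap the file in a labeled ConfigMap. The namespace and label name in this sketch are assumptions that depend on your sidecar configuration.

kubectl create configmap liberty-dashboard \
  --from-file=ibm-websphere-liberty-grafana-dashboard-metrics-2.0.json \
  -n monitoring
kubectl label configmap liberty-dashboard grafana_dashboard=1 -n monitoring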
Analyzing Liberty logs
We can manage log data that comes from Kubernetes pods by deploying log aggregation tools on a Kubernetes cluster.
Pod processes that run in Kubernetes frequently produce logs. To effectively manage this log data and ensure that no loss of log data occurs when a pod ends, deploy a log aggregation tool on the Kubernetes cluster. Log aggregation tools help you persist, search, and visualize the log data that is gathered from the pods across the cluster.
One choice for application logging with log aggregation is the Elasticsearch, Fluentd, and Kibana (EFK) set of open source tools. If we use the EFK stack, we can customize and deploy Kibana dashboards to monitor Liberty logging events.
Deploy Kibana dashboards to monitor Liberty logging events
We can use Kibana dashboards to visualize JSON logging events from Liberty servers. To effectively manage application logs, we can deploy our own Elasticsearch, Fluentd, and Kibana (EFK) stack on a Kubernetes cluster to aggregate application logs and analyze these logs on the Kibana dashboard.
- To effectively manage logs that are emitted from applications, deploy our own Elasticsearch, Fluentd, and Kibana (EFK) stack. If we are deploying the EFK stack on Red Hat OpenShift, see Analyzing application logs on Red Hat OpenShift Container Platform with Loki, Vector, and the RHOCP Cluster Observability Operator.
- To use Kibana dashboards, the logging events must be emitted in JSON format to standard output. For the WebSphere Liberty operator, JSON logging is enabled by default. For more information about how to configure a WebSphere Application Server Liberty image with JSON logging, see Logging.
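For reference, JSON console logging in Liberty container images is controlled by environment variables. The following sketch shows a typical combination; the values are examples, not requirements.

# Emit console output as JSON instead of plain text.
WLP_LOGGING_CONSOLE_FORMAT=json
# Minimum level for console messages.
WLP_LOGGING_CONSOLE_LOGLEVEL=info
# Event types to include in the JSON output.
WLP_LOGGING_CONSOLE_SOURCE=message,trace,accessLog,ffdc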
- We can use Kibana dashboards that display Liberty logging events.
- We can use command-line JSON parsers like the open source JSON Query tool (jq) to create human-readable views of JSON-formatted logs. We can create an alias for the jq command that formats the output the way we like. In the following example, the logs are piped through grep to ensure that the message field is available before jq parses the line.
alias prettylog="grep --line-buffered message | jq '.ibm_datetime + \" \" + .loglevel + \"\\t\" + \" \" + .message' -r"
kubectl logs -f pod_name -n namespace | prettylog