Operator configuration examples
Browse the WebSphere Liberty operator examples to learn how to use custom resource (CR) parameters to configure the operator.
For more information about the WebSphereLibertyApplication custom resource definition (CRD) configurable parameters, see WebSphereLibertyApplication custom resource.
- Reference image streams (.spec.applicationImage)
- Configure service account (.spec.serviceAccount)
- Add or change labels (.metadata.labels)
- Add annotations (.metadata.annotations)
- Set environment variables for an application container (.spec.env or .spec.envFrom)
- Override console logging environment variable default values (.spec.env)
- Configure multiple application instances for high availability (.spec.replicas or .spec.autoscaling)
- Set privileges and permissions for a pod or container (.spec.securityContext)
- Persist resources (.spec.statefulSet and .spec.volumeMounts)
- Monitor resources (.spec.monitoring)
- Specify multiple service ports (.spec.service.port* and .spec.monitoring.endpoints)
- Configure probes (.spec.probes)
- Deploy serverless applications with Knative (.spec.createKnativeService)
- Expose applications externally (.spec.expose, .spec.createKnativeService, .spec.route)
- Allow or limit incoming traffic (.spec.networkPolicy)
- Bind applications with operator-managed backing services (.status.binding.name and .spec.service.bindable)
- Limit a pod to run on specified nodes (.spec.affinity)
- Constrain how pods are spread between nodes and zones
- Configure DNS (.spec.dns.policy and .spec.dns.config)
- Configure tolerations (.spec.tolerations)
Reference image streams (.spec.applicationImage)
To deploy an image from an image stream, we must specify the .spec.applicationImage field in the custom resource.
spec:
  applicationImage: my-namespace/my-image-stream:1.0
The previous example looks up the 1.0 tag from the my-image-stream image stream in the my-namespace project and populates the custom resource .status.imageReference field with a referenced image such as image-registry.openshift-image-registry.svc:5000/my-namespace/my-image-stream@sha256:*****. The operator watches the specified image stream and deploys new images as they become available for the specified tag.
To reference an image stream, the .spec.applicationImage field must follow the project_name/image_stream_name[:tag] format. If project_name or tag is not specified, the operator defaults project_name to the namespace of the custom resource and tag to latest. For example, the applicationImage: my-image-stream configuration is the same as the applicationImage: my-namespace/my-image-stream:latest configuration.
The operator first tries to find an image stream name with the project_name/image_stream_name format, and falls back to a registry lookup if it cannot find an image stream that matches the value.
This feature is only available if we are running on Red Hat OpenShift®. The operator requires ClusterRole permissions if the image stream resource is in another namespace.
Configure service account (.spec.serviceAccount)
The operator can create a ServiceAccount resource when deploying a WebSphereLibertyApplication custom resource (CR). If .spec.serviceAccount.name is not specified in a CR, the operator creates a service account with the same name as the custom resource (such as my-app). In addition, this service account is dynamically updated when pull secret changes are detected in the custom resource .spec.pullSecret field.
Alternatively, the operator can use a custom ServiceAccount that you provide. If .spec.serviceAccount.name is specified in a CR, the operator uses the service account as is, with read-only permissions, when provisioning new Pods. It is your responsibility to add any required image pull secrets to the service account when accessing images behind a private registry.
Note: .spec.serviceAccountName is now deprecated. The operator still looks up the value of .spec.serviceAccountName, but we must switch to using .spec.serviceAccount.name.
We can set .spec.serviceAccount.mountToken to disable mounting the service account token into the application pods. By default, the service account token is mounted. This configuration applies to either the default service account that the operator creates or to the custom service account that you provide.
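For example, a minimal sketch that disables mounting the service account token; the image name is illustrative:

```yaml
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  serviceAccount:
    mountToken: false   # do not mount the service account token into application pods
```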
If applications require specific permissions but still want the operator to create a ServiceAccount, we can manually create a role binding to bind a role to the service account that the operator created. To learn more about role-based access control (RBAC), see the Kubernetes documentation.
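As a hedged sketch, assuming a CR named my-app and a pre-existing Role named my-app-role (both names are illustrative), such a role binding might look like the following:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app-rolebinding
subjects:
  - kind: ServiceAccount
    name: my-app          # the service account the operator created (same name as the CR)
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: my-app-role       # a hypothetical Role that grants the permissions the application needs
```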
Add or change labels (.metadata.labels)
By default, the operator adds the following labels to all resources created for a WebSphereLibertyApplication CR.
Label | Default value | Description
---|---|---
app.kubernetes.io/instance | metadata.name | A unique name or identifier for this component. The default cannot be changed.
app.kubernetes.io/name | metadata.name | A name that represents this component.
app.kubernetes.io/managed-by | websphere-liberty-operator | The tool that manages this component.
app.kubernetes.io/component | backend | The type of component created. For a full list, see the Red Hat OpenShift documentation.
app.kubernetes.io/part-of | applicationName | The name of the higher-level application that this component is a part of. If the component is not a stand-alone application, configure this label.
app.kubernetes.io/version | version | The version of the component.
We can add new labels or overwrite existing labels, excluding the app.kubernetes.io/instance label. To set labels, specify them in your custom resource as key-value pairs in the .metadata.labels field.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
  labels:
    my-label-key: my-label-value
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
After the initial deployment of the custom resource, any changes to its labels are applied only if a spec field is updated.
When running in Red Hat OpenShift, additional labels and annotations are standard on the platform. Overwrite defaults where applicable and, using the previous instructions, add any labels from the Red Hat OpenShift list that are not set by default.
Add annotations (.metadata.annotations)
To add new annotations into all resources created for a WebSphere Liberty operator, specify them in your CR as key-value pairs in the .metadata.annotations field. Annotations in a CR override any annotations specified on a resource, except for the annotations set on Service with .spec.service.annotations.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
  annotations:
    my-annotation-key: my-annotation-value
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
After the initial deployment of the custom resource, any changes to its annotations are applied only if a spec field is updated.
When running in Red Hat OpenShift, additional annotations are standard on the platform. Overwrite defaults where applicable and, using the previous instructions, add any annotations from the Red Hat OpenShift list that are not set by default.
Set environment variables for an application container (.spec.env or .spec.envFrom)
To set environment variables for the application container, specify .spec.env or .spec.envFrom fields in a CR. The environment variables can come directly from key-value pairs, ConfigMap, or Secret. The environment variables set by the .spec.env or .spec.envFrom fields override any environment variables specified in the container image.
Use .spec.envFrom to define all data in a ConfigMap or a Secret as environment variables in a container. Keys from ConfigMap or Secret resources become environment variable names in the container. The following custom resource sets key-value pairs in .spec.env and .spec.envFrom fields.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  env:
    - name: DB_NAME
      value: "database"
    - name: DB_PORT
      valueFrom:
        configMapKeyRef:
          name: db-config
          key: db-port
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-credential
          key: adminUsername
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credential
          key: adminPassword
  envFrom:
    - configMapRef:
        name: env-configmap
    - secretRef:
        name: env-secrets
For another example that uses .spec.envFrom.secretRef, see Use environment variables for basic authentication credentials. For an example that overrides the console logging environment variable default values, see Override console logging environment variable default values (.spec.env).
Override console logging environment variable default values (.spec.env)
The WebSphere Liberty operator sets environment variables related to console logging by default. We can override the console logging default values with our own values in the custom resource .spec.env list.
The following table lists the console logging environment variables and their default values.
Variable name | Default value |
---|---|
WLP_LOGGING_CONSOLE_LOGLEVEL | info |
WLP_LOGGING_CONSOLE_SOURCE | message,accessLog,ffdc,audit |
WLP_LOGGING_CONSOLE_FORMAT | json |
To override default values for the console logging environment variables, set your preferred values manually in your custom resource .spec.env list. For information about values we can set, see the Open Liberty logging documentation.
The following example shows a custom resource .spec.env list that sets nondefault values for the console logging environment variables.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  env:
    - name: WLP_LOGGING_CONSOLE_FORMAT
      value: "DEV"
    - name: WLP_LOGGING_CONSOLE_SOURCE
      value: "messages,trace,accessLog"
    - name: WLP_LOGGING_CONSOLE_LOGLEVEL
      value: "error"
For more information about overriding variable default values, see Set environment variables for an application container (.spec.env or .spec.envFrom). For information about monitoring applications and analyzing application logs, see Observing with the WebSphere Liberty operator.
Configure multiple application instances for high availability (.spec.replicas or .spec.autoscaling)
To run multiple instances of the application for high availability, use the .spec.replicas field for multiple static instances or the .spec.autoscaling field for auto-scaling, which autonomically creates or deletes instances based on resource consumption. The .spec.autoscaling.maxReplicas and .spec.resources.requests.cpu fields are required for auto-scaling.
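For example, a minimal auto-scaling sketch; the image name, replica counts, and CPU values are illustrative, and .spec.autoscaling.maxReplicas and .spec.resources.requests.cpu are the required fields:

```yaml
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  autoscaling:
    minReplicas: 2
    maxReplicas: 5                      # required for auto-scaling
    targetCPUUtilizationPercentage: 70
  resources:
    requests:
      cpu: 500m                         # required for auto-scaling
```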
Set privileges and permissions for a pod or container (.spec.securityContext)
A security context controls privilege and permission settings for a pod or application container. By default, the operator sets several .spec.securityContext parameters for an application container as shown in the following example.
spec:
  containers:
    - name: app
      securityContext:
        capabilities:
          drop:
            - ALL
        privileged: false
        runAsNonRoot: true
        readOnlyRootFilesystem: false
        allowPrivilegeEscalation: false
        seccompProfile:
          type: RuntimeDefault

To override the default values or set more parameters, change the .spec.securityContext parameters, for example:
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  securityContext:
    readOnlyRootFilesystem: true
    runAsUser: 1001
    seLinuxOptions:
      level: "s0:c123,c456"
Note: If our Kubernetes cluster does not generate a user ID and .spec.securityContext.runAsUser is not specified, the user ID defaults to the value in the image metadata. If the image also does not specify a user ID, assign a user ID through .spec.securityContext.runAsUser to meet the .spec.securityContext.runAsNonRoot requirement.
The WebSphere Liberty operator sets the securityContext field to the RuntimeDefault seccomp profile. If our Kubernetes cluster uses custom security context constraints, seccompProfiles must be set to runtime/default.
To use custom security context constraints with our Kubernetes cluster, add the following section.
seccompProfiles:
  - runtime/default
If the application requires seccomp to be disabled, the seccompProfile must be set to unconfined in both the security context constraints and the WebSphereLibertyApplication CR. To disable seccomp in the security context constraints, add the following section.
seccompProfiles:
  - unconfined
To disable seccomp, add the following section to the WebSphereLibertyApplication CR.
spec:
  securityContext:
    seccompProfile:
      type: Unconfined
See Set the security context for a Container.
Persist resources (.spec.statefulSet and .spec.volumeMounts)
If storage is specified in the WebSphereLibertyApplication CR, the operator creates a StatefulSet and a PersistentVolumeClaim for each pod. If storage is not specified, the StatefulSet resource is created without persistent storage.
The following custom resource uses .spec.statefulSet.storage to provide basic storage. The operator creates a StatefulSet and a 1Gi PersistentVolumeClaim that mounts to the /data folder.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  statefulSet:
    storage:
      size: 1Gi
      mountPath: "/data"
A WebSphereLibertyApplication custom resource can provide more advanced storage. The operator enables users to provide an entire .spec.statefulSet.storage.volumeClaimTemplate for full control over the automatically created PersistentVolumeClaim. With the following custom resource, the operator creates a PersistentVolumeClaim named pvc with a size of 1Gi and the ReadWriteMany access mode. To persist to more than one folder, the custom resource uses the .spec.volumeMounts field instead of .spec.statefulSet.storage.mountPath.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  volumeMounts:
    - name: pvc
      mountPath: /data_1
      subPath: data_1
    - name: pvc
      mountPath: /data_2
      subPath: data_2
  statefulSet:
    storage:
      volumeClaimTemplate:
        metadata:
          name: pvc
        spec:
          accessModes:
            - "ReadWriteMany"
          storageClassName: 'glusterfs'
          resources:
            requests:
              storage: 1Gi
Limitation: After StatefulSet is created, the persistent storage and PersistentVolumeClaim cannot be added or changed.
The following custom resource does not specify storage, so the operator creates a StatefulSet resource without persistent storage. We can create StatefulSet resources without storage if we require only ordering and uniqueness of a set of pods.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  statefulSet: {}
Monitor resources (.spec.monitoring)
The WebSphere Liberty operator can create a ServiceMonitor resource to integrate with the Prometheus Operator, which is required to use ServiceMonitor. Limitation: The operator monitoring does not support integration with Knative Service.
At minimum, provide a label for Prometheus that is set on ServiceMonitor objects. In the following example, the .spec.monitoring label is app-prometheus.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  monitoring:
    labels:
      app-prometheus: ''
For more advanced monitoring, set ServiceMonitor parameters, such as an authentication secret, on the Prometheus endpoint.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  monitoring:
    labels:
      app-prometheus: ''
    endpoints:
      - interval: '30s'
        basicAuth:
          username:
            key: username
            name: metrics-secret
          password:
            key: password
            name: metrics-secret
        tlsConfig:
          insecureSkipVerify: true
Specify multiple service ports (.spec.service.port* and .spec.monitoring.endpoints)
To provide multiple service ports in addition to the primary service port, configure the primary service port with the .spec.service.port, .spec.service.targetPort, .spec.service.portName, and .spec.service.nodePort fields. The primary port is exposed from the container that runs the application and the port values are used to configure the Route (or Ingress), Service binding and Knative service.
To specify an alternative port for the ServiceMonitor, use the .spec.monitoring.endpoints field and specify either the port or targetPort field; otherwise, the primary port is used.
Specify the primary port with the .spec.service.port field and additional ports with the .spec.service.ports field as shown in the following example.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  service:
    type: NodePort
    port: 9080
    portName: http
    targetPort: 9080
    nodePort: 30008
    ports:
      - port: 9443
        name: https
  monitoring:
    endpoints:
      - basicAuth:
          password:
            key: password
            name: metrics-secret
          username:
            key: username
            name: metrics-secret
        interval: 5s
        port: https
        scheme: HTTPS
        tlsConfig:
          insecureSkipVerify: true
    labels:
      app-monitoring: 'true'
Configure probes (.spec.probes)
Probes are health checks on an application container to determine whether it is alive or ready to receive traffic. The WebSphere Liberty operator has startup, liveness, and readiness probes.
Probes are not enabled in applications by default. To enable a probe with the default values, set the probe parameter to {}. The following example enables all three probes with their default values.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
spec:
  probes:
    startup: {}
    liveness: {}
    readiness: {}
The following code snippet shows the default values for the startup probe.
httpGet:
  path: /health/started
  port: 9443
  scheme: HTTPS
timeoutSeconds: 2
periodSeconds: 10
failureThreshold: 20
The following code snippet shows the default values for the liveness probe.
httpGet:
  path: /health/live
  port: 9443
  scheme: HTTPS
initialDelaySeconds: 60
timeoutSeconds: 2
periodSeconds: 10
failureThreshold: 3
The following code snippet shows the default values for the readiness probe.
httpGet:
  path: /health/ready
  port: 9443
  scheme: HTTPS
initialDelaySeconds: 10
timeoutSeconds: 2
periodSeconds: 10
failureThreshold: 10
To override a default value, specify a different value. The following example overrides a liveness probe initial delay default of 60 seconds and sets the initial delay to 90 seconds.
spec:
  probes:
    liveness:
      initialDelaySeconds: 90
When a probe initialDelaySeconds parameter is set to 0, the default value is used. To set a probe initial delay to 0, define the probe instead of using the default probe. The following example overrides the default value and sets the initial delay to 0.
spec:
  probes:
    liveness:
      httpGet:
        path: "/health/live"
        port: 9443
      initialDelaySeconds: 0
Deploy serverless applications with Knative (.spec.createKnativeService)
If Knative is installed on the Kubernetes cluster, the operator can deploy serverless applications by creating a Knative Service resource, which manages the entire life cycle of the workload. To create a Knative service, set .spec.createKnativeService to true.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  createKnativeService: true
The operator creates a Knative service in the cluster and populates the resource with applicable WebSphereLibertyApplication fields. It also ensures that non-Knative resources, such as the Kubernetes Service, Route, and Deployment, are deleted.
The CRD fields that can populate the Knative service resource include .spec.applicationImage, .spec.serviceAccount.name, .spec.probes.liveness, .spec.probes.readiness, .spec.service.port, .spec.volumes, .spec.volumeMounts, .spec.env, .spec.envFrom, .spec.pullSecret, and .spec.pullPolicy. Because Knative does not fully support the startup probe, .spec.probes.startup does not apply when the Knative service is enabled.
For details on how to configure Knative for tasks such as enabling HTTPS connections and setting up a custom domain, see the Knative documentation.
Autoscaling fields in WebSphereLibertyApplication are not used to configure Knative Pod Autoscaler (KPA). To learn how to configure KPA, see Configure the Autoscaler.
Expose applications externally (.spec.expose, .spec.createKnativeService, .spec.route)
Expose an application externally with a Route, Knative Route, or Ingress resource.
To expose an application externally with a route in a non-Knative deployment, set .spec.expose to true.
The operator creates a secured route based on the application service when .spec.manageTLS is enabled. To use custom certificates, see information about .spec.service.certificateSecretRef and .spec.route.certificateSecretRef.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  expose: true
To expose an application externally with Ingress in a non-Knative deployment, complete the following steps.
- To use the Ingress resource to expose your cluster, install an Ingress controller such as Nginx or Traefik.
- Ensure that a Route resource is not on the cluster. The Ingress resource is created only if the Route resource is not available on the cluster.
- To use the Ingress resource, set the defaultHostName variable in the Operator ConfigMap object to a hostname such as mycompany.com.
- Enable TLS. Generate a certificate and specify the secret that contains the certificate with the .spec.route.certificateSecretRef field.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
  namespace: backend
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  expose: true
  route:
    certificateSecretRef: mycompany-tls
- Specify .spec.route.annotations to configure the Ingress resource. Annotations are specific to the Ingress controller implementation, such as Nginx, HAProxy, or Traefik.
The following example specifies annotations, an existing TLS secret, and a custom hostname.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
  namespace: backend
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  expose: true
  route:
    annotations:
      # We can use this annotation to specify the name of the ingress controller to use.
      # We can install multiple ingress controllers to address different types of
      # incoming traffic such as an external or internal DNS.
      kubernetes.io/ingress.class: "nginx"
      # The following nginx annotations enable a secure pod connection:
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
      # The following traefik annotation enables a secure pod connection:
      traefik.ingress.kubernetes.io/service.serversscheme: https
    # Use a custom hostname for the Ingress:
    host: app-v1.mycompany.com
    # Reference a pre-existing TLS secret:
    certificateSecretRef: mycompany-tls
To expose an application as a Knative service, set .spec.createKnativeService and .spec.expose to true. The operator creates an unsecured Knative route. To configure secure HTTPS connections for your Knative deployment, see Configure HTTPS with TLS certificates.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  createKnativeService: true
  expose: true
Allow or limit incoming traffic (.spec.networkPolicy)
By default, network policies for an application isolate incoming traffic.
- The default network policy created for applications that are not exposed limits incoming traffic to pods in the same namespace that are part of the same application. Traffic is limited to only the ports configured by the service: by default, traffic is allowed on .spec.service.targetPort when it is specified, and otherwise on .spec.service.port. By the same logic, traffic is allowed on each additional targetPort or port provided in the .spec.service.ports[] array.
- Red Hat OpenShift supports network policies by default. For exposed applications on Red Hat OpenShift, the network policy allows incoming traffic from the Red Hat OpenShift ingress controller on the ports in the service configuration. The network policy also allows incoming traffic from the Red Hat OpenShift monitoring stack.
- For exposed applications on other Kubernetes platforms, the network policy allows incoming traffic from any pods in any namespace on the ports in the service configuration. For deployments to other Kubernetes platforms, ensure that your network plug-in supports the Kubernetes network policies.
To disable the creation of network policies for an application, set .spec.networkPolicy.disable to true.
spec:
  networkPolicy:
    disable: true
We can change the network policy to allow incoming traffic from specific namespaces or pods. By default, .spec.networkPolicy.namespaceLabels is set to the same namespace to which the application is deployed, and .spec.networkPolicy.fromLabels is set to pods that belong to the same application specified by .spec.applicationName. The following example allows incoming traffic from pods that are labeled with the frontend role and are in the same namespace.
spec:
  networkPolicy:
    fromLabels:
      role: frontend
The following example allows incoming traffic from pods that belong to the same application in the example namespace.
spec:
  networkPolicy:
    namespaceLabels:
      kubernetes.io/metadata.name: example
The following example allows incoming traffic from pods that are labeled with the frontend role in the example namespace.
spec:
  networkPolicy:
    namespaceLabels:
      kubernetes.io/metadata.name: example
    fromLabels:
      role: frontend
Bind applications with operator-managed backing services (.status.binding.name and .spec.service.bindable)
The Service Binding Operator enables application developers to bind applications together with operator-managed backing services. If the Service Binding Operator is installed on your cluster, we can bind applications by creating a ServiceBindingRequest custom resource.
We can configure a WebSphere Liberty application to behave as a Provisioned Service as defined by the Service Binding Specification. According to the specification, a Provisioned Service resource must define a .status.binding.name that refers to a Secret. To expose the application as a Provisioned Service, set the .spec.service.bindable field to true. The operator creates a binding secret that is named CR_NAME-expose-binding and adds the host, port, protocol, basePath, and uri entries to the secret.
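For example, a minimal sketch (the image name and port are illustrative) that exposes the application as a Provisioned Service:

```yaml
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  service:
    port: 9443
    bindable: true   # the operator creates a binding secret named CR_NAME-expose-binding
```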
To override the default values for the entries in the binding secret or to add new entries to the secret, create an override secret that is named CR_NAME-expose-binding-override and add any entries to the secret. The operator reads the content of the override secret and overrides the default values in the binding secret.
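As a sketch, assuming a CR named my-app, an override secret could look like the following; the host and basePath values are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-app-expose-binding-override   # follows the CR_NAME-expose-binding-override naming
type: Opaque
stringData:
  host: app.mycompany.com   # overrides the default host entry in the binding secret
  basePath: /api            # overrides the default basePath entry
```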
After a WebSphere Liberty application is exposed as a Provisioned Service, a service binding request can refer to the application as a backing service.
The instructions that follow show how to bind WebSphere Liberty applications as services or producers to other workloads (such as pods or deployments). Two WebSphere Liberty applications deployed through the WebSphere Liberty operator cannot be bound.
See Known issues and limitations.
- Set up the Service Binding operator to access WebSphere Liberty applications. By default, the Service Binding operator does not have permission to interact with WebSphere Liberty applications deployed through the WebSphere Liberty operator. We must create two RoleBindings to give the Service Binding operator view and edit access for WebSphere Liberty applications.
- In the Red Hat OpenShift dashboard, navigate to User Management > RoleBindings.
- Select Create binding.
- Set the Binding type to Cluster-wide role binding (ClusterRoleBinding).
- Enter a name for the binding. Choose a name that is related to service bindings and view access for WebSphere applications.
- For the role name, enter webspherelibertyapplications.liberty.websphere.ibm.com-v1-view.
- Set the Subject to ServiceAccount.
- A Subject namespace menu appears. Select openshift-operators.
- In the Subject name field, enter service-binding-operator.
- Click Create.
- Set Binding type to Cluster-wide role binding (ClusterRoleBinding).
- Enter a name for the binding. Choose a name that is related to service bindings and edit access for WebSphere applications.
- In the Role name field, enter webspherelibertyapplications.liberty.websphere.ibm.com-v1-edit.
- Set Subject to ServiceAccount.
- In the Subject namespace list, select openshift-operators.
- In the Subject name field, type service-binding-operator.
- Click Create.
Service bindings from WebSphere Liberty applications (or "services") to pods or deployments (or "workloads") now succeed. After a binding is made, the bound workload restarts or scales to mount the binding secret to /bindings in all containers.
- Set up a service binding using the Red Hat method.
See the Red Hat documentation or the Red Hat tutorial.
- On the Red Hat OpenShift web dashboard, use the perspective switcher in the sidebar to change from Administrator to Developer.
- In the Topology view for the current namespace, hover over the border of the WebSphere application to be bound as a service, and drag an arrow to the Pod or Deployment workload. A tooltip appears entitled Create Service Binding.
- The Create Service Binding window opens. Change the name to a value that is fewer than 63 characters. The Service Binding operator might fail to mount the secret as a volume if the name exceeds 63 characters.
- Click Create.
- A sidebar opens. To see the status of the binding, click the name of the secret and then scroll until the status appears.
- Check the pod/deployment workload and verify that a volume is mounted. We can also open a terminal session into a container and run ls /bindings.
- Set up a service binding using the Spec API Tech Preview / Community method. This method is newer than the Red Hat method but achieves the same results. We must add a label to our WebSphere Liberty application, such as app=frontend, if it does not have any unique labels. Set the binding to use a label selector so that the Service Binding operator looks for a WebSphere Liberty application with a specific label.
- Install the Service Binding operator using the Red Hat OpenShift Operator Catalog.
- Select Operators > Installed Operators and set the namespace to the same one used by both our WebSphere application and pod/deployment workload.
- Open the Service Binding (Spec API Tech Preview) page.
- Click Create ServiceBinding.
- Choose a short name for the binding. Names that exceed 63 characters might cause the binding secret volume mount to fail.
- Expand the Service section.
- In the Api Version field, enter liberty.websphere.ibm.com/v1.
- In the Kind field, enter WebSphereLibertyApplication.
- In the Name field, enter the name of the application. We can get this name from the list of applications on the WebSphere Liberty operator page.
- Expand the Workload section.
- Set the Api Version field to the value of apiVersion in the target workload YAML. For example, if the workload is a deployment, the value is apps/v1.
- Set the Kind field to the value of kind in the target workload YAML. For example, if the workload is a deployment, the value is Deployment.
- Expand the Selector subsection, and then expand the Match Expressions subsection.
- Click Add Match Expression.
- In the Key field, enter the label key that you set earlier. For example, for the label app=frontend, the key is app.
- In the Operator field, enter Exists.
- Expand the Values subsection and click Add Value.
- In the Value field, enter the label value that you set earlier. For example, if using the label app=frontend, the value is frontend.
- Click Create.
- Check the Pod/Deployment workload and verify that a volume is mounted, either by scrolling down or by opening a terminal session into a container and running ls /bindings.
Limit a pod to run on specified nodes (.spec.affinity)
Use .spec.affinity to constrain a Pod to run only on specified nodes.
To set required labels for pod scheduling on specific nodes, use the .spec.affinity.nodeAffinityLabels field.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
  namespace: test
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  affinity:
    nodeAffinityLabels:
      customNodeLabel: label1, label2
      customNodeLabel2: label3
The following example requires a large node type and preferences for two zones, which are named zoneA and zoneB.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
  namespace: test
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node.kubernetes.io/instance-type
            operator: In
            values:
            - large
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 60
        preference:
          matchExpressions:
          - key: failure-domain.beta.kubernetes.io/zone
            operator: In
            values:
            - zoneA
      - weight: 20
        preference:
          matchExpressions:
          - key: failure-domain.beta.kubernetes.io/zone
            operator: In
            values:
            - zoneB
Use pod affinity and anti-affinity to constrain which nodes your pod is eligible to be scheduled on, based on the labels of pods that are already running on the node rather than the labels of the node itself.
The following example shows that pod affinity is required and that the pods for Service-A and Service-B must be in the same zone. Through pod anti-affinity, it is preferable not to schedule Service-B and Service-C on the same host.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: Service-B
  namespace: test
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: service
            operator: In
            values:
            - Service-A
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: service
              operator: In
              values:
              - Service-C
          topologyKey: kubernetes.io/hostname
Constrain how pods are spread between nodes and zones (.spec.topologySpreadConstraints)
Use the .spec.topologySpreadConstraints YAML object to specify constraints on how pods of the application instance (and if enabled, the Semeru Cloud Compiler instance) are spread between nodes and zones of the cluster.
Using the .spec.topologySpreadConstraints.constraints field, we can specify a list of Pod topology spread constraints to be added, as in the following example:
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
  namespace: test
spec:
  topologySpreadConstraints:
    constraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          app.kubernetes.io/instance: my-app
By default, the operator adds the following Pod topology spread constraints to the application instance's pods (and if applicable, the Semeru Cloud Compiler instance's pods). The default behavior is to constrain the spread of pods that are owned by the same application instance (or Semeru Cloud Compiler generation instance), denoted by <instance name>, with a maxSkew of 1.
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      app.kubernetes.io/instance: <instance name>
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      app.kubernetes.io/instance: <instance name>
To remove the operator's default topology spread constraints, set the .spec.topologySpreadConstraints.disableOperatorDefaults flag to true.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
  namespace: test
spec:
  topologySpreadConstraints:
    disableOperatorDefaults: true
Alternatively, override each constraint manually by creating a new TopologySpreadConstraint under .spec.topologySpreadConstraints.constraints for each topologyKey we want to modify.
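For example, the following sketch overrides the operator's default hostname constraint so that the scheduler enforces it strictly instead of treating it as a preference. The instance name my-app is an assumed placeholder.

```yaml
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
  namespace: test
spec:
  topologySpreadConstraints:
    constraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule   # stricter than the default ScheduleAnyway
      labelSelector:
        matchLabels:
          app.kubernetes.io/instance: my-app
```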
Note: When the disableOperatorDefaults flag is set to true and cluster-level default constraints are not enabled, the Kubernetes scheduler uses its own internal default Pod topology spread constraints, as outlined in https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints.
Configure DNS (.spec.dns.policy and .spec.dns.config)
DNS can be configured in the WebSphereLibertyApplication custom resource by using the .spec.dns.policy and .spec.dns.config fields. The .spec.dns.policy field sets the DNS policy for the application pod and defaults to the ClusterFirst policy. The .spec.dns.config field sets the DNS configuration for the application pod. Kubernetes supports the following pod-specific DNS policies, which can be specified by using the .spec.dns.policy field:
- Default: The pod inherits the name resolution configuration from the node that the pods run on.
- ClusterFirst: Any DNS query that does not match the configured cluster domain suffix, such as www.kubernetes.io, is forwarded to an upstream name server by the DNS server. Cluster administrators can have extra stub-domain and upstream DNS servers configured.
- ClusterFirstWithHostNet: Set the DNS policy to ClusterFirstWithHostNet if the pod runs with hostNetwork. Pods that run with hostNetwork and the ClusterFirst policy behave like the Default policy.
Note: ClusterFirstWithHostNet is not supported on Windows.
- None: A pod can ignore DNS settings from the Kubernetes environment. All DNS settings are provided using the .spec.dns.config field of WebSphereLibertyApplication CR.
Note: Default is not the default DNS policy. If .spec.dns.policy is not explicitly specified, then ClusterFirst is used.
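For example, to inherit name resolution from the node instead of the cluster DNS, the policy can be set explicitly in the CR; this minimal fragment overrides the implicit ClusterFirst default:

```yaml
spec:
  dns:
    policy: "Default"   # inherit the node's name resolution configuration
```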
DNS Config allows users more control over the DNS settings for an application Pod.
The .spec.dns.config field is optional and it can work with any .spec.dns.policy settings. However, when a .spec.dns.policy is set to None, the .spec.dns.config field must be specified. The following properties are specified within the .spec.dns.config field:
- .spec.dns.config.nameservers: a list of IP addresses that are used as DNS servers for the Pod. Up to three IP addresses can be specified. When .spec.dns.policy is set to None, the list must contain at least one IP address; otherwise, this property is optional. The listed servers are combined with the base name servers that are generated from the specified DNS policy, with duplicate addresses removed.
- .spec.dns.config.searches: a list of DNS search domains for hostname lookup in the Pod. This property is optional. When specified, the provided list is merged into the base search domain names that are generated from the chosen DNS policy. Duplicate domain names are removed. Kubernetes allows up to 32 search domains.
- .spec.dns.config.options: an optional list of objects where each object must have a name property and can have a value property. The contents of this property are merged with the options that are generated from the specified DNS policy, with duplicate entries removed.
spec:
  dns:
    policy: "None"
    config:
      nameservers:
      - 192.0.2.1 # this is an example
      searches:
      - ns1.svc.cluster-domain.example
      - my.dns.search.suffix
      options:
      - name: ndots
        value: "2"
      - name: edns0
For more information on DNS, see the Kubernetes DNS documentation.
Configure tolerations (.spec.tolerations)
Node affinity is a property that attracts pods to a set of nodes either as a preference or a hard requirement. However, taints allow a node to repel a set of pods.
Tolerations are applied to pods and allow the scheduler to schedule pods onto nodes with matching taints. The scheduler also evaluates other parameters as part of its function.
Taints and tolerations work together to help ensure that application pods are not scheduled onto inappropriate nodes. If one or more taints are applied to a node, the node cannot accept any pods that do not tolerate the taints.
Tolerations can be configured in WebSphereLibertyApplication CR using the .spec.tolerations field.
spec:
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
For more information on taints and tolerations, see the Kubernetes taints and tolerations documentation.