Kubernetes support
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
It provides features such as:
- Self-healing
- Horizontal scaling
- Service discovery and load balancing
- Secret and configuration management
Further information on Kubernetes can be found on the official Kubernetes Web site: https://kubernetes.io/.
Repository
The Security Verify Access image is available from the Docker Hub repository: 'ibmcom/verify-access'.
Secrets
Sensitive information, such as passwords, should never be stored directly in the YAML deployment descriptors. Instead, it should be stored in a Kubernetes secret, and the secret should then be referenced from the YAML deployment descriptors. Instructions on how to use Kubernetes secrets can be found in the official Kubernetes documentation: https://kubernetes.io/docs/concepts/configuration/secret/. In the examples provided within this chapter, a secret is used to store the Verify Access administration password. An example command to create the secret is provided below (ensure the kubectl context is set to the correct environment before running this command):
kubectl create secret generic isva-passwords --type=string --from-literal=cfgsvc=Passw0rd
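If required, the presence and value of the secret can be checked after creation. A minimal verification, assuming the secret was created in the current namespace:
# Confirm the secret exists.
kubectl get secret isva-passwords
# Decode the stored password (Kubernetes stores secret values base64-encoded).
kubectl get secret isva-passwords -o jsonpath='{.data.cfgsvc}' | base64 --decode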
Service Accounts
Service accounts can be used to provide an identity for processes that run in a Pod. Information on the usage of service accounts can be found in the official Kubernetes documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/. In the examples that are provided within this chapter, the deployment descriptors use the ‘isva’ service account. The kubectl utility can be used to create the ‘isva’ service account (ensure the kubectl context is set to the correct environment before running this command):
kubectl create serviceaccount isva
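To confirm that the service account was created in the current namespace:
kubectl get serviceaccount isva -o yaml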
Readiness and Liveness Probes
Kubernetes uses liveness probes to help determine whether a container has become unresponsive. If a container does become unresponsive, Kubernetes automatically attempts to restart the container to rectify the problem.
Kubernetes uses readiness probes to determine whether a container is ready to serve traffic. A pod with containers reporting they are not ready will not receive traffic through Kubernetes Services.
The Verify Access image provides a shell script, '/sbin/health_check.sh', which can be used to respond to liveness and readiness probes. If the 'livenessProbe' command-line option is passed to the script, it reports on the liveness of the container; otherwise it reports on the readiness of the container. For a liveness probe the container first checks whether it is still in the process of starting and, if so, returns a healthy result. Once the container has fully started, both the liveness and readiness probes return the network connectivity state of the service that is hosted by the container.
For information on liveness and readiness probes, refer to the official Kubernetes documentation.
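The behaviour of the health check script can also be verified manually once a container is running, for example against the Web reverse proxy pod that is created later in this chapter. A sketch, assuming the script follows the standard exec-probe convention of returning a non-zero exit status when unhealthy:
# Run the liveness check inside the pod and display its exit status.
kubectl exec `kubectl get -o json pods -l app=isva-wrp | jq -r .items[0].metadata.name` -- /sbin/health_check.sh livenessProbe
echo $?
# Run the readiness check (no argument) in the same way.
kubectl exec `kubectl get -o json pods -l app=isva-wrp | jq -r .items[0].metadata.name` -- /sbin/health_check.sh
echo $?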
Deployment
The following sections illustrate how to deploy the Verify Access containers into a Kubernetes environment.
Configuration Container
Instructions on how to create the Verify Access configuration container are provided in the following steps:
- Ensure the kubectl context is set to the correct environment. The mechanism to do this differs based on the Kubernetes environment in use.
- Create a configuration file that is named config-container.yaml. This configuration file defines a configuration container that can be used to configure the environment.
#
# The deployment description of the Verify Access configuration container.  This
# container is used to manage the configuration of the Verify Access
# environment.
#
apiVersion: apps/v1
kind: Deployment
metadata:
  name: isva-config
  labels:
    app: isva-config
spec:
  selector:
    matchLabels:
      app: isva-config
  template:
    metadata:
      labels:
        app: isva-config
    spec:
      # The name of the service account which has the required
      # capabilities enabled for the isva container.
      serviceAccountName: isva
      # We want to run the container as the isam (uid: 6000) user.
      securityContext:
        runAsNonRoot: true
        runAsUser: 6000
      # We use a volume to store the configuration snapshot for the
      # environment.
      volumes:
        - name: isva-config
          emptyDir: {}
      containers:
        - name: isva-config
          # The fully qualified name of the verify-access image.
          image: ibmcom/verify-access:10.0.0.0
          # The port on which the container will be listening.
          ports:
            - containerPort: 9443
          # Environment definition.  The administrator password is
          # contained within a Kubernetes secret.
          env:
            - name: SERVICE
              value: config
            - name: ADMIN_PWD
              valueFrom:
                secretKeyRef:
                  name: isva-passwords
                  key: cfgsvc
          # The liveness and readiness probes are used by Kubernetes
          # to obtain the health of the container.  Our health is
          # governed by the ability to connect to the LMI.
          readinessProbe:
            tcpSocket:
              port: 9443
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            tcpSocket:
              port: 9443
            initialDelaySeconds: 120
            periodSeconds: 20
          # The '/var/shared' directory contains the configuration
          # snapshots and should be persistent.  We use a volume for
          # this directory.
          volumeMounts:
            - mountPath: /var/shared
              name: isva-config

---

#
# The service description of the Verify Access configuration service.  The
# service is only accessible from within the Kubernetes cluster.
#
apiVersion: v1
kind: Service
metadata:
  name: isva-config
spec:
  ports:
    - port: 9443
      name: isva-config
  selector:
    app: isva-config
  type: ClusterIP
- Create the container:
kubectl create -f config-container.yaml
- We can monitor the bootstrapping of the container using the 'logs' command:
kubectl logs -f `kubectl get -o json pods -l app=isva-config | jq -r .items[0].metadata.name`
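To wait for the configuration container to report ready before continuing, standard kubectl commands can be used, for example:
# Wait for the deployment rollout to complete and for the pod to become ready.
kubectl rollout status deployment/isva-config
kubectl wait --for=condition=ready pod -l app=isva-config --timeout=300s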
- Start Kubernetes port forwarding so that the Web management console of the configuration container can be accessed locally. An alternative approach is to create a Kubernetes service that directly exposes the LMI port of the configuration container.
kubectl port-forward `kubectl get -o json pods -l app=isva-config | jq -r .items[0].metadata.name` 9443
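While the port forwarding is active, connectivity to the LMI can be checked from a second terminal before opening a browser. A minimal check, assuming the LMI is still using its default self-signed certificate (hence the -k option):
# Print the HTTP status code returned by the LMI.
curl -k -s -o /dev/null -w '%{http_code}\n' https://127.0.0.1:9443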
- Access the proxied Web administration console (https://127.0.0.1:9443), authenticating as the 'admin' user with a password of 'Passw0rd' (as defined in the isva-passwords secret). Proceed through the first-steps and then configure the environment.
- Using the Web administration console, publish the configuration of the environment.
Web Reverse Proxy Container
The following steps illustrate how to create a WebSEAL container for the 'default' WebSEAL instance:
- Ensure the kubectl context is set to the correct environment. The mechanism to do this differs, based on the Kubernetes environment being used.
- Create a configuration file that is named wrp-container.yaml. This configuration file defines a WebSEAL container that can be used to secure access to our Web applications:
#
# The deployment description of the Verify Access web reverse proxy container.
#
apiVersion: apps/v1
kind: Deployment
metadata:
  name: isva-wrp
  labels:
    app: isva-wrp
spec:
  selector:
    matchLabels:
      app: isva-wrp
  replicas: 1
  template:
    metadata:
      labels:
        app: isva-wrp
    spec:
      # The name of the service account which has the required
      # capabilities enabled for the verify-access container.
      serviceAccountName: isva
      # We want to run the container as the isam (uid: 6000) user.
      securityContext:
        runAsNonRoot: true
        runAsUser: 6000
      containers:
        - name: isva-wrp
          # The fully qualified name of the verify-access image.
          image: ibmcom/verify-access:10.0.0.0
          # The port on which the container will be listening.
          ports:
            - containerPort: 443
          # Environment definition for the 'default' Web reverse proxy
          # instance.  The administrator password is contained
          # within a Kubernetes secret.
          env:
            - name: SERVICE
              value: webseal
            - name: INSTANCE
              value: default
            - name: CONFIG_SERVICE_URL
              value: https://isva-config:9443/shared_volume
            - name: CONFIG_SERVICE_USER_NAME
              value: admin
            - name: CONFIG_SERVICE_USER_PWD
              valueFrom:
                secretKeyRef:
                  name: isva-passwords
                  key: cfgsvc
          # The liveness and readiness probes are used by Kubernetes to
          # obtain the health of the container.  Our health is
          # governed by the health_check.sh script which is provided
          # by the container.
          livenessProbe:
            exec:
              command:
                - /sbin/health_check.sh
                - livenessProbe
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            exec:
              command:
                - /sbin/health_check.sh
            initialDelaySeconds: 10
            periodSeconds: 10
- Create the container:
kubectl create -f wrp-container.yaml
- The 'isva_cli' command can be used to directly administer a WebSEAL container:
kubectl exec -it `kubectl get -o json pods -l app=isva-wrp | jq -r .items[0].metadata.name` -- isva_cli
- We can monitor the bootstrapping of the container using the 'logs' command:
kubectl logs -f `kubectl get -o json pods -l app=isva-wrp | jq -r .items[0].metadata.name`
- Create a configuration file that is named wrp-service.yaml. This configuration file defines a WebSEAL service that can be used to access WebSEAL. The type of service that is defined differs based on whether the 'load balancer' service type is supported in the environment.
The following definition can be used if the 'load balancer' service type is not supported in the environment:
#
# The service description of the Verify Access web reverse proxy service.  This is
# the entry point into the environment and can be accessed over port
# 30443 from outside of the Kubernetes cluster.
#
apiVersion: v1
kind: Service
metadata:
  name: isva-wrp
spec:
  ports:
    - port: 443
      name: isva-wrp
      protocol: TCP
      nodePort: 30443
  selector:
    app: isva-wrp
  type: NodePort
The following definition can be used if the 'load balancer' service type is supported in your environment:
# LoadBalancer service definition....
apiVersion: v1
kind: Service
metadata:
  name: isva-wrp
spec:
  type: LoadBalancer
  ports:
    - port: 443
  selector:
    app: isva-wrp
- Create the service:
kubectl create -f wrp-service.yaml
- If a 'LoadBalancer' service was defined, determine the external IP address of the service and then use your browser to access WebSEAL (port 443):
kubectl get service isva-wrp --watch
- If a 'NodePort' service was defined, determine the IP address of the Kubernetes cluster and then use your browser to access WebSEAL (port 30443). In a 'minikube' environment the IP address of the cluster can be obtained with the following command:
minikube ip
In an IBM cloud environment, the IP address of the cluster can be obtained with the following command:
bluemix cs workers mycluster --json | jq -r .[0].publicIP
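Once the IP address is known, access can also be verified from the command line before using a browser. A sketch for a minikube environment with the NodePort service, assuming WebSEAL presents a self-signed certificate (hence the -k option):
# Request the WebSEAL landing page through the NodePort.
curl -k https://$(minikube ip):30443/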
Runtime Container
The Verify Access runtime container (called isva-runtime, or the Verify Access Liberty runtime) is similar to the Web reverse proxy container. It is a personality of the verify-access image that runs the advanced authentication, context-based access, and federation services. The isva-runtime container retrieves a snapshot from the configuration container in the same manner as the Web reverse proxy container. Apart from its function, the main difference is that this container does not need to listen externally on a NodePort; instead, it exposes its HTTPS interface only on the cluster network, through the isva-runtime service.
The following steps illustrate how to create a runtime container:
- Ensure the kubectl context is set to the correct environment. The mechanism to do this differs, based on the Kubernetes environment being used.
- Create a configuration file that is named runtime-container.yaml. This configuration file defines a runtime container that can be used to secure access to our Web applications:
#
# The deployment description of the Verify Access runtime profile container.
# This container provides the Federation and Advanced Access Control
# capabilities of Verify Access.
#
apiVersion: apps/v1
kind: Deployment
metadata:
  name: isva-runtime
  labels:
    app: isva-runtime
spec:
  selector:
    matchLabels:
      app: isva-runtime
  replicas: 1
  template:
    metadata:
      labels:
        app: isva-runtime
    spec:
      # The name of the service account which has the required
      # capabilities enabled for the isva container.
      serviceAccountName: isva
      # We want to run the container as the isam (uid: 6000) user.
      securityContext:
        runAsNonRoot: true
        runAsUser: 6000
      containers:
        - name: isva-runtime
          # The fully qualified name of the verify-access image.
          image: ibmcom/verify-access:10.0.0.0
          # The port on which the container will be listening.
          ports:
            - containerPort: 443
          # Environment definition.  The administrator password is
          # contained within a Kubernetes secret.
          env:
            - name: SERVICE
              value: runtime
            - name: CONFIG_SERVICE_URL
              value: https://isva-config:9443/shared_volume
            - name: CONFIG_SERVICE_USER_NAME
              value: admin
            - name: CONFIG_SERVICE_USER_PWD
              valueFrom:
                secretKeyRef:
                  name: isva-passwords
                  key: cfgsvc
          # The liveness and readiness probes are used by Kubernetes to
          # obtain the health of the container.  Our health is
          # governed by the health_check.sh script which is provided
          # by the container.
          livenessProbe:
            exec:
              command:
                - /sbin/health_check.sh
                - livenessProbe
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            exec:
              command:
                - /sbin/health_check.sh
            initialDelaySeconds: 10
            periodSeconds: 10

---

#
# The service description of the isva runtime profile service.  The
# service is only accessible from within the Kubernetes cluster.
#
apiVersion: v1
kind: Service
metadata:
  name: isva-runtime
spec:
  ports:
    - port: 443
      name: isva-runtime
  selector:
    app: isva-runtime
  type: ClusterIP
- Create the container:
kubectl create -f runtime-container.yaml
- The 'isva_cli' command can be used to directly administer a runtime container:
kubectl exec -it `kubectl get -o json pods -l app=isva-runtime | jq -r .items[0].metadata.name` -- isva_cli
- We can monitor the bootstrapping of the container using the 'logs' command:
kubectl logs -f `kubectl get -o json pods -l app=isva-runtime | jq -r .items[0].metadata.name`
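Because the runtime service is only exposed on the cluster network, its reachability can be checked from a temporary pod inside the cluster. A minimal sketch, assuming the public curlimages/curl image can be pulled by the cluster:
# Run a throw-away pod that sends a request to the runtime service and is removed on exit.
kubectl run -it --rm runtime-test --image=curlimages/curl --restart=Never -- curl -k https://isva-runtime/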
Distributed Session Cache Container
The Verify Access distributed session cache container (called isva-dsc) is similar to the Web reverse proxy container. It is a personality of the verify-access image that runs the distributed session cache, which can be used by the Web reverse proxy and runtime containers to share sessions between multiple containers. The isva-dsc container retrieves a snapshot from the configuration container in the same manner as the Web reverse proxy container. Apart from its function, the main difference is that this container does not need to listen externally on a NodePort; instead, it exposes its HTTPS and replication interfaces only on the cluster network, through the isva-dsc service.
The following steps illustrate how to create a DSC container:
- Ensure the kubectl context is set to the correct environment. The mechanism to do this differs, based on the Kubernetes environment being used.
- Create a configuration file that is named dsc-container.yaml. This configuration file defines a DSC container that can be used to share sessions:
#
# The deployment description of the Verify Access distributed session cache
# container.
#
apiVersion: apps/v1
kind: Deployment
metadata:
  name: isva-dsc
  labels:
    app: isva-dsc
spec:
  selector:
    matchLabels:
      app: isva-dsc
  template:
    metadata:
      labels:
        app: isva-dsc
    spec:
      # The name of the service account which has the required
      # capabilities enabled for the isva container.
      serviceAccountName: isva
      # We want to run the container as the isam (uid: 6000) user.
      securityContext:
        runAsNonRoot: true
        runAsUser: 6000
      containers:
        - name: isva-dsc
          # The fully qualified name of the verify-access image.
          image: ibmcom/verify-access:10.0.0.0
          # The ports on which the container will be listening.  Port
          # 443 provides the main DSC service, and port 444 provides
          # the replication service used when replicating
          # session data between DSC instances.
          ports:
            - containerPort: 443
            - containerPort: 444
          # Environment definition.  The administrator password is
          # contained within a Kubernetes secret.
          env:
            - name: SERVICE
              value: dsc
            - name: INSTANCE
              value: '1'
            - name: CONFIG_SERVICE_URL
              value: https://isva-config:9443/shared_volume
            - name: CONFIG_SERVICE_USER_NAME
              value: admin
            - name: CONFIG_SERVICE_USER_PWD
              valueFrom:
                secretKeyRef:
                  name: isva-passwords
                  key: cfgsvc
          # The liveness and readiness probes are used by Kubernetes to
          # obtain the health of the container.  Our health is
          # governed by the health_check.sh script which is provided
          # by the container.
          livenessProbe:
            exec:
              command:
                - /sbin/health_check.sh
                - livenessProbe
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            exec:
              command:
                - /sbin/health_check.sh
            initialDelaySeconds: 10
            periodSeconds: 10

---

#
# The service description of the verify-access distributed session cache
# service.  The service is only accessible from within the Kubernetes
# cluster.
#
apiVersion: v1
kind: Service
metadata:
  name: isva-dsc
spec:
  ports:
    - port: 443
      name: isva-dsc
    - port: 444
      name: isva-dsc-replica
  selector:
    app: isva-dsc
  type: ClusterIP
- Create the container:
kubectl create -f dsc-container.yaml
- The 'isva_cli' command can be used to directly administer a DSC container:
kubectl exec -it `kubectl get -o json pods -l app=isva-dsc | jq -r .items[0].metadata.name` -- isva_cli
- We can monitor the bootstrapping of the container using the 'logs' command:
kubectl logs -f `kubectl get -o json pods -l app=isva-dsc | jq -r .items[0].metadata.name`
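With the distributed session cache available, the Web reverse proxy deployment can be scaled out so that sessions are shared between replicas; this sketch assumes the published configuration snapshot points the reverse proxy instances at the isva-dsc service:
# Confirm that both DSC ports are exposed by the service.
kubectl get endpoints isva-dsc
# Scale the Web reverse proxy so that multiple replicas share sessions through the DSC.
kubectl scale deployment isva-wrp --replicas=2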
Kubernetes Environments
The following Kubernetes environments have been validated with the Verify Access image:
- Minikube
Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day. Further information can be obtained from the Minikube Web site: https://kubernetes.io/docs/getting-started-guides/minikube/
To set the context for the kubectl utility use the following command:
kubectl config use-context minikube
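In this and the other environments below, the active context and cluster connectivity can be confirmed before deploying anything:
# Show the context kubectl is currently using and verify that the cluster is reachable.
kubectl config current-context
kubectl get nodes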
- IBM Cloud
The IBM Cloud Container Service provides advanced capabilities for building cloud-native apps, adding DevOps to existing apps, and relieving the pain around security, scale, and infrastructure management. Further information can be obtained from the IBM Cloud Web site: https://www.ibm.com/cloud/container-service
To set the context for the kubectl utility use the IBM Cloud CLI to obtain the kubectl configuration file:
bx cs cluster-config <cluster-name>
- Microsoft Azure Container Service (AKS)
Azure Container Service (AKS) manages your hosted Kubernetes environment, making it quick and easy to deploy and manage containerized applications without container orchestration expertise. It also eliminates the burden of ongoing operations and maintenance by provisioning, upgrading, and scaling resources on demand, without taking your applications offline. Further information can be obtained from the Microsoft Azure AKS Web site: https://docs.microsoft.com/en-us/azure/aks/
To set the context for the kubectl utility use the Microsoft Azure CLI:
az aks get-credentials --resource-group <group-name> --name <cluster-name>
- Google Cloud Platform
Google Cloud Platform lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure. Further information can be obtained from the Google Cloud Web site: https://cloud.google.com/kubernetes-engine/
To set the context for the kubectl utility use the Google Cloud CLI:
gcloud container clusters get-credentials <cluster-name>
- Red Hat OpenShift
Red Hat OpenShift is an open source container application platform based on the Kubernetes container orchestrator for enterprise application development and deployment. For more information, see: https://www.openshift.com/.
To set the context for the kubectl utility, use the OpenShift CLI:
oc login
The oc binary is the preferred mechanism for accessing the OpenShift CLI and can be used interchangeably with the kubectl utility.
The default security context that is enabled by Red Hat OpenShift is too restrictive for the Verify Access container. As a result, a less restrictive security context should be enabled for the service account that will run the Verify Access containers (in the examples provided in this chapter we use the 'isva' service account).
The pre-defined 'anyuid' security context can be used, but it grants additional capabilities that are not required by the Verify Access containers. To create a security context with the minimum set of capabilities required for the Verify Access containers:
- Ensure the oc binary is available in the environment and that a login has already been performed.
- Create a configuration file that is named isva-scc.yaml. This configuration file defines a new security context which can be used by the Verify Access containers:
#
# The minimum security context constraints which are required to run
# the Verify Access container.  We cannot use the 'restricted' security
# constraint as we need additional capabilities which would otherwise
# be denied to the container.  The 'anyuid' security constraint may
# be used, but it allows additional capabilities which are not
# required by the container.
#
kind: SecurityContextConstraints
apiVersion: v1

# The name and description for the security context constraint to be
# created.
metadata:
  name: isva-scc
  annotations:
    kubernetes.io/description: The Verify Access SCC allows the container to run as any non-root user.

# The following capabilities are not required.
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegedContainer: false
readOnlyRootFilesystem: false

# The priority is set to '10', otherwise the security constraint does
# not take effect when applied to a service account.
priority: 10

# The Verify Access container needs to be run as a 'custom' user, but does
# not need to run as the root user.
runAsUser:
  type: MustRunAsNonRoot
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny

# The following volumes are required by the Verify Access container.
volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim

# By default we drop all capabilities and then only add back in the
# capabilities which are required by the Verify Access container.
requiredDropCapabilities:
  - ALL

# The capabilities which are required by the Verify Access container.
allowedCapabilities:
  - CHOWN
  - DAC_OVERRIDE
  - FOWNER
  - KILL
  - NET_BIND_SERVICE
  - SETFCAP
  - SETGID
  - SETUID
defaultAddCapabilities:
  - CHOWN
  - DAC_OVERRIDE
  - FOWNER
  - KILL
  - NET_BIND_SERVICE
  - SETFCAP
  - SETGID
  - SETUID
- Create the security context constraint:
oc create -f isva-scc.yaml
- Associate the new security context with the 'isva' user:
oc adm policy add-scc-to-user isva-scc -z isva
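The new security context constraint and its association with the 'isva' service account can be verified with the oc utility:
# Display the security context constraint, including the users and service accounts that may use it.
oc describe scc isva-scc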