

Serverless

  1. OpenShift Serverless
  2. OpenShift Serverless product architecture
  3. Knative Serving
  4. Knative Client
  5. Install OpenShift Serverless
  6. Install Service Mesh
  7. Install the OpenShift Serverless Operator
  8. Delete the OpenShift Serverless Operator
  9. Install Knative Serving
  10. Get started with Knative services
  11. Create a Knative service
  12. Deploy a serverless application
  13. Configure Knative services
  14. Configure domains for Knative services
  15. Define a template for the configured domain
  16. Monitor OpenShift Serverless components
  17. Configure cluster for application monitoring
  18. Verify an OpenShift monitoring installation for use with Knative Serving
  19. Monitor Knative Serving using the OpenShift monitoring stack
  20. Cluster logging with OpenShift Serverless
  21. About cluster logging
  22. Deploy and configure cluster logging
  23. Use cluster logging to find logs for Knative Serving components
  24. Use cluster logging to find logs for services deployed with Knative Serving
  25. Configure Knative Serving autoscaling
  26. Configure concurrent requests for Knative Serving autoscaling
  27. Configure scale bounds for Knative Serving autoscaling
  28. Use Knative Client


OpenShift Serverless

OpenShift Serverless is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

OpenShift Serverless is based on the open source Knative project, which provides portability and consistency across hybrid and multi-cloud environments by enabling an enterprise-grade serverless platform. It implements the building blocks for developers to create modern, source-centric, container-based applications through a series of Custom Resource Definitions (CRDs) and associated controllers in Kubernetes.

OpenShift Serverless simplifies the process of getting code from development into production by reducing the requirement for developer input in infrastructure set up or back-end development. Developers on Red Hat OpenShift Serverless can use the provided Kubernetes-native APIs, as well as familiar languages and frameworks to deploy applications and container workloads.

Combining Operators, Knative and OpenShift enables stateful, stateless, and serverless workloads to all run on a single multi-cloud container platform with automated operations. Developers can benefit from being able to use a single platform for hosting their microservices, legacy, and serverless applications. Applications are packaged as OCI compliant Linux containers that can be run anywhere.

Applications can be triggered by a variety of event sources, such as events from our own applications, cloud services from multiple providers, Software as a Service (SaaS) systems, and Red Hat Services (AMQ Streams).

OpenShift Serverless applications can be integrated with other OpenShift services, such as Service Mesh and cluster monitoring, delivering a complete serverless application development and deployment experience.


OpenShift Serverless product architecture


Knative Serving

Knative Serving on OpenShift builds on Kubernetes and Istio to support deploying and serving serverless applications.

It creates a set of Kubernetes Custom Resource Definitions (CRDs) that are used to define and control the behavior of serverless workloads on an OpenShift cluster.

These CRDs can be used as building blocks to address complex use cases, such as rapid deployment of serverless containers, automatic scaling of Pods, routing and network programming for Istio components, or viewing point-in-time snapshots of deployed code and configurations.


Knative Serving components

This section describes the resources that Knative Serving requires in order to be configured and run correctly.

Knative service resource

The service.serving.knative.dev resource automatically manages the whole lifecycle of a serverless workload on a cluster. It controls the creation of other objects to ensure that an app has a route, a configuration, and a new revision for each update of the service. Services can be defined to always route traffic to the latest revision or to a pinned revision.

Knative route resource

The route.serving.knative.dev resource maps a network endpoint to one or more Knative revisions. We can manage the traffic in several ways, including fractional traffic and named routes.

Knative configuration resource

The configuration.serving.knative.dev resource maintains the required state for your deployment. Modifying a configuration creates a new revision.

Knative revision resource

The revision.serving.knative.dev resource is a point-in-time snapshot of the code and configuration for each modification made to the workload. Revisions are immutable objects and can be retained for as long as needed. Cluster administrators can modify the revision.serving.knative.dev resource to enable automatic scaling of Pods in the OpenShift cluster.
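
If Knative Serving is installed on a cluster, we can list the resource types that these CRDs register by using oc api-resources. This is a quick check, not part of any official procedure:

    $ oc api-resources --api-group=serving.knative.dev

The output lists the services, routes, configurations, and revisions resource types in the serving.knative.dev group.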


Knative Client

The Knative Client (kn) extends the functionality of the oc or kubectl tools to enable interaction with Knative components on OpenShift. kn allows developers to deploy and manage applications without editing YAML files directly.


Install OpenShift Serverless


Cluster size requirements

The cluster must be sized appropriately to ensure that OpenShift Serverless can run correctly. We can use the MachineSet API to manually scale the cluster up to the desired size.

An OpenShift cluster with 10 CPUs and 40 GB memory is the minimum requirement for getting started with your first serverless application. This usually means we must scale up one of the default MachineSets by two additional machines.

Note

For this configuration, the requirements depend on the deployed applications. By default, each pod requests ~400m of CPU, and the recommendations are based on this value. In the given recommendation, an application can scale up to 10 replicas. Lowering the actual CPU request of the application further pushes the boundary.

Note

The given numbers relate only to the pool of worker machines of the OpenShift cluster. Master nodes are not used for general scheduling and are omitted.

For more advanced use cases, such as using OpenShift logging, monitoring, metering, and tracing, deploy more resources. The recommended requirements for such use cases are 24 vCPUs and 96 GB of memory.


Scaling a MachineSet manually

If we must add or remove an instance of a machine in a MachineSet, we can manually scale the MachineSet.

Prerequisites

  • Install an OpenShift cluster and the oc command-line tool.
  • Log in to the cluster with oc as a user with cluster-admin permissions.

    Procedure

    1. View the MachineSets that are in the cluster:

        $ oc get machinesets -n openshift-machine-api

      The MachineSets are listed in the form of <clusterid>-worker-<aws-region-az>.

    2. Scale the MachineSet:

        $ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api

      Or:

        $ oc edit machineset <machineset> -n openshift-machine-api

      We can scale the MachineSet up or down; the field being edited is shown in the sketch at the end of this procedure. It takes several minutes for the new machines to become available.

      By default, the OpenShift router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker MachineSet to 1 or 0 unless you first relocate the router pods.
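
      If we use oc edit, the value to change is spec.replicas. The following is a minimal sketch of the relevant part of a MachineSet manifest; the MachineSet name shown is illustrative:

        apiVersion: machine.openshift.io/v1beta1
        kind: MachineSet
        metadata:
          name: mycluster-worker-us-east-1a
          namespace: openshift-machine-api
        spec:
          replicas: 2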


    Install Service Mesh

    An installed version of Service Mesh is required for the installation of OpenShift Serverless. For details, see the OpenShift documentation on Install Service Mesh.

    Note

    Use the Service Mesh documentation for Operator installation only. Once we install the Operators, use the documentation below to install the Service Mesh Control Plane and Member Roll.


    Install the ServiceMeshControlPlane

    Service Mesh is composed of a data plane and a control plane. After installing the Service Mesh Operator, we can install the control plane. The control plane manages and configures the sidecar proxies to enforce policies and collect telemetry. The following procedure installs a version of the Service Mesh control plane that acts as an ingress to our applications.

    Note

    Install the control plane into the istio-system namespace. Other namespaces are currently not supported.

    Sample YAML file

    apiVersion: maistra.io/v1
    kind: ServiceMeshControlPlane
    metadata:
      name: basic-install
      namespace: istio-system
    spec:
      istio:
        global:
          multitenant: true
          proxy:
            autoInject: disabled
          omitSidecarInjectorConfigMap: true
          disablePolicyChecks: false
          defaultPodDisruptionBudget:
            enabled: false
        istio_cni:
          enabled: true
        gateways:
          istio-ingressgateway:
            autoscaleEnabled: false
            type: LoadBalancer
          istio-egressgateway:
            enabled: false
          cluster-local-gateway:
            autoscaleEnabled: false
            enabled: true
            labels:
              app: cluster-local-gateway
              istio: cluster-local-gateway
            ports:
              - name: status-port
                port: 15020
              - name: http2
                port: 80
                targetPort: 8080
              - name: https
                port: 443
        mixer:
          enabled: false
          policy:
            enabled: false
          telemetry:
            enabled: false
        pilot:
          autoscaleEnabled: false
          sidecar: false
        kiali:
          enabled: false
        tracing:
          enabled: false
        prometheus:
          enabled: false
        grafana:
          enabled: false
        sidecarInjectorWebhook:
          enabled: false
    Note

    Autoscaling is disabled in this version. This release is not intended for production use.

    Note

    Running ServiceMesh with a sidecar injection enabled with OpenShift Serverless is currently not recommended.

    Prerequisite

  • An account with cluster administrator access.
  • The ServiceMesh operator must be installed.

    Procedure

    1. Log in to the OpenShift installation as a cluster administrator.

    2. Create the istio-system namespace:
      $ oc new-project istio-system
    3. Copy the sample YAML file into a smcp.yaml file.

    4. Apply the YAML file using the command:
      $ oc apply -f smcp.yaml

    5. Run this command to watch the progress of the pods during the installation process:
      $ oc get pods -n istio-system -w

  • Install a ServiceMeshMemberRoll

    If the Service Mesh is configured for multi-tenancy, you must have a Service Mesh Member Roll for the control plane namespace. For applications to use the deployed control plane and ingress, their namespaces must be part of a member roll.

    A multi-tenant control plane installation only affects namespaces configured as part of the Service Mesh. Associate namespaces with the Service Mesh by listing them in a ServiceMeshMemberRoll resource that is located in the same namespace as the ServiceMeshControlPlane resource and is named default.

    ServiceMeshMemberRoll Custom Resource Example

    apiVersion: maistra.io/v1
    kind: ServiceMeshMemberRoll
    metadata:
      name: default
      namespace: istio-system
    spec:
      members:
      - knative-serving
      - mynamespace

    Prerequisites

  • Installed Service Mesh Operator.
  • A custom resource file that defines the parameters of the Red Hat OpenShift Service Mesh control plane.

    Procedure

    1. Create a YAML file that replicates the ServiceMeshMemberRoll Custom Resource sample.

    2. Configure the YAML file to include relevant namespaces. Note

      Add all namespaces to which we want to deploy serverless applications. Ensure you retain the knative-serving namespace in the member roll.

    3. Copy the configured YAML into a file smmr.yaml and apply it using:
      $ oc apply -f smmr.yaml
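
    To confirm that the member roll was created with the expected namespaces, we can read it back. This verification step is a sketch, not part of the official procedure:

      $ oc get servicemeshmemberroll default -n istio-system -o yaml

    The spec.members list in the output should include knative-serving and any application namespaces that were added.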

    Install the OpenShift Serverless Operator

    We can install the OpenShift Serverless Operator in the host cluster by following the OpenShift instructions on installing an Operator.

    Note

    The OpenShift Serverless Operator only works for OpenShift versions 4.1.13 and later.

    For details, see the OpenShift documentation on adding Operators to a cluster.


    Delete the OpenShift Serverless Operator

    We can remove the OpenShift Serverless Operator from the host cluster by following the OpenShift instructions on deleting an Operator.

    For details, see the OpenShift documentation on deleting Operators from a cluster.


    Install Knative Serving

    Create a KnativeServing object to install Knative Serving using the OpenShift Serverless Operator.

    Sample serving.yaml

    apiVersion: v1
    kind: Namespace
    metadata:
     name: knative-serving
    ---
    apiVersion: serving.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
     name: knative-serving
     namespace: knative-serving

    Prerequisite

  • An account with cluster administrator access.
  • Installed OpenShift Serverless Operator.

    Procedure

    1. Copy the sample YAML file into serving.yaml and apply it using:
      $ oc apply -f serving.yaml

    2. Verify the installation is complete using the command:
      $ oc get knativeserving/knative-serving -n knative-serving --template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}'

      Results should be similar to:

      DeploymentsAvailable=True
      InstallSucceeded=True
      Ready=True
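
    As an additional check, we can confirm that the Knative Serving pods are running. Pod names and counts vary by version; the activator, autoscaler, and controller components listed later in this guide should be among them:

      $ oc get pods -n knative-serving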

    Get started with Knative services

    Knative services are Kubernetes services that a user creates to deploy a serverless application. Each Knative service is defined by a route and a configuration, contained in a .yaml file.


    Create a Knative service

    To create a service, create a service.yaml file.

    We can copy the sample below. This sample creates a golang application called helloworld-go and lets us specify the image for that application.

    apiVersion: serving.knative.dev/v1alpha1 # Current version of Knative
    kind: Service
    metadata:
      name: helloworld-go # The name of the app
      namespace: default # The namespace the app will use
    spec:
      template:
        spec:
          containers:
          - image: gcr.io/knative-samples/helloworld-go # The URL to the image of the app
            env:
            - name: TARGET # The environment variable printed out by the sample app
              value: "Go Sample v1"


    Deploy a serverless application

    To deploy a serverless application, we must apply the service.yaml file.

    Procedure

    1. Navigate to the directory where the service.yaml file is contained.

    2. Deploy the application by applying the service.yaml file.
      $ oc apply --filename service.yaml

    Now that the service has been created and the application has been deployed, Knative creates a new immutable revision for this version of the application.

    Knative also performs network programming to create a route, ingress, service, and load balancer for the application, and automatically scales the pods up and down based on traffic, including scaling inactive pods to zero.
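
    To find the URL that Knative created for the application, we can query the Knative service resource. ksvc is the short name that the Knative CRDs register for this resource; this is a quick check, with output abbreviated:

      $ oc get ksvc helloworld-go -n default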


    Configure Knative services

    Configure the YAML file for each Knative service before deploying an application.

    This section provides information on basic Knative service YAML configuration tasks, such as configuring a domain for the service.


    Configure domains for Knative services

    The Red Hat OpenShift Serverless Operator will attempt to find the public domain used by the cluster and use that same domain for Knative Services. Having a domain configured makes it possible to reach services in the cluster via HTTP over the internet directly, for example, using the curl command.

    Alternatively, we can configure a different domain for Knative services or even configure a different domain for different Knative services based on labels. Once we have completed this configuration, we can create a template for the domain configuration.

    Procedure

    1. To change the configuration for Knative services, we must edit the config-domain ConfigMap in the knative-serving namespace using oc edit.

        $ oc edit cm config-domain -n knative-serving

    2. Copy the sample ConfigMap and modify the relevant fields.
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: config-domain
        namespace: knative-serving
        labels:
          serving.knative.dev/release: devel
      data:
        example.com: |

      Here is an example of the configuration for the domain example.com, which will be used for routes with the label app set to example.

      data:
        example.com: |
          selector:
            app: example
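
    Putting the two pieces together, a complete config-domain ConfigMap might look like the following sketch. Here example.org acts as the default domain for all services, and example.com is used only for routes with the label app set to example; both domains are illustrative:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: config-domain
        namespace: knative-serving
      data:
        example.org: ""
        example.com: |
          selector:
            app: example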


    Define a template for the configured domain

    Procedure

    1. Define a domainTemplate.

      domainTemplate specifies the golang text template string to use when constructing the Knative service's DNS name.

      The default value is {{.Name}}.{{.Namespace}}.{{.Domain}}.

      An example of this DNS name would look like myapp.mynamespace.example.com for an application called myapp with a namespace called mynamespace, and example.com as the domain.

    2. Change this value if extra levels in the domain name cause issues, for example, if the wildcard certificate's domain only supports a single-level domain name.
    3. To resolve these types of issues, use the format {{.Name}}-{{.Namespace}}.{{.Domain}} or remove the Namespace entirely from the template.

    4. When choosing a new value, remember that using characters such as dashes in service or namespace names can cause conflicts and should be avoided.

    Note

      It is strongly recommended to keep the namespace part of the template to avoid domain name clashes.

    5. Define a tagTemplate.

      tagTemplate specifies the golang text template string to use when constructing the DNS name for tags within traffic blocks.

      Tags are similar to subroutes, which enable traffic to be directed to specific revisions. Tags are used along with the domainTemplate to determine the full URL for the tag, in the format {{.Name}}-{{.Tag}}.

      For example, if your service is called myapp, the namespace is mynamespace, the domain is example.com and the tag we want is latest, the generated host will be myapp-latest.mynamespace.example.com.
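
      In upstream Knative Serving, both templates are set in the config-network ConfigMap in the knative-serving namespace. The following sketch assumes that mechanism and uses the upstream key names; tagTemplate may not be configurable in every release:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: config-network
          namespace: knative-serving
        data:
          domainTemplate: "{{.Name}}-{{.Namespace}}.{{.Domain}}"
          tagTemplate: "{{.Name}}-{{.Tag}}"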

    Additional resources

    For more information about golang templates, see the Go package template documentation.


    Monitor OpenShift Serverless components

    As an OpenShift cluster administrator, we can deploy the OpenShift monitoring stack and monitor the metrics of OpenShift Serverless components.

    When using the OpenShift Serverless Operator, the required ServiceMonitor objects are created automatically for monitoring the deployed components.

    OpenShift Serverless components, such as Knative Serving, expose metrics data. Administrators can monitor this data using the web console.


    Configure cluster for application monitoring

    Before application developers can monitor their applications, the human operator of the cluster needs to configure the cluster accordingly. This procedure shows how to do so.

    Prerequisites

  • Log in as a user that belongs to a role with administrative privileges for the cluster.

    Procedure

    1. In the web console, navigate to Catalog → OperatorHub and install the Prometheus Operator in the namespace where our application is.

    2. Navigate to Catalog → Developer Catalog and install Prometheus, Alertmanager, Prometheus Rule, and Service Monitor in the same namespace.

    Verify an OpenShift monitoring installation for use with Knative Serving

    Manual configuration for monitoring by an administrator is not required, but we can carry out these steps to verify that monitoring is installed correctly.

    Procedure

    1. Verify that the ServiceMonitor objects are deployed.
      $ oc get servicemonitor -n knative-serving
      NAME         AGE
      activator    11m
      autoscaler   11m
      controller   11m

    2. Verify that the openshift.io/cluster-monitoring=true label has been added to the Knative Serving namespace:
      $ oc get namespace knative-serving --show-labels
      NAME              STATUS   AGE   LABELS
      knative-serving   Active   4d    istio-injection=enabled,openshift.io/cluster-monitoring=true,serving.knative.dev/release=v0.7.0


    Monitor Knative Serving using the OpenShift monitoring stack

    This section provides example instructions for the visualization of Knative Serving Pod autoscaling metrics using the OpenShift monitoring tools.

    Prerequisites

  • Install the OpenShift monitoring stack.

    Procedure

    1. Navigate to the web console and authenticate.
    2. Navigate to Monitor → Metrics.

    3. Enter the Expression and select Run queries. To monitor Knative Serving autoscaler Pods, use this example expression.
      autoscaler_actual_pods

      You will now see monitoring information for the Knative Serving autoscaler Pods in the console.


    Cluster logging with OpenShift Serverless


    About cluster logging

    As an OpenShift cluster administrator, we can deploy cluster logging to aggregate logs for a range of OpenShift services.

    The cluster logging components are based upon Elasticsearch, Fluentd, and Kibana (EFK). The collector, Fluentd, is deployed to each node in the OpenShift cluster. It collects all node and container logs and writes them to Elasticsearch (ES). Kibana is the centralized, web UI where users and administrators can create rich visualizations and dashboards with the aggregated data.

    OpenShift cluster administrators can deploy cluster logging using a few CLI commands and the web console to install the Elasticsearch Operator and Cluster Logging Operator. When the operators are installed, create a Cluster Logging Custom Resource (CR) to schedule cluster logging pods and other resources necessary to support cluster logging. The operators are responsible for deploying, upgrading, and maintaining cluster logging.

    We can configure cluster logging by modifying the Cluster Logging Custom Resource (CR), named instance. The CR defines a complete cluster logging deployment that includes all the components of the logging stack to collect, store and visualize logs. The Cluster Logging Operator watches the ClusterLogging Custom Resource and adjusts the logging deployment accordingly.

    Administrators and application developers can view the logs of the projects for which they have view access.


    Deploy and configure cluster logging

    OpenShift cluster logging is designed to be used with the default configuration, which is tuned for small to medium sized OpenShift clusters.

    The installation instructions that follow include a sample Cluster Logging Custom Resource (CR), which we can use to create a cluster logging instance and configure the cluster logging deployment.

    To use the default cluster logging install, we can use the sample CR directly.

    To customize your deployment, make changes to the sample CR as needed. The following describes the configurations we can make when installing the cluster logging instance or modify after installation. See the Configuring sections for more information on working with each component, including modifications we can make outside of the Cluster Logging Custom Resource.


    Configure and tune cluster logging

    We can configure the cluster logging environment by modifying the Cluster Logging Custom Resource deployed in the openshift-logging project.

    We can modify any of the following components upon installation or afterward:

    Management state

    The Cluster Logging Operator and Elasticsearch Operator can be in a Managed or Unmanaged state.

    In managed state, the Cluster Logging Operator (CLO) responds to changes in the Cluster Logging Custom Resource (CR) and attempts to update the cluster to match the CR.

    In order to modify certain components managed by the Cluster Logging Operator or the Elasticsearch Operator, set the operator to the unmanaged state.

    In Unmanaged state, the operators do not respond to changes in the CRs. The administrator assumes full control of individual component configurations and upgrades when in unmanaged state.

    Note

    The OpenShift documentation indicates in a prerequisite step when to set the cluster to Unmanaged.

      spec:
        managementState: "Managed"

    Important

    An unmanaged deployment will not receive updates until the ClusterLogging custom resource is placed back into a managed state.
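
    To switch between the states, edit the Cluster Logging Custom Resource directly. As noted earlier, the CR is named instance and lives in the openshift-logging project:

      $ oc edit clusterlogging instance -n openshift-logging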

    Memory and CPU

    We can adjust both the CPU and memory limits for each component by modifying the resources block with valid memory and CPU values:
    spec:
      logStore:
        type: "elasticsearch"
        elasticsearch:
          resources:
            limits:
              cpu:
              memory:
            requests:
              cpu: 1
              memory: 16Gi
      collection:
        logs:
          type: "fluentd"
          fluentd:
            resources:
              limits:
                cpu:
                memory:
              requests:
                cpu:
                memory:
      visualization:
        type: kibana
        kibana:
          resources:
            limits:
              cpu:
              memory:
            requests:
              cpu:
              memory:
      curation:
        type: "curator"
        curator:
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 200m
              memory: 200Mi

    Elasticsearch storage

    We can configure a persistent storage class and size for the Elasticsearch cluster using the storageClass name and size parameters. The Cluster Logging Operator creates a PersistentVolumeClaim for each data node in the Elasticsearch cluster based on these parameters.
      spec:
        logStore:
          type: "elasticsearch"
          elasticsearch:
            storage:
              storageClassName: "gp2"
              size: "200G"

    This example specifies each data node in the cluster will be bound to a PersistentVolumeClaim that requests "200G" of "gp2" storage. Each primary shard will be backed by a single replica.

    Note

    Omitting the storage block results in a deployment that includes ephemeral storage only.

      spec:
        logStore:
          type: "elasticsearch"
          elasticsearch:
            storage: {}

    Elasticsearch replication policy

    We can set the policy that defines how Elasticsearch shards are replicated across data nodes in the cluster:

  • FullRedundancy. The shards for each index are fully replicated to every data node.
  • MultipleRedundancy. The shards for each index are spread over half of the data nodes.
  • SingleRedundancy. A single copy of each shard. Logs are always available and recoverable as long as at least two data nodes exist.
  • ZeroRedundancy. No copies of any shards. Logs may be unavailable (or lost) in the event a node is down or fails.
    Curator schedule

    You specify the schedule for Curator in cron format (https://en.wikipedia.org/wiki/Cron).

      spec:
        curation:
          type: "curator"
          curator:
            resources:
            schedule: "30 3 * * *"


    Sample modified Cluster Logging Custom Resource

    The following is an example of a Cluster Logging Custom Resource modified using the options previously described.

    Sample modified Cluster Logging Custom Resource

    apiVersion: "logging.openshift.io/v1alpha1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
      namespace: "openshift-logging"
    spec:
      managementState: "Managed"
      logStore:
        type: "elasticsearch"
        elasticsearch:
          nodeCount: 2
          resources:
    limits:
      memory: 2Gi
    requests:
      cpu: 200m
      memory: 2Gi
          storage: {}
          redundancyPolicy: "SingleRedundancy"
      visualization:
        type: "kibana"
        kibana:
          resources:
    limits:
      memory: 1Gi
    requests:
      cpu: 500m
      memory: 1Gi
          replicas: 1
      curation:
        type: "curator"
        curator:
          resources:
    limits:
      memory: 200Mi
    requests:
      cpu: 200m
      memory: 200Mi
          schedule: "*/5 * * * *"
      collection:
        logs:
          type: "fluentd"
          fluentd:
    resources:
      limits:
    memory: 1Gi
      requests:
    cpu: 200m
    memory: 1Gi
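
    To create or update the instance from this sample, save it to a file and apply it; the file name is illustrative:

      $ oc apply -f clusterlogging.yaml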


    Use cluster logging to find logs for Knative Serving components

    Procedure

    1. To open the Kibana UI, the visualization tool for Elasticsearch, use the following command to get the Kibana route:
      $ oc -n openshift-logging get route kibana
    2. Use the route's URL to navigate to the Kibana dashboard and log in.
    3. Ensure the index is set to .all. If the index is not set to .all, only the OpenShift system logs will be listed.

    4. We can filter the logs using the knative-serving namespace. Enter kubernetes.namespace_name:knative-serving in the search box to filter results.

    Note

      Knative Serving uses structured logging by default. We can enable the parsing of these logs by customizing the cluster logging Fluentd settings. This makes the logs more searchable and enables filtering on the log level to quickly identify issues.


    Use cluster logging to find logs for services deployed with Knative Serving

    With OpenShift Cluster Logging, the logs that our applications write to the console are collected in Elasticsearch. The following procedure outlines how to apply these capabilities to applications deployed by using Knative Serving.

    Procedure

    1. Find the URL to Kibana:

        $ oc -n openshift-logging get route kibana

    2. Enter the URL in your browser to open the Kibana UI.

    3. Ensure the index is set to .all. If the index is not set to .all, only the OpenShift system logs will be listed.

    4. Filter the logs using the Kubernetes namespace your service is deployed in. Add a filter to identify the service itself: kubernetes.namespace_name:default AND kubernetes.labels.serving_knative_dev\/service:{SERVICE_NAME}.

    Note

      We can also filter by using /configuration or /revision.

    5. We can narrow the search by using kubernetes.container_name:<user-container> to display only the logs generated by our application. Otherwise, logs from the queue-proxy will also appear.

    Note

      Use JSON-based structured logging in our application to allow for the quick filtering of these logs in production environments.


    Configure Knative Serving autoscaling

    OpenShift Serverless provides capabilities for automatic Pod scaling, including scaling inactive Pods to zero, by enabling the Knative Serving autoscaling system in an OpenShift cluster.

    To enable autoscaling for Knative Serving, configure concurrency and scale bounds in the revision template.

    Note

    Any limits or targets set in the revision template are measured against a single instance of our application. For example, setting the target annotation to 50 will configure the autoscaler to scale the application so that each instance of it will handle 50 requests at a time.


    Configure concurrent requests for Knative Serving autoscaling

    We can specify the number of concurrent requests that should be handled by each instance of an application (revision container) by adding the target annotation or the containerConcurrency field in the revision template.

    Here is an example of target being used in a revision template:

    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    metadata:
      name: myapp
    spec:
      template:
        metadata:
          annotations:
            autoscaling.knative.dev/target: "50"
        spec:
          containers:
          - image: myimage

    Here is an example of containerConcurrency being used in a revision template:

    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    metadata:
      name: myapp
    spec:
      template:
        spec:
          containerConcurrency: 100
          containers:
          - image: myimage

    Setting values for both target and containerConcurrency will make the autoscaler aim for the target number of concurrent requests, while imposing a hard limit of the containerConcurrency number of requests.

    For example, if the target value is 50 and the containerConcurrency value is 100, the targeted number of requests will be 50, but the hard limit will be 100.

    If the containerConcurrency value is less than the target value, the target value will be tuned down, since there is no need to target more requests than the number that can actually be handled.

    Note

    containerConcurrency should only be used if there is a clear need to limit how many requests reach the application at a given time. Using containerConcurrency is only advised if the application needs to have an enforced constraint of concurrency.


    Configure concurrent requests using the target annotation

    The default target for the number of concurrent requests is 100, but we can override this value by adding or modifying the autoscaling.knative.dev/target annotation value in the revision template.

    Here is an example of how this annotation is used in the revision template to set the target to 50.

    autoscaling.knative.dev/target: "50"


    Configure concurrent requests using the containerConcurrency field

    containerConcurrency sets a hard limit on the number of concurrent requests handled.

    containerConcurrency: 0 | 1 | 2-N

    0: Allows unlimited concurrent requests.

    1: Guarantees that only one request is handled at a time by a given instance of the revision container.

    2 or more: Limits request concurrency to that value.
    Note

    If there is no target annotation, autoscaling is configured as if target is equal to the value of containerConcurrency.


    Configure scale bounds for Knative Serving autoscaling

    The minScale and maxScale annotations can be used to configure the minimum and maximum number of Pods that can serve applications. These annotations can be used to prevent cold starts or to help control computing costs.

    minScale

    If the minScale annotation is not set, Pods will scale to zero (or to 1 if enable-scale-to-zero is set to false in the config-autoscaler ConfigMap).

    maxScale

    If the maxScale annotation is not set, there will be no upper limit for the number of Pods created.

    minScale and maxScale can be configured as follows in the revision template:

    spec:
      template:
        metadata:
          annotations:
            autoscaling.knative.dev/minScale: "2"
            autoscaling.knative.dev/maxScale: "10"

    Using these annotations in the revision template will propagate this configuration to PodAutoscaler objects.

    Note

    These annotations apply for the full lifetime of a revision. Even when a revision is not referenced by any route, the minimal Pod count specified by minScale will still be provided. Keep in mind that non-routable revisions may be garbage collected, which enables Knative to reclaim the resources.
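
    The same bounds can also be set with the Knative client, using the autoscaling flags described later in this guide. This is a sketch; the service name myapp is illustrative:

      $ kn service update myapp --min-scale 2 --max-scale 10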


    Use Knative Client

    Knative Client (kn) is the Knative command-line interface (CLI). The CLI exposes commands for managing our applications, as well as lower-level tools to interact with components of OpenShift. With kn, we can create applications and manage projects from the terminal.

    Knative Client does not have its own login mechanism. To log in to the cluster, we must install the oc CLI and use oc login. Installation options for the CLI vary depending on your operating system.


    Install the OpenShift Command-line Interface

    If we installed an earlier version of oc, we cannot use it to complete all of the commands in OpenShift 4.1. Download and install the new version of oc.

    1. From the OpenShift Infrastructure Providers page, click Download Command-line Tools.

    2. From the site that is displayed, download the compressed file for your operating system.

    Note

      We can install oc on Linux, Windows, or macOS.

    3. Extract the compressed file and place it in a directory that is on your PATH.


    Install the kn CLI for Linux

    For Linux distributions, we can download the CLI directly as a tar.gz archive.

    Procedure

    1. Download the CLI.

    2. Unpack the archive:
      $ tar -xf <file>
    3. Move the kn binary to a directory on your PATH.

    4. To check your path, run:
      $ echo $PATH
      Note

      If we do not use RHEL or Fedora, ensure that libc is installed in a directory on your library path. If libc is not available, we might see the following error when we run CLI commands:

      kn: No such file or directory


    Install the kn CLI for macOS

    kn for macOS is provided as a tar.gz archive.

    Procedure

    1. Download the CLI.
    2. Unpack the archive.
    3. Move the kn binary to a directory on your PATH.

    4. To check your PATH, open a terminal window and run:
      $ echo $PATH


    Install the kn CLI for Windows

    The CLI for Windows is provided as a zip archive.

    Procedure

    1. Download the CLI.
    2. Unzip the archive with a ZIP program.
    3. Move the kn binary to a directory on your PATH.

    4. To check your PATH, open the Command Prompt and run the command:
      C:\> path


    Log in to the CLI

    We can log in to the oc CLI to access and manage the cluster.

    Prerequisites

  • Access to an OpenShift cluster.
  • Installed CLI.

    Procedure

    1. Log in to the CLI using the oc login command and enter the required information when prompted.
      $ oc login
      Server [https://localhost:8443]: https://openshift.example.com:6443 1
      The server uses a certificate signed by an unknown authority.
      We can bypass the certificate check, but any data you send to the server could be intercepted by others.
      Use insecure connections? (y/n): y 2
      
      Authentication required for https://openshift.example.com:6443 (openshift)
      Username: user1 3
      Password: 4
      Login successful.
      
      We don't have any projects. We can try to create a new project, by running
      
          oc new-project <projectname>
      
      Welcome! See 'oc help' to get started.

      1 Enter the OpenShift server URL.

      2 Enter whether to use insecure connections.

      3 Enter the user name to log in as.

      4 Enter the user's password.

      We can now create a project or issue other commands for managing the cluster.


    Basic workflow using Knative Client

    Use this basic workflow to perform create, read, update, and delete (CRUD) operations on a service. The following example deploys a simple Hello World service that reads the environment variable TARGET and prints its output.

    Procedure

    1. Create a service in the default namespace from an image.
      $ kn service create hello --image gcr.io/knative-samples/helloworld-go --env TARGET=Knative
      
      Service 'hello' successfully created in namespace 'default'.
      Waiting for service 'hello' to become ready ... OK
      
      Service URL:
      http://hello.default.apps-crc.testing

    2. List the service.
      $ kn service list
      
      NAME    URL                                     GENERATION   AGE   CONDITIONS   READY   REASON
      hello   http://hello.default.apps-crc.testing   1            85s   3 OK / 3     True

    3. Check if the service is working using the curl service endpoint command:
      $ curl http://hello.default.apps-crc.testing
      
      Hello Knative!

    4. Update the service.
      $ kn service update hello --env TARGET=Kn
      
      Waiting for service 'hello' to become ready ... OK
      Service 'hello' updated in namespace 'default'.

      The service's environment variable TARGET is now set to Kn.

    5. Describe the service.
      $ kn service describe hello
      
      Name:         hello
      Namespace:    default
      URL:          http://hello.default.apps-crc.testing
      Address:      http://hello.default.svc.cluster.local
      Annotations:  serving.knative.dev/creator=kube:admin, serving.knative.dev/lastModifier=kube:admin
      Age:          34m
      
      Revisions:
        100%  Name:    hello-fszsh-2 [2] (35s)
              Image:   gcr.io/knative-samples/helloworld-go (pinned to 5ea96ba4b872)
              Env:     TARGET=Kn
              Memory:  100M ... 200M
              CPU:     400m ... 1
      
      Conditions:
        OK TYPE                   AGE REASON
        ++ ConfigurationsReady    22s
        ++ Ready                  22s
        ++ RoutesReady            22s

    6. Delete the service.
      $ kn service delete hello
      
      Service 'hello' successfully deleted in namespace 'default'.

      We can then verify that the 'hello' service is deleted by attempting to list it.

      $ kn service list hello
      
      No services found.


    Autoscaling workflow using Knative Client

    We can access autoscaling capabilities by using kn to modify Knative services without editing YAML files directly.

    Use the service create and service update commands with the appropriate flags to configure the autoscaling behavior.
    Flag                       Description
    --concurrency-limit int    Hard limit of concurrent requests to be processed by a single replica.
    --concurrency-target int   Recommendation for when to scale up based on the concurrent number of incoming requests. Defaults to --concurrency-limit.
    --max-scale int            Maximum number of replicas.
    --min-scale int            Minimum number of replicas.
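
    For example, the following command creates a service that aims for 50 concurrent requests per replica and never scales beyond 10 replicas. This is a sketch; the service name and values are illustrative:

      $ kn service create hello --image gcr.io/knative-samples/helloworld-go --concurrency-target 50 --max-scale 10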


    Traffic splitting using Knative Client

    kn helps you control which revisions get routed traffic on your Knative service.

    Knative service allows for traffic mapping, which is the mapping of revisions of the service to an allocated portion of traffic. It offers the option to create unique URLs for particular revisions and has the ability to assign traffic to the latest revision.

    With every update to the configuration of the service, a new revision is created with the service route pointing all the traffic to the latest ready revision by default.

    We can change this behavior by defining which revision gets a portion of the traffic.

    Procedure

  • Use the kn service update command with the --traffic flag to update the traffic.

    Note

    --traffic RevisionName=Percent uses the following syntax:

  • The --traffic flag requires two values separated by an equals sign (=).
  • The RevisionName string refers to the name of the revision.
  • The Percent integer denotes the traffic portion assigned to the revision.
  • Use identifier @latest for the RevisionName to refer to the latest ready revision of the service. We can use this identifier only once with the --traffic flag.
  • If the service update command updates the configuration values for the service along with traffic flags, the @latest reference will point to the created revision to which the updates are applied.
  • The --traffic flag can be specified multiple times and is valid only if the sum of the Percent values in all flags totals 100.

    Note

    For example, to route 10% of traffic to the new revision before directing all traffic to it, use the following command:

    $ kn service update svc --traffic @latest=10 --traffic svc-vwxyz=90


    Assigning tag revisions

    A tag in a traffic block of a service creates a custom URL, which points to a referenced revision. A user can define a unique tag for an available revision of a service, which creates a custom URL using the format http(s)://TAG-SERVICE.DOMAIN.

    A given tag must be unique to its traffic block of the service. kn supports assigning and unassigning custom tags for revisions of services as part of the kn service update command.

    Note

    If we have assigned a tag to a particular revision, a user can reference the revision by its tag in the --traffic flag as --traffic Tag=Percent.

    Procedure

  • Use the following command:
    $ kn service update svc --tag @latest=candidate --tag svc-vwxyz=current
    Note

    --tag RevisionName=Tag uses the following syntax:

  • The --tag flag requires two values separated by an equals sign (=).
  • The RevisionName string refers to the name of the revision.
  • The Tag string denotes the custom tag to be given to this revision.
  • Use the identifier @latest for the RevisionName to refer to the latest ready revision of the service. We can use this identifier only once with the --tag flag.
  • If the service update command updates the configuration values for the service along with tag flags, the @latest reference will point to the created revision after the update is applied.
  • The --tag flag can be specified multiple times.
  • The --tag flag may assign different tags to the same revision.

    Unassigning tag revisions

    Tags assigned to revisions in a traffic block can be unassigned. Unassigning tags removes the custom URLs.

    Note

    If a revision is untagged and it is assigned 0% of the traffic, it is removed from the traffic block entirely.

    Procedure

  • A user can unassign the tags for revisions using the kn service update command:
    $ kn service update svc --untag candidate
    Note

    --untag Tag uses the following syntax:

  • The --untag flag requires one value.
  • The tag string denotes the unique tag in the traffic block of the service which needs to be unassigned. This also removes the respective custom URL.
  • The --untag flag can be specified multiple times.

    Traffic flag operation precedence

    All traffic-related flags can be specified using a single kn service update command. kn defines the precedence of these flags. The order of the flags specified when using the command is not taken into account.

    The precedence of the flags, as they are evaluated by kn, is:

    1. --untag: All the referenced revisions with this flag are removed from the traffic block.
    2. --tag: Revisions are tagged as specified in the traffic block.
    3. --traffic: The referenced revisions are assigned a portion of the traffic split.
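
    For example, the following single command removes the tag testing, tags the latest ready revision as staging, and splits traffic between that tag and the revision svc-vwxyz; all names are illustrative. Because of the precedence rules, the result is the same regardless of the order in which the flags are written:

      $ kn service update svc --untag testing --tag @latest=staging --traffic staging=20 --traffic svc-vwxyz=80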


    Traffic splitting flags

    kn supports traffic operations on the traffic block of a service as part of the kn service update command.

    The following table displays a summary of traffic splitting flags, value formats, and the operation the flag performs. The "Repetition" column denotes whether repeating the particular value of flag is allowed in a kn service update command.
    Flag       Value(s)               Operation                                            Repetition
    --traffic  RevisionName=Percent   Gives Percent traffic to RevisionName                Yes
    --traffic  Tag=Percent            Gives Percent traffic to the revision having Tag     Yes
    --traffic  @latest=Percent        Gives Percent traffic to the latest ready revision   No
    --tag      RevisionName=Tag       Gives Tag to RevisionName                            Yes
    --tag      @latest=Tag            Gives Tag to the latest ready revision               No
    --untag    Tag                    Removes Tag from revision                            Yes


    Serverless Release Notes


    Get support

    If you experience difficulty with a procedure described in this documentation, visit the Customer Portal to learn more about support for Technology Preview features.


    New features in Technology Preview 1

    This release of OpenShift Serverless introduces the OpenShift Serverless Operator, which supports Knative Serving and is tested for OpenShift Service Mesh 1.0.


    Known issues

    The following limitations exist in OpenShift Serverless at this time:

    • The Knative Serving Operator should wait for ServiceMeshMemberRoll to include the knative-serving namespace. The installation procedure recommends creating the knative-serving namespace and then installing the operator. Istio does not consider the knative-serving namespace to be in the ServiceMeshMemberRoll when the Knative Serving Pods are being created. Consequently, the sidecars are not injected.

    • Knative service returns a 503 status code when the cluster is running for a long time. The Knative Serving Pods do not show any errors. Restarting the istio-pilot Pod temporarily fixes the issue.

    • gRPC and HTTP2 do not work against routes. This is a known limitation of OpenShift routes.


