

Update Clusters

  1. Update a cluster to a minor version from the web console
    1. OpenShift update service
    2. Update a cluster using the web console
  2. Update a cluster to a minor version using the CLI
    1. OpenShift update service
    2. Update a cluster using the CLI
  3. Update a cluster that includes RHEL compute machines
    1. OpenShift update service
    2. Update a cluster using the web console
    3. (Optional) Add hooks to perform Ansible tasks on RHEL machines
    4. About Ansible hooks for upgrades
    5. Configure the Ansible inventory file to use hooks
    6. Available hooks for RHEL compute machines
    7. Update RHEL compute machines in the cluster


Update a cluster to a minor version from the web console

We can update an OpenShift cluster to a minor version using the web console.


OpenShift update service

The update service provides over-the-air updates to both OpenShift and Red Hat Enterprise Linux CoreOS (RHCOS). The update service provides a graph that contains vertices and edges. Edges show which versions we can safely update to. Vertices are update payloads that specify the intended state of the managed cluster components.
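As an illustration of this nodes-and-edges structure, the following sketch lists the safe update targets that the edges encode. The graph JSON here is a hypothetical inline sample, not live update service output, and the payload values are placeholders:

```shell
# Sketch: a miniature update graph. "nodes" are update payloads, and each
# edge [from, to] says the update from node "from" to node "to" is safe.
graph='{
  "nodes": [
    {"version": "4.1.0", "payload": "example-payload-a"},
    {"version": "4.1.2", "payload": "example-payload-b"}
  ],
  "edges": [[0, 1]]
}'

# Print each safe update path encoded by the edges.
echo "$graph" | jq -r '.nodes as $n | .edges[] | "\($n[.[0]].version) -> \($n[.[1]].version)"'
```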

The Cluster Version Operator (CVO) in the cluster checks with the OpenShift update service to see the valid updates and update paths based on current component versions and information in the graph. When requesting an update, the OpenShift CVO uses the release image for that update to upgrade the cluster. The release artifacts are hosted in Quay as container images.

To allow the OpenShift update service to provide only compatible updates, a release verification pipeline exists to drive automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift update service notifies you that it is available.

During continuous update mode, two controllers run. One continuously updates the payload manifests, applies them to the cluster, and outputs the status of the controlled rollout of the Operators, whether they are available, upgrading, or failed. The second controller polls the OpenShift update service to determine if updates are available.

Reverting the cluster to a previous version, or a rollback, is not supported. Only upgrading to a newer version is supported.


Update a cluster using the web console

If updates are available, we can update the cluster from the web console.

We can find information about available OpenShift advisories and updates in the errata section of the Customer Portal.

Prerequisites

  • Have access to the web console as a user with admin privileges.

Procedure

  1. From the web console, click Administration > Cluster Settings and review the contents of the Overview tab.

    1. For production clusters, ensure that the CHANNEL is set to stable-4.1.

      For production clusters, we must subscribe to the stable-4.1 channel.

    2. If the UPDATE STATUS is not Updates Available, we cannot upgrade the cluster.
    3. The DESIRED VERSION indicates the cluster version that the cluster is running or is updating to.

  2. Click Updates Available, select a version to update to, and click Update. The UPDATE STATUS changes to Updating, and we can review the progress of the Operator upgrades on the Cluster Operators tab.


Update a cluster to a minor version using the CLI

We can update, or upgrade, an OpenShift cluster using the OpenShift CLI (oc).


OpenShift update service

The OpenShift update service provides over-the-air updates to both OpenShift and Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph of component Operators, or a diagram that contains vertices and the edges that connect them. The edges in the graph show which versions we can safely update to, and the vertices are update payloads that specify the intended state of the managed cluster components.

The Cluster Version Operator (CVO) in the cluster checks with the OpenShift update service to see the valid updates and update paths based on current component versions and information in the graph. When requesting an update, the OpenShift CVO uses the release image for that update to upgrade the cluster. The release artifacts are hosted in Quay as container images.

To allow the OpenShift update service to provide only compatible updates, a release verification pipeline exists to drive automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift update service notifies you that it is available.

During continuous update mode, two controllers run. One continuously updates the payload manifests, applies them to the cluster, and outputs the status of the controlled rollout of the Operators, whether they are available, upgrading, or failed. The second controller polls the OpenShift update service to determine if updates are available.

Reverting the cluster to a previous version, or a rollback, is not supported. Only upgrading to a newer version is supported.


Update a cluster using the CLI

If updates are available, we can update the cluster using the OpenShift CLI (oc).

We can find information about available OpenShift advisories and updates in the errata section of the Customer Portal.

Prerequisites

  • Install the version of the OpenShift Command-line Interface (CLI), commonly known as oc, that matches your updated cluster version.
  • Log in to the cluster as a user with cluster-admin privileges.
  • Install the jq package.

Procedure

  1. Ensure that the cluster is available:
    $ oc get clusterversion
    
    NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
    version   4.1.0     True        False         158m    Cluster version is 4.1.0

  2. Review the current update channel information and confirm that your channel is set to stable-4.1:
    $ oc get clusterversion -o json|jq ".items[0].spec"
    
    {
      "channel": "stable-4.1",
      "clusterID": "990f7ab8-109b-4c95-8480-2bd1deec55ff",
      "upstream": "https://api.openshift.com/api/upgrades_info/v1/graph"
    }

    For production clusters, we must subscribe to the stable-4.1 channel.

  3. View the available updates and note the version number of the update that we want to apply:
    $ oc adm upgrade
    
    Cluster version is 4.1.0
    
    Updates:
    
    VERSION IMAGE
    4.1.2   quay.io/openshift-release-dev/ocp-release@sha256:9c5f0df8b192a0d7b46cd5f6a4da2289c155fd5302dec7954f8f06c878160b8b

  4. Apply an update:

    • To update to the latest version:
      $ oc adm upgrade --to-latest=true

    • To update to a specific version:
      $ oc adm upgrade --to=<version> 1

      1 <version> is the update version that you obtained from the output of the previous command.

  5. Review the status of the Cluster Version Operator:
    $ oc get clusterversion -o json|jq ".items[0].spec"
    
    {
      "channel": "stable-4.1",
      "clusterID": "990f7ab8-109b-4c95-8480-2bd1deec55ff",
      "desiredUpdate": {
        "force": false,
        "image": "quay.io/openshift-release-dev/ocp-release@sha256:9c5f0df8b192a0d7b46cd5f6a4da2289c155fd5302dec7954f8f06c878160b8b",
        "version": "4.1.2" 1
      },
      "upstream": "https://api.openshift.com/api/upgrades_info/v1/graph"
    }

    1 If the version number in the desiredUpdate stanza matches the value that you specified, the update is in progress.
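A scripted version of this check might look like the following sketch. It inlines a sample spec instead of reading live output; on a real cluster the JSON would come from the `oc get clusterversion -o json|jq ".items[0].spec"` command above:

```shell
# Sketch: confirm that the desiredUpdate version matches the version we
# requested. The spec JSON is a sample taken from the documented output.
spec='{"channel":"stable-4.1","desiredUpdate":{"force":false,"version":"4.1.2"}}'
requested=4.1.2

actual=$(echo "$spec" | jq -r '.desiredUpdate.version')
if [ "$actual" = "$requested" ]; then
  echo "update to $requested is in progress"
else
  echo "desiredUpdate is $actual, expected $requested"
fi
```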

  6. Review the cluster version status history to monitor the status of the update. It might take some time for all the objects to finish updating.
    $ oc get clusterversion -o json|jq ".items[0].status.history"
    
    [
      {
        "completionTime": null,
        "image": "quay.io/openshift-release-dev/ocp-release@sha256:9c5f0df8b192a0d7b46cd5f6a4da2289c155fd5302dec7954f8f06c878160b8b",
        "startedTime": "2019-06-19T20:30:50Z",
        "state": "Partial",
        "verified": true,
        "version": "4.1.2"
      },
      {
        "completionTime": "2019-06-19T20:30:50Z",
        "image": "quay.io/openshift-release-dev/ocp-release@sha256:b8307ac0f3ec4ac86c3f3b52846425205022da52c16f56ec31cbe428501001d6",
        "startedTime": "2019-06-19T17:38:10Z",
        "state": "Completed",
        "verified": false,
        "version": "4.1.0"
      }
    ]

    The history contains a list of the most recent versions applied to the cluster. This value is updated when the CVO applies an update. The list is ordered by date, where the newest update is first in the list. Updates in the history have state Completed if the rollout completed and Partial if the update failed or did not complete.

    If an upgrade fails, the Operator stops and reports the status of the failing component. Rolling the cluster back to a previous version is not supported. If your upgrade fails, contact Red Hat support.
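Given a saved copy of the history array, a sketch like the following (sample entries abbreviated from the output above) reports the most recent completed version:

```shell
# Sketch: filter the clusterversion history for Completed entries. The list
# is newest-first, so the first Completed entry is the last finished update.
history='[
  {"completionTime": null, "state": "Partial", "version": "4.1.2"},
  {"completionTime": "2019-06-19T20:30:50Z", "state": "Completed", "version": "4.1.0"}
]'

echo "$history" | jq -r 'map(select(.state == "Completed")) | .[0].version'
```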

  7. After the update completes, we can confirm that the cluster version has updated to the new version:
    $ oc get clusterversion
    
    NAME      VERSION     AVAILABLE   PROGRESSING   SINCE     STATUS
    version   4.1.2       True        False         2m        Cluster version is 4.1.2


Update a cluster that includes RHEL compute machines

We can update, or upgrade, an OpenShift cluster. If the cluster contains Red Hat Enterprise Linux (RHEL) machines, we must perform additional steps to update those machines.



OpenShift update service

The OpenShift update service provides over-the-air updates to both OpenShift and Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph of component Operators, or a diagram that contains vertices and the edges that connect them. The edges in the graph show which versions we can safely update to, and the vertices are update payloads that specify the intended state of the managed cluster components.

The Cluster Version Operator (CVO) in the cluster checks with the OpenShift update service to see the valid updates and update paths based on current component versions and information in the graph. When requesting an update, the OpenShift CVO uses the release image for that update to upgrade the cluster. The release artifacts are hosted in Quay as container images.

To allow the OpenShift update service to provide only compatible updates, a release verification pipeline exists to drive automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift update service notifies you that it is available.

During continuous update mode, two controllers run. One continuously updates the payload manifests, applies them to the cluster, and outputs the status of the controlled rollout of the Operators, whether they are available, upgrading, or failed. The second controller polls the OpenShift update service to determine if updates are available.

Reverting the cluster to a previous version, or a rollback, is not supported. Only upgrading to a newer version is supported.


Update a cluster using the web console

If updates are available, we can update the cluster from the web console.

We can find information about available OpenShift advisories and updates in the errata section of the Customer Portal.

Prerequisites

  • Have access to the web console as a user with admin privileges.

Procedure

  1. From the web console, click Administration > Cluster Settings and review the contents of the Overview tab.

    1. For production clusters, ensure that the CHANNEL is set to stable-4.1.

      For production clusters, we must subscribe to the stable-4.1 channel.

    2. If the UPDATE STATUS is not Updates Available, we cannot upgrade the cluster.
    3. The DESIRED VERSION indicates the cluster version that the cluster is running or is updating to.
  2. Click Updates Available, select a version to update to, and click Update. The UPDATE STATUS changes to Updating, and we can review the progress of the Operator upgrades on the Cluster Operators tab.


(Optional) Add hooks to perform Ansible tasks on RHEL machines

We can use hooks to run Ansible tasks on the RHEL compute machines during the OpenShift update.


About Ansible hooks for upgrades

When you update OpenShift, we can run custom tasks on your Red Hat Enterprise Linux (RHEL) nodes during specific operations by using hooks. Hooks allow you to provide files that define tasks to run before or after specific update tasks. We can use hooks to validate or modify custom infrastructure when you update the RHEL compute nodes in your OpenShift cluster.

Because the update fails if a hook fails, we must design hooks that are idempotent: they can run multiple times and provide the same results.
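The idempotency requirement can be illustrated in shell terms. This is a sketch with throwaway temporary paths, not real hook behavior:

```shell
# A failed hook halts the update, so hook tasks must be idempotent: running
# them twice must leave the machine in the same state as running them once.
tmpdir=$(mktemp -d)

mkdir -p "$tmpdir/hooks"              # idempotent: the second run is a no-op
mkdir -p "$tmpdir/hooks"

echo "entry" >> "$tmpdir/audit.log"   # not idempotent: the file grows per run
echo "entry" >> "$tmpdir/audit.log"

lines=$(wc -l < "$tmpdir/audit.log")
echo "log lines after two runs: $((lines))"
```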

Hooks have the following important limitations:

  • Hooks do not have a defined or versioned interface. They can use internal openshift-ansible variables, but those variables might be modified or removed in future OpenShift releases.
  • Hooks do not have error handling, so an error in a hook halts the update process. If we get an error, we must address the problem and then start the upgrade again.


Configure the Ansible inventory file to use hooks

You define the hooks to use when you update the Red Hat Enterprise Linux (RHEL) compute, or worker, machines in the hosts inventory file, under the [all:vars] section.

Prerequisites

  • You have access to the machine that you used to add the RHEL compute machines to the cluster. We must have access to the hosts Ansible inventory file that defines the RHEL machines.

Procedure

  1. After you design the hook, create a YAML file that defines the Ansible tasks for it. This file must be a set of tasks and cannot be a playbook, as shown in the following example:
    ---
    # Trivial example forcing an operator to acknowledge the start of an upgrade
    # file=/home/user/openshift-ansible/hooks/pre_compute.yml
    
    - name: note the start of a compute machine update
      debug:
          msg: "Compute machine upgrade of {{ inventory_hostname }} is about to start"
    
    - name: require the user agree to start an upgrade
      pause:
          prompt: "Press Enter to start the compute machine update"

  2. Modify the hosts Ansible inventory file to specify the hook files. The hook files are specified as parameter values in the [all:vars] section, as shown:

    Example hook definitions in an inventory file

    [all:vars]
    openshift_node_pre_upgrade_hook=/home/user/openshift-ansible/hooks/pre_node.yml
    openshift_node_post_upgrade_hook=/home/user/openshift-ansible/hooks/post_node.yml

    To avoid ambiguity in the paths to the hooks, use absolute paths instead of relative paths in their definitions.
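A quick sketch of such a path check follows. The inventory contents here are a hypothetical sample, with the second hook entry deliberately relative to show the failure case:

```shell
# Sketch: flag any hook path in the inventory that is not absolute.
inventory=$(mktemp)
cat > "$inventory" <<'EOF'
[all:vars]
openshift_node_pre_upgrade_hook=/home/user/openshift-ansible/hooks/pre_node.yml
openshift_node_post_upgrade_hook=hooks/post_node.yml
EOF

grep '_hook=' "$inventory" | while IFS='=' read -r name path; do
  case $path in
    /*) echo "ok:  $name" ;;
    *)  echo "bad: $name uses a relative path ($path)" ;;
  esac
done
```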


Available hooks for RHEL compute machines

We can use the following hooks when you update the Red Hat Enterprise Linux (RHEL) compute machines in the OpenShift cluster.

openshift_node_pre_cordon_hook

  • Runs before each node is cordoned.
  • This hook runs against each node in serial.
  • If a task must run against a different host, the task must use delegate_to or local_action.
openshift_node_pre_upgrade_hook

  • Runs after each node is cordoned but before it is updated.
  • This hook runs against each node in serial.
  • If a task must run against a different host, the task must use delegate_to or local_action.
openshift_node_pre_uncordon_hook

  • Runs after each node is updated but before it is uncordoned.
  • This hook runs against each node in serial.
  • If a task must run against a different host, the task must use delegate_to or local_action.
openshift_node_post_upgrade_hook

  • Runs after each node is uncordoned. It is the last node update action.
  • This hook runs against each node in serial.
  • If a task must run against a different host, the task must use delegate_to or local_action.


Update RHEL compute machines in the cluster

After you update the cluster, we must update the Red Hat Enterprise Linux (RHEL) compute machines in the cluster.

Prerequisites

  • You updated the cluster.

    Because the RHEL machines require assets that are generated by the cluster to complete the update process, we must update the cluster before you update the RHEL compute machines in it.

  • You have access to the machine that you used to add the RHEL compute machines to the cluster. We must have access to the hosts Ansible inventory file that defines the RHEL machines and the upgrade playbook.

Procedure

  1. Review your Ansible inventory file at /<path>/inventory/hosts and ensure that all of the compute, or worker, machines are listed in the [workers] section, as shown in the following example:
    [all:vars]
    ansible_user=root
    #ansible_become=True
    
    openshift_kubeconfig_path="~/.kube/config"
    openshift_pull_secret_path="~/pull-secret.txt"
    
    [workers]
    mycluster-worker-0.example.com
    mycluster-worker-1.example.com
    mycluster-worker-2.example.com
    mycluster-worker-3.example.com

    If any RHEL compute machines are not listed in the [workers] section, we must move them to that section.
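To confirm which hosts the playbook will target, we can print the [workers] section of the inventory. This is a sketch against a hypothetical sample file, not your live inventory:

```shell
# Sketch: list every host in the [workers] section of an inventory file.
inventory=$(mktemp)
cat > "$inventory" <<'EOF'
[all:vars]
ansible_user=root

[workers]
mycluster-worker-0.example.com
mycluster-worker-1.example.com
EOF

# Print lines between "[workers]" and the next section header (or EOF),
# dropping the header lines and blanks.
sed -n '/^\[workers\]/,/^\[/{/^\[/d;/^$/d;p}' "$inventory"
```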

  2. Change to the openshift-ansible directory and run the upgrade playbook:
    $ cd /usr/share/ansible/openshift-ansible
    $ ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml 1

    1 For <path>, specify the path to the Ansible inventory file that you created.

