
Machine Management

  1. Machine API overview
  2. Sample YAML for a MachineSet Custom Resource
  3. Create a MachineSet
  4. Manually scaling a MachineSet
  5. Modify a MachineSet
  6. Apply autoscaling to an OpenShift cluster
    1. About the ClusterAutoscaler
    2. About the MachineAutoscaler
    3. Configure the ClusterAutoscaler
    4. Configure the MachineAutoscalers
    5. Additional resources
  7. Create infrastructure MachineSets
    1. Create infrastructure MachineSets for production environments
    2. Move resources to infrastructure MachineSets
  8. Add RHEL compute machines to an OpenShift cluster
    1. About adding RHEL compute nodes to a cluster
    2. System requirements for RHEL compute nodes
    3. Prepare the machine to run the playbook
    4. Prepare a RHEL compute node
    5. Add a RHEL compute machine to the cluster
    6. Approving the CSRs for the machines
    7. Required parameters for the Ansible hosts file
  9. Add more RHEL compute machines to an OpenShift cluster
    1. About adding RHEL compute nodes to a cluster
    2. System requirements for RHEL compute nodes
    3. Prepare a RHEL compute node
    4. Add more RHEL compute machines to the cluster
    5. Approving the CSRs for the machines
    6. Required parameters for the Ansible hosts file
  10. Deploy machine health checks
    1. About MachineHealthChecks
    2. Sample MachineHealthCheck resource
    3. Create a MachineHealthCheck resource
    3. Create a MachineHealthCheck resource


Machine management


Machine API overview

The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift resources.

For clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift 4.1 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure.

The two primary resources are:

    Machines

    A fundamental unit that describes the host for a Node. A machine has a providerSpec, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata.

    MachineSets

    Groups of machines. MachineSets are to machines as ReplicaSets are to Pods. If you need more machines or must scale them down, you change the replicas field on the MachineSet to meet your compute need.
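
    For example, assuming a MachineSet name of <machineset> (a placeholder), the scale operation shown later in this guide is just a change to that replicas field:

        $ oc scale --replicas=3 machineset <machineset> -n openshift-machine-api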

The following custom resources add more capabilities to the cluster:

    MachineAutoscaler

    This resource automatically scales machines in a cloud. We can set the minimum and maximum scaling boundaries for nodes in a specified MachineSet, and the MachineAutoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator.

    ClusterAutoscaler

    This resource is based on the upstream ClusterAutoscaler project. In the OpenShift implementation, it is integrated with the Machine API by extending the MachineSet API. We can set cluster-wide scaling limits for resources such as cores, nodes, memory, GPU, and so on. We can set a priority cutoff so that the cluster does not bring new nodes online for less important pods. We can also set the ScalingPolicy so that nodes can be scaled up but not scaled down.

    MachineHealthCheck

    This resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine.

    In version 4.1, MachineHealthCheck is a Technology Preview feature.

Each MachineSet is scoped to a single availability zone, so the installation program distributes MachineSets across those zones on your behalf. If a zone fails, the machines in the other zones give you a place to rebalance workloads. The autoscaler provides best-effort balancing over the life of a cluster.
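
For example, we can list the per-zone MachineSets that the installation program created (the output varies by cluster and platform; this assumes an AWS cluster):

    $ oc get machinesets -n openshift-machine-api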


Sample YAML for a MachineSet Custom Resource

This sample YAML defines a MachineSet that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <clusterID> is the cluster ID that you set when you provisioned the cluster and <role> is the node label to add.

    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <clusterID> 1
      name: <clusterID>-<role>-us-east-1a 2
      namespace: openshift-machine-api
    spec:
      replicas: 1
      selector:
        matchLabels:
          machine.openshift.io/cluster-api-cluster: <clusterID> 3
          machine.openshift.io/cluster-api-machineset: <clusterID>-<role>-us-east-1a 4
      template:
        metadata:
          labels:
            machine.openshift.io/cluster-api-cluster: <clusterID> 5
            machine.openshift.io/cluster-api-machine-role: <role> 6
            machine.openshift.io/cluster-api-machine-type: <role> 7
            machine.openshift.io/cluster-api-machineset: <clusterID>-<role>-us-east-1a 8
        spec:
          metadata:
            labels:
              node-role.kubernetes.io/<role>: "" 9
          providerSpec:
            value:
              ami:
                id: ami-046fe691f52a953f9 10
              apiVersion: awsproviderconfig.openshift.io/v1beta1
              blockDevices:
                - ebs:
                    iops: 0
                    volumeSize: 120
                    volumeType: gp2
              credentialsSecret:
                name: aws-cloud-credentials
              deviceIndex: 0
              iamInstanceProfile:
                id: <clusterID>-worker-profile 11
              instanceType: m4.large
              kind: AWSMachineProviderConfig
              placement:
                availabilityZone: us-east-1a
                region: us-east-1
              securityGroups:
                - filters:
                    - name: tag:Name
                      values:
                        - <clusterID>-worker-sg 12
              subnet:
                filters:
                  - name: tag:Name
                    values:
                      - <clusterID>-private-us-east-1a 13
              tags:
                - name: kubernetes.io/cluster/<clusterID> 14
                  value: owned
              userDataSecret:
                name: worker-user-data

    1 3 5 11 12 13 14 Specify the cluster ID that you set when you provisioned the cluster.

    2 4 8 Specify the cluster ID and node label.

    6 7 9 Node label to add.

    10 Specify a valid RHCOS AMI for your Amazon Web Services (AWS) zone for the OpenShift nodes.


Create a MachineSet

In addition to the MachineSets created by the installation program, we can create our own MachineSets to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

Procedure

  1. Create a new YAML file that contains the MachineSet Custom Resource sample, as shown, and name it <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

    1. If we are not sure which value to set for a specific field, we can check an existing MachineSet from the cluster.

        $ oc get machinesets -n openshift-machine-api
        
        NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
        agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
        agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
        agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
        agl030519-vplxk-worker-us-east-1d   0         0                             55m
        agl030519-vplxk-worker-us-east-1e   0         0                             55m
        agl030519-vplxk-worker-us-east-1f   0         0                             55m

    2. Check the values of a specific MachineSet:

        $ oc get machineset <machineset_name> -n \
             openshift-machine-api -o yaml
        
        ....
        
        template:
            metadata:
              labels:
                machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1
                machine.openshift.io/cluster-api-machine-role: worker 2
                machine.openshift.io/cluster-api-machine-type: worker
                machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a

        1 The cluster ID.

        2 A default node label.

  2. Create the new MachineSet:

      $ oc create -f <file_name>.yaml

  3. View the list of MachineSets:

      $ oc get machineset -n openshift-machine-api
      
      
      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new MachineSet is available, the DESIRED and CURRENT values match. If the MachineSet is not available, wait a few minutes and run the command again.

  4. After the new MachineSet is available, check the status of the machine and the node that it references:

      $ oc get machine -n openshift-machine-api
      
      status:
        addresses:
        - address: 10.0.133.18
          type: InternalIP
        - address: ""
          type: ExternalDNS
        - address: ip-10-0-133-18.ec2.internal
          type: InternalDNS
        lastUpdated: "2019-05-03T10:38:17Z"
        nodeRef:
          kind: Node
          name: ip-10-0-133-18.ec2.internal
          uid: 71fb8d75-6d8f-11e9-9ff3-0e3f103c7cd8
        providerStatus:
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          conditions:
          - lastProbeTime: "2019-05-03T10:34:31Z"
            lastTransitionTime: "2019-05-03T10:34:31Z"
            message: machine successfully created
            reason: MachineCreationSucceeded
            status: "True"
            type: MachineCreation
          instanceId: i-09ca0701454124294
          instanceState: running
          kind: AWSMachineProviderStatus

  5. View the new node and confirm that the new node has the label that you specified:

      $ oc get node <node_name> --show-labels

    Review the command output and confirm that node-role.kubernetes.io/<your_label> is in the LABELS list.

Changes to a MachineSet are not applied to existing machines that are owned by the MachineSet. For example, labels that you edit or add to an existing MachineSet are not propagated to the machines and Nodes that are already associated with the MachineSet.

Next steps

If you need MachineSets in other availability zones, repeat this process to create more MachineSets.


Manually scaling a MachineSet

We can add or remove an instance of a machine in a MachineSet.

To modify aspects of a MachineSet outside of scaling, see Modify a MachineSet.


Scale a MachineSet manually

If we must add or remove an instance of a machine in a MachineSet, we can manually scale the MachineSet.

Prerequisites

Procedure

  1. View the MachineSets that are in the cluster:

      $ oc get machinesets -n openshift-machine-api

    The MachineSets are listed in the form of <clusterid>-worker-<aws-region-az>.

  2. Scale the MachineSet:

      $ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api

    Or:

      $ oc edit machineset <machineset> -n openshift-machine-api

    We can scale the MachineSet up or down. It takes several minutes for the new machines to be available.

    By default, the OpenShift router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker MachineSet to 1 or 0 unless you first relocate the router pods.


Modify a MachineSet

We can make changes to a MachineSet, such as adding labels, changing the instance type, or changing block storage.

To scale a MachineSet without making other changes, see Manually scale a MachineSet.


Modify a MachineSet

To make changes to a MachineSet, edit the MachineSet YAML. Then, remove all of the machines that are associated with the MachineSet by deleting each machine or by scaling the MachineSet down to 0 replicas. Then, scale the replicas back up to the desired number. Changes that you make to a MachineSet do not affect existing machines.

To scale a MachineSet without making other changes, we do not need to delete the machines.

By default, the OpenShift router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker MachineSet to 0 unless you first relocate the router pods.

Prerequisites

Procedure

  1. Edit the MachineSet:

      $ oc edit machineset <machineset> -n openshift-machine-api

  2. Scale down the MachineSet to 0:

      $ oc scale --replicas=0 machineset <machineset> -n openshift-machine-api

    Or:

      $ oc edit machineset <machineset> -n openshift-machine-api

    Wait for the machines to be removed.

  3. Scale up the MachineSet as needed:

      $ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api

    Or:

      $ oc edit machineset <machineset> -n openshift-machine-api

    Wait for the machines to start. The new Machines contain the changes that you made to the MachineSet.


Apply autoscaling to an OpenShift cluster

Applying autoscaling to an OpenShift cluster involves deploying a ClusterAutoscaler and then deploying MachineAutoscalers for each Machine type in the cluster.


About the ClusterAutoscaler

The ClusterAutoscaler adjusts the size of an OpenShift cluster to meet the current deployment needs. It uses declarative, Kubernetes-style arguments to provide infrastructure management that does not rely on objects of a specific cloud provider. The ClusterAutoscaler has a cluster scope, and is not associated with a particular namespace.

The ClusterAutoscaler increases the size of the cluster when there are pods that failed to schedule on any of the current nodes due to insufficient resources or when another node is necessary to meet deployment needs. The ClusterAutoscaler does not increase the cluster resources beyond the limits specified.

The ClusterAutoscaler decreases the size of the cluster when some nodes are consistently not needed for a significant period, such as when a node has low resource use and all of its important pods can fit on other nodes.

If the following types of pods are present on a node, the ClusterAutoscaler will not remove the node:

  • Pods with restrictive PodDisruptionBudgets (PDBs).
  • Kube-system pods that do not run on the node by default.
  • Kube-system pods that do not have a PDB or have a PDB that is too restrictive.
  • Pods that are not backed by a controller object such as a Deployment, ReplicaSet, or StatefulSet.
  • Pods with local storage.
  • Pods that cannot be moved elsewhere because of a lack of resources, incompatible node selectors or affinity, matching anti-affinity, and so on.
  • Unless they also have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "true" annotation, pods that have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation.
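
    For example, to mark a running pod as safe for the ClusterAutoscaler to evict, we can set that annotation directly (a sketch; <pod_name> and <namespace> are placeholders):

        $ oc annotate pod <pod_name> -n <namespace> cluster-autoscaler.kubernetes.io/safe-to-evict="true"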

If you configure the ClusterAutoscaler, additional usage restrictions apply:

  • Do not modify the nodes that are in autoscaled node groups directly. All nodes within the same node group have the same capacity and labels and run the same system pods.
  • Specify requests for your pods, as shown in the sketch after this list.
  • If we have to prevent pods from being deleted too quickly, configure appropriate PDBs.
  • Confirm that your cloud provider quota is large enough to support the maximum node pools that you configure.
  • Do not run additional node group autoscalers, especially the ones offered by your cloud provider.
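
A minimal sketch of a pod spec that declares the requests mentioned above, so the ClusterAutoscaler can reason about its size (the image and values are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app
    spec:
      containers:
      - name: app
        image: quay.io/example/app:latest
        resources:
          requests:
            cpu: 250m
            memory: 512Mi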

The Horizontal Pod Autoscaler (HPA) and the ClusterAutoscaler modify cluster resources in different ways. The HPA changes the deployment's or ReplicaSet’s number of replicas based on the current CPU load. If the load increases, the HPA creates new replicas, regardless of the amount of resources available to the cluster. If there are not enough resources, the ClusterAutoscaler adds resources so that the HPA-created pods can run. If the load decreases, the HPA stops some replicas. If this action causes some nodes to be underutilized or completely empty, the ClusterAutoscaler deletes the unnecessary nodes.

The ClusterAutoscaler takes pod priorities into account. The Pod Priority and Preemption feature enables scheduling pods based on priorities if the cluster does not have enough resources, but the ClusterAutoscaler ensures that the cluster has resources to run all pods. To honor the intention of both features, the ClusterAutoscaler includes a priority cutoff function. We can use this cutoff to schedule "best-effort" pods, which do not cause the ClusterAutoscaler to increase resources but instead run only when spare resources are available.

Pods with priority lower than the cutoff value do not cause the cluster to scale up or prevent the cluster from scaling down. No new nodes are added to run the pods, and nodes running these pods might be deleted to free resources.
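
For example, a PriorityClass whose value is below the podPriorityThreshold shown in the ClusterAutoscaler sample later in this section (-10) marks pods as "best-effort" for autoscaling purposes. This is a sketch; the name and value are illustrative, and the scheduling API version depends on the Kubernetes version of your cluster:

    apiVersion: scheduling.k8s.io/v1beta1
    kind: PriorityClass
    metadata:
      name: overflow-only
    value: -20
    globalDefault: false
    description: "Pods that use this class do not trigger ClusterAutoscaler scale-up."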


About the MachineAutoscaler

The MachineAutoscaler adjusts the number of Machines in the MachineSets that you deploy in an OpenShift cluster. We can scale both the default worker MachineSet and any other MachineSets that you created. The MachineAutoscaler creates more Machines when the cluster runs out of resources to support more deployments. Any changes to the values in MachineAutoscaler resources, such as the minimum or maximum number of instances, are immediately applied to the MachineSet that they target.

To scale machines, deploy a MachineAutoscaler for the ClusterAutoscaler. The ClusterAutoscaler uses the annotations on MachineSets that the MachineAutoscaler sets to determine the resources that it can scale. If you define a ClusterAutoscaler without also defining MachineAutoscalers, the ClusterAutoscaler will never scale the cluster.


Configure the ClusterAutoscaler

First, deploy the ClusterAutoscaler to manage automatic resource scaling in the OpenShift cluster.

Because the ClusterAutoscaler is scoped to the entire cluster, we can make only one ClusterAutoscaler for the cluster.


ClusterAutoscaler resource definition

This ClusterAutoscaler resource definition shows the parameters and sample values for the ClusterAutoscaler.

    apiVersion: "autoscaling.openshift.io/v1"
    kind: "ClusterAutoscaler"
    metadata:
      name: "default"
    spec:
      podPriorityThreshold: -10 1
      resourceLimits:
        maxNodesTotal: 24 2
        cores:
          min: 8 3
          max: 128 4
        memory:
          min: 4 5
          max: 256 6
        gpus:
          - type: nvidia.com/gpu 7
            min: 0 8
            max: 16 9
          - type: amd.com/gpu 10
            min: 0 11
            max: 4 12
      scaleDown: 13
        enabled: true 14
        delayAfterAdd: 10m 15
        delayAfterDelete: 5m 16
        delayAfterFailure: 30s 17
        unneededTime: 60s 18

    1 Priority that a pod must exceed to cause the ClusterAutoscaler to deploy additional nodes. Enter a 32-bit integer value. The podPriorityThreshold value is compared to the value of the PriorityClass that you assign to each pod.

    2 Maximum number of nodes to deploy.

    3 Minimum number of cores to deploy.

    4 Maximum number of cores to deploy.

    5 Minimum amount of memory, in GiB, per node.

    6 Maximum amount of memory, in GiB, per node.

    7 10 Optionally, specify the type of GPU node to deploy. Only nvidia.com/gpu and amd.com/gpu are valid types.

    8 11 Minimum number of GPUs to deploy.

    9 12 Maximum number of GPUs to deploy.

    13 In this section, we can specify the period to wait for each action by using any valid ParseDuration interval, including ns, us, ms, s, m, and h.

    14 Specify whether the ClusterAutoscaler can remove unnecessary nodes.

    15 Optionally, specify the period to wait before deleting a node after a node has recently been added. If we do not specify a value, the default value of 10m is used.

    16 Period to wait before deleting a node after a node has recently been deleted. If we do not specify a value, the default value of 10s is used.

    17 Period to wait before deleting a node after a scale down failure occurred. If we do not specify a value, the default value of 3m is used.

    18 Period before an unnecessary node is eligible for deletion. If we do not specify a value, the default value of 10m is used.


Deploy the ClusterAutoscaler

To deploy the ClusterAutoscaler, create an instance of the ClusterAutoscaler resource.

Procedure

  1. Create a YAML file for the ClusterAutoscaler resource that contains the customized resource definition.

  2. Create the resource in the cluster:

      $ oc create -f <filename>.yaml 1

      1 <filename> is the name of the resource file that you customized.

Next steps

  • After configuring the ClusterAutoscaler, configure at least one MachineAutoscaler.


Configure the MachineAutoscalers

After deploying the ClusterAutoscaler, deploy MachineAutoscaler resources that reference the MachineSets that are used to scale the cluster.

Deploy at least one MachineAutoscaler resource after you deploy the ClusterAutoscaler resource.

Configure separate resources for each MachineSet. Remember that MachineSets are different in each AWS region, so consider whether we want to enable machine scaling in multiple regions.



MachineAutoscaler resource definition

This MachineAutoscaler resource definition shows the parameters and sample values for the MachineAutoscaler.

    apiVersion: "autoscaling.openshift.io/v1beta1"
    kind: "MachineAutoscaler"
    metadata:
      name: "worker-us-east-1a" 1
      namespace: "openshift-machine-api"
    spec:
      minReplicas: 1 2
      maxReplicas: 12 3
      scaleTargetRef: 4
        apiVersion: machine.openshift.io/v1beta1
        kind: MachineSet 5
        name: worker-us-east-1a 6

    1 Specify the MachineAutoscaler name. To make it easier to identify which MachineSet this MachineAutoscaler scales, specify or include the name of the MachineSet to scale. The MachineSet name takes the following form: <clusterid>-<machineset>-<aws-region-az>

    2 Minimum number of Machines of the specified type to deploy in the specified AWS zone.

    3 Maximum number of Machines of the specified type to deploy in the specified AWS zone.

    4 In this section, provide values that describe the existing MachineSet to scale.

    5 The kind parameter value is always MachineSet.

    6 The name value must match the name of an existing MachineSet, as shown in the metadata.name parameter value.
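
    For example, to enable scaling in a second availability zone, we can create another MachineAutoscaler that targets that zone's MachineSet (a sketch; the names are illustrative and must match an existing MachineSet in your cluster):

        apiVersion: "autoscaling.openshift.io/v1beta1"
        kind: "MachineAutoscaler"
        metadata:
          name: "worker-us-east-1b"
          namespace: "openshift-machine-api"
        spec:
          minReplicas: 1
          maxReplicas: 12
          scaleTargetRef:
            apiVersion: machine.openshift.io/v1beta1
            kind: MachineSet
            name: worker-us-east-1b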


Deploy the MachineAutoscaler

To deploy the MachineAutoscaler, create an instance of the MachineAutoscaler resource.

Procedure

  1. Create a YAML file for the MachineAutoscaler resource that contains the customized resource definition.

  2. Create the resource in the cluster:

      $ oc create -f <filename>.yaml 1

      1 <filename> is the name of the resource file that you customized.


Additional resources


Create infrastructure MachineSets

We can create a MachineSet to host only infrastructure components. You apply specific Kubernetes labels to these Machines and then update the infrastructure components to run on only those Machines. These infrastructure nodes are not counted toward the total number of subscriptions required to run the environment.

Unlike earlier versions of OpenShift, we cannot move the infrastructure components to the master Machines. To move the components, create a new MachineSet.


OpenShift infrastructure components

The following OpenShift components are infrastructure components:

Any node that runs any other container, pod, or component is a worker node that your subscription must cover.


Create infrastructure MachineSets for production environments

In a production deployment, deploy at least three MachineSets to hold infrastructure components. Both the logging aggregation solution and the service mesh deploy Elasticsearch, and Elasticsearch requires three instances that are installed on different nodes. For high availability, deploy these nodes to different availability zones. Because you need a different MachineSet for each availability zone, create at least three MachineSets.


Sample YAML for a MachineSet Custom Resource

This sample YAML defines a MachineSet that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <clusterID> is the cluster ID that you set when you provisioned the cluster and <role> is the node label to add.

    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <clusterID> 1
      name: <clusterID>-<role>-us-east-1a 2
      namespace: openshift-machine-api
    spec:
      replicas: 1
      selector:
        matchLabels:
          machine.openshift.io/cluster-api-cluster: <clusterID> 3
          machine.openshift.io/cluster-api-machineset: <clusterID>-<role>-us-east-1a 4
      template:
        metadata:
          labels:
            machine.openshift.io/cluster-api-cluster: <clusterID> 5
            machine.openshift.io/cluster-api-machine-role: <role> 6
            machine.openshift.io/cluster-api-machine-type: <role> 7
            machine.openshift.io/cluster-api-machineset: <clusterID>-<role>-us-east-1a 8
        spec:
          metadata:
            labels:
              node-role.kubernetes.io/<role>: "" 9
          providerSpec:
            value:
              ami:
                id: ami-046fe691f52a953f9 10
              apiVersion: awsproviderconfig.openshift.io/v1beta1
              blockDevices:
                - ebs:
                    iops: 0
                    volumeSize: 120
                    volumeType: gp2
              credentialsSecret:
                name: aws-cloud-credentials
              deviceIndex: 0
              iamInstanceProfile:
                id: <clusterID>-worker-profile 11
              instanceType: m4.large
              kind: AWSMachineProviderConfig
              placement:
                availabilityZone: us-east-1a
                region: us-east-1
              securityGroups:
                - filters:
                    - name: tag:Name
                      values:
                        - <clusterID>-worker-sg 12
              subnet:
                filters:
                  - name: tag:Name
                    values:
                      - <clusterID>-private-us-east-1a 13
              tags:
                - name: kubernetes.io/cluster/<clusterID> 14
                  value: owned
              userDataSecret:
                name: worker-user-data

    1 3 5 11 12 13 14 Specify the cluster ID that you set when you provisioned the cluster.

    2 4 8 Specify the cluster ID and node label.

    6 7 9 Node label to add.

    10 Specify a valid RHCOS AMI for your Amazon Web Services (AWS) zone for the OpenShift nodes.


Create a MachineSet

In addition to the MachineSets created by the installation program, we can create our own MachineSets to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

Procedure

  1. Create a new YAML file that contains the MachineSet Custom Resource sample, as shown, and name it <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

    1. If we are not sure which value to set for a specific field, we can check an existing MachineSet from the cluster.

        $ oc get machinesets -n openshift-machine-api
        
        NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
        agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
        agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
        agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
        agl030519-vplxk-worker-us-east-1d   0         0                             55m
        agl030519-vplxk-worker-us-east-1e   0         0                             55m
        agl030519-vplxk-worker-us-east-1f   0         0                             55m

    2. Check the values of a specific MachineSet:

        $ oc get machineset <machineset_name> -n \
             openshift-machine-api -o yaml
        
        ....
        
        template:
            metadata:
              labels:
                machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1
                machine.openshift.io/cluster-api-machine-role: worker 2
                machine.openshift.io/cluster-api-machine-type: worker
                machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a

        1 The cluster ID.

        2 A default node label.

  2. Create the new MachineSet:

      $ oc create -f <file_name>.yaml

  3. View the list of MachineSets:

      $ oc get machineset -n openshift-machine-api
      
      
      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new MachineSet is available, the DESIRED and CURRENT values match. If the MachineSet is not available, wait a few minutes and run the command again.

  4. After the new MachineSet is available, check the status of the machine and the node that it references:

      $ oc get machine -n openshift-machine-api
      
      status:
        addresses:
        - address: 10.0.133.18
          type: InternalIP
        - address: ""
          type: ExternalDNS
        - address: ip-10-0-133-18.ec2.internal
          type: InternalDNS
        lastUpdated: "2019-05-03T10:38:17Z"
        nodeRef:
          kind: Node
          name: ip-10-0-133-18.ec2.internal
          uid: 71fb8d75-6d8f-11e9-9ff3-0e3f103c7cd8
        providerStatus:
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          conditions:
          - lastProbeTime: "2019-05-03T10:34:31Z"
            lastTransitionTime: "2019-05-03T10:34:31Z"
            message: machine successfully created
            reason: MachineCreationSucceeded
            status: "True"
            type: MachineCreation
          instanceId: i-09ca0701454124294
          instanceState: running
          kind: AWSMachineProviderStatus

  5. View the new node and confirm that the new node has the label that you specified:

      $ oc get node <node_name> --show-labels

    Review the command output and confirm that node-role.kubernetes.io/<your_label> is in the LABELS list.

Changes to a MachineSet are not applied to existing machines that are owned by the MachineSet. For example, labels that you edit or add to an existing MachineSet are not propagated to the machines and Nodes that are already associated with the MachineSet.

Next steps

If you need MachineSets in other availability zones, repeat this process to create more MachineSets.


Move resources to infrastructure MachineSets

Some of the infrastructure resources are deployed in the cluster by default. We can move them to the infrastructure MachineSets that you created.


Move the router

We can deploy the router Pod to a different MachineSet. By default, the Pod is deployed to a worker node.

Prerequisites

  • Configure additional MachineSets in the OpenShift cluster.

Procedure

  1. View the IngressController Custom Resource for the router Operator:

      $ oc get ingresscontroller default -n openshift-ingress-operator -o yaml

    The command output resembles the following text:

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        creationTimestamp: 2019-04-18T12:35:39Z
        finalizers:
        - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
        generation: 1
        name: default
        namespace: openshift-ingress-operator
        resourceVersion: "11341"
        selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
        uid: 79509e05-61d6-11e9-bc55-02ce4781844a
      spec: {}
      status:
        availableReplicas: 2
        conditions:
        - lastTransitionTime: 2019-04-18T12:36:15Z
          status: "True"
          type: Available
        domain: apps.<cluster>.example.com
        endpointPublishingStrategy:
          type: LoadBalancerService
        selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default

  2. Edit the ingresscontroller resource and change the nodeSelector to use the infra label:

      $ oc edit ingresscontroller default -n openshift-ingress-operator -o yaml

    Add the nodeSelector stanza that references the infra label to the spec section, as shown:

      spec:
        nodePlacement:
          nodeSelector:
            matchLabels:
              node-role.kubernetes.io/infra: ""

  3. Confirm that the router pod is running on the infra node.

    1. View the list of router pods and note the node name of the running pod:

        $ oc get pod -n openshift-ingress -o wide
        
        NAME                              READY     STATUS        RESTARTS   AGE       IP           NODE                           NOMINATED NODE   READINESS GATES
        router-default-86798b4b5d-bdlvd   1/1      Running       0          28s       10.130.2.4   ip-10-0-217-226.ec2.internal   <none>           <none>
        router-default-955d875f4-255g8    0/1      Terminating   0          19h       10.129.2.4   ip-10-0-148-172.ec2.internal   <none>           <none>

      In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.

    2. View the node status of the running pod:

        $ oc get node <node_name> 1
        
        NAME                           STATUS    ROLES          AGE       VERSION
        ip-10-0-217-226.ec2.internal   Ready     infra,worker   17h       v1.11.0+406fc897d8

        1 Specify the <node_name> obtained from the pod list.

      Because the role list includes infra, the pod is running on the correct node.


Move the default registry

We can configure the registry Operator to deploy its pods to different nodes.

Prerequisites

  • Configure additional MachineSets in the OpenShift cluster.

Procedure

  1. View the config/cluster object for the image registry Operator:

      $ oc get config/cluster -o yaml

    The output resembles the following text:

      apiVersion: imageregistry.operator.openshift.io/v1
      kind: Config
      metadata:
        creationTimestamp: 2019-02-05T13:52:05Z
        finalizers:
        - imageregistry.operator.openshift.io/finalizer
        generation: 1
        name: cluster
        resourceVersion: "56174"
        selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
        uid: 36fd3724-294d-11e9-a524-12ffeee2931b
      spec:
        httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623
        logging: 2
        managementState: Managed
        proxy: {}
        replicas: 1
        requests:
          read: {}
          write: {}
        storage:
          s3:
            bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c
            region: us-east-1
      status:
      ...

  2. Edit the config/cluster object:

      $ oc edit config/cluster

  3. Add the following lines of text to the spec section of the object:

      nodeSelector:
        node-role.kubernetes.io/infra: ""

    After you save and exit, you can see the registry pod moving to the infrastructure node.
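
    To confirm the move, we can watch the registry pod and check which node it is scheduled on (this assumes the default openshift-image-registry namespace):

      $ oc get pods -n openshift-image-registry -o wide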


Move the monitoring solution

By default, the Prometheus Cluster Monitoring stack, which contains Prometheus, Grafana, and AlertManager, is deployed to provide cluster monitoring. It is managed by the Cluster Monitoring Operator. To move its components to different machines, create and apply a custom ConfigMap.

Procedure

  1. Save the following ConfigMap definition as the cluster-monitoring-configmap.yaml file:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |+
          alertmanagerMain:
            nodeSelector:
              node-role.kubernetes.io/infra: ""
          prometheusK8s:
            nodeSelector:
              node-role.kubernetes.io/infra: ""
          prometheusOperator:
            nodeSelector:
              node-role.kubernetes.io/infra: ""
          grafana:
            nodeSelector:
              node-role.kubernetes.io/infra: ""
          k8sPrometheusAdapter:
            nodeSelector:
              node-role.kubernetes.io/infra: ""
          kubeStateMetrics:
            nodeSelector:
              node-role.kubernetes.io/infra: ""
          telemeterClient:
            nodeSelector:
              node-role.kubernetes.io/infra: ""

    When you apply this ConfigMap, the components of the monitoring stack redeploy to infrastructure nodes.

  2. Apply the new ConfigMap:

      $ oc create -f cluster-monitoring-configmap.yaml

  3. Watch the monitoring Pods move to the new machines:

      $ watch 'oc get pod -n openshift-monitoring -o wide'


Move the cluster logging resources

We can configure the Cluster Logging Operator to deploy the pods for any or all of the Cluster Logging components (Elasticsearch, Kibana, and Curator) to different nodes. We cannot move the Cluster Logging Operator pod from its installed location.

For example, we can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.

You should set your MachineSet to use at least 6 replicas.

Prerequisites

  • Cluster logging and Elasticsearch must be installed. These features are not installed by default.

Procedure

  1. Edit the Cluster Logging Custom Resource in the openshift-logging project:

      $ oc edit ClusterLogging instance

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogging

      ....

      spec:
        collection:
          logs:
            fluentd:
              resources: null
            rsyslog:
              resources: null
            type: fluentd
        curation:
          curator:
            nodeSelector: 1
              node-role.kubernetes.io/infra: ''
            resources: null
            schedule: 30 3 * * *
          type: curator
        logStore:
          elasticsearch:
            nodeCount: 3
            nodeSelector: 2
              node-role.kubernetes.io/infra: ''
            redundancyPolicy: SingleRedundancy
            resources:
              limits:
                cpu: 500m
                memory: 4Gi
              requests:
                cpu: 500m
                memory: 4Gi
            storage: {}
          type: elasticsearch
        managementState: Managed
        visualization:
          kibana:
            nodeSelector: 3
              node-role.kubernetes.io/infra: '' 4
            proxy:
              resources: null
            replicas: 1
            resources: null
          type: kibana

      ....

      1 2 3 4 Add a nodeSelector parameter with the appropriate value to the component we want to move. We can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node.
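
      After the Operator reconciles the change, we can verify where the logging pods were rescheduled in the openshift-logging project:

        $ oc get pod -n openshift-logging -o wide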


Add RHEL compute machines to an OpenShift cluster

In OpenShift, we can add Red Hat Enterprise Linux (RHEL) compute, or worker, machines to a user-provisioned infrastructure cluster. We can use RHEL as the operating system only on compute machines.


About adding RHEL compute nodes to a cluster

In OpenShift 4.1, we have the option of using Red Hat Enterprise Linux (RHEL) machines as compute, or worker, machines in the cluster if we use a user-provisioned infrastructure installation. Use RHCOS machines for the control plane, or master, machines in the cluster.

As with all installations that use user-provisioned infrastructure, if we choose to use RHEL compute machines in the cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks.

Because removing OpenShift from a machine in the cluster requires destroying the operating system, use dedicated hardware for any RHEL machines that you add to the cluster.

Swap memory is disabled on all RHEL machines that you add to the OpenShift cluster. We cannot enable swap memory on these machines.

Add RHEL compute machines to the cluster after initializing the control plane.


System requirements for RHEL compute nodes

The Red Hat Enterprise Linux (RHEL) compute, or worker, machine hosts in the OpenShift environment must meet the following minimum hardware specifications and system-level requirements.

  • You must have an active OpenShift subscription on your Red Hat account. If we do not, contact your sales representative for more information.
  • Production environments must provide compute machines to support your expected workloads. As an OpenShift cluster administrator, we must calculate the expected workload and add about 10 percent for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity.

  • Each system must meet the following hardware requirements:

    • Physical or virtual system, or an instance running on a public or private IaaS.

    • Base OS: RHEL 7.6 with "Minimal" installation option.

      Only RHEL 7.6 is supported in OpenShift 4.1. Do not upgrade your compute machines to RHEL 8.

    • NetworkManager 1.0 or later.
    • 1 vCPU.
    • Minimum 8 GB RAM.
    • Minimum 15 GB hard disk space for the file system containing /var/.
    • Minimum 1 GB hard disk space for the file system containing /usr/local/bin/.
    • Minimum 1 GB hard disk space for the file system containing the system's temporary directory. The system's temporary directory is determined according to the rules defined in the tempfile module in Python's standard library. A quick way to check these disk space minimums is shown after this list.
  • Each system must meet any additional requirements for your system provider. For example, if we installed the cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=true attribute must be set.
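
A quick way to check the disk space minimums listed above on a candidate host (a sketch; adjust the last path if your system's temporary directory is not /tmp):

    # df -h /var /usr/local/bin /tmp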


Prepare the machine to run the playbook

Before we can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift 4.1 cluster, we must prepare a machine to run the playbook from. This machine is not part of the cluster but must be able to access it.

Prerequisites

  • Install the OpenShift Command-line Interface (CLI) on the machine that you run the playbook on.
  • Log in as a user with cluster-admin permission.

Procedure

  1. Ensure that the kubeconfig file for the cluster and the installation program used to install the cluster are on the machine. One way to accomplish this is to use the same machine used to install the cluster.
  2. Configure the machine to access all of the RHEL hosts that you plan to use as compute machines. We can use any method that our company allows, including a bastion with an SSH proxy or a VPN.

  3. Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts.

    If you use SSH key-based authentication, manage the key with an SSH agent.

  4. If we have not already done so, register the machine with RHSM and attach a pool with an OpenShift subscription to it:

    1. Register the machine with RHSM:

        # subscription-manager register --username=<user_name> --password=<password>

    2. Pull the latest subscription data from RHSM:

        # subscription-manager refresh

    3. List the available subscriptions:

        # subscription-manager list --available --matches '*OpenShift*'

    4. In the output for the previous command, find the pool ID for an OpenShift subscription and attach it:

        # subscription-manager attach --pool=<pool_id>

  5. Enable the repositories required by OpenShift 4.1:

      # subscription-manager repos \
          --enable="rhel-7-server-rpms" \
          --enable="rhel-7-server-extras-rpms" \
          --enable="rhel-7-server-ansible-2.7-rpms" \
          --enable="rhel-7-server-ose-4.1-rpms"

  6. Install the required packages, including openshift-ansible:

      # yum install openshift-ansible openshift-clients jq

    The openshift-ansible package provides installation program utilities and pulls in other packages that you need to add a RHEL compute node to the cluster, such as Ansible, playbooks, and related configuration files. The openshift-clients package provides the oc CLI, and the jq package improves the display of JSON output on your command line.


Prepare a RHEL compute node

Before you add a Red Hat Enterprise Linux (RHEL) machine to the OpenShift cluster, we must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift subscription, and enable the required repositories.

  1. On each host, register with RHSM:

      # subscription-manager register --username=<user_name> --password=<password>

  2. Pull the latest subscription data from RHSM:

      # subscription-manager refresh

  3. List the available subscriptions:

      # subscription-manager list --available --matches '*OpenShift*'

  4. In the output for the previous command, find the pool ID for an OpenShift subscription and attach it:

      # subscription-manager attach --pool=<pool_id>

  5. Disable all yum repositories:

    1. Disable all the enabled RHSM repositories:

        # subscription-manager repos --disable="*"

    2. List the remaining yum repositories and note their names under repo id, if any:

          # yum repolist

    3. Use yum-config-manager to disable the remaining yum repositories:

          # yum-config-manager --disable <repo_id>

      Alternatively, disable all repositories:

        # yum-config-manager --disable \*

      Note that this might take a few minutes if we have a large number of available repositories.

  6. Enable only the repositories required by OpenShift 4.1:

        # subscription-manager repos \
            --enable="rhel-7-server-rpms" \
            --enable="rhel-7-server-extras-rpms" \
            --enable="rhel-7-server-ose-4.1-rpms"


Add a RHEL compute machine to the cluster

We can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift 4.1 cluster.

Prerequisites

  • You installed the required packages on the machine that you run the playbook on.
  • You prepared the RHEL hosts for installation.

Procedure

Perform the following steps on the machine that you prepared to run the playbook:

    1. Extract the pull secret for the cluster:

        $ oc -n openshift-config get -o jsonpath='{.data.\.dockerconfigjson}' secret pull-secret | base64 -d | jq .

    2. Save the pull secret in a file named pull-secret.txt.

    3. Create an Ansible inventory file named /<path>/inventory/hosts that defines your compute machine hosts and required variables:

        [all:vars]
        ansible_user=root 1
        #ansible_become=True 2
        
        openshift_kubeconfig_path="~/.kube/config" 3
        openshift_pull_secret_path="~/pull-secret.txt" 4
        
        [new_workers] 5
        mycluster-worker-0.example.com
        mycluster-worker-1.example.com

        1 Specify the user name that runs the Ansible tasks on the remote compute machines.

        2 If we do not specify root for the ansible_user, set ansible_become to True and assign the user sudo permissions.

        3 Path to the kubeconfig file for the cluster.

        4 Path to the file containing the pull secret for the image registry for the cluster.

        5 List each RHEL machine to add to the cluster. Provide the fully-qualified domain name for each host. This name is the host name that the cluster uses to access the machine, so set the correct public or private name to access the machine.

    4. Run the playbook:

        $ cd /usr/share/ansible/openshift-ansible
        $ ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1

        1 For <path>, specify the path to the Ansible inventory file that you created.


Approving the CSRs for the machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself.

    Prerequisites

    • You added machines to the cluster.
    • Install the jq package.

    Procedure

    1. Confirm that the cluster recognizes the machines:

        $ oc get nodes
        
        NAME      STATUS    ROLES   AGE  VERSION
        master-0  Ready     master  63m  v1.13.4+b626c2fe1
        master-1  Ready     master  63m  v1.13.4+b626c2fe1
        master-2  Ready     master  64m  v1.13.4+b626c2fe1
        worker-0  NotReady  worker  76s  v1.13.4+b626c2fe1
        worker-1  NotReady  worker  70s  v1.13.4+b626c2fe1

      The output lists all of the machines that you created.

    2. Review the pending certificate signing requests (CSRs) and ensure that you see a client and server request with Pending or Approved status for each machine that you added to the cluster:

        $ oc get csr
        
        NAME        AGE     REQUESTOR                                                                   CONDITION
        csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending 1
        csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
        csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending 2
        csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
        ...

        1 A client request CSR.

        2 A server request CSR.

      In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

    3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for the cluster machines:

      Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If we do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. Approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager. Implement a method of automatically approving the kubelet serving certificate requests.

      • To approve them individually, run for each valid CSR:

          $ oc adm certificate approve <csr_name> 1

          1 <csr_name> is the name of a CSR from the list of current CSRs.

      • If all the CSRs are valid, approve them all:

          $ oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
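
      After the CSRs are approved, the new machines move to the Ready status. We can confirm this by listing the nodes again, as earlier in this procedure:

          $ oc get nodes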


Required parameters for the Ansible hosts file

    Define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to the cluster.

    ansible_user

        The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then manage the key with an SSH agent. Values: a user name on the system. The default value is root.

    ansible_become

        If the value of ansible_user is not root, set ansible_become to True, and the user that you specify as the ansible_user must be configured for passwordless sudo access. Values: True. If the value is not True, do not specify and define this parameter.

    openshift_kubeconfig_path

        A path to a local directory that contains the kubeconfig file for the cluster. Values: the path and name of the configuration file.

    openshift_pull_secret_path

        The path to the text file that contains the pull secret for the image registry for the cluster. Use the pull secret that you obtained from the OpenShift Infrastructure Providers page. This pull secret allows us to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift components. Values: the path and name of the pull secret file.


Remove RHCOS compute machines from a cluster

    After you add the Red Hat Enterprise Linux (RHEL) compute machines to the cluster, we can remove the Red Hat Enterprise Linux CoreOS (RHCOS) compute machines.

    Prerequisites

    • You have added RHEL compute machines to the cluster.

    Procedure

    1. View the list of machines and record the node names of the RHCOS compute machines:

        $ oc get nodes -o wide

    2. For each RHCOS compute machine, delete the node:

      1. Mark the node as unschedulable by running the oc adm cordon command:

          $ oc adm cordon <node_name> 1

          1 Node name of one of the RHCOS compute machines.

      2. Drain all the pods from the node:

          $ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets 1

          1 Node name of the RHCOS compute machine that you isolated.

      3. Delete the node:

          $ oc delete nodes <node_name> 1

          1 Node name of the RHCOS compute machine that you drained.

    3. Review the list of compute machines to ensure that only the RHEL nodes remain:

        $ oc get nodes -o wide
    4. Remove the RHCOS machines from the load balancer for the cluster's compute machines. We can delete the Virtual Machines or reimage the physical hardware for the RHCOS compute machines.


Add more RHEL compute machines to an OpenShift cluster

    If the OpenShift cluster already includes Red Hat Enterprise Linux (RHEL) compute, or worker, machines, we can add more RHEL compute machines to it.


About adding RHEL compute nodes to a cluster

    In OpenShift 4.1, we have the option of using Red Hat Enterprise Linux (RHEL) machines as compute, or worker, machines in the cluster if we use a user-provisioned infrastructure installation. Use RHCOS machines for the control plane, or master, machines in the cluster.

    As with all installations that use user-provisioned infrastructure, if we choose to use RHEL compute machines in the cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks.

    Because removing OpenShift from a machine in the cluster requires destroying the operating system, use dedicated hardware for any RHEL machines that you add to the cluster.

    Swap memory is disabled on all RHEL machines that you add to the OpenShift cluster. We cannot enable swap memory on these machines.

    We must add RHEL compute machines to the cluster after you initialize the control plane.


System requirements for RHEL compute nodes

    The Red Hat Enterprise Linux (RHEL) compute, or worker, machine hosts in the OpenShift environment must meet the following minimum hardware specifications and system-level requirements.

    • We must have an active OpenShift subscription on your Red Hat account. If we do not, contact your sales representative for more information.
    • Production environments must provide compute machines to support your expected workloads. As an OpenShift cluster administrator, we must calculate the expected workload and add about 10 percent for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity.

    • Each system must meet the following hardware requirements:

      • Physical or virtual system, or an instance running on a public or private IaaS.

      • Base OS: RHEL 7.6 with "Minimal" installation option.

        Only RHEL 7.6 is supported in OpenShift 4.1. We must not upgrade your compute machines to RHEL 8.

      • NetworkManager 1.0 or later.
      • 1 vCPU.
      • Minimum 8 GB RAM.
      • Minimum 15 GB hard disk space for the file system containing /var/.
      • Minimum 1 GB hard disk space for the file system containing /usr/local/bin/.
      • Minimum 1 GB hard disk space for the file system containing the system's temporary directory. The system’s temporary directory is determined according to the rules defined in the tempfile module in Python’s standard library.
    • Each system must meet any additional requirements for your system provider. For example, if we installed the cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=true attribute must be set.


Prepare a RHEL compute node

    Before you add a Red Hat Enterprise Linux (RHEL) machine to the OpenShift cluster, we must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift subscription, and enable the required repositories.

    1. On each host, register with RHSM:

        # subscription-manager register --username=<user_name> --password=<password>

    2. Pull the latest subscription data from RHSM:

        # subscription-manager refresh

    3. List the available subscriptions:

        # subscription-manager list --available --matches '*OpenShift*'

    4. In the output for the previous command, find the pool ID for an OpenShift subscription and attach it:

        # subscription-manager attach --pool=<pool_id>
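
      If many subscriptions are listed, one way to extract a pool ID from the output is the following sketch, which assumes the default English output format of subscription-manager:

          # subscription-manager list --available --matches '*OpenShift*' | awk '/Pool ID:/ {print $3}'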

    5. Disable all yum repositories:

      1. Disable all the enabled RHSM repositories:

          # subscription-manager repos --disable="*"

      2. List the remaining yum repositories and note their names under repo id, if any:

          # yum repolist

      3. Use yum-config-manager to disable the remaining yum repositories:

          # yum-config-manager --disable <repo_id>

        Alternatively, disable all repositories:

          # yum-config-manager --disable \*

        Note that this might take a few minutes if we have a large number of available repositories.

    6. Enable only the repositories required by OpenShift 4.1:

        # subscription-manager repos \
            --enable="rhel-7-server-rpms" \
            --enable="rhel-7-server-extras-rpms" \
            --enable="rhel-7-server-ose-4.1-rpms"


    Add more RHEL compute machines to the cluster

    We can add more compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift 4.1 cluster.

    Prerequisites

    • Your OpenShift cluster already contains RHEL compute nodes.
    • The hosts and pull-secret.txt files used to add the first RHEL compute machines to the cluster are on the machine that we use to run the playbook.
    • The machine that you run the playbook on must be able to access all of the RHEL hosts. We can use any method that our company allows, including a bastion with an SSH proxy or a VPN.
    • The kubeconfig file for the cluster and the installation program used to install the cluster are on the machine that we use to run the playbook.
    • We must prepare the RHEL hosts for installation.
    • Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts.
    • If you use SSH key-based authentication, manage the key with an SSH agent (see the sketch after this list).
    • Install the OpenShift Command-line Interface (CLI) on the machine that you run the playbook on.
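
    For the SSH key-based authentication prerequisite, a minimal ssh-agent setup on the machine that runs the playbook might look like the following sketch; the key path is an assumption and might differ in your environment:

        $ eval "$(ssh-agent -s)"
        $ ssh-add ~/.ssh/id_rsa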

    Procedure

    1. Open the Ansible inventory file at /<path>/inventory/hosts that defines your compute machine hosts and required variables.
    2. Rename the [new_workers] section of the file to [workers].

    3. Add a [new_workers] section to the file and define the fully-qualified domain names for each new host. The file resembles the following example:

        [all:vars]
        ansible_user=root
        #ansible_become=True
        
        openshift_kubeconfig_path="~/.kube/config"
        openshift_pull_secret_path="~/pull-secret.txt"
        
        [workers]
        mycluster-worker-0.example.com
        mycluster-worker-1.example.com
        
        [new_workers]
        mycluster-worker-2.example.com
        mycluster-worker-3.example.com

      In this example, the mycluster-worker-0.example.com and mycluster-worker-1.example.com machines are in the cluster and you add the mycluster-worker-2.example.com and mycluster-worker-3.example.com machines.

    4. Run the scaleup playbook:

        $ cd /usr/share/ansible/openshift-ansible
        $ ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1

        1 For <path>, specify the path to the Ansible inventory file that we created.


    Approving the CSRs for the machines

    When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. We must confirm that these CSRs are approved or, if necessary, approve them ourselves.

    Prerequisites

    • You added machines to the cluster.
    • Install the jq package.

    Procedure

    1. Confirm that the cluster recognizes the machines:

        $ oc get nodes
        
        NAME      STATUS    ROLES   AGE  VERSION
        master-0  Ready     master  63m  v1.13.4+b626c2fe1
        master-1  Ready     master  63m  v1.13.4+b626c2fe1
        master-2  Ready     master  64m  v1.13.4+b626c2fe1
        worker-0  NotReady  worker  76s  v1.13.4+b626c2fe1
        worker-1  NotReady  worker  70s  v1.13.4+b626c2fe1

      The output lists all of the machines that we created.

    2. Review the pending certificate signing requests (CSRs) and ensure that you see a client and server request with Pending or Approved status for each machine that you added to the cluster:

        $ oc get csr
        
        NAME        AGE     REQUESTOR                                                                   CONDITION
        csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending 1
        csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
        csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending 2
        csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
        ...

        1 A client request CSR.

        2 A server request CSR.

      In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

    3. If the CSRs were not approved automatically, wait until all of the pending CSRs for the machines you added are in Pending status, and then approve the CSRs for the cluster machines:

      Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If we do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. Approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager. Implement a method of automatically approving the kubelet serving certificate requests.

      • To approve them individually, run for each valid CSR:

          $ oc adm certificate approve <csr_name> 1

          1 <csr_name> is the name of a CSR from the list of current CSRs.

      • If all the CSRs are valid, approve them all:

          $ oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
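
      Because the server CSRs appear only after the client CSRs are approved and the kubelet on each new node starts, we might need to repeat the approval. The following loop is a sketch that reuses the jq filter above; it is not part of the documented procedure, and the iteration count and sleep interval are arbitrary:

          $ for i in $(seq 1 12); do
              oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' \
                | xargs --no-run-if-empty oc adm certificate approve
              sleep 10
            done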


    Required parameters for the Ansible hosts file

    We must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to the cluster.

    ansible_user

        The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, manage the key with an SSH agent. Values: a user name on the system. The default value is root.

    ansible_become

        If the value of ansible_user is not root, set ansible_become to True, and the user specified as ansible_user must be configured for passwordless sudo access. Values: True. If the value is not True, do not specify or define this parameter.

    openshift_kubeconfig_path

        A path to a local directory containing the kubeconfig file for the cluster. Values: the path and name of the configuration file.

    openshift_pull_secret_path

        A path to the text file containing the pull secret to the image registry for the cluster. Use the pull secret obtained from the OpenShift Infrastructure Providers page. This pull secret allows us to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift components. Values: the path and name of the pull secret file.
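
    For example, the [all:vars] section for a non-root SSH user might look like the following sketch. The user name openshift is an assumption; any user with passwordless sudo access to the RHEL hosts works:

        [all:vars]
        ansible_user=openshift
        ansible_become=True
        
        openshift_kubeconfig_path="~/.kube/config"
        openshift_pull_secret_path="~/pull-secret.txt"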


    Deploy machine health checks

    We can configure and deploy a machine health check to automatically repair damaged machines in a machine pool.

    Machine health checks is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

    Prerequisites

    • Enable a FeatureGate so we can access Technology Preview features.

      Turning on Technology Preview features cannot be undone and prevents upgrades.


    About MachineHealthChecks

    MachineHealthChecks automatically repair unhealthy Machines in a particular MachinePool.

    To monitor machine health, create a resource that defines the configuration for a controller. You set a condition to check for, such as staying in the NotReady status for 15 minutes or displaying a permanent condition in the node-problem-detector, and a label for the set of machines to monitor.

    We cannot apply a MachineHealthCheck to a machine with the master role.

    The controller that observes a MachineHealthCheck resource checks for the status that you defined. If a machine fails the health check, it is automatically deleted and a new one is created to take its place. When a machine is deleted, you see a machine deleted event. To limit the disruptive impact of the machine deletion, the controller drains and deletes only one node at a time.

    To stop the check, you remove the resource.
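
    For example, assuming the MachineHealthCheck is named example and lives in the openshift-machine-api namespace, as in the sample below, removing it would look like this sketch:

        $ oc delete machinehealthcheck example -n openshift-machine-api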


    Sample MachineHealthCheck resource

    The MachineHealthCheck resource resembles the following YAML file:

    MachineHealthCheck

      apiVersion: healthchecking.openshift.io/v1alpha1
      kind: MachineHealthCheck
      metadata:
        name: example 1
        namespace: openshift-machine-api
      spec:
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: <cluster_name> 2
            machine.openshift.io/cluster-api-machine-role: <label> 3
            machine.openshift.io/cluster-api-machine-type: <label> 4
            machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<AWS-zone> 5

      1 Name of the MachineHealthCheck to deploy. Include the name of the MachinePool to track.

      2 Name of the cluster.

      3 4 Specify a label for the MachinePool that we want to check.

      5 Specify the MachineSet to track in <cluster_name>-<label>-<AWS-zone> format. For example, prod-node-us-east-1a.
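
      To find values for these labels, we can inspect the MachineSets in the cluster. The following commands are a sketch and not part of the documented procedure; the machine.openshift.io/cluster-api-* labels typically appear under spec.template.metadata.labels in the MachineSet output:

          $ oc get machinesets -n openshift-machine-api
          $ oc get machineset <machineset_name> -n openshift-machine-api -o yaml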


    Create a MachineHealthCheck resource

    We can create a MachineHealthCheck resource for all MachinePools in the cluster except the master pool.

    Prerequisites

    • Enable a FeatureGate so we can access Technology Preview features.

    Procedure

    1. Create a healthcheck.yml file containing the definition of the MachineHealthCheck.

    2. Apply the healthcheck.yml file to the cluster:

        $ oc apply -f healthcheck.yml
