
Creating OpenShift clusters

Create a cluster in Red Hat OpenShift on IBM Cloud.

After getting started, you might want to create a cluster that is customized to your use case and different public and private cloud environments. Consider the following steps to create the right cluster each time.

  1. Prepare your account to create clusters. This step includes creating a billable account, setting up an API key with infrastructure permissions, making sure that you have Administrator access in IBM Cloud IAM, planning resource groups, and setting up account networking.
  2. Decide on the cluster setup. This step includes planning cluster network and HA setup, estimating costs, and if applicable, allowing network traffic through a firewall.
  3. Create your VPC Gen 2 or classic cluster by following the steps in the IBM Cloud console or CLI.

OpenShift version 3.11 is deprecated, and becomes unsupported in June 2022 (date subject to change). Instead, you can create a version 4 cluster.
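
To check which OpenShift versions you can currently create a cluster with, list the supported versions from the CLI. The output changes over time as versions are added and deprecated.

    ibmcloud oc versions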



Sample commands

Classic clusters:

  • Classic cluster, shared virtual machine:

      ibmcloud oc cluster create classic --name my_cluster --version 4.4_openshift --zone dal10 --flavor b3c.4x16 --hardware shared --workers 3 --public-vlan <public_VLAN_ID> --private-vlan <private_VLAN_ID>

  • Classic cluster, bare metal:

      ibmcloud oc cluster create classic --name my_cluster --version 4.4_openshift --zone dal10 --flavor mb2c.4x32 --hardware dedicated --workers 3 --public-vlan <public_VLAN_ID> --private-vlan <private_VLAN_ID>

  • Classic cluster with an IBM Cloud Pak entitlement for a default worker pool of 3 worker nodes with 4 cores and 16 GB of memory each:

      ibmcloud oc cluster create classic --name cloud_pak_cluster --version 4.4_openshift --zone dal10 --flavor b3c.4x16 --hardware dedicated --workers 3 --entitlement cloud_pak --public-vlan <public_VLAN_ID> --private-vlan <private_VLAN_ID>

  • For a classic multizone cluster, after you created the cluster in a multizone metro, add zones:

      ibmcloud oc zone add classic --zone <zone> --cluster <cluster_name_or_ID> --worker-pool <pool_name> --private-vlan <private_VLAN_ID> --public-vlan <public_VLAN_ID>

VPC clusters:

  • VPC Generation 2 compute cluster:

      ibmcloud oc cluster create vpc-gen2 --name my_cluster --version 4.4_openshift --zone us-east-1 --vpc-id <VPC_ID> --subnet-id <VPC_SUBNET_ID> --cos-instance <COS_ID> --flavor b2.4x16 --workers 3

  • For a VPC multizone cluster, after you created the cluster in a multizone metro, add zones:

      ibmcloud oc zone add vpc-gen2 --zone <zone> --cluster <cluster_name_or_ID> --worker-pool <pool_name> --subnet-id <VPC_SUBNET_ID>
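
To verify the zone and flavor values before you substitute them into these sample commands, you can list the available zones and flavors. The following sketch assumes the classic provider and the dal10 zone; replace them with your own values.

      ibmcloud oc zone ls --provider classic
      ibmcloud oc flavors --zone dal10 --provider classic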



Preparing to create clusters at the account level

Prepare your IBM Cloud account for IBM Cloud Kubernetes Service. After the account administrator makes these preparations, you might not need to change them each time that you create a cluster. However, each time that you create a cluster, you still want to verify that the current account-level state is what you need it to be.

  1. Create or upgrade your account to a billable account (IBM Cloud Pay-As-You-Go or Subscription).

  2. Set up an API key for Red Hat OpenShift on IBM Cloud in the region and resource groups where you want to create clusters. Assign the API key with the required service and infrastructure permissions to create clusters.
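
    For example, after the account administrator logs in and targets the region and resource group, one way to set the API key for that region and resource group is the reset command. This is a sketch of the flow, not a full permissions setup; replace <region> with your region.

      ibmcloud oc api-key reset --region <region>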

  3. Verify that you as a user (not just the API key) have the required permissions to create clusters.

    1. From the IBM Cloud console menu bar, click Manage > Access (IAM).
    2. Click the Users page, and then from the table, select yourself.
    3. From the Access policies tab, confirm that you have the required permissions to create clusters.

      Make sure that your account administrator does not assign you the Administrator platform role at the same time as scoping the access policy to a namespace.

  4. If your account uses multiple resource groups, figure out your account's strategy for managing resource groups.

    • The cluster is created in the resource group that you target when you log in to IBM Cloud. If you do not target a resource group, the default resource group is automatically targeted. Free clusters are created in the default resource group.
    • To create a cluster in a different resource group than the default, you need at least the Viewer role for the resource group. If you do not have any role for the resource group, the cluster is created in the default resource group.
    • You cannot change a cluster's resource group. Furthermore, if you need to use the ibmcloud oc cluster service bind command to integrate with an IBM Cloud service, that service must be in the same resource group as the cluster. Services that do not use resource groups, such as IBM Cloud Container Registry, or that do not need service binding, such as IBM Log Analysis with LogDNA, work even if the cluster is in a different resource group.
    • Consider giving clusters unique names across resource groups and regions in your account to avoid naming conflicts. You cannot rename a cluster.

  5. VPC clusters only: Set up your IBM Cloud infrastructure networking to allow worker-to-master and user-to-master communication. Your VPC clusters are created with a public and a private service endpoint by default.

    1. Enable VRF in your IBM Cloud account.
    2. Enable your IBM Cloud account to use service endpoints.
    3. Optional: If you want your VPC clusters to communicate with classic clusters over the private network interface, you can choose to set up classic infrastructure access from the VPC that the cluster is in. Note that you can set up classic infrastructure access for only one VPC per region, and Virtual Routing and Forwarding (VRF) is required in your IBM Cloud account. For more information, see Set up access to your Classic Infrastructure from VPC.
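
    For example, to review your account settings and enable your account to use service endpoints from the CLI, you might run the following commands. This is a sketch that assumes a current ibmcloud CLI with the --service-endpoint-enable option; enabling VRF can involve additional account steps.

      ibmcloud account show
      ibmcloud account update --service-endpoint-enable true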



Deciding on the cluster setup

After you set up your account to create clusters, decide on the setup for the cluster. You must make these decisions every time that you create a cluster. You can click the options in the following decision tree image for more information, such as comparisons of free and standard, Kubernetes and OpenShift, or VPC and classic clusters.

OpenShift clusters are available only as standard clusters. You cannot get a free OpenShift cluster.



Creating a standard classic cluster

Use the IBM Cloud CLI or the IBM Cloud console to create a fully customizable standard cluster with your choice of hardware isolation and access to features like multiple worker nodes for a highly available environment.


Creating a standard classic cluster in the console

Create your single zone or multizone classic OpenShift cluster by using the IBM Cloud console.

  1. Make sure that you complete the prerequisites to prepare your account and decide on your cluster setup.
  2. From the OpenShift clusters console, click Create cluster.
  3. Configure the cluster environment.
    1. From the OpenShift drop-down list, select the version that you want to use in the cluster, such as 4.4.26.
    2. Optional: For the OCP entitlement section, you can select an entitlement for a worker pool, if you have one. In most cases, leave the value set to Purchase additional licenses for this worker pool. If you have an IBM Cloud Pak with an OpenShift entitlement that you want to use, you can select Apply my Cloud Pak OCP entitlement to this worker pool. Later, when you configure the worker pool, make sure to select only the flavor and number of worker nodes that your entitlement permits.
  4. Configure the Location details for the cluster.
    1. Select the Resource group that you want to create the cluster in.
      • A cluster can be created in only one resource group, and after the cluster is created, you cannot change its resource group.
      • To create clusters in a resource group other than the default, you must have at least the Viewer role for the resource group.

    2. Select a Geography to create the cluster in, such as North America. The geography helps filter the Availability and Metro values that you can select.
    3. Select the Availability that you want for the cluster, Single zone or Multizone. In a multizone cluster, the OpenShift master is deployed in a multizone-capable zone and three replicas of your master are spread across zones.
    4. Enter the Metro and Worker zones details, depending on the availability that you selected for the cluster.
      • Multizone clusters:
        1. Select a Metro location. For the best performance, select the metro location that is physically closest to you. Your choices might be limited by geography.
        2. Select the specific Worker zones within the metro to host the cluster. You must select at least one zone, but you can select as many as you like. If you select more than one zone, the worker nodes are spread across the zones that you choose, which gives you higher availability. If you select only one zone, you can add zones to the cluster after the cluster is created.
      • Single zone clusters: Select a Worker zone in which you want to host the cluster. For the best performance, select the data center that is physically closest to you. Your choices might be limited by geography.

    5. For each of the selected zones, choose your public and private VLANs. You can change the pre-selected VLANs by clicking the Edit VLANs pencil icon. Worker nodes communicate with each other by using the private VLAN, and can communicate with the OpenShift master by using the public or the private VLAN. If you do not have a public or a private VLAN in this zone, such as the first time that you create a cluster in a zone, the VLANs are automatically created for you. You can use the same VLAN for multiple clusters.
  5. Configure your Worker pool setup. Worker pools are groups of worker nodes that share the same configuration. You can always add more worker pools to the cluster later.
    1. If you want a larger size for the worker nodes, click Change flavor. The flavor defines the amount of virtual CPU, memory, and disk space that is set up in each worker node and made available to the containers. Available bare metal and virtual machine types vary by the zone in which you deploy the cluster. For more information, see Plan your worker node setup. After creating the cluster, you can add different flavors by adding a worker pool to the cluster.
      • Default: The default flavor is Virtual - shared, Ubuntu 18, which comes with 4 vCPUs of computing power and 16 GB of memory. This virtual flavor is billed hourly. Other types of flavors include the following.
      • Bare metal: Bare metal servers are provisioned manually by IBM Cloud infrastructure after you order them, and can take more than one business day to complete. Bare metal is best suited for high-performance applications that need more resources and host control. Be sure that you want to provision a bare metal machine. Because it is billed monthly, if you order one by mistake and cancel it immediately, you are still charged for the full month.
      • Virtual - shared: Infrastructure resources, such as the hypervisor and physical hardware, are shared with other IBM customers, but each worker node is accessible only by you. Although this option is less expensive and sufficient in most cases, you might want to verify your performance and infrastructure requirements against your company policies. Virtual machines are billed hourly.
      • Virtual - dedicated: Your worker nodes are hosted on infrastructure that is devoted to your account. Your physical resources are completely isolated. Virtual machines are billed hourly.

    2. Set how many worker nodes to create per zone, such as 3. For example, if you selected 2 zones and want 3 worker nodes per zone, a total of 6 worker nodes are provisioned in the cluster, with 3 worker nodes in each zone. You must set at least 2 worker nodes. For more information, see What is the smallest size cluster that I can make?.
    3. Toggle disk encryption. By default, worker nodes feature AES 256-bit disk encryption.
  6. If you do not have the required infrastructure permissions to create a cluster, the Infrastructure permissions checker lists the missing permissions. Ask your account owner to set up the API key with the required permissions.

  7. Complete the Resource details to customize the unique cluster name and any tags that you want to use to organize your IBM Cloud resources, such as the team or billing department.

  8. In the Summary pane, review your order summary and then click Create. A worker pool is created with the number of workers that you specified. You can see the progress of the worker node deployment in the Worker nodes tab.

    • Your cluster might take some time to provision the OpenShift master and all worker nodes and enter a Normal state. Note that even if the cluster is ready, some parts of the cluster that are used by other services, such as Ingress secrets or registry image pull secrets, might still be in progress. Before you continue, wait until the cluster is ready by checking that the Ingress subdomain follows a pattern of <cluster_name>.<region>.containers.appdomain.cloud.
    • Every worker node is assigned a unique worker node ID and domain name that must not be changed manually after the cluster is created. Changing the ID or domain name prevents the OpenShift master from managing the cluster.

      Is the cluster not in a Normal state? Check out the Debugging clusters guide for help. For example, if the cluster is provisioned in an account that is protected by a firewall gateway appliance, you must configure the firewall settings to allow outgoing traffic to the appropriate ports and IP addresses.

  9. After the cluster is created, you can begin working with the cluster by configuring your CLI session, as in the following sketch. For more possibilities, review the Next steps.
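
    For example, a minimal sketch for configuring your CLI session and confirming that the worker nodes respond, assuming that the ibmcloud oc plug-in and the oc CLI are installed:

      ibmcloud oc cluster config --cluster <cluster_name_or_ID> --admin
      oc get nodes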



Creating a standard classic cluster in the CLI

Create your single zone or multizone classic cluster by using the IBM Cloud CLI.

Before beginning:


To create a classic cluster from the CLI:

  1. Log in to the IBM Cloud CLI.

    1. Log in and enter your IBM Cloud credentials when prompted. If you have a federated ID, use ibmcloud login --sso to log in to the IBM Cloud CLI.

        ibmcloud login [--sso]

    2. If you have multiple IBM Cloud accounts, select the account where you want to create your cluster.

    3. To create clusters in a resource group other than default, target that resource group.

      • A cluster can be created in only one resource group, and after the cluster is created, you can't change its resource group.
      • You must have at least the Viewer role for the resource group.

        ibmcloud target -g <resource_group_name>

  2. Review the zones where you can create the cluster. In the output of the following command, zones have a Location Type of dc. To span the cluster across zones, you must create the cluster in a multizone-capable zone. Multizone-capable zones have a metro value in the Multizone Metro column. To create a multizone cluster, you can use the IBM Cloud console, or add more zones to the cluster after the cluster is created.

      ibmcloud oc locations

    When you select a zone that is located outside your country, keep in mind that you might require legal authorization before data can be physically stored in a foreign country.

  3. Review the worker node flavors that are available in that zone. The flavor determines the amount of virtual CPU, memory, and disk space that is set up in each worker node and made available to your apps. Worker nodes in classic clusters can be created as virtual machines on shared or dedicated infrastructure, or as bare metal machines that are dedicated to you. For more information, see Plan your worker node setup. After creating the cluster, you can add different flavors by adding a worker pool.

    Before you create a bare metal machine, be sure that you want to provision one. Bare metal machines are billed monthly. If you order a bare metal machine by mistake, you are charged for the entire month, even if you cancel the machine immediately.

      ibmcloud oc flavors --zone <zone>

  4. Check whether you have existing VLANs in the zones that you want to include in the cluster, and note the ID of the VLAN. If you do not have a public or private VLAN in one of the zones that you want to use in the cluster, IBM Cloud Kubernetes Service automatically creates these VLANs for you when you create the cluster.

      ibmcloud oc vlan ls --zone <zone>

    Example output:

    ID        Name   Number   Type      Router
    1519999   vlan   1355     private   bcr02a.dal10
    1519898   vlan   1357     private   bcr02a.dal10
    1518787   vlan   1252     public    fcr02a.dal10
    1518888   vlan   1254     public    fcr02a.dal10
    

    If a public and private VLAN already exist, note the matching routers. Private VLAN routers always begin with bcr (back-end router) and public VLAN routers always begin with fcr (front-end router). When you create a cluster and specify the public and private VLANs, the number and letter combination after those prefixes must match. In the example output, any private VLAN can be used with any public VLAN because the routers all include 02a.dal10.

  5. Create your standard cluster.

      ibmcloud oc cluster create classic --zone <zone> --flavor <flavor> --hardware <shared_or_dedicated> --public-vlan <public_VLAN_ID> --private-vlan <private_VLAN_ID> --workers <number> --name <cluster_name> --version <major.minor.patch>_openshift --public-service-endpoint [--private-service-endpoint] [--pod-subnet] [--service-subnet] [--disable-disk-encrypt]


    Parameter Description
    --zone <zone> Specify the IBM Cloud zone ID that you chose earlier and that you want to use to create the cluster.
    --flavor <flavor> Specify the flavor for the worker node that you chose earlier.
    --hardware <shared_or_dedicated> Specify the level of hardware isolation for the worker node. Use dedicated to have physical resources that are dedicated to you only, or shared to allow physical resources to be shared with other IBM customers. The default is shared. This value is optional for virtual machine standard clusters. For bare metal flavors, specify dedicated.
    --public-vlan <public_vlan_id> If you already have a public VLAN set up in your IBM Cloud infrastructure account for that zone, enter the ID of the public VLAN that you retrieved earlier. If you do not have a public VLAN in your account, do not specify this option. IBM Cloud Kubernetes Service automatically creates a public VLAN for you.

    Private VLAN routers always begin with bcr (back-end router) and public VLAN routers always begin with fcr (front-end router). When you create a cluster and specify the public and private VLANs, the number and letter combination after those prefixes must match.

    --private-vlan <private_vlan_id> If you already have a private VLAN set up in your IBM Cloud infrastructure account for that zone, enter the ID of the private VLAN that you retrieved earlier. If you do not have a private VLAN in your account, do not specify this option. IBM Cloud Kubernetes Service automatically creates a private VLAN for you.

    Private VLAN routers always begin with bcr (back-end router) and public VLAN routers always begin with fcr (front-end router). When you create a cluster and specify the public and private VLANs, the number and letter combination after those prefixes must match.

    --name <name> Specify a name for the cluster. The name must start with a letter, can contain letters, numbers, periods (.), and hyphens (-), and must be 35 characters or fewer. Use a name that is unique across regions. The cluster name and the region in which the cluster is deployed form the fully qualified domain name for the Ingress subdomain. To ensure that the Ingress subdomain is unique within a region, the cluster name might be truncated and appended with a random value within the Ingress domain name.
    --workers <number> Specify the number of worker nodes to include in the cluster. If you do not specify this option, a cluster with the minimum value of 2 is created. For more information, see What is the smallest size cluster that I can make?.
    --version <major.minor.patch> The OpenShift version for the cluster master node. This value is required. If you do not specify a supported OpenShift version, the cluster is created as a community Kubernetes cluster with the default Kubernetes version. To see available versions, run ibmcloud oc versions.
    --public-service-endpoint Enable the public service endpoint so that the OpenShift master can be accessed over the public network, for example to run oc commands from your terminal, and so that the OpenShift master and the worker nodes can communicate over the public VLAN. You must enable the public service endpoint, and cannot later disable it.

    After creating the cluster, we can get the endpoint by running ibmcloud oc cluster get --cluster <cluster_name_or_ID>.
    --private-service-endpoint For OpenShift 3.11 clusters only, in VRF-enabled and service endpoint-enabled accounts: Enable the private service endpoint so that the OpenShift master and the worker nodes can communicate over the private VLAN. In addition, enable the public service endpoint by using the --public-service-endpoint flag to access the cluster over the internet. After you enable a private service endpoint, you cannot later disable it.

    After creating the cluster, we can get the endpoint by running ibmcloud oc cluster get --cluster <cluster_name_or_ID>.
    --pod-subnet All pods that are deployed to a worker node are assigned a private IP address in the 172.30.0.0/16 range by default. If you plan to connect the cluster to on-premises networks through IBM Cloud Direct Link or a VPN service, you can avoid subnet conflicts by specifying a custom subnet CIDR that provides the private IP addresses for the pods.

    When you choose a subnet size, consider the size of the cluster that you plan to create and the number of worker nodes that you might add in the future. The subnet must have a CIDR of at least /23, which provides enough pod IPs for a maximum of four worker nodes in a cluster. For larger clusters, use /22 to have enough pod IP addresses for eight worker nodes, /21 to have enough pod IP addresses for 16 worker nodes, and so on.

    The subnet that you choose must be within one of the following ranges:

    • 172.17.0.0 - 172.17.255.255
    • 172.21.0.0 - 172.31.255.255
    • 192.168.0.0 - 192.168.254.255
    • 198.18.0.0 - 198.19.255.255

    Note that the pod and service subnets cannot overlap. The service subnet is in the 172.21.0.0/16 range by default.

    --service-subnet All services that are deployed to the cluster are assigned a private IP address in the 172.21.0.0/16 range by default. If you plan to connect the cluster to on-premises networks through IBM Cloud Direct Link or a VPN service, you can avoid subnet conflicts by specifying a custom subnet CIDR that provides the private IP addresses for the services.

    The subnet must be specified in CIDR format with a size of at least /24, which allows a maximum of 255 services in the cluster, or larger. The subnet that you choose must be within one of the following ranges:

    • 172.17.0.0 - 172.17.255.255
    • 172.21.0.0 - 172.31.255.255
    • 192.168.0.0 - 192.168.254.255
    • 198.18.0.0 - 198.19.255.255

    Note that the pod and service subnets cannot overlap. The pod subnet is in the 172.30.0.0/16 range by default.

    --disable-disk-encrypt Worker nodes feature AES 256-bit disk encryption by default. To disable encryption, include this option.
    --entitlement cloud_pak Include this flag only if you use this cluster with an IBM Cloud Pak that has an OpenShift entitlement. When you specify the number of workers (--workers) and flavor (--flavor), make sure to specify only the number and size of worker nodes that you are entitled to use in IBM Passport Advantage. After the cluster is created, you are not charged the OpenShift license fee for the entitled worker nodes in the default worker pool.

    Do not exceed your entitlement. Keep in mind that OpenShift Container Platform entitlements can be used with other cloud providers or in other environments. To avoid billing issues later, make sure that you use only what you are entitled to use. For example, you might have an entitlement for the OCP licenses for two worker nodes of 4 CPU and 16 GB memory, and you create this worker pool with two worker nodes of 4 CPU and 16 GB memory. You used your entire entitlement, and you cannot use the same entitlement for other worker pools, cloud providers, or environments.
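
    As an illustration of the --pod-subnet and --service-subnet guidance, the following sketch creates a cluster with custom, non-overlapping subnets from the permitted 192.168.0.0 range. The values are hypothetical examples; a /22 pod subnet provides enough pod IPs for up to eight worker nodes.

      ibmcloud oc cluster create classic --zone dal10 --flavor b3c.4x16 --hardware shared --workers 3 --name my_cluster --version 4.4_openshift --public-service-endpoint --pod-subnet 192.168.0.0/22 --service-subnet 192.168.4.0/24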

  6. Verify that the creation of the cluster was requested. For virtual machines, it can take a few minutes for the worker node machines to be ordered, and for the cluster to be set up and provisioned in your account. Bare metal physical machines are provisioned by manual interaction with IBM Cloud infrastructure, and can take more than one business day to complete.

      ibmcloud oc cluster ls

    When the provisioning of your OpenShift master is completed, the State of the cluster changes to deployed. After the OpenShift master is ready, the provisioning of your worker nodes is initiated.

    Name         ID                         State      Created          Workers    Zone      Version     Resource Group Name   Provider
    mycluster    blrs3b1d0p0p2f7haq0g       deployed   20170201162433   3          dal10     4.4.26_xxxx_openshift      Default             classic
    

    Is the cluster not in a deployed state? Check out the Debugging clusters guide for help. For example, if the cluster is provisioned in an account that is protected by a firewall gateway appliance, you must configure the firewall settings to allow outgoing traffic to the appropriate ports and IP addresses.

  7. Check the status of the worker nodes.

      ibmcloud oc worker ls --cluster <cluster_name_or_ID>

    When the worker nodes are ready, the worker node state changes to normal and the status changes to Ready. When the node status is Ready, you can access the cluster. Note that even if the cluster is ready, some parts of the cluster that are used by other services, such as Ingress secrets or registry image pull secrets, might still be in progress. Note that if you created the cluster with a private VLAN only, no public IP addresses are assigned to your worker nodes.

    ID                                                     Public IP        Private IP     Flavor              State    Status   Zone    Version
    kube-blrs3b1d0p0p2f7haq0g-mycluster-default-000001f7   169.xx.xxx.xxx  10.xxx.xx.xxx   u3c.2x4.encrypted   normal   Ready    dal10   1.18.9
    

    Every worker node is assigned a unique worker node ID and domain name that must not be changed manually after the cluster is created. Changing the ID or domain name prevents the OpenShift master from managing the cluster.

  8. Optional: If you created the cluster in a multizone metro location, you can spread the default worker pool across zones to increase the cluster's availability.

  9. After the cluster is created, you can begin working with the cluster by configuring your CLI session.

Your cluster is ready for your workloads! You might also want to add a tag to the cluster, such as the team or billing department that uses the cluster, to help manage IBM Cloud resources, as shown in the sketch that follows. For more ideas of what to do with the cluster, review the Next steps.
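
For example, one way to attach tags to the cluster is the resource tagging command. This sketch uses hypothetical tag names; replace them with your own.

    ibmcloud resource tag-attach --resource-name <cluster_name> --tag-names "team:dev,costcenter:1234"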




Creating a standard VPC Gen 2 compute cluster

Use the IBM Cloud CLI or the IBM Cloud console to create a standard VPC Generation 2 compute cluster, and customize the cluster to meet the high availability and security requirements of your apps.


Creating a standard VPC Gen 2 compute cluster in the console

Create your single zone or multizone VPC Generation 2 compute cluster by using the IBM Cloud console.

Your VPC cluster is created with both a public and a private service endpoint. Want to create a VPC cluster with no public service endpoint and only a private service endpoint? Create the cluster in the CLI instead, and include the --disable-public-service-endpoint flag, as in the sketch that follows.
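
For example, a minimal sketch of such a command, with placeholders for your own VPC details:

    ibmcloud oc cluster create vpc-gen2 --name <cluster_name> --zone <vpc_zone> --vpc-id <VPC_ID> --subnet-id <VPC_SUBNET_ID> --flavor <flavor> --version 4.4_openshift --cos-instance <COS_CRN_ID> --disable-public-service-endpoint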

  1. Make sure that you complete the prerequisites to prepare your account and decide on your cluster setup.
  2. Create a Virtual Private Cloud (VPC) on generation 2 compute with a subnet that is located in the VPC zone where you want to create the cluster.
    • Verify that the banner at the beginning of the VPC page is set to Gen 2 compute. If Gen 1 compute is set, click Switch to Gen 2 compute.
    • During the VPC creation, you can create one subnet only. Subnets are specific to a zone. To create a multizone cluster, create the subnet in one of the multizone-capable zones that you want to use. Later, you manually create the subnets for the remaining zones that you want to include in the cluster.

      Do not delete the subnets that you attach to the cluster during cluster creation or when you add worker nodes in a zone. If you delete a VPC subnet that the cluster used, any load balancers that use IP addresses from the subnet might experience issues, and you might be unable to create new load balancers.

    • To run default OpenShift components such as the web console or OperatorHub, attach a public gateway to one or more subnets.
    • For more information, see Creating a VPC using the IBM Cloud console and Overview of VPC networking in Red Hat OpenShift on IBM Cloud: Subnets.

  3. To create a multizone cluster, create the subnets for all of the remaining zones that you want to include in the cluster. You must have one VPC subnet in each of the zones where you want to create your multizone cluster.
    1. From the VPC subnet dashboard, click New subnet.
    2. Verify that the banner at the beginning of the new subnet page is set to Gen 2 compute. If Gen 1 compute is set, click Switch to Gen 2 compute.
    3. Enter a name for the subnet and select the name of the VPC that you created.
    4. Select the location and zone where you want to create the subnet.
    5. Specify the number of IP addresses to create. VPC subnets provide IP addresses for the worker nodes and load balancer services in the cluster, so create a VPC subnet with enough IP addresses, such as 256. You cannot change the number of IPs that a VPC subnet has later. If you enter a specific IP range, do not use the following reserved ranges: 172.16.0.0/16, 172.18.0.0/16, 172.19.0.0/16, and 172.20.0.0/16.
    6. Attach a public network gateway to your subnet. A public network gateway is required when you want the cluster to access public endpoints, such as default OpenShift components like the web console and OperatorHub, or an IBM Cloud service that supports public service endpoints only. Make sure to review the VPC networking basics to understand when a public network gateway is required and how you can set up the cluster to limit public access to one or more subnets only.
    7. Click Create subnet.
  4. From the OpenShift clusters console, click Create cluster.
  5. Configure the cluster environment.
    1. Select the Standard cluster plan.
    2. From the OpenShift drop-down list, select the version that you want to use in the cluster. You must choose OpenShift 4.3 or later.
    3. Optional: For the OCP entitlement section, you can select an entitlement for a worker pool, if you have one. In most cases, leave the value set to Purchase additional licenses for this worker pool. If you have an IBM Cloud Pak with an OpenShift entitlement that you want to use, you can select Apply my Cloud Pak OCP entitlement to this worker pool. Later, when you configure the worker pool, make sure to select only the flavor and number of worker nodes that your entitlement permits.
    4. Select VPC infrastructure.
    5. From the Virtual private cloud drop-down menu, select the Gen 2 VPC that you created earlier.
    6. From the Cloud Object Storage drop-down menu, select a standard IBM Cloud Object Storage instance to use for the internal OpenShift container registry, or create a standard IBM Cloud Object Storage instance to use.
  6. Configure the Location details for the cluster.
    1. Select the Resource group that you want to create the cluster in.
      • A cluster can be created in only one resource group, and after the cluster is created, you cannot change its resource group.
      • To create clusters in a resource group other than the default, you must have at least the Viewer role for the resource group.
      • The cluster can be in a different resource group than the VPC.

    2. Select the zones to create the cluster in.
      • The zones are filtered based on the VPC that you selected, and include the VPC subnets that you previously created.
      • To create a single zone cluster, select one zone only. If you select only one zone, you can add zones to the cluster after the cluster is created.
      • To create a multizone cluster, select multiple zones.

  7. Configure your Worker pool setup. Worker pools are groups of worker nodes that share the same configuration. You can always add more worker pools to the cluster later.
    1. If you want a larger size for the worker nodes, click Change flavor. The flavor defines the amount of virtual CPU, memory, and disk space that is set up in each worker node and made available to the containers. Available bare metal and virtual machine types vary by the zone in which you deploy the cluster. For more information, see Plan your worker node setup. After creating the cluster, you can add different flavors by adding a worker pool to the cluster.
      • Default: The default flavor is Virtual - shared, Ubuntu 18, which comes with 4 vCPUs of computing power and 16 GB of memory. This virtual flavor is billed hourly. Other types of flavors include the following.
      • Bare metal: Bare metal servers are provisioned manually by IBM Cloud infrastructure after you order them, and can take more than one business day to complete. Bare metal is best suited for high-performance applications that need more resources and host control. Be sure that you want to provision a bare metal machine. Because it is billed monthly, if you order one by mistake and cancel it immediately, you are still charged for the full month.
      • Virtual - shared: Infrastructure resources, such as the hypervisor and physical hardware, are shared with other IBM customers, but each worker node is accessible only by you. Although this option is less expensive and sufficient in most cases, you might want to verify your performance and infrastructure requirements against your company policies. Virtual machines are billed hourly.
      • Virtual - dedicated: Your worker nodes are hosted on infrastructure that is devoted to your account. Your physical resources are completely isolated. Virtual machines are billed hourly.

    2. Set how many worker nodes to create per zone, such as 3. For example, if you selected 2 zones and want 3 worker nodes per zone, a total of 6 worker nodes are provisioned in the cluster, with 3 worker nodes in each zone. You must set at least 2 worker nodes. For more information, see What is the smallest size cluster that I can make?.
    3. Toggle disk encryption. By default, worker nodes feature AES 256-bit disk encryption.
  8. If you do not have the required infrastructure permissions to create a cluster, the Infrastructure permissions checker lists the missing permissions. Ask your account owner to set up the API key with the required permissions.
  9. Complete the Resource details to customize the unique cluster name and any tags that you want to use to organize your IBM Cloud resources, such as the team or billing department.
  10. In the Summary pane, review the order summary and then click Create. A worker pool is created with the number of workers that you specified. You can see the progress of the worker node deployment in the Worker nodes tab.
    • Your cluster might take some time to provision the OpenShift master and all worker nodes and enter a Normal state. Note that even if the cluster is ready, some parts of the cluster that are used by other services, such as Ingress secrets or registry image pull secrets, might still be in progress. Before you continue, wait until the cluster is ready by checking that the Ingress subdomain follows a pattern of <cluster_name>.<region>.containers.appdomain.cloud.
    • Every worker node is assigned a unique worker node ID and domain name that must not be changed manually after the cluster is created. Changing the ID or domain name prevents the OpenShift master from managing the cluster.

      Is the cluster not in a Normal state? Check out the Debugging clusters guide for help. For example, if the cluster is provisioned in an account that is protected by a firewall gateway appliance, you must configure the firewall settings to allow outgoing traffic to the appropriate ports and IP addresses.

  11. After the cluster is created, you can begin working with the cluster by configuring your CLI session. For more possibilities, review the Next steps.
  12. OpenShift version 4.4 or earlier only: To allow any traffic requests to apps that you deploy on the worker nodes, modify the VPC's default security group.
    1. From the Virtual private cloud dashboard, click the name of the Default Security Group for the VPC that you created.
    2. In the Inbound rules section, click New rule.
    3. Choose the TCP protocol, enter 30000 for the Port min and 32767 for the Port max, and leave the Any source type selected.
    4. Click Save.
    5. If you require VPC VPN access or classic infrastructure access into this cluster, repeat these steps to add a rule that uses the UDP protocol, 30000 for the Port min, 32767 for the Port max, and the Any source type.


Creating standard VPC Gen 2 compute clusters from the CLI

Create your single zone or multizone VPC Generation 2 compute cluster by using the IBM Cloud CLI.

Before beginning:


To create a VPC cluster from the CLI:

  1. In your terminal, log in to your IBM Cloud account and target the IBM Cloud region and resource group where you want to create your VPC cluster. For supported regions, see Creating a VPC in a different region. The cluster's resource group can differ from the VPC resource group. Enter your IBM Cloud credentials when prompted. If you have a federated ID, use the --sso flag to log in.

      ibmcloud login -r <region> [--sso]

  2. Target the IBM Cloud infrastructure generation 2 for VPC.

      ibmcloud is target --gen 2

  3. Create a Gen 2 VPC in the same region where you want to create the cluster.

  4. Create a Gen 2 subnet for the VPC.
    • To create a multizone cluster, repeat this step to create additional subnets in all of the zones that you want to include in the cluster.
    • VPC subnets provide IP addresses for the worker nodes and load balancer services in the cluster, so create a VPC subnet with enough IP addresses, such as 256. You cannot change the number of IPs that a VPC subnet has later.
    • Do not use the following reserved ranges: 172.16.0.0/16, 172.18.0.0/16, 172.19.0.0/16, and 172.20.0.0/16.
    • To run default OpenShift components such as the web console or OperatorHub, attach a public gateway to one or more subnets.
    • Important: Do not delete the subnets that you attach to the cluster during cluster creation or when you add worker nodes in a zone. If you delete a VPC subnet that the cluster used, any load balancers that use IP addresses from the subnet might experience issues, and you might be unable to create new load balancers.
    • For more information, see Overview of VPC networking in Red Hat OpenShift on IBM Cloud: Subnets.
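
    For example, assuming that the ibmcloud is plug-in is installed and targeted to Gen 2 compute, the following sketch creates a VPC and one subnet with 256 IP addresses. The names and zone are placeholders, and attaching a public gateway is a separate step.

      ibmcloud is vpc-create <vpc_name>
      ibmcloud is subnet-create <subnet_name> <vpc_ID> --zone <vpc_zone> --ipv4-address-count 256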

  5. Create the cluster in your VPC. You can use the ibmcloud oc cluster create vpc-gen2 command to create a single zone cluster in your VPC with worker nodes that are connected to one VPC subnet only. To create a multizone cluster, you can use the IBM Cloud console, or add more zones to the cluster after the cluster is created. The cluster takes a few minutes to provision.

      ibmcloud oc cluster create vpc-gen2 --name <cluster_name> --zone <vpc_zone> --vpc-id <vpc_ID> --subnet-id <vpc_subnet_ID> --flavor <worker_flavor> --version 4.4_openshift --cos-instance <cos_ID> [--workers <number_workers_per_zone>] [--pod-subnet] [--service-subnet] [--disable-public-service-endpoint]


    Parameter Description
    --name <cluster_name> Specify a name for the cluster. The name must start with a letter, can contain letters, numbers, periods (.), and hyphens (-), and must be 35 characters or fewer. Use a name that is unique across regions. The cluster name and the region in which the cluster is deployed form the fully qualified domain name for the Ingress subdomain. To ensure that the Ingress subdomain is unique within a region, the cluster name might be truncated and appended with a random value within the Ingress domain name.
    --zone <zone> Specify the IBM Cloud zone where you want to create the cluster. Make sure that you use a zone that matches the metro city location that you selected when you created your VPC and that you have an existing VPC subnet for that zone. For example, if you created your VPC in the Dallas metro city, your zone must be set to us-south-1, us-south-2, or us-south-3. To list available VPC cluster zones, run ibmcloud oc zone ls --provider vpc-gen2. Note that when you select a zone outside of your country, you might require legal authorization before data can be physically stored in a foreign country.
    --vpc-id <vpc_ID> Enter the ID of the VPC that you created earlier. To retrieve the ID of your VPC, run ibmcloud oc vpcs.
    --subnet-id <subnet_ID> Enter the ID of the VPC subnet that you created earlier. When you create a VPC cluster from the CLI, you can initially create the cluster in one zone with one subnet only. To create a multizone cluster, add more zones with the subnets that you created earlier to the cluster after the cluster is created. To list the IDs of your subnets, run ibmcloud oc subnets --provider vpc-gen2 --vpc-id <VPC_ID> --zone <subnet_zone>.
    --flavor <worker_flavor> Enter the worker node flavor that you want to use. The flavor determines the amount of virtual CPU, memory, and disk space that is set up in each worker node and made available to your apps. VPC Gen 2 worker nodes can be created as virtual machines on shared infrastructure only. Bare metal or software-defined storage machines are not supported. For more information, see Plan your worker node setup. To view available flavors, first list available VPC zones with ibmcloud oc zone ls --provider vpc-gen2, and then use the zone to list supported flavors by running ibmcloud oc flavors --zone <VPC_zone> --provider vpc-gen2. After creating the cluster, you can add different flavors by adding a worker node or worker pool to the cluster.
    --version 4.4_openshift VPC Gen 2 clusters are supported for OpenShift version 4 only.
    --provider <vpc-gen2> Enter the generation of IBM Cloud infrastructure that you want to use. To create a VPC Generation 2 compute cluster, you must enter vpc-gen2.
    --cos-instance <cos_ID> Include the CRN ID of a standard IBM Cloud Object Storage instance to back up the internal registry of the cluster. To list the CRNs of existing instances, run ibmcloud resource service-instances --long and find the ID of your object storage instance. To create a standard object storage instance, run ibmcloud resource service-instance-create <name> cloud-object-storage standard global and note its ID.
    --workers <number> Specify the number of worker nodes to include in the cluster. If you do not specify this option, a cluster with the minimum value of 2 is created. For more information, see What is the smallest size cluster that I can make?. This value is optional.
    --pod-subnet In the first cluster that you create in a Gen 2 VPC, the default pod subnet is 172.17.0.0/18. In the second cluster that you create in that VPC, the default pod subnet is 172.17.64.0/18. In each subsequent cluster, the pod subnet range is the next available, non-overlapping /18 subnet. If you plan to connect the cluster to on-premises networks through IBM Cloud Direct Link or a VPN service, you can avoid subnet conflicts by specifying a custom subnet CIDR that provides the private IP addresses for the pods.

    When you choose a subnet size, consider the size of the cluster that you plan to create and the number of worker nodes that you might add in the future. The subnet must have a CIDR of at least /23, which provides enough pod IPs for a maximum of four worker nodes in a cluster. For larger clusters, use /22 to have enough pod IP addresses for eight worker nodes, /21 to have enough pod IP addresses for 16 worker nodes, and so on.

    The subnet that you choose must be within one of the following ranges:

    • 172.17.0.0 - 172.17.255.255
    • 172.21.0.0 - 172.31.255.255
    • 192.168.0.0 - 192.168.254.255
    • 198.18.0.0 - 198.19.255.255

    Note that the pod and service subnets cannot overlap. If you use custom-range subnets for the worker nodes, you must ensure that your worker node subnets do not overlap with the cluster's pod subnet.

    --service-subnet All services that are deployed to the cluster are assigned a private IP address in the 172.21.0.0/16 range by default. If you plan to connect the cluster to on-premises networks through IBM Cloud Direct Link or a VPN service, you can avoid subnet conflicts by specifying a custom subnet CIDR that provides the private IP addresses for the services.

    The subnet must be specified in CIDR format with a size of at least /24, which allows a maximum of 255 services in the cluster, or larger. The subnet that you choose must be within one of the following ranges:

    • 172.17.0.0 - 172.17.255.255
    • 172.21.0.0 - 172.31.255.255
    • 192.168.0.0 - 192.168.254.255
    • 198.18.0.0 - 198.19.255.255

    Note that the pod and service subnets cannot overlap.

    --disable-public-service-endpoint Include this option in your command to create your VPC cluster with a private service endpoint only. If you do not include this option, the cluster is set up with a public and a private service endpoint. The service endpoint determines how the OpenShift master and the worker nodes communicate, how the cluster accesses other IBM Cloud services and apps outside the cluster, and how your users connect to the cluster. For more information, see Plan the cluster network setup.

    If you include this flag, the cluster is created with routers and Ingress controllers that expose your apps on the private network only by default. If you later want to expose apps to a public network, you must manually create public routers and Ingress controllers.

  6. Verify that the creation of the cluster was requested. It can take a few minutes for the worker node machines to be ordered, and for the cluster to be set up and provisioned in your account.

      ibmcloud oc cluster ls

    When the provisioning of your OpenShift master is completed, the State of the cluster changes to deployed. After the OpenShift master is ready, your worker nodes are set up.

     Name         ID                                   State      Created          Workers    Zone      Version                  Resource Group Name   Provider
     mycluster    aaf97a8843a29941b49a598f516da72101   deployed   20170201162433   3          mil01     4.4.26_xxxx_openshift    Default               vpc-gen2
    

    Is the cluster not in a deployed state? Check out the Debugging clusters guide for help. For example, if the cluster is provisioned in an account that is protected by a firewall gateway appliance, you must configure the firewall settings to allow outgoing traffic to the appropriate ports and IP addresses.

  7. Check the status of the worker nodes.

      ibmcloud oc worker ls --cluster <cluster_name_or_ID>

    When the worker nodes are ready, the worker node State changes to normal and the Status changes to Ready. When the node Status is Ready, you can access the cluster. Note that even if the cluster is ready, some parts of the cluster that are used by other services, such as Ingress secrets or registry image pull secrets, might still be in progress.

    ID                                                     Public IP        Private IP     Flavor              State    Status   Zone    Version
    kube-blrs3b1d0p0p2f7haq0g-mycluster-default-000001f7   169.xx.xxx.xxx  10.xxx.xx.xxx   u3c.2x4.encrypted   normal   Ready    dal10   1.18.9
    

    Every worker node is assigned a unique worker node ID and domain name that must not be changed manually after the cluster is created. Changing the ID or domain name prevents the OpenShift master from managing the cluster.

  8. Optional: If you created the cluster in a multizone metro location, you can spread the default worker pool across zones to increase the cluster's availability.

  9. After the cluster is created, you can begin working with the cluster by configuring your CLI session.

  10. OpenShift version 4.4 or earlier only: To allow any traffic requests to apps that you deploy on the worker nodes, modify the VPC's default security group.

    1. List your security groups. For the VPC that you created, note the ID of the default security group.

        ibmcloud is security-groups

      Example output that shows only the default security group, which has a randomly generated name, preppy-swimmer-island-green-refreshment:

      ID                                     Name                                       Rules   Network interfaces         Created                     VPC                      Resource group
      1a111a1a-a111-11a1-a111-111111111111   preppy-swimmer-island-green-refreshment    4       -                          2019-08-12T13:24:45-04:00   <vpc_name>(bbbb222b-.)   c3c33cccc33c333ccc3c33cc3c333cc3
      
    2. Add a security group rule to allow inbound TCP traffic on ports 30000-32767.

        ibmcloud is security-group-rule-add <security_group_ID> inbound tcp --port-min 30000 --port-max 32767

    3. If you require VPC VPN access or classic infrastructure access into this cluster, add a security group rule to allow inbound UDP traffic on ports 30000-32767.

        ibmcloud is security-group-rule-add <security_group_ID> inbound udp --port-min 30000 --port-max 32767

Your cluster is ready for your workloads! You might also want to add a tag to the cluster, such as the team or billing department that uses the cluster, to help manage IBM Cloud resources. You can also review the cluster details, as shown in the sketch that follows. For more ideas of what to do with the cluster, review the Next steps.
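
For example, to review cluster details such as the master URL, service endpoints, and Ingress subdomain, you can get the cluster information.

    ibmcloud oc cluster get --cluster <cluster_name_or_ID>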



Next steps

When the cluster is up and running, you can check out the following cluster administration tasks:

Then, you can check out the following network configuration steps for the cluster setup: