Creating Red Hat OpenShift on IBM Cloud clusters
Create a cluster with worker nodes that come installed with the OpenShift container orchestration platform.
With Red Hat OpenShift on IBM Cloud, you can create highly available clusters with virtual or bare metal worker nodes that come installed with the Red Hat OpenShift Container Platform orchestration software. You get all the advantages of a managed offering for your cluster infrastructure environment, while using the OpenShift tooling and catalog that runs on Red Hat Enterprise Linux for your app deployments.
OpenShift worker nodes are available for paid accounts and standard clusters only. In this tutorial, you create a cluster that runs version 4.5. The operating system is Red Hat Enterprise Linux 7.
Objectives
In the tutorial lessons, you create a standard Red Hat OpenShift on IBM Cloud cluster, open the OpenShift console, access built-in OpenShift components, deploy an app in an OpenShift project, and expose the app on an OpenShift route so that external users can access the service.
Audience
This tutorial is for cluster administrators who want to learn how to create a Red Hat OpenShift on IBM Cloud cluster for the first time by using the CLI.
Prerequisites
- Ensure that you have the following IBM Cloud IAM access policies.
- The Administrator platform role for IBM Cloud Kubernetes Service
- The Writer or Manager service role for IBM Cloud Kubernetes Service
- The Administrator platform role for IBM Cloud Container Registry
- Make sure that the API key for the IBM Cloud region and resource group is set up with the correct infrastructure permissions: either the Super User role or the minimum roles to create a cluster.
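If you're not sure whether you have these access policies, one quick spot check from the CLI is to list your own IAM policies. The following is a minimal sketch, assuming your IBM Cloud user name is user.name@email.com; the exact output format can vary by CLI version.

```sh
# List the IAM access policies that are assigned to your user.
# Look for the Kubernetes Service and Container Registry roles listed above.
ibmcloud iam user-policies user.name@email.com
```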
Step 1: Creating a Red Hat OpenShift on IBM Cloud cluster
Create a Red Hat OpenShift on IBM Cloud cluster. To learn about what components are set up when you create a cluster, see the Service architecture. OpenShift is available for only standard clusters. You can learn more about the price of standard clusters in the frequently asked questions.
- Install the command-line tools.
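If the tools aren't installed yet, the following is a minimal sketch for a Linux workstation; the installer URL and plug-in names reflect the IBM Cloud CLI docs, so double-check them for your platform and CLI version before you run the commands. The OpenShift CLI (oc) is a separate download from the OpenShift clients mirror.

```sh
# Install the IBM Cloud CLI (Linux installer shown; macOS and Windows installers also exist).
curl -fsSL https://clis.cloud.ibm.com/install/linux | sh

# Install the CLI plug-ins for Red Hat OpenShift on IBM Cloud and IBM Cloud Container Registry.
ibmcloud plugin install container-service
ibmcloud plugin install container-registry

# Verify the plug-ins and the separately installed OpenShift CLI.
ibmcloud plugin list
oc version --client
```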
- Log in to the account and resource group where you want to create OpenShift clusters. If you have a federated account, include the --sso flag.
ibmcloud login [-g default] [--sso]
- Create a cluster with a unique name. The following command creates a version 4.5 cluster in Washington, DC with the minimum configuration of 2 worker nodes that have at least 4 cores and 16 GB memory so that the default OpenShift components can deploy. If you have existing VLANs that you want to use, get the VLAN IDs by running ibmcloud oc vlan ls --zone <zone>. For more information, see Creating a standard classic cluster in the CLI.
ibmcloud oc cluster create classic --name my_openshift --location wdc04 --version 4.5_openshift --flavor b3c.4x16.encrypted --workers 2 --public-vlan <public_VLAN_ID> --private-vlan <private_VLAN_ID> --public-service-endpoint
- List your cluster details. Review the cluster State, check the Ingress Subdomain, and note the Master URL.
Your cluster creation might take some time to complete. After the cluster state shows Normal, the cluster network and router components take about 10 more minutes to deploy and update the cluster domain that you use for the OpenShift web console and other routes. Before you continue, wait until the cluster is ready by checking that the Ingress Subdomain follows a pattern of <cluster_name>.<globally_unique_account_HASH>-0001.<region>.containers.appdomain.cloud.
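While you wait, you can also watch the individual worker nodes provision. A minimal sketch, assuming the cluster name my_openshift from the create command:

```sh
# Check the provisioning state of each worker node in the cluster.
ibmcloud oc worker ls --cluster my_openshift
```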
ibmcloud oc cluster get --cluster <cluster_name_or_ID>
- Download and add the kubeconfig configuration file for your cluster to the existing kubeconfig in ~/.kube/config or the last file in the KUBECONFIG environment variable.
ibmcloud oc cluster config --cluster <cluster_name_or_ID>
- In your browser, navigate to the address of your Master URL and append /console. For example, https://c0.containers.cloud.ibm.com:23652/console.
- From the OpenShift web console menu bar, click your profile IAM#user.name@email.com > Copy Login Command. Copy the displayed oc login token command into your terminal and run it to authenticate from the CLI.
Save your cluster master URL to access the OpenShift console later. In future sessions, you can skip the cluster config step and copy the login command from the console instead.
Verify that the oc commands run properly with your cluster by checking the version.
oc version
Example output:
Client Version: v4.5.0
Kubernetes Version: v1.18.2
If you cannot perform operations that require Administrator permissions, such as listing all the worker nodes or pods in a cluster, download the TLS certificates and permission files for the cluster administrator by running the ibmcloud oc cluster config --cluster <cluster_name_or_ID> --admin command.
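As an extra smoke test of your CLI access (not required by the tutorial, and assuming the cluster context that the previous config command set), you can confirm the context and list the worker nodes and console pods:

```sh
# Confirm that your kubeconfig context points at the new cluster.
oc config current-context

# Check that the worker nodes are Ready.
oc get nodes

# Check that the default OpenShift web console pods are running.
oc get pods -n openshift-console
```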
Step 2: Navigating the OpenShift console
Red Hat OpenShift on IBM Cloud comes with built-in services that you can use to help operate your cluster, such as the OpenShift console.
- From the Red Hat OpenShift on IBM Cloud console, select your OpenShift cluster, then click OpenShift web console.
Explore the different areas of the OpenShift web console, as described in the following tabbed table.
OCP 4:

| Area | Location in console | Description |
| --- | --- | --- |
| Administrator perspective | Side navigation menu perspective switcher | From the Administrator perspective, you can manage and set up the components that your team needs to run your apps, such as projects for the workloads, networking, and operators for integrating IBM, Red Hat, 3rd party, and custom services into the cluster. For more information, see Viewing cluster information in the OpenShift documentation. |
| Developer perspective | Side navigation menu perspective switcher | From the Developer perspective, you can add apps to your cluster in a variety of ways, such as from Git repositories, container images, drag-and-drop or uploaded YAML files, operator catalogs, and more. The Topology view presents a unique way to visualize the workloads that run in a project and navigate their components from sidebars that aggregate related resources, including pods, services, routes, and metadata. For more information, see Developer perspective in the OpenShift documentation. |
OCP 3:

| Area | Location in console | Description |
| --- | --- | --- |
| Service Catalog | Dropdown menu in the OpenShift Container Platform menu bar | Browse the catalog of built-in services that you can deploy on OpenShift. For example, if you already have a node.js app that is hosted on GitHub, you can click the Languages tab and deploy a JavaScript app. The My Projects pane provides a quick view of all the projects that you have access to, and clicking a project takes you to the Application Console. For more information, see the OpenShift Web Console Walkthrough in the OpenShift documentation. |
| Application Console | Dropdown menu in the OpenShift Container Platform menu bar | For each project that you have access to, you can manage your OpenShift resources such as pods, services, routes, builds, images, or persistent volume claims. You can also view and analyze logs for these resources, or add services from the catalog to the project. For more information, see the OpenShift Web Console Walkthrough in the OpenShift documentation. |
| Cluster Console | Dropdown menu in the OpenShift Container Platform menu bar | For cluster-wide administrators across all the projects in the cluster, you can manage projects, service accounts, RBAC roles, role bindings, and resource quotas. You can also see the status and events for resources within the cluster in a combined view. For more information, see the OpenShift Web Console Walkthrough in the OpenShift documentation. |

- To work with your cluster in the CLI, click your profile IAM#user.name@email.com > Copy Login Command. Copy the displayed oc login token command into your terminal and run it to authenticate from the CLI.
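After you paste and run the login command, a quick way to confirm that the CLI session works is to check your identity and the projects that you can see; a minimal sketch:

```sh
# Show the user that the oc CLI is authenticated as.
oc whoami

# Show the currently selected project.
oc project

# List the projects that your user can access.
oc get projects
```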
Step 3: Deploying an app to your OpenShift cluster
With Red Hat OpenShift on IBM Cloud, you can create a new app and expose your app service via an OpenShift router for external users to use. If you took a break from the last lesson and started a new terminal, make sure that you log back in to your cluster. Open your OpenShift web console at https://<master_URL>/console. For example, https://c0.containers.cloud.ibm.com:23652/console. Then from the menu bar, click your profile IAM#user.name@email.com > Copy Login Command. Copy the displayed oc login token command into your terminal and run it to authenticate from the CLI.
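For reference, the copied login command generally has the following shape; the token and server values below are placeholders for the values from your own console session, not literal values to type:

```sh
# Authenticate the oc CLI with the token and master URL from Copy Login Command.
oc login --token=<token> --server=https://<master_URL>
```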
- Create a project for your Hello World app. A project is an OpenShift version of a Kubernetes namespace with additional annotations.
oc new-project hello-world
- Build the sample app from the source code. With the OpenShift new-app command, you can refer to a directory in a remote repository that contains the Dockerfile and app code to build your image. The command builds the image, stores the image in the local Docker registry, and creates the app deployment configurations (dc) and services (svc). For more information about creating new apps, see the OpenShift docs.
oc new-app --name hello-world https://github.com/IBM/container-service-getting-started-wt --context-dir="Lab 1"
Verify that the sample Hello World app components are created.
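For example, you can follow the build while it runs and get a summary of what new-app created. A minimal sketch, assuming the build config name hello-world from the --name flag:

```sh
# Stream the logs of the latest build for the hello-world build config.
oc logs -f bc/hello-world -n hello-world

# Summarize the deployment configs, builds, services, and routes in the project.
oc status
```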
List the hello-world services and note the service name. Your app listens for traffic on these internal cluster IP addresses unless you create a route for the service so that the router can forward external traffic requests to the app.
oc get svc -n hello-world
Example output:
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
hello-world   ClusterIP   172.21.xxx.xxx   <none>        8080/TCP   31m
List the pods. Pods with build in the name are jobs that Completed as part of the new app build process. Make sure that the hello-world pod status is Running.
oc get pods -n hello-world
Example output:
NAME                   READY   STATUS      RESTARTS   AGE
hello-world-1-9cv7d    1/1     Running     0          30m
hello-world-1-build    0/1     Completed   0          31m
hello-world-1-deploy   0/1     Completed   0          31m
- Set up a route so that you can publicly access the hello world service. By default, the hostname is in the format of <service_name>-<project>.<cluster_name>-<random_ID>.<region>.containers.appdomain.cloud. If you want to customize the hostname, include the --hostname=<hostname> flag. Note: The hostname that is assigned to your route is different than the Ingress subdomain that is assigned by default to your cluster. Your route does not use the Ingress subdomain.
- Create a route for the hello-world service.
oc create route edge --service=hello-world -n hello-world
- Get the route hostname address from the Host/Port output.
oc get route -n hello-world
Example output:
NAME          HOST/PORT                                                                                 PATH   SERVICES      PORT       TERMINATION   WILDCARD
hello-world   hello-world-hello-world.<cluster_name>-<random_ID>.<region>.containers.appdomain.cloud           hello-world   8080-tcp   edge/Allow    None
Access your app. Be sure to append https:// to your route hostname.
curl https://hello-world-hello-world.<cluster_name>-<random_ID>.<region>.containers.appdomain.cloud
Example output:
Hello world from hello-world-9cv7d! Your app is up and running in a cluster!
- Optional: To clean up the resources that you created in this lesson, you can use the labels that are assigned to each app.
- List all the resources for each app in the hello-world project.
oc get all -l app=hello-world -o name -n hello-world
Example output:
pod/hello-world-1-dh2ff
replicationcontroller/hello-world-1
service/hello-world
deploymentconfig.apps.openshift.io/hello-world
buildconfig.build.openshift.io/hello-world
build.build.openshift.io/hello-world-1
imagestream.image.openshift.io/hello-world
imagestream.image.openshift.io/node
route.route.openshift.io/hello-world
- Delete all the resources that you created.
oc delete all -l app=hello-world -n hello-world
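If you also want to remove the project itself rather than only the labeled resources, a minimal sketch:

```sh
# Delete the hello-world project, which removes the namespace and anything left in it.
oc delete project hello-world
```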
What's next?
For more information about working with your apps, see the OpenShift developer activities documentation.
Install two popular Red Hat OpenShift on IBM Cloud add-ons: IBM Log Analysis with LogDNA and IBM Cloud Monitoring with Sysdig.