
Set up an image registry

Red Hat OpenShift on IBM Cloud clusters include an internal registry to build, deploy, and manage container images locally. For a private registry to manage and control access to images across your enterprise, you can also set up the cluster to use IBM Cloud Container Registry.


Choosing an image registry solution

Your app's images must be stored in a container registry that the cluster can access so that you can deploy apps into the cluster. You can choose to use the built-in registry of your OpenShift cluster, a private registry with access restricted to select users, or a public registry. Review the following options to decide which is best suited for your use case.

Internal OpenShift Container Registry (OCR): Your cluster is set up with the internal OpenShift Container Registry so that OpenShift can automatically build, deploy, and manage the application lifecycle from within the cluster. Images are stored in a backing IBM Cloud classic file storage device that is provisioned at cluster creation time. For more storage, you can resize the device.

Use cases:
  • OpenShift-native image stream, build, and app deployment processes on a per-cluster basis.
  • Images can be shared across all projects in the cluster, with access that is controlled through RBAC roles.
  • Integrating the internal registry with other Red Hat products like CloudForms for extended features such as vulnerability scanning.
  • Option to expose the internal registry with a route so that users can pull images from the registry over the public network.
  • Option to set up the internal registry to pull images from or push images to a private registry such as IBM Cloud Container Registry.

For more information, see Using the internal registry.
Private registry: Private registries are a good choice to protect your images from being used and changed by unauthorized users. Private registries must be set up by the cluster administrator to make sure that access, storage quotas, image trust, and other features work as intended.

By default, your OpenShift clusters are integrated with the private IBM Cloud Container Registry through image pull secrets that are set up in the default project. IBM Cloud Container Registry is a highly available, multi-tenant private registry to store your own images. You can also pull IBM-provided images from the global icr.io registry, and licensed software from the entitled registry. With IBM Cloud Container Registry, you can manage images for multiple clusters with seamless integration with IBM Cloud IAM and billing.

Advantages of using IBM Cloud Container Registry with the internal registry:
  • Local image caching for faster builds via the internal registry.
  • Deployments in other projects can refer to the image stream so that you do not need to copy pull secrets to each project.
  • Sharing images across multiple clusters without needing to push images to multiple registries.
  • Automatic vulnerability scanning of images.
  • Controlling access through IBM Cloud IAM policies and separate regional registries.
  • Retaining images without requiring storage space in the cluster or an attached storage device. You can also set retention policies to manage the quantity of images so that they do not take up too much space.
  • Version 4 clusters on VPC infrastructure: Using the private registry service endpoint so that clusters that use only a private service endpoint can still access the registry.
  • Setting storage and image pull traffic quotas to better control image storage, usage, and billing.
  • Pulling licensed IBM content from the entitled registry.

To get started, see Importing images from IBM Cloud Container Registry into the internal registry image stream, Set up builds in the internal registry to push images to IBM Cloud Container Registry, and Using IBM Cloud Container Registry later in this topic.
Public registry: Public registries such as Docker Hub are an easy way to share images across teams, companies, clusters, or cloud providers. Some public registries might also offer a private registry component.

Use cases:
  • Pushing and pulling images on the public network.
  • Quick testing of a container across multiple cloud providers.
  • No need for enterprise-grade features such as vulnerability scanning or access management.

For more information, see the public registry's documentation, such as Quay or Docker Hub.


Storing images in the internal registry

OpenShift clusters are set up by default with an internal registry. How the images in the internal registry are backed up varies depending on the infrastructure provider of your Red Hat OpenShift on IBM Cloud cluster.

  • Classic clusters: Your OpenShift cluster is set up by default with an internal registry that uses classic IBM Cloud File Storage as the backing storage. When you delete the cluster, the internal registry and its images are also deleted. To persist your images, consider using a private registry such as IBM Cloud Container Registry, backing up your images to persistent storage such as Object Storage, or creating a separate, stand-alone OpenShift container registry (OCR) cluster. For more information, see the OpenShift docs.
  • VPC clusters (version 4 only): The internal registry of your OpenShift cluster backs up your images to a bucket that is automatically created in an IBM Cloud Object Storage instance in your account. Any data that is stored in the object storage bucket remains even if you delete the cluster.


VPC: Backing up the OpenShift internal image registry to IBM Cloud Object Storage

Your images in the OpenShift cluster internal registry are automatically backed up to an IBM Cloud Object Storage bucket. Any data that is stored in the object storage bucket remains even if you delete the cluster.

The internal registry is backed up to IBM Cloud Object Storage only for Red Hat OpenShift on IBM Cloud clusters that run version 4 on VPC generation 2 compute infrastructure.

However, if the bucket fails to be created when you create the cluster, you must manually create a bucket and set up the cluster to use it. In the meantime, the internal registry uses an emptyDir Kubernetes volume that stores your container images on the secondary disk of your worker node. emptyDir volumes are not persistent, highly available storage, and if you delete the pods that use the images, the images are deleted.
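
To check whether the internal registry is currently configured with a bucket or is still using the temporary emptyDir fallback, you can inspect the image registry operator configuration. The following command is a sketch for OpenShift 4 clusters; the exact fields that you see in the storage section depend on your setup.

# Show the storage configuration of the internal registry (OpenShift 4).
# An emptyDir entry indicates the temporary fallback; an object storage entry
# indicates that a bucket is configured.
oc get configs.imageregistry.operator.openshift.io/cluster -o jsonpath='{.spec.storage}{"\n"}'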

To manually create a bucket for the internal registry, see Cluster create error about cloud object storage bucket.

For clusters that run OpenShift version 4.3 or 4.4, you might need to modify the default configuration so that external sources outside the VPC, such as a CI/CD process, can push images to the internal registry.
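
One common way to allow pushes from outside the cluster in OpenShift 4 is to expose the internal registry through the image registry operator's default route. The following commands are a sketch that uses the standard operator API; whether this particular change is the modification that your cluster needs depends on your VPC and CI/CD setup, so verify it against your configuration first.

# Expose the internal registry with the operator-managed default route.
oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"defaultRoute":true}}'

# Retrieve the hostname of the generated route so that external tools can push to it.
oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}{"\n"}'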


Classic: Storing images in the internal registry

By default, the OpenShift cluster's internal registry uses an IBM Cloud File Storage volume to store the registry images. You can review the default size of the storage volume, or update the volume size.

To view volume details, including the storage class and size, you can describe the persistent volume claim.

  • Version 4:
    oc describe pvc -n openshift-image-registry image-registry-storage
    
  • Version 3:
    oc describe pvc registry-backing -n default
    

Example output:

Name:          image-registry-storage
Namespace:     openshift-image-registry
StorageClass:  ibmc-file-gold
Status:        Bound
Volume:        pvc-<ID_string>
Labels:        billingType=hourly
      region=us-south
      zone=dal10
Annotations:   ibm.io/provisioning-status: complete
      imageregistry.openshift.io: true
      pv.kubernetes.io/bind-completed: yes
      pv.kubernetes.io/bound-by-controller: yes
      volume.beta.kubernetes.io/storage-provisioner: ibm.io/ibmc-file
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      100Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Events:        <none>
Mounted By:    image-registry-<ID_string>

If your registry needs additional gigabytes of storage for the images, you can resize the file storage volume. For more information, see Changing the size and IOPS of your existing storage device. When you resize the volume in your IBM Cloud infrastructure account, the attached PVC description is not updated. Instead, you can log in to the openshift-image-registry (OpenShift 4) or docker-registry (OpenShift 3.11) pod that uses the registry-backing PVC to verify that the volume is resized.
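
For example, you can check the size that the registry pod sees for its mounted volume. This sketch assumes an OpenShift 4 cluster and the default /registry mount path of the internal registry; adjust the project and pod names for OpenShift 3.11.

# Find the internal registry pod.
oc get pods -n openshift-image-registry

# Check the capacity of the mounted registry volume from inside the pod.
oc exec -n openshift-image-registry <image_registry_pod> -- df -h /registry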



Set up a secure external route for the internal registry

By default, the OpenShift cluster has an internal registry that is available through a service with an internal IP address. To make the internal registry available on the public network, you can set up a secure re-encrypt route. For example, you might set up the cluster's internal registry to act as a public registry for deployments in other projects or clusters.

Before beginning:

For OpenShift 3.11 clusters, the internal registry is in the default project and uses the docker-registry service. The following steps are specific to version 4 clusters. To set up the route in a 3.11 cluster, replace the openshift-image-registry project with default and the image-registry service with docker-registry.

To use the internal registry, set up a public route to access the registry. Then, create an image pull secret that includes the credentials to access the registry so that deployments in other projects can pull images from this registry.

  1. From the openshift-image-registry project, make sure that the image-registry service exists for the internal registry.

    oc get svc
    

    Example output:

    NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
    image-registry            ClusterIP      172.21.xxx.xxx    <none>          5000/TCP                     36d
    image-registry-operator   ClusterIP      None             <none>          60000/TCP                     36d
    
  2. Create a secured route for the image-registry service that uses reencrypt TLS termination. With re-encryption, the router terminates the TLS connection with a certificate, and then re-encrypts the connection to the internal registry with a different certificate. With this approach, the full path of the connection between the user and the internal registry is encrypted. To provide your own custom domain name, include the --hostname flag.
    oc create route reencrypt --service=image-registry
    
  3. Retrieve the hostname (HOST/PORT) and the PORT that were assigned to the image-registry route.
    oc get route image-registry
    
    Example output:
    NAME              HOST/PORT                                                                                                  PATH      SERVICES          PORT       TERMINATION   WILDCARD
    image-registry   image-registry-openshift-image-registry.<cluster_name>-<ID_string>.<region>.containers.appdomain.cloud             image-registry   5000-tcp   reencrypt     None
    
  4. Edit the route to set the load balancing strategy to source so that the same client IP address reaches the same server, as in a passthrough route setup. You can set the strategy by adding an annotation in the metadata.annotations section: haproxy.router.openshift.io/balance: source. You can edit the configuration file from the OpenShift web console or in your terminal by running the following command.

    oc edit route image-registry
    

    Annotation to add:

     apiVersion: route.openshift.io/v1
     kind: Route
     metadata:
       annotations:
         haproxy.router.openshift.io/balance: source
     ...
    
  5. If corporate network policies prevent access from your local system to public endpoints through proxies or firewalls, allow access to the route subdomain that you created for the internal registry.

  6. Log in to the internal registry by using the route as the hostname.

    docker login -u $(oc whoami) -p $(oc whoami -t) image-registry-openshift-image-registry.<cluster_name>-<ID_string>.<region>.containers.appdomain.cloud
    
  7. Now that you are logged in, try pushing a sample hello-world app to the internal registry.

    1. Pull the hello-world image from Docker Hub, or build an image on your local machine.
      docker pull hello-world
      
    2. Tag the local image with the hostname of your internal registry, the project that you want to deploy the image to, and the image name and tag.
      docker tag hello-world:latest image-registry-openshift-image-registry.<cluster_name>-<ID_string>.<region>.containers.appdomain.cloud/<project>/<image_name>:<tag>
      
    3. Push the image to the cluster's internal registry.
      docker push image-registry-openshift-image-registry.<cluster_name>-<ID_string>.<region>.containers.appdomain.cloud/<project>/<image_name>:<tag>
      
    4. Verify that the image is added to the OpenShift image stream.

      oc get imagestream
      

      Example output:

      NAME          DOCKER REPO                                                            TAGS      UPDATED
      hello-world   image-registry-openshift-image-registry.svc:5000/default/hello-world   latest    7 hours ago
      
  8. To enable deployments in the project to pull images from the internal registry, create an image pull secret in the project that holds the credentials to access your internal registry. Then, add the image pull secret to the default service account for each project.

    1. List the image pull secrets that the default service account uses, and note the secret that begins with default-dockercfg.
      oc describe sa default
      
      Example output:
      ...
      Image pull secrets:
          all-icr-io
          default-dockercfg-mpcn4
      ...
      
    2. Get the encoded secret information from the data field of the configuration file.
      oc get secret <default-dockercfg-name> -o yaml
      
      Example output:
      apiVersion: v1
      data:
        .dockercfg: ey...=
      
    3. Decode the value of the data field.
      echo "<ey...=>" | base64 --decode
      
      Example output:
      {"172.21.xxx.xxx:5000":{"username":"serviceaccount","password":"eyJ...
      
    4. Create a new image pull secret for the internal registry.

      • secret_name: Give your image pull secret a name, such as internal-registry.
      • --namespace: Enter the project to create the image pull secret in, such as default.
      • --docker-server: Instead of the internal service IP address (172.21.xxx.xxx:5000), enter the hostname of the image-registry route with the port (image-registry-openshift-image-registry.<cluster_name>-<ID_string>.<region>.containers.appdomain.cloud:5000).
      • --docker-username: Copy the "username" from the previous image pull secret, such as serviceaccount.
      • --docker-password: Copy the "password" from the previous image pull secret.
      • --docker-email: If you have a Docker email address, enter it. If you do not, enter a fictional email address, such as a@b.c. This email is required to create a Kubernetes secret, but is not used after creation.
      oc create secret docker-registry internal-registry --namespace default --docker-server image-registry-openshift-image-registry.<cluster_name>-<ID_string>.<region>.containers.appdomain.cloud:5000 --docker-username serviceaccount --docker-password <eyJ...> --docker-email a@b.c
      
    5. Add the image pull secret to the default service account of your project.
      oc patch -n <namespace_name> serviceaccount/default --type='json' -p='[{"op":"add","path":"/imagePullSecrets/-","value":{"name":"<image_pull_secret_name>"}}]'
      
    6. Repeat these steps for each project where you want to pull images from the internal registry.

Now that you set up the internal registry with an accessible route, you can log in to the registry and push and pull images. For more information, see the OpenShift documentation.
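
For example, a pod in another project can pull the image that you pushed through the route by referring to the image pull secret that you created. The following pod sketch uses hypothetical names; replace the placeholders with the route hostname, project, and image that you used in the previous steps.

apiVersion: v1
kind: Pod
metadata:
  name: hello-world-from-route
spec:
  # Pull secret with the route credentials that you created in the previous steps.
  imagePullSecrets:
    - name: internal-registry
  containers:
    - name: hello-world
      # Image reference through the public route, matching the tag that you pushed.
      image: image-registry-openshift-image-registry.<cluster_name>-<ID_string>.<region>.containers.appdomain.cloud/<project>/<image_name>:<tag>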



Importing images from IBM Cloud Container Registry into the internal registry image stream

By default, your Red Hat OpenShift on IBM Cloud cluster is set up to pull images from the remote, private IBM Cloud Container Registry icr.io domains in the default project. You can import an image from IBM Cloud Container Registry into the internal registry of your OpenShift cluster by tagging the image as an image stream. With this setup, you can deploy apps from the image by using the local cache of the internal registry, which can make app deployments build faster. Also, deployments in other projects can refer to the image stream so that you do not have to create image pull secret credentials to IBM Cloud Container Registry in each project.

If you update your image in IBM Cloud Container Registry, the image is not pulled automatically into the internal registry of your OpenShift cluster. Instead, configure periodic importing, or repeat these steps to tag the image. Depending on the image pull policy that you use in the deployment, you might also need to restart the deployment.
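
For example, to refresh the local copy manually instead of waiting for a scheduled import, you can re-import the image stream tag and restart the deployment. The commands are a sketch; the image and deployment names are placeholders.

# Re-import the latest image from IBM Cloud Container Registry into the existing image stream tag.
oc import-image <image>:<tag> -n default

# Depending on the image pull policy, restart the deployment so that it picks up the new image (OpenShift 4 clients).
oc rollout restart deployment/<deployment_name> -n <project>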

Want to learn more about how builds, image streams, and the internal registry work together? Read the OpenShift docs, or check out this blog on managing container images.

  1. Access the OpenShift cluster.
  2. Switch to the default project to pull your image into the image stream. The default project is already set up with credentials to access the icr.io registries.
    oc project default
    
  3. List the available images in your IBM Cloud Container Registry. Note the Repository and Tag of the image that you want to pull into the internal registry of your OpenShift cluster.
    ibmcloud cr images
    
  4. Tag the image to pull it from your IBM Cloud Container Registry namespace into the internal registry as an image stream. For more information, see the OpenShift documentation or run oc tag --help.

    oc tag <region>.icr.io/<namespace>/<image>:<tag> default/<image>:<tag> --reference-policy=local [--scheduled]
    
    Parameter Description
    <region>.icr.io/<namespace>/<image>:<tag> Use the Repository and Tag information that you previously retrieved to fill in the IBM Cloud Container Registry region, namespace, image, and tag name of the image that you want to pull.
    default/<image>:<tag> Enter the information for the internal image stream that you create from the tagged IBM Cloud Container Registry image. You create this image stream in the default project, which is also where the image stream is created if you do not specify a project. The values for <image>:<tag> typically match the values that you previously retrieved.
    --reference-policy=local Set this value to local so that a copy of the image from IBM Cloud Container Registry is imported into the local cache of the internal registry and made available to the cluster's projects as an image stream. If you do not include this value, the image stream refers back to IBM Cloud Container Registry when you use it in the deployments and therefore requires credentials in the project.
    --scheduled Set this optional flag to set up periodic importing of the image from IBM Cloud Container Registry into the internal registry. The default frequency is 15 minutes. For more information, see the OpenShift documentation.
  5. Verify that your image stream is created.
    oc get imagestreams
    
  6. Verify that the image stream successfully pulled the image from IBM Cloud Container Registry. In the output, check that the latest tagged from image matches your * <region>.icr.io/<namespace>/<image>@<digest> image.

    oc describe is/<imagestream>
    

    Example output:

     Name:                <imagestream>
     Namespace:           default
     Created:             2 days ago
     Labels:              <none>
     Annotations:         openshift.io/image.dockerRepositoryCheck=2020-03-31T09:41:36Z
     Image Repository:    image-registry.openshift-image-registry.svc:5000/default/<imagestream>
     Image Lookup:        local=false
     Unique Images:       1
     Tags:                1

     latest
       tagged from <region>.icr.io/<namespace>/<image>:<tag>

         * <region>.icr.io/<namespace>/<image>@<digest>
             2 days ago
    

Now, your developers can use the image stream in an app deployment. The image builds from the locally pulled copy in the internal registry, and you do not need to set up an image pull secret to IBM Cloud Container Registry in the project, because the image stream is local to the cluster.
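
For example, a developer might create an app directly from the image stream. This oc new-app command is a sketch; the app name is a placeholder, and you can also reference the internal registry address of the image stream (shown in the Image Repository field of the previous output) directly in a deployment.

# Create an app from the image stream in the default project.
oc new-app --image-stream=default/<image>:<tag> --name=<app_name>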



Set up builds in the internal registry to push images to IBM Cloud Container Registry

When you create a build in your Red Hat OpenShift on IBM Cloud cluster, you can set up the internal registry to push the image to your external repository in IBM Cloud Container Registry. By default, the image pull secret in the default project of the cluster has only read access to pull images from IBM Cloud Container Registry. To push images, you must add a secret with write access.

  1. Access the OpenShift cluster.
  2. Switch to the default project.
    oc project default
    
  3. Follow the steps to set up an IBM Cloud IAM API key with the Reader and Writer service access roles to pull images from and push images to your icr.io registries. Keep in mind that any user with access to the project can use this secret to push images to your private registry. You might want to set up logging and monitoring tools so that you can observe which user performs which actions in the cluster.
  4. Repeat the previous step for each icr.io region that you want to push images to.
  5. Add the secret to the build service account and refer to the secrets in the build configuration file. For more information, see the OpenShift documentation.
    1. Add the secret to the build service account by linking the secret that you just created to the builder service account that builds in the project use.
      oc secrets link builder <secret_name>
      
    2. List the build configurations and note the ones that you want to give push and pull access to IBM Cloud Container Registry.
      oc get bc
      
    3. Set the image push secret for the build configuration to use the secret that you just created with Writer service access to IBM Cloud Container Registry.
      oc set build-secret --push bc/<build_config_name> <secret_name>
      
    4. Set the image pull secret for the build configuration to pull the initial build image from the registry that you want. For example, you can use the secret that you just created with Reader service access to IBM Cloud Container Registry if the source image is in an IBM Cloud Container Registry repository.
      oc set build-secret --pull bc/<build_config_name> <secret_name>
      
  6. After you update the build service account and build configuration file to push to IBM Cloud Container Registry, restart your build.
    oc start-build <build_name>
    
  7. Get the name of your build pod, such as <build>-2-build.
    oc get pods
    
  8. Check the logs of the build and note where the image was pushed.

    oc logs <build_pod>
    

    Example of a successful image push log:

    ...
    Successfully pushed <region>.icr.io/<namespace>/<build_name>@sha256:<hash>
    Push successful
    
  9. Check your images in your private registry to confirm that the image is created.

    ibmcloud cr image list
    

    Example output:

    Repository                                Tag       Digest     Namespace     Created         Size     Security status   
    <region>.icr.io/<namespace>/<build_name>  latest    <digest>   <namespace>   2 minutes ago   182 MB   33 Issues
    

Your OpenShift build can now pull images from and push images to IBM Cloud Container Registry.
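
For reference, after you run oc set build-secret --push, the output section of your build configuration points to your IBM Cloud Container Registry repository and the push secret. The following snippet is a sketch of the relevant fields only; the names are placeholders.

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: <build_config_name>
spec:
  output:
    # Push the built image to your IBM Cloud Container Registry repository.
    to:
      kind: DockerImage
      name: <region>.icr.io/<namespace>/<build_name>:<tag>
    # Secret with Writer service access, set by 'oc set build-secret --push'.
    pushSecret:
      name: <secret_name>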



Using IBM Cloud Container Registry

By default, your Red Hat OpenShift on IBM Cloud cluster is set up to pull images from the remote, private IBM Cloud Container Registry icr.io domains in the default project. To use images that are stored in IBM Cloud Container Registry in other projects, you can pull the image into the internal registry as an image stream, or create image pull secrets for each global and regional registry in each project.

To import images into the internal registry: See Importing images from IBM Cloud Container Registry into the internal registry image stream.

To pull images directly from the external IBM Cloud Container Registry: See the following topics.



Understand how to authorize the cluster to pull images from a private registry

To pull images from a registry, your Red Hat OpenShift on IBM Cloud cluster uses a special type of Kubernetes secret, an imagePullSecret. This image pull secret stores the credentials to access a container registry.

The container registry can be:

  • A private namespace in your own IBM Cloud Container Registry.
  • A private namespace in IBM Cloud Container Registry that belongs to a different IBM Cloud account.
  • Any other private registry, such as Docker Hub.

However, by default, the cluster is set up only to pull images from your account's namespaces in IBM Cloud Container Registry and to deploy containers from these images to the default OpenShift project in the cluster. If you need to pull images in other projects of the cluster or from other container registries, you must set up your own image pull secrets.
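
To see which registry domains the default setup already covers, you can decode the default image pull secret. This command is a sketch that assumes the all-icr-io secret exists in the default project and that jq is installed (jq is also used later in this topic).

# List the registry domains that the default image pull secret authorizes.
oc get secret all-icr-io -n default -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode | jq -r '.auths | keys[]'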


Default image pull secret setup

Generally, your Red Hat OpenShift on IBM Cloud cluster is set up to pull images from all IBM Cloud Container Registry icr.io domains in the default OpenShift project only. Review the following FAQs to learn how to pull images in other OpenShift projects or accounts, how to restrict pull access, and why the cluster might not have the default image pull secrets.

How is my cluster set up to pull images in the default OpenShift project?
When you create a cluster, the cluster has an IBM Cloud IAM service ID that is given an IAM Reader service access role policy to IBM Cloud Container Registry. The service ID credentials are impersonated in a non-expiring API key that is stored in image pull secrets in the cluster. The image pull secrets are added to the default Kubernetes namespace and the list of secrets in the default service account for this OpenShift project. By using image pull secrets, the deployments can pull images (read-only access) from the global and regional IBM Cloud Container Registry to deploy containers in the default OpenShift project.

  • The global registry securely stores public images that are provided by IBM. We can refer to these public images across the deployments instead of having different references for images that are stored in each regional registry.
  • The regional registry securely stores your own private Docker images.

What if I don't have image pull secrets in the default OpenShift project?
You can check the image pull secrets by logging in to the cluster and running oc get secrets -n default | grep "icr-io". If no icr secrets are listed, the person who created the cluster might not have had the required permissions to IBM Cloud Container Registry in IAM. See Updating existing clusters to use the API key image pull secret.

Can I restrict pull access to a certain regional registry?
Yes, you can edit the existing IAM policy of the service ID to restrict the Reader service access role to that regional registry or to a registry resource such as a namespace. Before you can customize registry IAM policies, you must enable IBM Cloud IAM policies for IBM Cloud Container Registry.

Want to make your registry credentials even more secure? Ask your cluster admin to enable a key management service provider in the cluster to encrypt Kubernetes secrets, such as the image pull secret that stores your registry credentials.

Can I pull images in an OpenShift project other than default?
Not by default. With the default cluster setup, you can deploy containers from any image that is stored in your IBM Cloud Container Registry namespace into the default OpenShift project of the cluster. To use these images in any other OpenShift projects or other IBM Cloud accounts, you can copy or create your own image pull secrets.

Can I pull images from a different IBM Cloud account?
Yes, create an API key in the IBM Cloud account that you want to use. Then, in each project of each cluster where you want to pull images from that IBM Cloud account, create a secret that holds the API key. For more information, follow along with this example that uses an authorized service ID API key.

To use a non-IBM Cloud registry such as Docker, see Accessing images that are stored in other private registries.

Does the API key need to be for a service ID? What happens if I reach the limit of service IDs for my account?
The default cluster setup creates a service ID to store IBM Cloud IAM API key credentials in the image pull secret. However, you can also create an API key for an individual user and store those credentials in an image pull secret. If you reach the IAM limit for service IDs, the cluster is created without the service ID and image pull secret, and cannot pull images from the icr.io registry domains by default. In this case, create your own image pull secret by using an API key for an individual user, such as a functional ID, not an IBM Cloud IAM service ID.

I see image pull secrets for the regional registry domains and all registry domains. Which one do I use?
Previously, Red Hat OpenShift on IBM Cloud created separate image pull secrets for each regional, public icr.io registry domain. Now, all the public and private icr.io registry domains for all regions are stored in a single all-icr-io image pull secret that is automatically created in the default project of your cluster.

To let workloads pull container images in other projects, you can now copy only the all-icr-io image pull secret to that project, and specify the all-icr-io secret in your service account or deployment. You do not need to copy the image pull secret that matches the regional registry of your image anymore.

After I copy or create an image pull secret in another OpenShift project, am I done?
Not quite. Your containers must be authorized to pull images by using the secret that you created. You can add the image pull secret to the service account for the project, or refer to the secret in each deployment. For instructions, see Using the image pull secret to deploy containers.


Private network connection to icr.io registries

When you set up your IBM Cloud account to use service endpoints, you can use a private network connection to push images to and pull images from IBM Cloud Container Registry. When you use the private network to pull images, your image pull traffic is not charged as public bandwidth, because the traffic stays on the private network. For more information, see the IBM Cloud Container Registry private network documentation.

What do I need to do to set up my cluster to use the private connection to icr.io registries?

  1. Enable virtual routing and forwarding (VRF) for your IBM Cloud infrastructure account so that you can use the IBM Cloud Container Registry private service endpoint. To enable VRF, contact your IBM Cloud infrastructure account representative. To check whether VRF is already enabled, use the ibmcloud account show command.
  2. Enable your IBM Cloud account to use service endpoints. See the example commands after this list.
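
The following commands are a sketch of these two steps; the ibmcloud CLI flags can change between versions, so confirm them with ibmcloud account update --help.

# Check whether VRF and service endpoints are already enabled for the account.
ibmcloud account show

# Enable the account to use service endpoints.
ibmcloud account update --service-endpoint-enable true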

Now, IBM Cloud Container Registry automatically uses the private service endpoint. You do not need to enable the private service endpoint for the Red Hat OpenShift on IBM Cloud clusters.

Do I have to use the private icr.io registry addresses for anything else?
Yes, if you sign your images for trusted content, the signatures contain the registry domain name. To use the private icr.io domain for the signed images, re-sign your images with the private icr.io domains.



Updating existing clusters to use the API key image pull secret

New Red Hat OpenShift on IBM Cloud clusters store an API key in image pull secrets to authorize access to IBM Cloud Container Registry. With these image pull secrets, you can deploy containers from images that are stored in the icr.io registry domains. You can add the image pull secrets to the cluster if the cluster was not created with the secrets.

Before beginning:

  • Access the OpenShift cluster.
  • Make sure that you have the following permissions:
    • IBM Cloud IAM Operator or Administrator platform role for Red Hat OpenShift on IBM Cloud. The account owner can give you the role by running:
      ibmcloud iam user-policy-create <your_user_email> --service-name containers-kubernetes --roles <(Administrator|Operator)>
      
    • IBM Cloud IAM Administrator platform role for IBM Cloud Container Registry, across all regions and resource groups. The policy cannot be scoped to a particular region or resource group. The account owner can give you the role by running:
      ibmcloud iam user-policy-create <your_user_email> --service-name container-registry --roles Administrator
      

To update the cluster image pull secret in the default Kubernetes namespace:

  1. Get the cluster ID.
    ibmcloud oc cluster ls
    
  2. Run the following command to create a service ID for the cluster and assign the service ID an IAM Reader service role for IBM Cloud Container Registry. The command also creates an API key to impersonate the service ID credentials and stores the API key in a Kubernetes image pull secret in the cluster. The image pull secret is in the default OpenShift project.

    ibmcloud oc cluster pull-secret apply --cluster <cluster_name_or_ID>
    

     When you run this command, the creation of IAM credentials and image pull secrets is initiated and can take some time to complete. You cannot deploy containers that pull an image from the IBM Cloud Container Registry icr.io domains until the image pull secrets are created.

  3. Verify that the image pull secrets are created in the cluster.

    oc get secrets | grep icr-io
    

    Example output:

    all-icr-io           kubernetes.io/dockerconfigjson        1         16d
    

     To maintain backwards compatibility, the OpenShift 3.11 clusters have a separate image pull secret for each IBM Cloud Container Registry region. However, you can copy and refer to only the all-icr-io image pull secret, which has credentials for the public and private icr.io registry domains for all regions.

  4. Update your container deployments to pull images from the icr.io domain name, as in the example after this list.

  5. Optional: If you have a firewall, make sure that you allow outbound network traffic to the registry subnets for the domains that you use.
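
The following command is one way to update a deployment to use an icr.io image; it is a sketch with placeholder deployment and container names.

# Point an existing deployment at the image in IBM Cloud Container Registry.
oc set image deployment/<deployment_name> <container_name>=<region>.icr.io/<namespace>/<image>:<tag> -n <project>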

What's next?



Using an image pull secret to access images in other IBM Cloud accounts or external private registries from non-default OpenShift projects

Set up your own image pull secret in the cluster to deploy containers to OpenShift projects other than default, to use images that are stored in other IBM Cloud accounts, or to use images that are stored in external private registries. Further, you might create your own image pull secret to apply IAM access policies that restrict permissions to specific registry image namespaces or actions (such as push or pull).

After you create the image pull secret, your containers must use the secret to be authorized to pull an image from the registry. You can add the image pull secret to the service account for the project, or refer to the secret in each deployment. For instructions, see Using the image pull secret to deploy containers.

Image pull secrets are valid only for the OpenShift projects that they were created in. Repeat these steps for every project where you want to deploy containers.

Before beginning:

  1. Set up a namespace in IBM Cloud Container Registry and push images to this namespace.
  2. Create an OpenShift cluster.
  3. Access the OpenShift cluster.


To use your own image pull secret, choose among the following options:


If you already created an image pull secret in the project that you want to use in the deployment, see Deploying containers by using the created imagePullSecret.


Copying an existing image pull secret

You can copy an image pull secret, such as the one that is automatically created for the default OpenShift project, to other projects in the cluster. To use different IBM Cloud IAM API key credentials for this project, such as to restrict access to specific registry namespaces or to pull images from other IBM Cloud accounts, create an image pull secret instead.

  1. List available OpenShift projects in the cluster, or create a project to use.

    oc get projects
    

    Example output:

    default          Active
    ibm-cert-store   Active
    ibm-system       Active
    kube-public      Active
    kube-system      Active
    

    To create a project:

    oc new-project <project_name>
    
  2. List the existing image pull secrets in the default OpenShift project for IBM Cloud Container Registry.
    oc get secrets -n default | grep icr-io
    
    Example output:
    all-icr-io          kubernetes.io/dockerconfigjson        1         16d
    
  3. Copy the all-icr-io image pull secret from the default project to the project of your choice. The copied image pull secret keeps the name all-icr-io in the new project.
    oc get secret all-icr-io -n default -o yaml | sed 's/default/<new-project>/g' | oc create -n <new-project> -f -
    
  4. Verify that the secrets are created successfully.
    oc get secrets -n <project_name> | grep icr-io
    
  5. To deploy containers, add the image pull secret to each deployment or to the service account of the project so that any deployment in the project can pull images from the registry.


Creating an image pull secret with different IAM API key credentials for more control or access to images in other IBM Cloud accounts

You can assign IBM Cloud IAM access policies to users or a service ID to restrict permissions to specific registry image namespaces or actions (such as push or pull). Then, create an API key and store these registry credentials in an image pull secret for the cluster.

For example, to access images in other IBM Cloud accounts, create an API key that stores the IBM Cloud Container Registry credentials of a user or service ID in that account. Then, in the cluster's account, save the API key credentials in an image pull secret for each cluster and cluster project.

The following steps create an API key that stores the credentials of an IBM Cloud IAM service ID. Instead of using a service ID, you might want to create an API key for a user ID that has an IBM Cloud IAM service access policy to IBM Cloud Container Registry. However, make sure that the user is a functional ID, or have a plan in place in case the user leaves, so that the cluster can still access the registry.

  1. List available OpenShift projects in the cluster, or create a project where you want to deploy containers from your registry images.

    oc get projects
    

    Example output:

    default          Active
    ibm-cert-store   Active
    ibm-system       Active
    kube-public      Active
    kube-system      Active
    

    To create a project:

    oc new-project <project_name>
    
  2. Create an IBM Cloud IAM service ID for the cluster that is used for the IAM policies and API key credentials in the image pull secret. Be sure to give the service ID a description that helps you retrieve the service ID later, such as including both the cluster and project name.
    ibmcloud iam service-id-create <cluster_name>-<project>-id --description "Service ID for IBM Cloud Container Registry in OpenShift cluster <cluster_name> project <project>"
    
  3. Create a custom IBM Cloud IAM policy for the cluster service ID that grants access to IBM Cloud Container Registry.
    ibmcloud iam service-policy-create <cluster_service_ID> --roles <service_access_role> --service-name container-registry [--region <IAM_region>] [--resource-type namespace --resource <registry_namespace>]
    
    Component Description
    <cluster_service_ID> Required. Replace with the <cluster_name>-<project>-id service ID that you previously created for the cluster.
    --service-name container-registry Required. Enter container-registry so that the IAM policy is for IBM Cloud Container Registry.
    --roles <service_access_role> Required. Enter the service access role for IBM Cloud Container Registry that we want to scope the service ID access to. Possible values are Reader, Writer, and Manager.
    --region <IAM_region> Optional. To scope the access policy to certain IAM regions, enter the regions in a comma-separated list. Possible values are global and the local registry regions.
    --resource-type namespace --resource <registry_namespace> Optional. To limit access to only images in certain IBM Cloud Container Registry namespaces, enter namespace for the resource type and specify the <registry_namespace>. To list registry namespaces, run ibmcloud cr namespaces.
  4. Create an API key for the service ID. Name the API key similarly to your service ID, and include the service ID that you previously created, <cluster_name>-<project>-id. Be sure to give the API key a description that helps you retrieve the key later.
    ibmcloud iam service-api-key-create <cluster_name>-<project>-key <cluster_name>-<project>-id --description "API key for service ID <service_id> in OpenShift cluster <cluster_name> project <project>"
    
  5. Retrieve your API Key value from the output of the previous command.

    Please preserve the API key! It cannot be retrieved after it's created.
    
     Name          <cluster_name>-<project>-key   
    Description   key_for_registry_for_serviceid_for_kubernetes_cluster_multizone_namespace_test   
    Bound To      crn:v1:bluemix:public:iam-identity::a/1bb222bb2b33333ddd3d3333ee4ee444::serviceid:ServiceId-ff55555f-5fff-6666-g6g6-777777h7h7hh   
    Created At    2019-02-01T19:06+0000   
    API Key       i-8i88ii8jjjj9jjj99kkkkkkkkk_k9-llllll11mmm1   
    Locked        false   
    UUID          ApiKey-222nn2n2-o3o3-3o3o-4p44-oo444o44o4o4
    
  6. Create an image pull secret to store the API key credentials in the cluster project. Repeat this step in each project of each cluster, for each icr.io domain that you want to pull images from.

    oc --namespace <project> create secret docker-registry <secret_name> --docker-server=<registry_URL> --docker-username=iamapikey --docker-password=<api_key_value> --docker-email=<docker_email>
    
    Component Description
    --namespace <project> Required. Specify the OpenShift project of the cluster that you used for the service ID name.
    <secret_name> Required. Enter a name for the image pull secret.
    --docker-server <registry_URL> Required. Set the URL to the image registry where your registry namespace is set up. For available domains, see Local regions.
    --docker-username iamapikey Required. Enter the username to log in to your private registry. For IBM Cloud Container Registry, the username is set to the value iamapikey.
    --docker-password <api_key_value> Required. Enter the value of your API key that you previously retrieved.
    --docker-email <docker_email> Required. If you have a Docker email address, enter it. If you do not, enter a fictional email address, such as a@b.c. This email is required to create a Kubernetes secret, but is not used after creation.
  7. Verify that the secret was created successfully. Replace <project> with the project where you created the image pull secret.

    oc get secrets --namespace <project>
    
  8. Add the image pull secret to a Kubernetes service account so that any pod in the project can use the image pull secret when you deploy a container.


Accessing images that are stored in other private registries

If you already have a private registry, you must store the registry credentials in a Kubernetes image pull secret and reference this secret from your configuration file.

Before beginning:

  1. Create an OpenShift cluster.
  2. Access the OpenShift cluster.

To create an image pull secret:

  1. Create the Kubernetes secret to store your private registry credentials.

    oc --namespace <project> create secret docker-registry <secret_name>  --docker-server=<registry_URL> --docker-username=<docker_username> --docker-password=<docker_password> --docker-email=<docker_email>
    
    Component Description
    --namespace <project> Required. The OpenShift project of the cluster where you want to use the secret and deploy containers. To list available projects in the cluster, run oc get projects.
    <secret_name> Required. The name that we want to use for the image pull secret.
    --docker-server <registry_URL> Required. The URL to the registry where your private images are stored.
    --docker-username <docker_username> Required. The username to log in to your private registry.
    --docker-password <token_value> Required. The password to log in to your private registry, such as a token value.
    --docker-email <docker_email> Required. If you have a Docker email address, enter it. If you do not have one, enter a fictional email address, such as a@b.c. This email is required to create a Kubernetes secret, but is not used after creation.
  2. Verify that the secret was created successfully. Replace <project> with the name of the project where you created the image pull secret.

    oc get secrets --namespace <project>
    
  3. Create a pod that references the image pull secret.



Using the image pull secret to deploy containers

You can define an image pull secret in the pod deployment, or store the image pull secret in your Kubernetes service account so that it is available for all deployments that do not specify a Kubernetes service account in the project.

To plan how image pull secrets are used in the cluster, choose between the following options:

  • Referring to the image pull secret in the pod deployment: Use this option if you do not want to grant access to your registry for all pods in the project by default. Developers can include the image pull secret in each pod deployment that must access your registry, as shown in the sketch after this list.
  • Storing the image pull secret in the Kubernetes service account: Use this option to grant access to images in your registry for all deployments in the selected OpenShift projects. To store an image pull secret in the Kubernetes service account, use the following steps.
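
For the first option, the following pod sketch refers to the image pull secret directly in the pod specification. The names are placeholders; replace them with your own secret, project, and image values.

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  # Refer to the image pull secret in the pod instead of the service account.
  imagePullSecrets:
    - name: <image_pull_secret_name>
  containers:
    - name: mypod-container
      image: <region>.icr.io/<namespace>/<image>:<tag>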


Storing the image pull secret in the Kubernetes service account for the selected project

Every OpenShift project has a Kubernetes service account that is named default. Within the project, you can add the image pull secret to this service account to grant access for pods to pull images from your registry. Deployments that do not specify a service account automatically use the default service account for this OpenShift project.

  1. Check if an image pull secret already exists for the default service account.
    oc describe serviceaccount default -n <project_name>
    
    When <none> is displayed in the Image pull secrets entry, no image pull secret exists.
  2. Add the image pull secret to your default service account.
    • To add the image pull secret when no image pull secret is defined:
        oc patch -n <project_name> serviceaccount/default -p '{"imagePullSecrets":[{"name": "<image_pull_secret_name>"}]}'
      
    • To add the image pull secret when an image pull secret is already defined:
        oc patch -n <project_name> serviceaccount/default --type='json' -p='[{"op":"add","path":"/imagePullSecrets/-","value":{"name":"<image_pull_secret_name>"}}]'
      
  3. Verify that your image pull secret was added to your default service account.

    oc describe serviceaccount default -n <project_name>
    

    Example output:

    Name:                default
    Namespace:           <namespace_name>
    Labels:              <none>
    Annotations:         <none>
    Image pull secrets:  <image_pull_secret_name>
    Mountable secrets:   default-token-sh2dx
    Tokens:              default-token-sh2dx
    Events:              <none>
    

     If the Image pull secrets entry says <secret> (not found), verify that the image pull secret exists in the same project as your service account by running oc get secrets -n <project_name>.

  4. Create a pod configuration file that is named mypod.yaml to deploy a container from an image in your registry.

    apiVersion: v1
    kind: Pod
    metadata:
      name: mypod
    spec:
      containers:
        - name: mypod-container
          image: <region>.icr.io/<namespace>/<image>:<tag>
    
  5. Create the pod in the cluster by applying the mypod.yaml configuration file.

    oc apply -f mypod.yaml
    



Set up a cluster to pull entitled software

You can set up your Red Hat OpenShift on IBM Cloud cluster to pull entitled software, which is a collection of protected container images, packaged in Helm charts, that you are licensed by IBM to use. Entitled software is stored in a special IBM Cloud Container Registry cp.icr.io domain. To access this domain, you must create an image pull secret with an entitlement key for the cluster and add this image pull secret to the Kubernetes service account of each project where you want to deploy the entitled software.

Do you have older entitled software from Passport Advantage? Use the PPA importer tool instead to deploy this software in the cluster.

Before beginning: Access the OpenShift cluster.

  1. Get the entitlement key for the entitled software library.
    1. Log in to MyIBM.com and scroll to the Container software library section. Click View library.
    2. From the Access your container software > Entitlement keys page, click Copy key. This key authorizes access to all the entitled software in your container software library.
  2. In the project where you want to deploy your entitled containers, create an image pull secret so that you can access the cp.icr.io entitled registry. Use the entitlement key that you previously retrieved as the --docker-password value. For more information, see Accessing images that are stored in other private registries.
    oc create secret docker-registry entitled-cp-icr-io --docker-server=cp.icr.io --docker-username=cp --docker-password=<entitlement_key> --docker-email=<docker_email> -n <project>
    
  3. Add the image pull secret to the service account of the namespace so that any container in the project can use the entitlement key to pull entitled images. For more information, see Using the image pull secret to deploy containers.
    oc patch -n <project> serviceaccount/default --type='json' -p='[{"op":"add","path":"/imagePullSecrets/-","value":{"name":"entitled-cp-icr-io"}}]'
    
  4. Create a pod in the project that builds a container from an image in the entitled registry.
    oc run <pod_name> --image=cp.icr.io/<image_name> -n <project> --generator=run-pod/v1
    
  5. Check that your container was able to successfully build from the entitled image by verifying that the pod is in a Running status.
    oc get pod <pod_name> -n <project>
    

Wondering what to do next? You can set up the entitled Helm chart repository, where Helm charts that incorporate entitled software are stored. If you already have Helm installed in the cluster, run helm repo add entitled https://raw.githubusercontent.com/IBM/charts/master/repo/entitled.



Adding a private registry to the global pull secret

With OpenShift Container Platform, you can set up a global image pull secret that each worker node in the cluster can use to pull images from a private registry.

By default, your Red Hat OpenShift on IBM Cloud cluster has a global image pull secret for the following registries, so that default OpenShift components can be deployed.

  • cloud.openshift.com
  • quay.io
  • registry.connect.redhat.com
  • registry.redhat.io

Do not replace the global pull secret with a pull secret that does not have credentials to the default Red Hat registries. If you do, the default OpenShift components that are installed in the cluster, such as the OperatorHub, might fail because they cannot pull images from these registries.

Before beginning: Access the OpenShift cluster.

To add private registries, you edit the global pull-secret in the openshift-config project.

  1. Create a secret value that holds the credentials to access your private registry, and store the decoded secret value in a JSON file. When you create the secret value, the credentials are automatically encoded to base64. By using the --dry-run option, you create only the secret value; no secret object is created in the cluster. The decoded secret value is then stored in a JSON file to use later in your global pull secret.
    oc create secret docker-registry <secret_name> --docker-server=<registry_URL> --docker-username=<docker_username> --docker-password=<docker_password> --docker-email=<docker_email> --dry-run=true --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode > myregistryconfigjson
    
    Component Description
    --namespace <project> Required. The OpenShift project of the cluster where you want to use the secret and deploy containers. To list available projects in the cluster, run oc get projects.
    <secret_name> Required. The name that we want to use for the image pull secret.
    --docker-server <registry_URL> Required. The URL to the registry where your private images are stored.
    --docker-username <docker_username> Required. The username to log in to your private registry.
    --docker-password <token_value> Required. The password to log in to your private registry, such as a token value.
    --docker-email <docker_email> Required. If you have a Docker email address, enter it. If you do not have one, enter a fictional email address, such as a@b.c. This email is required to create a Kubernetes secret, but is not used after creation.
    --dry-run=true Include this flag to create the secret value only, and not create and store the secret object in the cluster.
    --output="jsonpath={.data.\.dockerconfigjson}" Get only the .dockerconfigjson value from the data section of the Kubernetes secret.
    | base64 --decode > myregistryconfigjson Download the decoded secret data to a local myregistryconfigjson file.
  2. Retrieve the decoded secret value of the default global pull secret and store the value in a dockerconfigjson file.
    oc get secret pull-secret -n openshift-config --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode > dockerconfigjson
    
  3. Combine the downloaded private registry pull secret myregistryconfigjson file with the default global pull secret dockerconfigjson file.
    jq -s '.[0] * .[1]' dockerconfigjson myregistryconfigjson > dockerconfigjson-merged
    
  4. Update the global pull secret with the combined dockerconfigjson-merged file.
    oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=dockerconfigjson-merged
    
  5. Verify that the global pull secret is updated. Check that your private registry and each of the default Red Hat registries are in the output of the following command.

    oc get secret pull-secret -n openshift-config --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
    

    Example output:

     {
         "auths": {
             "cloud.openshift.com": {
                 "auth": "<encoded_string>",
                 "email": "email@example.com"
             },
             "quay.io": {
                 "auth": "<encoded_string>",
                 "email": "email@example.com"
             },
             "registry.connect.redhat.com": {
                 "auth": "<encoded_string>",
                 "email": "email@example.com"
             },
             "registry.redhat.io": {
                 "auth": "<encoded_string>",
                 "email": "email@example.com"
             },
             "<private_registry>": {
                 "username": "iamapikey",
                 "password": "<encoded_string>",
                 "email": "email@example.com",
                 "auth": "<encoded_string>"
             }
         }
     }
    
  6. To pick up the global configuration changes, reload all of the worker nodes in the cluster.
    1. Note the ID of the worker nodes in the cluster.
      ibmcloud oc worker ls -c <cluster_name_or_ID>
      
    2. Reload each worker node.
      ibmcloud oc worker reload -c <cluster_name_or_ID> -w <workerID_1> -w <workerID_2>
      
  7. After the worker nodes are back in a healthy state, verify that the global pull secret is updated on a worker node.
    1. Start a debugging pod to log in to a worker node. For <node_name>, use the private IP address of the worker node that you retrieved earlier.
      oc debug node/<node_name>
      
    2. Change the root directory to the host so that you can view files on the worker node.
      chroot /host
      
    3. Verify that the Docker configuration file has the registry credentials that match the global pull secret that you set.
      vi /.docker/config.json