Classic: Opening required ports and IP addresses in the firewall
This firewall information is specific to classic clusters. For VPC clusters, see Opening required ports and IP addresses in the firewall for VPC clusters.
Review these situations in which we might need to open specific ports and IP addresses in the firewalls for the Red Hat OpenShift on IBM Cloud clusters.
- Corporate firewalls: If corporate network policies prevent access from the local system to public endpoints via proxies or firewalls, we must allow access to run ibmcloud, ibmcloud oc, ibmcloud cr, oc, and calicoctl commands from the local system.
- Gateway appliance firewalls: If you have firewalls set up on the public or private network in your IBM Cloud infrastructure account, such as a VRA, you must open IP ranges, ports, and protocols to allow worker nodes to communicate with the master, with infrastructure resources, and with other IBM Cloud services. You can also open ports to allow incoming traffic to services exposing apps in the cluster.
- Calico network policies: If you use Calico network policies to act as a firewall to restrict all worker node egress, you must allow your worker nodes to access the resources that are required for the cluster to function.
- Other services or network firewalls: To allow the cluster to access services that run inside or outside IBM Cloud or in on-premises networks and that are protected by a firewall, we must add the IP addresses of our worker nodes in that firewall.
Opening ports in a corporate firewall
If corporate network policies prevent access from the local system to public endpoints via proxies or firewalls, we must allow access to run ibmcloud, ibmcloud oc, and ibmcloud cr commands, oc commands, and calicoctl commands from the local system.
Running ibmcloud, ibmcloud oc, and ibmcloud cr commands from behind a firewall
If corporate network policies prevent access from the local system to public endpoints via proxies or firewalls, to run ibmcloud, ibmcloud oc and ibmcloud cr commands, we must allow TCP access for IBM Cloud, Red Hat OpenShift on IBM Cloud, and IBM Cloud Container Registry.
- Allow access to cloud.ibm.com on port 443 in the firewall.
- Verify the connection by logging in to IBM Cloud through this API endpoint.
ibmcloud login -a https://cloud.ibm.com/
- Allow access to containers.cloud.ibm.com on port 443 in the firewall.
Verify the connection. If access is configured correctly, ships are displayed in the output.
curl https://containers.cloud.ibm.com/v1/
Example output:
(The output is ASCII art of ships.)

- Allow access to the IBM Cloud Container Registry regions that we plan to use on ports 443 and 4443 in the firewall. The global registry stores IBM-provided public images, and regional registries store your own private or public images. If the firewall is IP-based, we can see which IP addresses are opened when you allow access to the IBM Cloud Container Registry regional service endpoints by reviewing this table.
- Global registry: icr.io
- AP North: jp.icr.io
- AP South: au.icr.io
- EU Central: de.icr.io
- UK South: uk.icr.io
- US East, US South: us.icr.io
Verify the connection. The following is an example for the US East and US South regional registry. If access is configured correctly, a message of the day is returned in the output.
curl https://us.icr.io/api/v1/messages
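To check every registry region that the firewall must allow at once, we can loop over the endpoints. The following is a minimal sketch that assumes each regional registry answers on the same /api/v1/messages path shown above for us.icr.io; trim the list to the regions that you actually use.

# Check each registry endpoint that the firewall must allow (illustrative; trim to your regions).
for registry in icr.io jp.icr.io au.icr.io de.icr.io uk.icr.io us.icr.io; do
  echo "Checking ${registry} ..."
  curl -sS --max-time 10 "https://${registry}/api/v1/messages" || echo "Cannot reach ${registry}"
done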
Running oc commands from behind a firewall
If corporate network policies prevent access from the local system to public endpoints via proxies or firewalls, to run oc commands, we must allow TCP access for the cluster. When a cluster is created, the port in the service endpoint URLs is randomly assigned from within the range 20000-32767. We can either open the port range 20000-32767 for any cluster that might get created, or allow access for a specific existing cluster.
Before beginning, allow access to run ibmcloud oc commands.
To allow access for a specific cluster:
Log in to the IBM Cloud CLI. Enter your IBM Cloud credentials when prompted. If we have a federated account, include the --sso option.
ibmcloud login [--sso]
If the cluster is in a resource group other than default, target that resource group. To see the resource group that each cluster belongs to, run ibmcloud oc cluster ls. Note: You must have at least the Viewer role for the resource group.
ibmcloud target -g <resource_group_name>
Get the name of the cluster.
ibmcloud oc cluster ls
Retrieve the service endpoint URLs for the cluster.
- If only the Public Service Endpoint URL is populated, get this URL. Your authorized cluster users can access the master through this endpoint on the public network.
- If only the Private Service Endpoint URL is populated, get this URL. Your authorized cluster users can access the master through this endpoint on the private network.
- If both the Public Service Endpoint URL and Private Service Endpoint URL are populated, get both URLs. Your authorized cluster users can access the master through the public endpoint on the public network or the private endpoint on the private network.
ibmcloud oc cluster get --cluster <cluster_name_or_ID>
Example output:
...
Public Service Endpoint URL:    https://c3.<region>.containers.cloud.ibm.com:30426
Private Service Endpoint URL:   https://c3-private.<region>.containers.cloud.ibm.com:31140
...

Allow access to the service endpoint URLs and ports that you got in the previous step. If the firewall is IP-based, we can see which IP addresses are opened when you allow access to the service endpoint URLs by reviewing this table.
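If the firewall rules are defined by host and port, the endpoint can be parsed out of the CLI output instead of copied by hand. This is a minimal sketch that assumes the output format shown above; the parsing is illustrative, not an official interface.

# Extract the public service endpoint host and port from the cluster details (illustrative parsing).
ENDPOINT=$(ibmcloud oc cluster get --cluster <cluster_name_or_ID> | awk '/Public Service Endpoint URL/{print $NF}')
HOST=${ENDPOINT#https://}; HOST=${HOST%%:*}   # for example, c3.<region>.containers.cloud.ibm.com
PORT=${ENDPOINT##*:}                          # for example, 30426
echo "Allow TCP from the local system to ${HOST} on port ${PORT}"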
Verify the connection.
If the public service endpoint is enabled:
curl --insecure <public_service_endpoint_URL>/version
Example command:
curl --insecure https://c3.<region>.containers.cloud.ibm.com:31142/version
Example output:
{ "major": "1", "minor": "7+", "gitVersion": "v1.7.4-2+eb9172c211dc41", "gitCommit": "eb9172c211dc4108341c0fd5340ee5200f0ec534", "gitTreeState": "clean", "buildDate": "2017-11-16T08:13:08Z", "goVersion": "go1.8.3", "compiler": "gc", "platform": "linux/amd64" }If the private service endpoint is enabled, we must be in your IBM Cloud private network or connect to the private network through a VPN connection to verify the connection to the master. Note: You must expose the master endpoint through a private load balancer so that users can access the master through a VPN or IBM Cloud Direct Link connection.
curl --insecure <private_service_endpoint_URL>/version
Example command:
curl --insecure https://c3-private.<region>.containers.cloud.ibm.com:31142/version
Example output:
{ "major": "1", "minor": "7+", "gitVersion": "v1.7.4-2+eb9172c211dc41", "gitCommit": "eb9172c211dc4108341c0fd5340ee5200f0ec534", "gitTreeState": "clean", "buildDate": "2017-11-16T08:13:08Z", "goVersion": "go1.8.3", "compiler": "gc", "platform": "linux/amd64" }
Optional: Repeat these steps for each cluster that we need to expose.
Running calicoctl commands from behind a firewall
If corporate network policies prevent access from the local system to public endpoints via proxies or firewalls, to run calicoctl commands, we must allow TCP access for the Calico commands. Before beginning, allow access to run ibmcloud commands and oc commands.
Retrieve the IP address from the master URL that you used to allow the oc commands.
Get the port for etcd.
oc get cm -n kube-system cluster-info -o yaml | grep etcd_host
Allow access for the Calico policies via the master URL IP address and the etcd port.
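A minimal sketch for collecting both values, assuming dig is available on the local system and that <master_URL_host> is the host from the service endpoint URL that you retrieved earlier:

# Resolve the IP address behind the master URL, and read the etcd endpoint and port (illustrative).
dig +short <master_URL_host>                                     # IP address to allow in the firewall
oc get cm -n kube-system cluster-info -o yaml | grep etcd_host   # etcd host and port to allow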
Opening ports in gateway appliance firewalls
If you have firewalls set up on the public network or private network in your IBM Cloud infrastructure account, such as a Virtual Router Appliance (Vyatta), you must open IP ranges, ports, and protocols to allow worker nodes to communicate with the master, with infrastructure resources, and with other IBM Cloud services.
Opening required ports in a public firewall
If you have a firewall on the public network in your IBM Cloud infrastructure account, such as a Virtual Router Appliance (Vyatta), you must open IP ranges, ports, and protocols in the firewall to allow worker nodes to communicate with the master, with infrastructure resources, and with other IBM Cloud services.
Note the public IP address for each worker node in the cluster.
ibmcloud oc worker ls --cluster <cluster_name_or_ID>
To allow worker nodes to communicate with the cluster master over the public service endpoint, allow outgoing network traffic from the source <each_worker_node_publicIP> to TCP/UDP destination port range 20000-32767 and port 443 on the following IP addresses and network groups. A sample gateway rule sketch follows the table below.
- TCP/UDP port range 20000-32767, port 443 FROM <each_worker_node_publicIP> TO <public_IPs>
- Replace <public_IPs> with the public IP addresses of the zones in the region where the cluster is located.
You must allow outgoing traffic to port 443 for all of the zones within the region to balance the load during the bootstrapping process.
| Region | Zone | Public IP addresses |
|---|---|---|
| AP North | che01 | 169.38.70.10, 169.38.79.170 |
| | hkg02 | 161.202.56.10, 161.202.57.34, 169.56.132.234 |
| | seo01 | 169.56.69.242, 169.56.96.42 |
| | sng01 | 119.81.222.210, 161.202.186.226 |
| | tok02, tok04, tok05 | 128.168.71.117, 128.168.75.194, 128.168.85.154, 135.90.69.66, 135.90.69.82, 161.202.126.210, 165.192.69.69, 165.192.80.146, 165.192.95.90, 169.56.1.162, 169.56.48.114 |
| AP South | mel01 | 168.1.71.178, 168.1.97.67 |
| | syd01, syd04, syd05 | 130.198.64.19, 130.198.66.26, 130.198.79.170, 130.198.83.34, 130.198.102.82, 135.90.66.2, 135.90.68.114, 135.90.89.234, 168.1.6.106, 168.1.8.195, 168.1.12.98, 168.1.39.34, 168.1.58.66 |
| EU Central | ams03 | 169.50.146.82, 169.50.169.110, 169.50.184.18 |
| | mil01 | 159.122.150.2, 159.122.141.69 |
| | osl01 | 169.51.73.50, 169.51.91.162 |
| | par01 | 159.8.79.250, 159.8.86.149 |
| | fra02, fra04, fra05 | 149.81.78.114, 149.81.104.122, 149.81.113.154, 149.81.123.18, 149.81.142.90, 158.177.138.138, 158.177.151.2, 158.177.156.178, 158.177.198.138, 161.156.65.42, 161.156.65.82, 161.156.79.26, 161.156.115.138, 161.156.120.74, 169.50.56.174 |
| UK South | lon02, lon04, lon05, lon06 | 141.125.66.26, 141.125.77.58, 158.175.65.170, 158.175.77.178, 158.175.111.42, 158.175.125.194, 158.176.71.242, 158.176.94.26, 158.176.95.146, 158.176.123.130, 159.122.224.242, 159.122.242.78 |
| US East | mon01 | 169.54.80.106, 169.54.126.219 |
| | tor01 | 169.53.167.50, 169.53.171.210 |
| | wdc04, wdc06, wdc07 | 52.117.88.42, 169.47.174.106, 169.60.73.142, 169.60.92.50, 169.60.101.42, 169.61.74.210, 169.61.83.62, 169.61.109.34, 169.62.9.250, 169.62.10.162, 169.63.75.82, 169.63.88.178, 169.63.88.186, 169.63.94.210, 169.63.111.82, 169.63.149.122, 169.63.158.82, 169.63.160.130 |
| US South | mex01 | 169.57.13.10, 169.57.100.18 |
| | sao01 | 169.57.151.10, 169.57.154.98 |
| | sjc03 | 169.45.67.210, 169.45.88.98 |
| | sjc04 | 169.62.82.197, 169.62.87.170 |
| | dal10, dal12, dal13 | 50.22.129.34, 52.116.231.210, 52.116.254.234, 52.117.28.138, 52.117.197.210, 52.117.232.194, 52.117.240.106, 169.46.7.238, 169.46.24.210, 169.46.27.234, 169.46.68.234, 169.46.110.218, 169.47.70.10, 169.47.71.138, 169.47.109.34, 169.47.209.66, 169.47.229.90, 169.47.232.210, 169.47.239.34, 169.48.110.250, 169.48.143.218, 169.48.161.242, 169.48.230.146, 169.48.244.66, 169.59.219.90, 169.60.128.2, 169.60.170.234, 169.61.29.194, 169.61.60.130, 169.61.175.106, 169.61.177.2, 169.61.228.138, 169.62.166.98, 169.62.189.26, 169.62.206.234, 169.63.47.250 |
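How you open these ranges depends on the gateway appliance; a VRA has its own firewall rule syntax. As a rough, hypothetical illustration only, equivalent rules on a Linux-based gateway might look like the following; the same pattern applies later to the private firewall rules, with private worker node and zone IPs.

# Illustrative iptables rules (not VRA syntax): allow one worker node to reach one master zone IP.
# Repeat for each worker node public IP and each zone IP in the table above.
iptables -A FORWARD -s <each_worker_node_publicIP> -d <public_IP> -p tcp --dport 443 -j ACCEPT
iptables -A FORWARD -s <each_worker_node_publicIP> -d <public_IP> -p tcp --dport 20000:32767 -j ACCEPT
iptables -A FORWARD -s <each_worker_node_publicIP> -d <public_IP> -p udp --dport 20000:32767 -j ACCEPT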
To permit worker nodes to communicate with IBM Cloud Container Registry, allow outgoing network traffic from the worker nodes to the IBM Cloud Container Registry regions; a sample rule sketch follows the table below:
- TCP port 443, port 4443 FROM <each_worker_node_publicIP> TO <registry_subnet>
- Replace <registry_subnet> with the registry subnet to which we want to allow traffic. The global registry stores IBM-provided public images, and regional registries store your own private or public images. Port 4443 is required for notary functions, such as Verifying image signatures.
| Red Hat OpenShift on IBM Cloud region | Registry address | Registry public subnets |
|---|---|---|
| Global registry across Red Hat OpenShift on IBM Cloud regions | icr.io (Deprecated: registry.bluemix.net) | 169.62.37.240/29, 169.60.98.80/29, 169.63.104.232/29 |
| AP North | jp.icr.io (Deprecated: registry.au-syd.bluemix.net) | 161.202.146.86/29, 128.168.71.70/29, 165.192.71.222/29 |
| AP South | au.icr.io (Deprecated: registry.au-syd.bluemix.net) | 168.1.1.240/29, 130.198.88.128/29, 135.90.66.48/29 |
| EU Central | de.icr.io (Deprecated: registry.eu-de.bluemix.net) | 169.50.58.104/29, 161.156.93.16/29, 149.81.79.152/29 |
| UK South | uk.icr.io (Deprecated: registry.eu-gb.bluemix.net) | 158.175.97.184/29, 158.176.105.64/29, 141.125.71.136/29 |
| US East, US South | us.icr.io (Deprecated: registry.ng.bluemix.net) | 169.61.234.224/29, 169.61.135.160/29, 169.61.46.80/29 |
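As with the master rules above, the registry rule can be sketched for a Linux-based gateway as follows (illustrative only; VRA syntax differs).

# Illustrative iptables rule: allow worker nodes to reach a registry subnet on ports 443 and 4443.
iptables -A FORWARD -s <each_worker_node_publicIP> -d <registry_subnet> -p tcp -m multiport --dports 443,4443 -j ACCEPT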
Allow outgoing network traffic from your worker node to IBM Cloud Identity and Access Management (IAM). Your firewall must be Layer 7 to allow the IAM domain name. IAM does not have specific IP addresses that we can allow. If the firewall does not support Layer 7, we can allow all HTTPS network traffic on port 443.
- TCP port 443 FROM <each_worker_node_publicIP> TO https://iam.bluemix.net
- TCP port 443 FROM <each_worker_node_publicIP> TO https://iam.cloud.ibm.com
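To verify that HTTPS traffic can reach IAM through the firewall, request a public IAM endpoint from a worker node or from a machine behind the same firewall. The OIDC discovery path in this sketch is an assumption for illustration; any HTTPS request to the iam.cloud.ibm.com domain confirms reachability.

# Confirm that HTTPS traffic to the IAM domain passes the firewall (illustrative check).
curl -sS --max-time 10 https://iam.cloud.ibm.com/identity/.well-known/openid-configuration > /dev/null && echo "IAM reachable"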
Optional: Allow outgoing network traffic from the worker nodes to the Sysdig and LogDNA services:
- IBM Cloud Monitoring with Sysdig:
TCP port 443, port 6443 FROM <each_worker_node_public_IP> TO <sysdig_public_IP>. Replace <sysdig_public_IP> with the Sysdig IP addresses.
- IBM Log Analysis with LogDNA:
TCP port 443, port 80 FROM <each_worker_node_public_IP> TO <logDNA_public_IP>. Replace <logDNA_public_IP> with the LogDNA IP addresses.
If you use load balancer services, ensure that all traffic that uses the VRRP protocol is allowed between worker nodes on the public and private interfaces. Red Hat OpenShift on IBM Cloud uses the VRRP protocol to manage IP addresses for public and private load balancers.
Opening required ports in a private firewall
If you have a firewall on the private network in your IBM Cloud infrastructure account, such as a Virtual Router Appliance (Vyatta), you must open IP ranges, ports, and protocols in the firewall to allow worker nodes to communicate with the master, with each other, with infrastructure resources, and with other IBM Cloud services.
Allow the IBM Cloud infrastructure private IP ranges so that we can create worker nodes in the cluster.
- Allow the appropriate IBM Cloud infrastructure private IP ranges. See Backend (private) Network.
- Allow the IBM Cloud infrastructure private IP ranges for all of the zones that we are using. Note: You must add the 166.8.0.0/14 and 161.26.0.0/16 IP ranges, the IP ranges for the dal01, dal10, and wdc04 zones, and, if the cluster is in the Europe geography, the ams01 zone. See Service Network (on backend/private network).
Note the private IP address for each worker node in the cluster.
ibmcloud oc worker ls --cluster <cluster_name_or_ID>
To allow worker nodes to communicate with the cluster master over the private service endpoint, allow outgoing network traffic from the source <each_worker_node_privateIP> to TCP/UDP destination port range 20000-32767 and port 443 on the following IP addresses and network groups.
- TCP/UDP port range 20000-32767, port 443 FROM <each_worker_node_privateIP> TO <private_IPs>
- Replace <private_IPs> with the private IP addresses of the zones in the region where the cluster is located.
You must allow outgoing traffic to port 443 for all of the zones within the region to balance the load during the bootstrapping process.
| Region | Zone | Private IP addresses |
|---|---|---|
| AP North | che01 | 166.9.40.7, 166.9.60.2 |
| | hkg02 | 166.9.40.36, 166.9.42.7, 166.9.44.3 |
| | seo01 | 166.9.44.5, 166.9.46.4 |
| | sng01 | 166.9.40.8, 166.9.42.28 |
| | tok02, tok04, tok05 | 166.9.40.21, 166.9.40.39, 166.9.40.6, 166.9.42.23, 166.9.42.55, 166.9.42.6, 166.9.44.15, 166.9.44.4, 166.9.44.47 |
| AP South | mel01 | 166.9.54.3, 166.9.54.10 |
| | syd01, syd04, syd05 | 166.9.52.14, 166.9.52.15, 166.9.52.23, 166.9.52.30, 166.9.52.31, 166.9.54.11, 166.9.54.12, 166.9.54.13, 166.9.54.21, 166.9.54.32, 166.9.54.33, 166.9.56.16, 166.9.56.24, 166.9.56.36 |
| EU Central | ams03 | 166.9.28.17, 166.9.28.95, 166.9.30.11, 166.9.32.26 |
| | mil01 | 166.9.28.20, 166.9.30.12, 166.9.32.27 |
| | osl01 | 166.9.32.8, 166.9.32.28 |
| | par01 | 166.9.28.19, 166.9.28.22, 166.9.28.24 |
| | fra02, fra04, fra05 | 166.9.28.23, 166.9.28.43, 166.9.28.59, 166.9.28.92, 166.9.28.94, 166.9.30.13, 166.9.30.22, 166.9.30.43, 166.9.30.55, 166.9.30.56, 166.9.32.20, 166.9.32.45, 166.9.32.53, 166.9.32.56, 166.9.32.9 |
| UK South | lon02, lon04, lon05, lon06 | 166.9.34.5, 166.9.34.6, 166.9.34.17, 166.9.34.42, 166.9.34.50, 166.9.36.10, 166.9.36.11, 166.9.36.12, 166.9.36.13, 166.9.36.23, 166.9.36.54, 166.9.36.65, 166.9.38.6, 166.9.38.7, 166.9.38.18, 166.9.38.47, 166.9.38.54 |
| US East | mon01 | 166.9.20.11, 166.9.24.22 |
| | tor01 | 166.9.22.8, 166.9.24.19 |
| | wdc04, wdc06, wdc07 | 166.9.20.116, 166.9.20.117, 166.9.20.12, 166.9.20.13, 166.9.20.38, 166.9.20.80, 166.9.22.10, 166.9.22.26, 166.9.22.43, 166.9.22.52, 166.9.22.54, 166.9.22.9, 166.9.24.19, 166.9.24.35, 166.9.24.4, 166.9.24.46, 166.9.24.47, 166.9.24.5 |
| US South | hou02 | 166.9.15.74 |
| | mex01 | 166.9.15.76, 166.9.16.38 |
| | sao01 | 166.9.12.143, 166.9.16.5 |
| | sjc03 | 166.9.12.144, 166.9.16.39 |
| | sjc04 | 166.9.15.75, 166.9.12.26 |
| | dal10, dal12, dal13 | 166.9.12.140, 166.9.12.141, 166.9.12.142, 166.9.12.151, 166.9.12.193, 166.9.12.196, 166.9.12.99, 166.9.13.31, 166.9.13.93, 166.9.13.94, 166.9.14.122, 166.9.14.125, 166.9.14.202, 166.9.14.204, 166.9.14.205, 166.9.14.95, 166.9.15.130, 166.9.15.69, 166.9.15.70, 166.9.15.71, 166.9.15.72, 166.9.15.73, 166.9.16.113, 166.9.16.137, 166.9.16.149, 166.9.16.183, 166.9.16.184, 166.9.16.185, 166.9.17.2, 166.9.17.35, 166.9.17.37, 166.9.17.39, 166.9.48.50, 166.9.48.76, 166.9.51.16, 166.9.51.54, 166.9.58.11, 166.9.58.16 |
Open the following ports that are necessary for worker nodes to function properly.
- Allow outbound TCP and UDP connections from the workers to ports 80 and 443 to allow worker node updates and reloads.
- Allow outbound TCP and UDP to port 2049 to allow mounting file storage as volumes.
- Allow outbound TCP and UDP to port 3260 for communication to block storage.
- Allow inbound TCP and UDP connections to port 10250 for the OpenShift dashboard and commands such as oc logs and oc exec.
- Allow inbound and outbound connections to TCP and UDP port 53 and port 5353 for DNS access.
Enable worker-to-worker communication by allowing all TCP, UDP, VRRP, and IPEncap traffic between worker nodes on the public and private interfaces. Red Hat OpenShift on IBM Cloud uses the VRRP protocol to manage IP addresses for private load balancers and the IPEncap protocol to permit pod-to-pod traffic across subnets.
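The following hypothetical Linux-gateway sketch restates the rules from this step and the previous one; the <worker_subnet> placeholder and the rule form are illustrative, and VRRP and IPEncap are matched by their protocol numbers (112 and 4).

# Illustrative iptables rules for the worker node ports and protocols listed above.
iptables -A FORWARD -s <worker_subnet> -p tcp -m multiport --dports 80,443,2049,3260 -j ACCEPT   # updates, file and block storage
iptables -A FORWARD -s <worker_subnet> -p udp -m multiport --dports 80,443,2049,3260 -j ACCEPT
iptables -A FORWARD -d <worker_subnet> -p tcp --dport 10250 -j ACCEPT                            # oc logs, oc exec, dashboard
iptables -A FORWARD -s <worker_subnet> -p tcp -m multiport --dports 53,5353 -j ACCEPT            # DNS
iptables -A FORWARD -s <worker_subnet> -p udp -m multiport --dports 53,5353 -j ACCEPT            # DNS
iptables -A FORWARD -s <worker_subnet> -d <worker_subnet> -p 112 -j ACCEPT                       # VRRP between workers
iptables -A FORWARD -s <worker_subnet> -d <worker_subnet> -p 4 -j ACCEPT                         # IPEncap between workers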
- Optional: To send logging and metric data, set up firewall rules for the IBM Log Analysis with LogDNA and IBM Cloud Monitoring with Sysdig services.
Opening ports in a public or private firewall for inbound traffic to NodePort, load balancer, and Ingress services, and OpenShift routes
We can allow incoming access to NodePort, load balancer, and Ingress services, and OpenShift routes.
- NodePort service
- Open the port that you configured when you deployed the service. Traffic is then allowed to that port on the public or private IP addresses of all the worker nodes. To find the port, run oc get svc. The port is in the 20000-32000 range.
- Load balancer service
- Open the port that you configured when you deployed the service. Traffic is then allowed to the load balancer service's public or private IP address.
- Ingress
- Open port 80 for HTTP and port 443 for HTTPS to the public or private IP address for the Ingress application load balancer.
- Route
- Open port 80 for HTTP and port 443 for HTTPS to the router's public IP address.
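To find the exact ports and IP addresses to open, list the services in the cluster. A minimal sketch; the namespace of the router service varies by OpenShift version, so the names here are illustrative.

# NodePorts, load balancer IPs, and Ingress ALB IPs appear in the PORT(S) and EXTERNAL-IP columns.
oc get svc --all-namespaces
# The router's public IP is the external IP of the router service (for example, in the
# openshift-ingress or default namespace, depending on the OpenShift version).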
Allowing the cluster to access resources through Calico network policies
Instead of setting up a gateway firewall device, we can choose to use Calico network policies to act as a cluster firewall on the public or private network. For more information, see the following topics.
Allowing traffic to the cluster in other services' firewalls or in on-premises firewalls
To access services that run inside or outside IBM Cloud or on-premises and that are protected by a firewall, we can add the IP addresses of our worker nodes in that firewall to allow outbound network traffic to the cluster. For example, we might want to read data from an IBM Cloud database that is protected by a firewall, or specify your worker node subnets in an on-premises firewall to allow network traffic from the cluster.
Get the worker node subnets or the worker node IP addresses.
Worker node subnets: If you anticipate changing the number of worker nodes in the cluster frequently, such as if you enable the cluster autoscaler, we might not want to update the firewall for each new worker node. Instead, we can add the VLAN subnets that the cluster uses. Keep in mind that the VLAN subnet might be shared by worker nodes in other clusters.
List the worker nodes in the cluster.
ibmcloud oc worker ls --cluster <cluster_name_or_ID>
From the output of the previous step, note all the unique network IDs (first three octets) of the Public IP for the worker nodes in the cluster. In the following output, the unique network IDs are 169.xx.178 and 169.xx.210.
ID                                                  Public IP        Private IP      Machine Type         State    Status   Zone    Version
kube-dal10-crb2f60e9735254ac8b20b9c1e38b649a5-w31   169.xx.178.101   10.xxx.xx.xxx   b3c.4x16.encrypted   normal   Ready    dal10   1.18.9
kube-dal10-crb2f60e9735254ac8b20b9c1e38b649a5-w34   169.xx.178.102   10.xxx.xx.xxx   b3c.4x16.encrypted   normal   Ready    dal10   1.18.9
kube-dal12-crb2f60e9735254ac8b20b9c1e38b649a5-w32   169.xx.210.101   10.xxx.xx.xxx   b3c.4x16.encrypted   normal   Ready    dal12   1.18.9
kube-dal12-crb2f60e9735254ac8b20b9c1e38b649a5-w33   169.xx.210.102   10.xxx.xx.xxx   b3c.4x16.encrypted   normal   Ready    dal12   1.18.9
List the VLAN subnets for each unique network ID.
ibmcloud sl subnet list | grep -e <networkID1> -e <networkID2>
Example output:
ID        identifier       type                 network_space   datacenter   vlan_id   IPs   hardware   virtual_servers
1234567   169.xx.210.xxx   ADDITIONAL_PRIMARY   PUBLIC          dal12        1122334   16    0          5
7654321   169.xx.178.xxx   ADDITIONAL_PRIMARY   PUBLIC          dal10        4332211   16    0          6
- Retrieve the subnet address. In the output, find the number of IPs. The prefix length of the subnet CIDR is 32 minus log2 of the number of IPs. For example, with 16 IPs, log2(16) = 4, so the CIDR prefix is 32 - 4 = 28. Combine the subnet identifier with the prefix length to get the full subnet address (see the sketch after this list). In the previous output, the subnet addresses are:
- 169.xx.210.xxx/28
- 169.xx.178.xxx/28
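A minimal shell sketch of the same prefix calculation, using the 16-IP subnet from the example:

# Compute the CIDR prefix length from the IP count: prefix = 32 - log2(IPs).
ips=16
n=0; v=$ips
while [ "$v" -gt 1 ]; do v=$((v / 2)); n=$((n + 1)); done
echo "/$((32 - n))"   # prints /28 for a 16-IP subnet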
- Individual worker node IP addresses: If we have a small number of worker nodes that run only one app and do not need to scale, or if we want to add only one worker node, list all the worker nodes in the cluster and note the Public IP addresses. If your worker nodes are connected to a private network only and you want to connect to IBM Cloud services by using the private service endpoint, note the Private IP addresses instead. Only these worker nodes are added. If you delete the worker nodes or add worker nodes to the cluster, we must update the firewall accordingly.
ibmcloud oc worker ls --cluster <cluster_name_or_ID>

- Add the subnet CIDR or IP addresses to your service's firewall for outbound traffic or to your on-premises firewall for inbound traffic.
- Repeat these steps for each cluster that we want to allow traffic to or from.
Updating IAM firewalls for Kubernetes Service IP addresses
By default, all IP addresses can be used to log in to the IBM Cloud console and access the cluster. In the IBM Cloud Identity and Access Management (IAM) console, you can create an allowlist that works like a firewall: only the IP addresses that you specify have access, and all other IP addresses are restricted. If you use an IAM allowlist, you must add the CIDRs of the Red Hat OpenShift on IBM Cloud control plane for the zones in the region where the cluster is located. You must allow these CIDRs so that Red Hat OpenShift on IBM Cloud can create Ingress ALBs and LoadBalancers in the cluster.

Before beginning: The following steps require you to change the IAM allowlist for the user whose credentials are used for the cluster's region and resource group infrastructure permissions. If you are the credentials owner, you can change your own IAM allowlist settings. If you are not the credentials owner but are assigned the Editor or Administrator IBM Cloud IAM platform role for the User Management service, you can update the restricted IP addresses for the credentials owner.
Identify what user credentials are used for the cluster's region and resource group infrastructure permissions.
Check the API key for a region and resource group of the cluster.
ibmcloud oc api-key info --cluster <cluster_name_or_ID>

Example output:

Getting information about the API key owner for cluster <cluster_name>...
OK
Name          Email
<user_name>   <name@email.com>

Check if the infrastructure account for the region and resource group is manually set to use a different IBM Cloud infrastructure account.

ibmcloud oc credential get --region <region>

Example output if credentials are set to use a different account. In this case, that user's infrastructure credentials are used for the region and resource group that you targeted, even if a different user's credentials are stored in the API key that you retrieved in the previous step.

OK
Infrastructure credentials for user name <1234567_name@email.com> set for resource group <resource_group_name>.

Example output if credentials are not set to use a different account. In this case, the API key owner that you retrieved in the previous step has the infrastructure credentials that are used for the region and resource group.

FAILED
No credentials set for resource group <resource_group_name>.: The user credentials could not be found. (E0051)

- Log in to the IBM Cloud console.
- From the menu bar, click Manage > Access (IAM), and select Users.
- Select the user that you found in step 1 from the list.
- From the User details page, go to the IP address restrictions section.
- For Classic infrastructure, enter the CIDRs of the zones in the region where the cluster is located.
You must allow all of the zones within the region that the cluster is in.
| Region | Zone | CIDRs |
|---|---|---|
| AP North | che01 | 169.38.111.192/26, 169.38.113.64/27, 169.38.97.192/28 |
| | hkg02 | 169.56.143.0/26 |
| | seo01 | 169.56.110.64/26 |
| | sng01 | 119.81.192.0/26 |
| | tok02, tok04, tok05 | 161.202.136.0/26, 169.56.40.128/25, 128.168.68.128/26, 165.192.70.64/26 |
| AP South | mel01 | 168.1.122.192/26 |
| | syd01, syd04, syd05 | 168.1.199.0/26, 168.1.209.192/26, 168.1.212.128/25, 130.198.67.0/26, 130.198.74.128/26, 130.198.78.128/25, 130.198.92.192/26, 130.198.96.128/25, 130.198.98.0/24, 135.90.68.64/28, 135.90.69.16/28, 135.90.69.160/27, 135.90.73.0/26, 135.90.75.0/27, 135.90.78.128/26 |
| EU Central | ams03 | 169.50.177.128/25, 169.50.185.32/27, 169.51.161.128/25, 169.51.39.64/26, 169.51.41.64/26 |
| | mil01 | 159.122.157.192/26, 159.122.168.128/25, 159.122.169.64/26, 169.51.193.0/24 |
| | osl01 | 169.51.84.64/26 |
| | par01 | 159.8.74.64/27, 169.51.22.64/26, 169.51.28.128/25, 169.51.3.64/26 |
| | fra02, fra04, fra05 | 158.177.160.0/25, 158.177.84.64/26, 169.50.48.160/28, 169.50.58.160/27, 161.156.102.0/26, 161.156.125.80/28, 161.156.66.224/27, 149.81.105.192/26, 149.81.124.16/28, 149.81.72.192/27 |
| UK South | lon02, lon04, lon06 | 159.8.171.0/26, 169.50.199.64/26, 169.50.220.32/27, 169.50.221.0/25, 158.175.101.64/26, 158.175.136.0/25, 158.175.136.128/25, 158.175.139.0/25, 158.175.141.0/24, 158.175.68.192/26, 158.175.77.64/26, 158.175.78.192/26, 158.175.81.128/25, 158.176.111.128/26, 158.176.112.0/26, 158.176.66.208/28, 158.176.75.240/28, 158.176.92.32/27, 158.176.95.64/27 |
| US East | mon01 | 169.54.109.192/26 |
| | tor01 | 169.53.178.192/26, 169.55.148.128/25 |
| | wdc04, wdc06, wdc07 | 169.47.160.0/26, 169.47.160.128/26, 169.60.104.64/26, 169.60.76.192/26, 169.63.137.0/25, 169.61.85.64/26, 169.62.0.64/26 |
| US South | hou02 | 173.193.93.0/24, 184.172.208.0/25, 184.173.6.0/26 |
| | mex01 | 169.57.18.48/28, 169.57.91.0/27 |
| | sao01 | 169.57.190.64/26, 169.57.192.128/25 |
| | sjc03 | 169.44.207.0/26 |
| | sjc04 | 169.62.73.192/26 |
| | dal10, dal12, dal13 | 169.46.30.128/26, 169.48.138.64/26, 169.48.180.128/25, 169.61.206.128/26, 169.63.199.128/25, 169.63.205.0/25, 169.47.126.192/27, 169.47.79.192/26, 169.48.201.64/26, 169.48.212.64/26, 169.48.238.128/25, 169.61.137.64/26, 169.61.176.64/26, 169.61.188.128/25, 169.61.189.128/25, 169.63.18.128/25, 169.63.20.0/25, 169.63.24.0/24, 169.60.131.192/26, 169.62.130.0/26, 169.62.130.64/26, 169.62.216.0/25, 169.62.222.0/25, 169.62.253.0/25 |

- Click Apply.