Networking

  1. Overview
  2. Cluster Network Operator
    1. View the cluster network configuration
    2. View Cluster Network Operator status
    3. View Cluster Network Operator logs
    4. Cluster Network Operator custom resource
  3. DNS Operator
  4. Ingress Operator
    1. The Ingress configuration asset
    2. View the default Ingress Controller
    3. View Ingress Operator status
    4. View Ingress Controller logs
    5. View Ingress Controller status
    6. Set a custom default certificate
    7. Scale an Ingress Controller
    8. Configure ingress controller sharding by using route labels
    9. Configure ingress controller sharding by using namespace labels
  5. Manage multiple networks
    1. CNI configurations
    2. Create additional network interfaces
    3. Configure additional interfaces using host devices
    4. Configure SR-IOV
  6. Configure network policy with OpenShift SDN
    1. Example NetworkPolicy object
    2. Create a NetworkPolicy object
    3. Delete a NetworkPolicy object
    4. View NetworkPolicy objects
  7. OpenShift SDN
    1. Assign egress IPs to a project
    2. Enable automatically assigned egress IPs for a namespace
    3. Configure manually assigned egress IPs
    4. Use multicast
    5. Configure network isolation using OpenShift SDN
    6. Configure kube-proxy
  8. Configure Routes
    1. Configure route timeouts
    2. Troubleshoot throughput issues
    3. Use cookies to keep route statefulness
    4. Annotating a route with a cookie
    5. Secured routes
  9. Configure ingress cluster traffic
    1. Configure ingress cluster traffic using an Ingress Controller
    2. Configure ingress cluster traffic using a load balancer
    3. Configure ingress cluster traffic using a service external IP
    4. Configure ingress cluster traffic using a NodePort


Overview

Kubernetes assigns each Pod an internal IP address. Pods can be treated like physical hosts or virtual machines in terms of port allocation, networking, naming, service discovery, load balancing, application configuration, and migration. Each container within the Pod acts as if it were on the same host.

Environment variables are created for user names and service IPs so that the front-end Pods can communicate with the back-end services. If the service is deleted and recreated, a new IP address can be assigned to the service, which requires the front-end Pods to be recreated to pick up the updated value of the service IP environment variable. To ensure that the service IP is generated properly and can be provided to the front-end Pods as an environment variable, the back-end service must be created before any of the front-end Pods.

OpenShift has a built-in DNS so that the services can be reached by the service DNS as well as the service IP/port.
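
For example, a front-end Pod can reach a back-end service either through the generated environment variables or through DNS. The following is a minimal sketch; the service name backend, the project myproject, and port 8080 are hypothetical:

    $ oc rsh <frontend_pod_name>
    $ env | grep BACKEND_SERVICE                              # BACKEND_SERVICE_HOST and BACKEND_SERVICE_PORT hold the service IP and port
    $ curl http://backend.myproject.svc.cluster.local:8080/   # the same service, resolved by the built-in DNS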


Cluster Network Operator

The Cluster Network Operator (CNO) deploys and manages cluster network components, including the Container Network Interface (CNI) Software Defined Networking (SDN) plug-in selected for the cluster during installation. The CNO implements the network API from the API group: operator.openshift.io

The Operator deploys the OpenShift SDN plug-in, or a different SDN plug-in if selected during cluster installation, using a DaemonSet. The CNO is deployed during installation as a Kubernetes Deployment.

View the deployment status

View the state

    $ oc get clusteroperator/network
    
    NAME      VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
    network   4.1.0     True        False         False      50m

The AVAILABLE field is True when the Cluster Network Operator reports an available status condition.


View the cluster network configuration

Every new OpenShift installation has a network.config object named cluster.

    $ oc describe network.config/cluster
    
    Name:         cluster
    Namespace:
    Labels:       <none>
    Annotations:  <none>
    API Version:  config.openshift.io/v1
    Kind:         Network
    Metadata:
      Self Link:  /apis/config.openshift.io/v1/networks/cluster
    Spec: 1
      Cluster Network:
        Cidr:         10.128.0.0/14
        Host Prefix:  23
      Network Type:   OpenShiftSDN
      Service Network:
        172.30.0.0/16
    Status: 2
      Cluster Network:
        Cidr:               10.128.0.0/14
        Host Prefix:        23
      Cluster Network MTU:  8951
      Network Type:         OpenShiftSDN
      Service Network:
        172.30.0.0/16
    Events:  <none>

1 The Spec field displays the configured state of the cluster network.

2 The Status field displays the current state of the cluster network configuration.


View Cluster Network Operator status

We can inspect the status and view the details of the CNO using the oc describe command.

View the status

    $ oc describe clusteroperators/network


View CNO logs

We can view CNO logs using the oc logs command.

View the logs

    $ oc logs --namespace=openshift-network-operator deployment/network-operator


CNO custom resource (CR)

The cluster network configuration in the Network.operator.openshift.io custom resource (CR) stores the configuration settings for the Cluster Network Operator (CNO).

The following custom resource displays the default configuration for the CNO and explains both the parameters we can configure and valid parameter values:

CNO custom resource

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      clusterNetwork: 1
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      serviceNetwork: 2
      - 172.30.0.0/16
      defaultNetwork:
        type: OpenShiftSDN 3
        openshiftSDNConfig: 4
          mode: NetworkPolicy 5
          mtu: 1450 6
          vxlanPort: 4789 7
      kubeProxyConfig: 8
        iptablesSyncPeriod: 30s 9
        proxyArguments:
          iptables-min-sync-period: 10
          - 30s

1

Blocks of IP addresses from which Pod IPs are allocated and the subnet prefix length assigned to each individual node.

2

A block of IP addresses for services. The OpenShift SDN Container Network Interface (CNI) plug-in supports only a single IP address block for the service network.

3

The Software Defined Networking (SDN) plug-in being used. OpenShift SDN is the only plug-in supported in OpenShift 4.1.

4

OpenShift SDN specific configuration parameters.

5

The isolation mode for the OpenShift SDN CNI plug-in.

6

MTU for the VXLAN overlay network. This value is normally configured automatically.

7

The port to use for all VXLAN packets. The default value is 4789.

8

The Kubernetes network proxy (kube-proxy) configuration.

9

The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h and are described in the Go time package documentation.

10

The minimum duration before refreshing iptables rules. Ensures the refresh does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time package documentation.


DNS Operator

The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods, enabling DNS-based Kubernetes Service discovery in OpenShift.

The DNS Operator implements the DNS API from the API group: operator.openshift.io

The operator deploys CoreDNS using a DaemonSet, creates a Service for the DaemonSet, and configures the kubelet to instruct pods to use the CoreDNS Service IP for name resolution. The DNS Operator is deployed during installation as a Kubernetes Deployment.

View the deployment status

ClusterOperator is the Custom Resource object which holds the current state of an operator. This object is used by operators to convey their state to the rest of the cluster.

View the state of the DNS Operator

    $ oc get clusteroperator/dns
    NAME      VERSION     AVAILABLE   PROGRESSING   DEGRADED   SINCE
    dns       4.1.0-0.11  True        False         False      92m
    

AVAILABLE is True when at least 1 pod from the CoreDNS DaemonSet is reporting an Available status condition.


View the default DNS

Every new OpenShift installation has a dns.operator named default. It cannot be customized, replaced, or supplemented with additional DNS instances.

  1. View the default dns:

      $ oc describe dns.operator/default
      Name:         default
      Namespace:
      Labels:       <none>
      Annotations:  <none>
      API Version:  operator.openshift.io/v1
      Kind:         DNS
      ...
      Status:
        Cluster Domain:  cluster.local 1
        Cluster IP:      172.30.0.10 2
      ...

    1 The Cluster Domain field is the base DNS domain used to construct fully qualified Pod and Service domain names.

    2 The Cluster IP is the address pods query for name resolution. The IP is defined as the 10th address in the Service CIDR range.

  2. To find the Service CIDR of the cluster, use the oc get command:

      $ oc get networks.config/cluster -o jsonpath='{$.status.serviceNetwork}'
      [172.30.0.0/16]

Configuration of the CoreDNS Corefile or Kubernetes plugin is not supported.
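
As a quick check, the resolver configuration that the kubelet writes into each Pod should list the Cluster IP shown above as its nameserver. The Pod name below is a placeholder:

    $ oc exec -it <pod_name> -- cat /etc/resolv.conf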


DNS Operator status

View the status of the DNS Operator:

    $ oc describe clusteroperators/dns


DNS Operator logs

View the logs of the DNS Operator:

    $ oc logs --namespace=openshift-dns-operator deployment/dns-operator


Ingress Operator

The Ingress Operator implements the ingresscontroller API and is the component responsible for enabling external access to OpenShift cluster services. The operator deploys one or more HAProxy-based Ingress Controllers to handle routing. We can use the Ingress Operator to route traffic by specifying OpenShift Route and Kubernetes Ingress resources.
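
For example, external access to an application is typically requested by creating a Route resource similar to the following sketch; the route name, host, service name, and target port are hypothetical:

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: frontend
    spec:
      host: frontend.apps.openshiftdemos.com
      to:
        kind: Service
        name: frontend
      port:
        targetPort: 8080

If spec.host is omitted, a default host is generated from the cluster Ingress domain, as described below.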


The Ingress configuration asset

The installation program generates an asset with an Ingress resource in the API group: config.openshift.io

    apiVersion: config.openshift.io/v1
    kind: Ingress
    metadata:
      name: cluster
    spec:
      domain: apps.openshiftdemos.com
    

The installation program stores this asset in...

    manifests/cluster-ingress-02-config.yml

This Ingress resource defines the cluster-wide configuration for Ingress. This Ingress configuration is used as follows:

  • The Ingress Operator uses the domain from the cluster Ingress configuration as the domain for the default Ingress Controller.
  • The OpenShift API server operator uses the domain from the cluster Ingress configuration as the domain used when generating a default host for a Route resource that does not specify an explicit host.


View the default Ingress Controller

The Ingress Operator is a core feature and is enabled out of the box. Every new OpenShift installation has an ingresscontroller named default. It can be supplemented with additional Ingress Controllers. If the default ingresscontroller is deleted, the Ingress Operator will automatically recreate it within a minute.

View the default Ingress Controller:

    $ oc describe --namespace=openshift-ingress-operator ingresscontroller/default


View Ingress Operator status

    $ oc describe clusteroperators/ingress


View Ingress Controller logs

    $ oc logs --namespace=openshift-ingress-operator deployments/ingress-operator


View Ingress Controller status

    $ oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>


Set a custom default certificate

To configure an Ingress Controller to use a custom certificate, create a Secret resource and edit the IngressController custom resource (CR).

Prerequisites

  • A certificate/key pair in PEM-encoded files, where the certificate is signed by a trusted certificate authority and valid for the Ingress domain.

  • An IngressController custom resource. You may use the default one:

      $ oc --namespace openshift-ingress-operator get ingresscontrollers
      NAME      AGE
      default   10m
      

If the default certificate is replaced, it must be signed by a public certificate authority already included in the CA bundle as provided by the container userspace.

The following assumes that the custom certificate and key pair are in the tls.crt and tls.key files in the current working directory. Substitute the actual path names for tls.crt and tls.key. We may also substitute another name for custom-certs-default when creating the Secret resource and referencing it in the IngressController custom resource.

This action will cause the Ingress Controller to be redeployed, using a rolling deployment strategy.

Procedure...

  1. Create a Secret resource containing the custom certificate in the openshift-ingress namespace using the tls.crt and tls.key files.

      $ oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key

  2. Update the IngressController custom resource to reference the new certificate secret:

      $ oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default --patch '{"spec":{"defaultCertificate":{"name":"custom-certs-default"}}}'

  3. Verify the update was effective:

      $ oc get --namespace openshift-ingress-operator ingresscontrollers/default --output jsonpath='{.spec.defaultCertificate}'

    The output should look like:

      map[name:custom-certs-default]

    The certificate secret name should match the value used to update the custom resource.

Once the IngressController custom resource has been modified, the Ingress Operator will update the Ingress Controller's deployment to use the custom certificate.


Scale an Ingress Controller

Manually scale an Ingress Controller to meet routing performance or availability requirements, such as the requirement to increase throughput. oc commands are used to scale the IngressController resource.

  1. View the current number of available replicas for the default IngressController:

      $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'
      2

  2. Scale the default IngressController to the desired number of replicas using the oc patch command. The following example scales the default IngressController to 3 replicas:

      $ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge
      ingresscontroller.operator.openshift.io/default patched

  3. Verify that the default IngressController scaled to the number of replicas that you specified:

      $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'
      3

Scaling is not an immediate action, as it takes time to create the desired number of replicas.


Configure ingress controller sharding by using route labels

Ingress Controller sharding by using route labels means that the Ingress Controller serves any route in any namespace that is selected by the route selector. Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.

  1. Edit the router-internal.yaml file:

      # cat router-internal.yaml
      apiVersion: v1
      items:
      - apiVersion: operator.openshift.io/v1
        kind: IngressController
        metadata:
          name: sharded
          namespace: openshift-ingress-operator
        spec:
          domain: <apps-sharded.basedomain.example.net>
          nodePlacement:
            nodeSelector:
              matchLabels:
                node-role.kubernetes.io/worker: ""
          routeSelector:
            matchLabels:
              type: sharded
        status: {}
      kind: List
      metadata:
        resourceVersion: ""
        selfLink: ""

  2. Apply the Ingress Controller router-internal.yaml file:
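
    For example, with the standard oc apply command:

      $ oc apply -f router-internal.yaml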

    The Ingress Controller selects routes in any namespace that have the label type: sharded.
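
With the shard in place, a route is served by the sharded Ingress Controller once it carries the matching label. A minimal sketch, assuming an existing route named hello-openshift (hypothetical):

    $ oc label route hello-openshift type=sharded -n <project>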


Configure ingress controller sharding by using namespace labels

Ingress Controller sharding by using namespace labels means that the Ingress Controller serves any route in any namespace that is selected by the namespace selector.

Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.

  1. Edit the router-internal.yaml file:

      # cat router-internal.yaml
      apiVersion: v1
      items:
      - apiVersion: operator.openshift.io/v1
        kind: IngressController
        metadata:
          name: sharded
          namespace: openshift-ingress-operator
        spec:
          domain: <apps-sharded.basedomain.example.net>
          nodePlacement:
            nodeSelector:
              matchLabels:
                node-role.kubernetes.io/worker: ""
          namespaceSelector:
            matchLabels:
              type: sharded
        status: {}
      kind: List
      metadata:
        resourceVersion: ""
        selfLink: ""

  2. Apply the Ingress Controller router-internal.yaml file:
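
    For example, with the standard oc apply command:

      $ oc apply -f router-internal.yaml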

    The Ingress Controller selects routes in any namespace selected by the namespace selector, that is, in any namespace that has the label type: sharded.
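
With the shard in place, routes in a namespace are served by the sharded Ingress Controller once the namespace carries the matching label. A minimal sketch, with a placeholder project name:

    $ oc label namespace <project> type=sharded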


Manage multiple networks

Multus CNI provides the capability to attach multiple network interfaces to Pods in OpenShift, providing flexibility when configuring Pods that deliver network functionality, such as switching or routing.

Multus CNI is used when network isolation is needed, including data plane and control plane separation. Isolating network traffic is useful for:

  • Performance: Send traffic along two different planes in order to manage how much traffic is along each plane.
  • Security: Send sensitive traffic onto a network plane managed specifically for security considerations, and separate private data that must not be shared between tenants or customers.

All of the Pods in the cluster will still use the cluster-wide default network to maintain connectivity across the cluster. Every Pod has an eth0 interface which is attached to the cluster-wide Pod network. We can view the interfaces for a Pod using the oc exec -it <pod_name> -- ip a command. If you add additional network interfaces using Multus CNI, they will be named net1, net2, netN.

We attach additional network interfaces to a Pod by creating configurations which define how the interfaces are attached. Each interface is specified using a Custom Resource (CR) that has a NetworkAttachmentDefinition type. A CNI configuration inside each of these custom resources defines how that interface will be created. Multus CNI is a CNI plug-in that can call other CNI plug-ins. This allows the use of other CNI plug-ins to create additional network interfaces. For high performance networking, use the SR-IOV Device Plugin.

To attach additional network interfaces to Pods:

  1. Create a CNI configuration as a custom resource.
  2. Annotate the Pod with the configuration name.
  3. Verify that the attachment was successful by viewing the status annotation.


CNI configurations

CNI configurations are JSON data with only a single required field, type. The configuration in the additional field is free-form JSON data, which allows CNI plug-ins to make the configurations in the form that they require. Different CNI plug-ins use different configurations. See the documentation specific to the CNI plug-in that we want to use.

An example CNI configuration:

    {
      "cniVersion": "0.3.0", 1
      "type": "loopback", 2
      "additional": "<plugin-specific-json-data>" 3
    }

1 cniVersion: The CNI version used. The CNI plug-in uses this information to check whether it is using a valid version.

2 type: The CNI plug-in binary to call on disk. In this example, the loopback binary is specified. Therefore, it creates a loopback-type network interface.

3 additional: Each CNI plug-in specifies the configuration parameters it needs in JSON. These are specific to the CNI plug-in binary named in the type field.


Create additional network interfaces

Additional interfaces for Pods are defined in CNI configurations stored as Custom Resources (CRs). These custom resources can be created, listed, edited, and deleted using the oc tool.

The following procedure configures a macvlan interface on a Pod. This configuration might not apply to all production environments, but we can use the same procedure for other CNI plug-ins.


Create a CNI configuration for an additional interface as a custom resource

To attach an additional interface to a Pod, the custom resource that defines the interface must be in the same project (namespace) as the Pod.

  1. Create a project to store CNI configurations as custom resources and the Pods that will use the custom resources.

  2. Create the custom resource that will define an additional network interface. Create a YAML file called macvlan-conf.yaml with the following contents:

      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition 1
      metadata:
        name: macvlan-conf 2
      spec:
        config: '{ 3
            "cniVersion": "0.3.0",
            "type": "macvlan",
            "master": "eth0",
            "mode": "bridge",
            "ipam": {
      "type": "host-local",
      "subnet": "192.168.1.0/24",
      "rangeStart": "192.168.1.200",
      "rangeEnd": "192.168.1.216",
      "routes": [
        { "dst": "0.0.0.0/0" }
      ],
      "gateway": "192.168.1.1"
            }
          }'

    1 kind: NetworkAttachmentDefinition is the type of the custom resource where this configuration is stored. It is a custom extension of Kubernetes that defines how networks are attached to Pods.

    2 name maps to the annotation, which is used in the next step.

    3 config: The CNI configuration is packaged in the config field.

    The configuration is specific to the plug-in named in the type line of the CNI configuration, which in this example enables macvlan. Aside from the IPAM (IP address management) parameters for networking, the master field in this example must reference a network interface that resides on the node(s) hosting the Pod(s).

  3. Create the custom resource:

      $ oc create -f macvlan-conf.yaml

This example is based on a macvlan CNI plug-in. In AWS environments, macvlan traffic might be filtered and, therefore, might not reach the desired destination.


Manage the custom resources for additional interfaces

We can manage the custom resources for additional interfaces using the oc CLI.

List the custom resources for additional interfaces:

    $ oc get network-attachment-definitions.k8s.cni.cncf.io

Delete custom resources for additional interfaces:

    $ oc delete network-attachment-definitions.k8s.cni.cncf.io macvlan-conf


Create an annotated Pod that uses the custom resource

To create a Pod that uses the additional interface, use an annotation that refers to the custom resource. Create a YAML file called samplepod.yaml for a Pod with the following contents:

    apiVersion: v1
    kind: Pod
    metadata:
      name: samplepod
      annotations:
        k8s.v1.cni.cncf.io/networks: macvlan-conf 1
    spec:
      containers:
      - name: samplepod
        command: ["/bin/bash", "-c", "sleep 2000000000000"]
        image: centos/tools

1 The annotations field contains k8s.v1.cni.cncf.io/networks: macvlan-conf, which correlates to the name field in the custom resource defined earlier.

Create the samplepod Pod:

    $ oc create -f samplepod.yaml

To verify that an additional network interface has been created and attached to the Pod, list the IPv4 address information:

    $ oc exec -it samplepod -- ip -4 addr

Three interfaces are listed in the output:

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 1
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    3: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP  link-netnsid 0 2
        inet 10.244.1.4/24 scope global eth0
           valid_lft forever preferred_lft forever
    4: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN  link-netnsid 0 3
        inet 192.168.1.203/24 scope global net1
           valid_lft forever preferred_lft forever

1 lo: A loopback interface.

2 eth0: The interface that connects to the cluster-wide default network.

3 net1: The new interface that you just created.


Attach multiple interfaces to a Pod

To attach more than one additional interface to a Pod, specify multiple names, in comma-delimited format, in the annotations field in the Pod definition.

The following annotations field in a Pod definition specifies different custom resources for the additional interfaces:

     annotations:
        k8s.v1.cni.cncf.io/networks: macvlan-conf, tertiary-conf, quaternary-conf

The following annotations field in a Pod definition specifies the same custom resource for the additional interfaces:

     annotations:
        k8s.v1.cni.cncf.io/networks: macvlan-conf, macvlan-conf


View the interface configuration in a running Pod

After the Pod is running, we can review the configurations of the additional interfaces created. To view the sample Pod from the earlier example...

    $ oc describe pod samplepod

The metadata section of the output contains a list of annotations, which are displayed in JSON format:

    Annotations:
      k8s.v1.cni.cncf.io/networks: macvlan-conf
      k8s.v1.cni.cncf.io/networks-status:
        [{
            "name": "openshift-sdn",
            "ips": [
                "10.131.0.10"
            ],
            "default": true,
            "dns": {}
        },{
            "name": "macvlan-conf", 1
            "interface": "net1", 2
            "ips": [ 3
                "192.168.1.200"
            ],
            "mac": "72:00:53:b4:48:c4", 4
            "dns": {} 5
        }]

1 name refers to the custom resource name, macvlan-conf.

2 interface refers to the name of the interface in the Pod.

3 ips is a list of IP addresses assigned to the Pod.

4 mac is the MAC address of the interface.

5 dns refers to the DNS configuration for the interface.

The first annotation, k8s.v1.cni.cncf.io/networks: macvlan-conf, refers to the custom resource created in the example. This annotation was specified in the Pod definition.

The second annotation is k8s.v1.cni.cncf.io/networks-status. There are two interfaces listed under k8s.v1.cni.cncf.io/networks-status.

  • The first interface describes the interface for the default network, openshift-sdn. This interface is created as eth0. It is used for communications within the cluster.
  • The second interface, net1, is the additional interface that you created. The output above lists some key values that were configured when the interface was created, for example, the IP addresses that were assigned to the Pod.


Configure additional interfaces using host devices

The host-device plug-in connects an existing network device on a node directly to a Pod.

The commands below create a dummy device, using the dummy kernel module to back a virtual device, and name the dummy device exampledevice0.

    $ modprobe dummy
    $ lsmod | grep dummy
    $ ip link add exampledevice0 type dummy

  1. To connect the dummy network device to a Pod, label the host, so that we can assign a Pod to the node where the device exists.

      $ oc label nodes <your-worker-node-name> exampledevice=true
      $ oc get nodes --show-labels

  2. Create a YAML file called hostdevice-example.yaml for a custom resource to refer to this configuration:

      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: hostdevice-example
      spec:
        config: '{
            "cniVersion": "0.3.0",
            "type": "host-device",
            "device": "exampledevice0"
          }'

  3. Create the hostdevice-example custom resource:

      $ oc create -f hostdevice-example.yaml

  4. Create a YAML file for a Pod which refers to this name in the annotation. Include nodeSelector to assign the Pod to the node where we created the device.

      apiVersion: v1
      kind: Pod
      metadata:
        name: hostdevicesamplepod
        annotations:
          k8s.v1.cni.cncf.io/networks: hostdevice-example
      spec:
        containers:
        - name: hostdevicesamplepod
          command: ["/bin/bash", "-c", "sleep 2000000000000"]
          image: centos/tools
        nodeSelector:
          exampledevice: "true"

  5. Create the hostdevicesamplepod Pod:

      $ oc create -f hostdevicesamplepod.yaml

  6. View the additional interface that you created:

      $ oc exec hostdevicesamplepod -- ip a

SR-IOV multinetwork support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.


Configure SR-IOV

OpenShift includes the capability to use SR-IOV hardware on OpenShift nodes, which enables you to attach SR-IOV virtual function (VF) interfaces to Pods in addition to other network interfaces.

Two components are required to provide this capability: the SR-IOV network device plug-in and the SR-IOV CNI plug-in.

  • The SR-IOV network device plug-in is a Kubernetes device plug-in for discovering, advertising, and allocating SR-IOV network virtual function (VF) resources. Device plug-ins are used in Kubernetes to enable the use of limited resources, typically in physical devices. Device plug-ins give the Kubernetes scheduler awareness of which resources are exhausted, allowing Pods to be scheduled to worker nodes that have sufficient resources available.
  • The SR-IOV CNI plug-in plumbs VF interfaces allocated from the SR-IOV device plug-in directly into a Pod.


Supported Devices

The following Network Interface Card (NIC) models are supported in OpenShift:

  • Intel XXV710-DA2 25G card with vendor ID 0x8086 and device ID 0x158b
  • Mellanox MT27710 Family [ConnectX-4 Lx] 25G card with vendor ID 0x15b3 and device ID 0x1015
  • Mellanox MT27800 Family [ConnectX-5] 100G card with vendor ID 0x15b3 and device ID 0x1017

For Mellanox cards, ensure that SR-IOV is enabled in the firmware before provisioning VFs on the host.


Create SR-IOV plug-ins and daemonsets

The creation of SR-IOV VFs is not handled by the SR-IOV device plug-in or the SR-IOV CNI plug-in. To provision SR-IOV VFs on hosts, configure them manually.

To use the SR-IOV network device plug-in and SR-IOV CNI plug-in, run both plug-ins in daemon mode on each node in the cluster.
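
For example, VFs can be provisioned manually through the standard Linux sysfs interface on each node. The following is a minimal sketch; the interface name ens1f0 and the VF count are placeholders for your SR-IOV capable NIC:

    $ cat /sys/class/net/ens1f0/device/sriov_totalvfs   # maximum number of VFs the device supports
    $ echo 8 > /sys/class/net/ens1f0/device/sriov_numvfs # create 8 VFs (run as root on the node)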

  1. Create a YAML file for the openshift-sriov namespace with the following contents:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: openshift-sriov
        labels:
          name: openshift-sriov
          openshift.io/run-level: "0"
        annotations:
          openshift.io/node-selector: ""
          openshift.io/description: "Openshift SR-IOV network components"

  2. Create the openshift-sriov namespace:

      $ oc create -f openshift-sriov.yaml

  3. Create a YAML file for the sriov-device-plugin service account with the following contents:

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: sriov-device-plugin
        namespace: openshift-sriov

  4. Create the sriov-device-plugin service account:

      $ oc create -f sriov-device-plugin.yaml

  5. Create a YAML file for the sriov-cni service account with the following contents:

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: sriov-cni
        namespace: openshift-sriov

  6. Create the sriov-cni service account:

      $ oc create -f sriov-cni.yaml

  7. Create a YAML file for the sriov-device-plugin DaemonSet with the following contents:

    The SR-IOV network device plug-in daemon, when launched, will discover all the configured SR-IOV VFs (of supported NIC models) on each node and advertise discovered resources. The number of available SR-IOV VF resources that are capable of being allocated can be reviewed by describing a node with the oc describe node <node-name> command. The resource name for the SR-IOV VF resources is openshift.io/sriov. When no SR-IOV VFs are available on the node, a value of zero is displayed.

      kind: DaemonSet
      apiVersion: apps/v1
      metadata:
        name: sriov-device-plugin
        namespace: openshift-sriov
        annotations:
          kubernetes.io/description: |
            This daemon set launches the SR-IOV network device plugin on each node.
      spec:
        selector:
          matchLabels:
            app: sriov-device-plugin
        updateStrategy:
          type: RollingUpdate
        template:
          metadata:
            labels:
              app: sriov-device-plugin
              component: network
              type: infra
              openshift.io/component: network
          spec:
            hostNetwork: true
            nodeSelector:
              beta.kubernetes.io/os: linux
            tolerations:
            - operator: Exists
            serviceAccountName: sriov-device-plugin
            containers:
            - name: sriov-device-plugin
              image: quay.io/openshift/ose-sriov-network-device-plugin:v4.0.0
              args:
              - --log-level=10
              securityContext:
                privileged: true
              volumeMounts:
              - name: devicesock
                mountPath: /var/lib/kubelet/
                readOnly: false
              - name: net
                mountPath: /sys/class/net
                readOnly: true
            volumes:
            - name: devicesock
              hostPath:
                path: /var/lib/kubelet/
            - name: net
              hostPath:
                path: /sys/class/net

  8. Create the sriov-device-plugin DaemonSet:

      $ oc create -f sriov-device-plugin.yaml

  9. Create a YAML file for the sriov-cni DaemonSet with the following contents:

      kind: DaemonSet
      apiVersion: apps/v1
      metadata:
        name: sriov-cni
        namespace: openshift-sriov
        annotations:
          kubernetes.io/description: |
            This daemon set launches the SR-IOV CNI plugin on SR-IOV capable worker nodes.
      spec:
        selector:
          matchLabels:
            app: sriov-cni
        updateStrategy:
          type: RollingUpdate
        template:
          metadata:
            labels:
              app: sriov-cni
              component: network
              type: infra
              openshift.io/component: network
          spec:
            nodeSelector:
              beta.kubernetes.io/os: linux
            tolerations:
            - operator: Exists
            serviceAccountName: sriov-cni
            containers:
            - name: sriov-cni
              image: quay.io/openshift/ose-sriov-cni:v4.0.0
              securityContext:
                privileged: true
              volumeMounts:
              - name: cnibin
                mountPath: /host/opt/cni/bin
            volumes:
            - name: cnibin
              hostPath:
                path: /var/lib/cni/bin

  10. Create the sriov-cni DaemonSet:

      $ oc create -f sriov-cni.yaml


Configure additional interfaces using SR-IOV

  1. Create a YAML file for the Custom Resource (CR) with SR-IOV configuration. The name field in the following custom resource has the value sriov-conf.

      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: sriov-conf
        annotations:
          k8s.v1.cni.cncf.io/resourceName: openshift.io/sriov 1
      spec:
        config: '{
            "type": "sriov", 2
            "name": "sriov-conf",
            "ipam": {
      "type": "host-local",
      "subnet": "10.56.217.0/24",
      "routes": [{
        "dst": "0.0.0.0/0"
      }],
      "gateway": "10.56.217.1"
            }
          }'

    1 k8s.v1.cni.cncf.io/resourceName annotation is set to openshift.io/sriov.

    2 type is set to sriov.

  2. Create the sriov-conf custom resource:

      $ oc create -f sriov-conf.yaml

  3. Create a YAML file for a Pod which references the name of the NetworkAttachmentDefinition and requests one openshift.io/sriov resource:

      apiVersion: v1
      kind: Pod
      metadata:
        name: sriovsamplepod
        annotations:
          k8s.v1.cni.cncf.io/networks: sriov-conf
      spec:
        containers:
        - name: sriovsamplepod
          command: ["/bin/bash", "-c", "sleep 2000000000000"]
          image: centos/tools
          resources:
            requests:
              openshift.io/sriov: '1'
            limits:
              openshift.io/sriov: '1'

  4. Create the sriovsamplepod Pod:

      $ oc create -f sriovsamplepod.yaml

  5. View the additional interface by executing the ip command:

      $ oc exec sriovsamplepod -- ip a


Configure network policy with OpenShift SDN

In a cluster using a Kubernetes Container Network Interface (CNI) plug-in that supports NetworkPolicy, network isolation is controlled entirely by NetworkPolicy objects. In OpenShift 4.1, OpenShift SDN supports using NetworkPolicy in its default network isolation mode.

The Kubernetes v1 NetworkPolicy features are available in OpenShift except for egress policy types and IPBlock.

By default, all Pods in a project are accessible from other Pods and network endpoints. To isolate one or more Pods in a project, we can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project.

If a Pod is matched by selectors in one or more NetworkPolicy objects, then the Pod will accept only connections allowed by at least one of those NetworkPolicy objects. A Pod that is not selected by any NetworkPolicy objects is fully accessible.

The following example NetworkPolicy objects demonstrate supporting different scenarios:

  • Deny all traffic:

    To make a project deny by default, add a NetworkPolicy object that matches all Pods but accepts no traffic:

      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: deny-by-default
      spec:
        podSelector:
        ingress: []

  • Only allow connections from the OpenShift Ingress Controller:

    To make a project allow only connections from the OpenShift Ingress Controller, add the following NetworkPolicy object:

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-openshift-ingress
      spec:
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                network.openshift.io/policy-group: ingress
        podSelector: {}
        policyTypes:
        - Ingress

  • Only accept connections from Pods within a project:

    To make Pods accept connections from other Pods in the same project, but reject all other connections from Pods in other projects, add the following NetworkPolicy object:

      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: allow-same-namespace
      spec:
        podSelector:
        ingress:
        - from:
          - podSelector: {}

  • Only allow HTTP and HTTPS traffic based on Pod labels:

    To enable only HTTP and HTTPS access to the Pods with a specific label (role=frontend in following example), add a NetworkPolicy object similar to:

      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: allow-http-and-https
      spec:
        podSelector:
          matchLabels:
            role: frontend
        ingress:
        - ports:
          - protocol: TCP
            port: 80
          - protocol: TCP
            port: 443

NetworkPolicy objects are additive, which means we can combine multiple NetworkPolicy objects together to satisfy complex network requirements.

For example, for the NetworkPolicy objects defined in the previous samples, we can define both the allow-same-namespace and allow-http-and-https policies within the same project. This allows the Pods with the label role=frontend to accept any connection allowed by either policy: connections on any port from Pods in the same namespace, and connections on ports 80 and 443 from Pods in any namespace.


Example NetworkPolicy object

The following annotates an example NetworkPolicy object:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: allow-27107 1
    spec:
      podSelector: 2
        matchLabels:
          app: mongodb
      ingress:
      - from:
        - podSelector: 3
            matchLabels:
              app: app
        ports: 4
        - protocol: TCP
          port: 27017

1 The name of the NetworkPolicy object.

2 A selector describing the Pods to which the policy applies. The policy object can only select Pods in the project in which the NetworkPolicy object is defined.

3 A selector matching the Pods that the policy object allows ingress traffic from. The selector will match Pods in any project.

4 A list of one or more destination ports to accept traffic on.


Create a NetworkPolicy object

To define granular rules describing Ingress network traffic allowed for projects in the cluster, we can create NetworkPolicy objects.

Prerequisites

  • A cluster using the OpenShift SDN network plug-in with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
  • Install the OpenShift CLI.
  • Log in to the cluster.

  1. Create a policy rule:
    1. Create a <policy-name>.yaml file where <policy-name> describes the policy rule.

    2. In the file you just created, define a policy object, such as in the following example:

        kind: NetworkPolicy
        apiVersion: networking.k8s.io/v1
        metadata:
          name: <policy-name> 1
        spec:
          podSelector:
          ingress: []

      1 Specify a name for the policy object.

  2. Create the policy object:

      $ oc create -f <policy-name>.yaml -n <project>

    In the following example, a new NetworkPolicy object is created in a project named project1:

      $ oc create -f default-deny.yaml -n project1
      networkpolicy "default-deny" created


Delete a NetworkPolicy object

We can delete a NetworkPolicy object.

Prerequisites

  • A cluster using the OpenShift SDN network plug-in with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
  • Install the OpenShift CLI.
  • Log in to the cluster.
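
The deletion itself uses the standard oc delete command; the policy and project names below are placeholders:

    $ oc delete networkpolicy <policy-name> -n <project>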


View NetworkPolicy objects

We can list the NetworkPolicy objects in the cluster.

Prerequisites

  • A cluster using the OpenShift SDN network plug-in with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
  • Install the OpenShift CLI.
  • Log in to the cluster.

  • To view NetworkPolicy objects defined in the cluster...
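
    For example, the standard oc get command lists them; add -n <project> to scope the list to a single project:

      $ oc get networkpolicy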


OpenShift SDN

OpenShift uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between Pods. OpenShift SDN configures an overlay network using Open vSwitch (OVS).

OpenShift SDN provides three SDN modes for configuring the Pod network:

  • network policy: The default mode. Allows project administrators to configure their own isolation policies using NetworkPolicy objects.
  • multitenant: Provides project-level isolation for Pods and Services. Pods from different projects cannot send packets to or receive packets from Pods and Services of a different project. We can disable isolation for a project, allowing it to send network traffic to all Pods and Services in the entire cluster and receive network traffic from those Pods and Services.
  • subnet: Provides a flat Pod network where every Pod can communicate with every other Pod and Service. The network policy mode provides the same functionality as the subnet mode.


Assign egress IPs to a project

We can configure OpenShift Software Defined Network (SDN) to assign one or more egress IP addresses to a project. All outgoing external connections from the specified project will share the same, fixed source IP, allowing external resources to recognize the traffic based on the egress IP. An egress IP address assigned to a project is different from the egress router, which is used to send traffic to specific destinations.

Egress IPs are implemented as additional IP addresses on the primary network interface of the node and must be in the same subnet as the node's primary IP.

Egress IPs must not be configured in any Linux network configuration files, such as ifcfg-eth0.

Allowing additional IP addresses on the primary network interface might require extra configuration when using some cloud or VM solutions.

We can assign egress IP addresses to namespaces by setting the egressIPs parameter of the NetNamespace resource. After an egress IP is associated with a project, OpenShift SDN allows us to assign egress IPs to hosts in two ways:

  • In the automatically assigned approach, an egress IP address range is assigned to a node. You set the egressCIDRs parameter of each node's HostSubnet resource to indicate the range of egress IP addresses that can be hosted by a node. This is the preferred approach.
  • In the manually assigned approach, a list of one or more egress IP address is assigned to a node. You set the egressIPs parameter of each node's HostSubnet resource to indicate the IP addresses that can be hosted by a node.

Namespaces that request egress IP addresses are matched with nodes that can host those egress IP addresses, and then the egress IP addresses are assigned to those nodes. If egressIPs is set on a NetNamespace resource, but no node hosts that egress IP address, then egress traffic from the namespace will be dropped.

High availability of nodes is automatic. If a node that hosts egress IP addresses is unreachable and there are nodes that are able to host those egress IP addresses, then the egress IP addresses will move to a new node. When the original egress IP address node comes back online, the egress IP addresses automatically move to balance egress IP addresses across nodes.

We cannot use manually assigned and automatically assigned egress IP addresses on the same nodes. If you manually assign egress IP addresses from an IP address range, we must not make that range available for automatic IP assignment.


Enable automatically assigned egress IPs for a namespace

In OpenShift we can enable automatic assignment of an egress IP address for a specific namespace across one or more nodes.

Prerequisites

  1. Update the NetNamespace resource with the egress IP address using the following JSON:

       $ oc patch netnamespace <project_name> --type=merge -p \ 1
        '{
          "egressIPs": [
            "<ip_address>" 2
          ]
        }'

    1 Name of the project.

    2 Specify a single egress IP address. Using multiple IP addresses is not supported.

    For example, to assign project1 to an IP address of 192.168.1.100 and project2 to an IP address of 192.168.1.101:

      $ oc patch netnamespace project1 --type=merge -p \
        '{"egressIPs": ["192.168.1.100"]}'
      $ oc patch netnamespace project2 --type=merge -p \
        '{"egressIPs": ["192.168.1.101"]}'

  2. Indicate which nodes can host egress IP addresses by setting the egressCIDRs parameter for each host using the following JSON:

      $ oc patch hostsubnet <node_name> --type=merge -p \ 1
        '{
          "egressCIDRs": [
            "<ip_address_range_1>", "<ip_address_range_2>" 2
          ]
        }'

    1 Specify a node name.

    2 Specify one or more IP address ranges in CIDR format.

    For example, to set node1 and node2 to host egress IP addresses in the range 192.168.1.0 to 192.168.1.255:

      $ oc patch hostsubnet node1 --type=merge -p \
        '{"egressCIDRs": ["192.168.1.0/24"]}'
      $ oc patch hostsubnet node2 --type=merge -p \
        '{"egressCIDRs": ["192.168.1.0/24"]}'

  3. OpenShift automatically assigns specific egress IP addresses to available nodes in a balanced way. In this case, it assigns the egress IP address 192.168.1.100 to node1 and the egress IP address 192.168.1.101 to node2, or vice versa.


Configure manually assigned egress IPs

In OpenShift we can associate one or more egress IPs with a project.

Prerequisites

  1. Update the NetNamespace resource by specifying the following JSON object with the desired IP addresses:

      $ oc patch netnamespace <project> --type=merge -p \ 1
        '{
          "egressIPs": [ 2
            "<ip_address>"
            ]
        }'

    1 Name of the project.

    2 Specify one or more egress IP addresses. The egressIPs parameter is an array.

    For example, to assign the project1 project to an IP address of 192.168.1.100:

      $ oc patch netnamespace project1 --type=merge \
        -p '{"egressIPs": ["192.168.1.100"]}'

    We can set egressIPs to two or more IP addresses on different nodes to provide high availability. If multiple egress IP addresses are set, pods use the first IP in the list for egress, but if the node hosting that IP address fails, pods switch to using the next IP in the list after a short delay.

  2. Manually assign the egress IP to the node hosts. Set the egressIPs parameter on the HostSubnet object on the node host. Using the following JSON, include as many IPs as we want to assign to that node host:

      $ oc patch hostsubnet <node_name> --type=merge -p \ 1
        '{
          "egressIPs": [ 2
            "<ip_address_1>",
            "<ip_address_N>"
            ]
        }'

    1 Name of the node.

    2 Specify one or more egress IP addresses. The egressIPs field is an array.

    For example, to specify that node1 should have the egress IPs 192.168.1.100, 192.168.1.101, and 192.168.1.102:

      $ oc patch hostsubnet node1 --type=merge -p \
        '{"egressIPs": ["192.168.1.100", "192.168.1.101", "192.168.1.102"]}'

In the previous example, all egress traffic for project1 will be routed to the node hosting the specified egress IP, and then connected (using NAT) to that IP address.


Use multicast

With IP multicast, data is broadcast to many IP addresses simultaneously.

At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution.

Multicast traffic between OpenShift Pods is disabled by default. If we are using the OpenShift SDN network plug-in, we can enable multicast on a per-project basis.

When using the OpenShift SDN network plug-in in networkpolicy isolation mode:

  • Multicast packets sent by a Pod will be delivered to all other Pods in the project, regardless of NetworkPolicy objects. Pods might be able to communicate over multicast even when they cannot communicate over unicast.
  • Multicast packets sent by a Pod in one project will never be delivered to Pods in any other project, even if there are NetworkPolicy objects that allow communication between the projects.

When using the OpenShift SDN network plug-in in multitenant isolation mode:

  • Multicast packets sent by a Pod will be delivered to all other Pods in the project.
  • Multicast packets sent by a Pod in one project will be delivered to Pods in other projects only if each project is joined together and multicast is enabled in each joined project.


Enable multicast between Pods

  • Enable multicast for a project:

      $ oc annotate netnamespace <namespace> \ 1
      netnamespace.network.openshift.io/multicast-enabled=true

    1 The namespace for the project we want to enable multicast for.


Disable multicast between Pods

We can disable multicast between Pods for our project.

Prerequisites

  • Disable multicast:

      $ oc annotate netnamespace <namespace> \ 1
      netnamespace.network.openshift.io/multicast-enabled-

    1 The namespace for the project we want to disable multicast for.


Configure network isolation using OpenShift SDN

When the cluster is configured to use the multitenant isolation mode for the OpenShift SDN CNI plug-in, each project is isolated by default. Network traffic is not allowed between Pods or services in different projects in multitenant isolation mode.

We can change the behavior of multitenant isolation for a project in two ways:

  • We can join one or more projects, allowing network traffic between Pods and services in different projects.
  • We can disable network isolation for a project. It will be globally accessible, accepting network traffic from Pods and services in all other projects. A globally accessible project can access Pods and services in all other projects.

Prerequisites

  • A cluster configured to use the OpenShift SDN Container Network Interface (CNI) plug-in in multitenant isolation mode.


Joining projects

We can join two or more projects to allow network traffic between Pods and services in different projects.

Prerequisites

  1. Join projects to an existing project network:

      $ oc adm pod-network join-projects --to=<project1> <project2> <project3>

    Alternatively, instead of specifying specific project names, we can use the --selector=<project_selector> option to specify projects based upon an associated label.

  2. Optional: View the pod networks that we have joined together:
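
    For example, with the oc get command for the NetNamespace resources, which carry the network IDs:

      $ oc get netnamespaces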

    Projects in the same pod-network have the same network ID in the NETID column.


Isolating a project

We can isolate a project so that Pods and services in other projects cannot access its Pods and services.

Prerequisites

  • To isolate the projects in the cluster...

      $ oc adm pod-network isolate-projects <project1> <project2>

    Alternatively, instead of specifying specific project names, we can use the --selector=<project_selector> option to specify projects based upon an associated label.


Disable network isolation for a project

We can disable network isolation for a project.

Prerequisites

  • Run the following command for the project:

      $ oc adm pod-network make-projects-global <project1> <project2>

    Alternatively, instead of specifying specific project names, we can use the --selector=<project_selector> option to specify projects based upon an associated label.


Configure kube-proxy

The Kubernetes network proxy (kube-proxy) runs on each node and is managed by the Cluster Network Operator (CNO). kube-proxy maintains network rules for forwarding connections for endpoints associated with services.


About iptables rules synchronization

The synchronization period determines how frequently the Kubernetes network proxy (kube-proxy) syncs the iptables rules on a node.

A sync begins when either of the following events occurs:

  • An event occurs, such as a service or endpoint being added to or removed from the cluster.
  • The time since the last sync exceeds the sync period defined for kube-proxy.


Modify the kube-proxy configuration

We can modify the Kubernetes network proxy configuration for the cluster.

Prerequisites

  1. Edit the Network.operator.openshift.io Custom Resource (CR):

      $ oc edit network.operator.openshift.io cluster

  2. Modify the kubeProxyConfig parameter in the custom resource with the changes to the kube-proxy configuration, such as in the following example custom resource:

      apiVersion: operator.openshift.io/v1
      kind: Network
      metadata:
        name: cluster
      spec:
        kubeProxyConfig:
          iptablesSyncPeriod: 30s
          proxyArguments:
            iptables-min-sync-period: ["30s"]

  3. Save the file and exit the text editor.

    The syntax is validated by the oc command when you save the file and exit the editor. If modifications contain a syntax error, the editor opens the file and displays an error message.

  4. Confirm the configuration update:

      $ oc get networks.operator.openshift.io -o yaml

    The command returns output similar to the following example:

      apiVersion: v1
      items:
      - apiVersion: operator.openshift.io/v1
        kind: Network
        metadata:
          name: cluster
        spec:
          clusterNetwork:
          - cidr: 10.128.0.0/14
            hostPrefix: 23
          defaultNetwork:
            type: OpenShiftSDN
          kubeProxyConfig:
            iptablesSyncPeriod: 30s
            proxyArguments:
              iptables-min-sync-period:
              - 30s
          serviceNetwork:
          - 172.30.0.0/16
        status: {}
      kind: List

  5. Optional: Confirm that the Cluster Network Operator accepted the configuration change:

      $ oc get clusteroperator network
      NAME      VERSION     AVAILABLE   PROGRESSING   DEGRADED   SINCE
      network   4.1.0-0.9   True        False         False      1m

    The AVAILABLE field is True when the configuration update is applied successfully.


kube-proxy configuration parameters

We can modify the following kubeProxyConfig parameters:

  • iptablesSyncPeriod: The refresh period for iptables rules. A time interval, such as 30s or 2m. Valid suffixes include s, m, and h and are described in the Go time package documentation. The default value is 30s.
  • proxyArguments.iptables-min-sync-period: The minimum duration before refreshing iptables rules. Ensures the refresh does not happen too frequently. A time interval, such as 30s or 2m. Valid suffixes include s, m, and h and are described in the Go time package documentation. The default value is 30s.


Configure Routes

Configure route timeouts

We can configure the default timeouts for an existing route when we have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end.

Prerequisites

  • You need a deployed Ingress Controller on a running cluster.

  1. Using the oc annotate command, add the timeout to the route:

      $ oc annotate route <route_name> \
          --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1

    1 Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d).

    The following example sets a timeout of two seconds on a route named myroute:

      $ oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s
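
    To verify, we can inspect the route and confirm that the haproxy.router.openshift.io/timeout annotation appears under metadata.annotations:

      $ oc get route myroute -o yaml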


Enable HTTP strict transport security

HTTP Strict Transport Security (HSTS) policy is a security enhancement, which ensures that only HTTPS traffic is allowed on the host. Any HTTP requests are dropped by default. This is useful for ensuring secure interactions with websites, or to offer a secure application for the user's benefit.

When HSTS is enabled, HSTS adds a Strict Transport Security header to HTTPS responses from the site. We can use the insecureEdgeTerminationPolicy value in a route to redirect HTTP to HTTPS. However, when HSTS is enabled, the client changes all requests from the HTTP URL to HTTPS before the request is sent, eliminating the need for a redirect. Clients are not required to support HSTS, and the policy can be disabled by setting max-age=0.

HSTS works only with secure routes (either edge terminated or re-encrypt). The configuration is ineffective on HTTP or passthrough routes.

  • To enable HSTS on a route, add the haproxy.router.openshift.io/hsts_header value to the edge terminated or re-encrypt route:

      apiVersion: v1
      kind: Route
      metadata:
        annotations:
          haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3

    1 max-age is the only required parameter. It measures the length of time, in seconds, that the HSTS policy is in effect. The client updates max-age whenever a response with an HSTS header is received from the host. When max-age times out, the client discards the policy.

    2 includeSubDomains is optional. When included, it tells the client that all subdomains of the host are to be treated the same as the host.

    3 preload is optional. When max-age is greater than 0, then including preload in haproxy.router.openshift.io/hsts_header allows external services to include this site in their HSTS preload lists. For example, sites such as Google can construct a list of sites that have preload set. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, before they have interacted with the site. Without preload set, browsers must have interacted with the site over HTTPS to get the header.
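
    The annotation can also be applied to an existing edge or re-encrypt route with the oc annotate command. For example:

      $ oc annotate route <route_name> --overwrite haproxy.router.openshift.io/hsts_header="max-age=31536000;includeSubDomains;preload"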


Troubleshoot throughput issues

Sometimes applications deployed through OpenShift can cause network throughput issues such as unusually high latency between specific services.

Use the following methods to analyze performance issues if Pod logs do not reveal any cause of the problem:

  • Use ping to check basic connectivity and a packet analyzer, such as tcpdump, to analyze traffic between a Pod and its node.

    For example, run the tcpdump tool on each Pod while reproducing the behavior that led to the issue. Review the captures on both sides to compare send and receive timestamps to analyze the latency of traffic to and from a Pod. Latency can occur if a node interface is overloaded with traffic from other Pods, storage devices, or the data plane.

      $ tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> and host <podip 2> 1

    1 podip is the IP address for the Pod. Run the oc get pod <pod_name> -o wide command to get the IP address of a Pod.

    tcpdump generates a file at /tmp/dump.pcap containing all traffic between these two Pods. Ideally, run the analyzer shortly before the issue is reproduced and stop the analyzer shortly after the issue is finished reproducing to minimize the size of the file. We can also run a packet analyzer between the nodes (eliminating the SDN from the equation) with:

      $ tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789

  • Use a bandwidth measuring tool, such as iperf, to measure streaming throughput and UDP throughput. Run the tool from the Pods first, then from the nodes, to locate any bottlenecks.
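
    For example, a minimal sketch using iperf3 (assuming the iperf3 binary is available in the Pod image or on the node): start a server on one side, then run a client against it for TCP and UDP measurements:

      $ iperf3 -s                        # server side
      $ iperf3 -c <server_ip>            # client side, TCP streaming throughput
      $ iperf3 -c <server_ip> -u -b 1G   # client side, UDP throughput at a 1 Gbit/s target rate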


Use cookies to keep route statefulness

OpenShift provides sticky sessions, which enables stateful application traffic by ensuring all traffic hits the same endpoint. However, if the endpoint Pod terminates, whether through restart, scaling, or a change in configuration, this statefulness can disappear.

OpenShift can use cookies to configure session persistence. The Ingress controller selects an endpoint to handle any user requests, and creates a cookie for the session. The cookie is passed back in the response to the request and the user sends the cookie back with the next request in the session. The cookie tells the Ingress Controller which endpoint is handling the session, ensuring that client requests use the cookie so that they are routed to the same Pod.
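
We can control the cookie name by annotating the route. For example, the following sketch assumes the router.openshift.io/cookie_name route annotation:

    $ oc annotate route <route_name> router.openshift.io/cookie_name="<cookie_name>"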


Secured routes

The following sections describe how to create re-encrypt and edge routes with custom certificates.


Create a re-encrypt route with a custom certificate

We can configure a secure route using reencrypt TLS termination with a custom certificate using the oc create route command.

Prerequisites

  • A certificate/key pair in PEM-encoded files, where the certificate is valid for the route host.
  • A separate CA certificate in a PEM-encoded file that completes the certificate chain.
  • A separate destination CA certificate in a PEM-encoded file.
  • A Service resource that we want to expose.

Password protected key files are not supported. To remove a passphrase from a key file, use the following command:

    $ openssl rsa -in password_protected_tls.key -out tls.key

This procedure creates a Route resource with a custom certificate and reencrypt TLS termination. The following assumes that the certificate/key pair is in the tls.crt and tls.key files in the current working directory. We must also specify a destination CA certificate to enable the Ingress Controller to trust the service's certificate. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt, tls.key, destca.crt, and (optionally) ca.crt. Substitute the name of the Service resource that we want to expose for frontend. Substitute the appropriate host name for www.example.com.

  • Create a secure Route resource using reencrypt TLS termination and a custom certificate:

      $ oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com

    If you examine the resulting Route resource, it should look similar to the following:

    YAML Definition of the Secure Route

      apiVersion: v1
      kind: Route
      metadata:
        name: frontend
      spec:
        host: www.example.com
        to:
          kind: Service
          name: frontend
        tls:
          termination: reencrypt
          key: |-
            -----BEGIN PRIVATE KEY-----
            [...]
            -----END PRIVATE KEY-----
          certificate: |-
            -----BEGIN CERTIFICATE-----
            [...]
            -----END CERTIFICATE-----
          caCertificate: |-
            -----BEGIN CERTIFICATE-----
            [...]
            -----END CERTIFICATE-----
          destinationCACertificate: |-
            -----BEGIN CERTIFICATE-----
            [...]
            -----END CERTIFICATE-----

    See oc create route reencrypt --help for more options.


Create an edge route with a custom certificate

We can configure a secure route using edge TLS termination with a custom certificate using the oc create route command. With an edge route, the Ingress Controller terminates TLS encryption before forwarding traffic to the destination Pod. The route specifies the TLS certificate and key that the Ingress Controller uses for the route.

Prerequisites

  • A certificate/key pair in PEM-encoded files, where the certificate is valid for the route host.
  • A separate CA certificate in a PEM-encoded file that completes the certificate chain.
  • A Service resource that we want to expose.

Password protected key files are not supported. To remove a passphrase from a key file, use the following command:

    $ openssl rsa -in password_protected_tls.key -out tls.key

This procedure creates a Route resource with a custom certificate and edge TLS termination. The following assumes that the certificate/key pair is in the tls.crt and tls.key files in the current working directory. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt, tls.key, and (optionally) ca.crt. Substitute the name of the Service resource that we want to expose for frontend. Substitute the appropriate host name for www.example.com.

  • Create a secure Route resource using edge TLS termination and a custom certificate.

      $ oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com

    If you examine the resulting Route resource, it should look similar to the following:

    YAML Definition of the Secure Route

      apiVersion: v1
      kind: Route
      metadata:
        name: frontend
      spec:
        host: www.example.com
        to:
          kind: Service
          name: frontend
        tls:
          termination: edge
          key: |-
            -----BEGIN PRIVATE KEY-----
            [...]
            -----END PRIVATE KEY-----
          certificate: |-
            -----BEGIN CERTIFICATE-----
            [...]
            -----END CERTIFICATE-----
          caCertificate: |-
            -----BEGIN CERTIFICATE-----
            [...]
            -----END CERTIFICATE-----

    See oc create route edge --help for more options.


Configure ingress cluster traffic

OpenShift provides the following methods for communicating from outside the cluster with services running in the cluster.

The methods are recommended, in order of preference:

  • If we have HTTP/HTTPS, use an Ingress Controller.
  • If we have a TLS-encrypted protocol other than HTTPS (for example, TLS with the SNI header), use an Ingress Controller.
  • Otherwise, use a Load Balancer, an External IP, or a NodePort.

  • Use an Ingress Controller: Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS (for example, TLS with the SNI header).
  • Automatically assign an external IP using a load balancer service: Allows traffic to non-standard ports through an IP address assigned from a pool.
  • Manually assign an external IP to a service: Allows traffic to non-standard ports through a specific IP address.
  • Configure a NodePort: Exposes a service on all nodes in the cluster.


Configure ingress cluster traffic using an Ingress Controller

OpenShift provides methods for communicating from outside the cluster with services running in the cluster. This method uses an Ingress Controller.


Use Ingress Controllers and routes

The Ingress Operator manages Ingress Controllers and wildcard DNS.

Using an Ingress Controller is the most common way to allow external access to an OpenShift cluster.

An Ingress Controller is configured to accept external requests and proxy them based on the configured routes. This is limited to HTTP, HTTPS using SNI, and TLS using SNI, which is sufficient for web applications and services that work over TLS with SNI.

Work with your administrator to configure an Ingress Controller to accept external requests and proxy them based on the configured routes.

The administrator can create a wildcard DNS entry and then set up an Ingress Controller. Then, we can work with the edge Ingress Controller without having to contact the administrators.

When a set of routes is created in various projects, the overall set of routes is available to the set of Ingress Controllers. Each Ingress Controller admits routes from the set of routes. By default, all Ingress Controllers admit all routes.

The Ingress Controller:

  • Has two replicas by default, which means it should be running on two worker nodes.
  • Can be scaled up to have more replicas on more nodes.
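
    For example, a minimal sketch of scaling the default Ingress Controller to three replicas (assuming the default IngressController object in the openshift-ingress-operator namespace):

      $ oc patch -n openshift-ingress-operator ingresscontroller/default \
          --type=merge --patch '{"spec":{"replicas": 3}}'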

The procedures in this section require prerequisites performed by the cluster administrator.

Prerequisites

Before starting:

  • Set up the external port to the cluster networking environment so that requests can reach the cluster.

  • Make sure there is at least one user with cluster admin role. To add this role to a user...

  • Have an OpenShift cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic.


Create a project and service

If the project and service that we want to expose do not exist, first create the project, then the service. If the project and service already exist, go to the next step: Expose the service to create a route.

  1. Log on to OpenShift.

  2. Create a new project for your service:

    For example:
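
      $ oc new-project <project_name>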

  3. Create a service:

    For example:

      $ oc new-app \
          -e MYSQL_USER=admin \
          -e MYSQL_PASSWORD=redhat \
          -e MYSQL_DATABASE=mysqldb \
          registry.redhat.io/openshift3/mysql-55-rhel7

  4. See that the new service is created:

      $ oc get svc -n openshift-ingress
      NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
      router-default            LoadBalancer   172.30.16.119   52.230.228.163   80:30745/TCP,443:32561/TCP   2d6h
      router-internal-default   ClusterIP      172.30.101.15   <none>           80/TCP,443/TCP,1936/TCP      2d6h

    By default, the new service does not have an external IP address.


Expose the service by creating a route

We can expose the service as a route using the oc expose command.

To expose the service:

  1. Log on to OpenShift.

  2. Log on to the project where the service we want to expose is located.

  3. Expose the route:

      oc expose service <service-name>

    For example:

      oc expose service mysql-55-rhel7
      route "mysql-55-rhel7" exposed

  4. Use a tool, such as cURL, to make sure we can reach the service using the cluster IP address for the service:

      curl <cluster-ip>:<port>

    For example:

      curl 172.30.131.89:3306

    The examples in this section use a MySQL service, which requires a client application. If we get a string of characters with the Got packets out of order message, we are connected to the service.

    If we have a MySQL client, log in with the standard CLI command:

      $ mysql -h 172.30.131.89 -u admin -p
      Enter password:
      Welcome to the MariaDB monitor.  Commands end with ; or \g.
      
      MySQL [(none)]>


Configure ingress controller sharding by using route labels

Ingress Controller sharding by using route labels means that the Ingress Controller serves any route in any namespace that is selected by the route selector.

Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.

  1. Edit the router-internal.yaml file:

      # cat router-internal.yaml
      apiVersion: v1
      items:
      - apiVersion: operator.openshift.io/v1
        kind: IngressController
        metadata:
          name: sharded
          namespace: openshift-ingress-operator
        spec:
          domain: <apps-sharded.basedomain.example.net>
          nodePlacement:
            nodeSelector:
              matchLabels:
                node-role.kubernetes.io/worker: ""
          routeSelector:
            matchLabels:
              type: sharded
        status: {}
      kind: List
      metadata:
        resourceVersion: ""
        selfLink: ""

  2. Apply the Ingress Controller router-internal.yaml file:
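
      $ oc apply -f router-internal.yaml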

    The Ingress Controller selects routes in any namespace that have the label type: sharded.


Configure ingress controller sharding by using namespace labels

Ingress Controller sharding by using namespace labels means that the Ingress Controller serves any route in any namespace that is selected by the namespace selector.

Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.

  1. Edit the router-internal.yaml file:

      # cat router-internal.yaml
      apiVersion: v1
      items:
      - apiVersion: operator.openshift.io/v1
        kind: IngressController
        metadata:
          name: sharded
          namespace: openshift-ingress-operator
        spec:
          domain: <apps-sharded.basedomain.example.net>
          nodePlacement:
            nodeSelector:
              matchLabels:
                node-role.kubernetes.io/worker: ""
          namespaceSelector:
            matchLabels:
              type: sharded
        status: {}
      kind: List
      metadata:
        resourceVersion: ""
        selfLink: ""

  2. Apply the Ingress Controller router-internal.yaml file:
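
      $ oc apply -f router-internal.yaml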

    The Ingress Controller selects routes in any namespace that has the label type: sharded, as selected by the namespace selector.



Configure ingress cluster traffic using a load balancer

OpenShift provides methods for communicating from outside the cluster with services running in the cluster. This method uses a load balancer.


Use a load balancer to get traffic into the cluster

If we do not need a specific external IP address, we can configure a load balancer service to allow external access to an OpenShift cluster.

A load balancer service allocates a unique IP. The load balancer has a single edge router IP, which can be a virtual IP (VIP), but is still a single machine for initial load balancing.

If a pool is configured, it is done at the infrastructure level, not by a cluster administrator.

The procedures in this section require prerequisites performed by the cluster administrator.

Prerequisites

Before starting:

  • Set up the external port to the cluster networking environment so that requests can reach the cluster.

  • Make sure there is at least one user with cluster admin role. To add this role to a user, run:
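
      $ oc adm policy add-cluster-role-to-user cluster-admin <username>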

  • Have an OpenShift cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic.


Create a project and service

If the project and service that we want to expose do not exist, first create the project, then the service. If the project and service already exist, go to the next step: Expose the service to create a route.

  1. Log on to OpenShift.

  2. Create a new project for your service:

    For example:
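
      $ oc new-project <project_name>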

  3. Create a service:

    For example:

      $ oc new-app \
          -e MYSQL_USER=admin \
          -e MYSQL_PASSWORD=redhat \
          -e MYSQL_DATABASE=mysqldb \
          registry.redhat.io/openshift3/mysql-55-rhel7

  4. See that the new service is created:

      $ oc get svc -n openshift-ingress
      NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
      router-default            LoadBalancer   172.30.16.119   52.230.228.163   80:30745/TCP,443:32561/TCP   2d6h
      router-internal-default   ClusterIP      172.30.101.15   <none>           80/TCP,443/TCP,1936/TCP      2d6h

    By default, the new service does not have an external IP address.


Expose the service by creating a route

We can expose the service as a route using the oc expose command.

To expose the service:

  1. Log on to OpenShift.

  2. Log on to the project where the service we want to expose is located.

  3. Expose the route:

      oc expose service <service-name>

    For example:

      oc expose service mysql-55-rhel7
      route "mysql-55-rhel7" exposed

  4. Use a tool, such as cURL, to make sure we can reach the service using the cluster IP address for the service:

      curl <cluster-ip>:<port>

    For example:

      curl 172.30.131.89:3306

    The examples in this section use a MySQL service, which requires a client application. If we get a string of characters with the Got packets out of order message, we are connected to the service.

    If we have a MySQL client, log in with the standard CLI command:

      $ mysql -h 172.30.131.89 -u admin -p
      Enter password:
      Welcome to the MariaDB monitor.  Commands end with ; or \g.
      
      MySQL [(none)]>


Create a load balancer service

  1. Log on to OpenShift.

  2. Switch to the project where the service we want to expose is located.

  3. On the master node, create a load balancer configuration file:

      apiVersion: v1
      kind: Service
      metadata:
        name: egress-2 1
      spec:
        ports:
        - name: db
          port: 3306 2
        loadBalancerIP:
        type: LoadBalancer 3
        selector:
          name: mysql 4

    ...where...

      1 Descriptive name for the load balancer service.

      2 Same port that the service we want to expose is listening on.

      3 LoadBalancer as the type.

      4 Name of the service.

  4. Save and exit the file.

  5. Create the service:

      oc create -f <file-name>

    For example:

      oc create -f mysql-lb.yaml

  6. View the new service:

      $ oc get svc -n openshift-ingress
      NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
      router-default            LoadBalancer   172.30.16.119   52.230.228.163   80:30745/TCP,443:32561/TCP   2d6h
      router-internal-default   ClusterIP      172.30.101.15   <none>           80/TCP,443/TCP,1936/TCP      2d6h
      

    The service has an external IP address automatically assigned if there is a cloud provider enabled.

  7. On the master, use a tool, such as cURL, to make sure we can reach the service using the public IP address:

      $ curl <public-ip>:<port>

    For example:

      $ curl 172.29.121.74:3306

    The examples in this section use a MySQL service, which requires a client application. If we get a string of characters with the Got packets out of order message, we are connected to the service.

    If we have a MySQL client, log in with the standard CLI command:

      $ mysql -h 172.30.131.89 -u admin -p
      Enter password:
      Welcome to the MariaDB monitor.  Commands end with ; or \g.
      
      MySQL [(none)]>


Configure ingress cluster traffic using a service external IP

OpenShift provides methods for communicating from outside the cluster with services running in the cluster. This method uses a service external IP.

Use a service external IP to get traffic into the cluster

One method to expose a service is to assign an external IP address directly to the service we want to make accessible from outside the cluster.

The external IP address that you use must be provisioned on our infrastructure platform and attached to a cluster node.

With an external IP on the service, OpenShift sets up NAT rules to allow traffic arriving at any cluster node attached to that IP address to be sent to one of the internal pods. This is similar to the internal service IP addresses, but the external IP tells OpenShift that this service should also be exposed externally at the given IP. The administrator must assign the IP address to a host (node) interface on one of the nodes in the cluster. Alternatively, the address can be used as a virtual IP (VIP).

These IPs are not managed by OpenShift and administrators are responsible for ensuring that traffic arrives at a node with this IP.
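
For example, a minimal sketch of a Service manifest that attaches an external IP directly to a service (the service name, selector, port, and IP address are illustrative and must match your environment):

    apiVersion: v1
    kind: Service
    metadata:
      name: mysql-external
    spec:
      selector:
        name: mysql
      ports:
      - name: db
        port: 3306
      externalIPs:
      - 192.168.132.253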

The procedures in this section require prerequisites performed by the cluster administrator.

Prerequisites

Before starting:

  • Set up the external port to the cluster networking environment so that requests can reach the cluster.

  • Make sure there is at least one user with cluster admin role. To add this role to a user, run:
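
      $ oc adm policy add-cluster-role-to-user cluster-admin <username>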

  • Have an OpenShift cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic.


Create a project and service

If the project and service that we want to expose do not exist, first create the project, then the service. If the project and service already exist, go to the next step: Expose the service to create a route.

  1. Log on to OpenShift.

  2. Create a new project for your service:

    For example:
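
      $ oc new-project <project_name>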

  3. Create a service:

    For example:

      $ oc new-app \
          -e MYSQL_USER=admin \
          -e MYSQL_PASSWORD=redhat \
          -e MYSQL_DATABASE=mysqldb \
          registry.redhat.io/openshift3/mysql-55-rhel7

  4. See that the new service is created:

      $ oc get svc -n openshift-ingress
      NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
      router-default            LoadBalancer   172.30.16.119   52.230.228.163   80:30745/TCP,443:32561/TCP   2d6h
      router-internal-default   ClusterIP      172.30.101.15   <none>           80/TCP,443/TCP,1936/TCP      2d6h

    By default, the new service does not have an external IP address.


Expose the service by creating a route

We can expose the service as a route using the oc expose command.

To expose the service:

  1. Log on to OpenShift.

  2. Log on to the project where the service we want to expose is located.

  3. Expose the route:

      oc expose service <service-name>

    For example:

      oc expose service mysql-55-rhel7
      route "mysql-55-rhel7" exposed

  4. Use a tool, such as cURL, to make sure we can reach the service using the cluster IP address for the service:

      curl <cluster-ip>:<port>

    For example:

      curl 172.30.131.89:3306

    The examples in this section use a MySQL service, which requires a client application. If we get a string of characters with the Got packets out of order message, we are connected to the service.

    If we have a MySQL client, log in with the standard CLI command:

      $ mysql -h 172.30.131.89 -u admin -p
      Enter password:
      Welcome to the MariaDB monitor.  Commands end with ; or \g.
      
      MySQL [(none)]>


Configure ingress cluster traffic using a NodePort

OpenShift provides methods for communicating from outside the cluster with services running in the cluster. This method uses a NodePort.


Use a NodePort to get traffic into the cluster

Use a NodePort-type Service resource to expose a service on a specific port on all nodes in the cluster. The port is specified in the Service resource's .spec.ports[*].nodePort field.

Using NodePorts requires additional port resources.

A node port exposes the service on a static port on the node IP address.

NodePorts are in the 30000-32767 range by default, which means a NodePort is unlikely to match a service's intended port. For example, port 8080 may be exposed as port 31020.

The administrator must ensure the external IPs are routed to the nodes.

NodePorts and external IPs are independent and both can be used concurrently.
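
For example, a minimal sketch of a NodePort Service manifest (the service name, selector, ports, and nodePort value are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: mysql-nodeport
    spec:
      type: NodePort
      selector:
        name: mysql
      ports:
      - name: db
        port: 3306
        targetPort: 3306
        nodePort: 30036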

The procedures in this section require prerequisites performed by the cluster administrator.

Prerequisites

Before starting:

  • Set up the external port to the cluster networking environment so that requests can reach the cluster.

  • Make sure there is at least one user with cluster admin role. To add this role to a user, run:
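
      $ oc adm policy add-cluster-role-to-user cluster-admin <username>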

  • Have an OpenShift cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic.


Create a project and service

If the project and service that we want to expose do not exist, first create the project, then the service. If the project and service already exist, go to the next step: Expose the service to create a route.

  1. Log on to OpenShift.

  2. Create a new project for your service:

    For example:
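
      $ oc new-project <project_name>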

  3. Create a service:

    For example:

      $ oc new-app \
          -e MYSQL_USER=admin \
          -e MYSQL_PASSWORD=redhat \
          -e MYSQL_DATABASE=mysqldb \
          registry.redhat.io/openshift3/mysql-55-rhel7

  4. See that the new service is created:

      $ oc get svc -n openshift-ingress
      NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
      router-default            LoadBalancer   172.30.16.119   52.230.228.163   80:30745/TCP,443:32561/TCP   2d6h
      router-internal-default   ClusterIP      172.30.101.15   <none>           80/TCP,443/TCP,1936/TCP      2d6h

    By default, the new service does not have an external IP address.


Expose the service by creating a route

We can expose the service as a route using the oc expose command.

To expose the service:

  1. Log on to OpenShift.

  2. Log on to the project where the service we want to expose is located.

  3. Expose the route:

      oc expose service <service-name>

    For example:

      oc expose service mysql-55-rhel7
      route "mysql-55-rhel7" exposed
      

  4. Use a tool, such as cURL, to make sure we can reach the service using the cluster IP address for the service:

      curl <cluster-ip>:<port>

    For example:

      curl 172.30.131.89:3306

    The examples in this section use a MySQL service, which requires a client application. If we get a string of characters with the Got packets out of order message, we are connected to the service.

    If we have a MySQL client, log in with the standard CLI command:

      $ mysql -h 172.30.131.89 -u admin -p
      Enter password:
      Welcome to the MariaDB monitor.  Commands end with ; or \g.
      
      MySQL [(none)]>
