ROKS clusters use RHEL 7.x hosts instead of the RHCOS hosts used in a standard OCP install. ROKS also uses the Calico overlay network instead of the default OpenShiftSDN. These two differences cause trouble for OpenShift Container Native Virtualization (CNV) on ROKS.

[1] RHEL 7.x does not support the q35 machine type, which is the machine type CNV supports by default. For CNV to work on ROKS, we need to use the legacy i440fx machine type. Support for legacy machine types is not enabled by default in CNV, so we need to enable it explicitly.

oc -n openshift-cnv edit cm kubevirt-config

# Add the following under data

emulated-machines: pc-q35*,pc-i440fx-*
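
For reference, the data section of the edited ConfigMap would then look roughly like this (a sketch; leave any other keys already present under data untouched):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-config
  namespace: openshift-cnv
data:
  emulated-machines: pc-q35*,pc-i440fx-*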

endpointPublishingStrategy is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems.

If not set, the default value is based on infrastructure.config.openshift.io/cluster .status.platform:

  • Azure: LoadBalancerService (with external scope)
  • ROKS on Satellite: LoadBalancerService (with external scope)

To view current endpointPublishingStrategy:
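
oc -n openshift-ingress-operator get IngressController default -o jsonpath='{.status.endpointPublishingStrategy}' | jq '.'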

Ever needed to connect to multiple OpenShift/Kubernetes clusters at the same time? You can do it from multiple terminal sessions by pointing each one at its own KUBECONFIG.

Terminal 1

export KUBECONFIG=~/.kube/cluster1
kubectl login ...

Terminal 2
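
export KUBECONFIG=~/.kube/cluster2
kubectl login ...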

Istio can be configured to forbid the routing of addresses unknown to the mesh. Normally, if an application attempts to open a connection to an address that is unknown to the mesh, Istio uses DNS to resolve the address and executes the request. With the global.outboundTrafficPolicy mode option set to REGISTRY_ONLY, we can configure Istio to only allow connections to known addresses (that is, addresses for which a ServiceEntry is defined).

You can set outboundTrafficPolicy in OpenShift ServiceMesh by adding the following to the ServiceMeshControlPlane:

spec:
...
  proxy:
    networking:
      trafficControl:
        outbound:
          policy: REGISTRY_ONLY
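
With REGISTRY_ONLY in effect, any external host your workloads still need must be declared to the mesh with a ServiceEntry. A minimal sketch (the name and host api.example.com are placeholders; on older Istio versions the API may be networking.istio.io/v1alpha3):

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.example.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS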

A ServiceMeshMember resource can be created in a namespace to join that namespace to the mesh. It is safer than editing the ServiceMeshMemberRoll in istio-system. The Service Mesh operator automatically adds the namespace to the default ServiceMeshMemberRoll and the corresponding policies when it sees a ServiceMeshMember in a namespace. Similarly, the namespace is removed from the mesh when the ServiceMeshMember resource is removed from the namespace. The ServiceMeshMember resource must be named default for this to work.

apiVersion: maistra.io/v1
kind: ServiceMeshMember
metadata:
  name: default
  namespace: <namespace>
spec:
  controlPlaneRef:
    name: basic            # name of your ServiceMeshControlPlane
    namespace: istio-system
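
To confirm the namespace was added, check the default ServiceMeshMemberRoll (assuming the control plane runs in istio-system):

oc -n istio-system get servicemeshmemberroll default -o yaml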

Enabling router access logging is useful in tracking down misconfigured routes or errors from upstream services. Router access logging is not enabled by default in OCP. You can enable it by adding the following to the default IngressController.

Warning: Enable access logging only for a limited time, as it will generate a large number of log entries.

# oc -n openshift-ingress-operator edit IngressController default

spec:
...
  logging:
    access:
      destination:
        type: Container
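
With the Container destination, the access log is written by a logs sidecar container in the router pods, so you can read it with (assuming the default router deployment name):

oc -n openshift-ingress logs deployment/router-default -c logs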

OpenShift supports wildcard routes, but they are not enabled by default. If you create a route for *.domain.com without enabling them, you will see a Rejected status for your route.

To enable wildcard routes, edit the default IngressController in the openshift-ingress-operator namespace and add the following to the spec.

# oc -n openshift-ingress-operator edit ingresscontroller default

spec:
...
  routeAdmission:
    wildcardPolicy: WildcardsAllowed
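
The route itself must also opt in with wildcardPolicy: Subdomain. A minimal sketch (the route, host, and service names are placeholders):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: wildcard-route
spec:
  host: any.domain.com
  wildcardPolicy: Subdomain
  to:
    kind: Service
    name: my-service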

Ingress Operator is an OpenShift component which enables external access to cluster services by configuring Ingress Controllers, which route traffic as specified by OpenShift Route and Kubernetes Ingress resources. In a new cluster, a default IngressController is automatically created to route all traffic. But you may need to split traffic across multiple routers based on traffic type (external vs. internal), namespace isolation, etc. That’s where Route Sharding comes in handy: you can create and configure multiple IngressController resources based on your traffic needs.

More details:

Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform cluster services. This Operator makes this possible by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing.
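
As an illustration, a second IngressController that only admits routes carrying a matching label might look like the following (the name, domain, and label are placeholders, not values from this cluster):

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal
  namespace: openshift-ingress-operator
spec:
  domain: internal.apps.example.com
  routeSelector:
    matchLabels:
      type: internal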

To find the details of the default IngressController instance:

oc -n openshift-ingress-operator get IngressController default -o jsonpath='{.spec}' | jq '.'

{
  "defaultCertificate": {
    "name": "xxx-0b75760e30ayyyf686044987e00a0-0000"
  },
  ...
}