
Istio can be configured to forbid routing to addresses unknown to the mesh. Normally, if an application attempts to open a connection to an address that is unknown to the mesh, Istio uses DNS to resolve the address and executes the request. With the global.outboundTrafficPolicy mode option set to REGISTRY_ONLY, Istio only allows connections to known addresses, that is, addresses for which a ServiceEntry is defined.

You can set outboundTrafficPolicy in OpenShift ServiceMesh by adding the following to ServiceMeshControlPlane:

spec:
  ...
  proxy:
    networking:
      trafficControl:
        outbound:
          policy: REGISTRY_ONLY
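
With REGISTRY_ONLY in place, each external host has to be allow-listed explicitly. As a minimal sketch (the name and the host api.example.com are placeholders), a ServiceEntry for one external HTTPS endpoint could look like:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-api          # hypothetical name
spec:
  hosts:
  - api.example.com           # placeholder external host
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS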

Ever needed an option to connect to multiple OpenShift/Kubernetes clusters at the same time? You can do it from multiple terminal sessions, each pointing at its own KUBECONFIG:

Terminal 1

export KUBECONFIG=~/.kube/cluster1
kubectl login ...

Terminal 2

export KUBECONFIG=~/.kube/cluster2
kubectl login ...
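
A related trick, not required for the two-terminal approach above: point KUBECONFIG at both files in a single session and switch between contexts. The context names depend on your kubeconfig files:

export KUBECONFIG=~/.kube/cluster1:~/.kube/cluster2
kubectl config get-contexts
kubectl config use-context cluster1-context   # example context name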

endpointPublishingStrategy is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems.

If not set, the default value is based on infrastructure.config.openshift.io/cluster .status.platform:

  • Azure: LoadBalancerService (with external scope)
  • ROKS on Satellite: LoadBalancerService (with external scope)

To view the current endpointPublishingStrategy:

oc -n openshift-ingress-operator get ingresscontroller default -o jsonpath='{.status.endpointPublishingStrategy}'
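
For illustration, a sketch of a secondary IngressController that sets the strategy explicitly; the controller name, domain, and internal scope here are placeholder choices, not values from the note above:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal                        # hypothetical controller name
  namespace: openshift-ingress-operator
spec:
  domain: apps-internal.example.com     # placeholder domain
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal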

ROKS clusters use RHEL 7.x hosts instead of the RHCOS hosts of a standard OCP install, and ROKS uses the Calico overlay network instead of the default OpenShiftSDN. These two differences cause trouble for OpenShift Container Native Virtualization (CNV) on ROKS.

[1] RHEL 7.x doesn't support q35 machine types, which is the machine type CNV supports by default. For CNV to work on ROKS, we need to use the legacy i440fx machine types. Support for legacy machine types is not enabled by default in CNV, so we need to enable it explicitly:

oc -n openshift-cnv edit cm kubevirt-config

# Add the following under data

emulated-machines: pc-q35*,pc-i440fx-*
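
Once legacy machine types are enabled, a VM can request one explicitly via domain.machine.type. A minimal sketch; the VM name is made up and the exact machine type version depends on the QEMU build on your hosts:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: legacy-vm                   # hypothetical name
spec:
  running: false
  template:
    spec:
      domain:
        machine:
          type: pc-i440fx-rhel7.6.0   # must match the pc-i440fx-* pattern above
        devices: {}
        resources:
          requests:
            memory: 1Gi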

Ever needed an option to run oc or kubectl commands from within a pod in the cluster, with proper permissions and without hard-coding your (short-lived) token? With the right RBAC, you can authenticate oc/kubectl with your service account token. The token is automatically mounted in the pod together with the CA cert, so you can log in like this:

oc login --token="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  --server='https://kubernetes.default' \
  --certificate-authority='/var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
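
What "the right RBAC" means depends on what the pod should be allowed to do. A minimal sketch granting the pod's service account read access to pods; the namespace, names, and verbs are placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader          # hypothetical name
  namespace: my-namespace   # placeholder namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: default             # the pod's service account
  namespace: my-namespace
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io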

Another option: run kubectl without logging in at all. When kubectl finds no kubeconfig inside a pod, it falls back to the in-cluster configuration and authenticates with the same mounted service account token.

  • Show container runtime
oc get no -o custom-columns=NAME:.metadata.name,CONTAINER-RUNTIME:.status.nodeInfo.containerRuntimeVersion

OR

oc get no -o wide