OpenShift doesn't use Docker as its container runtime (in fact, Kubernetes is deprecating its Docker support in v1.20); it uses CRI-O instead. Even though no Docker tooling is available on cluster nodes, you can still debug containers running on those nodes with crictl.

Some examples:

# Get pods
crictl pods
# Pull an image
crictl pull <image>
# Stop running container(s)
crictl stop <container_id>
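
A typical debugging flow on a node might look like this (the container and pod names below are placeholders):

# Find the container you are interested in
crictl ps --name <container_name>
# Tail its logs
crictl logs -f <container_id>
# Open a shell inside it
crictl exec -it <container_id> /bin/sh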

It is good to know the difference between upstream Istio and OpenShift Service Mesh as we embrace Service Mesh for applications. OpenShift Service Mesh is based on the upstream Maistra project, and you can read more about the differences here: https://maistra.io/docs/ossm-vs-community.html#ossm-vs-istio_ossm-vs-istio

As there is not much detailed documentation on ServiceMeshControlPlane configs, it is useful to look at the source code of the Service Mesh operator if you want to dig deep: https://github.com/maistra/istio-operator

Another overview: https://medium.com/@tamber/service-mesh-101-istio-rh-ocp-service-mesh-overview-fc4c4b05da1d

RH documentation: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html-single/service_mesh/index

When you provision an OpenShift cluster, your cloud provider assigns a publicly accessible ingress domain to the cluster. For example, in Azure you get something like apps.xxx.eastus2.aroapp.io, and in ROKS on Satellite you get something like xxxx-0b75760e3yyy00a0-0000.upi.containers.appdomain.cloud. The cloud provider also sets up a wildcard SSL certificate for that domain. As long as you create routes/secure routes under that ingress domain, you will be fine most of the time. But for a customer application it may not be ideal to use the ingress domain provided by the cloud provider. If you want to use a custom domain for your routes, these are the sample steps you can follow.

  1. Register a domain with a domain registrar. For example purpose, say k8s4.dev registered at domains.google.com.

  2. Obtain a wildcard certificate for your domain *.k8s4.dev. This step is needed only if you want to create secured routes, which is the default nowadays.

If you want to use Let's Encrypt (a nonprofit Certificate Authority), you can obtain the wildcard certificate for free; note that Let's Encrypt issues wildcard certificates only through the DNS-01 challenge. A sample route using the custom domain and certificate is sketched below.
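
As an illustration (the service name, hostname, and certificate file names here are hypothetical), a secure edge route under the custom domain could be created like this:

# Create an edge-terminated route with a custom hostname and the wildcard cert
oc -n <ns> create route edge myapp \
  --service=myapp \
  --hostname=myapp.k8s4.dev \
  --cert=wildcard-k8s4-dev.crt \
  --key=wildcard-k8s4-dev.key

For the hostname to resolve, you also need a DNS record (for example a wildcard CNAME for *.k8s4.dev) pointing at the cluster's router load balancer.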

During the creation of a project or namespace, OpenShift assigns a User ID (UID) range, a supplemental group ID (GID) range, and unique SELinux MCS labels to the project or namespace. When a Pod is deployed into the namespace, by default, OpenShift will use the first UID and first GID from this range to run the Pod. Any attempt by a Pod definition to specify a UID outside the assigned range will fail and requires special privileges.
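
The assigned ranges are visible as annotations on the namespace; the values shown below are only examples of what you might see:

oc get namespace <ns> -o yaml | grep 'openshift.io/sa.scc'
# openshift.io/sa.scc.mcs: s0:c26,c5
# openshift.io/sa.scc.supplemental-groups: 1000650000/10000
# openshift.io/sa.scc.uid-range: 1000650000/10000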

In most scenarios there is no need for special privileges as long as the container image is built with the above security restrictions in mind. But if you are pulling in a third-party image that requires running with a specific UID, you can control the permissions and capabilities granted to a Pod using Security Context Constraints (SCC). Restrict the SCC to a specific ServiceAccount and use that ServiceAccount to run the pod.

To add anyuid SCC to a ServiceAccount:

oc -n <ns> adm policy add-scc-to-user anyuid -z <service_account>
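
A minimal sketch of wiring this together (the ServiceAccount and Deployment names are placeholders): create a dedicated ServiceAccount, grant it the SCC, then reference it from the workload so only that workload gets the extra privilege.

# Create a dedicated ServiceAccount and allow it to use anyuid
oc -n <ns> create serviceaccount myapp-sa
oc -n <ns> adm policy add-scc-to-user anyuid -z myapp-sa

# Point the Deployment at the ServiceAccount
oc -n <ns> patch deployment myapp --type=merge \
  -p '{"spec":{"template":{"spec":{"serviceAccountName":"myapp-sa"}}}}'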

OpenShift 4 is an operator-focused NoOps platform. Almost all services are managed by cluster operators, with the exception of a few services like crio and kubelet, which run as systemd services.

Systemd services are appropriate for services that you need to always come up on that particular system shortly after it starts.

  • The CRI-O container engine (crio), which runs and manages the containers
  • Kubelet (kubelet), which accepts requests for managing containers on the machine from master services

With everything else running as pods, you can monitor and debug them using traditional Kubernetes debugging steps.

If you want to check the logs of systemd services, you normally run journalctl, which means hopping onto the node with oc debug. But there is an easier way to check the logs of systemd services using the oc CLI, sketched below.
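
Assuming a reasonably recent oc client, node logs can be pulled straight through the API (the node name is a placeholder):

# Read kubelet logs from a specific node without oc debug / SSH
oc adm node-logs <node_name> -u kubelet

# Or read crio logs from all master nodes at once
oc adm node-logs --role=master -u crio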

Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform cluster services. This Operator makes this possible by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing.

To find the details of the default IngressController instance:

oc -n openshift-ingress-operator get IngressController default -o jsonpath='{.spec}' | jq '.'

{
  "defaultCertificate": {
    "name": "xxx-0b75760e30ayyyf686044987e00a0-0000"
  },
  ...
}

Ingress Operator is an OpenShift component which enables external access to cluster services by configuring Ingress Controllers, which route traffic as specified by OpenShift Route and Kubernetes Ingress resources. In a new cluster, a default IngressController is automatically created to route all traffic. But you may need to split the traffic across multiple routers based on traffic type (external vs internal), namespace isolation, etc. That's where route sharding comes in handy. You can create and configure multiple IngressController resources based on the traffic needs; a sample shard is sketched below.

More details:
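
As a sketch (the shard name, domain, and label are assumptions, not taken from the original), a second IngressController that only admits routes carrying a matching label might look like:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal
  namespace: openshift-ingress-operator
spec:
  domain: internal.k8s4.dev   # each shard needs its own domain
  routeSelector:
    matchLabels:
      router: internal        # only routes with this label are admitted by this shard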

OpenShift supports wildcard routes but it is not enabled by default. If you create a route for *.domain.com without enabling it, you will see a Rejected status for your route.

To enable wildcard routes, you need to edit the default IngressController in the openshift-ingress-operator namespace and add the following to the spec.

# oc -n openshift-ingress-operator edit ingresscontroller default

spec:
...
  routeAdmission:
    wildcardPolicy: WildcardsAllowed
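
The route itself also has to opt in with wildcardPolicy: Subdomain; a minimal sketch (the service name is a placeholder):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: wildcard-route
spec:
  host: wildcard.domain.com   # the router will match *.domain.com
  wildcardPolicy: Subdomain
  to:
    kind: Service
    name: my-service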

Enabling router access logging is useful in tracking down misconfigured routes or errors from upstream services. Router access logging is not enabled by default in OCP. You can enable it by adding the following to the default IngressController.

Warning: Enable access logging only for a limited time as it will generate quite a lot of log entries

# oc -n openshift-ingress-operator edit IngressController default

spec:
...
  logging:
    access:
      destination:
        type: Container
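
With the Container destination, the router pods run a sidecar container for the access log, so it can be tailed with something like:

# Tail the access log sidecar of the default router
oc -n openshift-ingress logs deployment/router-default -c logs -f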

A ServiceMeshMember resource can be created in a namespace to join that namespace to the mesh. It is safer than editing the ServiceMeshMemberRoll in istio-system. The Service Mesh operator will automatically add the namespace to the default ServiceMeshMemberRoll and create the corresponding policies when it sees a ServiceMeshMember in the namespace. Similarly, the namespace will be removed from the mesh when the ServiceMeshMember resource is removed from the namespace. The ServiceMeshMember resource must be named default for it to work.

apiVersion: maistra.io/v1
kind: ServiceMeshMember
metadata:
  name: default
  namespace: <namespace>
spec:
  controlPlaneRef:
    name: <control_plane_name>   # name of your ServiceMeshControlPlane
    namespace: istio-system
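
To verify the namespace actually joined the mesh, one option (assuming the control plane lives in istio-system) is to check the member roll status:

oc -n istio-system get servicemeshmemberroll default -o jsonpath='{.status.members}'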