
Route Walkthrough for OCP 4 on AWS

This walkthrough uses the console route of a cluster named cluster1. The route's spec.host is console-openshift-console.apps.cluster1.devcluster.openshift.com.
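For reference, the host can be read straight from the route object; a quick sketch, assuming the default console route name and namespace:

$ oc -n openshift-console get route console -o jsonpath='{.spec.host}'
console-openshift-console.apps.cluster1.devcluster.openshift.com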

  1. An external client performs a DNS query for console-openshift-console.apps.cluster1.devcluster.openshift.com (see the dig/curl sketch after this list).
  2. devcluster.openshift.com. is a hosted zone on AWS Route 53. The zone contains a record set for a subdomain (aacluster1-api.devcluster.openshift.com.) that is an alias A record pointing at the AWS ELB public DNS name. This name is created asynchronously when the router service is created:
    $ oc get svc/router-default -n openshift-ingress -o yaml
    <SNIP>
    status:
      loadBalancer:
        ingress:
        - hostname: a525f68fc2f4d11e9b0d706ddb7e0319-1530896078.us-west-2.elb.amazonaws.com
    
  3. One of the AWS DNS servers for the devcluster.openshift.com. hosted zone resolves the query to the ELB public IP. This is the destination IP the client uses.
  4. The client sends an http/https request to the dest ip.
  5. The aws elb is configured to listen on tcp 80/443, so it accepts the request, load balances among instances with an InService status, and forwards the request to one of them. The source ip/port is still the original client's, the dest ip is the instance's private ip (the k8s worker node's INTERNAL-IP), and the dest port is the http/https nodePort used by the router svc.
  6. The request is received by the node and goes through iptables (kube-proxy backend) NAT'ing. Since the router service is configured with externalTrafficPolicy: Local, kube-proxy only load balances to router pods local to that node. Since only 1 router pod exists on the node, kube-proxy rewrites the dest ip:port to the router pod's ip:port (src ip/port: TODO, check the iptables nat table).
  7. The request is forwarded to the router pod over the container network on the worker node.
  8. The router (HAProxy) has been configured, from the route, to match the request's Host header and proxy the request to one of the route's backend endpoints (the console pods).
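To trace steps 1-5 from outside the cluster, a hedged sketch (hostname from this walkthrough; exact output will vary):

$ dig +short console-openshift-console.apps.cluster1.devcluster.openshift.com
# expect the ELB public IP(s), returned via the Route 53 alias record
$ curl -kv https://console-openshift-console.apps.cluster1.devcluster.openshift.com
# the TCP connection lands on the ELB, then a node's https nodePort, then the router pod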

Notes:

  • HAProxy is the default router implementation.
  • The aws elb listeners use TCP, not HTTP, HTTPS, or TLS.
  • aws elb is config'd for 3 backends, but only 1 has an active status.
  • aws elb is a "classic" lb.
  • If the client sends an http request to the ELB public IP, the router responds with a redirect to https (the console route's insecure traffic policy is Redirect).
  • The router svc is configured with healthCheckNodePort: 30110 and externalTrafficPolicy: Local.
  • The aws elb health checks each instance with HTTP:30110/healthz and sets its Status accordingly; 30110 is the value of the router svc's healthCheckNodePort field (see the sketch after this list and the official docs for more details).
  • Only 1 router is being deployed for the cluster.
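To confirm the health check wiring above, a hedged sketch (assumes the aws CLI is configured for the cluster's account; <elb-name> is a placeholder for the classic ELB created for router-default):

$ oc -n openshift-ingress get svc/router-default -o jsonpath='{.spec.externalTrafficPolicy} {.spec.healthCheckNodePort}'
Local 30110
$ aws elb describe-instance-health --load-balancer-name <elb-name>
# only the instance(s) running a router pod pass HTTP:30110/healthz and show InService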

Main

  1. Create a kubeconfig & client so the cluster ingress operator (cio) can talk to the cluster.
  2. Get the env vars used when the cio starts, e.g. WATCH_NAMESPACE.
  3. Get the cluster config info needed to create the DNS manager: the infra, cluster ingress config, dns, and cluster version resources (see the sketch after this list).
  4. Create the DNS manager using config from #3.
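These cluster-scoped config resources can be inspected directly (resource names as of OCP 4.x):

$ oc get infrastructures.config.openshift.io/cluster -o yaml
$ oc get ingresses.config.openshift.io/cluster -o yaml
$ oc get dnses.config.openshift.io/cluster -o yaml
$ oc get clusterversion/version -o yaml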

createDNSManager func

  1. Cloud Cred Operator creates a secret in ns WATCH_NAMESPACE named cloud-credentials. This secret/ns is referenced in the CredentialsRequest submitted by cio:
$ oc get credentialsrequest/openshift-ingress -n openshift-cloud-credential-operator -o yaml
apiVersion: cloudcredential.openshift.io/v1beta1
kind: CredentialsRequest
<SNIP>
  name: openshift-ingress
  namespace: openshift-cloud-credential-operator
<SNIP>
  secretRef:
    name: cloud-credentials
    namespace: openshift-ingress-operator

The referenced secret contains the aws access key id and secret access key:

$ oc get secret/cloud-credentials -n openshift-ingress-operator -o yaml
apiVersion: v1
data:
  aws_access_key_id: QUtJQUlKQlRQWUtSR0xZUEs0VVE=
  aws_secret_access_key: U21jWlZhNmFSTDJTK1lGRG5DUnE2VmFBMngvLzR1Mmw5Z0lmakF5RQ==
<SNIP>
  2. The aws_access_key_id and aws_secret_access_key values from the secret, the BaseDomain from the dns resource, and the ClusterID from the cluster version resource are used to create an awsdns.Config struct. The config is passed into the awsdns.NewManager func to create a new AWS DNS manager (awsdns.Manager). Note: awsdns.Manager implements the dns.Manager interface.

  3. The dns.Manager and an error value are returned to main().
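A hedged sketch of pulling the same inputs by hand (the base64 values decode to the raw AWS keys; baseDomain and clusterID live on the dns and clusterversion resources):

$ oc -n openshift-ingress-operator get secret/cloud-credentials -o jsonpath='{.data.aws_access_key_id}' | base64 -d
$ oc get dnses.config.openshift.io/cluster -o jsonpath='{.spec.baseDomain}'
$ oc get clusterversion/version -o jsonpath='{.spec.clusterID}'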

awsdns.NewManager

  1. NewManager uses the creds passed in from Config to create an aws client session.
  2. The session is used to create an ec2 metadata client session.
  3. The metadata client is used to discover the aws region where the cluster resides (see the sketch after this list).
  4. An awsdns.Manager is created.
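For reference, the region discovery in step 3 boils down to the EC2 instance metadata service; a sketch runnable from any node (the region is the availability zone minus its trailing letter):

$ curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone
us-west-2a   # -> region us-west-2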

If you want to run a local operator process against a remote cluster, first run WHAT=managed hack/uninstall.sh, which deletes everything BUT the resources in the /manifests dir.

Next, run a local process and trust that all the stuff from /manifests that the CVO would normally lay down (or that you installed yourself through release-local.sh) is present.

WATCH_NAMESPACE=openshift-ingress-operator IMAGE=openshift/origin-haproxy-router:v4.0 ./cluster-ingress-operator 

You can run a single test with: WATCH_NAMESPACE=openshift-ingress-operator go test -v -tags e2e -count 1 -run TestDeploymentStrategyForPublishingStrategy ./test/e2e/operator_test.go
