The walkthrough uses the `console` route for a cluster named `cluster1`. The route `spec.host` is: `console-openshift-console.apps.cluster1.devcluster.openshift.com`.
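To see the host a route expects, you can read it straight from the route object. A minimal sketch, assuming the default console route name and namespace (`console` in `openshift-console`):

```
# Print the hostname the console route is serving (spec.host).
$ oc get route console -n openshift-console -o jsonpath='{.spec.host}'
```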
- An external client performs a DNS query for `console-openshift-console.apps.cluster1.devcluster.openshift.com`.
- `devcluster.openshift.com.` is a hosted zone on AWS Route 53. The zone contains a record set for the subdomain (`*.apps.cluster1.devcluster.openshift.com.`) with an alias A record pointing to the AWS ELB public DNS name. This name is created asynchronously when the router service is created:

  ```
  $ oc get svc/router-default -n openshift-ingress -o yaml
  <SNIP>
  status:
    loadBalancer:
      ingress:
      - hostname: a525f68fc2f4d11e9b0d706ddb7e0319-1530896078.us-west-2.elb.amazonaws.com
  ```
- One of the AWS DNS servers in the `devcluster.openshift.com.` hosted zone resolves the query to the ELB public IP. This is the destination IP the client uses.
- The client sends an HTTP/HTTPS request to the destination IP.
- The AWS ELB is configured to listen on TCP 80/443, so it accepts the request, load balances among instances with an `InService` status, and forwards the request to one of them. The source IP/port is still the original client's, the destination IP is the instance's private IP (the Kubernetes worker node's `INTERNAL-IP`), and the destination port is the HTTP/HTTPS `nodePort` used by the router service.
- The request is received by the node and goes through iptables (kube-proxy backend) NAT'ing. Since the router service is configured with `externalTrafficPolicy: Local`, kube-proxy only load balances to router pods local to the node. Since only 1 router pod exists on the node, kube-proxy changes the source IP/port to TODO: look at iptables nat table, and the destination IP:port to the router pod's IP:port. (Commands for inspecting this path are sketched after this list.)
- The request is forwarded to the router pod over the container network on the worker node.
- The router has been instructed to
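The following is a minimal sketch of commands for tracing the path described above on a live cluster. The route host, service name, and namespace come from this walkthrough; everything else (node addresses, rule contents) is cluster-specific:

```
# Resolve the route host; the answer should follow the Route 53 alias
# record to the router ELB's public IP(s).
$ dig +short console-openshift-console.apps.cluster1.devcluster.openshift.com

# The http/https nodePorts the ELB forwards to on each instance.
$ oc -n openshift-ingress get svc router-default -o jsonpath='{.spec.ports[*].nodePort}'

# Worker node INTERNAL-IP values (the ELB backend instance IPs).
$ oc get nodes -o wide

# Which node each router pod runs on; with externalTrafficPolicy: Local,
# only those nodes accept traffic on the nodePorts.
$ oc -n openshift-ingress get pods -o wide

# On a worker node, dump the kube-proxy NAT rules for the router service;
# kube-proxy tags its rules with the namespace/service name in a comment.
$ iptables-save -t nat | grep 'openshift-ingress/router-default'
```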
- HAProxy is the default router implementation.
- The AWS ELB listens on TCP, not HTTP, HTTPS, or TLS.
- The AWS ELB is configured with 3 backend instances, but only 1 has an active status.
- The AWS ELB is a "classic" load balancer.
- If the client sends an HTTP request to the ELB public IP, the server responds with the
- The router svc is configured with `healthCheckNodePort: 30110` and `externalTrafficPolicy: Local` (a sketch for querying the health check endpoint follows this list).
- The AWS ELB uses `HTTP:30110/healthz` as the health check target for each instance in the load balancer. See the official docs for more details.
- AWS ELB uses `HTTP:30110/healthz` for instance health checking and sets the instance Status accordingly. `30110` is the value of the router svc `healthCheckNodePort` field. See the Notes section for more details.
- Only 1 router is being deployed for the cluster.
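As a rough sketch of the health check behavior described above, the commands below read `healthCheckNodePort` from the service and query the endpoint kube-proxy serves on that port; the node IP shown is a placeholder to replace with a worker's `INTERNAL-IP`:

```
# Confirm the health check nodePort assigned to the router service.
$ oc -n openshift-ingress get svc router-default -o jsonpath='{.spec.healthCheckNodePort}'

# Query kube-proxy's healthz endpoint on a worker node (placeholder IP).
# A node running a router pod returns 200; a node with no local router
# endpoint returns 503, which is why only 1 of the 3 ELB backends is InService.
$ curl -si http://10.0.1.23:30110/healthz
```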