API gateways are crucial components of microservice architectures.
The API gateway acts as a single entrypoint into a distributed system, providing a unified interface for clients who don't need to care (or know) that the response to their API call is aggregated from multiple microservices.
Diagram one - API Gateway
             /------------------------------\
             |           +--------+         |
             |           | micro  |         |
             |     /---->| service|         |
             |     |     |   A    |         |
             |     |     +---+----+         |
             |     |         |              |
             |     |         v              |
/--------\   |   +-+---------+--+           |
|        |   |   |              |           |
| Client +<----->+ API Gateway  |           |
|        |   |   |              |           |
\--------/   |   +-+---------+--+           |
             |     |         ^              |
             |     |         |              |
             |     |     +---+----+         |
             |     |     | micro  |         |
             |     \---->| service|         |
             |           |   B    |         |
             |           +--------+         |
             \------------------------------/
Common use-cases for API gateways include:
- Routing inbound requests to the appropriate microservice
- Presenting a unified interface to a distributed architecture, by aggregating responses from multiple backend services
- Transforming microservice responses into the format required by the caller
- Implementing non-functional/policy concerns such as authentication, logging, monitoring and observability, rate-limiting, IP filtering, and attack mitigation
- Facilitating deployment strategies such as blue/green, or canary releases
Providing these features at the API gateway can greatly simplify the development and maintenance of a microservice architecture by freeing up development teams to focus on the business logic of individual components.
Kubernetes is increasingly (and deservedly) the hosting platform of choice for many distributed architectures, and in this article I'm going to walk through setting up the open source Kong Ingress Controller as an API Gateway on a kubernetes cluster.
As a native kubernetes application, Kong is installed and managed in exactly the same way as any other kubernetes resource. It integrates well with other CNCF projects, and updates its routing configuration with zero downtime, in response to cluster events such as pod deployments. There's also a great plugin ecosystem, and native gRPC support.
To keep this article to a manageable size, I'm only going to cover a single, very simple use-case.
Diagram two - Kong foo/bar routing
                   kubernetes cluster
             /------------------------------\
             |           +--------+         |
             |     /foo  |  foo   |         |
             |     /---->| micro  |         |
             |     |  /--| service|         |
             |     |  |  +--------+         |
             |     |  v                     |
/--------\   |   +-+--+--------+            |
|        |   |   |             |            |
|        |  /foo |             |            |
|        +------>+             |            |
|        +<------+             |            |
| Client |   |   |    Kong     |            |
|        |  /bar |             |            |
|        +------>+             |            |
|        +<------+             |            |
\--------/   |   +-+--+--------+            |
             |     |  ^                     |
             |     |  |  +--------+         |
             |     |  \--|  bar   |         |
             |     \---->| micro  |         |
             |     /bar  | service|         |
             |           +--------+         |
             \------------------------------/
We're going to create a kubernetes cluster, deploy two dummy microservices "foo" and "bar", then install and configure Kong to route inbound calls to /foo to the foo microservice, and calls to /bar to the bar microservice.
This barely scratches the surface of what you can do with Kong, but it's a good starting point.
There are a few things you'll need to work through this article.
I'm going to create a "real" kubernetes cluster on DigitalOcean, because it's quick and easy, and I like to keep things as close to real world scenarios as possible. But you can use minikube or KinD if you want to do everything locally. You will need to fake a load-balancer though, either using minikube tunnel or by setting up a port forward to the API gateway.
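If you go the local route, something like the following should work once Kong is installed (we do that later in the article); the kong-proxy service and kong namespace come from the standard Kong install manifest:

$ minikube tunnel    # minikube only: run in a separate terminal, gives LoadBalancer services a real IP

$ kubectl -n kong port-forward service/kong-proxy 8080:80    # works on any local cluster

With the port-forward running, use localhost:8080 wherever I use the load-balancer IP address below.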
For DigitalOcean, you will need:
- A DigitalOcean account
- A DigitalOcean API token with read & write scopes
- The doctl command-line tool
To build and push docker images representing our microservices, you will need:
- docker
- An account on Docker Hub
This is optional, because you can deploy the images I've already created.
You will also need kubectl to access the kubernetes cluster.
After installing doctl, we need to authenticate using the DigitalOcean API token:
$ doctl auth init
...
Enter your access token: <-- paste your API token, when prompted
Validating token... OK
Now that you have authenticated doctl, you can create your kubernetes cluster with this command:
$ doctl kubernetes cluster create mycluster --size s-1vcpu-2gb --count 1
This spins up a kubernetes cluster on DigitalOcean, which will incur charges (approx $0.01/hour, at the time of writing) as long as it is running. Please remember to destroy any resources you create, when you have finished with them.
This creates a cluster in the New York datacentre with a single worker node of the smallest viable size: the smallest, simplest, and cheapest cluster to run. You can explore other options by running doctl kubernetes --help.
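If you'd rather run the cluster closer to home, doctl can list the available regions, node sizes, and kubernetes versions, and the create command accepts a --region flag. For example (lon1 is the London datacentre):

$ doctl kubernetes options regions
$ doctl kubernetes options sizes
$ doctl kubernetes cluster create mycluster --region lon1 --size s-1vcpu-2gb --count 1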
The command will take a couple of minutes to complete, and you should see output like this:
$ doctl kubernetes cluster create mycluster --size s-1vcpu-2gb --count 1
Notice: Cluster is provisioning, waiting for cluster to be running
....................................................
Notice: Cluster created, fetching credentials
Notice: Adding cluster credentials to kubeconfig file found in "/Users/david/.kube/config"
Notice: Setting current-context to do-nyc1-mycluster
ID                                      Name         Region    Version        Auto Upgrade    Status     Node Pools
4cf2159a-01c1-423c-907d-51f19c3f9a01    mycluster    nyc1      1.20.2-do.0    false           running    mycluster-default-pool
As you can see, cluster credentials and a context are automatically added to the ~/.kube/config file, so you should be able to access your cluster using kubectl:
$ kubectl get namespace
NAME              STATUS   AGE
default           Active   24m
kube-node-lease   Active   24m
kube-public       Active   24m
kube-system       Active   24m
To represent backend microservices, I'm going to use a trivial Python Flask application that returns a JSON string:
foo.py
from flask import Flask

app = Flask(__name__)

@app.route('/foo')
def hello():
    return '{"msg":"Hello from the foo microservice"}'

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')
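The bar microservice is identical apart from the route and the message, so (assuming the same pattern) bar.py looks like this:

bar.py

from flask import Flask

app = Flask(__name__)

@app.route('/bar')
def hello():
    return '{"msg":"Hello from the bar microservice"}'

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')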
This Dockerfile builds a docker image we can deploy:
Dockerfile
FROM python:3-alpine
WORKDIR /app
RUN echo "Flask==1.1.1" > requirements.txt
RUN pip install -r requirements.txt
COPY foo.py .
EXPOSE 5000
CMD ["python", "foo.py"]
This gist contains the files and a script to build docker images for the foo and bar microservices, and push them to Docker Hub as:
digitalronin/foo-microservice:0.1
digitalronin/bar-microservice:0.1
You don't have to build and push these images - you can just use the ones I've already created.
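If you do want to build and push your own, the commands look something like this, run once per microservice (substitute your own Docker Hub account for digitalronin, and authenticate with docker login first):

$ docker build -t digitalronin/foo-microservice:0.1 .
$ docker push digitalronin/foo-microservice:0.1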
For each microservice, we need a manifest which defines a Deployment and a Service, like this:
foo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: api
        image: digitalronin/foo-microservice:0.1
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: foo-service
  labels:
    app: foo-service
spec:
  ports:
  - port: 5000
    name: http
    targetPort: 5000
  selector:
    app: foo
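Because the bar manifest differs only in the name (every "foo" becomes "bar", including the image name), you could also generate it from foo.yaml with a simple substitution:

$ sed 's/foo/bar/g' foo.yaml > bar.yaml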
This gist has manifests for both microservices, which you can download and deploy to your cluster like this:
$ kubectl apply -f foo.yaml
$ kubectl apply -f bar.yaml
We can check that the microservices are running correctly using a port forward:
$ kubectl port-forward service/foo-service 5000:5000
Then in a different terminal:
$ curl http://localhost:5000/foo
{"msg":"Hello from the foo microservice"}
Ditto for bar.
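Spelled out, and assuming the second service is named bar-service (following the same pattern), that check is:

$ kubectl port-forward service/bar-service 5000:5000

Then, in a different terminal:

$ curl http://localhost:5000/bar
{"msg":"Hello from the bar microservice"}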
Now that we have our two microservices running in our kubernetes cluster, let's install Kong.
There are several options for this, which you will find in the documentation. I'm going to apply the manifest directly, like this:
$ kubectl create -f https://bit.ly/k4k8s
The last few lines of output should look like this:
...
service/kong-proxy created
service/kong-validation-webhook created
deployment.apps/ingress-kong created
You may get a couple of API deprecation warnings at this point, which you can ignore.
Installing Kong will create a DigitalOcean load-balancer. This is the internet-facing endpoint to which we will make API calls to access our microservices.
DigitalOcean load-balancers incur charges, so please remember to delete your load-balancer along with your cluster when you have finished.
Creating the load-balancer will take a minute or two. You can monitor its progress like this:
$ kubectl -n kong get service kong-proxy
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
kong-proxy   LoadBalancer   10.245.14.22   <pending>        80:32073/TCP,443:30537/TCP   71s
When the load-balancer has been created, the EXTERNAL-IP value will change from <pending> to a real IP address:
$ kubectl -n kong get service kong-proxy
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
kong-proxy   LoadBalancer   10.245.14.22   167.172.7.192    80:32073/TCP,443:30537/TCP   3m45s
For convenience, let's export that IP address as an environment variable:
$ export PROXY_IP=167.172.7.192 # <--- use your own EXTERNAL-IP address here
Now we can check that Kong is working:
$ curl $PROXY_IP
{"message":"no Route matched with those values"}
This is the correct response, because we haven't yet told Kong what to do with any API calls it receives.
To configure Kong to route API calls to the microservices, we use Ingress resources like this:
foo-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo
  namespace: default
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: foo-service
            port:
              number: 5000
This gist defines ingresses for both microservices. Download and apply them:
$ kubectl apply -f foo-ingress.yaml
$ kubectl apply -f bar-ingress.yaml
Now, Kong will route calls to /foo to the foo microservice, and calls to /bar to the bar microservice.
We can check this using curl:
$ curl $PROXY_IP/foo
{"msg":"Hello from the foo microservice"}
$ curl $PROXY_IP/bar
{"msg":"Hello from the bar microservice"}
In this article, we have:
- Deployed a kubernetes cluster on DigitalOcean
- Created docker images for two dummy microservices "foo" and "bar"
- Deployed the microservices to the kubernetes cluster
- Installed the Kong Ingress Controller
- Configured Kong to route API calls to the appropriate backend microservice
This is just one simple use of Kong, and I'd encourage you to have a look at the documentation and start to explore some of the more advanced features such as authentication or integration with cert-manager.
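As a taster, rate-limiting is one of the bundled plugins, and (going by the Kong Ingress Controller documentation) enabling it only takes one more kubernetes resource plus an annotation. Here's a minimal sketch, assuming the foo Ingress from earlier; the resource name rate-limit-5-per-minute is an arbitrary choice of mine:

rate-limit.yaml

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5-per-minute
  namespace: default
plugin: rate-limiting   # one of the plugins bundled with Kong
config:
  minute: 5             # allow at most 5 requests per minute
  policy: local         # count requests in-memory on each Kong node

After applying that with kubectl, you would attach the plugin by adding the annotation konghq.com/plugins: rate-limit-5-per-minute to the metadata of foo-ingress.yaml and re-applying it.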
The API gateway is a crucial part of a microservice architecture. The Kong Ingress Controller is very well suited to this role in a kubernetes cluster, and can be managed in exactly the same way as any other kubernetes resource.
Don't forget to destroy your kubernetes cluster when you have finished with it, so that you don't incur unnecessary charges:
$ kubectl delete -f https://bit.ly/k4k8s # <-- this will destroy the load-balancer
$ doctl kubernetes cluster delete mycluster
Warning: Are you sure you want to delete this Kubernetes cluster? (y/N) ? y
Notice: Cluster deleted, removing credentials
...
If you delete the cluster first, the load-balancer will be left behind. You can delete any leftover resources via the DigitalOcean web interface.
This article shows some additional configuration you can apply to avoid using a DigitalOcean load-balancer.