
Kong Gateway

Introducing Kong Gateway

Kong Gateway is a lightweight, fast, and flexible cloud-native API gateway. An API gateway is a reverse proxy that lets you manage, configure, and route requests to your APIs. Kong Gateway runs in front of any RESTful API and can be extended through modules and plugins. It’s designed to run on decentralized architectures, including hybrid-cloud and multi-cloud deployments. With Kong Gateway, users can:

  • Leverage workflow automation and modern GitOps practices

  • Decentralize applications/services and transition to microservices

  • Create a thriving API developer ecosystem

  • Proactively identify API-related anomalies and threats

  • Secure and govern APIs/services, and improve API visibility across the entire organization

Kong Gateway is available in two different packages: Open Source (OSS) and Enterprise.

Kong Gateway (OSS): An open-source package containing the basic API gateway functionality and open-source plugins. You can manage the open-source Gateway with Kong’s Admin API, Kong Manager Open Source, or with declarative configuration.

Kong Gateway Enterprise (available in Free or Enterprise mode): Kong’s API gateway with added functionality.

Kong Admin API

Kong Admin API provides a RESTful interface for administration and configuration of Gateway entities such as services, routes, plugins, consumers, and more. All of the tasks you can perform against the Gateway can be automated using the Kong Admin API.
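For example, assuming the Admin API is listening on the default port 8001, a quick way to see it in action is to query the standard node information and status endpoints (the port may differ in your setup):

curl -i http://localhost:8001/
curl -i http://localhost:8001/status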

Kong Manager

Kong Manager is the graphical user interface (GUI) for Kong Gateway. It uses the Kong Admin API under the hood to administer and control Kong Gateway. Here are some of the things you can do with Kong Manager:

  • Create new routes and services

  • Activate or deactivate plugins with a couple of clicks

  • Group your teams, services, plugins, consumers, and everything else exactly how you want them


NOTE: This guide uses the open-source Kong Gateway (OSS) package only.

Installation Options


Installation With Linux Ubuntu (on local machine)

Installation

The quickest way to get started with Kong Gateway is using the install script:

bash <(curl -sS https://get.konghq.com/install) -p kong -v

This script detects your operating system and automatically installs the correct package. It also installs a PostgreSQL database and bootstraps Kong Gateway for you.

A few changes are needed in the kong.conf file to connect Kong Gateway to the local PostgreSQL instance and expose the Admin API and Kong Manager on localhost.

Steps:

sudo su

nano /etc/kong/kong.conf

Inside kong.conf, look for the following lines:

  • #admin_gui_api_url (uncomment this line and point the URL at the Admin API on localhost port 8001):

admin_gui_api_url = http://127.0.0.1:8001

  • #admin_listen = 0.0.0.0:0 reuseport backlog=16384, 127.0.0.1:8444 http2 ssl reuseport backlog=16384 (uncomment this line and change the Admin API to listen on port 8001):

admin_listen = 0.0.0.0:8001 reuseport backlog=16384, 127.0.0.1:8444 http2 ssl reuseport backlog=16384

Next, check the Datastore section in the same kong.conf file and make these changes:

  • #database = postgres (uncomment this line to set PostgreSQL as Kong's datastore):

database = postgres

  • #pg_host = 127.0.0.1 (uncomment this line)

  • #pg_port = 5432 (uncomment this line)

  • #pg_timeout = 5000 (uncomment this line)

Set the username, password, and database name for the PostgreSQL database in the same kong.conf file:

  • #pg_user (uncomment this line and set the username as needed):

pg_user = kong # Postgres user

  • #pg_password (uncomment this line and set the password as needed):

pg_password = kong # Postgres user's password

  • #pg_database (uncomment this line and set the database name as needed):

pg_database = kong # The database name to connect to
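Taken together, the relevant kong.conf lines after these edits should look roughly like the excerpt below (a sketch only; the kong/kong values are example credentials, so substitute your own):

admin_gui_api_url = http://127.0.0.1:8001
admin_listen = 0.0.0.0:8001 reuseport backlog=16384, 127.0.0.1:8444 http2 ssl reuseport backlog=16384
database = postgres
pg_host = 127.0.0.1
pg_port = 5432
pg_timeout = 5000
pg_user = kong
pg_password = kong
pg_database = kong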

Verify install

curl -i http://localhost:8001

Start Kong as a service from the terminal:

sudo su

kong start

To verify that Kong Manager has started, open this link in a browser:

http://localhost:8002

Kong Gateway is now installed and configured on your local machine. Enjoy 😃

Installation With Docker

Install Kong Gateway with a database in Docker

Set up a Kong Gateway container with a PostgreSQL database to store the Kong configuration.

Prepare the database

  1. Create a custom Docker network to allow the containers to discover and communicate with each other (you can name this network anything you want):

docker network create kong-net

  2. Start a PostgreSQL container:

docker run -d --name kong-database \
  --network=kong-net \
  -p 5432:5432 \
  -e "POSTGRES_USER=kong" \
  -e "POSTGRES_DB=kong" \
  -e "POSTGRES_PASSWORD=kong" \
  postgres:13

  3. Prepare the Kong database:

docker run --rm --network=kong-net \
  -e "KONG_DATABASE=postgres" \
  -e "KONG_PG_HOST=kong-database" \
  -e "KONG_PG_PASSWORD=kong" \
  -e "KONG_PASSWORD=test" \
  kong/kong-gateway:3.6.1.3 kong migrations bootstrap

Where:

  • KONG_DATABASE: Specifies the type of database that Kong is using.

  • KONG_PG_HOST: The name of the Postgres Docker container that is communicating over the kong-net network, from the previous step.

  • KONG_PG_PASSWORD: The password that you set when bringing up the Postgres container in the previous step.

  • KONG_PASSWORD (Enterprise only): The default password for the admin super user for Kong Gateway.

  • {IMAGE-NAME:TAG} kong migrations bootstrap: In order, this is the Kong Gateway image name and tag, followed by the command telling Kong to prepare the Postgres database.
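The guide jumps straight to verification here, but the Gateway container itself still has to be started and connected to the database (the cleanup commands later assume a container named kong-gateway). Below is a sketch along the lines of the standard Kong quickstart, reusing the same image tag and network as above; the log settings and port mappings are typical defaults and can be adjusted:

docker run -d --name kong-gateway \
  --network=kong-net \
  -e "KONG_DATABASE=postgres" \
  -e "KONG_PG_HOST=kong-database" \
  -e "KONG_PG_USER=kong" \
  -e "KONG_PG_PASSWORD=kong" \
  -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
  -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
  -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
  -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
  -e "KONG_ADMIN_LISTEN=0.0.0.0:8001" \
  -e "KONG_ADMIN_GUI_URL=http://localhost:8002" \
  -p 8000:8000 \
  -p 8443:8443 \
  -p 8001:8001 \
  -p 8002:8002 \
  kong/kong-gateway:3.6.1.3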

Verify your installation:

Access the /services endpoint using the Admin API:

curl -i -X GET --url http://localhost:8001/services

Verify that Kong Manager is running by opening the following URL in a browser:

http://localhost:8002

Kong Gateway is now installed and configured using Docker. Enjoy 😃

NOTE: For troubleshooting Docker problems, the following commands are useful:

  1. docker images (lists Docker images, so you can check that the Kong image is present)

  2. docker run <img_id> (creates and starts a container from the Kong image if one is not already running)

  3. docker ps (lists the currently running Docker containers)
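Another command that often helps when troubleshooting (assuming the Gateway container is named kong-gateway, as in the sketch above) is inspecting the container logs:

docker logs kong-gateway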

Install Kong Gateway in Docker in DB-less mode

Create a Docker network

  1. Run the following command:

docker network create kong-net

  2. Prepare your declarative configuration file in .yml or .json format, for example:

_format_version: "3.0"
_transform: true

services:
  - host: httpbin.org
    name: example_service
    port: 80
    protocol: http
    routes:
      - name: example_route
        paths:
          - /mock
        strip_path: true
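The step that actually starts the Gateway in DB-less mode is not shown above. Here is a sketch based on the standard quickstart, assuming the file above is saved as kong.yml in the current directory; the container name kong-dbless is just an example:

docker run -d --name kong-dbless \
  --network=kong-net \
  -v "$(pwd):/kong/declarative/" \
  -e "KONG_DATABASE=off" \
  -e "KONG_DECLARATIVE_CONFIG=/kong/declarative/kong.yml" \
  -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
  -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
  -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
  -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
  -e "KONG_ADMIN_LISTEN=0.0.0.0:8001" \
  -p 8000:8000 \
  -p 8001:8001 \
  kong/kong-gateway:3.6.1.3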

Verify that Kong Gateway is running:

curl -i http://localhost:8001

Clean up containers

If you’re done testing Kong Gateway and no longer need the containers, you can clean them up using the following commands:

docker kill kong-gateway

docker kill kong-database

docker container rm kong-gateway

docker container rm kong-database

docker network rm kong-net

Get Started With Kong

Services and Routes


What is a service

In Kong Gateway, a service is an abstraction of an existing upstream application. Services can store collections of objects like plugin configurations, and policies, and they can be associated with routes. When defining a service, the administrator provides a name and the upstream application connection information.

What is a route

A route is a path to a resource within an upstream application. Routes are added to services to allow access to the underlying application. In Kong Gateway, routes typically map to endpoints that are exposed through the Kong Gateway application. Routes can also define rules that match requests to associated services. Because of this, one route can reference multiple endpoints. A basic route should have a name, path or paths, and reference an existing service.

You can also configure routes with:

  • Protocols: The protocol used to communicate with the upstream application

  • Hosts: Lists of domains that match a route

  • Methods: HTTP methods that match a route

  • Headers: Lists of values that are expected in the header of a request

  • Redirect status codes: HTTPS status codes

  • Tags: Optional set of strings to group routes with

Managing services

  1. Creating services -

To add a new service, send a POST request to Kong Gateway’s Admin API /services route:

curl -i -s -X POST http://localhost:8001/services \
  --data name=example_service \
  --data url='http://httpbin.org'

  2. Viewing service configuration -

To view the current state of a service, make a GET request to the service URL

curl -X GET http://localhost:8001/services/example_service

  3. Updating services -

To dynamically change the service retries setting from 5 (the default) to 6, send this PATCH request:

curl --request PATCH \
  --url localhost:8001/services/example_service \
  --data retries=6

  4. Listing services -

You can list all current services by sending a GET request to the base /services URL.

curl -X GET http://localhost:8001/services

To view these services in Kong Manager, open http://localhost:8002/services in a browser.

Managing routes

  1. Creating routes -

Routes define how requests are proxied by Kong Gateway. You can create a route associated with a specific service by sending a POST request to the service URL. Configure a new route on the /mock path to direct traffic to the example_service service created earlier:

curl -i -X POST http://localhost:8001/services/example_service/routes \
  --data 'paths[]=/mock' \
  --data name=example_route

  2. Viewing route configuration -

To view the current state of the example_route route, make a GET request to the route URL:

curl -X GET http://localhost:8001/services/example_service/routes/example_route

  3. Updating routes -

Like services, routes can be updated dynamically by sending a PATCH request to the route URL. Tags are an optional set of strings that can be associated with the route for grouping and filtering. You can assign tags by sending a PATCH request to the services endpoint and specifying a route.

Update the route by assigning it a tag with the value tutorial:

curl --request PATCH \
  --url localhost:8001/services/example_service/routes/example_route \
  --data tags="tutorial"

  4. Listing routes -

The Admin API also supports the listing of all routes currently configured:

curl http://localhost:8001/routes

To view these routes in Kong Manager, open http://localhost:8002/default/routes in a browser.
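At this point you can also confirm that the new route proxies traffic to the upstream. Assuming the Gateway's proxy is listening on the default port 8000, a request to the /mock path should return a response from httpbin.org:

curl -i http://localhost:8000/mock/anything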

Rate Limiting

Rate limiting is used to control the rate of requests sent to an upstream service. It can be used to prevent DoS attacks, limit web scraping, and other forms of overuse. Kong Gateway imposes rate limits on clients through the use of the Rate Limiting plugin. When rate limiting is enabled, clients are restricted in the number of requests that can be made in a configurable period of time. The plugin supports identifying clients as consumers or by the client IP address of the requests.

Global rate limiting

  1. Enable rate limiting -

The rate limiting plugin is installed by default on Kong Gateway, and can be enabled by sending a POST request to the plugins object on the Admin API:

curl -i -X POST http://localhost:8001/plugins \
  --data name=rate-limiting \
  --data config.minute=5 \
  --data config.policy=local

  2. Validate -

After configuring rate limiting, you can verify that it was configured correctly and is working by sending more requests than allowed in the configured time limit.

Run the following command to quickly send 6 mock requests:

for _ in {1..6}; do curl -s -i localhost:8000/mock/anything; echo; sleep 1; done

Alternatively, open the following URL in a browser and refresh it more than 6 times within a minute:

http://localhost:8000/mock/anything

Once the limit of 5 requests per minute is exceeded, Kong Gateway responds with HTTP status 429.

  3. Service-level rate limiting -

The Rate Limiting plugin can be enabled for specific services. The request is the same as above, but posted to the service URL:

curl -X POST http://localhost:8001/services/example_service/plugins \
  --data "name=rate-limiting" \
  --data config.minute=5 \
  --data config.policy=local

  4. Route-level rate limiting -

The Rate Limiting plugin can be enabled for specific routes. The request is the same as above, but posted to the route URL:

curl -X POST http://localhost:8001/routes/example_route/plugins \
  --data "name=rate-limiting" \
  --data config.minute=5 \
  --data config.policy=local

  5. Consumer-level rate limiting -

In Kong Gateway, consumers are an abstraction that defines a user of a service. Consumer-level rate limiting can be used to limit request rates per consumer.

  • Create a consumer:

curl -X POST http://localhost:8001/consumers/ \
  --data username=jsmith

  • Enable rate limiting for the consumer: Using the consumer username, enable rate limiting for all routes and services for the jsmith consumer:

curl -X POST http://localhost:8001/plugins \
  --data "name=rate-limiting" \
  --data "consumer.username=jsmith" \
  --data "config.second=5"

Proxy Caching

The Proxy Cache plugin accelerates performance by caching responses based on configurable response codes, content types, and request methods. When caching is enabled, upstream services are not bogged down with repetitive requests, because Kong Gateway responds on their behalf with cached results. Caching can be enabled on specific Kong Gateway objects or for all requests globally.

Global proxy caching

  1. Enable proxy caching: The Proxy Cache plugin is installed by default on Kong Gateway, and can be enabled by sending a POST request to the plugins object on the Admin API:

curl -i -X POST http://localhost:8001/plugins \
  --data "name=proxy-cache" \
  --data "config.request_method=GET" \
  --data "config.response_code=200" \
  --data "config.content_type=application/json" \
  --data "config.cache_ttl=30" \
  --data "config.strategy=memory"

  2. Validate: You can check that the Proxy Cache plugin is working by sending GET requests and examining the returned headers. In the services and routes section of this guide, you set up a /mock route and service that can help you see proxy caching in action.

curl -i -s -XGET http://localhost:8000/mock/anything | grep X-Cache

On the initial request, there should be no cached responses, and the headers will indicate this with X-Cache-Status: Miss.

Within 30 seconds of the initial request, repeat the command to send an identical request and the headers will indicate a cache Hit.
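A quick way to see both states in one go (a small convenience loop, not part of the original guide) is to send the same request twice and print only the cache status header:

for _ in {1..2}; do curl -s -i localhost:8000/mock/anything | grep X-Cache-Status; sleep 1; done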

Entity-level proxy caching

  1. Service-level: The Proxy Cache plugin can be enabled for specific services. The request is the same as above, but the request is sent to the service URL:

curl -X POST http://localhost:8001/services/example_service/plugins \
  --data "name=proxy-cache" \
  --data "config.request_method=GET" \
  --data "config.response_code=200" \
  --data "config.content_type=application/json" \
  --data "config.cache_ttl=30" \
  --data "config.strategy=memory"

  2. Route-level: The Proxy Cache plugin can be enabled for specific routes. The request is the same as above, but the request is sent to the route URL:

curl -X POST http://localhost:8001/routes/example_route/plugins \
  --data "name=proxy-cache" \
  --data "config.request_method=GET" \
  --data "config.response_code=200" \
  --data "config.content_type=application/json" \
  --data "config.cache_ttl=30" \
  --data "config.strategy=memory"

  3. Consumer-level: In Kong Gateway, consumers are an abstraction that defines a user of a service. Consumer-level proxy caching can be used to cache responses per consumer.

  • Create a consumer:

curl -X POST http://localhost:8001/consumers/ \
  --data username=sasha

  • Enable caching for the consumer:

curl -X POST http://localhost:8001/consumers/sasha/plugins \
  --data "name=proxy-cache" \
  --data "config.request_method=GET" \
  --data "config.response_code=200" \
  --data "config.content_type=application/json" \
  --data "config.cache_ttl=30" \
  --data "config.strategy=memory"

Key Authentication

Authentication is the process of verifying that a requester has permissions to access a resource. As its name implies, API gateway authentication authenticates the flow of data to and from your upstream services.

Kong Gateway has a library of plugins that support the most widely used methods of API gateway authentication.

Common authentication methods include:

  • Key Authentication

  • Basic Authentication

  • OAuth 2.0 Authentication

  • LDAP Authentication Advanced

  • OpenID Connect

Set up consumers and keys

  1. Create a new consumer: Create a new consumer with the username luka:

curl -i -X POST http://localhost:8001/consumers/ \
  --data username=luka

  2. Assign the consumer a key: Once provisioned, call the Admin API to assign a key for the new consumer. For this tutorial, set the key value to top-secret-key:

curl -i -X POST http://localhost:8001/consumers/luka/key-auth \
  --data key=top-secret-key

Global key authentication

  1. Enable key authentication: The Key Authentication plugin is installed by default on Kong Gateway and can be enabled by sending a POST request to the plugins object on the Admin API:

curl -X POST http://localhost:8001/plugins/ \
  --data "name=key-auth" \
  --data "config.key_names=apikey"

  2. Send an unauthenticated request: Try to access the service without providing the key:

curl -i http://localhost:8000/mock/anything

  3. Send the wrong key: Try to access the service with the wrong key:

curl -i http://localhost:8000/mock/anything \
  -H 'apikey:bad-key'

  4. Send a valid request: Send a request with the valid key in the apikey header:

curl -i http://localhost:8000/mock/anything \
  -H 'apikey:top-secret-key'
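To compare the three cases at a glance, the same requests can be sent with curl printing only the HTTP status code. With the plugin defaults, the first two are expected to be rejected with 401 and the last to succeed with 200, though exact codes can vary by Kong version:

curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8000/mock/anything
curl -s -o /dev/null -w '%{http_code}\n' -H 'apikey:bad-key' http://localhost:8000/mock/anything
curl -s -o /dev/null -w '%{http_code}\n' -H 'apikey:top-secret-key' http://localhost:8000/mock/anything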

Service based key authentication

The Key Authentication plugin can be enabled for specific services. The request is the same as above, but the POST request is sent to the service URL:

curl -X POST http://localhost:8001/services/example_service/plugins \
  --data name=key-auth

Route based key authentication

The Key Authentication plugin can be enabled for specific routes. The request is the same as above, but the POST request is sent to the route URL:

curl -X POST http://localhost:8001/routes/example_route/plugins \
  --data name=key-auth

Load Balancing

Load balancing is a method of distributing API request traffic across multiple upstream services. Load balancing improves overall system responsiveness and reduces failures by preventing overloading of individual resources.


Steps to enable load balancing

  1. Create an upstream: Use the Admin API to create an upstream named example_upstream:

curl -X POST http://localhost:8001/upstreams \
  --data name=example_upstream

  2. Create upstream targets: Create two targets for example_upstream. Each request creates a new target, and sets the backend service connection endpoint:

curl -X POST http://localhost:8001/upstreams/example_upstream/targets \
  --data target='httpbun.com:80'

curl -X POST http://localhost:8001/upstreams/example_upstream/targets \
  --data target='httpbin.org:80'

  3. Update the service: In the [services and routes](https://docs.konghq.com/gateway/latest/get-started/services-and-routes/) section of this guide, you created example_service, which pointed to an explicit host, http://httpbin.org. Now you'll modify that service to point to the upstream instead:

curl -X PATCH http://localhost:8001/services/example_service \
  --data host='example_upstream'

Validate that the upstream you configured is working by visiting the route http://localhost:8000/mock using a web browser or the CLI.
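From the CLI, for example, a small loop (not part of the original guide, and assuming the default proxy port 8000) prints the status codes of several requests, which are now balanced across the two targets:

for _ in {1..4}; do curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8000/mock; done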

Let's create an application that will use the Kong Gateway API to access services.


Step 1: Create a Flask application containing the access route and JWT token logic

Code:

from flask import Flask, jsonify, request
import jwt

app = Flask(__name__)
app.config['SECRET_KEY'] = 'de07012a76fc1839675f68c3e4f348d9579395094ed4b79826b56e08998f6001'

# Authentication endpoint
@app.route('/login', methods=['POST'])
def login():
    # Assuming you have a user object or username/password verification logic here
    user = {'username': 'pratik'}

    # Generate JWT token (PyJWT >= 2.0 returns a str; older versions return bytes)
    token = jwt.encode({'username': user['username']}, app.config['SECRET_KEY'], algorithm='HS256')
    if isinstance(token, bytes):
        token = token.decode('utf-8')

    # Return the token
    return jsonify({'token': token}), 200

# Protected endpoint
@app.route('/get_tea', methods=['GET'])
def get_tea():
    # Get token from request headers
    token = request.headers.get('Authorization')
    if not token:
        return jsonify({'message': 'Token is missing'}), 401

    # Verify the token
    try:
        decoded_token = jwt.decode(token.split(' ')[1], app.config['SECRET_KEY'], algorithms=['HS256'])
        username = decoded_token['username']
        # Your authentication logic here
        return jsonify({'message': 'Authenticated'}), 200
    except jwt.ExpiredSignatureError:
        return jsonify({'message': 'Token has expired'}), 401

if __name__ == '__main__':
    app.run(debug=True, port=5001)

NOTE: Generate your own JWT token using the command below and use it in place of the sample token shown in the later steps.

Command to generate a JWT token:

curl -X POST -H "Content-Type: application/json" -d '{"username": "pratik"}' http://localhost:5001/login

Step 2 - Now create a service and route in Kong for the application

# Create a service
curl -i -X POST http://localhost:8001/services \
  --data "name=flask-app" \
  --data "url=http://localhost:5001"

# Create a route
curl -i -X POST http://localhost:8001/services/flask-app/routes \
  --data "paths[]=/" \
  --data "paths[]=/get_tea" \
  --data "strip_path=true"

After running these commands, Kong will forward requests to http://localhost:8000/ and http://localhost:8000/get_tea to your Flask application running on port 5001.

Step 3 - Now enable the Key Authentication, JWT, and Rate Limiting plugins in Kong for the application

curl -i -X POST http://localhost:8001/services/flask-app/plugins \
  --data "name=key-auth"

curl -i -X POST http://localhost:8001/services/flask-app/plugins \
  --data "name=jwt" \
  --data "config.claims_to_verify=exp"

curl -i -X POST http://localhost:8001/routes/<route_id>/plugins \
  --data "name=rate-limiting" \
  --data "config.second=5"

Step 4 - Test the endpoints

curl -i -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VybmFtZSI6InByYXRpayJ9.syUkKDn6CV1olbN9kuU7K7w1n8LxJICH6wa4oADwNe4" http://localhost:5001/get_tea

Step 5 - Now test this endpoint in Postman

First, make sure you have snapd installed. If not, you can install it using:

sudo apt update

sudo apt install snapd

Once snapd is installed, you can install Postman using snap:

sudo snap install postman

  1. Open Postman.

  2. Create a new request by clicking on the "New" button in the top-left corner of the Postman window.

  3. In the request tab, enter the request URL http://localhost:5001/get_tea.

  4. Select the HTTP method as "GET" from the dropdown menu next to the URL input field.

  5. Add the Authorization header:

    • Click on the "Headers" tab.

    • In the Key field, enter "Authorization".

    • In the Value field, enter "Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VybmFtZSI6InByYXRpayJ9.syUkKDn6CV1olbN9kuU7K7w1n8LxJICH6wa4oADwNe4".

  6. Click on the "Send" button to send the request.

  7. Postman will display the response from your Flask application in the Response section below.

Screenshots of the Application

  • Screenshot of the Flask application code

  • Screenshot of the Flask application running on port 5001

  • Screenshot of the services created in Kong Manager

  • Screenshot of the Flask application service created in Kong Manager

  • Screenshot of the Flask application route created in Kong Manager (accessible by clicking on the routes displayed inside the service)

  • Screenshot of the Flask application plugins created in Kong Manager

  • Screenshot of the deployment of the Flask application in Kong

  • Screenshot of the Postman request
