@metafeather
Created April 14, 2025
Serve Caddy v2 as an Ingress behind LB and Nginx with full TLS, HTTP/2 and gRPC support
{
	debug
	# Will issue SSL certs for domains whose external DNS points at the LB,
	# answering the HTTP-01 challenge at http://domain.org/.well-known/acme-challenge/
	# This means the Ingress must do ssl-passthrough but must NOT do an ssl-redirect
	email <redacted>
	acme_ca https://acme-v02.api.letsencrypt.org/directory
	# Staging CA for testing:
	# email <redacted>
	# acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
	log {
		format json {
			time_format "iso8601"
		}
		output stdout
	}
	# Store certs in a Postgres database to avoid PVC and S3 issues
	# https://github.com/yroc92/postgres-storage
	storage postgres {
		connection_string {$POSTGRES_CONN_STRING}?sslmode=disable
		disable_ddl {$POSTGRES_DISABLE_DDL}
	}
}
# Required: container healthcheck endpoint
:9999 {
	handle / {
		respond "OK"
	}
}
# SSL test sites, matching the example hosts in the Ingress below
http://ssl-test.api.cloud.metafeather.net {
	respond "No SSL" 200
}
https://ssl-test.api.cloud.metafeather.net {
	respond "With SSL" 200
}
# ConnectRPC example
connectrpc.api.cloud.metafeather.net {
	# NOTE: gRPC requires HTTP/2, which requires SSL/TLS
	# NOTE: CORS is implemented by ConnectRPC
	reverse_proxy https://staging-connectrpc-pcb2.encr.app {
		header_up Host {upstream_hostport}
	}
}
# NOTE(metafeather): for debugging ConnectRPC locally.
# We need to globally enable h2c in order to use gRPC without TLS.
# ref: https://caddyserver.com/docs/caddyfile/options#protocols
# ref: https://caddy.community/t/caddy-grpc-h2c-passthrough/11780
# servers :80 {
# 	protocols h1 h2c h2 h3
# }
# (grpc-localhost-proxy) {
# 	# buf curl can select the protocol to use
# 	# ref: https://kmcd.dev/posts/connectrpc/
# 	# @grpc header Content-Type application/grpc
# 	@grpc protocol grpc
# 	reverse_proxy @grpc {args.0} {
# 		# transport http {
# 		# 	versions h2c 2
# 		# }
# 		header_up Host {upstream_hostport}
# 	}
# }
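If the commented options above are enabled, a hypothetical local debugging Caddyfile could put them together like this. The backend port and site address are made up; `import` passes `localhost:50051` through to the snippet as `{args.0}`:

```
{
	servers :80 {
		protocols h1 h2c h2
	}
}

(grpc-localhost-proxy) {
	@grpc protocol grpc
	reverse_proxy @grpc {args.0} {
		header_up Host {upstream_hostport}
	}
}

http://localhost {
	import grpc-localhost-proxy localhost:50051
}
```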
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/instance: metafeather-caddy
    app.kubernetes.io/name: metafeather-caddy
    app.kubernetes.io/version: 2.9.1
  name: metafeather-caddy
  namespace: kustomize-me
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: metafeather-caddy
    app.kubernetes.io/name: metafeather-caddy
    app.kubernetes.io/version: 2.9.1
  name: metafeather-caddy
  namespace: kustomize-me
  annotations:
    tailscale.com/expose: "true"
    tailscale.com/hostname: metafeather-caddy-branch-dev
    tailscale.com/funnel: "true"
spec:
  ports:
    - name: health
      port: 9999
      protocol: TCP
      targetPort: health
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: h2
      port: 443
      protocol: TCP
      targetPort: h2
  selector:
    app.kubernetes.io/instance: metafeather-caddy
    app.kubernetes.io/name: metafeather-caddy
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/instance: metafeather-caddy
    app.kubernetes.io/name: metafeather-caddy
    app.kubernetes.io/version: 2.9.1
  name: metafeather-caddy
  namespace: kustomize-me
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: metafeather-caddy
      app.kubernetes.io/name: metafeather-caddy
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: metafeather-caddy
        app.kubernetes.io/name: metafeather-caddy
    spec:
      containers:
        - image: caddy:v2.9.1
          imagePullPolicy: Always
          livenessProbe:
            httpGet:
              path: /
              port: health
          name: caddy
          ports:
            - containerPort: 9999
              name: health
              protocol: TCP
            - containerPort: 80
              name: http
              protocol: TCP
            - containerPort: 443
              name: h2
              protocol: TCP
          envFrom:
            - secretRef:
                name: secrets-env
          resources:
            requests:
              cpu: 32m
              memory: 64Mi
          securityContext: {}
          volumeMounts:
            - mountPath: /etc/caddy/Caddyfile
              name: conf
              subPath: Caddyfile
            - name: sites-enabled
              mountPath: /etc/sites-enabled
              readOnly: true
      imagePullSecrets:
        - name: metafeather-platform
      securityContext: {}
      serviceAccountName: metafeather-caddy
      volumes:
        - name: conf
          configMap:
            name: caddy
        - name: sites-enabled
          configMap:
            name: caddy-sites-enabled
---
apiVersion: v1
data:
  Caddyfile: "import /etc/sites-enabled/*\n"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/instance: caddy
    app.kubernetes.io/name: caddy
    app.kubernetes.io/version: 2.9.1
  name: caddy
  namespace: kustomize-me
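The `secrets-env` Secret referenced by `envFrom` supplies `POSTGRES_CONN_STRING` and `POSTGRES_DISABLE_DDL`, which the Caddyfile's `storage postgres` block reads via `{$VAR}` placeholders. Caddy expands these much like shell parameter expansion; a sketch with a made-up value:

```shell
# Illustrative only: the real value comes from the secrets-env Secret.
POSTGRES_CONN_STRING='postgres://caddy:secret@db.internal:5432/caddy'
# The Caddyfile appends ?sslmode=disable after the placeholder expands:
echo "${POSTGRES_CONN_STRING}?sslmode=disable"
# → postgres://caddy:secret@db.internal:5432/caddy?sslmode=disable
```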
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: metafeather-caddy
  namespace: kustomize-me
  annotations:
    # Pass SSL, gRPC, etc. through Nginx to Caddy for the HTTP-01 challenge
    # ref: https://www.devgem.io/posts/exposing-well-known-resources-with-kubernetes-ingress-a-pathtype-dilemma-and-solution
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx-cloud
  rules:
    - host: cloud.metafeather.net
      http: &caddy
        paths:
          # Match for Caddy:80 to do the http->https redirect
          - path: /(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: metafeather-caddy
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: metafeather-caddy
                port:
                  number: 443
    # Examples
    - host: ssl-test.api.cloud.metafeather.net
      http: *caddy
    - host: connectrpc.api.cloud.metafeather.net
      http: *caddy
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: metafeather-branch-dev
helmCharts:
  - name: nsq
    repo: https://nsqio.github.io/helm-chart
    releaseName: nsq-identity-branch-dev
    namespace: kustomize-me
    valuesFile: nsq.values.yaml
# NOTE(metafeather): Patches will be applied to these manifests by ArgoCD
# NOTE(metafeather): deployment.yaml was generated from the Helm chart at https://gitlab.com/alexander-chernov/helm/caddy
images:
  - name: caddy
    newTag: v2.9.1
configMapGenerator:
  - files:
      - Caddyfile=sites-enabled/Caddyfile
    name: caddy-sites-enabled
    # options:
    #   disableNameSuffixHash: true
resources:
  - ns.yaml
  - ingress.yaml
  - deployment.yaml
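The `configMapGenerator` entry reads the ConfigMap key `Caddyfile` from `sites-enabled/Caddyfile` in the kustomization's working tree. A sketch of the expected layout (file contents are illustrative):

```shell
# Build a throwaway working tree matching the generator's "files" mapping.
cd "$(mktemp -d)"
mkdir -p sites-enabled
cat > sites-enabled/Caddyfile <<'EOF'
:9999 {
	respond "OK"
}
EOF
ls sites-enabled
# → Caddyfile
```

With the default name-suffix hash enabled, the generated ConfigMap is named `caddy-sites-enabled-<hash>`, and kustomize rewrites the Deployment's volume reference to match.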
# ref: https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
# ref: https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/examples/README.md
controller:
  # Run multiple Ingress controllers with different Ingress classes
  # ref: https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
  ingressClass: nginx-cloud # default: nginx
  ingressClassResource:
    name: nginx-cloud # default: nginx
    enabled: true
    default: false
    controllerValue: k8s.io/ingress-nginx-cloud # default: k8s.io/ingress-nginx
  # A replica count of at least 2 keeps the Nginx Ingress controller Pods highly available
  replicaCount: 2
  service:
    type: LoadBalancer
    annotations:
      # ref: https://docs.digitalocean.com/products/kubernetes/how-to/configure-load-balancers/
      service.beta.kubernetes.io/do-loadbalancer-hostname: cloud.metafeather.net
      service.beta.kubernetes.io/do-loadbalancer-name: cloud.metafeather.net
      service.beta.kubernetes.io/do-loadbalancer-protocol: tcp
      service.beta.kubernetes.io/do-loadbalancer-http-ports: "80"
      service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
      service.beta.kubernetes.io/do-loadbalancer-enable-backend-keepalive: "true"
      # Annotation values must be strings, so the timeout is quoted
      service.beta.kubernetes.io/do-loadbalancer-http-idle-timeout-seconds: "180"
      service.beta.kubernetes.io/do-loadbalancer-algorithm: least_connections
      # 1.1. Pass SSL, gRPC, etc. through the LB to Nginx
      # TODO(metafeather): Replace with a DO L4 network LB to remove the Nginx -> Caddy proxy hop
      # ref: https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers/blob/main/03-setup-ingress-controller/assets/manifests/nginx-values-v4.1.3.yaml
      service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
      service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"
    targetPorts:
      http: http
      https: https
  # -- Global configuration passed to the ConfigMap consumed by the controller. Values may contain Helm templates.
  # Ref.: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
  config:
    # 1.2. Accept the PROXY protocol the LB now sends
    # ref: https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers/blob/main/03-setup-ingress-controller/assets/manifests/nginx-values-v4.1.3.yaml
    use-proxy-protocol: "true"
  extraArgs:
    # 1.3. Pass SSL, gRPC, etc. through Nginx to Caddy, etc.
    enable-ssl-passthrough: "true"
    default-ssl-certificate: cert-manager/letsencrypt-cloud-wildcard
  # extraEnvs:
  #   - name: FOO
  #     valueFrom:
  #       secretKeyRef:
  #         key: FOO
  #         name: secret-resource
  ## Enable the metrics of the NGINX Ingress controller
  ## https://kubernetes.github.io/ingress-nginx/user-guide/monitoring/
  metrics:
    enabled: true
    service:
      servicePort: "9090"
  podAnnotations:
    prometheus.io/port: "10254"
    prometheus.io/scrape: "true"
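With `do-loadbalancer-enable-proxy-protocol` and `use-proxy-protocol` both on, the LB prepends a PROXY protocol v1 header to every TCP connection so Nginx (and anything behind it) can recover the real client address despite passthrough. A sketch of that header with made-up addresses:

```shell
# PROXY protocol v1: one ASCII line prepended to the stream, in the form
#   PROXY <TCP4|TCP6> <client-ip> <proxy-ip> <client-port> <proxy-port>\r\n
printf 'PROXY TCP4 203.0.113.7 10.0.0.5 56324 443\r\n'
```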
metafeather commented Apr 14, 2025
This is for DigitalOcean, but the same approach works for AWS load balancers: SSL termination is passed through the LB and Nginx down to Caddy, effectively cutting them out of the TLS process and letting Caddy handle it with its excellent Let's Encrypt support and reverse-proxying capabilities:

```mermaid
graph LR;
  Internet -- http --> LB --> Ingress --> Caddy:80 --> HTTP-01 --> redirect[Redirect to HTTPS]
  Internet -- https --> LB -- PROXY protocol --> Ingress -- ssl-passthrough --> Caddy:443 --> LetsEncrypt --> proxy[Reverse Proxy] --> backend[h2, gRPC etc]
```
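Because the Connect protocol maps unary RPCs onto plain HTTP POSTs, the gRPC/h2 backends at the end of this chain can be smoke-tested with ordinary HTTP tooling once TLS and HTTP/2 are in place. A sketch of the wire shape (service, method, and body are hypothetical):

```shell
# Hypothetical Connect unary request to the proxied backend:
# a plain HTTP POST of JSON to /<package.Service>/<Method>.
cat <<'EOF'
POST /greet.v1.GreetService/Greet HTTP/1.1
Host: connectrpc.api.cloud.metafeather.net
Content-Type: application/json

{"name": "world"}
EOF
```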
