- Deploy NGINX Ingress Controller Operator
- Create an instance of NginxIngress
- Deploy an app and expose it at the Ingress
- Deploy RateLimit and a WAF Policy
- Helpful Sites
- Clean Up
We are deploying via the certified NGINX Ingress Operator, so create the Namespace (ns), Subscription (subs), and OperatorGroup (og) for the operator.
kubectl create -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: nginx-ingress
spec: {}
status: {}
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: nginx-ingress-tvj24
  namespace: nginx-ingress
spec:
  upgradeStrategy: Default
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  labels:
    operators.coreos.com/nginx-ingress-operator.nginx-ingress: ""
  name: nginx-ingress-operator
  namespace: nginx-ingress
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: nginx-ingress-operator
  source: certified-operators
  sourceNamespace: openshift-marketplace
  startingCSV: nginx-ingress-operator.v1.3.1
EOF
Wait for the operator to be ready:
kubectl wait --for=condition=Ready pod -l control-plane=controller-manager -n nginx-ingress
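Optionally, confirm the operator's CSV reached the Succeeded phase:
kubectl get csv -n nginx-ingress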
Set current-context to the nginx-ingress namespace.
kubectl config set-context $(kubectl config current-context) --namespace=nginx-ingress
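Verify the context change took by printing the active namespace:
kubectl config view --minify -o jsonpath='{..namespace}'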
Create an instance of the NginxIngress:
kubectl create -f - <<EOF
apiVersion: charts.nginx.org/v1alpha1
kind: NginxIngress
metadata:
  name: starburst
  namespace: nginx-ingress
spec:
  controller:
    affinity: {}
    annotations: {}
    appprotect:
      enable: false # This enables WAF
      # F0307 23:14:59.281060 1 flags.go:222] NGINX App Protect support is for NGINX Plus only
      # https://github.com/nginxinc/nginx-ingress-helm-operator/blob/b99d3cb9355458a55bac7d44b43ebe13c11e4ea2/helm-charts/nginx-ingress/README.md
    appprotectdos: # This is a nice to have too, protects against DoS
      debug: true
      enable: false
      maxDaemons: 0
      maxWorkers: 0
      memory: 0
    # autoscaling:
    #   annotations: {}
    #   enabled: false
    #   maxReplicas: 3
    #   minReplicas: 1
    #   targetCPUUtilizationPercentage: 50
    #   targetMemoryUtilizationPercentage: 50
    config:
      annotations: {}
      entries: {}
    customConfigMap: ''
    customPorts: []
    # If you do this, you have to manually create the secret
    # defaultTLS:
    #   secret: nginx-ingress/default-server-secret
    disableIPV6: false
    dnsPolicy: ClusterFirst
    enableCertManager: false
    enableCustomResources: true
    enableExternalDNS: false
    enableLatencyMetrics: false
    enableOIDC: false
    enablePreviewPolicies: false
    enableSnippets: false
    enableTLSPassthrough: false
    extraContainers: []
    globalConfiguration:
      create: false
      spec: {}
    healthStatus: false
    healthStatusURI: /nginx-health
    hostNetwork: false
    image:
      pullPolicy: IfNotPresent
      repository: nginx/nginx-ingress
      tag: 3.0.2-ubi
    includeYear: false
    ingressClass: nginx
    initContainers: []
    lifecycle: {}
    logLevel: 1
    minReadySeconds: 0
    nginxDebug: false
    nginxReloadTimeout: 60000
    nginxStatus:
      allowCidrs: 127.0.0.1
      enable: true
      port: 8080
    nginxplus: false
    pod:
      annotations: {}
      extraLabels: {}
    readyStatus:
      enable: true
      initialDelaySeconds: 0
      port: 8081
    replicaCount: 1
    reportIngressStatus:
      annotations: {}
      enable: true
      enableLeaderElection: true
      ingressLink: ''
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
    service:
      annotations: {}
      create: true
      customPorts: []
      externalIPs: []
      externalTrafficPolicy: Local
      extraLabels: {}
      httpPort:
        enable: true
        port: 80
        targetPort: 80
      httpsPort:
        enable: true
        port: 443
        targetPort: 443
      loadBalancerIP: ''
      loadBalancerSourceRanges: []
      type: LoadBalancer
    serviceAccount:
      annotations: {}
      imagePullSecretName: ''
    serviceMonitor:
      create: false # not now
      endpoints: []
      labels: {}
      selectorMatchLabels: {}
    setAsDefaultIngress: false
    strategy: {}
    terminationGracePeriodSeconds: 30
    tolerations: []
    volumeMounts: []
    volumes: []
    watchNamespace: ''
    watchNamespaceLabel: ''
    watchSecretNamespace: ''
  kind: deployment
  nginxServiceMesh:
    enable: false
    enableEgress: false
  prometheus:
    create: false # not now
    port: 9113
    scheme: http
    secret: ''
  rbac:
    create: true
  serviceInsight:
    create: false
    port: 9114
    scheme: http
    secret: ''
EOF
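The operator now reconciles the instance; you can list what it created (the starburst-nginx-ingress names come from the instance name, as used later in this walkthrough):
kubectl get deploy,svc -n nginx-ingress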
If you had turned on App Protect support, you would get this error:
F0307 23:14:59.281060 1 flags.go:222] NGINX App Protect support is for NGINX Plus only
Catch the error around allowPrivilegeEscalation in the controller manager logs:
kubectl logs -f -l control-plane=controller-manager | egrep -A9 "allowPrivilegeEscalation|error"
# or
kubectl get ev --sort-by='.lastTimestamp' | grep forbidden
output:
W0307 19:01:10.543082 1 warnings.go:70] would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "starburst-nginx-ingress" must set securityContext.allowPrivilegeEscalation=false), seccompProfile (pod or container "starburst-nginx-ingress" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
{"level":"info","ts":1678215670.5483713,"logger":"helm.controller","msg":"Reconciled release","namespace":"nginx-ingress","name":"starburst","apiVersion":"charts.nginx.org/v1alpha1","kind":"NginxIngress","release":"starburst"}
If we go with this method of implementation for the WAF, this is an opportunity for us to contribute back upstream: this operator is a helm-operator, and we need to expose the securityContext. UPDATE: THIS IS ALREADY A WIP.
Secondly, I don't think this error is very bad. The service account is localized to a single deployment, and we can implement network policies in the cluster if we need to ease concerns from prodsec.
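As a sketch of that idea, a NetworkPolicy along these lines would restrict inbound traffic to the controller's web ports (the policy name and port list here are illustrative assumptions, adjust for your environment):
kubectl create -f - <<EOF
# Illustrative only: permit HTTP/HTTPS to pods in nginx-ingress, deny other ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-ingress-allow-web
  namespace: nginx-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
EOF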
Fix this issue for now by adding a ClusterRoleBinding that grants the ServiceAccount nginx-ingress the system:openshift:scc:privileged ClusterRole, allowing its pods to run with the securityContext they request. Read more about SecurityContext in the Kubernetes documentation.
kubectl create -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: nginx-privileged
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:privileged
subjects:
- kind: ServiceAccount
  name: nginx-ingress
  namespace: nginx-ingress
EOF
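To confirm the binding took effect, check whether the ServiceAccount can now use the privileged SCC:
kubectl auth can-i use securitycontextconstraints/privileged --as=system:serviceaccount:nginx-ingress:nginx-ingress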
This shows the securityContext used by the deployment:
kubectl get deploy starburst-nginx-ingress -ojsonpath='{.spec.template.spec.containers[0].securityContext}' | jq
output:
{
  "allowPrivilegeEscalation": true,
  "capabilities": {
    "add": [
      "NET_BIND_SERVICE"
    ],
    "drop": [
      "ALL"
    ]
  },
  "runAsNonRoot": true,
  "runAsUser": 101
}
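On OpenShift you can also read which SCC was actually assigned to the running pod via the openshift.io/scc annotation set at admission (using the same pod label as the logs command below):
kubectl get pod -l app=starburst-nginx-ingress -o jsonpath='{.items[0].metadata.annotations.openshift\.io/scc}'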
Now we can finally move on to the Ingress object, after making sure the logs are reporting clean in the NGINX instance:
kubectl logs -l app=starburst-nginx-ingress
output:
2023/03/07 23:32:14 [notice] 23#23: worker process 30 exited with code 0
2023/03/07 23:32:14 [notice] 23#23: worker process 33 exited with code 0
2023/03/07 23:32:14 [notice] 23#23: worker process 41 exited with code 0
2023/03/07 23:32:14 [notice] 23#23: signal 29 (SIGIO) received
2023/03/07 23:32:14 [notice] 23#23: signal 17 (SIGCHLD) received from 31
2023/03/07 23:32:14 [notice] 23#23: worker process 31 exited with code 0
2023/03/07 23:32:14 [notice] 23#23: signal 29 (SIGIO) received
2023/03/07 23:32:14 [notice] 23#23: signal 17 (SIGCHLD) received from 26
2023/03/07 23:32:14 [notice] 23#23: worker process 26 exited with code 0
2023/03/07 23:32:14 [notice] 23#23: signal 29 (SIGIO) received
Deploy a sample app that we will route to. Eventually this app will be Starburst; for now, it is a simple NGINX pod.
kubectl run backend --image=nginx --port=80 --expose
output:
service/backend created
pod/backend created
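Optionally wait for the pod to be Ready before testing:
kubectl wait --for=condition=Ready pod/backend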
Create an Ingress to route to the app.
kubectl create ing starburst --class=nginx --rule="redhat.com/=backend:80"
output:
ingress.networking.k8s.io/starburst created
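For reference, the imperative command above generates an Ingress roughly equivalent to this manifest (kubectl defaults paths without a trailing * to pathType Exact):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: starburst
spec:
  ingressClassName: nginx
  rules:
  - host: redhat.com
    http:
      paths:
      - backend:
          service:
            name: backend
            port:
              number: 80
        path: /
        pathType: Exact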
Test
curl -H "Host: redhat.com" $(kubectl get svc starburst-nginx-ingress -ojsonpath='{.status.loadBalancer.ingress[0].hostname}')
output:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
IGNORE THIS - you still must have NGINX Plus for the WAF. Skip to Clean Up.
When we deployed the NGINX operator, we also deployed a CRD called Policy, which covers both WAF and RateLimit. Let's check it out:
kubectl explain policies.spec --recursive
output:
KIND:     Policy
VERSION:  k8s.nginx.org/v1

RESOURCE: spec <Object>

DESCRIPTION:
     PolicySpec is the spec of the Policy resource. The spec includes multiple
     fields, where each field represents a different policy. Only one policy
     (field) is allowed.

FIELDS:
   accessControl        <Object>
      allow             <[]string>
      deny              <[]string>
   basicAuth            <Object>
      realm             <string>
      secret            <string>
   egressMTLS           <Object>
      ciphers           <string>
      protocols         <string>
      serverName        <boolean>
      sessionReuse      <boolean>
      sslName           <string>
      tlsSecret         <string>
      trustedCertSecret <string>
      verifyDepth       <integer>
      verifyServer      <boolean>
   ingressClassName     <string>
   ingressMTLS          <Object>
      clientCertSecret  <string>
      verifyClient      <string>
      verifyDepth       <integer>
   jwt                  <Object>
      jwksURI           <string>
      keyCache          <string>
      realm             <string>
      secret            <string>
      token             <string>
   oidc                 <Object>
      authEndpoint      <string>
      clientID          <string>
      clientSecret      <string>
      jwksURI           <string>
      redirectURI       <string>
      scope             <string>
      tokenEndpoint     <string>
      zoneSyncLeeway    <integer>
   rateLimit            <Object>
      burst             <integer>
      delay             <integer>
      dryRun            <boolean>
      key               <string>
      logLevel          <string>
      noDelay           <boolean>
      rate              <string>
      rejectCode        <integer>
      zoneSize          <string>
   waf                  <Object>
      apPolicy          <string>
      enable            <boolean>
      securityLog       <Object>
         apLogConf      <string>
         enable         <boolean>
         logDest        <string>
      securityLogs      <[]Object>
         apLogConf      <string>
         enable         <boolean>
         logDest        <string>
kubectl apply -f - <<EOF
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: starburst
  namespace: nginx-ingress
spec:
  waf:
    enable: true
    securityLog:
      enable: true
      logDest: /tmp
EOF
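The same Policy CRD handles rate limiting. Here is a minimal sketch following the upstream Policy examples (the rate, key, and zoneSize values are illustrative; the heredoc delimiter is quoted so the shell does not expand \${binary_remote_addr}):
kubectl apply -f - <<'EOF'
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: rate-limit
  namespace: nginx-ingress
spec:
  rateLimit:
    # limit each client IP to 10 requests per second
    rate: 10r/s
    key: ${binary_remote_addr}
    zoneSize: 10M
EOF
Note that a Policy only takes effect once it is referenced, e.g. from a VirtualServer's policies list.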
- ModSecurity Logging and Debugging
- Kubernetes NGINX Ingress WAF with ModSecurity. From zero to hero!
- Troubleshooting
- ModSecurity
- Operator GitHub
- Helm Chart
- NGINX WAF
kubectl delete nginxingress starburst --force
kubectl delete clusterrolebinding nginx-privileged
kubectl delete og,subs,csv --all --force
# clean up crds
kubectl get crd | grep nginx | awk '{print $1}' | xargs kubectl delete crd
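The sample app, Ingress, and namespace created during this walkthrough can go too:
kubectl delete ing starburst
kubectl delete pod,svc backend
kubectl delete ns nginx-ingress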