- Head to https://dev-NNNNNNNN-admin.okta.com/admin/apps/active
- Click "Create App Integration", leaving everything at its default except the following:
  - Sign-in method: "OIDC - OpenID Connect"
  - Application Type: "Web Application"
  - App integration name: "pomerium-test" (or similar)
  - Grant type: select "Refresh Token"
  - Sign-in redirect URIs: "https://authenticate.<DOMAIN_TO_SECURE>/oauth2/callback"
  - Assignments: select "Allow everyone in your organization to access" and DISABLE "Federation Broker Mode"
- Click "Save"
- Note the Client ID and Client Secret

Ensure you add people or groups to this new App Integration.
Capture the resulting values as shell variables for use in the steps that follow.

```bash
domain_to_secure=<DOMAIN_TO_SECURE> # e.g. jetstack.mcginlay.net
okta_dev_id=<OKTA_DEV_ID>           # the numbers that appear in okta-dev-NNNNNNNN
okta_client_id=<CLIENT_ID>
okta_client_secret=<CLIENT_SECRET>
```
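To sanity-check the Okta values before wiring them into Pomerium, the org's standard OIDC discovery document can be fetched. This is an optional check, not one of the original steps, and assumes `okta_dev_id` is set as above.

```bash
# Optional sanity check: the Okta org should serve an OIDC discovery document.
curl -fsS "https://dev-${okta_dev_id}.okta.com/.well-known/openid-configuration"
```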
Next, log in to Jetstack Secure, connect the cluster, and deploy cert-manager via the operator.

```bash
jsctl auth login
jsctl config set organization <ORG_NAME> # e.g. gallant-wright
jsctl registry auth output 2>&1 > /dev/null # force an image pull secret to be created as necessary
k8s_cluster_name=$(kubectl config current-context | cut -d'@' -f2 | cut -d'.' -f1)
k8s_cluster_name_jss=$(tr "-" "_" <<< ${k8s_cluster_name}_${RANDOM}) # JSS doesn't like '-'
jsctl clusters connect ${k8s_cluster_name_jss}
jsctl operator deploy --auto-registry-credentials
jsctl operator installations apply --auto-registry-credentials --cert-manager-replicas 1
```
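Before continuing, it is worth confirming that cert-manager has come up. This is an optional check and assumes the operator installs its components into the `jetstack-secure` namespace.

```bash
# Namespace is an assumption based on the default jsctl/operator layout.
kubectl get pods -n jetstack-secure
```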
You can add new Issuer resources to your cluster by editing the Installation manifest. Open the manifest in whichever editor you have configured (via the EDITOR environment variable), as follows.
```bash
kubectl edit Installation installation
```
You can add the Let's Encrypt ClusterIssuer by inserting the following snippet into the `spec:` section of the Installation manifest.
```yaml
issuers:
- clusterScope: true
  name: letsencrypt
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: pomerium
```
Saving the file will apply those changes.
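Once the operator has reconciled the edit, a ClusterIssuer named `letsencrypt` should appear. An optional way to confirm this, not part of the original steps:

```bash
# The ClusterIssuer created from the Installation spec should report READY=True.
kubectl get clusterissuer letsencrypt
```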
Next, deploy the Pomerium Ingress Controller.

```bash
kubectl apply -f https://raw.githubusercontent.com/pomerium/ingress-controller/v0.20.0/deployment.yaml
```
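This creates the `pomerium` namespace, the controller, and the `pomerium-proxy` service used below. A quick optional check that everything is running:

```bash
kubectl -n pomerium get pods,svc
```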
Start by setting variables to represent the ELB and DNS record name you wish to target.
```bash
elb_dnsname=$(kubectl -n pomerium get service pomerium-proxy -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
hosted_zone=${domain_to_secure}
record_subdomain_name='*'
dns_record_name=${record_subdomain_name}.${hosted_zone}
```
Now use the `hosted_zone` and `elb_dnsname` settings to configure Route53.
```bash
hosted_zone_id=$(aws route53 list-hosted-zones --query "HostedZones[?Name=='${hosted_zone}.'].Id" --output text | cut -d '/' -f3)
hosted_zone_id_for_elb=$(aws elb describe-load-balancers --query "LoadBalancerDescriptions[?DNSName=='${elb_dnsname}'].CanonicalHostedZoneNameID" --output text)
action=UPSERT # switch to DELETE to reverse this operation
aws route53 change-resource-record-sets --hosted-zone-id ${hosted_zone_id} --change-batch file://<(
cat << EOF
{
  "Changes": [{
    "Action": "${action}",
    "ResourceRecordSet": {
      "Name": "${dns_record_name}",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "${hosted_zone_id_for_elb}",
        "DNSName": "dualstack.${elb_dnsname}.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF
)
```
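After the change propagates, hosts under the wildcard record should resolve to the ELB. An optional check (`nslookup` works equally well):

```bash
dig +short "authenticate.${domain_to_secure}"
```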
Create a Secret containing the Okta client credentials, which Pomerium will reference as its identity provider secret.

```bash
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: idp
  namespace: pomerium
type: Opaque
stringData:
  client_id: ${okta_client_id}
  client_secret: ${okta_client_secret}
EOF
```
Now apply the global Pomerium configuration, pointing it at the Okta identity provider and the authenticate URL.

```bash
cat << EOF | kubectl apply -f -
apiVersion: ingress.pomerium.io/v1
kind: Pomerium
metadata:
  name: global
spec:
  secrets: pomerium/bootstrap
  authenticate:
    url: https://authenticate.${domain_to_secure}
  identityProvider:
    provider: okta
    secret: pomerium/idp
    url: https://dev-${okta_dev_id}.okta.com
  certificates:
  - default/pomerium-wildcard-tls
EOF
```
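The Pomerium resource records reconciliation problems in its status, which is useful if later steps misbehave. An optional check, assuming the CRDs installed by the controller manifest above register the resource as `pomerium`:

```bash
kubectl describe pomerium global
```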
Deploy the Pomerium verify test application.

```bash
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: verify
  labels:
    app: verify
    service: verify
spec:
  ports:
  - port: 8000
    targetPort: 8000
    name: http
  selector:
    app: pomerium-verify
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: verify
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pomerium-verify
  template:
    metadata:
      labels:
        app: pomerium-verify
    spec:
      containers:
      - image: docker.io/pomerium/verify
        imagePullPolicy: IfNotPresent
        name: verify
        ports:
        - containerPort: 8000
          protocol: TCP
          name: http
EOF
```
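Optionally, wait for the deployment to become available before testing the route.

```bash
kubectl rollout status deployment/verify
```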
Finally, expose the verify application through a Pomerium-managed Ingress. The cert-manager annotation requests a certificate covering both the verify and authenticate hosts.

```bash
cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: verify
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt"
    ingress.pomerium.io/allow_public_unauthenticated_access: "false"
    ingress.pomerium.io/allow_any_authenticated_user: "true"
    ingress.pomerium.io/pass_identity_headers: "true"
spec:
  ingressClassName: pomerium
  rules:
  - host: verify.${domain_to_secure}
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: verify
            port:
              number: 8000
  tls:
  - hosts:
    - verify.${domain_to_secure}
    - authenticate.${domain_to_secure}
    secretName: pomerium-wildcard-tls
EOF
```
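cert-manager should now request a certificate for the hosts in the `tls` section. An optional check on progress (the Certificate object is typically named after the `secretName`):

```bash
kubectl get certificate pomerium-wildcard-tls
kubectl get ingress verify
```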
Run the following to display the secured application endpoint.
echo "https://verify.${domain_to_secure}/"
Navigate to the displayed endpoint and you will be taken to the dedicated Okta sign-in page for the app. You previously added people/groups to this new App Integration; only those users will be able to sign in. In the URL, change `verify.` to `authenticate.` to see more information.
I would like to know why `verify.` yields a "TLS Certificate verification failed" warning. I think it has something to do with `tls_upstream_server_name` not being set on the inbound request, but I am not sure how to address that from the Kubernetes manifests.