For information, on GKE we create the cluster without the HttpLoadBalancing add-on because we use an nginx ingress controller instead:
gcloud container clusters create NAME --disable-addons HttpLoadBalancing
With autoscaling enabled and scopes to write to Cloud DNS, plus storage read-write for a persistent registry, a node pool looks like this:
gcloud container node-pools create tm --cluster munuprod --zone us-central1-a --num-nodes 5 -m n1-standard-8 --enable-autoscaling --min-nodes 3 --max-nodes 10 --scopes https://www.googleapis.com/auth/ndev.clouddns.readwrite,https://www.googleapis.com/auth/devstorage.read_write
We need to grant our main account admin privileges (a GKE requirement before being able to create RBAC ClusterRoles):
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user [email protected]
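An optional sanity check that the binding took effect (just one way to verify):
kubectl auth can-i '*' '*'   # should print "yes" once cluster-admin is bound to your account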
We use the Helm chart to deploy the nginx Ingress controller, following the instructions here.
Install Helm on your local machine and deploy Tiller in the cluster. Get the service account and RBAC roles properly set, then deploy Tiller:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init --service-account tiller --upgrade
helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true
Note that the service fronting the controller is of type LoadBalancer, hence you will get a public IP from Google:
$ kubectl get svc
NAME                                          TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
nginx-ingress-nginx-ingress-controller        LoadBalancer   10.31.247.106   35.184.224.62   80:30390/TCP,443:32244/TCP   1d
nginx-ingress-nginx-ingress-default-backend   ClusterIP      10.31.240.58    <none>          80/TCP                       1d
NOTE: Use that external IP to create the DNS A record.
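For example, with Cloud DNS (the zone and domain names below are placeholders, not taken from our setup):
# ZONE_NAME and munu.example.com are placeholders, replace with the real zone and domain
gcloud dns record-sets transaction start --zone=ZONE_NAME
gcloud dns record-sets transaction add 35.184.224.62 --name=munu.example.com. --ttl=300 --type=A --zone=ZONE_NAME
gcloud dns record-sets transaction execute --zone=ZONE_NAME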
Annotate it to enable CORS with:
kubectl annotate svc nginx-ingress-controller nginx.ingress.kubernetes.io/enable-cors=true
Edit the ConfigMap to add a hide-headers entry, otherwise a basic auth prompt might show up. Set hide-headers: "Www-authenticate".
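If you prefer patching over editing interactively, a minimal sketch, assuming the chart created the controller ConfigMap as nginx-ingress-nginx-ingress-controller in the default namespace:
# ConfigMap name/namespace are assumptions based on the release name used above
kubectl patch configmap nginx-ingress-nginx-ingress-controller --type merge -p '{"data":{"hide-headers":"Www-authenticate"}}'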
CAREFUL: Add the - --publish-service=default/nginx-ingress-controller
option to the ingress controller or your Ingress objects will get the wrong IP address.
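As a sketch, assuming the deployment created by the chart is named nginx-ingress-nginx-ingress-controller:
kubectl edit deploy nginx-ingress-nginx-ingress-controller
# then add, under the controller container's args:
#   - --publish-service=default/nginx-ingress-controller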
We follow the core documentation.
Let's stay with the 0.2 release for now, even though 0.3.0 has been out since last week:
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.2/third_party/istio-1.0.2/istio.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.2/release.yaml
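Before moving on, wait until the Istio and Knative Serving pods are all Running:
kubectl get pods -n istio-system
kubectl get pods -n knative-serving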
kubectl edit cm config-domain -n knative-serving
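As a minimal sketch, assuming munu.example.com is the domain the A record above points to, the edit boils down to adding it as a key with an empty value:
# munu.example.com is an assumed domain, replace with the real one
kubectl -n knative-serving patch cm config-domain --type merge -p '{"data":{"munu.example.com":""}}'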
### Install eventing
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.2.0/release.yaml
kubectl apply --filename https://github.com/knative/eventing-sources/releases/download/v0.2.0/release.yaml
kubectl create ns triggermesh
kubectl create ns registry
NOTE: We will have to see which registry we can use on-prem.
Our local registry setup is a copy of https://github.com/triggermesh/knative-local-registry/releases/tag/v0.2 but with GCS persistence enabled. We maintain a copy of the manifests here.
Node pools must be created with the storage read-write scope (or we'd have to set up a service account for the registry). We currently have a tiny extra node pool; 3685734 should be reverted once the default node pool supports GCS write.
Thanks to bucket persistence we can scale the registry ReplicaSet up to any number of pods and access the same images from multiple clusters.
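For reference, the backing bucket can be created with gsutil; the bucket name and location here are illustrative, the real ones live in the manifests:
# illustrative name/location, not taken from the manifests
gsutil mb -l us-central1 gs://munu-registry-storage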
For details on the registry-etc-hosts-update DaemonSet and its compatibility, see https://github.com/triggermesh/knative-local-registry/s.
At a high level we should be able to simply apply the two manifests, which contain Ingresses/Services/Deployments, RBAC rules, a PVC, and secrets.
kubectl apply -f console.yaml
kubectl apply -f app.yaml
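A quick check that everything came up in the triggermesh namespace:
kubectl -n triggermesh get deploy,svc,ingress,pvc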
The backend application requires several secrets to be available in the namespace to be fully functional.
First of all, the Auth0 service is used for user authorization, so the backend expects an auth0-token secret with API credentials to be available before it starts. To get those credentials, go to the Auth0 Applications page, open (or create) an API application of the MACHINE TO MACHINE type, and copy the Client ID and Client Secret into a Kubernetes secret:
kubectl -n triggermesh create secret generic auth0-token \
--from-literal=client_id=<CLIENT_ID> \
--from-literal=client_secret=<CLIENT_SECRET>
The backend sets a payload secret to secure the webhook endpoint. The webhook secret name used by the backend can be configured via the GIT_HOOK_SECRET_NAME environment variable (munusecret by default), and the secret must contain a secret key holding a random string. Please note: setting a new key on an existing secret will invalidate all GitHub webhooks created with the old key.
kubectl -n triggermesh create secret generic munusecret --from-literal=secret=<RANDOM_STRING>
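Any sufficiently random string will do; for example, it can be generated with openssl:
openssl rand -hex 20   # produces a 40-character random hex string to use as <RANDOM_STRING>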
If you have Bitbucket enabled under Social Connections in your Auth0 dashboard, you need to create a bitbucket-token secret with client_id and client_secret keys, which can be obtained from the Bitbucket settings in the Auth0 dashboard.
kubectl -n triggermesh create secret generic bitbucket-token \
--from-literal=client_id=<CLIENT_ID> \
--from-literal=client_secret=<CLIENT_SECRET>
TriggerMesh users whose default service account has been added as a subject to the triggermesh-admin-binding ClusterRoleBinding are considered administrators with full access to resources across all namespaces. Adding a user to this ClusterRoleBinding is a manual operation and can be done by another administrator. Please be aware that triggermesh-admin-binding grants the user the very broad access rights of the cluster-admin role.
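As a sketch of that manual operation (the "alice" namespace is just an example, not from our setup), an administrator can append a user's default service account to the existing binding with a JSON patch:
# "alice" is an illustrative user namespace, replace with the real one
kubectl patch clusterrolebinding triggermesh-admin-binding --type json \
  -p '[{"op":"add","path":"/subjects/-","value":{"kind":"ServiceAccount","name":"default","namespace":"alice"}}]'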