Setting Up k3s for Serverless (knative) on a $5 DigitalOcean Droplet Using k3d
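
The gist also ships a .gitignore for the working directory, covering the Terraform state and the Istio and cert-manager trees we download below: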
.terraform/
*.pem
*.tf
*.tfstate
*.yaml
*.backup
istio-*/
cert-manager-*/
*.swp
env

Setting Up k3s for Serverless (knative) on a $5 DigitalOcean Droplet Using k3d

The cheapest Kubernetes deployment either side of the Mississippi

BEWARE This guide is devoid of binary verification

Generate Your SSH keys

Generate an ssh key with 4096 bits (if you don't already have one you want to use). I would recommend putting a passphrase on it and using it only for this VM.

ssh-keygen -b 4096
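
Then grab the public key so you can paste it into the DigitalOcean control panel when you create the droplet (the path below assumes you accepted ssh-keygen's default file name):

# assumes the default key path; adjust if you chose another file
cat ~/.ssh/id_rsa.pub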

TODO Add info on storing keys in TPM via tpm2-pkcs11

Provision VM

We'll be using a CoreOS Container Linux VM, only because I'm personally inclined to believe that's the most secure option at the moment.

Add swap

Our VM has 1GB of memory, so we'll most assuredly need some swap.

https://coreos.com/os/docs/latest/adding-swap.html

sudo mkdir -p /var/vm
sudo fallocate -l 5G /var/vm/swapfile1
sudo chmod 600 /var/vm/swapfile1
sudo mkswap /var/vm/swapfile1
sudo tee /etc/systemd/system/var-vm-swapfile1.swap > /dev/null <<EOF
[Unit]
Description=Turn on swap

[Swap]
What=/var/vm/swapfile1

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable --now var-vm-swapfile1.swap
echo 'vm.swappiness=30' | sudo tee /etc/sysctl.d/80-swappiness.conf
sudo systemctl restart systemd-sysctl
sudo swapon
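
The trailing swapon above prints the active swap devices; free gives the combined memory picture (assuming free is available, which it is on Container Linux via procps):

free -h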

Setup PATH

Create a ~/.local/bin directory and a ~/.profile which will add that directory to your PATH when you source it.

mkdir -p "${HOME}/.local/bin"
cat >> ~/.profile <<'EOF'
export PATH="${PATH}:${HOME}/.local/bin"
EOF

Whenever you ssh in (and right now), you'll want to run

. .profile

to add ~/.local/bin to your PATH, which is where we'll install the binaries.

Install Binaries

  • k3d
    • k3d is k3s in docker
  • kubectl
    • kubectl is the binary we'll use to interact with our Kubernetes cluster
curl -L -o k3d https://github.com/rancher/k3d/releases/download/v1.3.0-dev.0/k3d-linux-amd64
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod 700 k3d kubectl
mv k3d kubectl ~/.local/bin/
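
Both binaries should now resolve from ~/.local/bin; a quick sanity check:

k3d --version
kubectl version --client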

Cluster Creation

Create a cluster with 3 workers, exposing ports 80 and 443 (HTTP and HTTPS).

At the time of writing the Rancher devs have only recently fixed bugs related to Knative deployment, so we need to specify a k3s image that includes the fix.

k3d create --workers 3 --publish 80:80 --publish 443:443 --image docker.io/rancher/k3s:v0.7.0-rc6

Access Your Cluster

The k3d get-kubeconfig command may take a second or two before it works; just re-run it a few times until it succeeds.

export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
kubectl cluster-info
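
You should see one server and the three workers come up:

kubectl get nodes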

Installing Helm

Grab the latest release from https://github.com/helm/helm/releases and install it.

curl -sSL https://get.helm.sh/helm-v2.14.2-linux-amd64.tar.gz | tar xvz
mv linux-amd64/{helm,tiller} ~/.local/bin/
rm -rf linux-amd64/

Now we need to configure Role Based Access Control (RBAC) and create a Certificate Authority (CA) which will secure our helm/tiller installation.

You should read the upstream Helm guides on RBAC and on securing your Helm installation, but I'll summarize the CLI commands.

Configuring Role Based Access Control (RBAC)

We're going to deploy Helm in one namespace, talking to Tiller in another namespace.

kubectl create namespace helm-world
kubectl create namespace tiller-world
kubectl create serviceaccount tiller --namespace tiller-world
kubectl create -f - << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm
  namespace: helm-world
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-user
  namespace: tiller-world
rules:
- apiGroups:
  - ""
  resources:
  - pods/portforward
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-user-binding
  namespace: tiller-world
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-user
subjects:
- kind: ServiceAccount
  name: helm
  namespace: helm-world
EOF
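
To confirm the accounts and RBAC objects landed where we expect:

kubectl get serviceaccount,role,rolebinding --namespace tiller-world
kubectl get serviceaccount --namespace helm-world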

Install Terraform

We're going to use terraform to generate all the certificates.

curl -LO https://releases.hashicorp.com/terraform/0.12.5/terraform_0.12.5_linux_amd64.zip
unzip terraform_0.12.5_linux_amd64.zip
rm terraform_0.12.5_linux_amd64.zip
mv terraform ~/.local/bin/

Generating Certificates

Now we download the terraform file and run it to create the certs. More info on the terraform file can be found here: https://github.com/jbussdieker/tiller-ssl-terraform

Warning: Terraform will not regenerate the certs when you re-run the apply command if they already exist on disk.

curl -LO https://github.com/jbussdieker/tiller-ssl-terraform/raw/master/tiller_certs.tf
terraform init
terraform apply -auto-approve
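
The Terraform run writes the CA and Tiller certificates into the current directory; the helm init step below expects at least these files (names taken from that invocation):

ls -l ca.cert.pem tiller.cert.pem tiller.key.pem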

Helm init

helm init \
  --tiller-tls \
  --tiller-tls-cert ./tiller.cert.pem \
  --tiller-tls-key ./tiller.key.pem \
  --tiller-tls-verify \
  --tls-ca-cert ca.cert.pem \
  --tiller-namespace=tiller-world \
  --service-account=tiller-user
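
Once helm init finishes, Tiller should come up in the tiller-world namespace (tiller-deploy is Helm 2's default deployment name):

kubectl get pods --namespace tiller-world
kubectl rollout status deployment/tiller-deploy --namespace tiller-world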

Installing Istio

Work In Progress: When deploying Istio's pods on our cheap VM, you'll notice that Kubernetes leaves some of the Istio pods in the Pending state. This is due to our 1GB of RAM; yes, we gave the VM 5GB of swap, but Kubernetes doesn't count swap toward the total amount of memory.

Important: the --set gateways.custom-gateway.type='ClusterIP' from the upstream instructions needs to be --set gateways.custom-gateway.type='NodePort' here.

TODO Enable global.mtls.enabled and global.controlPlaneSecurityEnabled

https://knative.dev/docs/install/installing-istio/#installing-istio-with-sds-to-secure-the-ingress-gateway

export ISTIO_VERSION=1.1.7
curl -L https://git.io/getLatestIstio | sh -
cd istio-${ISTIO_VERSION}

for i in istio-?.?.?/install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
  labels:
    istio-injection: disabled
EOF

helm template --namespace=istio-system \
  --set sidecarInjectorWebhook.enabled=true \
  --set sidecarInjectorWebhook.enableNamespacesByDefault=true \
  --set global.proxy.autoInject=disabled \
  --set global.disablePolicyChecks=true \
  --set prometheus.enabled=false \
  `# Disable mixer prometheus adapter to remove istio default metrics.` \
  --set mixer.adapters.prometheus.enabled=false \
  `# Disable mixer policy check, since in our template we set no policy.` \
  --set global.disablePolicyChecks=true \
  `# Set gateway pods to 1 to sidestep eventual consistency / readiness problems.` \
  --set gateways.istio-ingressgateway.autoscaleMin=1 \
  --set gateways.istio-ingressgateway.autoscaleMax=1 \
  --set gateways.istio-ingressgateway.resources.requests.cpu=500m \
  --set gateways.istio-ingressgateway.resources.requests.memory=256Mi \
  `# Enable SDS in the gateway to allow dynamically configuring TLS of gateway.` \
  --set gateways.istio-ingressgateway.sds.enabled=true \
  `# More pilot replicas for better scale` \
  --set pilot.autoscaleMin=2 \
  `# Set pilot trace sampling to 100%` \
  --set pilot.traceSampling=100 \
  istio-?.?.?/install/kubernetes/helm/istio \
  > ./istio.yaml

kubectl apply -f istio.yaml

helm template --namespace=istio-system \
  --set gateways.custom-gateway.autoscaleMin=1 \
  --set gateways.custom-gateway.autoscaleMax=1 \
  --set gateways.custom-gateway.cpu.targetAverageUtilization=60 \
  --set gateways.custom-gateway.labels.app='cluster-local-gateway' \
  --set gateways.custom-gateway.labels.istio='cluster-local-gateway' \
  --set gateways.custom-gateway.type='NodePort' \
  --set gateways.istio-ingressgateway.enabled=false \
  --set gateways.istio-egressgateway.enabled=false \
  --set gateways.istio-ilbgateway.enabled=false \
  istio-?.?.?/install/kubernetes/helm/istio \
  -f istio-?.?.?/install/kubernetes/helm/istio/example-values/values-istio-gateways.yaml \
  | sed -e "s/custom-gateway/cluster-local-gateway/g" -e "s/customgateway/clusterlocalgateway/g" \
  > ./istio-local-gateway.yaml

kubectl apply -f istio-local-gateway.yaml
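
Istio takes a while to come up on a droplet this small; you can watch the pods until everything is Running or Completed (this is the same check the setup script at the bottom of this gist automates):

kubectl get pods --namespace istio-system --watch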

Assign Domain Name

https://knative.dev/docs/serving/using-a-custom-domain/

export DOMAIN=example.com
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  # Default value for domain, for routes that do not have app=prod labels.
  # Although it will match all routes, it is the least-specific rule so it
  # will only be used if no other domain matches.
  ${DOMAIN}: ""
EOF
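
To confirm the domain took:

kubectl get configmap config-domain --namespace knative-serving --output yaml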

Enabling Auto-TLS Via Let's Encrypt

Auto-TLS means our Knative applications will get HTTPS certs from Let's Encrypt without us doing anything (other than setting it up)! Awesome!

Cert Manager

First we need to install cert manager which is what talks to Let's Encrypt to get us certificates. We'll need to combine a few guides for this.

https://knative.dev/docs/serving/installing-cert-manager/

export CERT_MANAGER_VERSION=0.6.1
curl -sSL https://github.com/jetstack/cert-manager/archive/v${CERT_MANAGER_VERSION}.tar.gz | tar xz
kubectl apply -f cert-manager-?.?.?/deploy/manifests/00-crds.yaml
kubectl apply -f cert-manager-?.?.?/deploy/manifests/cert-manager.yaml

Now that cert-manager is installed, we need a way to answer the ACME DNS01 challenge. Since we're on DigitalOcean, we'll use cert-manager's DigitalOcean DNS provider.

Create your DigitalOcean personal access token and export it as an environment variable, so that the DNS TXT records can be updated for the ACME challenge.

export DO_PA_TOKEN=9a978d78fe57a9f6760ea
kubectl apply -f - <<EOF
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-issuer
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # This will register an issuer with LetsEncrypt.  Replace
    # with your admin email address.
    email: [email protected]
    privateKeySecretRef:
      # Set privateKeySecretRef to any unused secret name.
      name: letsencrypt-issuer
    dns01:
      providers:
      - name: digitalocean
        digitalocean:
          tokenSecretRef:
            name: digitalocean-dns
            key: ${DO_PA_TOKEN}
EOF
kubectl get clusterissuer --namespace cert-manager letsencrypt-issuer --output yaml
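
The tokenSecretRef above points at a Secret named digitalocean-dns in the cert-manager namespace, so that Secret has to exist and hold your personal access token. A minimal sketch of creating it; the access-token key name here is an assumption and has to line up with whatever key the issuer's tokenSecretRef.key resolves to:

# key name "access-token" is an assumption; match it to tokenSecretRef.key
kubectl create secret generic digitalocean-dns --namespace cert-manager \
  --from-literal=access-token="${DO_PA_TOKEN}"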

https://knative.dev/docs/serving/using-cert-manager-on-gcp/#adding-your-service-account-to-cert-manager

Installing Knative

https://knative.dev/docs/install/knative-with-iks/#installing-knative

There's an open issue about a race condition in these apply commands. I can't find it right now, but just wait a bit and re-run them if they complain.

curl -LO https://github.com/knative/serving/releases/download/v0.7.0/serving.yaml
curl -LO https://github.com/knative/build/releases/download/v0.7.0/build.yaml
curl -LO https://github.com/knative/eventing/releases/download/v0.7.0/release.yaml
curl -LO https://github.com/knative/serving/releases/download/v0.7.0/monitoring.yaml

kubectl apply \
  --selector knative.dev/crd-install=true \
  --filename serving.yaml \
  --filename build.yaml \
  --filename release.yaml \
  --filename monitoring.yaml

# TODO Auto TLS
kubectl apply \
  --filename serving.yaml \
  --selector networking.knative.dev/certificate-provider!=cert-manager \
  --filename build.yaml \
  --filename release.yaml \
  --filename monitoring.yaml

kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
kubectl get pods --namespace knative-eventing
kubectl get pods --namespace knative-monitoring
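
Rather than polling get pods by hand, you can block until the serving pods are ready (kubectl wait ships with recent kubectl; the timeout here is arbitrary):

kubectl wait --for=condition=Ready pods --all --namespace knative-serving --timeout=300s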

kubectl get deploy -n knative-serving --label-columns=serving.knative.dev/release
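
Download Script

The rest of this gist is the same walkthrough as a pair of scripts. The first one only downloads things: the k3d, kubectl, helm/tiller, and terraform binaries, the tiller_certs.tf file, Istio, the Knative manifests, and cert-manager.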
#!/bin/sh
set -xe
export K3D_VERSION=${K3D_VERSION:-"1.3.0-dev.0"}
export KUBECTL_VERSION=${KUBECTL_VERSION:-"$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)"}
export HELM_VERSION=${HELM_VERSION:-"2.14.2"}
export TERRAFORM_VERSION=${TERRAFORM_VERSION:-"0.12.5"}
export ISTIO_VERSION=${ISTIO_VERSION:-"1.1.7"}
export KNATIVE_VERSION=${KNATIVE_VERSION:-"0.7.0"}
export CERT_MANAGER_VERSION=${CERT_MANAGER_VERSION:-"0.6.1"}
curl -L -o k3d https://github.com/rancher/k3d/releases/download/v${K3D_VERSION}/k3d-linux-amd64
curl -LO https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl
chmod 700 k3d kubectl
mv k3d kubectl ~/.local/bin/
curl -sSL https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz | tar xvz
mv linux-amd64/{helm,tiller} ~/.local/bin/
rm -rf linux-amd64/
curl -LO https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip
unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip
rm terraform_${TERRAFORM_VERSION}_linux_amd64.zip
mv terraform ~/.local/bin/
# TODO version tag
curl -LO https://github.com/jbussdieker/tiller-ssl-terraform/raw/master/tiller_certs.tf
# Download and unpack Istio
# TODO version tag
curl -L https://git.io/getLatestIstio | sh -
cd istio-${ISTIO_VERSION}
# Download Knative
curl -LO https://github.com/knative/build/releases/download/v${KNATIVE_VERSION}/build.yaml
curl -LO https://github.com/knative/serving/releases/download/v${KNATIVE_VERSION}/serving.yaml
curl -LO https://github.com/knative/serving/releases/download/v${KNATIVE_VERSION}/monitoring.yaml
curl -LO https://github.com/knative/eventing/releases/download/v${KNATIVE_VERSION}/release.yaml
curl -sSL https://github.com/jetstack/cert-manager/archive/v${CERT_MANAGER_VERSION}.tar.gz | tar xz
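
Setup Script

The second script brings the cluster up and applies everything above end to end. It expects DOMAIN, EMAIL, and DO_PA_TOKEN to be set, either exported in your shell or defined in a file whose path you export as K3D_ENV, for example (values are placeholders):

export DOMAIN=example.com
export [email protected]
export DO_PA_TOKEN=9a978d78fe57a9f6760ea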
#!/bin/sh
set -xe
if [ -f "${K3D_ENV}" ]; then
  . "${K3D_ENV}"
fi
if [ "x${DOMAIN}" = "x" ]; then
  echo "[-] ERROR: DOMAIN (aka example.com) not set" >&2
  exit 1
fi
if [ "x${EMAIL}" = "x" ]; then
  echo "[-] ERROR: EMAIL (Your email for Let's Encrypt ACME) not set" >&2
  exit 1
fi
if [ "x${DO_PA_TOKEN}" = "x" ]; then
  echo "[-] ERROR: DO_PA_TOKEN (DigitalOcean Personal Access Token) not set" >&2
  exit 1
fi
k3d create --auto-restart --workers 3 --publish 80:80 --publish 443:443 --image docker.io/rancher/k3s:v0.7.0-rc6
kube_up() {
  k3d get-kubeconfig --name='k3s-default' 2>&1
}
set +e
KUBE_UP="$(kube_up | grep -E 'does not exist|copy kubeconfig')"
while [ "x${KUBE_UP}" != "x" ]; do
  sleep 0.25s
  KUBE_UP="$(kube_up | grep -E 'does not exist|copy kubeconfig')"
done
set -e
export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
kubectl cluster-info
kubectl create namespace helm-world
kubectl create namespace tiller-world
kubectl create serviceaccount tiller --namespace tiller-world
kubectl create -f - << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm
  namespace: helm-world
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-user
  namespace: tiller-world
rules:
- apiGroups:
  - ""
  resources:
  - pods/portforward
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-user-binding
  namespace: tiller-world
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-user
subjects:
- kind: ServiceAccount
  name: helm
  namespace: helm-world
EOF
terraform init
rm -f *.pem
terraform apply -auto-approve
# Installing Helm - Helm init
helm init \
--override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}' \
--tiller-tls \
--tiller-tls-verify \
--tls-ca-cert ca.cert.pem \
--tiller-tls-cert ./tiller.cert.pem \
--tiller-tls-key ./tiller.key.pem \
--tiller-namespace=tiller-world \
--service-account=tiller-user
# Installing Istio
for i in istio-?.?.?/install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
  labels:
    istio-injection: disabled
EOF
helm template --namespace=istio-system \
--set sidecarInjectorWebhook.enabled=true \
--set sidecarInjectorWebhook.enableNamespacesByDefault=true \
--set global.proxy.autoInject=disabled \
--set global.disablePolicyChecks=true \
--set prometheus.enabled=false \
`# Disable mixer prometheus adapter to remove istio default metrics.` \
--set mixer.adapters.prometheus.enabled=false \
`# Disable mixer policy check, since in our template we set no policy.` \
--set global.disablePolicyChecks=true \
`# Set gateway pods to 1 to sidestep eventual consistency / readiness problems.` \
--set gateways.istio-ingressgateway.autoscaleMin=1 \
--set gateways.istio-ingressgateway.autoscaleMax=1 \
--set gateways.istio-ingressgateway.resources.requests.cpu=500m \
--set gateways.istio-ingressgateway.resources.requests.memory=256Mi \
`# Enable SDS in the gateway to allow dynamically configuring TLS of gateway.` \
--set gateways.istio-ingressgateway.sds.enabled=true \
`# More pilot replicas for better scale` \
--set pilot.autoscaleMin=1 \
`# Set pilot trace sampling to 100%` \
--set pilot.traceSampling=100 \
`# Tune down required resources for pilot.` \
--set pilot.resources.requests.cpu=30m \
`# Tune down required resources for telemetry.` \
--set mixer.telemetry.resources.requests.cpu=30m \
istio-?.?.?/install/kubernetes/helm/istio \
> ./istio.yaml
kubectl apply -f istio.yaml
helm template --namespace=istio-system \
--set gateways.custom-gateway.autoscaleMin=1 \
--set gateways.custom-gateway.autoscaleMax=1 \
--set gateways.custom-gateway.cpu.targetAverageUtilization=60 \
--set gateways.custom-gateway.labels.app='cluster-local-gateway' \
--set gateways.custom-gateway.labels.istio='cluster-local-gateway' \
--set gateways.custom-gateway.type='NodePort' \
--set gateways.istio-ingressgateway.enabled=false \
--set gateways.istio-egressgateway.enabled=false \
--set gateways.istio-ilbgateway.enabled=false \
istio-?.?.?/install/kubernetes/helm/istio \
-f istio-?.?.?/install/kubernetes/helm/istio/example-values/values-istio-gateways.yaml \
| sed -e "s/custom-gateway/cluster-local-gateway/g" -e "s/customgateway/clusterlocalgateway/g" \
> ./istio-local-gateway.yaml
kubectl apply -f istio-local-gateway.yaml
ISTIO_UP="$(kubectl get pods --namespace istio-system 2>&1)"
while [ "x${ISTIO_UP}" = "xNo resources found." ]; do
  sleep 0.25s
  ISTIO_UP="$(kubectl get pods --namespace istio-system 2>&1)"
done
kubectl get pods --namespace istio-system
set +e
ISTIO_UP="$(kubectl get pods --namespace istio-system | grep -viE 'status|running|complete')"
while [ "x${ISTIO_UP}" != "x" ]; do
  sleep 0.25s
  ISTIO_UP="$(kubectl get pods --namespace istio-system | grep -viE 'status|running|complete')"
done
kubectl get pods --namespace istio-system
kubectl apply \
--selector knative.dev/crd-install=true \
--filename serving.yaml \
--filename build.yaml \
--filename release.yaml \
--filename monitoring.yaml
kubectl apply \
--filename serving.yaml \
--selector networking.knative.dev/certificate-provider=cert-manager \
--filename build.yaml \
--filename release.yaml \
--filename monitoring.yaml
sleep 2
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
kubectl get pods --namespace knative-eventing
kubectl get pods --namespace knative-monitoring
kubectl get deploy -n knative-serving --label-columns=serving.knative.dev/release
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  # Default value for domain, for routes that do not have app=prod labels.
  # Although it will match all routes, it is the least-specific rule so it
  # will only be used if no other domain matches.
  ${DOMAIN}: ""
EOF
sleep 2
kubectl apply -f cert-manager-?.?.?/deploy/manifests/00-crds.yaml
kubectl apply -f cert-manager-?.?.?/deploy/manifests/cert-manager.yaml
sleep 2
kubectl apply -f - <<EOF
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-issuer
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # This will register an issuer with LetsEncrypt.  Replace
    # with your admin email address.
    email: ${EMAIL}
    privateKeySecretRef:
      # Set privateKeySecretRef to any unused secret name.
      name: letsencrypt-issuer
    dns01:
      providers:
      - name: digitalocean
        digitalocean:
          tokenSecretRef:
            name: digitalocean-dns
            key: ${DO_PA_TOKEN}
EOF
sleep 2
kubectl get clusterissuer --namespace cert-manager letsencrypt-issuer --output yaml
kubectl apply -f - <<EOF
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: my-certificate
  # Istio certs secret lives in the istio-system namespace, and
  # a cert-manager Certificate is namespace-scoped.
  namespace: istio-system
spec:
  # Reference to the Istio default cert secret.
  secretName: istio-ingressgateway-certs
  acme:
    config:
    # Each certificate could rely on a different ACME challenge
    # solver. In this example we are using one provider for all
    # the domains.
    - dns01:
        provider: digitalocean
      domains:
      # Since certificate wildcards only allow one level, we will
      # need one for every namespace that Knative is used in.
      # We don't need to use wildcards here; fully-qualified domains
      # will work fine too.
      - "*.default.$DOMAIN"
      - "*.other-namespace.$DOMAIN"
  # The certificate common name, use one from your domains.
  commonName: "*.default.$DOMAIN"
  dnsNames:
  # Provide same list as `domains` section.
  - "*.default.$DOMAIN"
  - "*.other-namespace.$DOMAIN"
  # Reference to the ClusterIssuer we created in the previous step.
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-issuer
EOF
sleep 2
kubectl get certificate --namespace istio-system my-certificate --output yaml
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: knative-ingress-gateway
  namespace: knative-serving
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      # Sends 301 redirect for all http requests.
      # Omit to allow http and https.
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
EOF