Set up Lightbend Console locally
# Print the readiness of the first container in every pod, across all namespaces.
function podsready {
  kubectl --request-timeout=1s get --all-namespaces pod -o json | \
    jq '.items[].status.containerStatuses[0].ready'
}

# Block until at least one pod reports ready, then until no pod reports not-ready.
function wait_for_pods {
  echo -n "waiting for pods to start"
  sleep 1
  until podsready | grep -q true
  do
    sleep 1
    echo -n .
  done
  echo
  echo -n "waiting for all pods to be ready"
  while podsready | grep -q false
  do
    sleep 1
    echo -n .
  done
  echo
}

# Start Minikube if it is not already running, then export its Docker environment.
function bigkube {
  minikube status | grep -i running && return
  # If this fails to find the correct driver then either:
  # - run `minikube config set vm-driver hyperkit` or,
  # - pass in `--vm-driver=...` here.
  minikube start --cpus=4 --memory=8192 --kubernetes-version=1.15.3
  if [ "$?" -ne 0 ]; then
    >&2 echo "failed to start minikube"
    return 1
  fi
  wait_for_pods
  minikube docker-env > ~/.minikube_env
  # Source this from ~/.bashrc to get the Docker env in other terminals.
  source ~/.minikube_env
}
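
One way to use these functions is to save them to a file and source it from ~/.bashrc (the file name ~/.minikube.sh is only a suggestion, not part of the original gist):

echo 'source ~/.minikube.sh' >> ~/.bashrc
source ~/.bashrc
bigkube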

Install Minikube and the Kubernetes CLI:

brew cask install minikube
brew install kubernetes-cli
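
As a quick sanity check that both tools installed (an extra step, not in the original gist):

minikube version
kubectl version --client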

I chose HyperKit as the hypervisor:

brew install hyperkit
minikube config set vm-driver hyperkit
  • Make sure to set it as the default driver, otherwise minikube start will fail with something like:
    Retriable failure: create: precreate: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
    
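To confirm the default driver took effect (an extra check, not in the original steps):

minikube config get vm-driver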

Run Minikube and set Docker environment variables:

minikube start --cpus 4 --memory 8192 --vm-driver=hyperkit
eval $(minikube docker-env)
  • Note the --vm-driver=hyperkit option.
  • Alternatively, source the functions above from ~/.bashrc and run bigkube.
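
To confirm the Docker client is now pointed at the Minikube daemon (an extra check, not in the original gist), the DOCKER_* variables should be set and docker ps should list Kubernetes system containers rather than your local ones:

env | grep DOCKER
docker ps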

Install Helm:

brew install kubernetes-helm

Configure Tiller:

TILLER_NAMESPACE=kube-system
kubectl create serviceaccount --namespace $TILLER_NAMESPACE tiller
kubectl create clusterrolebinding $TILLER_NAMESPACE:tiller --clusterrole=cluster-admin --serviceaccount=$TILLER_NAMESPACE:tiller
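
To double-check that the service account and binding exist (an extra check, not in the original steps):

kubectl get serviceaccount tiller --namespace $TILLER_NAMESPACE
kubectl get clusterrolebinding $TILLER_NAMESPACE:tiller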

Installing Tiller fails on Kubernetes 1.16 due to helm/helm#6374, so use this workaround:

helm init --wait --service-account tiller --tiller-namespace=$TILLER_NAMESPACE \
  --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | \
  sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | \
  kubectl apply -f -
  • This adds the selector and replaces extensions/v1beta1 with apps/v1.
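
Once applied, you can wait for the Tiller deployment to become available (this assumes the standard deployment name tiller-deploy, which helm init creates):

kubectl rollout status deployment/tiller-deploy --namespace $TILLER_NAMESPACE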

Verify Helm:

helm version
  • The client and server versions should match.
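
As an additional connectivity check (not part of the original gist), helm ls should return without error once Tiller is reachable, even though the list will be empty at this point:

helm ls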

Do the cred dance ($HOME/.lightbend/commercial.credentials):

realm = Bintray
host = dl.bintray.com
user = <userid>
password = <token>
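
For example, the file can be created like this (substitute your own Bintray user id and token; the chmod is an extra precaution, not in the original steps):

mkdir -p ~/.lightbend
cat > ~/.lightbend/commercial.credentials <<EOF
realm = Bintray
host = dl.bintray.com
user = <userid>
password = <token>
EOF
chmod 600 ~/.lightbend/commercial.credentials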

Download the install script:

curl -O https://raw.githubusercontent.com/lightbend/console-charts/master/enterprise-suite/scripts/lbc.py
chmod u+x lbc.py

Create the lightbend namespace:

kubectl create namespace lightbend
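
Confirm that it exists (a quick check, not in the original steps):

kubectl get namespace lightbend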

The install script will fail since the config also uses an apiVersion of apps/v1beta2, which Kubernetes 1.16 removed. We have to tweak the config and install manually.

Manually install the creds:

./lbc.py install --namespace=lightbend --version=1.2.2 --export-yaml=creds | \
  kubectl --namespace=lightbend apply -f -
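
You can list the secrets to confirm the credentials were created (the exact secret name comes from the chart, so it may vary):

kubectl get secrets --namespace=lightbend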

Export the config, modify it, then apply:

./lbc.py install --namespace=lightbend --version=1.2.2 --export-yaml=console --set minikube=true | \
  sed 's@apps/v1beta2@apps/v1@' | \
  kubectl --namespace=lightbend apply -f -
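
The Console pods take a while to pull and start. You can watch them come up with the following (or reuse wait_for_pods from above):

kubectl get pods --namespace=lightbend --watch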

Now wait a little bit and run:

./lbc.py verify --namespace=lightbend
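
To open the Console UI, list the services in the namespace and ask Minikube for the URL of the front-end service (the service name below is a placeholder; use the name that kubectl prints):

kubectl get services --namespace=lightbend
minikube service <console-service-name> --namespace=lightbend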