Note: Kubernetes changes rapidly, and so does its dashboard. Despite that, the versions of kubernetes, kubectl, the dashboard, and helm have to be consistent with one another; otherwise DNS names or YAML properties fail to match between components.
Table of Contents
- Install kubectl
- Install minikube
- Install kubernetes
- Print kubernetes info
- Install Dashboard
- Install Weavescope
- Install helm
- Install Prometheus and Grafana
- Install kubefwd
```
ryoji@ubuntu:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
```
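`kubectl version` only reports the client and server versions; to check that minikube and helm line up as well (the concordance mentioned in the note above), the other tools' version subcommands can be printed the same way. A quick sketch, using flags that exist in kubectl 1.17 / minikube 1.7 / helm 3:

```bash
# Cross-check that the tool versions match each other
kubectl version --short   # Client Version / Server Version
minikube version          # minikube release
helm version --short      # helm 3 client version
```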
| tool       | version |
|------------|---------|
| kubectl    | v1.17.0 |
| minikube   | v1.7.1  |
| kubernetes | v1.17.2 |
| helm       | v3.0.2  |
The easiest way to collect them is just to let the VSCode Kubernetes extension do it.
```
ryoji@ubuntu:~/.vs-kubernetes$ ./tools/kubectl/kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
ryoji@ubuntu:~/.vs-kubernetes$ ./tools/minikube/linux-amd64/minikube version
minikube version: v1.7.2
commit: 50d543b5fcb0e1c0d7c27b1398a9a9790df09dfb
ryoji@ubuntu:~/.vs-kubernetes$ ./tools/helm/linux-amd64/helm version
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
```
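If you want to reuse the extension-managed binaries from a plain terminal too, one option is to put their directories on PATH. This is just a sketch; the directories simply mirror the paths shown above:

```bash
# Reuse the binaries downloaded by the VSCode Kubernetes extension (paths as shown above)
export PATH="$HOME/.vs-kubernetes/tools/kubectl:$HOME/.vs-kubernetes/tools/minikube/linux-amd64:$HOME/.vs-kubernetes/tools/helm/linux-amd64:$PATH"
kubectl version --client
```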
ref: https://kubernetes.io/docs/tasks/tools/install-kubectl/
```
ryoji@ubuntu:~$ which kubectl
/home/ryoji/.local/bin/kubectl
```
```
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl
```
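The download only fetches the binary; it still has to be made executable and put on PATH. A typical follow-up, matching the `~/.local/bin` location seen in `which kubectl` above, might be:

```bash
# Make the downloaded kubectl executable and move it onto PATH
mkdir -p ~/.local/bin
chmod +x ./kubectl
mv ./kubectl ~/.local/bin/kubectl
kubectl version --client
```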
ref: https://kubernetes.io/docs/tasks/tools/install-minikube/
ref: https://github.com/kubernetes/minikube/releases/tag/v1.7.1
```
ryoji@ubuntu:~$ minikube version
minikube version: v1.7.1
commit: 7de0325eedac0fbe3aabacfcc43a63eb5d029fda
```
```
curl -LO https://github.com/kubernetes/minikube/releases/download/v1.7.1/minikube-linux-amd64
```
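As with kubectl, the minikube binary has to be made executable and installed somewhere on PATH. A minimal sketch, following the official install page:

```bash
# Install the downloaded minikube release binary
chmod +x minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube version
```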
```
ryoji@ubuntu:~$ minikube start --vm-driver=virtualbox --kubernetes-version=1.17.2
minikube v1.7.1 on Ubuntu 18.04
Using the virtualbox driver based on user configuration
Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
Preparing Kubernetes v1.17.2 on Docker '19.03.5' ...
Downloading kubelet v1.17.2
Downloading kubectl v1.17.2
Downloading kubeadm v1.17.2
Pulling images ...
Launching Kubernetes ...
Enabling addons: default-storageclass, storage-provisioner
Waiting for cluster to come online ...
Done! kubectl is now configured to use "minikube"
```
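Once `minikube start` reports success, it is worth confirming that kubectl really talks to the new cluster. A quick sanity check:

```bash
# Confirm the cluster is up and kubectl points at it
minikube status
kubectl get nodes -o wide
kubectl get pods -n kube-system
```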
ref: https://code.visualstudio.com/docs/azure/kubernetes
If you run into a problem, ~/.kube/config is the place to look.
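A couple of read-only commands are usually enough to see whether the kubeconfig points at the minikube context; nothing here modifies the file:

```bash
# Inspect the kubeconfig that kubectl is actually using
kubectl config current-context       # should print "minikube"
kubectl config view --minify         # only the active cluster/user/context
kubectl config use-context minikube  # switch back if another context is active
```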
```
ryoji@ubuntu:~$ minikube dashboard
Enabling dashboard ...
Verifying dashboard health ...
Launching proxy ...
Verifying proxy health ...
Opening http://127.0.0.1:37351/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
```
You might want to have a look at this command:
```
ryoji@ubuntu:~$ minikube ssh
(minikube ASCII-art banner)
$ docker images
REPOSITORY                                TAG            IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy                     v1.17.2        cba2a99699bd   3 weeks ago    116MB
k8s.gcr.io/kube-controller-manager        v1.17.2        da5fd66c4068   3 weeks ago    161MB
k8s.gcr.io/kube-apiserver                 v1.17.2        41ef50a5f06a   3 weeks ago    171MB
k8s.gcr.io/kube-scheduler                 v1.17.2        f52d4c527ef2   3 weeks ago    94.4MB
kubernetesui/dashboard                    v2.0.0-beta8   eb51a3597525   2 months ago   90.8MB
k8s.gcr.io/coredns                        1.6.5          70f311871ae1   3 months ago   41.6MB
k8s.gcr.io/etcd                           3.4.3-0        303ce5db0e90   3 months ago   288MB
kubernetesui/metrics-scraper              v1.0.2         3b08661dc379   3 months ago   40.1MB
k8s.gcr.io/pause                          3.1            da86e6ba6ca1   2 years ago    742kB
gcr.io/k8s-minikube/storage-provisioner   v1.8.1         4689081edb10   2 years ago    80.8MB
```
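If you only want to inspect (or build) images inside minikube's Docker daemon, you don't have to ssh in every time; minikube can export the right environment variables for the local docker client. A small sketch:

```bash
# Point the local docker CLI at minikube's Docker daemon for this shell
eval "$(minikube docker-env)"
docker images                       # same list as inside `minikube ssh`
eval "$(minikube docker-env --unset)"  # undo when done
```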
Do NOT try to install the Dashboard into minikube manually using kubectl; you might face errors with manifests such as:
```
-f v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
-f v2.0.0-beta7/aio/deploy/recommended.yaml
```
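The bundled addon shown above is the supported path on minikube. If you only need the proxied URL without opening a browser, these standard minikube subcommands should be enough:

```bash
# Use the bundled dashboard addon instead of applying manifests by hand
minikube addons list | grep dashboard
minikube dashboard --url   # prints the proxied URL without launching a browser
```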
ref: https://www.weave.works/docs/scope/latest/installing/
```
ryoji@ubuntu:~$ kubectl apply -f "https://cloud.weave.works/k8s/scope.yaml?k8s-version=$(kubectl version | base64 | tr -d '\n')"
namespace/weave created
serviceaccount/weave-scope created
clusterrole.rbac.authorization.k8s.io/weave-scope created
clusterrolebinding.rbac.authorization.k8s.io/weave-scope created
deployment.apps/weave-scope-app created
service/weave-scope-app created
deployment.apps/weave-scope-cluster-agent created
daemonset.apps/weave-scope-agent created
```
You might want to access this service without going through the proxy:
- Change the type of the weave-scope-app Service from ClusterIP to NodePort (a patch sketch follows this list).
- Get the resulting address.
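One way to flip the service type without editing YAML by hand is `kubectl patch`; this is only a sketch, with the service and namespace names taken from the install output above:

```bash
# Switch the weave-scope-app Service from ClusterIP to NodePort
kubectl patch svc weave-scope-app -n weave -p '{"spec":{"type":"NodePort"}}'
kubectl get svc weave-scope-app -n weave   # confirm the allocated NodePort
```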
```
ryoji@ubuntu:~$ minikube service weave-scope-app -n weave --url
http://192.168.99.100:31418
```
ref: https://helm.sh/docs/intro/install/
ref: https://github.com/helm/helm/releases/tag/v3.0.2
```
curl -LO https://get.helm.sh/helm-v3.0.2-linux-amd64.tar.gz
```
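The release tarball still has to be unpacked and the binary copied onto PATH; roughly, following the official install page:

```bash
# Unpack the helm release and install the binary
tar -zxvf helm-v3.0.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm version --short
```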
ref: https://medium.com/@at_ishikawa/install-prometheus-and-grafana-by-helm-9784c73a3e97
```
helm repo add stable https://kubernetes-charts.storage.googleapis.com
```
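Before installing anything, it can help to refresh the repo index and confirm the charts are visible; these are standard helm 3 subcommands:

```bash
# Refresh the chart index and check that the charts exist in the stable repo
helm repo update
helm search repo stable/prometheus
helm search repo stable/grafana
```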
```
ryoji@ubuntu:~$ helm install prometheus stable/prometheus
NAME: prometheus
LAST DEPLOYED: Mon Feb 10 19:42:11 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.default.svc.cluster.local

Get the Prometheus server URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9090

The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-alertmanager.default.svc.cluster.local

Get the Alertmanager URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9093

#################################################################################
######   WARNING: Pod Security Policy has been moved to a global property. #####
######            use .Values.podSecurityPolicy.enabled with pod-based     #####
######            annotations                                              #####
######            (e.g. .Values.nodeExporter.podSecurityPolicy.annotations)#####
#################################################################################

The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
prometheus-pushgateway.default.svc.cluster.local

Get the PushGateway URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9091

For more information on running Prometheus, visit:
https://prometheus.io/
```
```
ryoji@ubuntu:~$ helm install grafana stable/grafana
NAME: grafana
LAST DEPLOYED: Mon Feb 10 19:43:41 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get your 'admin' user password by running:

   kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

2. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:

   grafana.default.svc.cluster.local

   Get the Grafana URL to visit by running these commands in the same shell:

     export POD_NAME=$(kubectl get pods --namespace default -l "app=grafana,release=grafana" -o jsonpath="{.items[0].metadata.name}")
     kubectl --namespace default port-forward $POD_NAME 3000

3. Login with the password from step 1 and the username: admin

#################################################################################
######   WARNING: Persistence is disabled!!! You will lose your data when  #####
######            the Grafana pod is terminated.                           #####
#################################################################################
```
As written in the NOTES above, the password for the admin user is obtained with:

```
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```
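To actually look at the dashboards, the usual next step is to port-forward Grafana locally, log in as admin, and add Prometheus as a data source; the in-cluster URL comes straight from the helm NOTES above. A sketch:

```bash
# Forward Grafana to localhost:3000 and log in as admin with the password above
export POD_NAME=$(kubectl get pods --namespace default -l "app=grafana,release=grafana" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 3000
# In the Grafana UI, add a Prometheus data source pointing at:
#   http://prometheus-server.default.svc.cluster.local
```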
ref: https://github.com/txn2/kubefwd/releases/tag/1.11.1
```
ryoji@ubuntu:/media/VirtualBox VMs$ kubefwd version
INFO[17:50:45] (kubefwd ASCII-art banner)
INFO[17:50:45] Version 1.11.1
INFO[17:50:45] https://github.com/txn2/kubefwd
INFO[17:50:45] Kubefwd version: 1.11.1 https://github.com/txn2/kubefwd
```
```
curl -LO https://github.com/txn2/kubefwd/releases/download/1.11.1/kubefwd_linux_amd64.tar.gz
```
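The tarball still needs to be unpacked and the binary copied onto PATH; the exact archive layout below is an assumption, so adjust the path if the extracted file sits elsewhere:

```bash
# Unpack the kubefwd release and install the binary (archive layout assumed)
tar -xzf kubefwd_linux_amd64.tar.gz
sudo mv kubefwd /usr/local/bin/kubefwd
kubefwd version
```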
```
ryoji@ubuntu:~$ sudo kubefwd svc -n default
[sudo] password for ryoji:
INFO[20:04:20] (kubefwd ASCII-art banner)
INFO[20:04:20] Version 1.11.1
INFO[20:04:20] https://github.com/txn2/kubefwd
INFO[20:04:20]
INFO[20:04:20] Press [Ctrl-C] to stop forwarding.
INFO[20:04:20] 'cat /etc/hosts' to see all host entries.
INFO[20:04:20] Loaded hosts file /etc/hosts
INFO[20:04:20] Hostfile management: Original hosts backup already exists at /home/ryoji/hosts.original
INFO[20:04:20] Forwarding: prometheus-kube-state-metrics:80 to pod prometheus-kube-state-metrics-dbb8f96c-4z8dw:8080
INFO[20:04:20] Forwarding: prometheus-kube-state-metrics:81 to pod prometheus-kube-state-metrics-dbb8f96c-4z8dw:8081
INFO[20:04:20] Forwarding: prometheus-kube-state-metrics-dbb8f96c-4z8dw.prometheus-kube-state-metrics:80 to pod prometheus-kube-state-metrics-dbb8f96c-4z8dw:8080
INFO[20:04:20] Forwarding: prometheus-kube-state-metrics-dbb8f96c-4z8dw.prometheus-kube-state-metrics:81 to pod prometheus-kube-state-metrics-dbb8f96c-4z8dw:8081
INFO[20:04:20] Forwarding: prometheus-alertmanager:80 to pod prometheus-alertmanager-cddcc88d6-z88tv:9093
INFO[20:04:21] Forwarding: prometheus-pushgateway:9091 to pod prometheus-pushgateway-75946db59c-zf6sf:9091
WARN[20:04:21] WARNING: No Pod selector for service kubernetes in default on cluster .
INFO[20:04:21] Forwarding: prometheus-node-exporter:9100 to pod prometheus-node-exporter-6fnwg:9100
INFO[20:04:21] Forwarding: prometheus-node-exporter-6fnwg.prometheus-node-exporter:9100 to pod prometheus-node-exporter-6fnwg:9100
INFO[20:04:22] Forwarding: prometheus-server:80 to pod prometheus-server-746dd86648-g4nq6:9090
INFO[20:04:23] Forwarding: grafana:80 to pod grafana-69f9d77964-6275v:3000
```
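While kubefwd is running, those services are reachable from the host under their in-cluster names, because kubefwd adds entries to /etc/hosts. A quick check from another terminal, using service names taken from the log above:

```bash
# Verify the forwarded services answer under their Kubernetes names
grep -E 'prometheus|grafana' /etc/hosts
curl -s -o /dev/null -w '%{http_code}\n' http://prometheus-server:80/
curl -s -o /dev/null -w '%{http_code}\n' http://grafana:80/
```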