The errors you're encountering are primarily network issues that prevent your Kubernetes pods from pulling Docker images. The `ImagePullBackOff` error indicates that Kubernetes is repeatedly trying and failing to download the required container images (`wazuh/wazuh-dashboard:4.7.5` and `busybox`). The root cause appears to be a network connectivity problem, as indicated by the `dial tcp i/o timeout` and `request canceled while waiting for connection` messages.
Here’s a step-by-step guide to troubleshoot and potentially fix these issues:
Ensure that your Minikube node has proper internet connectivity. You can verify this from inside one of your running pods (if any):

```bash
kubectl exec -it <pod_name> -- curl -I https://registry-1.docker.io/v2/
```

This command attempts to reach Docker Hub from inside the pod. If it fails, you likely have a network issue.
Sometimes, restarting Minikube can resolve transient network issues:

```bash
minikube stop
minikube start
```
Make sure that DNS resolution inside your Minikube cluster is functioning correctly. You can test it from within a pod:

```bash
kubectl exec -it <pod_name> -- nslookup google.com
```

If DNS resolution fails, you might need to check Minikube's DNS settings or consider restarting the `kube-dns` service.
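On recent Kubernetes versions the cluster DNS is served by CoreDNS rather than kube-dns; assuming that is the case, a restart looks like this (the deployment name `coredns` is the usual default, but verify it in your cluster):

```shell
# Restart the cluster DNS deployment and wait for it to come back up
kubectl -n kube-system rollout restart deployment coredns
kubectl -n kube-system rollout status deployment coredns
```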
If the issue is intermittent, you can try increasing the timeout settings to give image pulls more time to complete:

```bash
minikube config set wait-timeout 10m
```
You can manually pull the required images on your Minikube node to check for any image-specific issues:

```bash
minikube ssh
sudo docker pull wazuh/wazuh-dashboard:4.7.5
sudo docker pull busybox
```
Modify your deployment YAML to set `imagePullPolicy: IfNotPresent` for the containers, so Kubernetes doesn't attempt to pull an image that is already available on the node:

```yaml
imagePullPolicy: IfNotPresent
```
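In a Deployment manifest the policy is set per container. A minimal sketch, with the container name assumed and the image taken from the errors above (the surrounding field names are standard Kubernetes, but your manifest's structure may differ):

```yaml
spec:
  template:
    spec:
      containers:
        - name: wazuh-dashboard          # container name assumed
          image: wazuh/wazuh-dashboard:4.7.5
          imagePullPolicy: IfNotPresent  # skip the pull if the image is already on the node
```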
Ensure you're not hitting Docker Hub rate limits. If you're pulling from Docker Hub anonymously, consider configuring Docker credentials in Kubernetes:

```bash
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
```

Then update your deployment to use this secret.
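The secret is referenced from the pod spec via `imagePullSecrets`. A minimal sketch using the `regcred` secret created above (container name assumed):

```yaml
spec:
  template:
    spec:
      imagePullSecrets:
        - name: regcred              # the docker-registry secret created above
      containers:
        - name: wazuh-dashboard      # container name assumed
          image: wazuh/wazuh-dashboard:4.7.5
```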
The `wazuh-indexer` pod is also facing an issue with unbound PersistentVolumeClaims. Ensure that a PersistentVolume is correctly configured and bound to the claim.
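To see why the claim is unbound, inspect the PVCs and PVs (the `wazuh` namespace is an assumption; adjust it to wherever the components were deployed):

```shell
# Show claim status (Pending means no matching PersistentVolume)
kubectl get pvc -n wazuh
kubectl get pv
# The Events section at the bottom usually names the exact problem
kubectl describe pvc <pvc_name> -n wazuh
```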
Ensure your Minikube instance has sufficient resources (CPU and memory). Consider increasing them if needed (the new settings take effect on the next `minikube start`):

```bash
minikube config set memory 4096
minikube config set cpus 2
```
After applying these steps, redeploy your Wazuh components and check if the issues are resolved.
Deployment Guide for Wazuh on Local Kubernetes Environment
This guide outlines the steps to deploy Wazuh on a local Kubernetes environment, such as Microk8s, Minikube, or Kind. It focuses on a local development scenario. For more detailed deployment instructions on an EKS cluster, refer to the instructions.md file.
Prerequisites
Resource Requirements
To deploy the `local-env` variant, the Kubernetes cluster should have at least the following resources available:

Deployment Steps
1. Clone the Repository
```bash
git clone https://github.com/wazuh/wazuh-kubernetes.git
cd wazuh-kubernetes
```
2. Setup SSL Certificates
You can generate self-signed certificates for the ODFE cluster using the script located at `wazuh/certs/indexer_cluster/generate_certs.sh`, or you can provide your own.

Since the Wazuh Dashboard requires HTTPS, it needs its own certificates as well. There is a utility script at `wazuh/certs/dashboard_http/generate_certs.sh` to generate these.

The required certificates are imported via `secretGenerator` in the `kustomization.yml` file.

3. Tune the Storage Class with a Custom Provisioner
Depending on the type of local development cluster you're using, the Storage Class may have a different provisioner. You can verify the provisioner by running:
For example, you might see:
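A hedged sketch of the check, assuming `kubectl` access to the cluster (the exact Storage Class names vary by distribution):

```shell
kubectl get storageclass
# On Microk8s, the output typically includes a provisioner such as:
# NAME                          PROVISIONER
# microk8s-hostpath (default)   microk8s.io/hostpath
```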
If the provisioner is `microk8s.io/hostpath`, you should edit the file `envs/local-env/storage-class.yaml` to set up this provisioner.

4. Apply All Manifests Using Kustomize
We use the overlay feature of Kustomize to create two variants: `eks` and `local-env`. This guide focuses on the `local-env` variant. (For a production deployment on EKS, refer to the guide in instructions.md.)

Resource allocation for the cluster can be adjusted by editing the patches in `envs/local-env/`. The `local-env` variant reduces the number of replicas for Elasticsearch nodes and Wazuh workers to save resources. These patches can be removed or altered in `kustomization.yaml` to modify these settings.

To deploy the entire cluster with a single command, use:
5. Access the Dashboard
To access the Wazuh Dashboard interface, use port-forwarding:
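A hedged example, assuming the components run in a `wazuh` namespace with a Service named `dashboard` (check `kubectl get svc -n wazuh` for the actual names in your cluster):

```shell
kubectl -n wazuh port-forward service/dashboard 8443:443
```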
The Dashboard will be accessible at `https://localhost:8443`.