
@anubhavg-icpl
Created September 4, 2024 05:20

The errors you're encountering are primarily related to network issues that prevent your Kubernetes pods from pulling Docker images. The ImagePullBackOff error indicates that Kubernetes is repeatedly trying and failing to download the required container images (wazuh/wazuh-dashboard:4.7.5 and busybox). The root cause seems to be a network connectivity issue, as indicated by the dial tcp i/o timeout and request canceled while waiting for connection messages.

Here’s a step-by-step guide to troubleshoot and potentially fix these issues:

1. Check Network Connectivity

Ensure that your Minikube node has proper internet connectivity. You can verify this by running:

kubectl exec -it <pod_name> -- curl -I https://registry-1.docker.io/v2/

This command attempts to reach Docker Hub from inside a running pod. Since the failing pods never start, you can run the same check from the Minikube node instead:

minikube ssh -- curl -I https://registry-1.docker.io/v2/

If either check fails, you likely have a network issue.

2. Restart Minikube

Sometimes, restarting Minikube can resolve transient network issues:

minikube stop
minikube start

3. Ensure DNS Resolution

Make sure that the DNS resolution inside your Minikube cluster is functioning correctly. You can test DNS resolution within a pod:

kubectl exec -it <pod_name> -- nslookup google.com

If DNS resolution fails, check Minikube's DNS settings or restart CoreDNS (the cluster DNS service in current Kubernetes versions):

kubectl -n kube-system rollout restart deployment coredns

4. Increase Timeout Limits

If the issue is intermittent, you can try increasing the timeout settings to give the pod more time to pull the image:

minikube config set wait-timeout 10m

5. Pull the Image Manually

You can manually pull the required images on your Minikube node to check for any specific issues:

minikube ssh
sudo docker pull wazuh/wazuh-dashboard:4.7.5
sudo docker pull busybox

6. Set ImagePullPolicy to IfNotPresent

Modify your deployment YAML to set imagePullPolicy: IfNotPresent for the containers, so Kubernetes doesn't attempt to pull the image if it’s already available on the node:

imagePullPolicy: IfNotPresent
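For context, here is a sketch of how that field sits inside a container spec. The container name and image mirror the Wazuh dashboard deployment, but your actual manifest may differ:

```yaml
# Hypothetical excerpt of the dashboard deployment's pod template;
# only the imagePullPolicy line is the suggested change.
containers:
  - name: wazuh-dashboard
    image: wazuh/wazuh-dashboard:4.7.5
    imagePullPolicy: IfNotPresent   # skip the pull if the image is already cached on the node
```

Note this only helps once the image has been pulled at least once (for example via minikube ssh in step 5).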

7. Check Docker Hub Rate Limits

Ensure you're not hitting Docker Hub rate limits. If you’re using Docker Hub anonymously, consider configuring Docker credentials in Kubernetes:

kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<your-username> --docker-password=<your-password> --docker-email=<your-email>

Update your deployment to use this secret.
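To make the pods use the regcred secret, reference it in the pod spec. The fields below are standard Kubernetes; the surrounding deployment structure is assumed:

```yaml
# Hypothetical pod template excerpt; add imagePullSecrets alongside containers.
spec:
  imagePullSecrets:
    - name: regcred               # the secret created with kubectl create secret docker-registry
  containers:
    - name: wazuh-dashboard
      image: wazuh/wazuh-dashboard:4.7.5
```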

8. Verify Persistent Volume Claims

The wazuh-indexer pod is also facing an issue with unbound PersistentVolumeClaims. Ensure that the PersistentVolume is correctly configured and bound to the claim.
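First inspect the claim with kubectl -n wazuh get pvc to see its requested size and storage class. If no volume can satisfy it, a minimal hostPath PersistentVolume like the following can bind it for local testing. The name, capacity, and path here are assumptions; the storage class matches the wazuh-storage class used by this deployment, and the capacity must be at least the PVC's request:

```yaml
# Sketch of a node-local volume for Minikube testing only.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wazuh-indexer-pv            # hypothetical name
spec:
  capacity:
    storage: 10Gi                   # must be >= the PVC's requested size
  accessModes:
    - ReadWriteOnce
  storageClassName: wazuh-storage   # must match the PVC's storageClassName
  hostPath:
    path: /data/wazuh-indexer       # path inside the Minikube VM
```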

9. Recheck Resource Limits

Ensure your Minikube instance has sufficient resources (CPU and memory). Consider increasing resources if needed:

minikube config set memory 4096
minikube config set cpus 2

After applying these steps, redeploy your Wazuh components (for example, with kubectl apply -k envs/local-env/) and check whether the issues are resolved.

@anubhavg-icpl (Author):

To allocate as much CPU, memory, and disk as possible to your Minikube cluster, follow these steps:

Step 1: Stop Minikube

If Minikube is currently running, stop it:

minikube stop

Step 2: Set Resource Allocation

To allocate the maximum resources, you'll need to specify the number of CPUs, the amount of memory, and the VM disk size. First determine your system's totals, then configure Minikube accordingly.

  1. Check Available Resources:

    • CPU Cores:
      nproc
    • Memory (in MiB):
      free -m | grep Mem | awk '{print $2}'
    • Swap (in MiB; informational only, since Minikube does not allocate host swap):
      free -m | grep Swap | awk '{print $2}'
  2. Configure Minikube:
    Replace <total_cpus>, <total_memory_in_mib>, and <disk_size_in_mib> with values that suit your system. Note that disk-size sets the VM's virtual disk, not swap.

    minikube config set cpus <total_cpus>
    minikube config set memory <total_memory_in_mib>
    minikube config set disk-size <disk_size_in_mib>

    Leave some headroom for the host OS rather than handing everything to the VM.

Step 3: Start Minikube with Maximum Resources

Now start Minikube; it picks up the cpus, memory, and disk-size values set via minikube config, so no extra flags are required:

minikube start

Step 4: Verify the Allocation

After starting Minikube, you can verify the resource allocation:

kubectl describe nodes | grep -E "cpu:|memory:"

This setup allocates close to the maximum possible resources to your Minikube environment.

Note that allocating all resources may affect the performance of other applications on your system, so adjust as necessary.

@anubhavg-icpl (Author):

Wazuh Kubernetes Commands

Check Pods Status

kubectl -n wazuh get pods
NAME                               READY   STATUS                  RESTARTS   AGE
wazuh-dashboard-54f99f5985-c8dkp   0/1     ImagePullBackOff        0          49s
wazuh-indexer-0                    0/1     Init:ImagePullBackOff   0          49s
wazuh-manager-master-0             0/1     Pending                 0          49s
wazuh-manager-worker-0             0/1     Pending                 0          49s

Delete Resources

kubectl delete -k envs/local-env/
namespace "wazuh" deleted
storageclass.storage.k8s.io "wazuh-storage" deleted
configmap "dashboard-conf-tgmhtkc5dm" deleted
configmap "indexer-conf-67g4h64bf2" deleted
configmap "wazuh-conf-7hthk8g768" deleted
secret "dashboard-certs-t5h8kdcm47" deleted
secret "dashboard-cred" deleted
secret "indexer-certs-gm97k667hb" deleted
secret "indexer-cred" deleted
secret "wazuh-api-cred" deleted
secret "wazuh-authd-pass" deleted
secret "wazuh-cluster-key" deleted
service "dashboard" deleted
service "indexer" deleted
service "wazuh" deleted
service "wazuh-cluster" deleted
service "wazuh-indexer" deleted
service "wazuh-workers" deleted
deployment.apps "wazuh-dashboard" deleted
statefulset.apps "wazuh-indexer" deleted
statefulset.apps "wazuh-manager-master" deleted
statefulset.apps "wazuh-manager-worker" deleted

Stop Minikube

minikube stop
✋  Stopping node "minikube"  ...
🛑  1 node stopped.

Check System Resources

nproc
16
free -m | grep Mem | awk '{print $2}'
15324
free -m | grep Swap | awk '{print $2}'
32767

Configure Minikube

minikube config set cpus 14 &&
minikube config set memory 13231 &&
minikube config set disk-size 27000
❗  These changes will take effect upon a minikube delete and then a minikube start
❗  These changes will take effect upon a minikube delete and then a minikube start
❗  These changes will take effect upon a minikube delete and then a minikube start

Delete Minikube Cluster

minikube delete
🔥  Deleting "minikube" in kvm2 ...
💀  Removed all traces of the "minikube" cluster.
minikube config set driver kvm2
❗  These changes will take effect upon a minikube delete and then a minikube start
minikube delete
🙄  "minikube" profile does not exist, trying anyways.
💀  Removed all traces of the "minikube" cluster.

Start Minikube

minikube start
😄  minikube v1.33.1 on Archcraft
✨  Using the kvm2 driver based on user configuration
👍  Starting "minikube" primary control-plane node in "minikube" cluster
🔥  Creating kvm2 VM (CPUs=14, Memory=13231MB, Disk=27000MB) ...
❗  This VM is having trouble accessing https://registry.k8s.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳  Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
