By the end of this lab you’ll be able to:
- Run kubectl-ai as an MCP server.
- Wire it into Cursor via `mcp.json` (a minimal sketch follows this list).
- Use Cursor chat + kubectl-ai tools to:
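A minimal Cursor `mcp.json` sketch for this wiring, assuming kubectl-ai is on your PATH and exposes its MCP server mode via an `--mcp-server` flag; check `kubectl-ai --help` for the exact flag in your version:

```json
{
  "mcpServers": {
    "kubectl-ai": {
      "command": "kubectl-ai",
      "args": ["--mcp-server"]
    }
  }
}
```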
```dockerfile
FROM openeuler/vllm-cpu:0.9.1-oe2403lts

# Patch the cpu_worker.py to handle zero NUMA nodes
RUN sed -i 's/cpu_count_per_numa = cpu_count \/\/ numa_size/cpu_count_per_numa = cpu_count \/\/ numa_size if numa_size > 0 else cpu_count/g' \
    /workspace/vllm/vllm/worker/cpu_worker.py

ENV VLLM_TARGET_DEVICE=cpu \
    VLLM_CPU_KVCACHE_SPACE=1 \
    OMP_NUM_THREADS=2 \
    OPENBLAS_NUM_THREADS=1
```
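Assuming the snippet above is saved as `Dockerfile`, the patched image can be built locally (the tag is illustrative):

```bash
docker build -t vllm-cpu-patched:0.9.1 .
```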
```yaml
deploymentMode: SingleBinary
singleBinary:
  replicas: 1
loki:
  commonConfig:
    replication_factor: 1
  # Required for new installs
```
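Assuming these values are saved as `loki-values.yaml` and the `grafana` Helm repo is already added, a single-binary install into the monitoring namespace might look like:

```bash
helm install loki grafana/loki -n monitoring -f loki-values.yaml
```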
Retrieve the Grafana admin password:

```bash
kubectl get secret -n monitoring prom-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```
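To reach the Grafana UI locally you can port-forward the service; the service name `prom-grafana` and port 80 assume a kube-prometheus-stack install with release name `prom`:

```bash
kubectl port-forward -n monitoring svc/prom-grafana 3000:80
# then open http://localhost:3000 and log in as admin with the password above
```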
The following is a crisp, battle-tested playbook for running databases on Kubernetes: what to do, what to avoid, and how to keep them safe, fast, and recoverable.
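The usual starting point is giving each database pod a stable identity and its own persistent volume via a StatefulSet. A minimal sketch, assuming a generic PostgreSQL image and illustrative names and sizes:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                      # illustrative name
spec:
  serviceName: db               # headless Service providing stable network identity
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16    # any database image works; values here are assumptions
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:         # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```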
```python
# airbnb_mcp.py
from textwrap import dedent

from agno.agent import Agent
from agno.models.google import Gemini
from agno.tools.mcp import MCPTools
from agno.tools.reasoning import ReasoningTools
from agno.os import AgentOS
```
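A rough continuation sketch showing how these imports typically combine; the model id, the Airbnb MCP server command, and the exact `MCPTools`/`AgentOS` signatures are assumptions that vary across agno versions, so verify against the agno docs before running:

```python
# Assumed agno API; names, ids, and commands below are illustrative.
airbnb_mcp = MCPTools(command="npx -y @openbnb/mcp-server-airbnb --ignore-robots-txt")

airbnb_agent = Agent(
    name="Airbnb Agent",
    model=Gemini(id="gemini-2.0-flash"),
    tools=[airbnb_mcp, ReasoningTools()],
    instructions=dedent("""\
        Help the user search for Airbnb listings.
        Think through the constraints before answering."""),
    markdown=True,
)

agent_os = AgentOS(agents=[airbnb_agent])
app = agent_os.get_app()

if __name__ == "__main__":
    agent_os.serve(app="airbnb_mcp:app", reload=True)
```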
Switch to the instavote namespace:

```bash
kubectl config set-context --current --namespace=instavote
```

Remove the existing release and any leftover resources:

```bash
helm uninstall -n dev instavote
kubectl delete deploy vote redis db result worker -n instavote
kubectl delete svc vote redis db result -n instavote
```
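To confirm the cleanup finished, list what remains in the namespace:

```bash
kubectl get deploy,svc -n instavote
```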
```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vote
  namespace: instavote
spec:
  ingressClassName: nginx
  rules:
    - host: vote.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vote   # backend service name and port are assumed; adjust to your vote Service
                port:
                  number: 80
```
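To apply the manifest and smoke-test routing (the filename and the ingress controller address are placeholders):

```bash
kubectl apply -f vote-ingress.yaml
curl -H "Host: vote.example.com" http://<ingress-controller-ip>/
```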