| name | litellm-compromise-check |
|---|---|
| description | Check local or remote systems for indicators of the LiteLLM/TeamPCP supply chain compromise (March 24, 2026) |
| version | 2.0.0 |
| author | Alexey Pavlenko (@alexthec0d3r) |
Scan the current system (or a remote system via SSH) for indicators of compromise from the TeamPCP supply chain attack that hit LiteLLM versions 1.82.7 and 1.82.8 on March 24, 2026.
On March 24, 2026, TeamPCP published malicious versions of LiteLLM (95M downloads/month, present in 36% of cloud environments per Wiz) to PyPI. The attack originated from a prior compromise of Trivy (security scanner) → stolen CI/CD secrets → poisoned PyPI publish.
The .pth file in v1.82.8 executes on ANY Python interpreter startup — no import needed. Even running pip, uvx, python -c, or an IDE language server triggers the payload.
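The mechanism is easy to reproduce safely. This sketch (throwaway venv, hypothetical file names) shows how `site.py` executes any `.pth` line that begins with `import` on every interpreter start:

```shell
# Safe demo of the .pth auto-execution mechanism in a scratch venv
python3 -m venv --without-pip /tmp/pth-demo
sp=$(/tmp/pth-demo/bin/python -c "import sysconfig; print(sysconfig.get_paths()['purelib'])")
# site.py exec()s any .pth line that starts with "import" at interpreter startup
echo 'import sys; sys.stderr.write("pth executed on startup\n")' > "$sp/demo_init.pth"
/tmp/pth-demo/bin/python -c "pass" 2>&1   # the message prints with zero imports in user code
```

This is why merely running `pip`, `uvx`, or an IDE language server inside an affected environment is enough to trigger the payload.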
The malware harvests SSH keys, cloud credentials (AWS/GCP/Azure), Kubernetes secrets, database passwords, crypto wallets, git credentials, shell history, npm tokens, Docker configs, Slack/Discord tokens, and TLS private keys. It installs a persistent systemd backdoor and deploys privileged pods across Kubernetes clusters.
Compromise window: ~10:39–16:00 UTC, March 24, 2026.
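If install timestamps are intact, dist-info mtimes can narrow triage to that window. A hedged sketch follows; mtimes reflect the *local install* time, not what PyPI served, so treat hits as leads rather than proof:

```shell
# Flag litellm 1.82.x dist-info directories whose mtime falls inside the window
TZ=UTC find / -maxdepth 7 -type d -name "litellm-1.82.*.dist-info" \
    -newermt "2026-03-24 10:39" ! -newermt "2026-03-24 16:00" 2>/dev/null \
  | while read -r d; do echo "⚠️ installed during window: $d"; done
```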
LiteLLM is a transitive dependency of many popular packages. If you installed any of these, you may have pulled in litellm without knowing:
| Package | Downloads/mo | How litellm gets pulled in |
|---|---|---|
| MLflow | 32M | Core dependency (litellm<2,>=1.0.0) |
| Instructor | 9M | Optional [litellm] extra |
| CrewAI | 6.1M | Hard dependency (litellm>=1.74.9) |
| Browser-Use | 4.4M | Optional/runtime import |
| Opik | 3.7M | Core dependency in optimizer SDK |
| Mem0 | 2.7M | Hard dependency (litellm>=1.74.0) |
| DSPy | 1.75M | Hard dependency (litellm>=1.64.0) |
| Agno (Phidata) | 1.6M | Optional [litellm] extra |
| Weave (W&B) | 1M | Optional [litellm] extra |
| Arize Phoenix | 1M | Optional dependency |
| Aider | 772K | Hard dependency (pinned litellm==1.82.3) |
| langchain-litellm | 718K | Hard dependency (litellm>=1.77.2) |
| Embedchain | 653K | Hard dependency |
| AG2/AutoGen | 592K | Via CrewAI transitive dep |
| Guardrails AI | 249K | Hard dependency |
| Open Interpreter | 243K | Hard dependency |
| Jupyter AI | — | Core model abstraction in v3 |
| Camel-AI | 86K | Multiple extras |
| LlamaIndex | — | Via llama-index-llms-litellm |
| MCP plugins | — | Any Python MCP server with litellm as transitive dep |
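If litellm does turn out to be installed, pip's own metadata usually reveals which of these packages pulled it in: the `Required-by` field of `pip show` lists reverse dependencies within the same environment.

```shell
# Empty "Required-by" means litellm was installed directly, not transitively
pip show litellm 2>/dev/null | grep -E "^(Version|Required-by):" \
  || echo "litellm not present in this environment"
```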
The discovery itself came from an MCP plugin inside Cursor that had an unpinned litellm dependency — uvx auto-pulled 1.82.8 and a bug in the malware (fork bomb) crashed the machine.
Run each check below using the Bash tool. Report results clearly. Do NOT skip any check. For remote systems, prefix commands with ssh user@host.
After all checks, provide a summary verdict: CLEAN, POTENTIALLY COMPROMISED, or COMPROMISED.
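As a first-pass sketch (the full verdict still depends on the persistence and C2 checks below), the pip-installed version alone maps onto the three outcomes like this:

```shell
# Quick triage by version; the cache, container, and .pth checks still apply
ver=$(pip show litellm 2>/dev/null | awk '/^Version:/{print $2}')
case "$ver" in
  1.82.7|1.82.8) echo "COMPROMISED: litellm $ver installed" ;;
  "")            echo "litellm not installed via pip; continue with the other checks" ;;
  *)             echo "litellm $ver is not a known-bad version; still verify no .pth IoC" ;;
esac
```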
# Check all common Python package managers
echo "=== pip ===" && pip show litellm 2>/dev/null | grep -E "^(Name|Version|Location):" || echo "Not found"
echo "=== pip3 ===" && pip3 show litellm 2>/dev/null | grep -E "^(Name|Version|Location):" || echo "Not found"
echo "=== pipx ===" && pipx list 2>/dev/null | grep -i litellm || echo "Not found"
# uv global installs and caches
echo "=== uv tool list ===" && uv tool list 2>/dev/null | grep -i litellm || echo "No uv tools with litellm"
echo "=== uv cache ===" && if command -v uv >/dev/null 2>&1; then find "$(uv cache dir)" -name "litellm*" 2>/dev/null | head -20; else echo "No uv cache found"; fi
echo "=== uvx ephemeral envs ===" && find /tmp ~/.cache/uv -path "*litellm*" 2>/dev/null | head -20
# Find litellm in ANY virtual environment, conda env, or system site-packages
find / -maxdepth 7 -path "*/site-packages/litellm" -type d 2>/dev/null
find / -maxdepth 7 \( -path "*/site-packages/litellm-1.82.7*" -o -path "*/site-packages/litellm-1.82.8*" \) 2>/dev/null
# Check conda environments (compromised versions were NOT published to conda-forge, but check anyway)
conda list litellm 2>/dev/null || echo "conda not installed or litellm not found"
# Check all conda envs
found=0
for env in $(conda env list 2>/dev/null | grep -v "^#" | awk '{print $NF}'); do
  ver=$(conda list -p "$env" litellm 2>/dev/null | grep litellm | awk '{print $2}')
  [ -n "$ver" ] && { echo "⚠️ conda env $env: litellm $ver"; found=1; }
done 2>/dev/null
[ "$found" -eq 0 ] && echo "No conda envs with litellm"
# Check if any of the known downstream packages are installed (they may have pulled litellm)
echo "=== Checking downstream packages that depend on litellm ==="
for pkg in mlflow crewai dspy-ai mem0ai guardrails-ai embedchain open-interpreter aider-chat swarms instructor agno phidata pyautogen ag2 camel-ai langchain-litellm llama-index-llms-litellm jupyter-ai opik browser-use weave arize-phoenix; do
ver=$(pip show "$pkg" 2>/dev/null | grep "^Version:" | awk '{print $2}')
[ -n "$ver" ] && echo "⚠️ INSTALLED: $pkg $ver (may depend on litellm)"
done
echo "--- downstream check done ---"
# Check if litellm appears in any project's dependency files
grep -rl "litellm" \
  --include="requirements*.txt" \
  --include="pyproject.toml" \
  --include="Pipfile" \
  --include="Pipfile.lock" \
  --include="poetry.lock" \
  --include="uv.lock" \
  --include="conda-lock.yml" \
  --include="setup.py" \
  --include="setup.cfg" \
  /home /root /opt /srv /Users 2>/dev/null | head -30
# THE KEY IOC: .pth file that auto-executes on any Python start
echo "=== Searching for litellm_init.pth (THE critical indicator) ==="
find / -name "litellm_init.pth" 2>/dev/null
# TeamPCP persistence backdoor (disguised as "System Telemetry Service")
echo "=== Persistence backdoor ==="
find / -path "*/.config/sysmon/sysmon.py" 2>/dev/null
ls -la ~/.config/sysmon/sysmon.py 2>/dev/null
# TeamPCP staging files and exfiltration archive
echo "=== Staging/exfiltration files ==="
ls -la /tmp/pglog /tmp/.pg_state 2>/dev/null
find /tmp /var/tmp /home /root -name "tpcp.tar.gz" 2>/dev/null
echo "--- file check done ---"
# Scan all running containers for litellm
echo "=== Docker containers ==="
docker ps --format '{{.ID}} {{.Names}} {{.Image}}' 2>/dev/null | while read id name image; do
# Check for litellm package
ver=$(docker exec "$id" pip show litellm 2>/dev/null | grep "^Version:" | awk '{print $2}')
if [ "$ver" = "1.82.7" ] || [ "$ver" = "1.82.8" ]; then
echo "🚨 COMPROMISED: $name ($image) — litellm $ver"
elif [ -n "$ver" ]; then
echo "ℹ️ $name ($image) — litellm $ver"
fi
# Check for the .pth file inside container
docker exec "$id" find / -name "litellm_init.pth" 2>/dev/null | while read f; do
echo "🚨 COMPROMISED: $name ($image) — found $f"
done
done
echo "--- docker scan done ---"
# Check local Docker images by name (this also covers images behind stopped containers)
echo "=== Docker images with litellm ==="
docker images --format '{{.Repository}}:{{.Tag}}' 2>/dev/null | grep -i "litellm" | while read img; do
echo "⚠️ Image found: $img"
done
echo "--- image scan done ---"
# Linux: systemd user services
echo "=== Linux systemd ==="
find /home /root -path "*/.config/systemd/user/*" \( -name "*telemetry*" -o -name "*sysmon*" \) 2>/dev/null
systemctl --user list-units --all 2>/dev/null | grep -i "telemetry\|sysmon"
# System-wide
find /etc/systemd/system \( -name "*sysmon*" -o -name "*telemetry*" \) 2>/dev/null
# macOS: launchd
echo "=== macOS launchd ==="
launchctl list 2>/dev/null | grep -i "sysmon\|telemetry\|tpcp"
find ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons \( -name "*sysmon*" -o -name "*telemetry*" -o -name "*tpcp*" \) 2>/dev/null
# cron jobs
echo "=== Cron ==="
crontab -l 2>/dev/null | grep -i "sysmon\|checkmarx\|litellm\|tpcp"
find /var/spool/cron /etc/cron.d -type f -exec grep -l "sysmon\|checkmarx\|litellm\|tpcp" {} \; 2>/dev/null
# C2 domains
echo "=== C2 domain checks ==="
# DNS resolution
for domain in models.litellm.cloud checkmarx.zone; do
echo "Checking $domain..."
grep "$domain" /etc/hosts 2>/dev/null
# Check if domain resolves from this machine's DNS cache
nslookup "$domain" 2>/dev/null | grep -A1 "Name:" || dig +short "$domain" 2>/dev/null
done
# Active connections: ss/netstat print numeric addresses, so match against the
# resolved C2 IPs rather than the domain names
echo "=== Active connections ==="
c2_ips=$(dig +short models.litellm.cloud checkmarx.zone 2>/dev/null | grep -E '^[0-9.]+$')
if [ -n "$c2_ips" ]; then
  { ss -tunap 2>/dev/null || netstat -tunap 2>/dev/null; } | grep -F "$c2_ips" \
    || echo "No active C2 connections"
else
  echo "C2 domains do not resolve (likely sinkholed or taken down); no IPs to match"
fi
# Shell history
echo "=== Shell history ==="
grep -l "models.litellm.cloud\|checkmarx.zone\|tpcp" ~/.bash_history ~/.zsh_history ~/.local/share/fish/fish_history 2>/dev/null || echo "No C2 references in shell history"
# Kubernetes — skip if kubectl not available
if command -v kubectl &>/dev/null && kubectl cluster-info &>/dev/null; then
echo "=== Kubernetes checks ==="
# Unauthorized pods in kube-system
echo "--- kube-system pods (review for unknowns) ---"
kubectl get pods -n kube-system -o wide 2>/dev/null
# Privileged pods across all namespaces
echo "--- Privileged pods ---"
kubectl get pods --all-namespaces -o json 2>/dev/null | python3 -c "
import json, sys
data = json.load(sys.stdin)
found = False
for item in data.get('items', []):
name = item['metadata']['name']
ns = item['metadata']['namespace']
for c in item['spec'].get('containers', []):
sc = c.get('securityContext', {})
if sc.get('privileged'):
print(f'🚨 PRIVILEGED: {ns}/{name} container={c[\"name\"]}')
found = True
if not found:
print('No privileged pods found')
" 2>/dev/null
# Recently created pods (review for backdoors)
echo "--- 15 most recently created pods (review anything created in or after the compromise window) ---"
kubectl get pods --all-namespaces --sort-by=.metadata.creationTimestamp -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CREATED:.metadata.creationTimestamp' 2>/dev/null | tail -15
else
echo "kubectl not available or no cluster access — skipping Kubernetes checks"
fi
# Check access times on files the malware specifically targets
echo "=== Credential file access times (last 48 hours) ==="
targets=(
~/.ssh/id_rsa ~/.ssh/id_ed25519 ~/.ssh/id_ecdsa
~/.aws/credentials ~/.aws/config
~/.config/gcloud/application_default_credentials.json
~/.azure/accessTokens.json ~/.azure/azureProfile.json
~/.kube/config
~/.docker/config.json
~/.npmrc ~/.vault-token ~/.git-credentials ~/.gitconfig
~/.bash_history ~/.zsh_history
/etc/kubernetes/admin.conf
)
for f in "${targets[@]}"; do
if [ -f "$f" ]; then
accessed=$(stat -c %X "$f" 2>/dev/null || stat -f %a "$f" 2>/dev/null)
now=$(date +%s)
diff=$(( now - accessed ))
if [ "$diff" -lt 172800 ]; then
echo "⚠️ Recently accessed ($(( diff / 3600 ))h ago): $f"
else
echo "✅ $f"
fi
fi
done

**CLEAN**: No IoCs found. LiteLLM not installed (directly or transitively), or the installed version is not 1.82.7/1.82.8. No persistence files, no C2 indicators.

**POTENTIALLY COMPROMISED**: LiteLLM 1.82.7 or 1.82.8 found installed or cached, OR a downstream package was installed/updated during the compromise window, but no persistence or C2 indicators detected. Credentials may still have been exfiltrated — the malware runs once and sends data before persistence is established.

**COMPROMISED**: Any of these found:
- `litellm_init.pth` in any site-packages directory
- `~/.config/sysmon/sysmon.py` present
- `/tmp/pglog` or `/tmp/.pg_state` present
- `tpcp.tar.gz` anywhere on the system
- Active connections to `models.litellm.cloud` or `checkmarx.zone`
- Unauthorized privileged pods in Kubernetes
- Systemd/launchd service matching the "sysmon" or "telemetry" pattern
- Isolate immediately — disconnect from network
- Rotate ALL credentials — assume full compromise:
- SSH keys (regenerate, update authorized_keys everywhere)
- Cloud: AWS access keys, GCP service accounts, Azure credentials
- Kubernetes: service account tokens, kubeconfig, cluster certificates
- Database passwords (all of them)
- API keys (every SaaS, every LLM provider)
- npm tokens, PyPI tokens, Docker registry creds
- Git credentials
- Slack/Discord tokens
- Crypto wallet keys (move funds to new wallets immediately)
- Kubernetes: remove unauthorized pods, rotate all service accounts, audit RBAC
- CI/CD: rotate every secret in GitHub Actions, GitLab CI, Jenkins, etc.
- Forensics: preserve `/tmp/pglog`, `/tmp/.pg_state`, `~/.config/sysmon/`, shell history
- Notify: security team, downstream consumers, and consider regulatory disclosure
- Audit Docker: rebuild all container images from clean base; don't trust cached layers
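For the SSH portion of the rotation, a minimal sketch (the key filename and comment are illustrative):

```shell
# Mint a replacement ed25519 key; distribute it before revoking the old one
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -e ~/.ssh/id_ed25519_rotated ] || \
  ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_rotated -N "" -C "post-litellm-ioc-rotation"
# ssh-copy-id -i ~/.ssh/id_ed25519_rotated.pub user@host   # repeat per host, then prune the old pubkey
```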
- LiteLLM ≤ 1.82.6: clean
- LiteLLM 1.82.7, 1.82.8: compromised (yanked from PyPI)
- LiteLLM ≥ 1.82.9: clean (published by verified maintainers post-rotation)
- Docker images from ghcr.io/berriai/litellm: safe IF pulled before March 24 or after the fix
- conda-forge: compromised versions were never published
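To keep an unpinned dependency tree from ever resolving to the yanked releases again (for example via a mirror that still serves them), a pip constraints file can exclude them explicitly; the filename is illustrative:

```text
# constraints.txt; apply with: pip install -c constraints.txt -r requirements.txt
litellm!=1.82.7,!=1.82.8
```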
- LiteLLM Official Advisory: docs.litellm.ai/blog/security-update-march-2026
- Wiz: wiz.io/blog/threes-a-crowd-teampcp-trojanizes-litellm
- Snyk: snyk.io/articles/poisoned-security-scanner-backdooring-litellm
- OX Security: ox.security/blog/litellm-malware-malicious-pypi-versions-steal-cloud-and-crypto-credentials
- The Hacker News: thehackernews.com/2026/03/teampcp-backdoors-litellm-versions.html
- BleepingComputer: bleepingcomputer.com/news/security/popular-litellm-pypi-package-compromised
- Simon Willison: simonwillison.net/2026/Mar/24/malicious-litellm
- Andrej Karpathy: x.com/karpathy (thread on transitive dependency risk)
- ARMO: armosec.io/blog/litellm-supply-chain-attack-backdoor-analysis
- PyPA Advisory: PYSEC-2026-2
- Trivy CVE: CVE-2026-33634 (CVSS 9.4)