| name | monitor |
|---|---|
| description | Spawn a fast-polling haiku subagent to monitor a condition until it's met, then report back |
Spawn a lightweight haiku subagent that polls a condition at a fast interval and reports back when the condition is met.
```
/monitor until the deploy finishes
/monitor gh run status every 10s
/monitor "curl localhost:3000/health" returns 200
/monitor file /tmp/done.txt exists
/monitor "kubectl get pods -l app=web" shows Running every 3s timeout 2m
```
Extract from the user's input:
| Field | Default | Examples |
|---|---|---|
| Check command | (infer from context) | `curl -s url`, `gh run view`, `test -f path` |
| Success condition | (infer from context) | grep pattern, exit code, output match |
| Interval | 5 seconds | "every 3s", "every 10s" |
| Timeout | 300 seconds (5 min) | "timeout 2m", "timeout 60s" |
- `"quoted string"` → literal bash command to run
- `until X` / `shows X` / `returns X` / `contains X` → success condition
- `every Ns` → polling interval
- `timeout Nm` / `timeout Ns` → max duration
- Bare descriptions → infer the right command:
  - Deploy/CI → `gh run view <id> --json status -q .status`, looking for `completed`
  - Health check → `curl -s -o /dev/null -w '%{http_code}' <url>`, looking for `200`
  - File exists → `test -f <path> && echo EXISTS`, looking for `EXISTS`
  - Port open → `nc -z localhost <port> 2>&1 && echo OPEN`, looking for `OPEN`
  - Process running/stopped → `pgrep -f <name>` (exit code)
  - K8s pod status → `kubectl get pod <name> -o jsonpath='{.status.phase}'`, looking for `Running`
  - Command output → run the command and grep its output for a pattern
If the check command or success condition is ambiguous, ask the user to clarify before spawning.
Use the Agent tool with these settings:
```
model: "haiku"
mode: "auto"
run_in_background: true
name: "monitor-<short-label>"   # e.g. "monitor-deploy", "monitor-health"
description: "Monitor <what>"
```
Build the subagent prompt with:
- The bash polling loop (see template below)
- Instructions to run it via a single Bash tool call
- Instructions to report the outcome after the loop exits
Bash polling loop template:
```bash
TIMEOUT={{timeout}}
INTERVAL={{interval}}
START=$(date +%s)
ITER=0
while true; do
  ITER=$((ITER + 1))
  RESULT=$({{check_command}} 2>&1)
  EXIT_CODE=$?
  if {{success_test}}; then
    echo "CONDITION MET after $(($(date +%s) - START))s ($ITER checks)"
    echo "---"
    echo "$RESULT"
    exit 0
  fi
  ELAPSED=$(($(date +%s) - START))
  if [ $ELAPSED -ge $TIMEOUT ]; then
    echo "TIMED OUT after ${ELAPSED}s ($ITER checks)"
    echo "---"
    echo "Last output:"
    echo "$RESULT"
    echo "Last exit code: $EXIT_CODE"
    exit 1
  fi
  sleep $INTERVAL
done
```

Where `{{success_test}}` is one of:
- `echo "$RESULT" | grep -qE '{{pattern}}'` to match output against a pattern
- `[ $EXIT_CODE -eq 0 ]` for success on zero exit code
- `[ $EXIT_CODE -ne 0 ]` for success on non-zero exit code (e.g. a process stopped)
Set the Bash tool timeout to min((timeout + 30) * 1000, 600000) ms so the tool doesn't kill the loop prematurely. If the user's timeout exceeds ~9 minutes, instruct the subagent to chain multiple loop invocations with a remaining-time countdown.
You are a fast condition monitor. Run this bash command and report whether the condition was met or timed out.
Run this with the Bash tool (set timeout to {{bash_timeout_ms}}ms):
<the constructed bash loop>
After the command completes, report:
- Whether the condition was MET or TIMED OUT
- How long it took and how many checks were performed
- The relevant output at the time the condition was met (or the last output if timed out)
Keep your report to 2-3 sentences.
After spawning, briefly tell the user what you're monitoring:
Monitoring `<what>` every `<N>`s (timeout `<M>`s). I'll let you know when it's done.
Then continue with whatever else the user needs. Do NOT block waiting for the monitor.
When the background agent completes, inform the user immediately with the result. If the condition was met, include the key output. If it timed out, say so and ask if they want to retry.
If the user's original instructions imply follow-up work after the condition is met (e.g. "monitor until the deploy finishes then run the migration"), execute that follow-up immediately.
If the user wants to monitor the same or a new condition, spawn a fresh subagent — do not reuse the old one.