@tburnam · Created March 30, 2026 04:52
---
name: monitor
description: Spawn a fast-polling haiku subagent to monitor a condition until it's met, then report back
---

/monitor - Lightweight Live Condition Monitor

Spawn a lightweight haiku subagent that polls a condition at a fast interval and reports back when the condition is met.

Usage

/monitor until the deploy finishes
/monitor gh run status every 10s
/monitor "curl localhost:3000/health" returns 200
/monitor file /tmp/done.txt exists
/monitor "kubectl get pods -l app=web" shows Running every 3s timeout 2m

Step 1 — Parse the request

Extract from the user's input:

| Field             | Default              | Examples                               |
|-------------------|----------------------|----------------------------------------|
| Check command     | (infer from context) | curl -s url, gh run view, test -f path |
| Success condition | (infer from context) | grep pattern, exit code, output match  |
| Interval          | 5 seconds            | "every 3s", "every 10s"                |
| Timeout           | 300 seconds (5 min)  | "timeout 2m", "timeout 60s"            |

Parsing heuristics

  • "quoted string" → literal bash command to run
  • until X / shows X / returns X / contains X → success condition
  • every Ns → polling interval
  • timeout Nm / timeout Ns → max duration
  • Bare descriptions → infer the right command:
    • Deploy/CI → gh run view <id> --json status -q .status looking for completed
    • Health check → curl -s -o /dev/null -w '%{http_code}' <url> looking for 200
    • File exists → test -f <path> && echo EXISTS looking for EXISTS
    • Port open → nc -z localhost <port> 2>&1 && echo OPEN looking for OPEN
    • Process running/stopped → pgrep -f <name> (exit code)
    • K8s pod status → kubectl get pod <name> -o jsonpath='{.status.phase}' looking for Running
    • Command output → run command, grep for pattern

If the check command or success condition is ambiguous, ask the user to clarify before spawning.
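The interval and timeout heuristics above can be sketched as plain shell. This is only an illustration of the parsing rules — in practice the model infers these values directly; the sample request string is hypothetical:

```shell
# Sketch of the "every Ns" / "timeout Nm|Ns" heuristics, with the defaults applied.
REQUEST='monitor "curl localhost:3000/health" returns 200 every 3s timeout 2m'

# "every Ns" -> polling interval in seconds (default 5)
INTERVAL=$(printf '%s' "$REQUEST" | grep -oE 'every [0-9]+s' | grep -oE '[0-9]+')
INTERVAL=${INTERVAL:-5}

# "timeout Nm" / "timeout Ns" -> max duration in seconds (default 300)
T=$(printf '%s' "$REQUEST" | grep -oE 'timeout [0-9]+[sm]' | grep -oE '[0-9]+[sm]$')
case "$T" in
  *m) TIMEOUT=$(( ${T%m} * 60 )) ;;
  *s) TIMEOUT=${T%s} ;;
  *)  TIMEOUT=300 ;;
esac

echo "interval=${INTERVAL}s timeout=${TIMEOUT}s"
```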

Step 2 — Spawn the monitor subagent

Use the Agent tool with these settings:

model: "haiku"
mode: "auto"
run_in_background: true
name: "monitor-<short-label>"    # e.g. "monitor-deploy", "monitor-health"
description: "Monitor <what>"

Subagent prompt construction

Build the subagent prompt with:

  1. The bash polling loop (see template below)
  2. Instructions to run it via a single Bash tool call
  3. Instructions to report the outcome after the loop exits

Bash polling loop template:

TIMEOUT={{timeout}}
INTERVAL={{interval}}
START=$(date +%s)
ITER=0
while true; do
  ITER=$((ITER + 1))
  RESULT=$({{check_command}} 2>&1)
  EXIT_CODE=$?
  if {{success_test}}; then
    echo "CONDITION MET after $(($(date +%s) - START))s ($ITER checks)"
    echo "---"
    echo "$RESULT"
    exit 0
  fi
  ELAPSED=$(($(date +%s) - START))
  if [ $ELAPSED -ge $TIMEOUT ]; then
    echo "TIMED OUT after ${ELAPSED}s ($ITER checks)"
    echo "---"
    echo "Last output:"
    echo "$RESULT"
    echo "Last exit code: $EXIT_CODE"
    exit 1
  fi
  sleep $INTERVAL
done
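A quick self-contained instantiation of the template, filled in for the file-exists case from the usage examples above. The temp path and the one-second delay are illustrative, and the template's EXIT_CODE bookkeeping is elided since the grep-based success test doesn't need it:

```shell
# Demo: poll until a file exists, simulating the condition arriving after ~1s.
F=$(mktemp -u)              # illustrative target path (name only, not created)
( sleep 1; touch "$F" ) &   # background job creates the file, like a finishing deploy

TIMEOUT=10
INTERVAL=1
START=$(date +%s)
ITER=0
OUTCOME=""
while true; do
  ITER=$((ITER + 1))
  RESULT=$( (test -f "$F" && echo EXISTS) 2>&1 )
  if echo "$RESULT" | grep -qE 'EXISTS'; then
    OUTCOME="MET"
    echo "CONDITION MET after $(($(date +%s) - START))s ($ITER checks)"
    break
  fi
  ELAPSED=$(($(date +%s) - START))
  if [ "$ELAPSED" -ge "$TIMEOUT" ]; then
    OUTCOME="TIMEOUT"
    echo "TIMED OUT after ${ELAPSED}s ($ITER checks)"
    break
  fi
  sleep "$INTERVAL"
done
wait
rm -f "$F"
```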

Where {{success_test}} is one of:

  • echo "$RESULT" | grep -qE '{{pattern}}' — match output against a pattern
  • [ $EXIT_CODE -eq 0 ] — success on zero exit code
  • [ $EXIT_CODE -ne 0 ] — success on non-zero exit code (e.g. process stopped)
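For instance, with the output-match variant, the substituted test is purely textual (the RESULT value here is an illustrative last check output, not real kubectl output):

```shell
# Output-match success test, as it would appear after substitution into the loop.
RESULT="pod/web-7f9d Running"   # illustrative last check output
MATCHED=no
if echo "$RESULT" | grep -qE 'Running'; then
  MATCHED=yes
fi
echo "$MATCHED"
```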

Set the Bash tool timeout to min((timeout + 30) * 1000, 600000) ms so the tool doesn't kill the loop prematurely. If the user's timeout exceeds ~9.5 minutes (so that timeout + 30s would hit the 600000 ms cap), instruct the subagent to chain multiple loop invocations with a remaining-time countdown.
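The tool-timeout arithmetic works out as follows, using the 300 s default monitor timeout:

```shell
# Bash tool timeout in ms: (monitor timeout + 30s slack), capped at 600000 ms.
TIMEOUT=300
BASH_TIMEOUT_MS=$(( (TIMEOUT + 30) * 1000 ))
if [ "$BASH_TIMEOUT_MS" -gt 600000 ]; then
  BASH_TIMEOUT_MS=600000
fi
echo "$BASH_TIMEOUT_MS"   # 330000 for the 300s default
```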

Full subagent prompt example

You are a fast condition monitor. Run this bash command and report whether the condition was met or timed out.

Run this with the Bash tool (set timeout to {{bash_timeout_ms}}ms):

<the constructed bash loop>

After the command completes, report:
- Whether the condition was MET or TIMED OUT
- How long it took and how many checks were performed
- The relevant output at the time the condition was met (or the last output if timed out)

Keep your report to 2-3 sentences.

Step 3 — Acknowledge and continue

After spawning, briefly tell the user what you're monitoring:

Monitoring <what> every <N>s (timeout <M>s). I'll let you know when it's done.

Then continue with whatever else the user needs. Do NOT block waiting for the monitor.

Step 4 — Report results

When the background agent completes, inform the user immediately with the result. If the condition was met, include the key output. If it timed out, say so and ask if they want to retry.

If the user's original instructions imply follow-up work after the condition is met (e.g. "monitor until the deploy finishes then run the migration"), execute that follow-up immediately.

If the user wants to monitor the same or a new condition, spawn a fresh subagent — do not reuse the old one.
