@ltw
Created April 17, 2026 15:50
catch-up: a Claude Code slash command that pulls ~30 engineering feeds, ranks them, and emails a digest. Fetch is zero-LLM Bash; LLM only ranks.

catch-up

A Claude Code slash command that pulls fresh writing from ~30 engineering sources (HN, Lobsters, Reddit, RSS feeds from individual writers, company eng blogs, and database vendor blogs), ranks it against your interests, and emails you a digest.

Zero LLM tokens for the fetch step — it's Bash + Python. The LLM is used only to rank the filtered candidates and write short "why-read-it" lines for the top picks.

What you get

  • A /catch-up command you run on demand in Claude Code.
  • Takes ~20 seconds to fetch, ~3-5k LLM tokens to rank and write.
  • Output: an HTML email in your inbox. Also a markdown file at ~/reading/YYYY-MM-DD-catch-up.md.
  • Default window is the last 7 days. Accepts 3d, 2w, YYYY-MM-DD, or deep (more aggressive thresholds).

Install

  1. Clone this somewhere (path used below: ~/projects/catch-up):

    git clone <this-repo> ~/projects/catch-up
    
  2. Install msmtp (one-time):

    brew install msmtp librsvg
    

    (librsvg is only needed if you want to re-render any SVG assets.)

  3. Generate a Gmail app password at https://myaccount.google.com/apppasswords and store it in Keychain:

    security add-generic-password -s catch-up-smtp -a you@example.com -w "YOUR-APP-PW"
    
  4. Copy and edit the config:

    cp ~/projects/catch-up/config.example.sh ~/projects/catch-up/config.sh
    $EDITOR ~/projects/catch-up/config.sh
    

    Set your email address at minimum.

  5. Install the skill:

    cp ~/projects/catch-up/catch-up.md ~/.claude/commands/catch-up.md
    
  6. Create the output directory:

    mkdir -p ~/reading
    

Run it

In Claude Code: /catch-up (last 7 days) or /catch-up 2w (last two weeks).

Tuning

The pieces you'll want to tune for your taste:

  • Ranking profile (in catch-up.md): the "what you care about / what bores you" paragraph. This is what the LLM uses to score. Rewrite it to reflect your own interests.
  • Source weights (in catch-up.md): multipliers applied to raw scores based on trust of the source.
  • Feed list (in fetch.sh, the FEEDS array): add your favorite blogs. Each entry is "Name|https://...rss".
  • Thresholds (top of fetch.sh): HN points floor, Reddit score floor, etc. Raise them to reduce noise.

Files

fetch.sh          — pulls candidates from all sources, emits JSONL. Zero LLM.
render.py         — converts the output markdown to HTML for email.
send.sh           — sends the HTML via SMTP using an app password in Keychain.
catch-up.md       — the Claude Code slash command / skill definition.
config.example.sh — copy to config.sh, edit with your details.

Not included

  • Scheduling. Run it when you want it. If you want it on a schedule, set up cron or launchd yourself.
  • Branding / pretty styling. The HTML is intentionally plain. Customize render.py if you care.
  • Non-Gmail SMTP. The send script defaults to Gmail. Edit config.sh / send.sh if you're elsewhere.
  • OS-agnostic secret storage. Uses macOS Keychain. Port to pass or env vars if you're on Linux.

catch-up.md

Catch Up

Pull fresh writing from ~30 engineering sources (aggregators, individual writers, newsletters, company eng blogs, database vendors), rank against the reader's interests, and email a plain-HTML digest.

Fetching is done by a Bash script (zero LLM). The LLM is used only to rank the filtered candidates and write short "why-read-it" lines for the top tier.

This skill assumes the project lives at ~/projects/catch-up/. Adjust paths if yours is elsewhere.

Arguments

$ARGUMENTS — optional. Forms:

  • Empty → last 7 days
  • 3d, 2w → relative window
  • YYYY-MM-DD → since that date
  • deep → pass --deep to fetch script (lower thresholds, more candidates)
  • ai, db, systems, langs → restrict final output to a single topical bucket

Steps

1. Compute window

  • Parse $ARGUMENTS. Default: 7 days.
  • DATE = today (YYYY-MM-DD).
  • SINCE_UNIX = epoch seconds for the start of the window.
  • Output directory comes from $CATCHUP_OUTPUT_DIR (default ~/reading/).
  • Output file: {output_dir}/{DATE}-catch-up.md.
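The window computation above can be sketched deterministically (hypothetical helper name; the skill does this inline):

```python
# Sketch of step 1: map an argument like "3d", "2w", "YYYY-MM-DD",
# or empty to epoch seconds for the window start. Illustrative only.
import re
from datetime import datetime, timedelta, timezone

def window_start(arg: str = "") -> int:
    now = datetime.now(timezone.utc)
    m = re.fullmatch(r"(\d+)([dw])", arg)
    if m:  # relative window: 3d, 2w
        n = int(m.group(1))
        delta = timedelta(days=n) if m.group(2) == "d" else timedelta(weeks=n)
        return int((now - delta).timestamp())
    if re.fullmatch(r"\d{4}-\d{2}-\d{2}", arg):  # absolute start date
        dt = datetime.strptime(arg, "%Y-%m-%d").replace(tzinfo=timezone.utc)
        return int(dt.timestamp())
    return int((now - timedelta(days=7)).timestamp())  # default: last 7 days
```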

2. Fetch candidates

~/projects/catch-up/fetch.sh {SINCE_UNIX} /tmp/catch-up-{DATE}.jsonl [--deep]

The script emits JSONL with title, url, source, published_at, and score/num_comments/comments_url where applicable. Items without a parseable date are dropped.
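For illustration, a single emitted record might look like this (all values hypothetical):

```json
{"title": "Example post", "url": "https://example.com/post", "source": "HN", "published_at": "2026-04-15", "score": 312, "num_comments": 184, "comments_url": "https://news.ycombinator.com/item?id=0000000"}
```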

3. Pre-filter with jq (no LLM)

Dedupe by URL (strip utm_*, ref, fbclid). Drop listicles and beginner content.

jq -s '
  unique_by(.url | gsub("[?&](utm_[^&]+|ref|fbclid)=[^&]*"; "") | rtrimstr("/")) |
  map(select(.title | test("^(top |best )[0-9]+|\\b[0-9]+ things\\b|\\bmust-know\\b|\\bbeginner\\b|\\bfor beginners\\b|explained simply"; "i") | not)) |
  map(select(.url | test("geeksforgeeks|tutorialspoint|javatpoint") | not)) |
  .[]
' /tmp/catch-up-{DATE}.jsonl > /tmp/catch-up-{DATE}.filtered.jsonl

4. Rank with LLM

Score each candidate 1-10 against the reader's interests.

Default reader profile — REPLACE WITH YOUR OWN:

Software engineer. Interested in systems design, databases, distributed systems, programming languages, practitioner war stories and post-mortems, skeptical takes on industry hype. Bored by listicles, beginner tutorials, vendor marketing, career advice, ChatGPT productivity hacks. Prefers clarity and concision over academic length.

Source weights (multiply raw score):

  • Dan Luu, Brandur, Martin Kleppmann, Marc Brooker: 1.4
  • Armin Ronacher, Simon Willison, Hillel Wayne, Julia Evans, Geoffrey Huntley, Murat Demirbas, Aleksey Charapko, Morning Paper: 1.3
  • Latent Space, Pragmatic Engineer, Bytebytego, Netflix/Cloudflare/Figma/Meta/Stripe/Discord engineering, DuckDB/Materialize/TigerBeetle/PlanetScale: 1.2
  • Anthropic, Ethan Mollick, Last Week in AWS, HN (≥200 pts), Lobsters (≥40): 1.2
  • HN (below 200), Lobsters (below 40): 1.0
  • Reddit: 0.9

Keep top 18 (or 25 if deep). If arguments name a bucket, restrict to that bucket.
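The arithmetic is just raw score times source weight. A sketch, with the weight table abridged and names hypothetical:

```python
# Sketch of step 4 scoring: multiply the LLM's 1-10 raw score by the
# source weight and keep the top N. Weight table abridged for illustration.
WEIGHTS = {"Dan Luu": 1.4, "Simon Willison": 1.3, "HN": 1.2, "Reddit": 0.9}

def rank(items, keep=18):
    # items: dicts with "source" and "raw_score"; unknown sources get weight 1.0
    for it in items:
        it["final_score"] = it["raw_score"] * WEIGHTS.get(it["source"], 1.0)
    return sorted(items, key=lambda it: it["final_score"], reverse=True)[:keep]
```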

5. Assign each kept item to a bucket

  1. Worth your time — top 3-5 by final score, regardless of topic.
  2. Systems & architecture — distsys, reliability, service design, infra war stories, scaling.
  3. Databases & data platform — query engines, storage, transactions, OLAP/OLTP, streaming.
  4. AI, agents & tooling — LLMs, agents, evals, prompt engineering.
  5. Programming, languages & craft — language features, compilers, type systems, testing.
  6. War stories & post-mortems — outages, migrations, rewrites, incidents.
  7. Skeptical takes & critiques — hype pushback, benchmark gaming, failure modes.
  8. Threads worth the comments — HN/Lobsters/Reddit where the discussion is the value.
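The bucket assignment is done by the LLM; purely for illustration, a deterministic keyword approximation (keyword lists hypothetical and incomplete) might look like:

```python
# Illustrative only: a keyword fallback for the bucket assignment that
# the LLM normally performs. Keyword lists are hypothetical.
BUCKETS = [
    ("Databases & data platform", ("database", "query engine", "olap", "oltp", "transaction")),
    ("AI, agents & tooling", ("llm", "agent", "prompt", "eval")),
    ("War stories & post-mortems", ("outage", "incident", "post-mortem", "migration")),
    ("Systems & architecture", ("distributed", "scaling", "reliability", "infra")),
]

def bucket(title: str) -> str:
    t = title.lower()
    for name, keywords in BUCKETS:
        if any(k in t for k in keywords):
            return name
    return "Programming, languages & craft"
```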

6. Write the digest

---
date: {DATE}
window: {start YYYY-MM-DD} to {DATE}
count: {N}
---

# Catch-Up: {start} to {DATE}

## Worth Your Time
- [{Title}]({url}) — {source} · {one-line why}

## Systems & Architecture
- [{Title}]({url}) — {source} · {date}

(and so on for each bucket that has items)

## Threads Worth the Comments
- [{Title}](comments_url) ({score} pts, {N} comments, {site}) — {what the thread is about}

Formatting rules:

  • Only "Worth Your Time" gets why-lines. Other buckets: title — source · date.
  • Threads section includes comments URL and counts.
  • Drop empty buckets entirely.
  • No marketing language. "Revolutionary", "must-read", "deep dive", "game-changing" are banned.

7. Render HTML and send

python3 ~/projects/catch-up/render.py \
  {output_dir}/{DATE}-catch-up.md \
  /tmp/catch-up-{DATE}.html

~/projects/catch-up/send.sh \
  "Reading digest: {start} to {DATE} ({N} links)" \
  /tmp/catch-up-{DATE}.html \
  "Top pick: {top_title}. {N} links across {M} buckets."

The send script pulls the app password from macOS Keychain (service/account from config.sh) and sends via SMTP.

8. Cleanup

rm -f /tmp/catch-up-{DATE}.jsonl /tmp/catch-up-{DATE}.filtered.jsonl /tmp/catch-up-{DATE}.html

Print: Catch-up written: {N} links. Top pick: {top_title}.

config.example.sh

# Copy this to config.sh and edit. Sourced by the send script.
# config.sh is gitignored; this file (config.example.sh) is the template.
# Your email address (recipient and From).
export CATCHUP_TO="you@example.com"
export CATCHUP_FROM="you@example.com"
# Gmail SMTP user: usually the same address as CATCHUP_FROM.
export CATCHUP_SMTP_USER="you@example.com"
# Gmail SMTP defaults. Change if you're on a different provider.
export CATCHUP_SMTP_HOST="smtp.gmail.com"
export CATCHUP_SMTP_PORT="465"
# Keychain service/account used to store the app password.
# Create the entry once:
# security add-generic-password -s catch-up-smtp -a you@example.com -w "YOUR-APP-PW"
export CATCHUP_KEYCHAIN_SERVICE="catch-up-smtp"
export CATCHUP_KEYCHAIN_ACCOUNT="$CATCHUP_FROM"
# Where the markdown digest files are written.
export CATCHUP_OUTPUT_DIR="$HOME/reading"
fetch.sh

#!/bin/bash
# fetch.sh — Fetch candidate stories from feeds, emit JSONL. Zero LLM.
# Usage: fetch.sh SINCE_UNIX OUTPUT_FILE [--deep]
set -u
SINCE=${1:?"need SINCE_UNIX"}
OUT=${2:?"need OUTPUT_FILE"}
DEEP="${3:-}"
SINCE_DATE=$(date -r "$SINCE" +%Y-%m-%d)
UA="Mozilla/5.0 catch-up-fetch"
: > "$OUT"
TMPDIR=$(mktemp -d -t catchup)
trap 'rm -rf "$TMPDIR"' EXIT
HN_MIN_POINTS=150
LOBSTERS_MIN=25
REDDIT_MIN=250
REDDIT_LIMIT=10
if [ "$DEEP" = "--deep" ]; then
    HN_MIN_POINTS=75
    LOBSTERS_MIN=15
    REDDIT_MIN=150
    REDDIT_LIMIT=25
fi
# Write parser scripts to tempdir
cat > "$TMPDIR/rss.py" <<'PY'
import sys, json
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime
from datetime import datetime

source = sys.argv[1]
since = sys.argv[2]
data = sys.stdin.buffer.read()
if not data:
    sys.exit(0)
try:
    root = ET.fromstring(data)
except ET.ParseError:
    sys.exit(0)
# Strip namespaces so RSS and Atom elements match by bare tag name
for el in root.iter():
    if '}' in el.tag:
        el.tag = el.tag.split('}', 1)[1]
items = root.findall('.//item') or root.findall('.//entry')
for it in items:
    title = (it.findtext('title') or '').strip()
    link = (it.findtext('link') or '').strip()
    if not link:
        # Atom feeds put the URL in <link href="...">
        le = it.find('link')
        if le is not None:
            link = le.get('href', '')
    pub = it.findtext('published') or it.findtext('updated') or it.findtext('pubDate') or ''
    date_str = ''
    if pub:
        # Try ISO 8601 first (2026-04-16T...), fall back to RFC 822 (Thu, 16 Apr 2026...)
        for parser in (
            lambda s: datetime.fromisoformat(s.replace('Z', '+00:00')),
            parsedate_to_datetime,
        ):
            try:
                date_str = parser(pub).strftime('%Y-%m-%d')
                break
            except Exception:
                continue
    # Require a parseable date within window — feeds without item dates flood otherwise
    if not date_str or date_str < since:
        continue
    if not (title and link):
        continue
    print(json.dumps({"title": title[:300], "url": link, "source": source, "published_at": date_str}, ensure_ascii=False))
PY
cat > "$TMPDIR/hn.py" <<'PY'
import json, sys

d = json.load(sys.stdin)
for h in d.get("hits", []):
    oid = h.get("objectID", "")
    comments = f"https://news.ycombinator.com/item?id={oid}"
    url = h.get("url") or comments
    print(json.dumps({
        "title": (h.get("title") or "")[:300],
        "url": url,
        "source": "HN",
        "published_at": (h.get("created_at") or "")[:10],
        "score": h.get("points") or 0,
        "num_comments": h.get("num_comments") or 0,
        "comments_url": comments,
    }))
PY
cat > "$TMPDIR/lobsters.py" <<PY
import json, sys

since = "$SINCE_DATE"
minscore = $LOBSTERS_MIN
try:
    arr = json.load(sys.stdin)
except Exception:
    sys.exit(0)
for it in arr:
    score = it.get("score", 0)
    if score < minscore:
        continue
    d = (it.get("created_at") or "")[:10]
    if d and d < since:
        continue
    print(json.dumps({
        "title": (it.get("title") or "")[:300],
        "url": it.get("url") or it.get("short_id_url"),
        "source": "Lobsters",
        "published_at": d,
        "score": score,
        "comments_url": it.get("short_id_url"),
    }))
PY
cat > "$TMPDIR/reddit.py" <<PY
import json, sys
from datetime import datetime, timezone

since_unix = $SINCE
minscore = $REDDIT_MIN
sub = sys.argv[1]
try:
    d = json.load(sys.stdin)
except Exception:
    sys.exit(0)
for c in d.get("data", {}).get("children", []):
    p = c.get("data", {})
    if (p.get("score") or 0) < minscore:
        continue
    if (p.get("created_utc") or 0) < since_unix:
        continue
    date = datetime.fromtimestamp(p.get("created_utc", 0), tz=timezone.utc).strftime("%Y-%m-%d")
    url = p.get("url") or ("https://reddit.com" + p.get("permalink", ""))
    print(json.dumps({
        "title": (p.get("title") or "")[:300],
        "url": url,
        "source": "r/" + sub,
        "published_at": date,
        "score": p.get("score", 0),
        "num_comments": p.get("num_comments", 0),
        "comments_url": "https://reddit.com" + p.get("permalink", ""),
    }))
PY
fetch_rss() {
    curl -s -L -A "$UA" --max-time 12 "$2" | python3 "$TMPDIR/rss.py" "$1" "$SINCE_DATE" >> "$OUT"
}
fetch_hn() {
    curl -s -A "$UA" --max-time 15 \
        "https://hn.algolia.com/api/v1/search_by_date?tags=story&numericFilters=points%3E${HN_MIN_POINTS},created_at_i%3E${SINCE}&hitsPerPage=60" \
        | python3 "$TMPDIR/hn.py" >> "$OUT"
}
fetch_lobsters() {
    curl -s -A "$UA" --max-time 12 "https://lobste.rs/hottest.json" \
        | python3 "$TMPDIR/lobsters.py" >> "$OUT"
}
fetch_reddit() {
    curl -s -A "$UA" --max-time 12 "https://old.reddit.com/r/$1/top.json?t=week&limit=${REDDIT_LIMIT}" \
        | python3 "$TMPDIR/reddit.py" "$1" >> "$OUT"
}
FEEDS=(
"Simon Willison|https://simonwillison.net/atom/everything/"
"Dan Luu|https://danluu.com/atom.xml"
"Julia Evans|https://jvns.ca/atom.xml"
"Brandur|https://brandur.org/articles.atom"
"Drew DeVault|https://drewdevault.com/blog/index.xml"
"Armin Ronacher|https://lucumr.pocoo.org/feed.atom"
"Hillel Wayne|https://buttondown.com/hillelwayne/rss"
"Marc Brooker|https://brooker.co.za/blog/rss.xml"
"Murat Demirbas|https://muratbuffalo.blogspot.com/feeds/posts/default"
"Aleksey Charapko|http://charap.co/feed/"
"Martin Kleppmann|https://martin.kleppmann.com/feed.rss"
"Geoffrey Huntley|https://ghuntley.com/rss/"
"Ethan Mollick|https://www.oneusefulthing.org/feed"
"Latent Space|https://www.latent.space/feed"
"Pragmatic Engineer|https://newsletter.pragmaticengineer.com/feed"
"Bytebytego|https://blog.bytebytego.com/feed"
"Morning Paper|https://blog.acolyer.org/feed/"
"Changelog|https://changelog.com/news/feed"
"Last Week in AWS|https://www.lastweekinaws.com/feed/"
"Netflix Tech|https://netflixtechblog.com/feed"
"Stripe|https://stripe.com/blog/feed.rss"
"Figma|https://www.figma.com/blog/feed/atom.xml"
"Cloudflare|https://blog.cloudflare.com/rss/"
"Discord|https://discord.com/blog/rss.xml"
"Meta Engineering|https://engineering.fb.com/feed/"
"DuckDB|https://duckdb.org/feed.xml"
"Materialize|https://materialize.com/rss.xml"
"TigerBeetle|https://tigerbeetle.com/blog/atom.xml"
"PlanetScale|https://planetscale.com/blog/feed.atom"
)
REDDIT_SUBS=(programming ExperiencedDevs LocalLLaMA ClaudeAI databasedevelopment)
# Serial fetches to avoid OUT race conditions (curl is I/O-bound so overhead is small)
for entry in "${FEEDS[@]}"; do
    src="${entry%%|*}"
    url="${entry#*|}"
    fetch_rss "$src" "$url"
done
fetch_hn
fetch_lobsters
for sub in "${REDDIT_SUBS[@]}"; do
    fetch_reddit "$sub"
done
total=$(wc -l < "$OUT" | tr -d ' ')
echo "fetched $total candidates into $OUT" >&2
render.py

#!/usr/bin/env python3
"""Convert a catch-up markdown digest to plain HTML for email.
Intentionally unstyled. If you want it pretty, edit this file.
Usage: render.py <input.md> <output.html>
"""
import re
import sys
from pathlib import Path
from html import escape
LINK_RE = re.compile(r'\[((?:[^\[\]]|\[[^\]]*\])+?)\]\(([^)]+)\)')
CODE_RE = re.compile(r'`([^`]+)`')
BOLD_RE = re.compile(r'\*\*([^*]+)\*\*')
def md_inline(s: str) -> str:
    s = escape(s)
    s = LINK_RE.sub(r'<a href="\2">\1</a>', s)
    s = CODE_RE.sub(r'<code>\1</code>', s)
    s = BOLD_RE.sub(r'<strong>\1</strong>', s)
    return s


def convert(md: str) -> str:
    # Strip frontmatter
    md = re.sub(r'^---[\s\S]*?---\n', '', md, count=1)
    out = []
    in_list = False

    def close_list():
        nonlocal in_list
        if in_list:
            out.append('</ul>')
            in_list = False

    for raw in md.splitlines():
        line = raw.rstrip()
        m = re.match(r'<!--\s*(.*?)\s*-->$', line)
        if m:
            out.append(f'<p style="color:#888;font-size:12px;">{escape(m.group(1))}</p>')
            continue
        if line.startswith('# '):
            close_list()
            out.append(f'<h1 style="font-size:20px;">{escape(line[2:].strip())}</h1>')
            continue
        if line.startswith('## '):
            close_list()
            out.append(f'<h2 style="font-size:15px;margin-top:24px;">{escape(line[3:].strip())}</h2>')
            continue
        if line.startswith('- '):
            if not in_list:
                out.append('<ul>')
                in_list = True
            out.append(f'<li style="margin:6px 0;">{md_inline(line[2:])}</li>')
            continue
        if not line.strip():
            close_list()
            continue
        close_list()
        out.append(f'<p>{md_inline(line)}</p>')
    close_list()
    body = '\n'.join(out)
    return f"""<!DOCTYPE html>
<html>
<head><meta charset="utf-8"></head>
<body style="font-family:-apple-system,BlinkMacSystemFont,'Segoe UI',Helvetica,Arial,sans-serif;max-width:680px;margin:2em auto;padding:0 1em;color:#222;font-size:14px;line-height:1.5;">
{body}
</body>
</html>"""


def main():
    if len(sys.argv) != 3:
        print("usage: render.py INPUT.md OUTPUT.html", file=sys.stderr)
        sys.exit(2)
    Path(sys.argv[2]).write_text(convert(Path(sys.argv[1]).read_text()))


if __name__ == '__main__':
    main()
send.sh

#!/bin/bash
# send.sh — Email an HTML digest via SMTP using an app password from macOS Keychain.
# Usage: send.sh SUBJECT HTML_FILE PLAIN_TEXT_BODY
set -euo pipefail
SUBJECT=${1:?"need SUBJECT"}
HTML=${2:?"need HTML_FILE"}
PLAIN=${3:-"See HTML version."}
# Load config (from alongside this script, overridden by env)
DIR="$(cd "$(dirname "$0")" && pwd)"
if [ -f "$DIR/config.sh" ]; then
    # shellcheck disable=SC1091
    source "$DIR/config.sh"
fi
: "${CATCHUP_TO:?CATCHUP_TO not set — see config.example.sh}"
: "${CATCHUP_FROM:?CATCHUP_FROM not set}"
: "${CATCHUP_SMTP_USER:?CATCHUP_SMTP_USER not set}"
: "${CATCHUP_SMTP_HOST:=smtp.gmail.com}"
: "${CATCHUP_SMTP_PORT:=465}"
: "${CATCHUP_KEYCHAIN_SERVICE:=catch-up-smtp}"
: "${CATCHUP_KEYCHAIN_ACCOUNT:=$CATCHUP_FROM}"
if ! security find-generic-password -s "$CATCHUP_KEYCHAIN_SERVICE" -a "$CATCHUP_KEYCHAIN_ACCOUNT" -w >/dev/null 2>&1; then
    echo "error: no app password in keychain for $CATCHUP_KEYCHAIN_SERVICE / $CATCHUP_KEYCHAIN_ACCOUNT" >&2
    echo "run: security add-generic-password -s $CATCHUP_KEYCHAIN_SERVICE -a $CATCHUP_KEYCHAIN_ACCOUNT -w '...'" >&2
    exit 1
fi
BOUNDARY="catchup-$(date +%s)-$$"
TMP=$(mktemp -t catchup-mail)
trap 'rm -f "$TMP"' EXIT
{
    echo "From: $CATCHUP_FROM"
    echo "To: $CATCHUP_TO"
    echo "Subject: $SUBJECT"
    echo "MIME-Version: 1.0"
    echo "Content-Type: multipart/alternative; boundary=\"$BOUNDARY\""
    echo ""
    echo "--$BOUNDARY"
    echo "Content-Type: text/plain; charset=utf-8"
    echo "Content-Transfer-Encoding: 8bit"
    echo ""
    printf '%s\n' "$PLAIN"
    echo ""
    echo "--$BOUNDARY"
    echo "Content-Type: text/html; charset=utf-8"
    echo "Content-Transfer-Encoding: 8bit"
    echo ""
    cat "$HTML"
    echo ""
    echo "--$BOUNDARY--"
} > "$TMP"
msmtp \
    --host="$CATCHUP_SMTP_HOST" \
    --port="$CATCHUP_SMTP_PORT" \
    --tls=on \
    --tls-starttls=off \
    --auth=on \
    --user="$CATCHUP_SMTP_USER" \
    --passwordeval="security find-generic-password -s $CATCHUP_KEYCHAIN_SERVICE -a $CATCHUP_KEYCHAIN_ACCOUNT -w" \
    --from="$CATCHUP_FROM" \
    "$CATCHUP_TO" < "$TMP"
echo "sent: $SUBJECT → $CATCHUP_TO" >&2