My agent's name is Chiti. It runs on Telegram, handles customer support for two SaaS products, drafts tweets, manages invoices, and coordinates with my co-founder across timezones. It's the closest thing I have to a junior employee. And for weeks, it kept forgetting things.

Not in a subtle way. I'd spend an hour configuring a daily cron job, switch models, and the next session Chiti would act like we'd never spoken. I'd reference a decision from two days ago and get a blank stare. I'd ask it to continue a task and it would start from scratch.

So I stopped building features and spent five days, whenever I could find the time, just fixing memory. This is everything I found, everything I broke, and everything that actually worked.
The first problem was simple to describe and painful to diagnose.
```powershell
$baseUrl = "http://localhost:5000"
$headers = @{ "Content-Type" = "application/json" }
$sessionId = $null
Clear-Host
Write-Host "=== Chat Client (localhost:5000/chat) ===" -ForegroundColor Cyan
Write-Host "Type your message and press Enter. Type 'quit' to exit." -ForegroundColor DarkGray
Write-Host ""
while ($true) {
    $msg = Read-Host "you"
    if ($msg -eq "quit") { break }
    # The loop body was cut off in the source; this request/response shape is an assumption.
    $body = @{ message = $msg; sessionId = $sessionId } | ConvertTo-Json
    $resp = Invoke-RestMethod -Uri "$baseUrl/chat" -Method Post -Headers $headers -Body $body
    $sessionId = $resp.sessionId   # carry the session across turns
    Write-Host $resp.reply -ForegroundColor Green
}
```
Write a spec for your cron scheduler. Model it in Lean as a pure state machine — simple, obviously correct. Prove the key invariants: one-shot jobs fire at most once, running jobs can't be scheduled concurrently. AI generates the production C# from the spec. Then FsCheck generates thousands of random event sequences — AddJob, TimerTick, Restart, ClockSkip — and runs each one through both the Lean model and the C# code, step by step. After every event, compare the state: which jobs fired, how many times, what's their status. If the C# scheduler says a one-shot job fired twice and the Lean model says that's impossible, the test fails and tells you exactly which event sequence caused the disagreement. The Lean proofs guarantee the model is correct. The property tests guarantee the C# matches.
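The Lean side of that setup can stay tiny. Here is a sketch of what the pure state-machine model might look like; the type and field names (`Event`, `Job`, `oneShotAtMostOnce`) are illustrative, not the real spec:

```lean
-- Minimal sketch of the scheduler model; names are hypothetical.
inductive Event where
  | addJob    (id : Nat) (oneShot : Bool)
  | timerTick (now : Nat)
  | restart
  | clockSkip (delta : Nat)

structure Job where
  oneShot : Bool
  fired   : Nat := 0   -- how many times this job has fired

structure Sched where
  jobs : List (Nat × Job) := []

-- The key invariant: one-shot jobs fire at most once.
def oneShotAtMostOnce (s : Sched) : Prop :=
  ∀ p ∈ s.jobs, p.2.oneShot → p.2.fired ≤ 1
```

A step function `step : Sched → Event → Sched` plus a theorem that `step` preserves `oneShotAtMostOnce` is what the FsCheck harness replays against the C# implementation, event by event.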
The full solution — proving C# correct directly in Lean — requires formally modeling C#'s async runtime, null semantics, exception propagation, and the rest of the language's behavior. That is far more machinery than the scheduler itself, which is why modeling the scheduler as a pure state machine and property-testing the C# against it is the pragmatic split: most of the assurance at a fraction of the effort.
A nightly cron job that reconciles an agent's constitutional memory (memory.md) against its evolving knowledge graph (Myelin).
This is "Approach A" in the Procedural Memory Graduation design — external reconciliation. It works, but it curates rather than discovers. The emergent graduation mechanism (Approach B) would complement this by auto-promoting stable graph knowledge.
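The core of such a reconciliation pass is a simple set diff. A minimal sketch, assuming both memory.md entries and graph facts can be normalized to comparable strings (the function and key names here are hypothetical, not the real Myelin interface):

```python
def reconcile(memory_entries: set[str], graph_facts: set[str]) -> dict[str, set[str]]:
    """Compare constitutional memory against the knowledge graph and report drift.

    'missing_from_graph' is memory.md content the graph never absorbed;
    'candidates_to_promote' is stable graph knowledge memory.md lacks --
    the set that Approach B's emergent graduation would auto-promote.
    """
    return {
        "missing_from_graph": memory_entries - graph_facts,
        "candidates_to_promote": graph_facts - memory_entries,
    }
```

The real job would do fuzzy matching rather than exact string equality, but the curate-versus-discover distinction lives in what you do with `candidates_to_promote`: Approach A surfaces them for review, Approach B promotes them automatically.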
F1ReplayTiming pulls its data from four distinct external sources and uses two storage backends to persist processed data. The primary data source is the FastF1 Python library, which itself wraps the official F1 timing API (livetiming.formula1.com) to supply historical session data — laps, telemetry, weather, race control messages, driver/team metadata, circuit geometry, and event schedules. For live sessions, the app connects directly to the F1 SignalR real-time stream (wss://livetiming.formula1.com/signalrcore) via WebSocket. A photo-based broadcast sync feature uses the OpenRouter AI API (specifically Gemini Flash vision model) to extract leaderboard data from screenshots. Pre-computed session data is stored either on the local filesystem or in Cloudflare R2 (S3-compatible object storage).
Reverse-engineered from source: letta-ai/claude-subconscious v2.0.2
Generated: 2026-03-28
Claude Code is an AI coding assistant that operates in ephemeral sessions. Every session starts from zero — no memory of past conversations, learned user preferences, project context, or unfinished work. Users must re-explain their codebase, repeat preferences, and re-establish context every time.