Sketching Compilers Beyond Code
Every programmer remembers the first time they encountered a compiler not as a tool, but as a system. The pipeline—lexing, parsing, semantic analysis, code generation—was presented as the machinery that turned human-readable source into machine-executable instructions. It felt both arcane and essential, a rite of passage into serious computer science.
But the notion of a compiler has always been broader than C turning into assembly. A compiler is any bridge between intent and execution. It takes structure in one domain, transforms it through rules, and produces something runnable in another.
For decades, that metaphor lived mostly inside programming languages. With Large Language Models, it’s beginning to expand.
⸻
The Sketch of a Compiler
LLMs don’t deliver production-grade compilers. What they offer is a sketchpad: you describe the syntax of a toy language in plain English, and the model generates lexers, parsers, and ASTs. You describe typing rules, and it drafts semantic passes. You describe execution, and it builds a bytecode interpreter. It’s scaffolding, not architecture. But scaffolding lowers the barrier to entry.
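To make the sketchpad concrete, here is roughly the kind of scaffolding such a conversation might produce: a toy arithmetic language with a lexer, a recursive-descent parser building an AST, and a tree-walking evaluator as its runtime. The grammar and every name in it are illustrative assumptions, not output from any particular model.

```python
# A minimal sketch of LLM-style compiler scaffolding for a toy language:
# lexer -> parser -> AST -> evaluator. All names here are illustrative.
import re

TOKEN_RE = re.compile(r"\s*(?:(\d+)|(.))")

def lex(src):
    """Split source text into number and operator tokens."""
    tokens = []
    for num, op in TOKEN_RE.findall(src):
        tokens.append(("NUM", int(num)) if num else ("OP", op))
    return tokens

def parse(tokens):
    """Recursive-descent parser for:
       Expr ::= Term (('+'|'-') Term)*
       Term ::= NUM  (('*'|'/') NUM)*"""
    pos = 0
    def peek():
        return tokens[pos] if pos < len(tokens) else (None, None)
    def term():
        nonlocal pos
        _, val = tokens[pos]; pos += 1
        node = ("num", val)
        while peek() in (("OP", "*"), ("OP", "/")):
            op = tokens[pos][1]; pos += 1
            _, val = tokens[pos]; pos += 1
            node = (op, node, ("num", val))
        return node
    def expr():
        nonlocal pos
        node = term()
        while peek() in (("OP", "+"), ("OP", "-")):
            op = tokens[pos][1]; pos += 1
            node = (op, node, term())
        return node
    return expr()

def evaluate(node):
    """Walk the AST and compute a value -- the toy language's runtime."""
    if node[0] == "num":
        return node[1]
    op, left, right = node
    l, r = evaluate(left), evaluate(right)
    return {"+": l + r, "-": l - r, "*": l * r, "/": l // r}[op]

print(evaluate(parse(lex("2 + 3 * 4"))))  # 14
```

Each stage is a few lines, which is the point: none of this is production machinery, but it is a complete intent-to-execution pipeline you can poke at.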
And once you see the pattern, you begin to notice it everywhere.
⸻
Compilers for Arbitrary Domains
Think about cooking. A recipe is already a kind of domain-specific language: it has nouns (ingredients), verbs (techniques), and control flow (while stirring, until reduced, for 20 minutes). An LLM makes it natural to imagine writing a grammar for food. You could express it in a BNF-like syntax:
Recipe ::= IngredientList Steps
Steps ::= Step+
Step ::= Action Ingredient (Modifier)*
From there, you could imagine a “cooking compiler” that parses natural language recipes into structured instructions for a kitchen robot, or into step-by-step plans optimized for time, available equipment, or nutrition. The compiler is not turning text into machine code—it’s turning text into an executable plan in the physical world.
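A front end for that grammar can be sketched in a few lines. The action and modifier vocabularies below are illustrative stand-ins for a real lexicon, and the output format is an assumption, not any robot's actual instruction set.

```python
# A toy "cooking compiler" front end for the grammar above:
#   Step ::= Action Ingredient (Modifier)*
# The vocabularies are illustrative stand-ins for a real lexicon.
ACTIONS = {"chop", "stir", "simmer", "add"}
MODIFIERS = {"finely", "gently", "until", "reduced", "for", "20", "minutes"}

def parse_step(text):
    """Parse one recipe step into a structured instruction."""
    words = text.lower().rstrip(".").split()
    action = words[0]
    if action not in ACTIONS:
        raise SyntaxError(f"unknown action: {action}")
    # Everything up to the first modifier word is the ingredient phrase.
    rest = words[1:]
    split = next((i for i, w in enumerate(rest) if w in MODIFIERS), len(rest))
    return {
        "action": action,
        "ingredient": " ".join(rest[:split]),
        "modifiers": rest[split:],
    }

def compile_recipe(steps):
    """'Code generation': turn parsed steps into an ordered plan."""
    return [parse_step(s) for s in steps]

plan = compile_recipe(["Chop the onions finely", "Simmer sauce until reduced"])
for instruction in plan:
    print(instruction)
```

The structured plan is what downstream passes would optimize: reordering steps by equipment, parallelizing prep, substituting ingredients.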
This isn’t science fiction; it’s already lurking in the way voice assistants parse commands, or how meal-planning apps transform free text into shopping lists. The LLM provides the sketch, the rules, the translation pipeline. The metaphor of compilation generalizes.
⸻
Other Frontiers
Music can be compiled: notes into MIDI, styles into audio. Business workflows can be compiled: policies into state machines, approvals into transitions. Even social interactions can be thought of this way: rituals and rules compiled into predictable behaviors.
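The workflow case can be made concrete with a sketch: a policy written as transition rules, "compiled" into a lookup table, and executed against a stream of events. The policy format and state names here are assumptions for illustration.

```python
# A sketch of compiling an approval policy into a state machine.
# The policy format and state names are illustrative assumptions.
def compile_policy(rules):
    """Turn (state, event, next_state) rules into a transition table."""
    return {(state, event): nxt for state, event, nxt in rules}

def run(table, start, events):
    """Execute the compiled machine against a stream of events."""
    state = start
    for event in events:
        state = table.get((state, event), state)  # ignore invalid events
    return state

policy = [
    ("draft", "submit", "pending"),
    ("pending", "approve", "approved"),
    ("pending", "reject", "draft"),
]
machine = compile_policy(policy)
print(run(machine, "draft", ["submit", "approve"]))  # approved
```

The "compilation" step is trivial here, but the shape is the same as in any compiler: declarative rules in, executable artifact out.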
BNF, once a tool for programming languages, becomes a way of articulating structure in any symbolic domain. And with an LLM, the act of drafting those grammars is conversational. You sketch the rules, and the system provides the pipeline.
⸻
Why This Matters
For engineers, the revelation is not just that compilers are easier to tinker with. It’s that compilation as a concept is suddenly accessible in any domain where intent needs to become execution. The compiler is no longer a specialized artifact of systems programming; it’s a general pattern for shaping the relationship between language and reality.
LLMs invite us to build compilers for whatever domains we care about—whether that’s running code, cooking food, or coordinating people. They help us see the world not just as a set of programs, but as a set of languages waiting to be formalized, parsed, and executed.
⸻
The New Literacy
The deeper literacy here is to see compilers not as mysterious tools, but as universal translators. To realize that recipes, workflows, even games can be expressed with grammar, parsed into structure, and executed against a runtime—whether that runtime is a CPU, a kitchen, or a team.
LLMs don’t eliminate the need for rigor. They hand us sketches. It is still the engineer’s job to decide which domains deserve compilers, and what values those compilers should embed.
But the horizon has shifted. For the first time, we can imagine compiling anything.