In recent years, the rise of Large Language Models (LLMs) has birthed a new phenomenon known as "Vibe Coding." Developers describe what they want in natural language, and the AI generates thousands of lines of code. While this allows for rapid prototyping, it often fails in production-grade systems where intent, maintainability, and architectural discipline are paramount.
This project, the TinyGnomes Document Extractor, takes a different path. We utilize a methodology called AI-Assisted Literate Programming (ALP).
Inspired by Donald Knuth’s original concept of Literate Programming, ALP treats the documentation not as a side-car but as the source of truth. It is about intent and discipline—prioritizing design concerns and high-level goals over menial tasks. It enforces a philosophy of understanding before doing.
In this project, the human writes a book describing the system's logic, and the AI acts as the "Agentic Tangle," translating that narrative into production-ready Python code. Because LLMs act as universal translators, the "source code" (the Markdown) can be written in whatever conceptual specification language the developer deems ideal for the task.
In traditional development, code is written for an interpreter to execute; it is the primary focus. In ALP, the focus shifts entirely to the human reader. The developer is writing a cohesive book for other humans to read and understand. As the original author of this project noted, "the program code is just a byproduct, almost a happy accident of a well-written book."
In a traditional workflow, code is written first, and documentation (if it exists) follows. In ALP, we flip this:
- Understand: The developer thinks through the problem and writes a Markdown chapter.
- Specify: The chapter defines the contract (input/output), the error handling (exceptions), and the logic (pseudo-code).
- Tangle: An AI agent reads the Markdown and updates the corresponding Python file.
- Enforce: Any change to the system must start in the Markdown. Editing the Python code directly is strictly forbidden.
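The Enforce rule lends itself to automation. Below is a minimal sketch of a pre-commit-style check; the chapter-to-module mapping and file paths are illustrative assumptions, not this project's actual layout:

```python
# Hypothetical mapping: each generated module is tangled from one chapter.
# The real project layout may differ; this is only for illustration.
CHAPTER_FOR = {
    "src/common.py": "book/common.md",
    "src/extractor.py": "book/extractor.md",
}

def direct_code_edits(changed_files: list[str]) -> list[str]:
    """Return generated .py files edited without their source chapter."""
    changed = set(changed_files)
    return [
        py for py, md in CHAPTER_FOR.items()
        if py in changed and md not in changed
    ]

# Editing the chapter together with its module is fine;
# editing the module alone violates the ALP rule.
print(direct_code_edits(["book/common.md", "src/common.py"]))  # []
print(direct_code_edits(["src/extractor.py"]))  # ['src/extractor.py']
```

A check like this, wired into a pre-commit hook or CI, turns "editing the Python directly is forbidden" from a convention into a gate.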
This approach ensures that the "Why" is always captured. When a developer (or an AI agent) returns to a module six months later, they don't read obscure code; they read the book.
A common question arises: If the Markdown is the source code, why commit the Python files to version control?
While pure Literate Programming might regenerate the code entirely on build, this project takes a practical approach: AI is not entirely deterministic, and flagship LLMs incur API costs and latency. Committing the Python code acts as an execution cache. The CI/CD pipeline does not need to wait (or pay) for an LLM to re-translate the entire book; the LLM is only invoked to sync changes incrementally when a chapter is edited.
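One way to realize this execution cache is content hashing: record a digest per chapter after each successful tangle, and only re-invoke the LLM for chapters whose digest has changed. A minimal sketch (the chapter names and cache shape are illustrative assumptions):

```python
import hashlib

def chapter_digest(markdown_text: str) -> str:
    """Stable fingerprint of a chapter's content."""
    return hashlib.sha256(markdown_text.encode("utf-8")).hexdigest()

def chapters_to_retangle(chapters: dict[str, str],
                         cache: dict[str, str]) -> list[str]:
    """Return chapter names whose content changed since the last tangle.

    `cache` maps chapter name -> digest recorded after the last
    successful LLM translation; only stale chapters are re-sent.
    """
    return [
        name for name, text in chapters.items()
        if cache.get(name) != chapter_digest(text)
    ]

cache = {"common.md": chapter_digest("# Common utilities\n...")}
chapters = {
    "common.md": "# Common utilities\n...",      # unchanged: cache hit
    "extractor.md": "# Extractor\nnew section",  # edited: needs re-tangle
}
print(chapters_to_retangle(chapters, cache))  # ['extractor.md']
```

The committed Python files play the role of the cached translation; the digest store only decides when that cache is stale.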
To prove that this isn't just "vibe coding" but a rigorous practice, let's look at the data:
- Documentation Volume: ~6,000 lines of Markdown documentation.
- Code Volume: ~7,400 lines of Python code.
- Documentation-to-Code Ratio: ~81%.
For every 10 lines of code, there are 8 lines of human-readable documentation. In many professional codebases, this ratio is closer to 5-10%. This high density of documentation is what allows the AI to operate with surgical precision during refactors and feature additions.
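The ratio is simple arithmetic, and worth tracking as a metric over time. A tiny sketch using the volumes reported above:

```python
def doc_to_code_ratio(doc_lines: int, code_lines: int) -> float:
    """Documentation-to-code ratio as a percentage."""
    return 100 * doc_lines / code_lines

# The project's reported volumes:
ratio = doc_to_code_ratio(6_000, 7_400)
print(f"{ratio:.0f}%")  # 81%
```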
While ALP has been a massive success for this project, it is not without challenges:
A common failure mode is when the AI generates pseudo-code that is simply a copy-paste of the Python implementation. This defeats the purpose of Literate Programming.
Bad (Too Literal): Focuses on the syntax of the language rather than the intent.
```pseudo-code
for page in self.reader.pages:
    all_text.append(page.extract_text())
return "\n".join(all_text)
```
Good (Conceptual): Focuses on the goal, leaving the trivial iteration details to the AI's standard library knowledge.
```pseudo-code
<iterate through reader pages, extract text from each, and return them joined by newlines>
```
The Fix: We must insist on high-level intent. Good pseudo-code should describe the transformation of data, not the mechanics of the language.
Because Python files are committed to the repository, there is always a temptation to "just fix this one bug" in the code. This is the "broken window" of ALP. The Lesson: The AI agent must be instructed to always consult the documentation first. If a bug exists in the code, it exists in the documentation's logic.
By specifying "Exceptions" and "Contracts" as metadata outside the core pseudo-code block, we decouple the business logic from the boilerplate. The developer focuses on the algorithm; the AI handles the try/except blocks, logging, and type hints.
To see ALP in action, look at how we define the detect_mime_type function in common.md. This function is complex: it must balance content detection (magic bytes) with user hints and extension-based fallbacks. It is not just pseudo-code; it is a multi-layered specification.
The human defines the intent, the requirements, and the logic. Every section here informs the final code:
## Define `detect_mime_type()`
Detects the MIME type of a file using content-based detection...
Parameters:
- `file_path: Path` - The path to the file
- `user_mime_type: str | None` - Optional MIME type hint
- `user_extension: str | None` - Optional file extension hint
Returns: `str` - The detected MIME type
Contract:
- It MUST use `magic.from_file()` to detect content
- If result is `application/zip`, it MUST read 2048 bytes and use `magic.from_buffer()`
- It MUST prioritize content over hints (detailed priority logic omitted for brevity)
- It MUST log all evidence and warnings for mismatches
```pseudo-code
content_mime = magic.from_file(file_path, mime=True)
if content_mime == 'application/zip':
    <open file, read 2048 bytes, use magic.from_buffer() on the buffer>
<determine final_mime_type using the priority logic described>
<log info with all evidence>
if <sources don't match>:
    <log warning about MIME type mismatch>
return final_mime_type
```

The AI assistant (the "Agentic Tangle") parses that metadata to generate production-ready Python. Notice how the Parameters become type hints and arguments, the Returns becomes the return type, and the Contract drives the implementation of the `<...>` conceptual blocks:
```python
def detect_mime_type(
    file_path: Path,
    user_mime_type: str | None = None,
    user_extension: str | None = None,
) -> str:
    """Detects the MIME type of a file using content-based detection as primary method."""
    # From common.md (Contract: MUST use magic.from_file)
    content_mime = magic.from_file(str(file_path), mime=True)

    # From common.md (Contract: handle zip-specific detection)
    if content_mime == "application/zip":
        with open(file_path, "rb") as f:
            content_mime = magic.from_buffer(f.read(2048), mime=True)

    # ... logic determined by the Contract prioritizing content ...

    # From common.md (Contract: MUST log all evidence)
    LOGGER.info(
        "MIME type evidence for %s: content=%s, user=%s, extension=%s -> final=%s",
        file_path, content_mime, user_mime_type, extension_mime, final_mime_type,
    )
    return final_mime_type
```

Because the human provides the high-level design and the AI handles the translation, we achieve a system that is both formally specified and industrially robust.
Because of the successes and lessons highlighted above, we have established the following formal rules to ensure the relationship between Narrative and Code remains strong.
This section formalizes the rules used in this project to instruct AI agents. These rules ensure that the pseudo-code remains readable for humans while being deterministic enough for AI translation.
Used for high-level tasks that don't need literal representation. The AI is responsible for implementing the concept using the best available libraries.
- Example: `<get the user object from the database for the given user_id>`
To keep things clean, pseudo-code uses dot notation (e.g., `config.host`), which the AI translates to the correct Python access pattern (`config['host']` or `config.host`, depending on the underlying type).
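What the tangle effectively does with dot notation can be sketched as a resolver; the `DBConfig` class and paths below are illustrative, not project code:

```python
from dataclasses import dataclass

def resolve(obj, dotted_path: str):
    """Resolve pseudo-code dot notation against dicts or attribute objects."""
    for part in dotted_path.split("."):
        if isinstance(obj, dict):
            obj = obj[part]           # translates to config['host']
        else:
            obj = getattr(obj, part)  # translates to config.host
    return obj

@dataclass
class DBConfig:
    host: str = "localhost"

print(resolve({"db": {"host": "10.0.0.1"}}, "db.host"))  # 10.0.0.1
print(resolve(DBConfig(), "host"))                       # localhost
```

The human never has to care which form the runtime structure takes; the same pseudo-code line tangles correctly against either.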
Any string containing {var} is treated as an f-string. The f prefix is omitted in pseudo-code for brevity.
- Pseudo: `"Processing {file_path}"`
- Code: `f"Processing {file_path}"`
Every function/module documentation block contains:
- Parameters & Returns: Used to generate type hints and signatures.
- Exceptions: A list of what can go wrong. The AI must wrap the pseudo-code in appropriate `try...except` blocks based on this list, even if the pseudo-code doesn't explicitly show them.
- Contract: Strict rules the implementation must follow (e.g., "Must call `super()`").
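As a sketch, a chapter whose Exceptions list names `FileNotFoundError` would yield a wrapper like the following, even though the pseudo-code itself never mentions try/except. The function name and log message here are hypothetical:

```python
import logging

LOGGER = logging.getLogger(__name__)

def read_chapter(path: str) -> str:
    """Pseudo-code: <read the file and return its text>.

    Exceptions (from the Markdown metadata):
    - FileNotFoundError: log and re-raise.
    """
    try:
        with open(path, encoding="utf-8") as f:
            return f.read()
    except FileNotFoundError:
        # The AI adds this block because the chapter's Exceptions
        # section lists it -- the pseudo-code itself stays clean.
        LOGGER.error("Chapter not found: %s", path)
        raise
```

The algorithm in the narrative stays two lines; the defensive scaffolding is derived mechanically from the metadata.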
The AI is not merely a translator; it is a Guardian of Intent.
- Format Enforcer: Must apply standard formatting (Black, isort).
- Type Enforcer: Must add full PEP 484 type hints.
- Docstring Enforcer: Every function must have a Google-style or ReST docstring based on the Markdown metadata.
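The Docstring Enforcer role can be verified mechanically. Here is a minimal sketch using Python's standard `ast` module; a real agent would additionally check the docstring content against the Markdown metadata:

```python
import ast

def functions_missing_docstrings(source: str) -> list[str]:
    """Names of functions in `source` that lack a docstring."""
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
        and ast.get_docstring(node) is None
    ]

sample = '''
def documented():
    """I have a docstring."""

def bare():
    pass
'''
print(functions_missing_docstrings(sample))  # ['bare']
```

Running a check like this in CI keeps the Guardian-of-Intent rules enforceable rather than aspirational.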
AI-Assisted Literate Programming bridges the gap between the speed of AI and the rigor of production engineering. By elevating documentation to the level of source code, we create systems that are not just "functional," but understandable. In an age where AI writes more code than humans, being able to read and audit the intent is the most valuable skill a developer can have.