What I'm doing with ContentML, in addition to the functionality shown in WebCompiler, is building a system that embeds the timeless principles of copywriting right into the markup itself, so the copy almost writes itself. It's like a digital assistant trained in classic advertising, working quietly behind the scenes to shape persuasive, audience-aware content every time you compile.
This document describes the `<provision>` element in ContentML, which acts as a declarative hook for generating and embedding copywriting content via LLMs.
The `<provision>` element specifies a placeholder within HTML that triggers one-time content generation during the provisioning phase. It uses the current context stack (inherited from surrounding `<context>` elements) along with a human-written prompt hint to generate copy for the site.
The result is saved to a dedicated file, ensuring future runs remain idempotent: the model is not re-invoked unless the output file is deleted or regenerated.
- If the `src` file does not exist:
  - The system builds a prompt from the current context stack and the inline body of the `<provision>` element.
  - The LLM is invoked once.
  - The result is written to the specified file.
  - The content is injected at compile time.
- If the `src` file does exist:
  - The file is read directly.
  - The content is injected without re-invoking the LLM.
This ensures safe, modular provisioning that blends automation with long-term manual refinement.
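The two branches above can be sketched in Python. This is a hypothetical model of the provisioning step, not the actual compiler: `provision` and its parameters are illustrative names, and `generate` stands in for whatever LLM call the implementation uses.

```python
import os

def provision(src, context_stack, hint, generate):
    """Resolve a <provision> element idempotently.

    `generate` is any callable mapping a prompt string to generated
    copy; it stands in for the real LLM invocation.
    """
    if not os.path.exists(src):
        # Outer context frames come first, then the inline prompt hint.
        prompt = "\n\n".join(list(context_stack) + [hint])
        copy = generate(prompt)  # invoked exactly once per missing file
        os.makedirs(os.path.dirname(src) or ".", exist_ok=True)
        with open(src, "w") as f:
            f.write(copy)
    # On every run, the saved file is what gets injected into the HTML.
    with open(src) as f:
        return f.read()
```

Because the file check happens first, deleting the output file is the only way to trigger regeneration, which is exactly the idempotence guarantee described above.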
```html
<provision src="content/homepage/hero.txt">
Hero blurb highlighting core value proposition for the business
</provision>
```
In this example:
- The inline body provides a short prompt "hint."
- The generated copy is saved to `content/homepage/hero.txt`.
- If the file exists, it will be reused as-is.
| Attribute | Type | Required | Description |
|---|---|---|---|
| `src` | string | ✅ Yes | Path to the output file where generated content will be saved and reused. |
| `context` | string | ❌ No | Name of a context profile from the registry. Inherits from parent if omitted. |
| `format` | string | ❌ No | Output format hint (e.g. `inline`, `markdown`, `html`). Guides prompt shaping. |
- The inner content of the `<provision>` tag acts as a localized prompt hint and is not included in the final HTML output.
- Provisioned files are never overwritten unless deleted manually, allowing manual editing after the initial LLM pass.
- To support editorial workflows or A/B testing, use `<provision-group>` (coming soon).
The `<provision>` element provides:
- A declarative way to generate copy content with LLMs.
- Integration with contextual prompts via inherited `<context>` frames.
- Persistent outputs stored in versioned files.
- A streamlined way to prototype, publish, and refine page copy.
Below are practical examples showing how to use `<provision>` effectively in real-world scenarios.
Provision a short inline description for a key section of a homepage:
```html
<context>
Write in a clear, energetic tone aimed at first-time homeowners.
</context>

<provision src="content/homepage/value-prop.txt">
Concise value proposition explaining why this service is trusted locally.
</provision>
```
Apply one context frame to multiple content blocks:
```html
<context>
All content below should appeal to working parents looking for reliable, affordable service.
</context>

<provision src="content/homepage/hero.txt">
Hero section intro blurb.
</provision>

<provision src="content/homepage/testimonial.txt">
Sample testimonial from a satisfied customer.
</provision>
```
Use the `format` attribute to shape LLM output:
```html
<provision src="content/homepage/features.md" format="markdown">
Bullet list of 3 key service features.
</provision>
```
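One plausible way a compiler could use the `format` hint is to append a matching instruction to the assembled prompt. The mapping below is purely illustrative; the actual prompt wording is an assumption, not part of the spec.

```python
# Hypothetical mapping from the format attribute to a prompt suffix.
FORMAT_HINTS = {
    "inline": "Respond with a single short sentence of plain text.",
    "markdown": "Respond in Markdown.",
    "html": "Respond with an HTML fragment, without <html> or <body> tags.",
}

def shape_prompt(prompt, fmt=None):
    """Append a format instruction when the format attribute is set."""
    if fmt in FORMAT_HINTS:
        return prompt + "\n\n" + FORMAT_HINTS[fmt]
    return prompt
```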
You can point to a predefined context profile from the registry:
```html
<provision src="content/about/story.txt" context="company-story">
Narrative paragraph describing the founding of the company.
</provision>
```
To regenerate content during development, simply delete the provisioned file:
```shell
rm content/homepage/hero.txt
```
On the next build, the compiler will re-invoke the LLM using the same prompt stack.
This document describes how the `<context>` element is used in ContentML to guide LLM-driven content generation by appending to the current context stack.
The `<context>` element provides lexical context for downstream `<provision>` elements. It allows you to define natural-language instructions or guiding phrases that influence how content is generated, without modifying the underlying HTML structure.
Each `<context>` element adds a new frame to the current context stack, which is evaluated top-down during the provisioning phase. These frames can either be defined inline or reference a named context stack from the global context registry using the `for` attribute.
- Appends a `ContextFrame` to the current lexical stack.
- Affects all nested `<provision>` elements within the same scope.
- Stack frames are evaluated top-down, with outer frames appearing earlier in the prompt.
- `<context>` can operate in two modes:
  - Inline: Defines a new frame from its inner text.
  - Reference: Appends a named stack using the `for` attribute.
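The top-down evaluation order can be sketched as a simple list: frames are pushed as the compiler descends into nested `<context>` elements, and joined outermost-first ahead of the provision hint. The function name and join format below are illustrative assumptions.

```python
def assemble_prompt(frames, hint):
    """Join context frames top-down, outermost first, then append the
    <provision> element's inline prompt hint last."""
    return "\n\n".join(list(frames) + [hint])

# Nested <context> elements push frames as the compiler descends:
stack = []
stack.append("Write in a warm, conversational tone.")  # outer <context>
stack.append("Focus on first-time homeowners.")        # inner <context>
prompt = assemble_prompt(stack, "Hero blurb introducing the company.")
```

Because outer frames sit earlier in the prompt, broad guidance (site-wide tone) naturally precedes narrower, section-specific instructions.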
```html
<context>
Write in a warm, conversational tone suitable for local homeowners.
</context>

<provision src="content/homepage/hero.txt">
Hero blurb introducing the company and core value proposition.
</provision>
```
```html
<context for="homepage" />

<provision src="content/homepage/cta.txt">
Short call to action encouraging visitors to sign up.
</provision>
```
In this case:
- The compiler pulls the `"homepage"` context stack from the registry.
- Its frames are appended to the current context stack.
- The `<context>` tag body is ignored when `for` is present.
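The two modes can be modeled in a few lines. The registry contents and function name here are hypothetical; only the behavior (reference mode appends the named stack and ignores the inline body, inline mode pushes one new frame) comes from the spec above.

```python
# Hypothetical registry of named context stacks; entries are illustrative.
REGISTRY = {
    "homepage": [
        "Appeal to working parents looking for reliable, affordable service.",
        "Keep copy short and action-oriented.",
    ],
}

def apply_context(stack, inline_text=None, for_name=None):
    """Model the two <context> modes: reference mode (for= present)
    appends the named stack and ignores the inline body; inline mode
    pushes the element's text as a single new frame."""
    if for_name is not None:
        return stack + REGISTRY[for_name]
    return stack + [inline_text]
```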
- Multiple `<context>` elements can be used in sequence or nested form. All are included in the evaluated stack.
- When `for="..."` is present, the referenced stack is appended to the current stack.
- Inner content is only used in inline mode (i.e. when `for` is not present).
- Content should be written in natural language, just as you'd describe a writing assignment to a human copywriter.
The `<context>` element allows you to shape the tone, intent, and specificity of LLM-generated content directly in markup. Whether defining a local hint or reusing a named profile, it enables modular, transparent control of how copy is framed across the page structure.