Four Level Framework for Prompt Engineering

@disler · Last active March 21, 2025

Watch the breakdown in the Q4 2024 prompt engineering update video.

Tools used:

  • LLM library
  • Ollama

Level 1: Ad hoc prompt

  • Quick, natural language prompts for rapid prototyping
  • Perfect for exploring model capabilities and behaviors
  • Can be run across multiple models for comparison
  • Great for one-off tasks and experimentation

Level 2: Structured prompt

  • Reusable prompts with clear purpose and instructions
  • Uses XML/structured format for better model performance
  • Contains static variables that can be modified
  • Solves well-defined, repeatable problems

Level 3: Structured prompt with example output

  • Builds on Level 2 by adding example outputs
  • Examples guide the model to produce specific formats
  • Increases consistency and reliability of outputs
  • Perfect for when output format matters

Level 4: Structured prompt with dynamic content

  • Production-ready prompts with dynamic variables
  • Can be integrated into code and applications
  • Infinitely scalable through programmatic updates
  • Foundation for building AI-powered tools and agents

Level 1 example (ad hoc prompt):

Summarize the content with 3 hot takes biased toward the author and 3 hot takes biased against the author
...paste content here...
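
A Level 1 prompt like the one above can be thrown at several local models to compare behavior. Here is a minimal sketch, assuming the ollama Python package is installed and the listed models have been pulled locally (the model names are placeholders, not part of the original gist):

```python
# Run the same ad hoc prompt across multiple local models and compare.
# Assumes `pip install ollama` and `ollama pull <model>` for each model.
import ollama

PROMPT = (
    "Summarize the content with 3 hot takes biased toward the author "
    "and 3 hot takes biased against the author\n\n"
    "...paste content here..."
)

for model in ["llama3.2", "mistral", "phi3"]:  # placeholder model names
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {model} ---")
    print(response["message"]["content"])
```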

Level 2 example (structured prompt):

<purpose>
Summarize the given content based on the instructions and example-output
</purpose>
<instructions>
<instruction>Output in markdown format</instruction>
<instruction>Summarize into 4 sections: High level summary, Main Points, Sentiment, and 3 hot takes biased toward the author and 3 hot takes biased against the author</instruction>
<instruction>Write the summary in the same format as the example-output</instruction>
</instructions>
<content>
{...} <<< update this manually
</content>

Level 3 example (structured prompt with example output):

<purpose>
Summarize the given content based on the instructions and example-output
</purpose>
<instructions>
<instruction>Output in markdown format</instruction>
<instruction>Summarize into 4 sections: High level summary, Main Points, Sentiment, and 3 hot takes biased toward the author and 3 hot takes biased against the author</instruction>
<instruction>Write the summary in the same format as the example-output</instruction>
</instructions>
<example-output>
# Title
## High Level Summary
...
## Main Points
...
## Sentiment
...
## Hot Takes (biased toward the author)
...
## Hot Takes (biased against the author)
...
</example-output>
<content>
{...} <<< update this manually
</content>

Level 4 example (structured prompt with dynamic content):

<purpose>
Summarize the given content based on the instructions and example-output
</purpose>
<instructions>
<instruction>Output in markdown format</instruction>
<instruction>Summarize into 4 sections: High level summary, Main Points, Sentiment, and 3 hot takes biased toward the author and 3 hot takes biased against the author</instruction>
<instruction>Write the summary in the same format as the example-output</instruction>
</instructions>
<example-output>
# Title
## High Level Summary
...
## Main Points
...
## Sentiment
...
## Hot Takes (biased toward the author)
...
## Hot Takes (biased against the author)
...
</example-output>
<content>
{{content}} <<< update this dynamically with code
</content>
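
This is where Level 4 pays off: the {{content}} variable is filled in by code instead of by hand. A minimal sketch, again assuming the ollama package; the template filename and model name are hypothetical:

```python
# Fill the Level 4 template's {{content}} variable programmatically,
# then send the finished prompt to a local model.
# `level4-prompt.xml` and the model name are placeholder assumptions.
import ollama

def summarize(content: str, model: str = "llama3.2") -> str:
    # Load the structured prompt and substitute the dynamic variable.
    with open("level4-prompt.xml", encoding="utf-8") as f:
        template = f.read()
    prompt = template.replace("{{content}}", content)
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]

print(summarize("...content fetched from a feed, scraper, or API..."))
```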

VS Code snippets for scaffolding these XML prompt blocks:

{
  "XML Prompt Block 1": {
    "prefix": "px1",
    "body": [
      "<purpose>",
      " $1",
      "</purpose>",
      "",
      "<instructions>",
      " <instruction>$2</instruction>",
      " <instruction>$3</instruction>",
      " <instruction>$4</instruction>",
      "</instructions>",
      "",
      "<${5:block1}>",
      "$6",
      "</${5:block1}>"
    ],
    "description": "Generate XML prompt block with instructions and block1"
  },
  "XML Tag Snippet Inline": {
    "prefix": "xxi",
    "body": [
      "<${1:tag}>$2</${1:tag}>"
    ],
    "description": "Create an XML tag with a customizable tag name and content"
  }
}
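
To use these, add them to a VS Code user snippets file (run "Preferences: Configure User Snippets" from the Command Palette), then type the px1 or xxi prefix in a prompt file and accept the completion; the $1 through $6 tab stops walk you through the purpose, instructions, and content blocks.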

@eemmikail commented Feb 8, 2025

"You can be inspired by the example. But if there is a best practice, follow it." I had this problem too but this instruction solved it :)

So, this is an interesting observation. It looks like this XML-based prompt template forces the LLM to take things very literally, meaning the output sentence structure ends up very similar to how the example sentences are structured.

I ran it again, this time removing the examples block, and it handled each description uniquely... and more accurately too.

So my conclusion right now is that for basic description prompts, don't use any example sentences... they cause the LLM to hew too closely to the structure of your examples.
