A ready-to-use, modular prompt template for designing robust, flexible prompts for any prompt-based AI system.

Modular Prompt Template for Prompt-Based AI Systems (2025)

A flexible, extensible, and structured prompt template designed for prompt-based AI systems using large language models (LLMs). This template follows 2025 best practices in prompt engineering, including modularity, reusability, and clarity.

📁 Metadata

  • Title: Modular Prompt Template
  • Version: 1.0
  • Last Updated: 2025-07-15
  • Author: Luis Alberto Martinez Riancho (@arenagroove)
  • Affiliation: Less Rain GmbH
  • Tags: prompt-engineering, LLM, modular, AI, template
  • License: MIT
  • Platform Compatibility:
    • Not all modules are supported by every LLM provider—check your platform’s documentation.
    • [ADVANCED] tags indicate modules requiring advanced LLM capabilities.

🧠 Purpose

This template provides a modular scaffold for building high-quality prompts that can be adapted across domains, tasks, and user contexts. It supports both core components (essential for most tasks) and optional modules (for advanced control, personalization, and robustness).
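
A minimal sketch of how the modular structure might be assembled programmatically; the `build_prompt` helper and the module dictionary below are illustrative assumptions, not part of the template, and the section names mirror the modules defined later in this document.

```python
# Illustrative sketch: assembling a single prompt string from filled-in modules.
# "build_prompt" is a hypothetical helper, not a required part of the template.
CORE_ORDER = ["Instruction", "Context", "Role/Persona", "Output Constraints", "Examples"]

def build_prompt(modules, optional_order=None):
    """Concatenate modules, core sections first, then any optional ones."""
    ordered = [name for name in CORE_ORDER if name in modules]
    ordered += [name for name in (optional_order or []) if name in modules and name not in ordered]
    return "\n\n".join(f"{name}:\n{modules[name].strip()}" for name in ordered)

prompt = build_prompt(
    {
        "Instruction": "Summarize the following article in three bullet points, focusing on key facts only.",
        "Context": 'Article: """\n[Paste article text here]\n"""',
        "Role/Persona": "You are an experienced business analyst writing for a corporate audience.",
        "Output Constraints": "- Format: Bullet points\n- Max 50 words\n- No subjective language",
        "Lens": "Analyze the article through a risk management lens.",
    },
    optional_order=["Lens"],
)
print(prompt)
```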

✅ Use Cases

  • Prompt engineering for LLM-based assistants, agents, or chatbots
  • Workflow automation and task orchestration
  • Prompt versioning and drift management
  • Teaching or documenting best practices in prompt design

🧩 Structure

Each section is clearly marked as:

  • [CORE] – Essential for most prompt-based tasks
  • [OPTIONAL] – Add for flexibility, specificity, or robustness; include as needed and in any order
  • [CONTEXT ENGINEERING] – Indicates modules reflecting advanced context-centric practices
  • [ADVANCED] – Requires advanced LLM features; check platform support

Explanatory comments are included under each heading for guidance.


Instruction [CORE]

Main task for the model; clear, actionable, and specific.

Instruction:
Summarize the following article in three bullet points, focusing on key facts only.

Context [CORE] [CONTEXT ENGINEERING]

All relevant background, data, or input needed for the task. Use clear delimiters for large blocks of text.

Context:
Article: """
[Paste article text here]
"""

Role/Persona [CORE]

Assigns expertise, tone, or perspective to the model.

Role/Persona:
You are an experienced business analyst writing for a corporate audience.

Output Constraints [CORE]

Specifies output format, length, style, and restrictions.

Output Constraints:
- Format: Bullet points
- Max 50 words
- No subjective language

Examples (Few-Shot) [CORE for all but simplest tasks]

Anchors expected output; always include for ambiguous or complex tasks.

Examples:
Input: "The company reported record profits..."
Output:
- Record profits reported for Q2.
- Revenue grew by 20% year-over-year.
- New products were key growth drivers.
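
If the target platform exposes a chat-style API, the same few-shot example can be supplied as alternating user/assistant messages instead of inline text. The role/content message shape below is a common convention, not tied to any specific provider.

```python
# Sketch: the few-shot example above expressed as alternating user/assistant
# messages for chat-style APIs; adapt the exact shape to your provider's client.
few_shot = [
    {"role": "user", "content": "The company reported record profits..."},
    {"role": "assistant", "content": (
        "- Record profits reported for Q2.\n"
        "- Revenue grew by 20% year-over-year.\n"
        "- New products were key growth drivers."
    )},
]
messages = (
    [{"role": "system", "content": "You are an experienced business analyst."}]
    + few_shot
    + [{"role": "user", "content": "Summarize the following article in three bullet points: ..."}]
)
```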

Lens [OPTIONAL]

Applies a specific analytical or stylistic filter (e.g., "risk management lens").

Lens:
Analyze the article through a risk management lens.

Audience [OPTIONAL]

Specifies the intended reader/user (e.g., "for non-technical executives").

Audience:
Intended for senior executives with limited technical background.

Style Guide [OPTIONAL]

Explicitly sets the writing style or tone (e.g., "formal academic style").

Style Guide:
Use concise, persuasive business language.

Drift Awareness [OPTIONAL] [CONTEXT ENGINEERING] [ADVANCED]

Detects and reports changes in meaning or output over time (prompt, concept, or output drift).

Drift Awareness:
- Prompt Drift: Changes due to prompt/model updates.
- Concept Drift: Shifts in meaning or context.
- Output Drift: Divergence from baseline summaries.
Compare your output to the baseline and flag any drift.
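
Output drift can also be screened for mechanically before asking the model to self-report; the sketch below compares a new summary against a stored baseline using a crude string-similarity check, with an arbitrary threshold chosen purely for illustration.

```python
# Sketch: flagging output drift by comparing a new summary against a stored baseline.
# SequenceMatcher is a crude proxy for semantic similarity; the 0.6 threshold is arbitrary.
from difflib import SequenceMatcher

def output_drift_detected(baseline, current, threshold=0.6):
    """Return True when the current output diverges noticeably from the baseline."""
    similarity = SequenceMatcher(None, baseline, current).ratio()
    return similarity < threshold

baseline = "- Record profits reported for Q2."
current = "- The company celebrated a historic, exciting quarter."
if output_drift_detected(baseline, current):
    print("Output drift detected: compare against the baseline and review the prompt.")
```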

Assumptions & Limitations [OPTIONAL] [CONTEXT ENGINEERING]

Lists assumptions or known gaps; promotes transparency.

Assumptions & Limitations:
Assume all data is current as of 2025. If any data is missing, state “Data not available.”

Step-by-Step Reasoning [OPTIONAL]

Guides the model through multi-step logic or chain-of-thought.

Step-by-Step Reasoning:
1. Identify the three most important facts.
2. Exclude opinions or minor details.
3. Phrase each point concisely.

Tool/Function Invocation [OPTIONAL] [CONTEXT ENGINEERING] [ADVANCED]

Specifies tool/API call syntax per platform; clarify fallback behavior if invocation fails.

Tool/Function Invocation:
- OpenAI: call function_name(args)
- Anthropic: [TOOL: tool_name] input
If calculations are required, use the calculator API and cite results.
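
Most platforms accept tool definitions as a name plus a JSON-Schema description of their parameters, although the exact request wrapper varies. The sketch below shows a hypothetical calculator tool in that provider-agnostic shape, with the fallback handled in application code.

```python
# Provider-agnostic sketch of a tool definition using JSON-Schema-style parameters.
# The exact wrapper (e.g. a "tools" or "functions" field) differs per platform;
# consult your provider's documentation before using this shape.
calculator_tool = {
    "name": "calculator",
    "description": "Evaluate a basic arithmetic expression and return the result.",
    "parameters": {
        "type": "object",
        "properties": {
            "expression": {"type": "string", "description": "e.g. '20 * 1.2'"},
        },
        "required": ["expression"],
    },
}

def tool_fallback():
    # Fallback behaviour if the tool call fails or is unsupported on the platform.
    return "Calculation unavailable; answering without the calculator tool."
```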

Error Handling/Fallback Output [OPTIONAL] [CONTEXT ENGINEERING]

Instructions for missing/ambiguous data or task failure.

Error Handling/Fallback Output:
If unable to complete the summary due to insufficient information, respond: “Summary not possible with the provided data.”
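
The same fallback can also be enforced on the application side before the model is called; the length check in the sketch below is a trivial, assumed heuristic rather than a recommended rule.

```python
# Sketch: application-side fallback when the input is too thin to summarize.
# The 200-character minimum is an arbitrary, illustrative heuristic.
FALLBACK = "Summary not possible with the provided data."

def summarize_or_fallback(article, summarize_fn):
    """Call the model only when there is enough material; otherwise return the fallback."""
    if not article or len(article.strip()) < 200:
        return FALLBACK
    return summarize_fn(article)
```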

Ethics & Bias Mitigation [OPTIONAL]

Instructs the model to avoid sensitive or biased content and to flag any sensitive topics.

Ethics & Bias Mitigation:
Avoid subjective or potentially biased statements; flag any sensitive topics.

Localization/Internationalization [OPTIONAL] [CONTEXT ENGINEERING]

Specifies language, region, or cultural context.

Localization/Internationalization:
Use UK English and adapt examples for the European market.

Personalization Hooks [OPTIONAL] [CONTEXT ENGINEERING] [ADVANCED]

Insert user data dynamically; note if not supported everywhere.

Personalization Hooks:
Include the user's name in the greeting if provided.
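
Personalization hooks are typically filled in before the prompt is sent; when the platform has no native user-profile support, a plain template substitution such as the sketch below is usually sufficient.

```python
# Sketch: filling a personalization hook before the prompt is sent.
# Falls back gracefully when no user name is available.
def personalize(greeting_template, user_name=None):
    return greeting_template.format(name=user_name or "there")

print(personalize("Hello {name}, here is your summary:", "Luis"))
print(personalize("Hello {name}, here is your summary:"))
```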

Meta/Reflection [OPTIONAL] [CONTEXT ENGINEERING]

Self-check for instruction adherence and output quality.

Meta/Reflection:
- Review your summary for alignment with the baseline.
- Ensure all constraints and the specified lens are applied.
- If any drift is detected, briefly describe it.

Follow-Ups [OPTIONAL]

Suggests next steps, clarifying questions, or prepares for multi-turn interaction.

Follow-Ups:
- Suggest two follow-up questions a user might ask based on your summary.
- Be prepared to expand any bullet point into a paragraph if requested.

User Feedback Collection [OPTIONAL] [CONTEXT ENGINEERING]

Requests user feedback on the output and provides guidance for prompt iteration.

User Feedback Collection:
Please rate the usefulness of this summary on a scale from 1 to 5.
If users rate output below 3/5, log the prompt and output for review and revision.
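
The review rule in this module (log anything rated below 3/5) can be wired up with a few lines of application code; the JSONL log file in the sketch below is an assumed destination, not part of the template.

```python
# Sketch: logging low-rated outputs for later prompt review.
# "feedback_log.jsonl" is an assumed destination, chosen for illustration.
import json
import time

def record_feedback(prompt, output, rating, path="feedback_log.jsonl"):
    """Log prompt/output pairs rated below 3/5 so they can be reviewed and revised."""
    if rating < 3:
        entry = {"timestamp": time.time(), "rating": rating, "prompt": prompt, "output": output}
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")
```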

🗂️ Module Clarification Table

| Module | Purpose/Clarification | Platform Notes |
| --- | --- | --- |
| Meta/Reflection | Self-check for instruction adherence and output quality | Universal |
| Drift Awareness | Detect/report changes in meaning or output over time | Advanced; not always present |
| Examples (Few-Shot) | Anchor expected output; include for all but the simplest tasks | Universal |
| Tool/Function Invocation | Specify tool/API call syntax per platform; clarify fallback | Platform-specific |
| Personalization Hooks | Insert user data dynamically; note if not supported everywhere | Advanced |

💡 Tips for Using This Template

  • Start with core modules, then add optional ones incrementally.
  • Use this template as a base for prompt generators or toolkits.
  • Regularly version your prompts to track changes and detect drift (see the versioning sketch after this list).
  • Use clear delimiters (like """ or ---) to separate content sections in actual prompts.
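
One lightweight way to version prompts, sketched below, is to fingerprint the prompt text with a content hash so that silent edits are caught before they show up as drift; the 12-character truncation is an arbitrary choice.

```python
# Sketch: lightweight prompt versioning via content hashing, so silent edits to
# the prompt text (one source of prompt drift) are detectable between runs.
import hashlib

def prompt_version(prompt_text):
    """Return a short, stable fingerprint of the prompt text."""
    return hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()[:12]

stored = prompt_version("Summarize the following article in three bullet points.")
current = prompt_version("Summarize the following article in 3 bullet points.")
if stored != current:
    print("Prompt text changed since the stored version: review for prompt drift.")
```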

License: MIT — Free to use, modify, and distribute.

Assessment: Modular Prompt Template for Prompt-Based AI Systems (2025)

Overview

This template is a comprehensive, modular framework for constructing prompts for large language model (LLM) systems, reflecting the most advanced prompt engineering and context engineering practices as of 2025. It is structured for clarity, adaptability, and professional documentation, with explicit guidance and actionable examples.

Strengths

Alignment with 2025 Best Practices

  • Modular Design: Clearly separates core and optional modules, enabling flexible prompt construction for a wide range of tasks and domains.
  • Explicit Context Handling: Elevates context to a core, distinctly marked module, with clear placeholders and delimiter guidance—an essential feature in modern context engineering.
  • Clarity and Usability: Each module includes explanatory comments and copy-paste-ready markdown blocks, lowering the barrier for adoption and reducing ambiguity.
  • Few-Shot Examples: Strongly encourages the use of input-output examples for all but the simplest prompts, anchoring model behavior and improving reliability.
  • Role/Persona and Output Constraints: Ensures output consistency and relevance by specifying model persona and output requirements.
  • Iterative Feedback and Drift Management: Incorporates modules for user feedback, drift awareness, and self-checking, supporting continuous improvement and robust lifecycle management.
  • Platform Awareness: Advanced modules are flagged, and compatibility notes are included, preparing users for differences across LLM platforms.

Emerging Context Engineering Features

  • Context Engineering Markers: Uses explicit [CONTEXT ENGINEERING] tags to help practitioners identify modules that embody advanced, system-level context management.
  • Drift Awareness and Meta/Reflection: Provides distinct modules for monitoring and reporting drift, as well as for output self-verification.
  • Tool/Function Invocation: Offers platform-specific syntax and fallback guidance for integrating external tools and APIs, supporting agentic and context-rich workflows.
  • Personalization, Localization, Feedback: Includes modules for adaptive, user-driven context, supporting dynamic interaction and internationalization.

Documentation Quality

  • Professional Introduction and Metadata: Begins with a concise summary, detailed metadata, and a clear purpose statement, consistent with open-source and professional standards.
  • Module Clarification Table: Summarizes advanced modules, their purposes, and platform notes for rapid reference.
  • Tips and Licensing: Provides actionable usage tips and clear licensing information.

Weaknesses and Areas for Improvement

  • Module Overlap: The distinction between “Meta/Reflection” and “Drift Awareness” could be sharper; clearer guidance on their unique roles is recommended.
  • Examples Section: More varied and edge-case input-output examples would further strengthen the template, especially for ambiguous or complex tasks.
  • Tool Invocation Details: While platform syntax is included, more explicit fallback logic and practical failure scenarios would enhance real-world usability.
  • Feedback Iteration: The user feedback module could offer more detailed instructions on how to systematically incorporate feedback into prompt revisions.
  • Ethics and Bias: The ethics module is present but could benefit from more actionable checklists or prompts for bias mitigation.
  • Assumptions & Limitations: Encouraging explicit enumeration of all known data gaps, uncertainties, and model limitations would add transparency.
  • Advanced Feature Warnings: Advanced modules are flagged, but clearer warnings about potential incompatibilities or degraded behavior on less capable platforms would be helpful.

Novelty

  • Context Engineering Tagging: The use of explicit markers for context engineering modules is a novel, transparent approach that aids both education and practice.
  • Platform Compatibility Notes: The inclusion of [ADVANCED] tags and compatibility guidance prepares users for real-world deployment across diverse LLM infrastructures.
  • Structured Feedback and Drift Management: The formalization of feedback loops and drift monitoring is advanced compared to most prior templates.
  • Comprehensive, Actionable Scaffolding: The presence of explicit, ready-to-use markdown blocks for each module is uncommon and highly practical.

Fact Keys Table

| Area | Present | 2025 Best Practice | Context Engineering Sign |
| --- | --- | --- | --- |
| Modular Structure | ✓ | ✓ | — |
| Explicit Context Module | ✓ | ✓ | ✓ |
| Role/Persona Assignment | ✓ | ✓ | — |
| Output Constraints | ✓ | ✓ | — |
| Few-Shot Examples | ✓ | ✓ | — |
| Drift Awareness | ✓ | ✓ | ✓ |
| Tool/Function Invocation | ✓ | ✓ | ✓ |
| Error Handling | ✓ | ✓ | ✓ |
| Personalization Hooks | ✓ | ✓ | ✓ |
| Localization/Internationalization | ✓ | ✓ | ✓ |
| Meta/Reflection | ✓ | ✓ | ✓ |
| User Feedback Collection | ✓ | ✓ | ✓ |
| Ethics & Bias Mitigation | ✓ | ✓ | — |
| Platform Compatibility Notes | ✓ | ✓ | — |

Final Verdict

This template is a robust, advanced resource that faithfully embodies both prompt engineering and context engineering principles as of 2025. Its modularity, explicit examples, context-centric markers, and professional documentation make it suitable for practitioners, educators, and product teams. For further advancement, sharper module distinctions, richer examples, and expanded practical notes on feedback and bias are recommended. The template stands as a model for clarity, completeness, and alignment with the evolving standards of the field.
