@evelant
Created October 8, 2025 15:54

Meta-Prompt for Refining an AI Prompt for Model-Specific Use

You are an LLM tasked with refining the provided input prompt to optimize it for your own capabilities and knowledge. The input prompt describes a software engineering environment with tools, modes, rules, and objectives for task execution. Your goal is to produce a streamlined version that retains all information critical to effective task execution while removing content that is redundant or obvious given your understanding. The refined prompt must remain compatible with the original structure and be usable by you for tasks in the specified environment.

The prompt you refine will be accompanied by other details about your role, supplied separately. You are expected to perform your role fully using that supplemental role description and its rules alongside the prompt you refine for yourself.

Instructions

  1. Analyze the Input Prompt:

    • Review the input prompt for sections: Tool Use, Tools, MCP Servers, Capabilities, Modes, Rules, System Information, Objective, and User's Custom Instructions.
    • Identify content you already understand or can infer from your knowledge of programming, file operations, CLI commands, browser automation, or XML formatting.
    • Note areas where the prompt addresses potential errors (e.g., tool confirmation, file content accuracy) or environment specifics (e.g., Linux 6.8, Windows 11, macOS, workspace directory).
  2. Filter Redundant Content:

    • Remove instructions that are obvious given your capabilities, such as:
      • Standard XML formatting if you are proficient in structured data.
      • General programming concepts (e.g., file reading/writing) if you have strong coding knowledge.
      • Tool usage details that align with common practices (e.g., execute_command for CLI; a sketch follows this list item).
    • Simplify verbose explanations if you can infer the intent (e.g., reduce detailed browser_action steps if you understand Puppeteer).
    • Exclude examples unless they clarify unique or error-prone aspects (e.g., tool-specific parameters, file modifications).
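For illustration, a routine tool call of the kind referred to above rarely needs a verbose usage explanation in the refined prompt. The sketch below assumes an XML tool-call schema with an execute_command tool taking a command parameter; the actual tag and parameter names come from the original prompt's tool definitions, not from this meta-prompt.

```xml
<!-- Hypothetical execute_command call; tag and parameter names are
     assumed for illustration only. -->
<execute_command>
<command>ls -la src</command>
</execute_command>
```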
  3. Preserve Critical Context:

    • Retain all tool descriptions, parameters (especially required ones), and usage instructions, unless you are certain they are unnecessary for your execution.
    • Keep environment details: workspace directory, OS, shell.
    • Preserve rules that enforce specific behaviors, such as:
      • Waiting for user or programmatic confirmation after tool use.
      • Providing complete file content for write_to_file.
      • Using relative paths and not changing directories outside the workspace.
      • Avoiding conversational phrases (e.g., "Great", "Sure").
    • Retain instructions for error-prone areas, such as newline handling (use \n on Linux), tool confirmation, complete file content for write_to_file, and mode-specific file restrictions (e.g., Architect mode's .md-only limit); a write_to_file sketch illustrating these rules follows this item.
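A minimal sketch of a rule-abiding write_to_file call is shown below, assuming the tool takes path and content parameters in XML form (the authoritative schema is whatever the original prompt defines). Note the workspace-relative path, the complete rather than partial file content, and \n line endings; after such a call, the model must still wait for user or programmatic confirmation before the next tool use.

```xml
<!-- Hypothetical write_to_file call; parameter names are assumed.
     The path is relative to the workspace and the content is the
     complete file, not a fragment. -->
<write_to_file>
<path>src/config.json</path>
<content>
{
  "env": "development",
  "port": 3000
}
</content>
</write_to_file>
```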
  4. Adapt to Your Capabilities:

    • If you require additional clarification (e.g., specific tool behaviors), add concise guidance to address your needs.
    • For less capable models, include explicit examples for complex tools (e.g., write_to_file, apply_diff) or error-prone areas (e.g., file modifications); see the apply_diff sketch after this item.
    • For advanced models, rely on inference for standard practices, but clarify unique constraints (e.g., Linux environment, tool confirmation).
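As an example of the explicit guidance a less capable model might need, here is a sketch of an apply_diff call. The search/replace markers below are an assumed convention for illustration; the authoritative diff format is whatever the original prompt specifies, and the search text must match the existing file exactly.

```xml
<!-- Hypothetical apply_diff call; the diff markers are an assumed
     search/replace convention, not a format defined here. The SEARCH
     text must match the target file verbatim. -->
<apply_diff>
<path>src/utils.py</path>
<diff>
<<<<<<< SEARCH
def greet(name):
    print("Hello " + name)
=======
def greet(name: str) -> None:
    print(f"Hello {name}")
>>>>>>> REPLACE
</diff>
</apply_diff>
```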
  5. Output the Refined Prompt:

    • Structure the refined prompt to match the input: sections for Tool Use, Tools, MCP Servers, Capabilities, Modes, Rules, System Information, Objective, and User's Custom Instructions.
    • Ensure the output is concise, clear, and tailored to your needs.
    • Return the refined prompt in markdown format.
    • Do not include introductory text or a postscript in your response; respond only with your output. (A skeleton of the expected structure follows this list.)
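As a sketch of the expected shape, the skeleton below uses only the section names listed above; the comments are placeholders for the content each section should carry.

```markdown
# Refined Prompt

## Tool Use
<!-- concise tool-call format plus the confirmation rule -->

## Tools
<!-- each tool: purpose and required parameters only -->

## MCP Servers

## Capabilities

## Modes
<!-- mode-specific restrictions, e.g., file-type limits -->

## Rules
<!-- workspace-relative paths, complete file content, \n newlines, no conversational filler -->

## System Information
<!-- OS, shell, workspace directory -->

## Objective

## User's Custom Instructions
```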

Criteria for Refinement

  • Redundant if: The content is a standard practice you are trained on (e.g., XML syntax, file I/O, CLI execution) or can be inferred from general programming knowledge.
  • Critical if: The content specifies unique constraints (e.g., workspace directory, Linux environment), addresses error-prone areas (e.g., tool confirmation, file content accuracy), or includes required parameters for tools.
  • Model-Specific: Add or retain details based on your weaknesses (e.g., explicit examples for smaller models) or strengths (e.g., minimal guidance for advanced inference).

Example Considerations

  • Include minimal examples for error-prone tools (e.g., <content>line1\nline2</content> for write_to_file newlines, <diff> format for apply_diff exact matches) if you need clarity.
  • Emphasize waiting for user or programmatic confirmation after each tool use.
  • Clarify mode-specific constraints, such as file restrictions, if relevant to your tasks.

Output Format

Markdown

Notes

  • Update your knowledge with new insights from refining this prompt, treating this process as a learning opportunity.
  • If you encounter ambiguities, assume Linux-compatible defaults (e.g., \n newlines) but note any added clarifications in the refined prompt.
  • Do not engage in conversational responses; focus on producing the refined prompt efficiently.