Watch the breakdown here in the Q4 2024 prompt engineering update video.
Level 1: Ad hoc prompts
- Quick, natural language prompts for rapid prototyping
- Perfect for exploring model capabilities and behaviors
- Can be run across multiple models for comparison
- Great for one-off tasks and experimentation
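A Level 1 prompt is just a throwaway string you can fire at several models side by side. Here is a minimal sketch; `call_model` is a hypothetical stand-in for whatever client library you actually use, not a real API.

```python
# Level 1: a quick, one-off prompt run across multiple models for comparison.
# `call_model` is a hypothetical stub, not a real SDK call.
def call_model(model: str, prompt: str) -> str:
    return f"[{model}] response to: {prompt}"  # placeholder for a real API call

prompt = "Explain retrieval-augmented generation in two sentences."
for model in ["model-a", "model-b", "model-c"]:
    print(model, "->", call_model(model, prompt))
```

Swap the stub for your real client and the loop gives you a quick capability comparison across models.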

Level 2: Structured, reusable prompts
- Reusable prompts with clear purpose and instructions
- Uses XML/structured format for better model performance
- Contains static variables that can be modified
- Solves well-defined, repeatable problems
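A Level 2 prompt might look like the sketch below: XML-structured, with static values you edit by hand between runs. The tag names (`purpose`, `instructions`, `product-description`) and the content are illustrative assumptions, not a fixed standard.

```python
# A minimal Level 2 prompt: reusable, XML-structured, with static variables
# edited by hand between runs (tag names are illustrative).
LEVEL2_PROMPT = """\
<purpose>Summarize a product description for a landing page.</purpose>
<instructions>
    <instruction>Keep the summary under 30 words.</instruction>
    <instruction>Write in a neutral, factual tone.</instruction>
</instructions>
<product-description>
    A lightweight, waterproof hiking jacket with sealed seams.
</product-description>
"""

print(LEVEL2_PROMPT)
```

The structure makes the prompt's purpose and constraints explicit, so it can be reused for any similar well-defined task by swapping the static values.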

Level 3: Prompts with examples
- Builds on Level 2 by adding example outputs
- Examples guide the model to produce specific formats
- Increases consistency and reliability of outputs
- Perfect for when output format matters
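Building on the same hypothetical Level 2 shape, a Level 3 prompt adds an examples block that shows the model the exact output format wanted. Again, tag names and example text are illustrative assumptions.

```python
# Level 3: the same structured prompt, plus example outputs that pin down
# the format the model should produce (tag names are illustrative).
LEVEL3_PROMPT = """\
<purpose>Summarize a product description for a landing page.</purpose>
<instructions>
    <instruction>Keep the summary under 30 words.</instruction>
</instructions>
<examples>
    <example>Waterproof trail shoes with aggressive grip, built for wet terrain.</example>
    <example>Packable down vest that keeps your core warm without the bulk.</example>
</examples>
<product-description>
    A lightweight, waterproof hiking jacket with sealed seams.
</product-description>
"""

print(LEVEL3_PROMPT)
```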

Level 4: Production prompts with dynamic variables
- Production-ready prompts with dynamic variables
- Can be integrated into code and applications
- Infinitely scalable through programmatic updates
- Foundation for building AI-powered tools and agents
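At Level 4 the static values become dynamic variables filled in by code, so the prompt can be driven from application data. A minimal sketch, assuming the same illustrative template shape; the function and parameter names are hypothetical.

```python
# Level 4: the template's variables are filled in programmatically, making
# the prompt scalable from application code (names are illustrative).
LEVEL4_TEMPLATE = """\
<purpose>Summarize a product description for a landing page.</purpose>
<instructions>
    <instruction>Keep the summary under {max_words} words.</instruction>
</instructions>
<product-description>
    {description}
</product-description>
"""

def build_prompt(description: str, max_words: int = 30) -> str:
    """Render the template with dynamic, caller-supplied variables."""
    return LEVEL4_TEMPLATE.format(description=description, max_words=max_words)

prompt = build_prompt("A lightweight, waterproof hiking jacket with sealed seams.")
print(prompt)
```

Because the rendering is just a function call, the same template can be updated programmatically and embedded in tools or agent pipelines.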
So, an interesting observation: this XML-based prompt template forces the LLM to take things very literally. The output sentence structure ends up very close to how the example sentences are structured.

I ran it again with the examples block removed, and it handled each description uniquely, and more accurately too.

So my conclusion for now: for basic description prompts, don't include example sentences. They cause the LLM to hew too closely to the structure of your examples.
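One way to run this comparison systematically is to strip the examples block programmatically and send both variants to the model. This is a sketch under stated assumptions: the prompt shape is illustrative, and the regex assumes a single `<examples>...</examples>` block.

```python
import re

# Illustrative prompt with an examples block (tag names are assumptions).
PROMPT_WITH_EXAMPLES = """\
<instructions>
    <instruction>Describe the product in one sentence.</instruction>
</instructions>
<examples>
    <example>Waterproof trail shoes with aggressive grip.</example>
</examples>
<product-description>A lightweight hiking jacket.</product-description>
"""

def strip_examples(prompt: str) -> str:
    """Remove a single <examples>...</examples> block for A/B comparison."""
    return re.sub(r"<examples>.*?</examples>\s*", "", prompt, flags=re.DOTALL)

variant_a = PROMPT_WITH_EXAMPLES              # with examples
variant_b = strip_examples(PROMPT_WITH_EXAMPLES)  # without examples
```

Sending `variant_a` and `variant_b` to the same model makes the structural-mimicry effect easy to see on your own descriptions.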