@sellooh
Created June 21, 2025 17:11

The Prompt Engineering Playbook for Programmers

Introduction

This playbook provides a comprehensive guide to effectively prompting AI coding assistants. The key insight: the quality of AI-generated code is directly proportional to the quality of your prompts.

The Foundations of Effective Code Prompting

1. Provide Rich Context

AI coding assistants don't know your project's specifics. Always include:

  • Programming language and version
  • Framework or library you're using
  • Details about the specific function or component
  • Complete error messages
  • Expected behavior vs. actual behavior

Example:

"I'm working with a React 18 component using TypeScript. I'm trying to implement a custom hook 
that fetches user data from an API endpoint. The hook should handle loading states and errors, 
and cache the results. I'm getting a TypeScript error about the return type..."

2. Be Specific About Goals

Vague questions yield vague answers. Instead of "Why isn't my code working?", try:

"My useUserData hook is expected to return {data: User | null, loading: boolean, error: Error | null} 
but TypeScript is complaining that the return type doesn't match. Here's my current implementation..."

3. Break Down Complex Tasks

Large problems should be decomposed into smaller, manageable chunks:

First: "Create a basic React hook that fetches data from an API"
Then: "Add error handling to the hook"
Then: "Add caching functionality to prevent redundant API calls"
Finally: "Add TypeScript types for the hook's return value"

4. Include Examples

Concrete examples clarify expectations:

"I want a function that transforms this input:
Input: [{id: 1, name: 'Alice'}, {id: 2, name: 'Bob'}]
Into this output: {1: 'Alice', 2: 'Bob'}

Here's what I have so far..."
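The transformation in that example can be sketched as a small helper (the `toNameById` name and the `User` type are illustrative, not from any particular codebase):

```typescript
// Illustrative shape of the input records.
interface User {
  id: number;
  name: string;
}

// Build a lookup object keyed by id, mapping each id to the user's name.
function toNameById(users: User[]): Record<number, string> {
  const result: Record<number, string> = {};
  for (const u of users) {
    result[u.id] = u.name;
  }
  return result;
}

// toNameById([{id: 1, name: 'Alice'}, {id: 2, name: 'Bob'}])
// → {1: 'Alice', 2: 'Bob'}
```

Giving the assistant a concrete input/output pair like this removes ambiguity about key types, value types, and how duplicates or empty arrays should behave.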

5. Leverage Roles and Personas

Set the stage for the type of help you need:

"Act as a senior React developer who specializes in performance optimization. 
Review this component and suggest improvements..."

6. Iterate and Refine

Don't expect perfection in one shot. Build on responses:

Initial: "Create a debounce function"
Follow-up: "Now make it cancelable"
Refinement: "Add TypeScript types with generics"
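The end result of that three-step iteration might look something like this (a sketch; the exact API shape, such as exposing `cancel` as a property, is one reasonable choice among several):

```typescript
// Debounce: delay calling `fn` until `delay` ms have passed without a new call.
// Generic over the argument types so the wrapper stays fully typed.
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  delay: number
): ((...args: Args) => void) & { cancel: () => void } {
  let timer: ReturnType<typeof setTimeout> | undefined;

  const debounced = ((...args: Args) => {
    // Each new call resets the timer, postponing the real invocation.
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  }) as ((...args: Args) => void) & { cancel: () => void };

  // Step 2 of the iteration: make it cancelable.
  debounced.cancel = () => {
    if (timer !== undefined) clearTimeout(timer);
    timer = undefined;
  };

  return debounced;
}
```

Asking for the generics and the `cancel` method up front in one prompt often works too, but iterating lets you review and correct each layer before the next is added.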

Prompt Strategies for Common Scenarios

Debugging Prompts

Structure:

  1. Describe the problem clearly
  2. Share relevant code snippets
  3. Include error messages
  4. Explain what you've already tried

Template:

"I'm encountering [specific error/bug] in my [language/framework] code.

Here's the error message:
[Complete error message]

Here's the relevant code:
[Code snippet]

Expected behavior: [What should happen]
Actual behavior: [What's happening]

I've already tried:
- [Attempt 1]
- [Attempt 2]

What could be causing this issue?"

Refactoring Prompts

Structure:

  1. State explicit refactoring goals
  2. Provide complete code context
  3. Request explanations with changes

Template:

"I need to refactor this [language] code to [specific goal: improve readability/performance/maintainability].

Current code:
[Code to refactor]

Specific requirements:
- [Requirement 1]
- [Requirement 2]

Please refactor this code and explain each change you make."

Implementing New Features

Structure:

  1. Start with high-level instructions
  2. Provide project context
  3. Include examples
  4. Build incrementally

Template:

"I need to implement [feature description] in my [framework/language] application.

Project context:
- [Relevant architecture details]
- [Existing patterns to follow]
- [Constraints or requirements]

Example usage:
[How the feature should work]

Please help me implement this step by step."

Common Anti-Patterns to Avoid

❌ Vague Prompts

Bad: "Fix this code"
Good: "This function should validate email addresses but it's incorrectly accepting 'test@' as valid"
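The "good" prompt works because it pins down a concrete failing input. A minimal validator that correctly rejects 'test@' might look like this (the `isValidEmail` name and regex are illustrative, not a full RFC 5322 check):

```typescript
// Require non-empty text on both sides of '@' and a dot in the domain part.
// Intentionally simple; production email validation is more involved.
function isValidEmail(email: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

// isValidEmail("test@")            → false (nothing after the '@')
// isValidEmail("alice@example.com") → true
```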

❌ Overloaded Requests

Bad: "Build me a complete e-commerce site with user auth, payment processing, and inventory management"
Good: "Help me create a user registration form with email and password validation"

❌ Missing Clear Questions

Bad: "Here's my code" [dumps code]
Good: "I'm trying to optimize this sorting algorithm. It works but takes too long with large datasets. How can I improve its performance?"

❌ Unclear Success Criteria

Bad: "Make this better"
Good: "Refactor this to follow SOLID principles, specifically addressing the single responsibility principle"

❌ Ignoring Conversation Flow

Bad: Jumping between unrelated topics
Good: Building on previous responses and maintaining context

Advanced Prompting Techniques

Chain of Thought Prompting

Ask the AI to explain its reasoning:

"Walk me through your thought process for solving this algorithm problem step by step. 
Consider time complexity, space complexity, and edge cases."

Few-Shot Prompting

Provide examples of the pattern you want:

"Here are examples of how we format our API responses:
Success: {status: 'success', data: {...}}
Error: {status: 'error', message: '...', code: 'ERROR_CODE'}

Now create an endpoint handler following this pattern..."
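A handler that follows the few-shot pattern above might be sketched like this (`ApiResponse`, `getUserHandler`, and the in-memory store are hypothetical names, just to show the envelope being reused):

```typescript
// Discriminated union matching the two example response shapes.
type ApiResponse<T> =
  | { status: "success"; data: T }
  | { status: "error"; message: string; code: string };

interface User {
  id: number;
  name: string;
}

// Hypothetical in-memory store standing in for a database.
const users: Record<number, User> = { 1: { id: 1, name: "Alice" } };

// Endpoint handler that always returns the agreed response envelope.
function getUserHandler(id: number): ApiResponse<User> {
  const user = users[id];
  if (user === undefined) {
    return { status: "error", message: `User ${id} not found`, code: "USER_NOT_FOUND" };
  }
  return { status: "success", data: user };
}
```

Because the prompt showed both the success and error shapes, the assistant has no room to invent a third envelope; the `status` discriminant also lets TypeScript narrow the union for callers.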

Constraint-Based Prompting

Set explicit boundaries:

"Implement this feature with these constraints:
- No external dependencies
- Must work in IE11
- Under 50 lines of code
- Fully typed (no 'any' in TypeScript)"

Rubber Duck Debugging

Use AI as a sounding board:

"I'm going to explain my code logic, and I want you to point out any flaws in my reasoning:
[Explanation of approach]"

Tips for Your Codebase

When working with this project specifically:

  1. Include Effect-TS context: Always mention you're using Effect-TS when asking about async operations or error handling
  2. Reference project structure: Mention the relevant directory (e.g., "in our cluster/domain structure")
  3. Specify runtime: Clarify if you're working with Node, Bun, or browser environments
  4. Include type signatures: For Effect code, always include the full Effect<A, E, R> type

Example for this project:

"I'm working in our cluster/domain/browser module using Effect-TS. 
I need to create a new browser action that captures screenshots.
It should return Effect<Screenshot, BrowserError, BrowserService>.
We're using the Effect.gen syntax with yield*.
How should I structure this following our existing patterns?"

Key Takeaways

  1. Context is King: The more context you provide, the better the response
  2. Specificity Drives Quality: Precise questions get precise answers
  3. Iterate and Build: Don't expect perfect solutions immediately
  4. Learn from Interactions: Each prompt teaches you how to communicate better with AI
  5. Maintain Conversation Flow: Build on previous responses for complex tasks

Remember

  • AI assistants are tools, not magic
  • Good prompts save time and reduce frustration
  • Practice makes perfect - refine your prompting skills over time
  • When in doubt, provide more context rather than less
