This playbook is a practical guide to prompting AI coding assistants effectively. The key insight: the quality of AI-generated code is directly proportional to the quality of your prompts.
AI coding assistants don't know your project's specifics. Always include:
- Programming language and version
- Framework or library you're using
- Details about the specific function or component
- Complete error messages
- Expected behavior vs. actual behavior
Example:
"I'm working with a React 18 component using TypeScript. I'm trying to implement a custom hook
that fetches user data from an API endpoint. The hook should handle loading states and errors,
and cache the results. I'm getting a TypeScript error about the return type..."
Vague questions yield vague answers. Instead of "Why isn't my code working?", try:
"My useUserData hook is expected to return {data: User | null, loading: boolean, error: Error | null}
but TypeScript is complaining that the return type doesn't match. Here's my current implementation..."
Large problems should be decomposed into smaller, manageable chunks:
First: "Create a basic React hook that fetches data from an API"
Then: "Add error handling to the hook"
Then: "Add caching functionality to prevent redundant API calls"
Finally: "Add TypeScript types for the hook's return value"
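The end state of those incremental steps can be sketched, framework-free, as a plain typed fetch-and-cache helper. This is an assumed shape, not the only valid answer; the names FetchResult and fetchCached are illustrative:

```typescript
// Hypothetical end result of the incremental prompts above:
// fetch data, handle errors, cache results, and type the return value.
type FetchResult<T> = { data: T | null; error: Error | null };

const cache = new Map<string, unknown>();

async function fetchCached<T>(
  key: string,
  fetcher: () => Promise<T>,
): Promise<FetchResult<T>> {
  // Caching step: skip the fetch when a result for this key already exists
  if (cache.has(key)) {
    return { data: cache.get(key) as T, error: null };
  }
  try {
    const data = await fetcher();
    cache.set(key, data);
    return { data, error: null };
  } catch (e) {
    // Error-handling step: normalize anything thrown into an Error
    return { data: null, error: e instanceof Error ? e : new Error(String(e)) };
  }
}
```

In a real React hook you would wrap this logic in useState/useEffect; the fetch, error-handling, and caching core stays the same.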
Concrete examples clarify expectations:
"I want a function that transforms this input:
Input: [{id: 1, name: 'Alice'}, {id: 2, name: 'Bob'}]
Into this output: {1: 'Alice', 2: 'Bob'}
Here's what I have so far..."
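A sketch of what that prompt might yield (the function name indexById is illustrative):

```typescript
interface User {
  id: number;
  name: string;
}

// Build an id -> name lookup from an array of users
function indexById(users: User[]): Record<number, string> {
  return users.reduce<Record<number, string>>((acc, user) => {
    acc[user.id] = user.name;
    return acc;
  }, {});
}
```

Given the input from the prompt, this produces {1: 'Alice', 2: 'Bob'}.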
Set the stage for the type of help you need:
"Act as a senior React developer who specializes in performance optimization.
Review this component and suggest improvements..."
Don't expect perfection in one shot. Build on responses:
Initial: "Create a debounce function"
Follow-up: "Now make it cancelable"
Refinement: "Add TypeScript types with generics"
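The three iterations above might converge on something like this sketch; the exact API (here, a .cancel() method attached via Object.assign) is one reasonable choice among several:

```typescript
// Generic, cancelable debounce: delays `fn` until `wait` ms pass with no
// new call; `.cancel()` drops any pending invocation.
type Debounced<Args extends unknown[]> = ((...args: Args) => void) & {
  cancel: () => void;
};

function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  wait: number,
): Debounced<Args> {
  let timer: ReturnType<typeof setTimeout> | undefined;

  const call = (...args: Args): void => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
  const cancel = (): void => {
    if (timer !== undefined) clearTimeout(timer);
    timer = undefined;
  };
  return Object.assign(call, { cancel });
}
```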
For debugging prompts, use this structure:
- Describe the problem clearly
- Share relevant code snippets
- Include error messages
- Explain what you've already tried
Template:
"I'm encountering [specific error/bug] in my [language/framework] code.
Here's the error message:
[Complete error message]
Here's the relevant code:
[Code snippet]
Expected behavior: [What should happen]
Actual behavior: [What's happening]
I've already tried:
- [Attempt 1]
- [Attempt 2]
What could be causing this issue?"
For refactoring prompts, use this structure:
- State explicit refactoring goals
- Provide complete code context
- Request explanations with changes
Template:
"I need to refactor this [language] code to [specific goal: improve readability/performance/maintainability].
Current code:
[Code to refactor]
Specific requirements:
- [Requirement 1]
- [Requirement 2]
Please refactor this code and explain each change you make."
For feature-implementation prompts, use this structure:
- Start with high-level instructions
- Provide project context
- Include examples
- Build incrementally
Template:
"I need to implement [feature description] in my [framework/language] application.
Project context:
- [Relevant architecture details]
- [Existing patterns to follow]
- [Constraints or requirements]
Example usage:
[How the feature should work]
Please help me implement this step by step."
Bad: "Fix this code" Good: "This function should validate email addresses but it's incorrectly accepting 'test@' as valid"
Bad: "Build me a complete e-commerce site with user auth, payment processing, and inventory management" Good: "Help me create a user registration form with email and password validation"
Bad: "Here's my code" [dumps code] Good: "I'm trying to optimize this sorting algorithm. It works but takes too long with large datasets. How can I improve its performance?"
Bad: "Make this better" Good: "Refactor this to follow SOLID principles, specifically addressing the single responsibility principle"
Bad: Jumping between unrelated topics Good: Building on previous responses and maintaining context
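As a concrete illustration of the first pair above, the specific email-validation prompt gives the assistant enough to produce a targeted fix, roughly like this (the regex is a deliberate simplification, not a full RFC 5322 validator):

```typescript
// Rejects strings like "test@": requires a non-empty local part,
// a domain, and at least one dot in the domain.
function isValidEmail(input: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input);
}
```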
Ask the AI to explain its reasoning:
"Walk me through your thought process for solving this algorithm problem step by step.
Consider time complexity, space complexity, and edge cases."
Provide examples of the pattern you want:
"Here are examples of how we format our API responses:
Success: {status: 'success', data: {...}}
Error: {status: 'error', message: '...', code: 'ERROR_CODE'}
Now create an endpoint handler following this pattern..."
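A prompt like that pins down a discriminated union the assistant can follow. A minimal sketch, where everything beyond the status/data/message/code fields (the handler name, the user lookup) is an assumption for illustration:

```typescript
// Discriminated union matching the response format in the prompt
type ApiResponse<T> =
  | { status: "success"; data: T }
  | { status: "error"; message: string; code: string };

// Illustrative in-memory data for the example handler
const users: Record<string, { id: string; name: string }> = {
  "1": { id: "1", name: "Alice" },
};

// Example endpoint handler following the pattern: look up a user by id
function getUserHandler(id: string): ApiResponse<{ id: string; name: string }> {
  const user = users[id];
  if (user === undefined) {
    return { status: "error", message: `No user with id ${id}`, code: "USER_NOT_FOUND" };
  }
  return { status: "success", data: user };
}
```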
Set explicit boundaries:
"Implement this feature with these constraints:
- No external dependencies
- Must work in IE11
- Under 50 lines of code
- Fully typed (no 'any' in TypeScript)"
Use AI as a sounding board:
"I'm going to explain my code logic, and I want you to point out any flaws in my reasoning:
[Explanation of approach]"
When working with this project specifically:
- Include Effect-TS context: Always mention you're using Effect-TS when asking about async operations or error handling
- Reference project structure: Mention the relevant directory (e.g., "in our cluster/domain structure")
- Specify runtime: Clarify if you're working with Node, Bun, or browser environments
- Include type signatures: For Effect code, always include the full Effect<A, E, R> type
Example for this project:
"I'm working in our cluster/domain/browser module using Effect-TS.
I need to create a new browser action that captures screenshots.
It should return Effect<Screenshot, BrowserError, BrowserService>.
We're using the Effect.gen syntax with yield*.
How should I structure this following our existing patterns?"
- Context is King: The more context you provide, the better the response
- Specificity Drives Quality: Precise questions get precise answers
- Iterate and Build: Don't expect perfect solutions immediately
- Learn from Interactions: Each prompt teaches you how to communicate better with AI
- Maintain Conversation Flow: Build on previous responses for complex tasks
- AI assistants are tools, not magic
- Good prompts save time and reduce frustration
- Practice makes perfect - refine your prompting skills over time
- When in doubt, provide more context rather than less