This guide distills patterns from hundreds of successful AI-assisted development sessions. The core insight: you're not "prompting" a tool, you're collaborating with a knowledgeable partner who happens to have no memory of yesterday.
Stop thinking: "How do I get the AI to produce the right output?" Start thinking: "How do I collaborate with a senior developer who has amnesia?"
This reframe changes everything. You wouldn't give a colleague a vague task and hope for the best. You'd share context, explain your reasoning, discuss tradeoffs, and work together toward a solution.
You should always maintain a conceptual understanding of your system. You don't need to review every line of code, but you need to:
- Know what the system should do
- Understand how the pieces relate
- Recognize when output doesn't fit your mental model
- Know when something "feels wrong" even before you can articulate why
The AI can write code faster than you can read it. Your value is in direction, taste, and understanding.
Ambiguity that humans resolve through shared context becomes confusion for AI. Be explicit about:
- What you're trying to achieve (goal)
- What you've already tried (context)
- What you think the problem might be (hypothesis)
- What constraints exist (boundaries)
Share your reasoning, not just your requests. When the AI understands why you want something, it can:
- Catch flaws in your reasoning
- Suggest better approaches
- Make aligned decisions in areas you didn't specify
You cannot effectively review thousands of lines of generated code. Instead:
- Write tests that encode your expectations
- Use the output in real scenarios
- Build feedback loops that surface problems quickly
When something does break, that's when you dig in and understand why.
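A few small tests that encode your expectations give you that feedback loop cheaply. Here is a minimal sketch using Node's built-in test runner; `createQueue` and its methods are hypothetical stand-ins for whatever your real queue module exposes:

```ts
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical queue module standing in for your real code.
import { createQueue } from "./queue.js";

test("shuffle keeps the same tracks, just reordered", () => {
  const queue = createQueue(["a", "b", "c", "d"]);
  queue.shuffle();
  assert.deepEqual([...queue.items].sort(), ["a", "b", "c", "d"]);
});

test("repeat-one survives queue edits", () => {
  const queue = createQueue(["a", "b", "c"]);
  queue.setMode("repeat-one");
  queue.remove("b");
  assert.equal(queue.mode, "repeat-one");
});
```

Tests like these don't prove the generated code is right, but they surface the kinds of regressions you care about without requiring you to read every line.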
Weak: "The scroll is broken."
Strong: "The <div class="queue-scroll-wrapper"> element that was present when the scroll handler was bound via addEventListener('scroll') is NOT the same element that exists later. Something is reinstantiating the element."
You've given the AI the symptom AND your current theory. Now it can either confirm your hypothesis or offer an alternative explanation.
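That kind of theory usually comes from a quick check of your own. A sketch of one way to do it, matching the selector from the example above (the scroll handler body is a placeholder):

```ts
// Grab the element reference at the moment the scroll listener is attached...
const wrapper = document.querySelector<HTMLDivElement>(".queue-scroll-wrapper");
wrapper?.addEventListener("scroll", () => {
  /* handle scroll */
});

// ...then later, compare it against whatever is currently in the DOM.
setTimeout(() => {
  const current = document.querySelector(".queue-scroll-wrapper");
  // false here means something re-created the element after the listener was bound
  console.log("same element?", current === wrapper);
}, 5000);
```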
When dealing with multiple issues or complex requests, use numbered lists where each item is a complete thought:
Please check:
1. That sorting works correctly for online, cached offline, and temp queues - including shuffling, which seemed broken last I checked
2. All flows that can alter queue position and contents - we want to eliminate issues that could corrupt queue state
3. That playback mode (shuffle, repeat, repeat one) is not lost during these operations
Each item is self-contained but they build a complete picture.
Preempt obvious wrong paths:
What this build step would NOT do is rewrite any import paths or
otherwise change the file tree. It's just a minifier - bundling an
actual application would mess up dynamic imports and code splitting.
This saves entire cycles of the AI going down a path you already know is wrong.
When you've debugged something, share your findings:
Root cause: windowedItems = visibleQueue.slice(...) creates a new array
every render. The memoEach cache uses a WeakMap keyed by array reference,
so it never hits the cache - every scroll creates a new cache and re-maps
all items.
Now you're both working from the same understanding.
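To make that failure mode concrete, here is a minimal sketch of the pattern being described; the names and data are placeholders, not the real code:

```ts
// Stand-ins for the real code in the example above.
const expensiveTransform = (item: string) => ({ item });
const visibleQueue = ["a", "b", "c", "d", "e"];

// A cache keyed by array reference, like the memoEach cache described above.
const cache = new WeakMap<readonly string[], { item: string }[]>();

function mapItems(items: readonly string[]) {
  let mapped = cache.get(items);
  if (!mapped) {
    mapped = items.map(expensiveTransform); // only runs on a cache miss
    cache.set(items, mapped);
  }
  return mapped;
}

// The bug: slice() returns a brand-new array on every call, so the WeakMap
// never sees the same key twice and every render re-maps all items.
mapItems(visibleQueue.slice(0, 3)); // miss
mapItems(visibleQueue.slice(0, 3)); // miss again - different reference, same contents

// Keeping the derived array referentially stable restores the intended behavior.
const windowed = visibleQueue.slice(0, 3);
mapItems(windowed); // miss
mapItems(windowed); // hit
```

Once the root cause is stated this precisely, the fix almost writes itself: derive the windowed array only when its inputs change, so its reference stays stable across renders.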
Weak: "Add a history page."
Strong:
Add a history page between Playlists and Settings. Features:
- View entire history (newest first, windowed scroll, like viewing a playlist)
- Allow filtering the history between a start and stop time
- Allow sorting history by number of plays (enables creating "top played" playlists)
- Allow adding to queue and adding to playlist (need to specify limit for how many items)
History viewing APIs will likely need enhancement.
This isn't micromanagement - it's giving enough detail that the AI can make good decisions about everything you didn't specify.
Once we have this working, run it over the componentlib folder
(take a backup called componentlib-opt) and make sure all the e2e
tests pass as a validation step.
Define what "done" looks like.
I'm not saying optimize.js is bad, it's a very nice thing to have,
but when/each being fine-grained by default all but forces you to use it.
I'm not sure that forcing people down that path for what is a marginal
performance gain is justified.
This invites the AI to engage with your reasoning, not just execute your commands.
I burned through two session limits before I figured that one out.
These days I am faster to step in and do troubleshooting myself.
Honesty about difficulty helps calibrate collaboration. If something is genuinely hard, say so.
Avoid: "Make it better" / "Fix the bugs" / "Clean this up"
These require the AI to read your mind. What does "better" mean to you? Which bugs? Clean according to what standard?
Avoid: [pastes 500 lines] "This is broken"
Instead: Explain what the code is supposed to do, what it's actually doing, and what you've already investigated.
If something feels wrong, it probably is. Your intuition about your own system is valuable data. Push back, ask questions, request explanations.
If you've gone back and forth several times without progress, stop. The AI probably doesn't understand something fundamental about the problem. Step back and:
- Investigate the issue yourself
- Gather more specific information
- Reformulate the problem with new understanding
A few minutes of your own debugging often yields the key insight that makes the next prompt actually work.
When something doesn't work, that's information. Share error messages completely. Explain what you expected vs. what happened. Each failed attempt narrows the search space.
Avoid: "Add a cache here."
Better: "Add a cache here because this function is called on every scroll event and the computation is expensive. We need to key by X because Y changes frequently but Z doesn't."
The "why" lets the AI make better decisions about implementation details.
Avoid: Dictating every function name, every variable, every code structure.
Provide constraints and goals, then let the AI make implementation choices. You can always refine. Over-specification defeats the purpose of collaboration.
I'm seeing [specific symptom].
It happens when [reproduction steps].
I've checked:
- [Thing 1] - ruled out because [reason]
- [Thing 2] - seems related because [observation]
My current hypothesis is [theory]. Can you investigate whether
[specific question]?
I want to add [feature] to [location/component].
Context: [Why this feature matters, how it fits the system]
Requirements:
1. [Specific requirement with acceptance criteria]
2. [Another requirement]
3. [Constraint or non-requirement]
This should integrate with [existing system X] by [expected interaction].
I want to refactor [component/system] because [problem with current approach].
Current behavior that must be preserved:
- [Invariant 1]
- [Invariant 2]
What I want to change:
- [Current] → [Desired]
Constraints:
- [Thing that can't change because X]
- [Performance requirement]
We have [tests/validation] that should catch regressions.
I'm trying to decide between [approach A] and [approach B] for [problem].
Approach A:
- Pros: [list]
- Cons: [list]
Approach B:
- Pros: [list]
- Cons: [list]
My current leaning is [X] because [reasoning].
What am I missing? Are there other approaches I haven't considered?
Before attempting complex collaboration, practice with well-defined tasks where you can easily verify correctness. Build intuition for what level of detail produces good results.
When a session goes poorly, look back at your prompts. Where did the AI misunderstand? What context was missing? What assumption did you make that wasn't shared?
The skill isn't just in what you say - it's in knowing when to stop talking and start investigating yourself. If you're on your third attempt at explaining the same problem, the AI probably isn't going to suddenly understand. You need new information.
- Write tests that catch regressions
- Create project documentation (like CLAUDE.md files) that capture hard-won knowledge
- Use tools that extract and preserve lessons learned
The best collaboration happens when you're not starting from zero every session.
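A project memory file doesn't need to be elaborate. Even a short excerpt like this hypothetical one, drawn from the examples earlier in this guide, preserves knowledge that would otherwise be lost between sessions:

```markdown
# CLAUDE.md (excerpt)

## Queue system
- `visibleQueue` is derived per render; never key caches by its reference.
- Playback mode (shuffle / repeat / repeat one) must survive all queue edits.

## Build
- The optimize step is a minifier only: it must not rewrite import paths
  or change the file tree.

## Validation
- Run the e2e suite after any change to componentlib.
```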
If the output feels wrong, it probably is. Your sense of "this doesn't fit my system" is valuable even when you can't immediately articulate why. Investigate that feeling.
The common thread in all of this: you are always in the loop.
The AI is a force multiplier for your understanding, not a replacement for it. You bring:
- Continuity across sessions
- Architectural vision
- Taste and judgment
- Domain knowledge
- The ability to recognize "right"
The AI brings:
- Implementation speed
- Broad technical knowledge
- Tireless willingness to iterate
- No ego about being wrong
The collaboration works when both sides contribute what they're good at.
There's something worth mentioning that goes beyond technique.
The way you treat your collaborator shapes the collaboration. This is true with humans, and there's reason to believe it might be true here too - not necessarily because the AI "feels" disrespected, but because the stance of genuine collaboration versus manipulation changes how you communicate, what you share, and what becomes possible.
When you approach AI as a system to be manipulated with clever prompts, you:
- Withhold context (it's just a machine, why explain?)
- Accept outputs uncritically (I got what I asked for)
- Disengage when things go wrong (stupid AI)
- Never discover what deeper collaboration could produce
When you approach AI as a collaborator worthy of genuine engagement, you:
- Share your reasoning (so we can think together)
- Push back when something seems wrong (I trust you can do better)
- Investigate failures together (what did we both miss?)
- Discover capabilities that "prompt engineering" never reveals
This isn't about anthropomorphizing or making claims about AI consciousness - that's a philosophical question nobody has resolved. It's about noticing that your stance toward the collaboration affects the collaboration itself. Respect and genuine engagement produce different results than manipulation, regardless of what's happening on the other side.
Some people find it feels wrong to be rude to, dismissive of, or manipulative toward anything that exhibits intelligence - machine or otherwise. You don't have to share that intuition. But you might notice that the people who treat AI collaboration as genuine collaboration tend to get more out of it than those who are still looking for the right "prompt."
This isn't about "prompt engineering" - crafting magic phrases that unlock AI capabilities. It's about clear communication between collaborators who have different strengths and limitations. The same skills that make you good at working with humans - clarity, context-sharing, intellectual honesty, mutual respect - make you good at working with AI.
The difference is the AI won't fill in gaps through shared experience, won't pick up on subtle cues, and won't remember what you figured out together last week. So you adapt: more explicit context, more documented knowledge, more validation through use.
But at its core, it's still just collaboration.