If you’re a software engineer in 2025, you should be using Agent Mode in your editor, whether in VS Code, Cursor, or Windsurf. It’s by far the single biggest productivity boost I’ve ever experienced.
If you haven’t tried it yet, it’s a more advanced ChatGPT in your editor that can understand your codebase, answer questions, write features for you, write tests, and review your changes. I usually use it to write features and answer questions for me.
Whether or not you’ve tried it yet, you’ll need to know how to prompt it to get the most out of it. Though many of the same principles apply, this is a separate “artform” from prompting in an AI chat app like ChatGPT.
Here, it’s even more important not to spend 30+ minutes crafting “the perfect prompt.” Every task is different, so spending hours on a single prompt just won’t scale. Instead, you need to know the small tweaks that 5-10x the quality of what the AI generates. That’s what this post is about.
In general, there are two strategies for “prompt engineering”:
- The Collaborator: Used 90% of the time for most use cases. This is when you talk to it like another engineer, giving brief, conversational prompts like “Add a new feature that accepts payments” or “Write tests for this code.”
- The Scientist: Used 10% of the time for advanced use cases. This is what you think of when you hear “prompt engineering”: crafting long, in-depth prompts that take hours of iterating.
This article focuses only on how to get better results with the first strategy: The Collaborator.
I’ll share the tricks that have improved my results from the LLM, based on my experience using AI over the past year. These tips will dramatically increase your productivity and the quality of what you get from AI.
Let’s dive in!
You can immediately get 2x better results by making one small tweak. Instead of asking the LLM to do the task right away, break it into two steps.
Step 1: Write a comprehensive plan for a new feature that accepts payments.
Use these relevant files and folders as context: < provide to LLM >
DO NOT WRITE ANY CODE YET. JUST WRITE A PLAN.
Step 2: Great. The plan looks good. Go step-by-step through each part of the plan and implement it.
For me, this has turned low-quality tests, documentation, and features into “whoa, I had no idea the AI could do that” moments.
Creating a “plan” and “execute” step allows the AI to focus on one thing at a time—planning without writing code, searching the codebase to find the best approach, and then writing the code separately. Conversely, when you immediately ask it to do the task, it’s focused on cranking out code for you and less focused on doing it the right way.
This two-step method is the simplest, most straightforward way to instantly get better LLM results.
When I prompt, particularly at the planning stage, I almost always include:
Think like a { system architect or principal engineer }
This gets the LLM to load into its context all the things you wish it would think about without forcing you to type it out—like industry best practices, clean architecture, security, performance, scalability, etc.
Without this, it's still Claude 3.5, GPT-4o, or Gemini 2.5 under the hood: a general-purpose LLM for software engineers, finance people, biologists, and the other 8 billion people in the world.
To be fair, most editors start with a system prompt that tells it to act as a “programming assistant,” which helps a bit here. However, being more specific about the role, like “principal engineer,” does an even better job of getting the results you want.
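In practice, I just prepend the persona to the planning prompt from earlier. A minimal combined example (adjust the role to your domain):

```
Think like a principal engineer. Write a comprehensive plan for a new feature that accepts payments.

Use these relevant files and folders as context: < provide to LLM >

DO NOT WRITE ANY CODE YET. JUST WRITE A PLAN.
```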
The next two tips work great for more advanced and ambiguous features, where you want the LLM to do most of the work. They amplify the planning step to do an even better job by tuning the output to specifically what you want.
Add the following to the end of your planning prompt:
As we plan this, ask me questions one at a time. I prefer yes or no questions
This changes the dynamic. Instead of the LLM assuming everything you want to do, it will ask you questions like you're the CTO and it’s an eager intern trying to get it perfect for you. I learned this trick from a recent talk by Harper Reed and it's amazing.
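To give a feel for the dynamic, a planning session for the payments feature might go something like this (a hypothetical exchange; the real questions depend on your codebase):

```
AI: Should we support one-time payments only for now? (yes/no)
Me: yes
AI: Do you already have a payment provider integrated, like Stripe? (yes/no)
Me: no
AI: Should failed payments notify the user by email? (yes/no)
Me: no, just log them
```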
You can use this trick at any project scope, even outside of the editor. Tell ChatGPT the project you've been tasked with, and have it ask you questions that reveal all the decisions you need to make.
Extending the previous tip further, you can either start the plan generation process and move into execution from that context window, or have it generate a reusable spec (another tip from Harper).
Generate a spec.md that comprehensively documents all the requirements and a plan of what needs to be done.
It generated this 200+ line, comprehensive spec from this brief prompt.
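The exact contents will vary by project, but here is a heavily condensed, hypothetical sketch of what such a spec might contain:

```markdown
# Payments Feature Spec

## Overview
One-time card payments at checkout via an external payment provider.

## Requirements
- Accept card payments on the checkout page
- Persist payment records with status (pending, succeeded, failed)
- Log failed payments; no user-facing emails for now

## Out of Scope
- Subscriptions and refunds

## Implementation Plan
1. Add the payment provider client and configuration
2. Create the payment model and migration
3. Build the checkout endpoint and wire up the UI
4. Add unit and integration tests

## Open Questions
- Which currencies do we support at launch?
```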
Once you have this spec, it opens up a world of options. You can now…
- Ask AI to generate prompts that would result in the spec being implemented
Now that we have spec.md, create another file called prompts.md that provides the series of prompts needed to effectively implement the spec
- Commit this to the repo and hand off the work to another engineer to implement.
- Open a fresh context window at any point and reuse the file as the set of requirements, or add to it as needed as new requirements come in.
I continued the above example by asking it to use the prompts it generated to implement the feature end-to-end and it created every aspect of the feature you could imagine—modularization, package management, tests, linting, etc.
This pattern is also perfect for creating documentation. Often, if you ask it to generate documentation, it can produce content at the wrong level of abstraction, potentially covering too much detail that quickly becomes outdated. Instead, you can tell it to ask you questions that help it determine what level of detail it should provide in the documentation.
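A hypothetical version of that prompt:

```
Write documentation for the payments module. Before writing anything, ask me
questions one at a time about the intended audience and the level of detail
to cover. Favor explaining the architecture and key decisions over restating
what the code does.
```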
Also at the planning stage, you can ask it to provide multiple solution options. This works great for more ambiguous problems, and pairs well with thinking and reasoning models, as they put more effort into the pre-planning process. I typically use it in two situations:
- A bug where I'm unsure why it's happening. I'll give the LLM the relevant files as context and ask it, “Come up with 3 potential reasons why this bug is happening.”
- Features with ambiguous solutions, where I know the outcome I want but am unsure of the best way to achieve it. An example of this is when I wanted to create “Google Docs-style title editing” for WriteEdge docs.
Once it gives you a few options, pick the one you think is right, ask it for a detailed implementation plan, and then have it implement it. Voila, bug fixed!
One of the key patterns of effective prompt engineering is packing maximum meaning into minimal words.
The best way to do that is by giving the LLM examples. Examples work great because instead of describing all the facets you want the LLM to follow, you can simply say, “Do it like this example.”
Here's an example where I asked Cursor to make an “admin” version of a function, but because there were existing examples, I could say:
modified to behave like the other admin functions.
If I spelled out every aspect of what the admin functions entail, it would take me 10 minutes to write the prompt. Instead, I can describe what I want at a higher level of abstraction and let the LLM figure out what it needs to repeat.
In almost every prompt I give to the LLM, I ask myself, “What examples can I provide so it doesn't go wildly off track?”
As you take on bigger tasks with AI, you'll need to find ways to manage its context. Because AI is limited by context length and lacks access to your brain's context, managing it well means you get the 10x gains AI offers versus wondering why it won't “just read your mind.”
The spec.md file from earlier is a prime example of this, but there are others, too.
Imagine you're migrating a codebase from one testing library to another—like from Enzyme to React Testing Library. Instead of manually going file by file with the same prompt of “Migrate this file to React Testing Library,” you could have it:
- Generate a “migration_list.md” file with the list of files it needs to migrate. It should update and remove entries as they’re migrated.
- Generate a separate “migration_context.md” file that stores all the key “before” and “after” patterns it needs to know to do the migration effectively (both files are sketched just after this list).
- Go through each file in “migration_list.md”, using the information from “migration_context.md” to carry out the migration. It should add to “migration_context.md” as it encounters new problems and solutions.
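To make this concrete, the two files might start out looking something like this (hypothetical sketches; the agent fills them in and updates them as it works):

```markdown
<!-- migration_list.md -->
- [x] src/components/Navbar.test.jsx
- [ ] src/components/Button.test.jsx
- [ ] src/components/Modal.test.jsx

<!-- migration_context.md -->
## Before / after patterns
- `shallow(<Button />)` -> `render(<Button />)` from @testing-library/react
- `wrapper.find('button').simulate('click')` -> `fireEvent.click(screen.getByRole('button'))`

## Gotchas encountered
- Components rendered in portals need queries scoped with `within(document.body)`
```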
If there are common tasks it needs to do during migration, like updating references to migrated files, ask it to create an automated script it can simply call, rather than carrying out the file operations itself.
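For example, the reference-updating helper could be as simple as a script like this (a hypothetical Python sketch; the agent would generate whatever fits your repo and call it during the migration):

```python
# update_refs.py -- hypothetical helper the agent can run instead of
# hand-editing imports every time a test file is moved or renamed.
# Usage: python update_refs.py <old_reference> <new_reference>
import pathlib
import sys

def update_references(root: str, old_ref: str, new_ref: str) -> None:
    """Replace old_ref with new_ref in every JS/TS source file under root."""
    for ext in ("*.js", "*.jsx", "*.ts", "*.tsx"):
        for path in pathlib.Path(root).rglob(ext):
            if "node_modules" in path.parts:
                continue  # skip dependencies
            text = path.read_text(encoding="utf-8")
            if old_ref in text:
                path.write_text(text.replace(old_ref, new_ref), encoding="utf-8")
                print(f"updated {path}")

if __name__ == "__main__":
    update_references(".", sys.argv[1], sys.argv[2])
```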
All these tricks optimize context window usage and improve consistency by pushing the LLM to work methodically and track only the essential information needed. Keep finding ways to do this, and you'll speed through tasks in no time.
To recap:
- Instead of asking AI to implement your feature immediately, break it into a planning step and an execution step.
- Give the AI a persona like “system architect” or “principal engineer,” particularly at the planning stage, to pack in the right context with minimal effort.
- When working on ambiguous problems, have it ask you questions to uncover all your hidden requirements, then implement the feature specific to your needs.
- After gathering the requirements, ask it to output them to a reusable spec file that other engineers or the AI can use to build on top of the plan.
- When faced with a bug or a feature with an ambiguous solution, ask for multiple approaches. This keeps it from going down an invalid rabbit hole and lets you decide which route to explore further.
- Rather than listing out all your requirements, give it an example it can follow. This saves you time writing prompts and forces the AI to follow existing conventions.
- Similar to the spec.md example, constantly look for ways to let the AI help itself by generating intermediate artifacts that improve how its context window is managed.