Here’s how I’d approach this: think of prompt engineering as building a well-structured pipeline that guides the AI through your product development flow — from PRD to implementation. Right now, you’re imagining:

PRD → Architecture → JTBD → Backend/Frontend Todos

This is a good starting point, but for software teams, there are a few extra layers we should consider to ensure the process produces useful, actionable, and technically consistent outputs.


1. Define the End-to-End Flow

Instead of a single giant prompt, break it into stages where each stage's output becomes the next stage's input (a code sketch of this chaining follows the workflow below).

Here’s a refined workflow:

  1. PRD Understanding

    • Input: Raw PRD from product team.
    • Output: A clear, structured summary with explicit requirements, constraints, edge cases, success metrics, and assumptions.
  2. Requirements Breakdown & Architecture Suggestion

    • Input: Structured PRD summary.
    • Output: Suggested system architecture, key services, data flow, and tech stack decisions.
    • Optional: Generate diagrams automatically.
  3. Job-To-Be-Done (JTBD) Creation

    • Input: Requirements + architecture.
    • Output: JTBD per major feature or user interaction, so developers understand the "why".
  4. Task Generation & Allocation

    • Input: JTBD.

    • Output: Actionable tasks split between:

      • Backend
      • Frontend
      • DevOps/Infrastructure (if applicable)
      • QA/Testing
  5. PR Review Checklist & Risk Analysis (optional but very useful)

    • Input: Tasks + architecture.
    • Output: Pre-commit checklist, potential risks, scalability considerations.
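
If you want to automate this flow, the stages can be chained so that each output feeds the next prompt. Here is a minimal TypeScript sketch of that chaining; the runLLM callback, the shortened prompt wording, and the type shapes are illustrative assumptions, not any specific SDK.

// Sketch of the staged pipeline: each stage's output becomes the next stage's input.
type StructuredRequirements = {
  objective: string;
  success_metrics: string[];
  functional_requirements: string[];
  non_functional_requirements: string[];
  assumptions: string[];
  edge_cases: string[];
};

type BacklogTask = {
  category: "Backend" | "Frontend" | "Infrastructure/DevOps" | "QA/Testing";
  description: string;
  dependencies: string[];
  complexity: "S" | "M" | "L";
  priority: "high" | "medium" | "low";
};

// `runLLM` is whatever function calls your model provider and returns its text reply.
async function runPipeline(
  rawPRD: string,
  runLLM: (prompt: string) => Promise<string>
): Promise<BacklogTask[]> {
  // Stage 1: PRD -> structured requirements (JSON)
  const requirements: StructuredRequirements = JSON.parse(
    await runLLM(`Extract structured requirements as JSON from this PRD:\n${rawPRD}`)
  );

  // Stage 2: requirements -> architecture proposal (free text)
  const architecture = await runLLM(
    `Propose a system architecture for these requirements:\n${JSON.stringify(requirements, null, 2)}`
  );

  // Stage 3: requirements + architecture -> JTBD statements (JSON array of strings)
  const jtbd: string[] = JSON.parse(
    await runLLM(`Generate JTBD statements as a JSON array for:\n${architecture}`)
  );

  // Stage 4: JTBD -> categorized task backlog (JSON array)
  return JSON.parse(
    await runLLM(
      `Create a JSON task backlog (category, description, dependencies, complexity, priority) from:\n${jtbd.join("\n")}`
    )
  );
}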

2. Prompt Engineering by Stage

Instead of trying to do everything in one step, craft specific prompts for each stage.


Stage 1 — PRD Understanding Prompt

Goal: Extract clean, structured requirements from the PRD.

Prompt example:

"You are a senior technical product manager. I will give you a raw PRD. Extract the following:

  • Core objective (what problem it solves)
  • Success metrics (how we measure success)
  • Functional requirements
  • Non-functional requirements (performance, security, compliance, etc.)
  • Assumptions or open questions
  • Edge cases or failure scenarios

Return in structured JSON format."

Asking for JSON keeps the output machine-readable for the later stages.
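
For example, Stage 1 can be a single call that asks for JSON. A minimal sketch, assuming the OpenAI Node SDK and its JSON output mode (any provider with a comparable chat endpoint works the same way):

import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Stage 1: raw PRD in, structured requirements object out.
async function extractRequirements(rawPRD: string): Promise<unknown> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o",
    response_format: { type: "json_object" }, // ask for machine-readable output
    messages: [
      {
        role: "system",
        content:
          "You are a senior technical product manager. Extract the core objective, " +
          "success metrics, functional requirements, non-functional requirements, " +
          "assumptions or open questions, and edge cases. Return structured JSON.",
      },
      { role: "user", content: rawPRD },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}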


Stage 2 — Architecture Suggestion Prompt

Goal: Propose architecture aligned with requirements and constraints.

Prompt example:

"Given the following structured requirements, propose an optimal system architecture. Include:

  • High-level system diagram
  • Key services/modules and their responsibilities
  • Recommended tech stack with justifications
  • Data flow outline
  • Integration points with third-party services"
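
In practice this prompt is a template that you fill with the Stage 1 JSON. A small sketch of that interpolation (the field names follow the Stage 1 example output shown later in this document):

// Build the Stage 2 prompt from the Stage 1 output so the architecture
// proposal is grounded in the extracted requirements.
function stage2Prompt(requirements: {
  objective: string;
  functional_requirements: string[];
  non_functional_requirements: string[];
}): string {
  return [
    "Given the following structured requirements, propose an optimal system architecture.",
    "Include: a high-level system diagram, key services/modules and their responsibilities,",
    "a recommended tech stack with justifications, a data flow outline,",
    "and integration points with third-party services.",
    "",
    "Requirements:",
    JSON.stringify(requirements, null, 2),
  ].join("\n");
}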

Stage 3 — JTBD Prompt

Goal: Frame development work in a user-centric way.

Prompt example:

"Based on these requirements and architecture, generate a set of Jobs-To-Be-Done (JTBD) statements that describe what the user is trying to accomplish and why. Format each JTBD as: When [situation], I want to [motivation], so I can [desired outcome]."*


Stage 4 — Task Generation Prompt

Goal: Produce concrete, developer-ready tasks.

Prompt example:

"Using the JTBD list, create a backlog of actionable development tasks. Categorize into:

  • Backend
  • Frontend
  • Infrastructure/DevOps
  • QA/Testing

Each task must include: description, dependencies, estimated complexity (S/M/L), and priority."
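
Model output can drift from the requested schema, so it is worth checking the backlog before it reaches a ticketing tool. A minimal sketch of that check (the field names mirror the prompt above):

// Parse the Stage 4 response and verify that every task carries the
// fields the prompt asked for before anything is turned into tickets.
function parseBacklog(raw: string): Record<string, unknown>[] {
  const parsed = JSON.parse(raw);
  if (!Array.isArray(parsed)) throw new Error("Expected a JSON array of tasks");
  const required = ["category", "description", "dependencies", "complexity", "priority"];
  for (const task of parsed) {
    if (typeof task !== "object" || task === null) {
      throw new Error(`Expected a task object, got: ${JSON.stringify(task)}`);
    }
    for (const field of required) {
      if (!(field in task)) {
        throw new Error(`Task is missing "${field}": ${JSON.stringify(task)}`);
      }
    }
  }
  return parsed;
}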

3. Advanced Enhancements

  • Version Control Hooks

    • Integrate this workflow into GitHub/GitLab pipelines.
    • For example, when a PRD is merged into a /product folder, automatically run Stages 1–4 and output the results to a /planning folder (a CI step sketch follows this list).
  • Integrate with Jira or Linear

    • Convert generated tasks directly into Jira tickets.
  • Feedback Loop

    • After sprint review, feed real-world outcomes back to the model to improve future breakdowns.
  • Consistency Guardrails

    • Use a style guide and glossary to enforce consistent naming across outputs.
    • Example: ensure "User" is always referred to in the same way across the PRD, code, and docs.
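
For the version-control hook above, the trigger itself (for example a GitHub Actions workflow that fires on changes under /product) is provider-specific, but the step it runs can be a small script. A sketch, assuming the runPipeline chain from section 1 lives in a hypothetical local ./pipeline module:

import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
import { basename } from "node:path";
import { runPipeline, runLLM } from "./pipeline"; // hypothetical module with the Stage 1-4 chain

// Read the changed PRD, run the staged pipeline, and write the backlog to /planning.
async function planFromPRD(prdPath: string): Promise<void> {
  const rawPRD = readFileSync(prdPath, "utf8");
  const backlog = await runPipeline(rawPRD, runLLM);
  mkdirSync("planning", { recursive: true });
  const outPath = `planning/${basename(prdPath, ".md")}.tasks.json`;
  writeFileSync(outPath, JSON.stringify(backlog, null, 2));
  console.log(`Wrote ${outPath}`);
}

// Usage from CI: npx tsx plan.ts product/new-feature.md
planFromPRD(process.argv[2]).catch((err) => {
  console.error(err);
  process.exit(1);
});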

4. Example End-to-End Flow

Imagine your PRD says:

"We need a new feature where users can upload images, apply filters, and share them with friends. The feature must be mobile-first and support up to 10,000 concurrent users."

Stage 1 Output (Structured):

{
  "objective": "Allow users to upload, edit, and share images",
  "success_metrics": ["90% user adoption in first month", "System handles 10k concurrent users"],
  "functional_requirements": [
    "Image upload",
    "Filter application",
    "Sharing via links"
  ],
  "non_functional_requirements": ["Mobile-first UI", "Scalable architecture"],
  "assumptions": ["Users are already logged in"],
  "edge_cases": ["Upload fails due to network issues"]
}

Stage 2 Output (Architecture):

  • Microservice for image processing (Node.js, AWS Lambda)
  • CDN for image distribution
  • Frontend in Next.js
  • Data stored in PostgreSQL + S3

Stage 3 Output (JTBD):

  • "When I take a photo, I want to enhance it with filters, so I can share a beautiful image with friends."
  • "When my network is unstable, I want my upload to resume automatically, so I don't lose progress."

Stage 4 Output (Tasks):

  • Backend: "Implement image upload API with resumable uploads" (L, high priority)
  • Frontend: "Build filter selection UI" (M, medium priority)
  • Infra: "Configure S3 and CDN for image storage and distribution" (M, high priority)

5. Why This Works

  • Predictable: Each stage has clear input and output formats.
  • Scalable: You can automate or run stages independently.
  • Team-friendly: Outputs map directly to engineering team workflows (e.g., tickets).
  • AI-augmented, not AI-dependent: Engineers still make final decisions but get a huge productivity boost.

Would you like me to design Stage 1 and Stage 2 prompts in full detail so you can start experimenting right away?

AI-First Development Cycle

From Human-Centric Design to Machine-Executable Tasks for an Instagram-like App.

1. Architect Agent: Ingestion & Planning

The process begins with the Architect Agent, which analyzes the initial human-created documents and generates a structured master plan.

Inputs:

  • 📄 Product Requirements Doc (PRD): The formal document describing the application's features and behavior.
  • 🎨 Design Files (Figma): The visual mockups and user interface designs.

⬇️

Generates Master Plan:

  • API Contract: A detailed OpenAPI/Swagger specification.
  • DB Schema: SQL definitions for the database structure.
  • Component List: A breakdown of all required UI components.
  • Task Graph: A dependency map defining the order of execution for all development tasks.
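
The Task Graph is essentially a dependency map, and a topological sort over it gives the order in which tasks can be handed to the specialist agents. A minimal sketch (task IDs follow the examples below):

// Task Graph as a dependency map: task id -> ids it depends on.
type TaskGraph = Record<string, string[]>;

// Depth-first topological sort; dependencies always appear before their dependents.
function executionOrder(graph: TaskGraph): string[] {
  const visited = new Set<string>();
  const order: string[] = [];

  const visit = (id: string, trail: Set<string>) => {
    if (visited.has(id)) return;
    if (trail.has(id)) throw new Error(`Cyclic dependency at ${id}`);
    trail.add(id);
    for (const dep of graph[id] ?? []) visit(dep, trail);
    trail.delete(id);
    visited.add(id);
    order.push(id);
  };

  for (const id of Object.keys(graph)) visit(id, new Set());
  return order;
}

// Example: executionOrder({ "DB-001": [], "BE-005": ["DB-001"], "FE-012": ["BE-005"] })
// -> ["DB-001", "BE-005", "FE-012"]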

2. Execution: Handoff to Specialist Agents

The Architect Agent delegates tasks from the master plan to specialized coding agents using structured, machine-readable instructions.

🤖 Backend Agent

TASK: BE-005-GetUserProfile

{
  "target_agent": "backend",
  "dependencies": ["DB-001"],
  "prompt_instructions": "Create a GET endpoint for user profiles.",
  "context": {
    "api_contract": {
      "path": "/api/users/{username}",
      "method": "GET",
      "responses": { "200": { "..." }, "404": { "..." } }
    }
  },
  "acceptance_criteria": [
    "Return 200 for existing user.",
    "Return 404 for missing user."
  ]
}
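
What the Backend Agent produces from this task might look like the following sketch, assuming Express and a hypothetical findUserByUsername helper from the DB-001 layer:

import express from "express";
import { findUserByUsername } from "./db"; // hypothetical data-access layer from DB-001

const app = express();

// GET /api/users/:username -> 200 with the profile, 404 if the user is missing
app.get("/api/users/:username", async (req, res) => {
  const user = await findUserByUsername(req.params.username);
  if (!user) {
    return res.status(404).json({ error: "User not found" });
  }
  return res.status(200).json(user);
});

export default app;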

🎨 Frontend Agent

TASK: FE-012-UserProfilePage

{
  "target_agent": "frontend",
  "dependencies": ["BE-005"],
  "prompt_instructions": "Build the user profile page component.",
  "context": {
    "component_name": "UserProfile",
    "props": ["username"],
    "api_calls": ["GET /api/users/{username}"]
  },
  "acceptance_criteria": [
    "Display user bio and follower counts.",
    "Show a grid of user's posts."
  ]
}
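
The Frontend Agent's output for this task might resemble the sketch below, assuming React and a guessed response shape for GET /api/users/{username}:

import { useEffect, useState } from "react";

// Assumed response shape; the real contract comes from the API spec in the master plan.
type Profile = {
  username: string;
  bio: string;
  followerCount: number;
  posts: { id: string; imageUrl: string }[];
};

export function UserProfile({ username }: { username: string }) {
  const [profile, setProfile] = useState<Profile | null>(null);

  // Fetch the profile whenever the username prop changes.
  useEffect(() => {
    fetch(`/api/users/${encodeURIComponent(username)}`)
      .then((res) => (res.ok ? res.json() : Promise.reject(res.status)))
      .then(setProfile)
      .catch(() => setProfile(null));
  }, [username]);

  if (!profile) return <p>Loading...</p>;

  return (
    <div>
      <h2>{profile.username}</h2>
      <p>{profile.bio}</p>
      <p>{profile.followerCount} followers</p>
      <div className="post-grid">
        {profile.posts.map((post) => (
          <img key={post.id} src={post.imageUrl} alt="" />
        ))}
      </div>
    </div>
  );
}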

🔧 DevOps Agent

TASK: CI-001-SetupPipeline

{
  "target_agent": "devops",
  "dependencies": [],
  "prompt_instructions": "Create Dockerfile and a CI/CD pipeline.",
  "context": {
    "tech_stack": ["React", "Node.js", "PostgreSQL"],
    "target_platform": "GCP Cloud Run"
  },
  "acceptance_criteria": [
    "Dockerfile builds successfully.",
    "Pipeline triggers on git push.",
    "Tests run automatically."
  ]
}

3. Integration & Testing

QA Agent

  • Automatically runs unit, integration, and end-to-end tests based on the acceptance_criteria from each task.
  • Creates bug-fix tasks on failure and assigns them to the appropriate agent.
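
For example, BE-005's acceptance criteria translate almost directly into tests. A sketch, assuming Jest and supertest pointed at the Express app from the backend task (the ./app import is hypothetical):

import request from "supertest";
import app from "./app"; // hypothetical path to the Express app from BE-005

describe("GET /api/users/:username", () => {
  it("returns 200 for an existing user", async () => {
    const res = await request(app).get("/api/users/alice");
    expect(res.status).toBe(200);
  });

  it("returns 404 for a missing user", async () => {
    const res = await request(app).get("/api/users/no-such-user");
    expect(res.status).toBe(404);
  });
});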

4. Deployment

🚀 Live Application

  • Once all tests pass, the DevOps Agent executes the deployment scripts.
  • The updated application is pushed to the production environment and becomes available to users.