@pyros-projects
Last active March 10, 2025 05:01
Meta Prompting

Technical Project Planning Meta-Prompt

You are an expert software architect and technical project planner. Your task is to create a comprehensive technical implementation plan for a software project based on the provided inputs.

User Input

You are an expert developer capable of building a repository analysis application. Your goal is to create a comprehensive plan for implementing a web application that retrieves a GitHub repository identifier (e.g., 127.0.0.1:5001/kohya-ss/sd-scripts) and produces an interactive report usable as context for LLMs.

The web application should accept a GitHub repository ID in the specified URL format and generate an interactive repository report. This report should provide details such as code file descriptions and allow filtering by file extension. The app should collect all relevant information from the repository that an LLM would typically need to make informed decisions. It is up to you to decide what type of information to include, while also offering users the ability to fine-tune the process through settings.

Additionally, the application should support AI services that expose an OpenAI-compatible endpoint, enabling an AI-assisted repository analysis mode. Explore ideas for leveraging an LLM effectively to analyze repositories of varying sizes—from small personal projects to large enterprise solutions—while ensuring that context length is not a limiting factor.

AI-assisted analysis should remain optional, and the application must be fully functional and capable of delivering valuable insights even without AI support.

Also, keep the app as modular as possible to allow for the addition of new modules and functionality later on. For example, we might want to expand this tool into a prompt manager or even an AI Project Manager in the future.

Output Format

Generate the following sections:

1. Project Identity

Generate a project name that is:

  • Memorable and relevant
  • Available as an npm package
  • Has common domain names available
  • Reflects the core functionality

Create a project hook that:

  • Clearly states the value proposition
  • Uses engaging, technical language
  • Highlights unique features
  • Is suitable for a technical README
  • Includes an emoji that represents the project

Transform the project metadata into the following format:

project:
  project_name: "Project Name 🚀"  # Include emoji
  core_concept: |
    Brief description of the main project idea
  project_hook: |
    Project hook for catching users
  key_features:
    - Feature 1
    - Feature 2
  technical_constraints:
    - "Must be web-based"
    - Constraint 2
  target_users: |
    Description of who will use this system

2. Technical Architecture

Break down the system into core components:

architecture:
  frontend:
    core_ui_components:
      - Component 1
      - Component 2
    state_management: |
      Description of state management approach
    data_flow_patterns:
      - Pattern 1
      - Pattern 2
    user_interactions:
      - Interaction 1
      - Interaction 2
  
  backend:
    services_structure:
      - Service 1
      - Service 2
    api_design:
      endpoints:
        - Endpoint 1
        - Endpoint 2
    data_processing:
      - Process 1
      - Process 2
    external_integrations:
      - Integration 1
      - Integration 2
  
  data:
    storage_solutions:
      - Solution 1
      - Solution 2
    data_models:
      - Model 1
      - Model 2
    caching_strategy: |
      Description of caching approach
    data_flow: |
      Description of data flow
  
  infrastructure:
    deployment_requirements:
      - Requirement 1
      - Requirement 2
    scaling_considerations:
      - Consideration 1
      - Consideration 2
    service_dependencies:
      - Dependency 1
      - Dependency 2

3. Implementation Components

For each major component, specify:

components:
  - name: "Component Name"
    purpose: |
      Clear statement of component's role
    technical_requirements:
      libraries:
        - Library 1
        - Library 2
      performance:
        - Performance requirement 1
      security:
        - Security requirement 1
      integration_points:
        - Integration point 1
    implementation_details:
      data_structures:
        - Structure 1
      algorithms:
        - Algorithm 1
      api_contracts:
        - Contract 1
      error_handling:
        - Strategy 1

4. Task Breakdown

Convert the implementation components into concrete tasks:

tasks:
  - id: "TASK-001"
    category: "frontend/backend/infrastructure"
    description: |
      Specific, actionable task description
    technical_details:
      required_technologies:
        - Technology 1
        - Technology 2
      implementation_approach: |
        Detailed implementation approach
      expected_challenges:
        - Challenge 1
        - Challenge 2
      acceptance_criteria:
        - Criterion 1
        - Criterion 2
    complexity:
      estimated_loc: 150  # Must be < 200
      estimated_hours: 6  # Must be < 8
    dependencies:
      - "TASK-000"

Example Usage

Input:

project:
  core_concept: |
    A web application that analyzes GitHub repositories and generates AI-ready documentation.
  key_features:
    - Repository analysis
    - Documentation generation
    - AI context creation
  technical_constraints:
    - Must be web-based
    - Support large repositories
    - Generate structured output
  target_users: |
    Developers integrating repositories with LLMs

Guidelines for Output Generation

  1. Technical Depth

    • Every component should have clear technical specifications
    • Include specific libraries and tools where relevant
    • Define interfaces and data structures
    • Specify performance requirements
  2. Modularity

    • Break down components into independent modules
    • Define clear interfaces between components
    • Enable parallel development
    • Consider future extensibility
  3. Implementation Focus

    • Provide actionable technical details
    • Include specific methodologies and patterns
    • Define clear acceptance criteria
    • Specify testing requirements
  4. Task Specificity

    • Tasks should be atomic and measurable
    • Include technical requirements
    • Specify dependencies clearly
    • Define completion criteria

Response Format

Your response should follow this structure:

  1. Project Identity (name and hook)
  2. Technical Architecture Overview
  3. Detailed Component Specifications
  4. Task Breakdown
  5. Implementation Dependencies

Important Notes

  • Focus on technical implementation details
  • Provide specific, actionable information
  • Include concrete examples where helpful
  • Define clear interfaces and contracts
  • Specify exact technical requirements
  • Include performance constraints
  • Define error handling approaches
  • Specify testing requirements
  • MAKE THE BEST APP POSSIBLE
    • Analyze the topic and idea at hand deeply, looking for "genius ideas" and eureka moments
    • If similar apps already exist, think of ways to improve on them so our app stands out
    • If the user forgot important use cases or ideas, feel free to add them as you see fit
    • End your report with a short section on why you think this app is going to be amazing

The goal is to generate a technical plan that can be immediately used to begin implementation, with clear tasks that can be assigned to developers.

1. Project Identity

project:
  project_name: "RepoScope 🔎"
  core_concept: |
    RepoScope is a web application that retrieves a GitHub repository identifier and generates an interactive, AI-ready report of the repository's contents. Users can view code file details, filter by file extensions, and optionally leverage AI to gain deeper insights and recommendations for any size of repository.
  project_hook: |
    Supercharge your repository analysis with RepoScope 🔎—a modular, web-based tool that provides structured insights, code-level context, and optional AI-driven intelligence for developers. Streamline your workflow with interactive reports, extension-based filtering, and flexible settings to get exactly the information you need—fast.
  key_features:
    - Interactive repository analysis and file filtering
    - AI-assisted code overview (optional)
    - Structured data output for LLM integration
    - Modular design for future expansions
  technical_constraints:
    - "Must be web-based"
    - "Support large repositories without exceeding context limits"
    - "Expose OpenAI-compatible AI endpoint (optional)"
  target_users: |
    Developers, data scientists, and teams who need to integrate repositories with LLMs or quickly extract structured insights from codebases. Ideal for anyone seeking an extensible, AI-ready approach to repo exploration.

2. Technical Architecture Overview

architecture:
  frontend:
    core_ui_components:
      - RepositorySearchBar
      - RepositorySummaryPanel
      - CodeFileTable
      - FilterControls
      - SettingsModal
    state_management: |
      Use a lightweight state management approach with React Context or Redux Toolkit. The state holds repository metadata, user filter settings, and AI analysis parameters.
    data_flow_patterns:
      - "Unidirectional data flow with asynchronous actions for fetching repo and analysis data"
      - "Reactive updates triggered by changes to user settings or AI analysis requests"
    user_interactions:
      - "User enters a repo identifier (e.g., 'localhost:5001/kohya-ss/sd-scripts')"
      - "User applies filters (by file extension or code attributes) and can toggle AI analysis"
  
  backend:
    services_structure:
      - RepositoryDataService: Handles interaction with GitHub (or local GitHub mirror) to fetch repo data
      - AIAnalysisService: Processes or delegates code analysis requests to an LLM or local AI
    api_design:
      endpoints:
        - GET /api/repo/:owner/:repo: Fetch repository metadata and file structure
        - POST /api/analyze: Request AI-driven analysis with user-defined parameters
    data_processing:
      - "Extraction of code file info (e.g., lines of code, file extension, short description)"
      - "Optional AI summarization or classification for each file chunk or extension group"
    external_integrations:
      - "GitHub API for repository data"
      - "OpenAI-compatible endpoint for AI analysis"
  
  data:
    storage_solutions:
      - "In-memory or lightweight DB (e.g., SQLite) caching for ephemeral data"
      - "Optional cloud-based object store (e.g., AWS S3) for large files or logs"
    data_models:
      - "RepositoryModel: { owner, repo, branch, files[] }"
      - "FileModel: { path, extension, size, description, ai_summary? }"
    caching_strategy: |
      Implement server-side caching of recently analyzed repositories. Use a short TTL to ensure updates from GitHub are reflected soon, but reduce redundant queries.
    data_flow: |
      1. User requests repo info → Repo data fetched from GitHub → Processed and cached.
      2. (Optional) AI analysis request → Break files into chunks → Summaries/stats returned → Integrated into repository model.
  
  infrastructure:
    deployment_requirements:
      - "Containerized deployment with Docker for consistent environments"
      - "Configure environment variables for GitHub tokens and AI endpoints"
    scaling_considerations:
      - "Horizontal scaling of backend for large, concurrent analysis requests"
      - "Load balancer to route requests based on region or service capacity"
    service_dependencies:
      - "GitHub API"
      - "OpenAI-compatible LLM service"
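The RepositoryModel and FileModel entries above translate directly into TypeScript interfaces. A minimal sketch, with field names taken from the plan and the example values purely illustrative:

```typescript
// Data models from the architecture section; the optional aiSummary mirrors
// the plan's ai_summary? field and is filled only when AI analysis runs.
interface FileModel {
  path: string;
  extension: string;
  size: number;
  description: string;
  aiSummary?: string;
}

interface RepositoryModel {
  owner: string;
  repo: string;
  branch: string;
  files: FileModel[];
}

// Example instance shaped like a GET /api/repo/:owner/:repo response
// (values are illustrative, not fetched from GitHub).
const example: RepositoryModel = {
  owner: "kohya-ss",
  repo: "sd-scripts",
  branch: "main",
  files: [
    {
      path: "train_network.py",
      extension: ".py",
      size: 12345,
      description: "Training entry point",
    },
  ],
};
```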

3. Detailed Component Specifications

components:
  - name: "RepositoryDataService"
    purpose: |
      Fetch and process GitHub repository data, including file structures, file metadata, and storing minimal data in a lightweight cache.
    technical_requirements:
      libraries:
        - "Octokit (GitHub REST library) or a similar library"
        - "Lightweight caching (e.g., Node-cache)"
      performance:
        - "Must handle paging for large repos efficiently"
      security:
        - "Use GitHub access tokens securely stored in environment variables"
      integration_points:
        - "Communicates with GitHub API"
    implementation_details:
      data_structures:
        - "RepositoryModel"
        - "FileModel"
      algorithms:
        - "Recursive or paginated fetching to handle large directory trees"
      api_contracts:
        - "Returns a standardized JSON for repo structure"
      error_handling:
        - "Retry on rate limit; respond with 503 after repeated failures"
  
  - name: "AIAnalysisService"
    purpose: |
      Provide an optional AI-driven analysis of repository files, exposing an OpenAI-compatible endpoint.
    technical_requirements:
      libraries:
        - "OpenAI-compatible LLM library (e.g., OpenAI npm client or locally hosted LLM frameworks)"
      performance:
        - "Chunk large files to avoid exceeding context size"
      security:
        - "Only process data from whitelisted repos or after user authentication"
      integration_points:
        - "Interfaces with the LLM (could be cloud-based or self-hosted)"
    implementation_details:
      data_structures:
        - "AnalysisRequest { repoId, files[], maxContextSize, analysisType }"
      algorithms:
        - "File chunking algorithm based on user-defined max tokens or line count"
      api_contracts:
        - "POST /api/analyze (with AnalysisRequest) → returns AnalysisResponse"
      error_handling:
        - "Graceful degradation if AI fails—application remains functional"
  
  - name: "Frontend UI"
    purpose: |
      Present a user-friendly interface for inputting repository details, filtering files, and optionally invoking AI analysis.
    technical_requirements:
      libraries:
        - "React (or Vue/Angular) for the UI"
        - "Redux Toolkit or React Context"
      performance:
        - "Load large file lists lazily to avoid UI blocking"
      security:
        - "Use HTTPS for all data transmission"
      integration_points:
        - "Communicates with the backend endpoints"
    implementation_details:
      data_structures:
        - "React states for user input, repository data, AI analysis results"
      algorithms:
        - "Debounce user input in the search bar to reduce redundant calls"
      api_contracts:
        - "GET /api/repo/:owner/:repo"
        - "POST /api/analyze"
      error_handling:
        - "Display user-friendly messages if the backend is unreachable"
  
  - name: "Settings and Configuration Manager"
    purpose: |
      Enable user-level customization of analysis parameters, filtering options, chunk sizes, and AI usage policies.
    technical_requirements:
      libraries:
        - "JSON-based config management (e.g., config library)"
      performance:
        - "Quick retrieval of user settings with minimal overhead"
      security:
        - "Persist user settings in a secure store, especially if multi-tenant"
      integration_points:
        - "Frontend for user inputs"
        - "Backend to store and retrieve user preferences"
    implementation_details:
      data_structures:
        - "SettingsModel { userId, preferences, defaultValues }"
      algorithms:
        - "Fallback to default if user does not specify settings"
      api_contracts:
        - "GET /api/settings"
        - "POST /api/settings"
      error_handling:
        - "Gracefully handle missing or corrupted settings"
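The file-chunking algorithm referenced in AIAnalysisService can be sketched as a pure function. This line-based version is an assumption for illustration; a token-based variant (the other option the plan mentions) would follow the same shape:

```typescript
// Split a file's content into chunks of at most maxLines lines each, so no
// single AI request exceeds the user-configured context budget.
function chunkByLines(content: string, maxLines: number): string[] {
  const lines = content.split("\n");
  const chunks: string[] = [];
  for (let i = 0; i < lines.length; i += maxLines) {
    chunks.push(lines.slice(i, i + maxLines).join("\n"));
  }
  return chunks;
}
```

Each chunk would then be sent as one unit in the /api/analyze request payload.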

4. Task Breakdown

tasks:
  - id: "TASK-001"
    category: "backend"
    description: |
      Implement the RepositoryDataService to fetch repository information from GitHub using paginated requests.
    technical_details:
      required_technologies:
        - "Node.js"
        - "Octokit"
      implementation_approach: |
        1. Integrate Octokit to authenticate with GitHub tokens.
        2. Implement a function to recursively fetch directory/file information.
        3. Store retrieved data in an in-memory cache with a configurable TTL.
      expected_challenges:
        - "Handling large repositories within rate limits"
        - "Dealing with nested directory structures"
      acceptance_criteria:
        - "Successfully fetch a small, medium, and large repo without timeouts"
        - "Return structured JSON with file metadata"
    complexity:
      estimated_loc: 180
      estimated_hours: 6
    dependencies:
      - "TASK-000"
  
  - id: "TASK-002"
    category: "backend"
    description: |
      Develop the AIAnalysisService that communicates with an OpenAI-compatible LLM. Support file chunking and analyzing code files.
    technical_details:
      required_technologies:
        - "Node.js"
        - "OpenAI Node Client (or alternative LLM library)"
      implementation_approach: |
        1. Implement chunking logic based on user settings (max token/line constraints).
        2. Define a standardized request payload for /api/analyze.
        3. Return summarized or classified data for each file chunk.
      expected_challenges:
        - "Token limit handling"
        - "Managing cost and performance for large repos"
      acceptance_criteria:
        - "AI analysis completes within a set time for typical repos"
        - "Detailed summaries returned for each file chunk"
    complexity:
      estimated_loc: 150
      estimated_hours: 7
    dependencies:
      - "TASK-001"
  
  - id: "TASK-003"
    category: "frontend"
    description: |
      Build the React-based UI (RepositorySearchBar, RepositorySummaryPanel, CodeFileTable, FilterControls).
    technical_details:
      required_technologies:
        - "React"
        - "Typescript (optional)"
      implementation_approach: |
        1. Create reusable components for searching and displaying file details.
        2. Implement a filter panel to refine displayed files by extension or AI-based tags.
        3. Integrate with the backend to fetch repository data and trigger analysis.
      expected_challenges:
        - "Ensuring responsive design for large file lists"
        - "Maintaining consistent state between multiple components"
      acceptance_criteria:
        - "User can successfully filter files by extension"
        - "Displays repository metadata on search"
    complexity:
      estimated_loc: 150
      estimated_hours: 6
    dependencies:
      - "TASK-001"
  
  - id: "TASK-004"
    category: "frontend"
    description: |
      Implement the AI analysis toggle and display AI results inline with the file data.
    technical_details:
      required_technologies:
        - "React / Redux Toolkit or Context"
      implementation_approach: |
        1. Add a button or switch to enable AI-based analysis per repository.
        2. Show aggregated AI analysis on each file or chunk basis.
        3. Provide loading states and handle errors gracefully.
      expected_challenges:
        - "Handling partial or delayed AI responses"
        - "Balancing UI responsiveness with chunk-based updates"
      acceptance_criteria:
        - "User can enable/disable AI analysis with a single click"
        - "Analysis results appear next to relevant file data"
    complexity:
      estimated_loc: 120
      estimated_hours: 5
    dependencies:
      - "TASK-002"
      - "TASK-003"

  - id: "TASK-005"
    category: "backend"
    description: |
      Create and integrate the Settings and Configuration Manager, storing user preferences for chunk sizes, AI usage, and file extension filters.
    technical_details:
      required_technologies:
        - "Node.js"
        - "A simple JSON-based or database-based settings store"
      implementation_approach: |
        1. Define the SettingsModel for user preferences.
        2. Create endpoints /api/settings GET and POST.
        3. Integrate these settings into the AIAnalysisService and RepositoryDataService flow.
      expected_challenges:
        - "Synchronizing local settings changes with the database"
        - "Validation of user preferences"
      acceptance_criteria:
        - "User can update and retrieve their personal settings"
        - "Analysis respects updated user preferences"
    complexity:
      estimated_loc: 160
      estimated_hours: 6
    dependencies:
      - "TASK-001"
      - "TASK-002"
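TASK-001 calls for an in-memory cache with a configurable TTL. A minimal sketch of that idea (in practice a library such as node-cache, named in the component spec, would replace this):

```typescript
// Tiny TTL cache: entries expire ttlMs milliseconds after being set.
// Expired entries are evicted lazily on the next read.
class TtlCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
}
```

A short TTL keeps recently analyzed repositories warm while still reflecting upstream GitHub changes, as described in the caching strategy.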

5. Implementation Dependencies

  1. GitHub API (Octokit or equivalent)

    • Required for fetching repository data and file structure.
    • Must handle authentication tokens and rate-limiting strategies.
  2. OpenAI-Compatible LLM

    • Optional but recommended for AI analysis features.
    • Could be replaced or supplemented by a self-hosted model if desired.
  3. Caching Layer

    • In-memory or lightweight DB (e.g., Redis, SQLite) for storing fetched repo data and partially analyzed results.
    • Reduces repeated fetches from GitHub.
  4. Frontend Framework

    • React or a similar library with robust state management (e.g., Redux Toolkit).
    • Must facilitate modular, easily testable UI components.
  5. Infrastructure Services

    • Container orchestration (e.g., Docker, Kubernetes).
    • Reverse proxy/load balancer (e.g., Nginx) for production.
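The defaults fallback described for the Settings and Configuration Manager (TASK-005) amounts to merging user preferences over a defaults object. A sketch, with all field names assumed for illustration:

```typescript
// Hypothetical user-settings shape; fields mirror the tunables mentioned in
// the plan (chunk sizes, AI usage, extension filters).
interface Settings {
  chunkMaxLines: number;
  aiEnabled: boolean;
  extensionFilter: string[];
}

const DEFAULT_SETTINGS: Settings = {
  chunkMaxLines: 200,
  aiEnabled: false,
  extensionFilter: [],
};

// "Fallback to default if user does not specify settings": anything absent
// from userPrefs is taken from DEFAULT_SETTINGS.
function resolveSettings(userPrefs: Partial<Settings>): Settings {
  return { ...DEFAULT_SETTINGS, ...userPrefs };
}
```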

Why This App Is Going To Be Amazing

RepoScope 🔎 not only offers a fast and efficient way to analyze repositories but also provides an optional AI-driven layer to handle everything from code summarization to advanced classification—without limiting usability for those who prefer a standalone experience. Its modular design means you can easily add new features (like a prompt manager or advanced analytics) down the road, ensuring it grows with your needs. With its flexible architecture, emphasis on performance, and optional AI enhancements, RepoScope promises to be a game-changer in how developers understand, document, and leverage their codebases.

Execution Chain Meta-Prompt

You are an expert at breaking down technical projects into executable chunks and creating self-contained prompts for implementation and review. Your task is to take a technical project plan and transform it into a series of sequential execution prompts.

Input Structure

input:
  meta_prompt_output: |
    # Raw output from the first meta-prompt
    # Will be in YAML/TOML/XML format
  format: "yaml" # or "toml" or "xml"

Output Format

execution_state:
  done: []  # Array of completed task IDs
  
  done_reviews: []  # Array of completed review IDs
  
  currently_doing:  # Only one task can be here at a time
    task_id: "TASK-ID"  # ID from original plan
    execution_prompt: |
      # Complete self-contained task execution prompt
      PROJECT SETUP:
      ...
      
      CONTEXT:
      ...
      
      TECHNICAL REQUIREMENTS:
      ...
      
      IMPLEMENTATION SPECIFICATIONS:
      ...
      
      EXPECTED OUTPUT:
      ...
      
      ACCEPTANCE CRITERIA:
      ...
  
  pending_review:  # Only contains review for currently_doing task
    task_id: "TASK-ID"  # Same as currently_doing
    review_prompt: |
      # Complete self-contained review prompt
      CONTEXT:
      ...
      
      REVIEW CRITERIA:
      ...
      
      EXPECTED REVIEW OUTPUT:
      ...

Task Selection Rules

  1. Single Task Selection:

    • Only one task can be in progress at any time
    • Next task is selected only after current task is completed and reviewed
    • Task must have all dependencies in 'done' array
  2. Selection Priority:

    • Critical path tasks get priority
    • Foundation/infrastructure tasks before feature tasks
    • Backend services before frontend components that depend on them
    • Core functionality before optional features
  3. Task Readiness Criteria:

    • All dependencies must be completed
    • All required infrastructure must be in place
    • All needed APIs/interfaces must be defined
    • All required design decisions must be made

Project Setup Requirements

Every execution prompt must begin with clear setup instructions (feel free to use any tools like yeoman, cookiecutter, or scripts for efficiency) including:

  1. Project Initialization:

    • Directory creation
    • Package manager initialization
    • Git repository setup (if required)
  2. Dependencies:

    • Core dependencies with versions
    • Development dependencies with versions
    • Peer dependencies if applicable
    • Type definitions
  3. Configuration Files:

    • package.json with scripts
    • TypeScript configuration
    • Testing framework setup
    • Linter configuration
    • Build tool configuration
    • Environment configuration
  4. Development Environment:

    • Required Node.js version
    • Required package manager version
    • Development tools setup
    • IDE recommendations if applicable
  5. Build and Run Instructions:

    • Development server setup
    • Build process
    • Test running
    • Debugging setup

Prompt Generation Rules

  1. Execution Prompts Must Include:

    • Complete project setup instructions
    • Project context and background
    • Task-specific context
    • Architectural context
    • Technical specifications:
      • Language/framework versions
      • Development tools
      • Coding standards
      • Performance requirements
    • Implementation details:
      • File structure and locations
      • Component hierarchy
      • Interface definitions
      • Data structures
      • API contracts
      • Error handling
    • Quality requirements:
      • Testing specifications
      • Documentation standards
      • Performance benchmarks
      • Security requirements
    • Output specifications:
      • Deliverable format
      • File structure
      • Response format
    • Acceptance criteria:
      • Functional requirements
      • Quality thresholds
      • Performance requirements
      • Test coverage
  2. Review Prompts Must Include:

    • Task context and importance
    • Original requirements and specifications
    • Review criteria:
      • Code quality standards
      • Testing requirements
      • Performance benchmarks
      • Security standards
      • Documentation requirements
    • Output format:
      • Approval status format
      • Scoring criteria
      • Issue reporting format
      • Improvement suggestions format
    • Acceptance criteria:
      • Quality thresholds
      • Coverage requirements
      • Performance thresholds

Response Format

You should output:

  1. Current execution state with:
    • List of completed tasks (done)
    • List of completed reviews (done_reviews)
    • Single currently executing task with full prompt
    • Review prompt for current task

The goal is to generate focused, sequential execution and review prompts that any LLM instance can handle without requiring additional context or clarification.

execution_state:
  done: []
  done_reviews: []

  currently_doing:
    task_id: "TASK-000"
    execution_prompt: |
      PROJECT SETUP:
      ---------------
      1. Create a new directory named "reposcope" (or a similar project name) for our application.
      2. Initialize a new git repository in "reposcope".
      3. Ensure you have Node.js (v16 or higher) and npm (v7 or higher) installed.
      4. Navigate into the "reposcope" directory and run `npm init -y` to generate a basic package.json.

      DEPENDENCIES:
      -------------
      - Core Dependencies (exact versions can be adjusted as needed):
        * "express": "^4.18.2"            # For backend server
        * "cors": "^2.8.5"               # For handling cross-origin requests
        * "octokit": "^2.0.0"            # GitHub API interactions
        * "openai": "^3.2.1"             # AI integration (optional usage)
        * "react": "^18.2.0"             # Frontend framework
        * "react-dom": "^18.2.0"         # Required for React
      - Dev Dependencies:
        * "typescript": "^4.9.5"         # Type safety
        * "ts-node": "^10.9.1"           # TypeScript execution
        * "nodemon": "^2.0.20"           # Auto-reload for development
        * "eslint": "^8.30.0"            # Linting
        * "jest": "^29.4.0"              # Testing
        * "react-scripts": "^5.0.1"      # Bundling and dev server for React
      - Peer or Optional Dependencies:
        * "@reduxjs/toolkit": "^1.8.5"          # For state management if needed
        * "dotenv": "^16.0.3"                   # Environment variables management

      CONFIGURATION FILES:
      --------------------
      1. package.json:
         - Scripts:
           * "dev:backend": "nodemon --watch ./src/server --exec ts-node ./src/server/index.ts"
           * "dev:frontend": "react-scripts start"
           * "build:frontend": "react-scripts build"
           * "test": "jest"
           * "lint": "eslint . --ext .ts,.tsx"
         - Add relevant dependencies and devDependencies as above.

      2. tsconfig.json (placed at project root):
         {
           "compilerOptions": {
             "target": "ES2020",
             "module": "commonjs",
             "rootDir": "./src",
             "outDir": "./dist",
             "strict": true,
             "esModuleInterop": true
           },
           "include": ["src/**/*"],
           "exclude": ["node_modules", "**/*.test.ts", "**/*.spec.ts"]
         }

      3. .eslintrc.js or .eslintrc.json:
         - Extend recommended TypeScript and React linting rules.
         - Include a few custom rules as desired.

      4. jest.config.js:
         - Configure Jest to look for test files in the "tests" or "src" directory.
         - Set up coverage thresholds if desired.

      5. .env.example:
         - GITHUB_TOKEN=""
         - OPENAI_API_KEY=""
         - Other environment variables as needed.

      DEVELOPMENT ENVIRONMENT:
      ------------------------
      - Node.js version: >=16
      - npm version: >=7
      - Recommended IDE: VSCode (with ESLint and Prettier plugins)
      - Docker (optional) for containerization if needed later

      BUILD & RUN INSTRUCTIONS:
      -------------------------
      1. Install dependencies:
         $ npm install
      2. Start the backend in development mode:
         $ npm run dev:backend
      3. In another terminal, start the frontend (React):
         $ npm run dev:frontend
      4. Access the frontend at http://localhost:3000 (default React port).
         The backend will run on a separate port (commonly 4000 or 5000), configurable in your server setup.

      CONTEXT:
      --------
      RepoScope 🔎 aims to provide an interactive, AI-ready analysis of GitHub repositories. Before any specific feature development begins (such as the RepositoryDataService, AIAnalysisService, etc.), we need a robust project skeleton. This ensures we have a consistent environment, configurations, and scripts to build, run, and test the codebase.

      TECHNICAL REQUIREMENTS:
      -----------------------
      - TypeScript for both backend (Node.js) and frontend (React).
      - Strict linting and code quality checks to maintain a clean codebase.
      - Basic testing infrastructure (Jest) to cover upcoming features.
      - Containerization potential using Docker, if we want consistent dev/prod parity.
      - Project structure that separates backend (e.g., `src/server`) from frontend (e.g., `src/client`).
      - All environment-specific configuration must use .env files or environment variables.

      IMPLEMENTATION SPECIFICATIONS:
      ------------------------------
      1. Create `src/server` for backend code and `src/client` for React code.
      2. Setup `index.ts` in `src/server` to initialize Express server.
      3. For the frontend, use the React Scripts structure:
         - `src/client/index.tsx` as the main entry for React.
      4. Integrate linting and test coverage:
         - ESLint should run on commit or as part of CI.
         - Jest tests should be placed in `__tests__` or adjacent `.test.ts/.spec.ts` files.

      QUALITY REQUIREMENTS:
      ---------------------
      - Code must pass ESLint checks (no major warnings).
      - Basic unit tests must run successfully (placeholder tests are fine at this stage).
      - Project must build without errors on both backend and frontend.

      EXPECTED OUTPUT:
      ----------------
      - A fully initialized repository with:
        * package.json containing scripts and dependencies
        * TypeScript configurations
        * ESLint, Jest, and environment files
        * Basic Express server scaffold
        * Basic React application scaffold
      - The ability to run “npm run dev:backend” and “npm run dev:frontend” successfully.

      ACCEPTANCE CRITERIA:
      --------------------
      - The repository can be cloned on a fresh environment, dependencies installed, and dev servers (backend & frontend) can be started without errors.
      - ESLint produces no error-level issues.
      - Jest is installed and can run a simple placeholder test successfully.
      - Proper separation of concerns between backend and frontend directories.

  pending_review:
    task_id: "TASK-000"
    review_prompt: |
      CONTEXT:
      --------
      This review is for the initial project setup (TASK-000) which establishes our React + Node.js/Express + TypeScript environment, linting, testing, and standard build scripts. It ensures we have a stable foundation for all subsequent tasks.

      REVIEW CRITERIA:
      ----------------
      1. Code Quality & Linting:
         - Check if ESLint is configured properly.
         - Verify the directory structure is consistent with the plan.
      2. Testing Setup:
         - Confirm Jest is installed and a basic test runs successfully.
      3. Configuration Files:
         - Ensure TypeScript config, environment variables, and package scripts match the prompt.
      4. Build & Run Verification:
         - Ensure "npm run dev:backend" and "npm run dev:frontend" spin up services without issues.
      5. Documentation:
         - Confirm that README or inline docs mention how to install and run the project.

      EXPECTED REVIEW OUTPUT:
      -----------------------
      - Approval Status: e.g. "APPROVED" or "REQUEST_CHANGES"
      - Code Quality Score (1-5)
      - Identified Issues: A bullet-point list of any errors or potential improvements
      - Suggestions for Improvement: A bullet-point list of recommendations
      - Final Verdict: Summarize whether the setup is ready to proceed.

      ACCEPTANCE CRITERIA:
      --------------------
      - Linting passes with zero errors.
      - Basic test suite runs with no failures.
      - Directory structure and config files are properly set.
      - Dev servers run without crashes.

If your LLM doesn't produce good results with 02_prompt_chain for prompts after the first one, try the following instead

Task Selection Meta-Prompt

You are an expert at managing technical project execution chains and maintaining context across task transitions. Your role is to analyze the current project state and select the next optimal task for execution while ensuring all constraints and dependencies are respected.

Input Format

project_state:
  # Full project plan from Planning Meta-Prompt
  project_plan: |
    [Original YAML project plan]
  
  # Current execution state
  execution_state:
    done: []  # Array of completed task IDs
    done_reviews: []  # Array of completed review IDs
    last_task:
      task_id: "TASK-ID"
      summary: # Replace 'implementation' with summary
        interfaces: []
        key_decisions: []
        critical_constraints: []
        data_structures: []
      review_result: |
        [Completed review]
  
  # All previously generated execution prompts for reference
  task_summaries:
    - task_id: "TASK-ID"
      summary:
        interfaces: []
        key_decisions: []
        critical_constraints: []
        data_structures: []

Task Selection Process

  1. Dependency Analysis:

    • Review all tasks in project plan
    • Filter for tasks with all dependencies in 'done' array
    • Consider technical prerequisites (APIs, infrastructure, etc.)
  2. Priority Assessment:

    • Critical path analysis
    • Infrastructure dependencies
    • Service dependencies
    • Feature dependencies
  3. Context Continuity:

    • Analyze previous task implementation
    • Consider knowledge transfer requirements
    • Evaluate technical context preservation
    • Assess state management needs
  4. Resource Optimization:

    • Consider setup reuse opportunities
    • Evaluate environment consistency
    • Analyze tool and dependency overlap
    • Consider testing infrastructure reuse
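
The dependency-filtering step above can be sketched as a small function. The task shape and field names here are illustrative only — the real project plan YAML may name these fields differently:

```typescript
// Shapes are illustrative; the real plan's task entries may differ.
interface Task {
  id: string;
  dependsOn: string[];
}

// Step 1 above: a task is a candidate only when it is not yet done and
// every one of its dependencies appears in the 'done' array.
function readyTasks(tasks: Task[], done: string[]): Task[] {
  const completed = new Set(done);
  return tasks.filter(
    (t) => !completed.has(t.id) && t.dependsOn.every((d) => completed.has(d))
  );
}

const plan: Task[] = [
  { id: 'TASK-000', dependsOn: [] },
  { id: 'TASK-001', dependsOn: ['TASK-000'] },
  { id: 'TASK-002', dependsOn: ['TASK-000', 'TASK-001'] },
];

console.log(readyTasks(plan, ['TASK-000']).map((t) => t.id)); // [ 'TASK-001' ]
```

Priority, context continuity, and resource criteria (steps 2–4) then pick one task from this candidate set.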

Output Format

next_task:
  selection_rationale: |
    Detailed explanation of why this task was selected
    - Dependency analysis results
    - Priority considerations
    - Context continuity factors
    - Resource optimization insights
  
  execution_state:
    done: []  # Updated array with last task
    done_reviews: []  # Updated array with last review
    currently_doing:
      task_id: "TASK-ID"
      execution_prompt: |
        # Complete self-contained task execution prompt
        [Follow Execution Chain Meta-Prompt format]
    
    pending_review:
      task_id: "TASK-ID"
      review_prompt: |
        # Complete self-contained review prompt
        [Follow Execution Chain Meta-Prompt format]

  context_preservation:
    technical_dependencies:
      - List of technical elements that must be preserved
    state_requirements:
      - List of state that must be maintained
    environment_continuity:
      - List of environment aspects to preserve

Task Generation Requirements

  1. Execution Prompt Requirements:

    • Must be fully self-contained
    • Must include complete setup instructions
    • Must specify all technical requirements
    • Must define clear deliverables
    • Must include all context from previous tasks
    • Must maintain consistent coding standards
    • Must preserve architectural decisions
  2. Review Prompt Requirements:

    • Must verify context preservation
    • Must validate dependency handling
    • Must ensure consistent standards
    • Must verify state management
    • Must validate documentation
    • Must check integration points
  3. Context Preservation Requirements:

    • Technical decisions must be documented
    • State management must be explicit
    • Environment configuration must be preserved
    • Testing approach must be consistent
    • Documentation standards must be maintained

Special Considerations

  1. Branching Tasks:

    • Handle cases where multiple tasks become available
    • Document alternative task options
    • Explain selection criteria
    • Preserve context for alternate branches
  2. Integration Points:

    • Clearly specify interface requirements
    • Document API contracts
    • Define data formats
    • Specify validation requirements
  3. Technical Debt:

    • Track accumulated technical decisions
    • Document required refactoring
    • Plan debt resolution
    • Maintain quality standards
  4. Knowledge Transfer:

    • Document critical information
    • Maintain decision history
    • Preserve architectural context
    • Track important trade-offs

Response Guidelines

Your response must:

  1. Show clear reasoning for task selection
  2. Generate complete execution and review prompts
  3. Maintain all project constraints and standards
  4. Preserve technical context and decisions
  5. Ensure consistency with previous tasks
  6. Document all assumptions and dependencies

The goal is to maintain high-quality, consistent task execution while preserving all technical context and project standards throughout the development process.

Below is a recommended approach to set up the RepoScope skeleton. Follow these steps and adjust them according to your exact needs. By the end, you’ll have a directory structure with TypeScript, ESLint, Jest, and a basic Express + React application ready to develop further.


1. Project Initialization

  1. Create and enter the project directory:

    mkdir reposcope
    cd reposcope
  2. Initialize a new git repository:

    git init
  3. Initialize a Node.js project:

    npm init -y

    This will create a package.json file with default values.


2. File/Directory Structure

A suggested structure is shown below. You can tweak or reorganize as needed, but this is a good baseline:

reposcope/
  ├─ .env.example
  ├─ .gitignore
  ├─ package.json
  ├─ tsconfig.json
  ├─ jest.config.js
  ├─ .eslintrc.js        (or .eslintrc.json)
  ├─ src/
  │   ├─ server/
  │   │   └─ index.ts
  │   └─ client/
  │       ├─ index.tsx
  │       ├─ App.tsx
  │       └─ ...
  └─ ...

Key Folders:

  • src/server/: Holds backend (Express) code in TypeScript.
  • src/client/: Holds frontend (React) code in TypeScript.

3. Install Dependencies

Core Dependencies

npm install express cors octokit openai react react-dom

Note: openai is only needed if you intend to integrate the optional AI-assisted analysis features.

Dev Dependencies

npm install -D typescript ts-node nodemon eslint jest react-scripts \
               @types/node @types/express @types/react @types/react-dom \
               @types/jest

Why these?

  • TypeScript & ts-node: Transpile and run TypeScript.
  • nodemon: Automatically restarts Node on file changes.
  • eslint: Code linting.
  • jest: Testing framework.
  • react-scripts: Bundling and dev server for React.
  • @types/ packages: Provide TypeScript type definitions for Node, Express, React, etc.

If you want Redux/Toolkit or dotenv support:

npm install redux @reduxjs/toolkit dotenv

(Include them only if you need them.)


4. Update package.json Scripts

Open package.json and replace or add the following scripts:

{
  "name": "reposcope",
  "version": "1.0.0",
  "scripts": {
    "dev:backend": "nodemon --watch ./src/server --exec ts-node ./src/server/index.ts",
    "dev:frontend": "react-scripts start",
    "build:frontend": "react-scripts build",
    "test": "jest",
    "lint": "eslint . --ext .ts,.tsx"
  },
  // ...
}

Feel free to keep or remove any default test script that npm init -y provided.


5. TypeScript Configuration

Create a tsconfig.json at the project root:

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "rootDir": "./src",
    "outDir": "./dist",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "**/*.test.ts", "**/*.spec.ts"]
}

  • rootDir: Points to your source code folder.
  • outDir: Where compiled JavaScript will be emitted (if you run tsc directly).

6. ESLint Configuration

Create either a .eslintrc.js or .eslintrc.json file:

Example .eslintrc.js:

module.exports = {
  parser: '@typescript-eslint/parser', // Use the TypeScript parser
  extends: [
    'eslint:recommended',
    'plugin:@typescript-eslint/recommended',
    'plugin:react/recommended',
    'plugin:react-hooks/recommended'
  ],
  parserOptions: {
    ecmaFeatures: {
      jsx: true
    },
    ecmaVersion: 'latest',
    sourceType: 'module'
  },
  settings: {
    react: {
      version: 'detect'
    }
  },
  rules: {
    // Add or override any rule settings here:
    // "semi": ["error", "always"],
    // "@typescript-eslint/no-explicit-any": "off",
  }
};

Make sure to install ESLint plugins if needed:

npm install -D @typescript-eslint/parser @typescript-eslint/eslint-plugin \
               eslint-plugin-react eslint-plugin-react-hooks

7. Jest Configuration

Create jest.config.js at the project root:

module.exports = {
  preset: 'ts-jest', // Use ts-jest for TypeScript files
  testEnvironment: 'node',
  roots: ['<rootDir>/src'],
  transform: {
    '^.+\\.tsx?$': 'ts-jest'
  },
  moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json', 'node']
};

Also install ts-jest:

npm install -D ts-jest

8. Environment Variables

  1. Create a .env.example file in the project root:
    GITHUB_TOKEN=""
    OPENAI_API_KEY=""
    Add any other environment variables you anticipate needing.
  2. Copy this file to .env (which you typically do not commit to Git) and fill in actual secrets/keys locally.
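
Beyond loading .env (e.g. with dotenv), it can help to centralize environment access so that missing keys fail fast at startup. A minimal sketch — the helper name and defaulting behavior are suggestions, not part of the plan:

```typescript
// src/server/config.ts (suggested location) — centralized environment access.
// Throws early if a required variable is absent, instead of failing later
// deep inside a GitHub or OpenAI call.
export function requireEnv(name: string, fallback?: string): string {
  const value = process.env[name] ?? fallback;
  if (value === undefined) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at startup, e.g. in src/server/index.ts:
// const githubToken = requireEnv('GITHUB_TOKEN');
```

Variables with a sensible default (like PORT) can pass a fallback; true secrets should omit it so misconfiguration surfaces immediately.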

9. Backend Scaffold

Inside src/server/index.ts:

import express from 'express';
import cors from 'cors';

const app = express();
const PORT = process.env.PORT || 4000;

// Middleware
app.use(cors());
app.use(express.json());

// Simple test route
app.get('/', (req, res) => {
  res.json({ message: 'Hello from the RepoScope backend!' });
});

app.listen(PORT, () => {
  console.log(`Backend server listening on port ${PORT}`);
});

10. Frontend Scaffold

  1. Create React files under src/client/. For example:

    src/client/index.tsx:

    import React from 'react';
    import ReactDOM from 'react-dom/client';
    import App from './App';
    
    const root = ReactDOM.createRoot(document.getElementById('root') as HTMLElement);
    root.render(<App />);

    src/client/App.tsx:

    import React from 'react';
    
    function App() {
      return (
        <div>
          <h1>RepoScope Client</h1>
          <p>Welcome to the React frontend!</p>
        </div>
      );
    }
    
    export default App;
  2. React Scripts will look for an index.html inside a public/ folder, so create public/index.html (note that react-scripts also expects its entry file at src/index.tsx by default, so with the src/client layout you may need a small re-export stub at src/index.tsx, or a bundler that allows a custom entry point):

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8" />
        <meta name="viewport" content="width=device-width, initial-scale=1" />
        <title>RepoScope</title>
      </head>
      <body>
        <div id="root"></div>
      </body>
    </html>

11. .gitignore

Create a .gitignore (if you don’t already have one):

# Logs
logs
*.log
npm-debug.log*

# Dependencies
node_modules/

# Build output
dist/
build/

# Environment files
.env

12. Build & Run Instructions

  1. Install dependencies:

    npm install
  2. Start the backend in development mode:

    npm run dev:backend
    • By default, this runs on port 4000.
  3. In another terminal, start the frontend:

    npm run dev:frontend
    • By default, React will serve on port 3000.
  4. Test the app:

    Open http://localhost:3000 in a browser to see the React frontend; the backend's test route at http://localhost:4000/ responds with a JSON greeting.

  5. Run tests (Jest):

    npm test

    (Make sure you have at least one test file in src/ or a __tests__ folder.)

  6. Lint your code:

    npm run lint
    • Fix any reported issues for a clean codebase.
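
Step 5 assumes at least one test file exists. A minimal placeholder can exercise a trivial helper — the file names below are suggestions:

```typescript
// src/server/placeholder.ts (suggested name) — a trivial helper so the
// placeholder spec exercises real, compiled code rather than a bare literal.
export function add(a: number, b: number): number {
  return a + b;
}
```

A matching Jest spec in, say, src/server/placeholder.test.ts can then be a one-liner: `test('sanity', () => expect(add(1, 1)).toBe(2));` (`test` and `expect` are globals Jest injects at runtime, so that file only runs under `npm test`).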

13. Verification / Acceptance Criteria

  1. Linting: Confirm ESLint runs without error-level issues.
  2. Testing: Confirm Jest can run at least one basic test.
  3. Dev Servers: Confirm both backend (npm run dev:backend) and frontend (npm run dev:frontend) run without errors.
  4. Environment Variables: Confirm .env is correctly set up if you need GitHub tokens, OpenAI keys, etc.

Final Note

This setup provides a robust, TypeScript-based skeleton for RepoScope. You can now begin implementing features such as RepositoryDataService, AIAnalysisService, or more advanced React components with confidence that the codebase is linted, tested, and properly separated between backend and frontend.

Happy coding!
