@anthonycoffey
Last active October 8, 2025 19:54
Explanation of How the Quote is Calculated

Pricing Logic Documentation

The pricing system uses a hybrid approach that combines a fast, rules-based engine for standard requests with a powerful Generative AI (LLM) fallback to handle ambiguity, misspellings, or vehicles not in our standard database.

This ensures both accuracy for known vehicles and flexibility for unpredictable user input.


Pricing Calculation Flow

The process can be visualized with the following flow diagram.

```mermaid
graph TD
    A[Start: POST /quote] --> B{Input: year, make, model, service_time};
    B --> C[1. Normalize Time];
    C --> D{Vehicle in Database?};
    D -- Yes --> E[Path A: Rules Engine];
    D -- No --> F[Path B: LLM Engine];

    subgraph Path A: Rules Engine
        E --> G[2a. Find Vehicle Category];
        G --> H[3a. Get Base Price from Pricing Table];
        H --> I[4a. Calculate Surcharges];
        I --> J[5a. Assemble Final Quote];
    end

    subgraph Path B: LLM Engine
        F --> K[2b. Prepare LLM Prompt];
        K --> L[3b. LLM Classifies Vehicle & Gets Base Price];
        L --> M[4b. Calculate Surcharges];
        M --> N[5b. Assemble Final Quote];
    end

    J --> O[End: Return Quote source: rules];
    N --> P[End: Return Quote source: llm];
```

Step-by-Step Logic Breakdown

1. Time Normalization (service_time)

The system first standardizes the service_time input.

  • If the input is a standard format (like an ISO 8601 string or Unix timestamp), it is used directly.
  • If the input is a natural language string (e.g., "tonight at 10pm", "tomorrow morning"), it is sent to the Gemini LLM, which converts it into a precise ISO 8601 timestamp. This ensures that time-based rules can be applied reliably.
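The branch between "use directly" and "send to the LLM" can be sketched as below. The function names (`isStandardTime`, `normalizeServiceTime`) and the assumption that numeric timestamps are milliseconds are illustrative, not the actual service code.

```javascript
// Sketch of the time-normalization step. Names are assumptions;
// numeric input is assumed to be a millisecond Unix timestamp.
function isStandardTime(input) {
  // Unix timestamp
  if (typeof input === 'number' && Number.isFinite(input)) return true;
  // ISO 8601 string that Date can parse and that starts with a date
  if (typeof input === 'string' && !Number.isNaN(Date.parse(input))) {
    return /^\d{4}-\d{2}-\d{2}/.test(input);
  }
  return false;
}

async function normalizeServiceTime(input, llmNormalize) {
  if (isStandardTime(input)) {
    // Standard formats are used directly, canonicalized to ISO 8601.
    return new Date(input).toISOString();
  }
  // Natural language ("tonight at 10pm") is delegated to the LLM,
  // which returns a precise ISO 8601 timestamp.
  return llmNormalize(input);
}
```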

2. Vehicle Lookup

The system attempts to find an exact match for the user's vehicle using the pricingRepository.findVehicle function.

  • It normalizes the make and model (e.g., "Honda", "accord") and searches vehicles.csv.
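A minimal sketch of this lookup is below. The real `pricingRepository.findVehicle` reads `vehicles.csv`; here two rows are inlined for illustration, and the normalization (trim + lowercase) is an assumption about how make/model are matched.

```javascript
// Hypothetical stand-in for the vehicles.csv contents.
const vehicles = [
  { make: 'honda', model: 'accord', category: 'Full-Size Car' },
  { make: 'toyota', model: 'corolla', category: 'Compact Car' },
];

// Sketch of pricingRepository.findVehicle: exact match on
// normalized make and model, or null if the vehicle is unknown.
function findVehicle(make, model) {
  const norm = (s) => s.trim().toLowerCase();
  return (
    vehicles.find((v) => v.make === norm(make) && v.model === norm(model)) ||
    null
  );
}
```

A `null` result here is what routes the request to Path B below.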

Path A: Rules-Based Engine (Vehicle Found)

If an exact vehicle match is found, the calculation follows a deterministic, rules-based path:

  • 3a. Get Base Price: The vehicle's Category (e.g., "Full-Size Car") is retrieved from the database. The system then finds the matching rule in vehicle_pricing.csv for that category and year to determine the Base_Price.
  • 4a. Calculate Surcharges:
    • Time Surcharge: The normalized service_time is checked against the time_surcharges.csv table. If the time falls within a surcharge window (e.g., late night), the corresponding fee is added.
    • Luxury Uplift: The vehicle's make is checked against luxury_makes.csv. If it's listed (e.g., "Cadillac", "Mercedes-Benz"), a fixed $10 uplift is added.
  • 5a. Assemble Final Quote: The final quote is calculated as Base_Price + Time_Surcharge + Luxury_Uplift. The response is returned with source: "rules" and confidence: 1.0.
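Steps 3a–5a can be sketched as follows. The $10 luxury uplift comes from the docs above; the late-night window and the $15 surcharge amount are assumed stand-ins for the actual `time_surcharges.csv` contents.

```javascript
// Assumed stand-ins for luxury_makes.csv and time_surcharges.csv.
const LUXURY_MAKES = new Set(['cadillac', 'mercedes-benz']);
const LUXURY_UPLIFT = 10; // fixed $10 uplift per the docs

function timeSurcharge(isoTime) {
  // Illustrative rule: a late-night window (22:00-05:59 UTC) adds $15.
  const hour = new Date(isoTime).getUTCHours();
  return hour >= 22 || hour < 6 ? 15 : 0;
}

// Sketch of Path A assembly: Base_Price + Time_Surcharge + Luxury_Uplift.
function assembleRulesQuote(basePrice, make, isoTime) {
  const surcharge = timeSurcharge(isoTime);
  const uplift = LUXURY_MAKES.has(make.toLowerCase()) ? LUXURY_UPLIFT : 0;
  return {
    total: basePrice + surcharge + uplift,
    source: 'rules',
    confidence: 1.0,
  };
}
```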

Path B: LLM Fallback Engine (Vehicle Not Found)

If no exact vehicle match is found (due to a typo, abbreviation like "Chevy", or an unlisted vehicle), the system delegates the task to the Gemini LLM:

  • 2b. Prepare LLM Prompt: A detailed prompt is constructed for the LLM. This prompt includes the user's original input, a list of all valid Vehicle Classes, and the entire contents of the vehicle_pricing.csv table.
  • 3b. LLM Classifies and Prices: The LLM is instructed to:
    1. Normalize the user's input (e.g., correct "Chevy" to "Chevrolet").
    2. Classify the vehicle into one of the provided Vehicle Classes.
    3. Look up the correct base_price from the CSV data provided in the prompt.
    4. Return a structured JSON object containing the vehicle_class, base_price, confidence score, and notes about its reasoning.
  • 4b. Calculate Surcharges: Using the base_price from the LLM, the system applies the Time Surcharge and Luxury Uplift using the same logic as the rules-based path.
  • 5b. Assemble Final Quote: The final quote is LLM_Base_Price + Time_Surcharge + Luxury_Uplift. The response is returned with source: "llm" and the confidence score from the model.
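Because the base price now comes from a model rather than a lookup table, the parsed JSON is worth guarding before step 4b. The field names below match the prompt's schema; the validator itself is an assumed sketch, not a function in the service.

```javascript
// Sketch of a guard on the LLM's structured response (Path B).
// Field names follow the JSON schema in the prompt below.
function validateLlmQuote(parsed) {
  const ok =
    typeof parsed.vehicle_class === 'string' &&
    typeof parsed.base_price === 'number' &&
    parsed.base_price > 0 &&
    typeof parsed.confidence === 'number' &&
    parsed.confidence >= 0 &&
    parsed.confidence <= 1;
  if (!ok) throw new Error('Malformed LLM quote response');
  return parsed;
}
```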

LLM Prompt

  • This function is included in full because it best demonstrates how the prompt is engineered.
  • We can modify this prompt to handle bundled pricing and other extensions.
```javascript
async function getLlmQuote(userInput) {
  console.log('[PricingService] getLlmQuote called with user input:', JSON.stringify(userInput, null, 2));
  const systemMessage = `You are an expert vehicle classification and pricing assistant. Your sole purpose is to return a structured JSON object.

- You MUST return only a single, valid JSON object without any explanatory text, code block formatting, or markdown.
- You MUST select a 'vehicle_class' from the provided list.
- You MUST determine the 'base_price' by finding the correct range in the provided vehicle pricing table.
- If input is ambiguous, use a 'confidence' score below 1.0.
- Your goal is to normalize messy user input into a clean, structured format.
- All fields in the user input, including 'service_time', are important context. Acknowledge them in your 'notes' if they influence your decision.`;

  const userMessage = `Please process the following vehicle service request.

**User Input:**
- Structured Data: ${JSON.stringify(userInput)}

**Context Data:**

**Vehicle Classes:**
[${pricingRepository.getVehicleCategories().join(", ")}]

**Vehicle Pricing Table (CSV Format):**
"""
${pricingRepository.getVehiclePricingCSV()}
"""

**Task:**
Return a JSON object with the following structure.
- If 'service_time' is a natural language string (e.g., "tonight at 10pm"), convert it into a full ISO 8601 timestamp and place it in the 'normalized_service_time' field.
- If 'service_time' is already a valid timestamp, copy it to 'normalized_service_time'.

{
  "vehicle_class": "string",
  "normalized": { "make": "string", "model": "string", "year": number },
  "base_price": number,
  "confidence": number,
  "notes": "string",
  "normalized_service_time": "string"
}`;

  try {
    console.log('[PricingService] Calling LLM for quote generation...');
    const result = await llmModel.generateContent([systemMessage, userMessage]);
    const responseText = result.response.text();
    console.log('[PricingService] LLM raw response:', responseText);
    const cleanedJson = responseText.replace(/```json/g, "").replace(/```/g, "").trim();
    const parsedJson = JSON.parse(cleanedJson);
    console.log('[PricingService] LLM parsed response:', JSON.stringify(parsedJson, null, 2));
    return parsedJson;
  } catch (error) {
    console.error("[PricingService] LLM API call failed:", error);
    throw new Error("LLM quote generation failed.");
  }
}
```