The pricing system uses a hybrid approach that combines a fast, rules-based engine for standard requests with a powerful Generative AI (LLM) fallback to handle ambiguity, misspellings, or vehicles not in our standard database.
This ensures both accuracy for known vehicles and flexibility for unpredictable user input.
The process can be visualized with the following flow diagram.
```mermaid
graph TD
    A["Start: POST /quote"] --> B{"Input: year, make, model, service_time"};
    B --> C["1. Normalize Time"];
    C --> D{"Vehicle in Database?"};
    D -- Yes --> E["Path A: Rules Engine"];
    D -- No --> F["Path B: LLM Engine"];
    subgraph PathA["Path A: Rules Engine"]
        E --> G["2a. Find Vehicle Category"];
        G --> H["3a. Get Base Price from Pricing Table"];
        H --> I["4a. Calculate Surcharges"];
        I --> J["5a. Assemble Final Quote"];
    end
    subgraph PathB["Path B: LLM Engine"]
        F --> K["2b. Prepare LLM Prompt"];
        K --> L["3b. LLM Classifies Vehicle & Gets Base Price"];
        L --> M["4b. Calculate Surcharges"];
        M --> N["5b. Assemble Final Quote"];
    end
    J --> O["End: Return Quote (source: rules)"];
    N --> P["End: Return Quote (source: llm)"];
```
The system first standardizes the `service_time` input.
- If the input is in a standard format (such as an ISO 8601 string or a Unix timestamp), it is used directly.
- If the input is a natural language string (e.g., "tonight at 10pm", "tomorrow morning"), it is sent to the Gemini LLM, which converts it into a precise ISO 8601 timestamp. This ensures that time-based rules can be applied reliably (see the sketch below).
The system attempts to find an exact match for the user's vehicle using the `pricingRepository.findVehicle` function.
- It normalizes the `make` and `model` (e.g., "Honda", "accord") and searches `vehicles.csv`, as sketched below.
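A minimal sketch of this lookup, assuming `vehicles.csv` has already been parsed into rows shaped like `{ Make, Model, Category }` (the real `pricingRepository.findVehicle` may differ):

```javascript
// Sketch only: assumes vehicles.csv rows look like { Make, Model, Category }.
function findVehicle(rows, make, model) {
  const norm = (value) => String(value).trim().toLowerCase();
  return (
    rows.find((row) => norm(row.Make) === norm(make) && norm(row.Model) === norm(model)) || null
  );
}
```

A match returns the row (including its `Category`) and the quote follows Path A; `null` routes the request to the LLM fallback in Path B.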
If an exact vehicle match is found, the calculation follows a deterministic, rules-based path (a sketch follows this list):

- 3a. Get Base Price: The vehicle's `Category` (e.g., "Full-Size Car") is retrieved from the database. The system then finds the matching rule in `vehicle_pricing.csv` for that category and `year` to determine the `Base_Price`.
- 4a. Calculate Surcharges:
  - Time Surcharge: The normalized `service_time` is checked against the `time_surcharges.csv` table. If the time falls within a surcharge window (e.g., late night), the corresponding fee is added.
  - Luxury Uplift: The vehicle's `make` is checked against `luxury_makes.csv`. If it is listed (e.g., "Cadillac", "Mercedes-Benz"), a fixed `$10` uplift is added.
- 5a. Assemble Final Quote: The final quote is calculated as `Base_Price + Time_Surcharge + Luxury_Uplift`. The response is returned with `source: "rules"` and `confidence: 1.0`.
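A minimal sketch of the rules path, assuming the CSVs are already parsed into arrays of plain objects; the column names (`Year_Min`, `Year_Max`, `Base_Price`, `Start_Hour`, `End_Hour`, `Surcharge`) and the year-range matching are assumptions, not the exact schema:

```javascript
// Sketch only: column names and year-range matching are assumed, not the real schema.
function buildRulesQuote({ category, make, year, serviceTimeIso, pricingRows, surchargeRows, luxuryMakes }) {
  // 3a. Base price: match the vehicle's category and year against vehicle_pricing.csv rows.
  const priceRule = pricingRows.find(
    (r) => r.Category === category && year >= Number(r.Year_Min) && year <= Number(r.Year_Max)
  );
  if (!priceRule) throw new Error(`No pricing rule for ${category} / ${year}`);
  const basePrice = Number(priceRule.Base_Price);

  // 4a. Time surcharge: check the normalized hour against time_surcharges.csv windows.
  const hour = new Date(serviceTimeIso).getHours();
  const window = surchargeRows.find((r) => hour >= Number(r.Start_Hour) && hour < Number(r.End_Hour));
  const timeSurcharge = window ? Number(window.Surcharge) : 0;

  // 4a. Luxury uplift: fixed $10 if the make appears in luxury_makes.csv.
  const luxuryUplift = luxuryMakes.includes(make) ? 10 : 0;

  // 5a. Assemble the deterministic quote.
  return {
    total: basePrice + timeSurcharge + luxuryUplift,
    source: 'rules',
    confidence: 1.0,
  };
}
```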
If no exact vehicle match is found (due to a typo, an abbreviation like "Chevy", or an unlisted vehicle), the system delegates the task to the Gemini LLM (see the sketch after this list):

- 2b. Prepare LLM Prompt: A detailed prompt is constructed for the LLM. It includes the user's original input, a list of all valid `Vehicle Classes`, and the entire contents of the `vehicle_pricing.csv` table.
- 3b. LLM Classifies and Prices: The LLM is instructed to:
  - Normalize the user's input (e.g., correct "Chevy" to "Chevrolet").
  - Classify the vehicle into one of the provided `Vehicle Classes`.
  - Look up the correct `base_price` from the CSV data provided in the prompt.
  - Return a structured JSON object containing the `vehicle_class`, `base_price`, `confidence` score, and `notes` about its reasoning.
- 4b. Calculate Surcharges: Using the `base_price` from the LLM, the system applies the Time Surcharge and Luxury Uplift with the same logic as the rules-based path.
- 5b. Assemble Final Quote: The final quote is `LLM_Base_Price + Time_Surcharge + Luxury_Uplift`. The response is returned with `source: "llm"` and the confidence score from the model.
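A minimal sketch of how the LLM result feeds into the final quote; `applySurcharges` stands in for the shared time/luxury surcharge logic shown in the rules-path sketch above and is not a real function name:

```javascript
// Sketch only: applySurcharges is a stand-in for the shared time/luxury surcharge logic.
async function buildLlmQuote(userInput, applySurcharges) {
  const llm = await getLlmQuote(userInput); // the function included in full below

  // 4b. Same surcharge rules, applied to the LLM-normalized make and service_time.
  const { timeSurcharge, luxuryUplift } = applySurcharges(
    llm.normalized.make,
    llm.normalized_service_time
  );

  // 5b. Assemble the quote, carrying over the model's confidence and reasoning notes.
  return {
    total: llm.base_price + timeSurcharge + luxuryUplift,
    source: 'llm',
    confidence: llm.confidence,
    notes: llm.notes,
  };
}
```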
- The `getLlmQuote` function is included in full below because it best demonstrates how the prompt is engineered.
- The prompt can be modified later to handle bundled pricing and similar scenarios.
```javascript
async function getLlmQuote(userInput) {
console.log('[PricingService] getLlmQuote called with user input:', JSON.stringify(userInput, null, 2));
const systemMessage = `You are an expert vehicle classification and pricing assistant. Your sole purpose is to return a structured JSON object.
- You MUST return only a single, valid JSON object without any explanatory text, code block formatting, or markdown.
- You MUST select a 'vehicle_class' from the provided list.
- You MUST determine the 'base_price' by finding the correct range in the provided vehicle pricing table.
- If input is ambiguous, use a 'confidence' score below 1.0.
- Your goal is to normalize messy user input into a clean, structured format.
- All fields in the user input, including 'service_time', are important context. Acknowledge them in your 'notes' if they influence your decision.`;
const userMessage = `Please process the following vehicle service request.
**User Input:**
- Structured Data: ${JSON.stringify(userInput)}
**Context Data:**
**Vehicle Classes:**
[${pricingRepository.getVehicleCategories().join(", ")}]
**Vehicle Pricing Table (CSV Format):**
"""
${pricingRepository.getVehiclePricingCSV()}
"""
**Task:**
Return a JSON object with the following structure.
- If 'service_time' is a natural language string (e.g., "tonight at 10pm"), convert it into a full ISO 8601 timestamp and place it in the 'normalized_service_time' field.
- If 'service_time' is already a valid timestamp, copy it to 'normalized_service_time'.
{
"vehicle_class": "string",
"normalized": { "make": "string", "model": "string", "year": number },
"base_price": number,
"confidence": number,
"notes": "string",
"normalized_service_time": "string"
}`;
try {
console.log('[PricingService] Calling LLM for quote generation...');
const result = await llmModel.generateContent([systemMessage, userMessage]);
const responseText = result.response.text();
console.log('[PricingService] LLM raw response:', responseText);
const cleanedJson = responseText.replace(/```json/g, "").replace(/```/g, "").trim();
const parsedJson = JSON.parse(cleanedJson);
console.log('[PricingService] LLM parsed response:', JSON.stringify(parsedJson, null, 2));
return parsedJson;
} catch (error) {
console.error("[PricingService] LLM API call failed:", error);
throw new Error("LLM quote generation failed.");
}
}
```
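A hypothetical call for an abbreviated or unlisted vehicle might look like the following; the input shape mirrors the `/quote` fields above, and the logged values are only the fields the prompt asks the model to return:

```javascript
// Sketch only: illustrates the expected request/response shape, not actual model output.
const quoteInput = {
  year: 2021,
  make: 'Chevy',          // abbreviation the LLM is expected to normalize to "Chevrolet"
  model: 'Silverado',
  service_time: 'tonight at 10pm',
};

getLlmQuote(quoteInput).then((result) => {
  // Expected fields: vehicle_class, normalized, base_price, confidence, notes, normalized_service_time.
  console.log(result.vehicle_class, result.base_price, result.confidence, result.notes);
});
```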