This document outlines the end-to-end process that occurs when a request is sent to the POST /voice-quote endpoint. The system is designed to be robust, handling both exact data matches and ambiguous inputs through a combination of a rules-based engine and a fallback to a Generative AI model (LLM).
```mermaid
graph TD
    A[Start: POST /voice-quote] --> B{1. Validate `service_type`};
    B -- Missing --> C[Send 400 Error];
    B -- Present --> D[2. Find Best Service Match];
    D -- Not Found --> E[Send 500 Error: Service Not Found];
    D -- Found --> F[Augment Request with `service_ids`];
    F --> G[3. Delegate to Core `getQuote` Logic];
    G --> H{4. Normalize `service_time` via LLM};
    H --> I{5. Search for Vehicle in Database};
    I -- Found --> J[Path A: Rules-Based Engine];
    I -- Not Found --> K[Path B: LLM Fallback];

    subgraph Path A
        J --> L[Find Pricing Rule by Category & Year];
        L --> M[Calculate Base Price];
        M --> N[Calculate Time & Luxury Surcharges];
        N --> O[Add Cost for Add-on Services];
        O --> P["Construct Final Quote (source: 'rules')"];
    end

    subgraph Path B
        K --> Q[Prepare Prompt for LLM];
        Q --> R[LLM Determines Base Price & Vehicle Class];
        R --> S[Calculate Time & Luxury Surcharges];
        S --> T[Add Cost for Add-on Services];
        T --> U["Construct Final Quote (source: 'llm')"];
    end

    P --> V[6. Send 200 OK Response with Quote];
    U --> V;
    C --> W((End));
    E --> W;
    V --> W;
```
1. **Initial Request & Validation:**
   - The process begins when a `POST` request is sent to `/voice-quote`.
   - The controller first checks whether the `service_type` field is present in the request body. If it is missing, a `400 Bad Request` error is returned immediately.
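The validation step above can be sketched as a small pure function, assuming an Express-style controller. The function and type names here are illustrative, not the actual implementation:

```typescript
interface VoiceQuoteBody {
  service_type?: string;
  [key: string]: unknown;
}

// Returns an error message if the body is invalid, or null if it passes.
function validateVoiceQuoteBody(body: VoiceQuoteBody): string | null {
  if (!body.service_type || body.service_type.trim() === "") {
    return "Missing required field: service_type";
  }
  return null;
}

// In the controller this maps directly to the 400 branch, e.g.:
//   const error = validateVoiceQuoteBody(req.body);
//   if (error) return res.status(400).json({ error });
```

Keeping the check as a pure function makes the 400 branch trivially unit-testable, independent of the HTTP framework.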
2. **Service Matching (Fuzzy Search):**
   - The `service_type` string (e.g., "Unlock my car") is passed to the `serviceMatcherService`.
   - This service uses the Fuse.js library to perform a fuzzy search against the list of all available services in the database.
   - The search is weighted, prioritizing matches against the service's `name` (70% weight) over its associated `keywords` (30% weight).
   - If a sufficiently close match is found (based on a predefined threshold), the corresponding service object is returned. If no match is found, a `500 Internal Server Error` is returned, indicating the service could not be identified.
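To make the 70/30 weighting concrete, here is a deliberately simplified scorer. The real service uses Fuse.js, whose fuzzy scoring is far more sophisticated; the word-overlap metric, the `Service` shape, and the `0.3` threshold below are all assumptions for illustration only:

```typescript
interface Service {
  id: number;
  name: string;
  keywords: string[];
}

// Crude similarity: fraction of query words that appear in the target text.
function wordOverlap(query: string, target: string): number {
  const words = query.toLowerCase().split(/\s+/).filter(Boolean);
  if (words.length === 0) return 0;
  const hits = words.filter((w) => target.toLowerCase().includes(w)).length;
  return hits / words.length;
}

// Weighted score: name counts 70%, keywords 30%, mirroring the Fuse.js config.
// Returns null when nothing clears the (assumed) threshold.
function findBestService(
  query: string,
  services: Service[],
  threshold = 0.3
): Service | null {
  let best: Service | null = null;
  let bestScore = 0;
  for (const s of services) {
    const score =
      0.7 * wordOverlap(query, s.name) +
      0.3 * wordOverlap(query, s.keywords.join(" "));
    if (score > bestScore) {
      bestScore = score;
      best = s;
    }
  }
  return bestScore >= threshold ? best : null;
}
```

The `null` return corresponds to the "no match found" branch that ultimately surfaces as the `500` error.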
3. **Request Augmentation & Delegation:**
   - Once the best-matching service is identified, its ID is added to the request body as `service_ids: [id]`.
   - The entire request is then passed internally to the `getQuote` controller logic, which handles all standard price quoting.
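A minimal sketch of the augment-and-delegate pattern, with `getQuote` stubbed out (its real signature is not shown in this document):

```typescript
interface QuoteRequest {
  service_type: string;
  service_ids?: number[];
  [key: string]: unknown;
}

// Hypothetical stand-in for the shared quoting logic.
function getQuote(body: QuoteRequest): { serviceIds: number[] } {
  return { serviceIds: body.service_ids ?? [] };
}

// The voice controller augments the body and delegates, rather than
// duplicating any pricing logic of its own.
function handleVoiceQuote(body: QuoteRequest, matchedServiceId: number) {
  const augmented: QuoteRequest = { ...body, service_ids: [matchedServiceId] };
  return getQuote(augmented);
}
```

Spreading the original body keeps every other field (vehicle, time, add-ons) intact for the downstream logic.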
4. **Time Normalization:**
   - The `pricingService` receives the request. Its first action is to normalize the `service_time` field.
   - It uses a helper function, `normalizeTimeWithLlm`, which can interpret natural-language strings (e.g., `"tonight at 10pm"`, `"now"`) and convert them into a standard ISO 8601 timestamp using an LLM (Google's Gemini). If the timestamp is already in a valid format, it is used as-is.
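The shape of that fast-path-or-LLM decision can be sketched as follows. The LLM call is abstracted behind a callback so the deterministic path stays visible; the real `normalizeTimeWithLlm` signature and the simplified ISO check are assumptions:

```typescript
type LlmNormalizer = (raw: string) => Promise<string>;

function isIsoTimestamp(value: string): boolean {
  // Rough check: looks like ISO 8601 and parses to a real date.
  // A production validator would be stricter.
  return /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}/.test(value) && !Number.isNaN(Date.parse(value));
}

async function normalizeServiceTime(raw: string, llm: LlmNormalizer): Promise<string> {
  if (isIsoTimestamp(raw)) return raw; // already valid: use as-is, no LLM round trip
  return llm(raw); // natural language ("tonight at 10pm", "now") goes to the LLM
}
```

Short-circuiting on already-valid timestamps avoids a model call on the common, well-formed case.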
5. **Pricing Logic Fork (Rules vs. LLM):**
   - The system now attempts to find the vehicle (`make` and `model`) in its internal database. This is the critical branching point.
   - **Path A: Rules-Based Engine (Vehicle Found)**
     - If the vehicle is found, the system proceeds with its deterministic rules engine.
     - It retrieves the vehicle's `Category` (e.g., "Sedan", "SUV").
     - It looks up the `Base_Price` from a pricing table using the vehicle's `Category` and `year`.
     - It calculates any applicable surcharges:
       - **Time Surcharge:** based on the normalized `service_time`.
       - **Luxury Surcharge:** a flat `$10` fee is added if the vehicle's `make` is on the luxury list.
     - It adds a flat fee (`$35`) for any additional "add-on" services requested.
     - The final quote is assembled with `source: "rules"` and `confidence: 1.0`.
   - **Path B: LLM Fallback (Vehicle Not Found)**
     - If the vehicle is not in the database (e.g., due to a typo, an obscure model, or a brand-new vehicle), the system falls back to the LLM.
     - It constructs a detailed prompt for the Gemini model, providing the user's original input, the list of all possible vehicle classes, and the entire vehicle pricing table for context.
     - The LLM's task is to analyze the input and return a structured JSON object containing a `vehicle_class`, a `base_price`, and a `confidence` score.
     - The system then uses the `base_price` returned by the LLM and calculates the time and luxury surcharges exactly as the rules-based engine does.
     - The final quote is assembled with `source: "llm"` and the confidence score provided by the model.
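Since both paths share the surcharge arithmetic once a base price exists, that shared tail can be sketched as one function. Only the flat `$10` luxury fee and `$35` add-on fee come from the text above; the luxury-make list, the night-hours window, the `$15` night fee, and the per-add-on application of the `$35` fee are assumptions:

```typescript
interface QuoteResult {
  quote: number;
  breakdown: { base: number; timeSurcharge: number; luxurySurcharge: number; addOns: number };
  source: "rules" | "llm";
  confidence: number;
}

const LUXURY_MAKES = new Set(["bmw", "mercedes-benz", "porsche"]); // assumed list

// Assumed: a late-night window (22:00-06:00 UTC) triggers a $15 time surcharge.
function timeSurcharge(serviceTimeIso: string): number {
  const hour = new Date(serviceTimeIso).getUTCHours();
  return hour >= 22 || hour < 6 ? 15 : 0;
}

// Shared by both paths: base price comes from the rules table (Path A)
// or from the LLM (Path B); source and confidence are set accordingly.
function buildQuote(
  basePrice: number,
  make: string,
  serviceTimeIso: string,
  addOnCount: number,
  source: "rules" | "llm",
  confidence: number
): QuoteResult {
  const luxury = LUXURY_MAKES.has(make.toLowerCase()) ? 10 : 0; // flat $10 luxury fee
  const time = timeSurcharge(serviceTimeIso);
  const addOns = addOnCount * 35; // $35 flat fee, applied per add-on here as an assumption
  const breakdown = { base: basePrice, timeSurcharge: time, luxurySurcharge: luxury, addOns };
  return { quote: basePrice + time + luxury + addOns, breakdown, source, confidence };
}
```

Factoring the tail this way means the rules engine and the LLM fallback differ only in where `basePrice` and `confidence` come from, which matches the diagram's converging arrows into the final quote node.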
6. **Final Response:**
   - Regardless of the path taken, the system constructs a final JSON object containing the total `quote`, a `breakdown` of the costs (base price, surcharges, add-ons), the `source` of the quote (`rules` or `llm`), and a `confidence` score.
   - This JSON object is returned to the client with a `200 OK` status code.
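An illustrative `200 OK` body, assuming a rules-path quote. The top-level fields (`quote`, `breakdown`, `source`, `confidence`) come from the description above; the exact key names inside `breakdown` and the dollar amounts are assumptions:

```json
{
  "quote": 140,
  "breakdown": {
    "base_price": 80,
    "time_surcharge": 15,
    "luxury_surcharge": 10,
    "addon_fees": 35
  },
  "source": "rules",
  "confidence": 1.0
}
```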