Multi-shot prompting (also called few-shot prompting) is a powerful technique to improve an LLM's ability to extract information accurately by providing multiple examples within the prompt. This helps the model understand the task, context, and desired output format better than zero-shot or single-shot prompting. Below, I’ll explain multi-shot prompting for routing function calls and extracting information from text, with clear examples tailored to your use case.
Since you mentioned issues with LLMs not extracting information properly for routing function calls, I’ll focus on scenarios where the model needs to parse user input and map it to specific functions, including handling ambiguous or noisy text. I’ll also include varied examples to demonstrate robustness.
Multi-shot prompting involves giving the LLM several examples of input-output pairs in the prompt before asking it to process the actual input. Each example shows the expected task (e.g., extracting parameters from text and routing to a function). This helps the model learn the pattern and generalize to new inputs, improving accuracy.
For routing function calls, multi-shot prompting is particularly useful because:
- It clarifies how to handle vague, incomplete, or differently phrased inputs.
- It teaches the model to extract specific parameters needed for function calls.
- It reduces errors in mapping user intent to the correct function.
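The pattern above can be sketched in code. Below is a minimal, hypothetical helper (the function name and example structure are illustrative, not from any specific library) that assembles a multi-shot prompt from input/output pairs in the same layout used throughout this answer:

```python
import json

def build_multishot_prompt(instruction, examples, user_input):
    """Concatenate an instruction, numbered examples, and the new input."""
    parts = [instruction, "", "Here are examples:"]
    for i, (inp, out) in enumerate(examples, start=1):
        parts.append(f"\nExample {i}:")
        parts.append(f'Input: "{inp}"')
        parts.append(f"Output: {json.dumps(out, indent=2)}")
    parts.append("\nNow, process this input:")
    parts.append(f'Input: "{user_input}"')
    parts.append("Output:")
    return "\n".join(parts)

examples = [
    ("What's the weather like in Paris tomorrow?",
     {"function": "get_weather", "city": "Paris", "date": "2025-04-11"}),
    ("Tell me about the weather in New York",
     {"function": "get_weather", "city": "New York", "date": "2025-04-10"}),
]
prompt = build_multishot_prompt(
    "You are an assistant that extracts parameters for get_weather(city, date).",
    examples,
    "What's the forecast for Seattle this weekend?",
)
```

Keeping the examples as structured data rather than a hand-edited string makes it easy to add, remove, or reorder shots when testing which combination generalizes best.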
I’ll provide three scenarios to illustrate multi-shot prompting for routing function calls. Each scenario includes a different type of text input and function routing task, with multi-shot examples to guide the LLM. I’ll also explain why each example is structured the way it is.
Scenario 1: Weather Query Routing
Task: The user provides a natural-language query about the weather, and the LLM needs to extract the city and date to route to a `get_weather(city, date)` function. The input might be ambiguous or phrased differently.
Multi-Shot Prompt:
You are an assistant that extracts parameters from user input to route to the `get_weather(city, date)` function. The function requires a city name and a date (in YYYY-MM-DD format). If the date is not specified, assume today's date, 2025-04-10. If the input is ambiguous, choose the most likely city and clarify assumptions. Return the output as a JSON object with `function`, `city`, and `date` fields.
Here are examples:
Example 1:
Input: "What's the weather like in Paris tomorrow?"
Output: {
"function": "get_weather",
"city": "Paris",
"date": "2025-04-11"
}
Example 2:
Input: "Tell me about the weather in New York"
Output: {
"function": "get_weather",
"city": "New York",
"date": "2025-04-10"
}
Example 3:
Input: "How's it going to be in London on Friday?"
Output: {
"function": "get_weather",
"city": "London",
"date": "2025-04-11"
}
Example 4:
Input: "Weather for Chicago next week"
Output: {
"function": "get_weather",
"city": "Chicago",
"date": "2025-04-17"
}
Now, process this input:
Input: "What's the forecast for Seattle this weekend?"
Output:
{
"function": "get_weather",
"city": "Seattle",
"date": "2025-04-12"
}
Why This Works:
- Multiple Examples: The four examples cover different ways users might phrase their request (e.g., "tomorrow," no date, "on Friday," "next week"), teaching the model to handle variations.
- Edge Cases: Example 4 handles a vague time frame ("next week") by picking a reasonable date, showing the model how to make assumptions.
- Output Format: The JSON structure is consistent across examples, reinforcing the expected format.
- Assumption Clarification: The instruction to assume today’s date (2025-04-10) if none is specified ensures the model doesn’t fail on incomplete inputs.
- New Input: "This weekend" is interpreted as Saturday (2025-04-12), following the pattern of picking a specific day for vague terms like Example 3 ("Friday") and Example 4 ("next week").
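Because the model's reply is free text until parsed, it is worth validating it before calling the real function. Here is a minimal validation sketch (the function name `parse_weather_call` is illustrative) that checks the JSON shape and the YYYY-MM-DD date format from the prompt's instructions:

```python
import json
from datetime import date

def parse_weather_call(raw_reply):
    """Validate a model reply before routing it to get_weather."""
    payload = json.loads(raw_reply)  # raises ValueError on malformed JSON
    if payload.get("function") != "get_weather":
        raise ValueError(f"unexpected function: {payload.get('function')}")
    if not payload.get("city"):
        raise ValueError("missing city")
    date.fromisoformat(payload["date"])  # raises ValueError on a bad date format
    return payload

call = parse_weather_call(
    '{"function": "get_weather", "city": "Seattle", "date": "2025-04-12"}'
)
```

Failing fast here lets you re-prompt the model (or fall back to a clarifying question) instead of passing a malformed call downstream.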
Scenario 2: Customer Support Routing
Task: The user submits a customer support query, and the LLM needs to route it to one of three functions: `create_ticket(topic)`, `check_order_status(order_id)`, or `cancel_order(order_id)`. The input might be conversational or unclear.
Multi-Shot Prompt:
You are an assistant that routes customer support queries to one of three functions: `create_ticket(topic)`, `check_order_status(order_id)`, or `cancel_order(order_id)`. Extract the relevant parameters from the input and return a JSON object with `function` and the required parameter (`topic` or `order_id`). If the input is unclear, choose the most likely function and explain your reasoning in a `note` field.
Here are examples:
Example 1:
Input: "I have a problem with my account login"
Output: {
"function": "create_ticket",
"topic": "account login issue"
}
Example 2:
Input: "Can you tell me the status of my order 12345?"
Output: {
"function": "check_order_status",
"order_id": "12345"
}
Example 3:
Input: "Please cancel my order number 67890"
Output: {
"function": "cancel_order",
"order_id": "67890"
}
Example 4:
Input: "Something’s wrong with my delivery, it hasn’t arrived"
Output: {
"function": "create_ticket",
"topic": "delivery issue",
"note": "Assumed a ticket is needed since no order ID was provided"
}
Example 5:
Input: "Where’s my package? It’s order 54321"
Output: {
"function": "check_order_status",
"order_id": "54321"
}
Now, process this input:
Input: "I want to know what’s happening with order 98765"
Output:
{
"function": "check_order_status",
"order_id": "98765"
}
Why This Works:
- Variety of Inputs: The examples include straightforward requests (Examples 2, 3), vague complaints (Example 4), and mixed intents (Example 5), preparing the model for diverse inputs.
- Parameter Extraction: The model learns to pull out `order_id` when numbers are mentioned and to default to `create_ticket` for general issues.
- Error Handling: Example 4 includes a `note` field to show how to handle ambiguity, encouraging the model to explain assumptions.
- Consistency: The JSON format is strictly followed, reducing output errors.
- New Input: The input clearly mentions "order 98765" and a status request, matching the pattern in Examples 2 and 5, so the model routes correctly.
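On the application side, the routing itself can be a simple lookup table from the model's `function` field to a handler. Below is a sketch with stub handlers standing in for real support-backend calls (the handler bodies are placeholders):

```python
import json

def create_ticket(topic):
    return f"ticket created: {topic}"

def check_order_status(order_id):
    return f"status for order {order_id}"

def cancel_order(order_id):
    return f"order {order_id} cancelled"

# Map each function name the model may emit to (handler, required parameter).
ROUTES = {
    "create_ticket": (create_ticket, "topic"),
    "check_order_status": (check_order_status, "order_id"),
    "cancel_order": (cancel_order, "order_id"),
}

def dispatch(raw_reply):
    """Parse the model's JSON reply and invoke the matching handler."""
    payload = json.loads(raw_reply)
    handler, param = ROUTES[payload["function"]]  # KeyError if unknown function
    return handler(payload[param])

result = dispatch('{"function": "check_order_status", "order_id": "98765"}')
# result == "status for order 98765"
```

Keeping the route table in one place also gives you a single spot to reject function names the model was never told about.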
Scenario 3: Meeting Scheduling
Task: The user asks to schedule a meeting, and the LLM needs to extract the time, date, and participant to call `schedule_meeting(time, date, participant)`. Inputs might lack details or use informal language.
Multi-Shot Prompt:
You are an assistant that extracts parameters from user input to call the `schedule_meeting(time, date, participant)` function. The `time` should be in HH:MM format (24-hour), `date` in YYYY-MM-DD format, and `participant` is a single name. If the time is missing, assume 09:00. If the date is missing, assume tomorrow (2025-04-11). If the participant is unclear, use "Unknown" and note the assumption. Return the output as a JSON object with `function`, `time`, `date`, and `participant` fields.
Here are examples:
Example 1:
Input: "Schedule a meeting with Alice tomorrow at 3 PM"
Output: {
"function": "schedule_meeting",
"time": "15:00",
"date": "2025-04-11",
"participant": "Alice"
}
Example 2:
Input: "Book a meeting with Bob on April 15"
Output: {
"function": "schedule_meeting",
"time": "09:00",
"date": "2025-04-15",
"participant": "Bob"
}
Example 3:
Input: "Set up a call next Monday morning"
Output: {
"function": "schedule_meeting",
"time": "09:00",
"date": "2025-04-14",
"participant": "Unknown",
"note": "No participant specified, assumed Unknown"
}
Example 4:
Input: "Meeting with Charlie at 10 in the morning tomorrow"
Output: {
"function": "schedule_meeting",
"time": "10:00",
"date": "2025-04-11",
"participant": "Charlie"
}
Example 5:
Input: "Can you arrange something for Wednesday with no one specific"
Output: {
"function": "schedule_meeting",
"time": "09:00",
"date": "2025-04-16",
"participant": "Unknown",
"note": "Participant explicitly unspecified, used Unknown"
}
Now, process this input:
Input: "Arrange a meeting with Sarah next Tuesday at 2 in the afternoon"
Output:
{
"function": "schedule_meeting",
"time": "14:00",
"date": "2025-04-15",
"participant": "Sarah"
}
Why This Works:
- Diverse Phrasing: Examples cover explicit times ("3 PM"), missing times ("on April 15"), vague dates ("next Monday"), and unclear participants ("no one specific").
- Default Values: Instructions to assume 09:00 for missing times and tomorrow (2025-04-11) for missing dates prevent incomplete outputs (Examples 2, 3).
- Ambiguity Handling: Examples 3 and 5 show how to use "Unknown" for participants, with a `note` field to explain the reasoning.
- Time Conversion: The model learns to convert "3 PM" to "15:00" and "10 in the morning" to "10:00", as seen in Examples 1 and 4.
- New Input: "Next Tuesday" is calculated as 2025-04-15 (from 2025-04-10), and "2 in the afternoon" becomes "14:00", following Example 1’s pattern.
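If you want to double-check the model's relative-date arithmetic (or compute the expected answers for your examples), the resolution can be done deterministically. A minimal sketch, assuming the prompt's anchor of 2025-04-10 (a Thursday):

```python
from datetime import date, timedelta

ANCHOR = date(2025, 4, 10)  # "today" as stated in the prompts above

def next_weekday(anchor, weekday):
    """Date of the next occurrence of `weekday` (0=Monday) strictly after anchor."""
    days_ahead = (weekday - anchor.weekday()) % 7
    if days_ahead == 0:
        days_ahead = 7  # same weekday means the one a full week out
    return anchor + timedelta(days=days_ahead)

monday = next_weekday(ANCHOR, 0)   # "next Monday"  -> 2025-04-14
tuesday = next_weekday(ANCHOR, 1)  # "next Tuesday" -> 2025-04-15
```

Running this over your example set is a cheap way to catch inconsistent dates in the prompt itself, which would otherwise teach the model the wrong pattern.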
Based on your issue with LLMs not extracting information properly, here are strategies to optimize multi-shot prompting for function routing:
1. Include Diverse Examples:
   - Use inputs with different phrasing, levels of detail, and ambiguity (e.g., "tomorrow" vs. "next week" in Scenario 1).
   - Cover edge cases like missing parameters (Scenario 3’s missing time) or unclear intent (Scenario 2’s vague complaint).
2. Specify Output Format Clearly:
   - Always show the exact JSON structure in examples (all scenarios use consistent JSON keys).
   - Avoid variations in output format to prevent the model from generating incorrect structures.
3. Handle Ambiguity Explicitly:
   - Provide defaults for missing data (e.g., today’s date in Scenario 1, "Unknown" participant in Scenario 3).
   - Include a `note` field in examples to show how to explain assumptions (Scenarios 2 and 3).
4. Use 3–5 Examples:
   - Too few examples (1–2) may not generalize well; too many (6+) can overwhelm the model or increase token usage.
   - All scenarios use 4–5 examples to balance coverage and conciseness.
5. Test with Noisy Inputs:
   - Include examples with conversational or irrelevant details (e.g., "How’s it going to be in London" in Scenario 1).
   - This trains the model to focus on relevant parameters.
6. Reinforce Instructions:
   - Restate key rules in the prompt (e.g., "If the date is not specified, assume today’s date" in Scenario 1).
   - This helps the model prioritize instructions over conflicting patterns.
7. Validate Date and Time Logic:
   - For time-sensitive tasks (Scenarios 1 and 3), ensure examples show how to interpret relative dates ("tomorrow," "next Monday") and times ("3 PM," "morning").
   - Use the current date (2025-04-10) consistently to anchor calculations.
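These strategies are easiest to apply systematically with a small regression harness: run a held-out set of inputs through the model and diff the parsed JSON against expected routings. A sketch is below; `call_llm` is a placeholder for whatever client you use, and the harness is shown here exercised with a fake model so it runs standalone:

```python
import json

# Held-out cases: (user input, expected routing). Extend with your own.
EXPECTED = [
    ("Weather for Chicago next week",
     {"function": "get_weather", "city": "Chicago", "date": "2025-04-17"}),
]

def evaluate(call_llm, prompt_template, cases):
    """Return a list of (input, expected, actual) tuples for mismatched cases."""
    failures = []
    for user_input, want in cases:
        raw = call_llm(prompt_template.format(input=user_input))
        got = json.loads(raw)
        if got != want:
            failures.append((user_input, want, got))
    return failures

def fake_model(prompt):
    # Stand-in that always answers correctly; replace with a real LLM call.
    return json.dumps(EXPECTED[0][1])

failures = evaluate(fake_model, "Input: {input}\nOutput:", EXPECTED)
# failures == [] for this always-correct fake model
```

Whenever an input type starts failing in production, add it to `EXPECTED` first, then adjust the prompt's examples until the harness passes again.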
You mentioned that LLMs often fail to extract information properly for routing function calls. Multi-shot prompting helps by:
- Teaching Patterns: Examples show how to map varied inputs to the correct function and parameters, reducing misinterpretation.
- Reducing Ambiguity: Clear examples with defaults and assumptions guide the model when inputs are incomplete or vague.
- Improving Robustness: Diverse inputs in examples prepare the model for real-world, noisy user queries.
- Enforcing Structure: Consistent output formats in examples minimize errors in JSON or parameter extraction.
If you’re still seeing issues, you might try:
- Chain-of-Thought (CoT) Prompting: Ask the model to explain its reasoning before outputting the JSON (e.g., "First, identify the city, then the date...").
- Fine-Tuning: If you have a specific dataset of inputs and expected outputs, fine-tuning a model could outperform prompting alone.
- More Examples: If certain input types (e.g., very long queries) fail, add targeted examples to the prompt.
If you’d like, I can process another input for any of these scenarios or a new one you provide. For example, you could give me a tricky user query, and I’ll show how to handle it with multi-shot prompting. Just let me know!
Would you like me to expand on any part (e.g., more examples, a different task, or combining CoT with multi-shot)? Or, if you have a specific input or function routing task you’re struggling with, share it, and I’ll craft a tailored prompt.