Version: 1.1 | Date: May 22, 2025
This document outlines the data schema for observing agentic AI applications. It follows a top-down approach, starting from the overall application context (resource level), moving to the specific types of operations (spans) that constitute an agent’s execution, and detailing the attributes associated with each. The schema heavily relies on OpenTelemetry (OTEL) semantic conventions, including the specialized conventions for Generative AI (GenAI).
The goal is to provide a standardized way to capture telemetry data, enabling effective visualization, monitoring, debugging, and performance analysis of agentic AI systems.
Resource-level attributes
These attributes define the service or application performing the operations.
| Attribute | Type | Description | Source / Convention |
|---|---|---|---|
| service.name | String | REQUIRED. The logical name of the service (e.g., customer-support-agent, data-analysis-bot). | OTEL Semantic Convention |
| service.version | String | The version string of the service API or SDK (e.g., 1.0.2, 2.3.0-alpha). | OTEL Semantic Convention |
| service.instance.id | String | A unique identifier of the service instance (e.g., a Kubernetes pod UID, a process ID). | OTEL Semantic Convention |
| deployment.environment | String | The name of the deployment environment (e.g., staging, production, development). | OTEL Semantic Convention |
| telemetry.sdk.name | String | The name of the OpenTelemetry SDK (e.g., opentelemetry). | OTEL Semantic Convention |
| telemetry.sdk.language | String | The language of the OpenTelemetry SDK (e.g., python, java, nodejs). | OTEL Semantic Convention |
| telemetry.sdk.version | String | The version string of the OpenTelemetry SDK (e.g., 1.22.0). | OTEL Semantic Convention |
| ai.platform | String | The AI platform or framework being used, if applicable (e.g., LangChain, LlamaIndex, AutoGen, custom). | OTEL GenAI Convention – emerging/common practice |
| Category | Details |
|---|---|
| Automatically set | Most are populated by the OTEL SDK and its resource detectors. |
| User configured | service.name, service.version, and any custom ai.platform value are typically supplied by the developer at startup. |
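For illustration, the following is a minimal sketch of configuring these resource attributes at application startup, assuming the OpenTelemetry Python SDK (the ConsoleSpanExporter and the custom ai.platform key are illustrative choices, not requirements of this schema):

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# service.* and deployment.environment are supplied by the developer;
# the telemetry.sdk.* attributes are filled in automatically by Resource.create().
resource = Resource.create({
    "service.name": "customer-support-agent",
    "service.version": "1.1.2",
    "deployment.environment": "production",
    "ai.platform": "LangGraph",  # custom attribute; not (yet) a registered OTEL convention
})

provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("agent.observability")
```

The resulting resource block, as it appears in exported data: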
{
"resource": {
"attributes": [
{"key": "service.name", "value": {"stringValue": "customer-support-agent"}},
{"key": "service.version", "value": {"stringValue": "1.1.2"}},
{"key": "service.instance.id", "value": {"stringValue": "customer-support-agent-7b5d8f9c4-h2j4k"}},
{"key": "deployment.environment", "value": {"stringValue": "production"}},
{"key": "telemetry.sdk.name", "value": {"stringValue": "opentelemetry"}},
{"key": "telemetry.sdk.language", "value": {"stringValue": "python"}},
{"key": "telemetry.sdk.version", "value": {"stringValue": "1.22.0"}},
{"key": "ai.platform", "value": {"stringValue": "LangGraph"}}
]
}
}

An agentic AI application’s execution is represented as a trace, a collection of causally related spans. The top-level span is typically an Agent Session Span, encompassing the entire task; it branches into spans for planning, reasoning, tool usage, LLM interactions, and so on.
Summary table of span types (4.1 → 4.13)
| # | Span name | Description |
|---|---|---|
| 1 | Agent Session Span | Represents the entire lifecycle of an agent’s task, from initiation to completion. |
| 2 | Planning Span | Captures the phase where the agent formulates a plan or strategy to achieve its goal. |
| 3 | Reasoning Span | Tracks the agent’s internal decision-making process, including evaluations and logic. |
| 4 | LLM Prompt Span | Encompasses the creation and submission of a prompt to a language model. |
| 5 | LLM Response Span | Covers the reception and processing of the language model’s response to the prompt. |
| 6 | Tool Call Span | Represents the invocation of an external tool or API by the agent. |
| 7 | Tool Response Span | Captures the response received from the external tool or API. |
| 8 | Multi-Agent Coordination Span | Tracks interactions and coordination between multiple agents working on a task. |
| 9 | Error Handling Span | Records the detection and management of errors encountered during agent execution. |
| 10 | Streaming Response Span | Represents responses that are received in a streaming fashion rather than a single batch. |
| 11 | Retry Span | Logs attempts to re-execute a failed operation or request. |
| 12 | Recursive Call Span | Captures instances where an agent recursively calls itself or part of its own logic. |
| 13 | Final Output Span | Represents the final output generation and delivery phase of the agent’s task. |
The following diagram illustrates common parent-child relationships between the defined span types. This is a conceptual representation and specific implementations might vary.
Agent Session Span (Root)
├── Planning Span
│ └── Reasoning Span
│ ├── LLM Prompt Span
│ │ └── LLM Response Span (often same span, or direct child)
│ │ └── (Optional) Streaming Response Span (Events on LLM Response or separate span)
│ └── Tool Call Span
│ └── Tool Response Span (often same span, or direct child)
│ ├── (Optional) Further Reasoning Span (processing tool output)
│ └── (Optional) Error Handling Span
│ └── (Optional) Retry Span
│ └── Tool Call Span (re-attempt)
├── Reasoning Span (direct child of Agent Session for simpler agents or sub-steps)
│ ├── LLM Prompt Span
│ │ └── LLM Response Span
│ └── Tool Call Span
│ └── Tool Response Span
├── LLM Prompt Span (direct child, e.g., for initial user interaction refinement)
│ └── LLM Response Span
│ └── (Optional) Streaming Response Span
├── Tool Call Span (direct child, e.g., for direct actions not requiring extensive planning)
│ └── Tool Response Span
├── Multi-Agent Coordination Span (can be child of Agent Session or Reasoning)
│ └── (Potentially linked to another Agent Session Span in a separate trace via context propagation)
├── Error Handling Span (can be child of any span that fails)
│ ├── (Optional) LLM Prompt Span (e.g., asking an LLM for recovery advice)
│ │ └── LLM Response Span
│ └── (Optional) Retry Span
│ └── (Re-attempted operation span, e.g., Tool Call Span)
├── Recursive Call Span (child of Agent Session or Reasoning, indicating self-invocation)
│ └── (This would then contain its own sequence of Planning, Reasoning, etc. spans)
└── Final Output Span (typically one of the last children of the Agent Session)
Format for each 4.x section:
1. Description paragraph
2. Connections table (parent / child)
3. Attribute tables: standard OTEL automatically-set fields, then user/framework-set fields
4. JSON example
4.1 Agent Session Span
Represents the entire lifecycle of an agent’s task, from initiation to completion. This span is usually the trace’s root.
Connections
| Connection type | Span names |
|---|---|
| Potential parent | None (or an external trigger span, e.g., an HTTP request span that initiated the agent). |
| Potential child | Planning Span, Reasoning Span, LLM Prompt Span, Tool Call Span, Multi-Agent Coordination Span, Error Handling Span, Final Output Span, Recursive Call Span |
Standard OTEL span fields (auto)
| Field | Type | Description |
|---|---|---|
| span_id | String | Unique identifier for this span. |
| trace_id | String | Unique identifier for the entire trace this span belongs to. |
| parent_span_id | String | Identifier of the parent span, if any. Null for root spans. |
| name | String | A human-readable name for the span, e.g., AgentSession.process_request. |
| kind | Enum | Span kind, typically INTERNAL, or SERVER if triggered by an external request. |
| start_time_unix_nano | BigInt/Long | Epoch nanoseconds when the operation started. |
| end_time_unix_nano | BigInt/Long | Epoch nanoseconds when the operation ended. |
| status.code | Enum | Status code of the operation (e.g., OK, ERROR). |
| status.message | String | Optional status message, particularly for ERROR status. |
User- or Framework-set attributes
| Attribute | Type | Description |
|---|---|---|
| agent.id | String | A unique identifier for the agent instance or type (e.g., customer_support_agent_v2.1). |
| session.id | String | A unique identifier for this particular session or task (e.g., a UUID for the conversation). |
| input.value | String | The initial input, query, or goal provided to the agent (potentially truncated if large). |
| output.value | String | The final output, result, or summary from the agent session (potentially truncated if large). |
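A minimal sketch of opening the Agent Session Span as the trace root and attaching these attributes, using a tracer configured as in the resource example above (run_agent is a hypothetical entry point into the agent loop):

```python
import uuid

from opentelemetry import trace

tracer = trace.get_tracer("agent.observability")

def handle_request(user_input: str) -> str:
    # SERVER kind because the session is triggered by an external request.
    with tracer.start_as_current_span(
        "AgentSession.handle_order_query", kind=trace.SpanKind.SERVER
    ) as span:
        span.set_attribute("agent.id", "customer_support_agent_v2.1")
        span.set_attribute("session.id", str(uuid.uuid4()))
        span.set_attribute("input.value", user_input[:4000])  # truncate large inputs

        result = run_agent(user_input)  # hypothetical agent loop

        span.set_attribute("output.value", result[:4000])
        return result
```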
JSON example
{
"trace_id": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6",
"span_id": "s1p_4a5b6c7d8e9f0a1b",
"parent_span_id": null,
"name": "AgentSession.handle_order_query",
"kind": "SPAN_KIND_SERVER",
"start_time_unix_nano": "1716400000000000000",
"end_time_unix_nano": "1716400008500000000",
"status": {"code": "STATUS_CODE_OK"},
"attributes": [
{"key": "agent.id", "value": {"stringValue": "customer_support_agent_v2.1"}},
{"key": "session.id", "value": {"stringValue": "uuid_123e4567-e89b-12d3-a456-426614174000"}},
{"key": "input.value", "value": {"stringValue": "Hi, I'd like to know the status of my recent order, #ORD12345."}}
]
}

4.2 Planning Span
Captures the phase where the agent formulates a plan or strategy to achieve its goal.
Connections
| Connection type | Span names |
|---|---|
| Potential parent | Agent Session Span, Reasoning Span, Recursive Call Span |
| Potential child | Reasoning Span, LLM Prompt Span (if planning consults an LLM), Tool Call Span (if planning queries capabilities) |
Standard OTEL span fields (auto)
(Same core fields as 4.1, with name example AgentPlanning.create_plan; kind typically INTERNAL)
User- or Framework-set attributes
| Attribute | Type | Description |
|---|---|---|
| gen_ai.planning.input | String | The input, goal, or problem statement for the planning phase. |
| gen_ai.planning.output | String | The generated plan, sequence of steps, or strategy (can be a string or JSON representation). |
| gen_ai.planning.steps_count | Int | The number of discrete steps identified in the generated plan. |
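A short sketch of wrapping a planning step in a child span with these attributes, reusing the tracer from the 4.1 sketch (plan_task is a hypothetical planner that returns a list of step strings):

```python
import json

goal = "Retrieve and report status for order #ORD12345"

with tracer.start_as_current_span("AgentPlanning.create_plan") as span:
    span.set_attribute("gen_ai.planning.input", goal)

    steps = plan_task(goal)  # hypothetical planner, e.g. an LLM call returning step strings

    span.set_attribute("gen_ai.planning.output", json.dumps(steps))
    span.set_attribute("gen_ai.planning.steps_count", len(steps))
```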
JSON example
{
"trace_id": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6",
"span_id": "pln_b2c3d4e5f6a7b8c9",
"parent_span_id": "s1p_4a5b6c7d8e9f0a1b",
"name": "AgentPlanning.create_order_status_retrieval_plan",
"kind": "SPAN_KIND_INTERNAL",
"start_time_unix_nano": "1716400000100000000",
"end_time_unix_nano": "1716400000450000000",
"status": {"code": "STATUS_CODE_OK"},
"attributes": [
{"key": "gen_ai.planning.input", "value": {"stringValue": "Goal: Retrieve and report status for order #ORD12345"}},
{"key": "gen_ai.planning.output", "value": {"stringValue": "[\"Identify order ID from input: #ORD12345\", \"Call 'get_order_details_api' tool with order ID\", \"Extract status, shipping info, and ETA from tool response\", \"Format summary for user\"]"}},
{"key": "gen_ai.planning.steps_count", "value": {"intValue": 4}}
]
}

4.3 Reasoning Span
Tracks the agent’s internal decision-making process, including evaluations, logic application, and selection of next steps or tools.
Connections
| Connection type | Span names |
|---|---|
| Potential parent | Agent Session Span, Planning Span, Recursive Call Span, Tool Response Span |
| Potential child | LLM Prompt Span, Tool Call Span, other Reasoning Spans (sub-reasoning) |
Standard OTEL span fields (auto)
(Same core fields as 4.1, with name example AgentReasoning.evaluate_options; kind typically INTERNAL)
User- or Framework-set attributes
| Attribute | Type | Description |
|---|---|---|
| gen_ai.reasoning.input | String | The context, data, or observations the reasoning process is based on. |
| gen_ai.reasoning.logic | String | A description of the logic, rules, heuristics, or evaluation criteria applied. |
| gen_ai.reasoning.output | String | The outcome, decision, or next action selected by the reasoning process. |
JSON example
{
"trace_id": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6",
"span_id": "rsn_c3d4e5f6a7b8c9d0",
"parent_span_id": "pln_b2c3d4e5f6a7b8c9",
"name": "AgentReasoning.confirm_tool_for_order_details",
"kind": "SPAN_KIND_INTERNAL",
"start_time_unix_nano": "1716400000500000000",
"end_time_unix_nano": "1716400000580000000",
"status": {"code": "STATUS_CODE_OK"},
"attributes": [
{"key": "gen_ai.reasoning.input", "value": {"stringValue": "Current plan step: \"Call 'get_order_details_api' tool with order ID\". Available tools: [\"get_order_details_api v1.2\", \"update_customer_profile v1.0\", \"search_faq v2.0\"]"}},
{"key": "gen_ai.reasoning.logic", "value": {"stringValue": "Match plan step action to available tool names and capabilities. Confirmed 'get_order_details_api v1.2' is appropriate for fetching order data."}},
{"key": "gen_ai.reasoning.output", "value": {"stringValue": "Selected tool: 'get_order_details_api v1.2'. Parameters: {\"order_id\": \"ORD12345\"}"}}
]
}

4.4 LLM Prompt Span
Encompasses the creation and submission of a prompt to a large language model (LLM).
Connections
| Connection type | Span names |
|---|---|
| Potential parent | Agent Session Span, Planning Span, Reasoning Span, Recursive Call Span |
| Potential child | LLM Response Span (often attributes are merged into this span once response is received) |
Standard OTEL span fields (auto)
(Same core fields as 4.1, with name example openai.chat_completions or <llm_system>.invoke; kind typically CLIENT)
GenAI / user attributes
| Attribute | Type | Description |
|---|---|---|
| gen_ai.system | String | REQUIRED. The LLM system or provider (e.g., openai, anthropic, cohere, vertex_ai). |
| gen_ai.request.model | String | REQUIRED. The specific model name being invoked (e.g., gpt-4-turbo, claude-2, text-bison@001). |
| gen_ai.request.temperature | Double | The temperature setting for the LLM request. |
| gen_ai.request.top_p | Double | The top-p (nucleus sampling) setting for the LLM request. |
| gen_ai.request.max_tokens | Int | The maximum number of tokens requested for the LLM response. |
| gen_ai.prompt_template.content | String | The raw prompt template content before variable substitution. |
| gen_ai.prompt_template.variables | Map (String:String) | A map of variables and their resolved values used in the prompt template. |
| gen_ai.prompt | String / Array of Strings | The actual prompt(s) sent to the LLM. For chat models, this is often an array of message objects/strings. |
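A sketch of recording the request-side attributes when invoking a chat model; chat messages are flattened into indexed gen_ai.prompt.N.* keys as in the JSON example below (call_llm stands in for whichever client library is actually used, and the tracer is the one configured earlier):

```python
from opentelemetry import trace

messages = [
    {"role": "system", "content": "You are a friendly customer support assistant."},
    {"role": "user", "content": "The user asked about order #ORD12345. Briefly acknowledge their request."},
]

with tracer.start_as_current_span(
    "openai.chat_completions.user_greeting", kind=trace.SpanKind.CLIENT
) as span:
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-3.5-turbo")
    span.set_attribute("gen_ai.request.temperature", 0.5)
    span.set_attribute("gen_ai.request.max_tokens", 100)
    # Flatten chat messages into indexed attribute keys.
    for i, message in enumerate(messages):
        span.set_attribute(f"gen_ai.prompt.{i}.role", message["role"])
        span.set_attribute(f"gen_ai.prompt.{i}.content", message["content"])

    response = call_llm(messages)  # hypothetical client wrapper; response handling is shown in 4.5
```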
JSON example
{
"trace_id": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6",
"span_id": "llm_d4e5f6a7b8c9d0e1",
"parent_span_id": "rsn_c3d4e5f6a7b8c9d0",
"name": "openai.chat_completions.user_greeting",
"kind": "SPAN_KIND_CLIENT",
"start_time_unix_nano": "1716400000600000000",
"end_time_unix_nano": "1716400000700000000",
"status": {"code": "STATUS_CODE_UNSET"},
"attributes": [
{"key": "gen_ai.system", "value": {"stringValue": "openai"}},
{"key": "gen_ai.request.model", "value": {"stringValue": "gpt-3.5-turbo-instruct"}},
{"key": "gen_ai.request.temperature", "value": {"doubleValue": 0.5}},
{"key": "gen_ai.request.max_tokens", "value": {"intValue": 100}},
{"key": "gen_ai.prompt_template.content", "value": {"stringValue": "You are a friendly customer support assistant. The user asked about order {order_id}. Briefly acknowledge their request and tell them you will look it up."}},
{"key": "gen_ai.prompt_template.variables", "value": {"mapValue": { "fields": { "order_id": {"stringValue": "#ORD12345"} }}}},
{"key": "gen_ai.prompt.0.role", "value": {"stringValue": "system"}},
{"key": "gen_ai.prompt.0.content", "value": {"stringValue": "You are a friendly customer support assistant."}},
{"key": "gen_ai.prompt.1.role", "value": {"stringValue": "user"}},
{"key": "gen_ai.prompt.1.content", "value": {"stringValue": "The user asked about order #ORD12345. Briefly acknowledge their request and tell them you will look it up."}}
]
}

4.5 LLM Response Span
Covers the reception and processing of the LLM’s response. Often, attributes from the LLM Prompt Span are merged here.
Connections
| Connection type | Span names |
|---|---|
| Potential parent | LLM Prompt Span (logical child or attributes merged into the same span ID) |
| Potential child | Optional downstream processing spans (e.g., a Reasoning Span to evaluate the LLM output). |
GenAI / user attributes (in addition to request fields carried over from LLM Prompt Span)
| Attribute | Type | Description |
|---|---|---|
| gen_ai.response.id | String | A unique identifier for the LLM response, if provided by the model/API. |
| gen_ai.response.finish_reasons | Array of Strings | Reason(s) why the LLM finished generating tokens (e.g., stop, length, tool_calls). |
| gen_ai.completion | String / Array of Strings | The generated completion(s) from the LLM. For chat models, this is often an array of message objects/strings. |
| gen_ai.usage.prompt_tokens | Int | The number of tokens in the prompt. |
| gen_ai.usage.completion_tokens | Int | The number of tokens in the generated completion. |
| gen_ai.usage.total_tokens | Int | The total number of tokens processed (prompt + completion). |
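Continuing the sketch from 4.4 with the shared-ID pattern, the response-side attributes can be written to the same CLIENT span once the model returns (the fields on the hypothetical response object mirror the attributes above):

```python
    # Still inside the CLIENT span opened in the 4.4 sketch:
    span.set_attribute("gen_ai.response.id", response.id)
    span.set_attribute("gen_ai.response.finish_reasons.0", response.finish_reason)
    span.set_attribute("gen_ai.completion.0.role", "assistant")
    span.set_attribute("gen_ai.completion.0.content", response.text)
    span.set_attribute("gen_ai.usage.prompt_tokens", response.usage.prompt_tokens)
    span.set_attribute("gen_ai.usage.completion_tokens", response.usage.completion_tokens)
    span.set_attribute("gen_ai.usage.total_tokens", response.usage.total_tokens)
    span.set_status(trace.Status(trace.StatusCode.OK))
```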
JSON example (shared-ID pattern with LLM Prompt Span)
{
"trace_id": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6",
"span_id": "llm_d4e5f6a7b8c9d0e1",
"parent_span_id": "rsn_c3d4e5f6a7b8c9d0",
"name": "openai.chat_completions.user_greeting",
"kind": "SPAN_KIND_CLIENT",
"start_time_unix_nano": "1716400000600000000",
"end_time_unix_nano": "1716400002800000000",
"status": {"code": "STATUS_CODE_OK"},
"attributes": [
{"key": "gen_ai.system", "value": {"stringValue": "openai"}},
{"key": "gen_ai.request.model", "value": {"stringValue": "gpt-3.5-turbo-instruct"}},
{"key": "gen_ai.request.temperature", "value": {"doubleValue": 0.5}},
{"key": "gen_ai.request.max_tokens", "value": {"intValue": 100}},
{"key": "gen_ai.prompt.0.role", "value": {"stringValue": "system"}},
{"key": "gen_ai.prompt.0.content", "value": {"stringValue": "You are a friendly customer support assistant."}},
{"key": "gen_ai.prompt.1.role", "value": {"stringValue": "user"}},
{"key": "gen_ai.prompt.1.content", "value": {"stringValue": "The user asked about order #ORD12345. Briefly acknowledge their request and tell them you will look it up."}},
{"key": "gen_ai.response.id", "value": {"stringValue": "chatcmpl-দানিKlmNoPqRsTuVwXyZ12ABC"}},
{"key": "gen_ai.response.finish_reasons.0", "value": {"stringValue": "stop"}},
{"key": "gen_ai.completion.0.role", "value": {"stringValue": "assistant"}},
{"key": "gen_ai.completion.0.content", "value": {"stringValue": "Hello! Thanks for reaching out about your order #ORD12345. I'll look up the details for you right away."}},
{"key": "gen_ai.usage.prompt_tokens", "value": {"intValue": 45}},
{"key": "gen_ai.usage.completion_tokens","value": {"intValue": 32}},
{"key": "gen_ai.usage.total_tokens", "value": {"intValue": 77}}
]
}

4.6 Tool Call Span
Represents the invocation of an external tool, API, or function by the agent.
Connections
| Connection type | Span names |
|---|---|
| Potential parent | Agent Session Span, Planning Span, Reasoning Span, Recursive Call Span |
| Potential child | Tool Response Span (attributes often merged here), or further nested spans if the tool call itself is instrumented (e.g., HTTP client spans, DB client spans). |
Standard OTEL span fields (auto)
(Same core fields as 4.1, with name example ToolCall.get_order_details or <tool_system>.<tool_name>; kind typically CLIENT or INTERNAL if calling a local function)
Tool-specific / user attributes
| Attribute | Type | Description |
|---|---|---|
| gen_ai.tool.name | String | REQUIRED. The name of the tool being called (e.g., get_order_details, weather_api). |
| gen_ai.tool.description | String | A brief description of the tool's purpose. |
| gen_ai.tool.parameters | String | A JSON string representing the parameters passed to the tool. |
| db.system | String | If the tool interacts with a database, the type of database (e.g., mysql, postgresql). (OTEL DB Convention) |
| db.statement | String | The database statement executed by the tool. (OTEL DB Convention) |
| http.request.method | String | If the tool makes an HTTP call, the HTTP method (e.g., GET, POST). (OTEL HTTP Convention) |
| url.full | String | If the tool makes an HTTP call, the full URL. (OTEL HTTP Convention) |
| server.address | String | Hostname or IP address of the server contacted by the tool. (OTEL HTTP/Network Convention) |
| code.function | String | If the tool is a local function call, the name of the function. (OTEL Code Convention) |
| code.namespace | String | If the tool is a local function call, the namespace or class of the function. (OTEL Code Convention) |
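A sketch of an HTTP-backed tool invocation instrumented as a CLIENT span; request and response attributes end up merged on the same span, as described in 4.7 (requests is used purely as an illustrative HTTP client, and the URL comes from the JSON example below):

```python
import json

import requests
from opentelemetry import trace

tracer = trace.get_tracer("agent.observability")
ORDERS_URL = "https://api.internal.example.com/v1.2/orders/details"

def call_order_details_tool(order_id: str) -> dict:
    params = {"order_id": order_id, "include_line_items": True}
    with tracer.start_as_current_span(
        "ToolCall.get_order_details_api", kind=trace.SpanKind.CLIENT
    ) as span:
        span.set_attribute("gen_ai.tool.name", "get_order_details_api v1.2")
        span.set_attribute("gen_ai.tool.parameters", json.dumps(params))
        span.set_attribute("http.request.method", "POST")
        span.set_attribute("url.full", ORDERS_URL)
        span.set_attribute("server.address", "api.internal.example.com")

        response = requests.post(ORDERS_URL, json=params, timeout=10)
        span.set_attribute("http.response.status_code", response.status_code)
        response.raise_for_status()

        span.set_attribute("gen_ai.tool.output", response.text[:4000])  # truncate large payloads
        return response.json()
```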
JSON example
{
"trace_id": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6",
"span_id": "tlc_e5f6a7b8c9d0e1f2",
"parent_span_id": "rsn_c3d4e5f6a7b8c9d0",
"name": "ToolCall.get_order_details_api",
"kind": "SPAN_KIND_CLIENT",
"start_time_unix_nano": "1716400003000000000",
"end_time_unix_nano": "1716400003050000000",
"status": {"code": "STATUS_CODE_UNSET"},
"attributes": [
{"key": "gen_ai.tool.name", "value": {"stringValue": "get_order_details_api v1.2"}},
{"key": "gen_ai.tool.description", "value": {"stringValue": "Fetches complete order details, including status, items, and estimated delivery, from the backend Orders API."}},
{"key": "gen_ai.tool.parameters", "value": {"stringValue": "{\"order_id\": \"ORD12345\", \"include_line_items\": true}"}},
{"key": "http.request.method", "value": {"stringValue": "POST"}},
{"key": "url.full", "value": {"stringValue": "https://api.internal.example.com/v1.2/orders/details"}},
{"key": "server.address", "value": {"stringValue": "api.internal.example.com"}},
{"key": "network.protocol.version", "value": {"stringValue": "1.1"}}
]
}

4.7 Tool Response Span
Captures the response received from the external tool, API, or function. Attributes from the Tool Call Span are often merged here.
Connections
| Connection type | Span names |
|---|---|
| Potential parent | Tool Call Span (attributes often merged into the same span ID) |
| Potential child | Reasoning Span (for processing the tool’s output), Error Handling Span. |
Response-specific attributes (in addition to request fields carried over from Tool Call Span)
| Attribute | Type | Description |
|---|---|---|
| gen_ai.tool.name | String | REQUIRED (mirrored from Tool Call). The name of the tool. |
| gen_ai.tool.output | String | The output or data returned by the tool (potentially truncated if large). |
| http.response.status_code | Int | For HTTP-based tools, the HTTP response status code (e.g., 200, 404). (OTEL HTTP Convention) |
JSON example (merged into the same span as Tool Call)
{
"trace_id": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6",
"span_id": "tlc_e5f6a7b8c9d0e1f2",
"parent_span_id": "rsn_c3d4e5f6a7b8c9d0",
"name": "ToolCall.get_order_details_api",
"kind": "SPAN_KIND_CLIENT",
"start_time_unix_nano": "1716400003000000000",
"end_time_unix_nano": "1716400004500000000",
"status": {"code": "STATUS_CODE_OK"},
"attributes": [
{"key": "gen_ai.tool.name", "value": {"stringValue": "get_order_details_api v1.2"}},
{"key": "gen_ai.tool.description", "value": {"stringValue": "Fetches complete order details, including status, items, and estimated delivery, from the backend Orders API."}},
{"key": "gen_ai.tool.parameters", "value": {"stringValue": "{\"order_id\": \"ORD12345\", \"include_line_items\": true}"}},
{"key": "http.request.method", "value": {"stringValue": "POST"}},
{"key": "url.full", "value": {"stringValue": "https://api.internal.example.com/v1.2/orders/details"}},
{"key": "server.address", "value": {"stringValue": "api.internal.example.com"}},
{"key": "network.protocol.version", "value": {"stringValue": "1.1"}},
{"key": "http.response.status_code", "value": {"intValue": 200}},
{"key": "gen_ai.tool.output", "value": {"stringValue": "{\"order_id\": \"ORD12345\", \"customer_id\": \"cust_jdoe789\", \"status\": \"Shipped\", \"shipping_carrier\": \"ExampleParcelService\", \"tracking_number\": \"EPS9876543210\", \"estimated_delivery_date\": \"2025-05-28\", \"items\": [{\"sku\": \"ITEM001\", \"name\": \"Wireless Mouse\", \"quantity\": 1}, {\"sku\": \"ITEM002\", \"name\": \"Keyboard\", \"quantity\": 1}], \"total_amount\": 99.98}"}}
]
}

4.8 Multi-Agent Coordination Span
Tracks interactions, message passing, or coordination efforts between multiple distinct agents working on a task.
Connections
| Connection type | Span names |
|---|---|
| Potential parent | Agent Session Span, Reasoning Span (of the sending/initiating agent). |
| Potential child | Agent Session Span (of a receiving agent, often linked via propagated context rather than direct parent/child trace relationship), or internal spans within the initiating agent related to processing the coordination outcome. |
Attributes
| Attribute | Type | Description |
|---|---|---|
| agent.source.id | String | The identifier of the agent initiating or sending the coordination message/request. |
| agent.target.id | String | The identifier of the target agent(s) involved in the coordination. Can be a single ID or a list/group identifier. |
| coordination.type | String | The type of coordination (e.g., message_pass, shared_state_update, rpc_call, task_delegation). |
| message.id | String | A unique identifier for the message or coordination event, if applicable. |
| message.payload | String | The content of the message or coordination data (potentially truncated). |
| messaging.system | String | If a messaging system is used for coordination, its identifier (e.g., rabbitmq, kafka). (OTEL Messaging Convention) |
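A sketch of a PRODUCER-kind coordination span whose trace context is injected into the outgoing message headers, so the receiving agent can link its own Agent Session Span back to this trace (publish_message is a hypothetical transport call; the tracer is the one configured earlier):

```python
import json

from opentelemetry import trace
from opentelemetry.propagate import inject

def delegate_fraud_check(order_details: dict) -> None:
    with tracer.start_as_current_span(
        "MultiAgent.delegate_fraud_check", kind=trace.SpanKind.PRODUCER
    ) as span:
        span.set_attribute("agent.source.id", "customer_support_agent")
        span.set_attribute("agent.target.id", "fraud_detection_agent_pool")
        span.set_attribute("coordination.type", "task_delegation")
        span.set_attribute("messaging.system", "rabbitmq")

        headers: dict = {}
        inject(headers)  # writes traceparent/tracestate so the receiver can continue the trace

        publish_message(  # hypothetical transport call (e.g., an AMQP publish)
            topic="fraud_checks",
            payload=json.dumps({"task_type": "order_fraud_assessment", "order": order_details}),
            headers=headers,
        )
```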
JSON example
{
"trace_id": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6",
"span_id": "mag_f6a7b8c9d0e1f2a3",
"parent_span_id": "s1p_4a5b6c7d8e9f0a1b",
"name": "MultiAgent.delegate_fraud_check",
"kind": "SPAN_KIND_PRODUCER",
"start_time_unix_nano": "1716400004600000000",
"end_time_unix_nano": "1716400004680000000",
"status": {"code": "STATUS_CODE_OK"},
"attributes": [
{"key": "agent.source.id", "value": {"stringValue": "customer_support_agent-7b5d8f9c4-h2j4k"}},
{"key": "agent.target.id", "value": {"stringValue": "fraud_detection_agent_pool_worker_03"}},
{"key": "coordination.type", "value": {"stringValue": "rpc_call_with_callback"}},
{"key": "message.id", "value": {"stringValue": "delegate_msg_c1a0e3d7-b0f8-4a9b-934d-3e22a1c6e8b4"}},
{"key": "message.payload", "value": {"stringValue": "{\"task_type\": \"order_fraud_assessment\", \"order_details\": {\"order_id\": \"ORD12345\", \"customer_id\": \"cust_jdoe789\", \"total_amount\": 99.98, \"ip_address\": \"203.0.113.45\"}, \"callback_topic\": \"fraud_results_ORD12345\"}"}},
{"key": "messaging.system", "value": {"stringValue": "rabbitmq"}}
]
}

4.9 Error Handling Span
Records the detection, management, and outcome of errors encountered during agent execution.
Connections
| Connection type | Span names |
|---|---|
| Potential parent | Any span type where an error was caught and handled. |
| Potential child | LLM Prompt Span (e.g., for seeking recovery advice), Retry Span, or other corrective action spans. |
Attributes
| Attribute | Type | Description |
|---|---|---|
| error.original_span_id | String | The identifier of the span where the error originally occurred or was detected. |
| error.type | String | A high-level classification of the error (e.g., ToolExecutionError, LLMResponseParseError, PlanningFailure). |
| error.message | String | The error message associated with the failure. |
| error.handled_outcome | String | Describes the action taken to handle the error (e.g., retried_operation, informed_user_of_failure, switched_to_fallback_plan, ignored). |
| exception.type | String | The type of the exception that occurred (e.g., ValueError, requests.exceptions.HTTPError). (OTEL Exception Convention) |
| exception.message | String | The message of the exception. (OTEL Exception Convention) |
| exception.stacktrace | String | The stack trace of the exception. (OTEL Exception Convention) |
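A sketch of catching a tool failure and recording it on a dedicated Error Handling Span; record_exception captures the exception.* fields (as a span event in the Python SDK), while the error.* attributes are set manually. call_order_details_tool is the hypothetical tool wrapper from the 4.6 sketch:

```python
import requests

def call_tool_safely(order_id: str) -> dict | None:
    try:
        return call_order_details_tool(order_id)  # tool-call sketch from 4.6
    except requests.exceptions.HTTPError as exc:
        with tracer.start_as_current_span("Agent.handle_tool_api_error") as span:
            span.set_attribute("error.type", "ToolAPIUnavailableError")
            span.set_attribute("error.message", str(exc))
            span.set_attribute("error.handled_outcome", "scheduled_retry_with_backoff")
            span.record_exception(exc)  # records exception.type/message/stacktrace
        return None
```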
JSON example
{
"trace_id": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6",
"span_id": "err_0a1b2c3d4e5f6a7b",
"parent_span_id": "tlc_e5f6a7b8c9d0e1f2",
"name": "Agent.handle_tool_api_error_get_order_details",
"kind": "SPAN_KIND_INTERNAL",
"start_time_unix_nano": "1716400005000000000",
"end_time_unix_nano": "1716400005080000000",
"status": {"code": "STATUS_CODE_OK"},
"attributes": [
{"key": "error.original_span_id", "value": {"stringValue": "tlc_e5f6a7b8c9d0e1f2_attempt_1"}},
{"key": "error.type", "value": {"stringValue": "ToolAPIUnavailableError"}},
{"key": "error.message", "value": {"stringValue": "Tool 'get_order_details_api v1.2' failed with HTTP 503: Service Temporarily Unavailable"}},
{"key": "error.handled_outcome", "value": {"stringValue": "scheduled_retry_with_backoff"}},
{"key": "exception.type", "value": {"stringValue": "requests.exceptions.HTTPError"}},
{"key": "exception.message", "value": {"stringValue": "503 Server Error: Service Temporarily Unavailable for url: https://api.internal.example.com/v1.2/orders/details"}},
{"key": "exception.stacktrace", "value": {"stringValue": "Traceback (most recent call last):\n File \"/app/agent/tool_executor.py\", line 123, in execute\n response.raise_for_status()\n File \"/usr/local/lib/python3.9/site-packages/requests/models.py\", line 943, in raise_for_status\n raise HTTPError(http_error_msg, response=self)\nrequests.exceptions.HTTPError: 503 Server Error..."}}
]
}

4.10 Streaming Response Span
Represents responses that are received in a streaming fashion (e.g., token-by-token from an LLM, or chunked data from a tool). Individual chunks are typically recorded as events on this span.
Connections
| Connection type | Span names |
|---|---|
| Potential parent | LLM Prompt Span, Tool Call Span. |
| Potential child | Not typically separate spans for chunks; individual chunks are events. Downstream processing of the fully assembled stream might be a child span. |
Key attributes
| Attribute | Type | Description |
|---|---|---|
| gen_ai.system | String | Mirrored from the parent LLM/tool call: the system providing the stream. |
| gen_ai.response.model | String | Mirrored from the parent LLM call: the model generating the stream. |
| gen_ai.response.is_streaming | Boolean | Should be true to indicate a streaming response. |
| gen_ai.usage.completion_tokens | Int | The final total count of completion tokens once the stream is finished (for LLM streams). |
Event attributes (recorded on the Streaming Response Span)
| Event name | Attribute | Type | Description |
|---|---|---|---|
| gen_ai.response.chunk | gen_ai.response.chunk.content | String | The content of the individual data chunk. |
| gen_ai.response.chunk | gen_ai.response.chunk.sequence_number | Int | The sequence number or order of this chunk in the stream. |
| gen_ai.response.chunk | gen_ai.response.chunk.metadata | String | Optional metadata associated with the chunk (e.g., finish reason for the final LLM chunk). |
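A sketch of recording streamed chunks as events on a single Streaming Response Span (stream_llm and count_tokens are hypothetical helpers; the tracer is reused from the earlier sketches):

```python
from opentelemetry import trace

with tracer.start_as_current_span(
    "openai.chat_completions.order_summary_stream", kind=trace.SpanKind.CLIENT
) as span:
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.response.is_streaming", True)

    chunks = []
    for i, chunk in enumerate(stream_llm("Summarize these order details for the user: ...")):
        chunks.append(chunk)
        span.add_event(
            "gen_ai.response.chunk",
            attributes={
                "gen_ai.response.chunk.content": chunk,
                "gen_ai.response.chunk.sequence_number": i,
            },
        )

    span.set_attribute("gen_ai.usage.completion_tokens", count_tokens(chunks))
```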
JSON example
{
"trace_id": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6",
"span_id": "stm_1b2c3d4e5f6a7b8c",
"parent_span_id": "llm_d4e5f6a7b8c9d0e1",
"name": "openai.chat_completions.order_summary_stream",
"kind": "SPAN_KIND_CLIENT",
"start_time_unix_nano": "1716400005500000000",
"end_time_unix_nano": "1716400007800000000",
"status": {"code": "STATUS_CODE_OK"},
"attributes": [
{"key": "gen_ai.system", "value": {"stringValue": "openai"}},
{"key": "gen_ai.request.model", "value": {"stringValue": "gpt-4-turbo-stream"}},
{"key": "gen_ai.prompt.0.content", "value": {"stringValue": "Summarize these order details for the user: ..."}},
{"key": "gen_ai.response.id", "value": {"stringValue": "chatcmpl_stream_zxYxWwVuTsRqPoNmLkJiHgFeDcBa"}},
{"key": "gen_ai.response.is_streaming","value": {"boolValue": true}},
{"key": "gen_ai.response.model", "value": {"stringValue": "gpt-4-turbo-stream"}},
{"key": "gen_ai.usage.prompt_tokens", "value": {"intValue": 150}},
{"key": "gen_ai.usage.completion_tokens","value": {"intValue": 220}}
],
"events": [
{
"time_unix_nano": "1716400005800000000",
"name": "gen_ai.response.chunk",
"attributes": [
{"key": "gen_ai.response.chunk.content", "value": {"stringValue": "Okay, I've found your order "}},
{"key": "gen_ai.response.chunk.sequence_number", "value": {"intValue": 0}}
]
},
{
"time_unix_nano": "1716400006100000000",
"name": "gen_ai.response.chunk",
"attributes": [
{"key": "gen_ai.response.chunk.content", "value": {"stringValue": "#ORD12345. It has been "}},
{"key": "gen_ai.response.chunk.sequence_number", "value": {"intValue": 1}}
]
},
{
"time_unix_nano": "1716400007750000000",
"name": "gen_ai.response.chunk",
"attributes": [
{"key": "gen_ai.response.chunk.content", "value": {"stringValue": " on 2025-05-28."}},
{"key": "gen_ai.response.chunk.sequence_number", "value": {"intValue": 25}},
{"key": "gen_ai.response.chunk.metadata", "value": {"stringValue": "{\"finish_reason\": \"stop\"}"}}
]
}
]
}

4.11 Retry Span
Logs attempts to re-execute a failed operation or request, often as part of an error handling strategy.
Connections
| Connection type | Span names |
|---|---|
| Potential parent | Error Handling Span, or sometimes directly the failed span itself if retry logic is embedded. |
| Potential child | The re-attempted operation span (e.g., a new LLM Prompt Span or Tool Call Span for the retry). |
Attributes
| Attribute | Type | Description |
|---|---|---|
| retry.attempt_number | Int | The current attempt number for this retry cycle (e.g., 1, 2, 3). |
| retry.max_attempts | Int | The maximum number of attempts configured for this operation. |
| retry.original_span_id | String | The identifier of the first span in the sequence of attempts that failed. |
| retry.delay_ms | Int | The delay in milliseconds before this retry attempt was made. |
| retry.strategy | String | The retry strategy employed (e.g., exponential_backoff, fixed_delay). |
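A sketch of a retry loop that emits one Retry Span per re-attempt with exponential backoff; the failed operation here is the tool-call sketch from 4.6, and retry.original_span_id would be captured from that first failed attempt:

```python
import time

import requests

def call_tool_with_retries(order_id: str, max_attempts: int = 3) -> dict:
    delay_ms = 1000
    for attempt in range(1, max_attempts + 1):
        try:
            return call_order_details_tool(order_id)  # tool-call sketch from 4.6
        except requests.exceptions.HTTPError:
            if attempt == max_attempts:
                raise  # give up after the configured number of attempts
            with tracer.start_as_current_span("Agent.retry_tool_call") as span:
                span.set_attribute("retry.attempt_number", attempt + 1)
                span.set_attribute("retry.max_attempts", max_attempts)
                span.set_attribute("retry.delay_ms", delay_ms)
                span.set_attribute("retry.strategy", "exponential_backoff")
                time.sleep(delay_ms / 1000)
            delay_ms *= 2
```

Whether the backoff sleep sits inside the Retry Span (as shown) or before it is an implementation choice; the span mainly needs to record the attempt number, delay, and strategy.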
JSON example
{
"trace_id": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6",
"span_id": "rty_2c3d4e5f6a7b8c9d",
"parent_span_id": "err_0a1b2c3d4e5f6a7b",
"name": "Agent.retry_tool_call_get_order_details_attempt_2",
"kind": "SPAN_KIND_INTERNAL",
"start_time_unix_nano": "1716400005100000000",
"end_time_unix_nano": "1716400005105000000",
"status": {"code": "STATUS_CODE_OK"},
"attributes": [
{"key": "retry.attempt_number", "value": {"intValue": 2}},
{"key": "retry.max_attempts", "value": {"intValue": 3}},
{"key": "retry.original_span_id", "value": {"stringValue": "tlc_e5f6a7b8c9d0e1f2"}},
{"key": "retry.delay_ms", "value": {"intValue": 2000}},
{"key": "retry.strategy", "value": {"stringValue": "exponential_backoff"}}
]
}

4.12 Recursive Call Span
Captures instances where an agent, or a significant part of its logic (like a sub-agent or a complex function), recursively calls itself to solve a sub-problem or iterate.
Connections
| Connection type | Span names |
|---|---|
| Potential parent | Agent Session Span, Reasoning Span, or another Recursive Call Span. |
| Potential child | Any agent-level spans that constitute the recursive execution (e.g., Planning Span, Reasoning Span, LLM Prompt Span for the sub-problem). |
Attributes
| Attribute | Type | Description |
|---|---|---|
| recursion.depth | Int | The current depth of recursion (e.g., 1 for the first recursive call, 2 for a call within that, etc.). |
| recursion.input | String | The input or parameters provided to this specific recursive call (potentially truncated). |
| recursion.output | String | The output or result returned by this specific recursive call (potentially truncated). |
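A sketch of a self-invoking clarification step that records its depth on each nested Recursive Call Span (ask_llm_to_clarify and needs_more_clarification are hypothetical helpers; the tracer is reused from earlier sketches):

```python
def clarify_user_need(query: str, depth: int = 1, max_depth: int = 3) -> str:
    with tracer.start_as_current_span("Agent.recursive_clarify_user_need") as span:
        span.set_attribute("recursion.depth", depth)
        span.set_attribute("recursion.input", query[:4000])

        clarified = ask_llm_to_clarify(query)  # hypothetical LLM-backed refinement step
        if depth < max_depth and needs_more_clarification(clarified):  # hypothetical check
            clarified = clarify_user_need(clarified, depth + 1, max_depth)

        span.set_attribute("recursion.output", clarified[:4000])
        return clarified
```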
JSON example
{
"trace_id": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6",
"span_id": "rec_3d4e5f6a7b8c9d0e",
"parent_span_id": "s1p_4a5b6c7d8e9f0a1b",
"name": "Agent.recursive_clarify_user_need_shipping",
"kind": "SPAN_KIND_INTERNAL",
"start_time_unix_nano": "1716400006000000000",
"end_time_unix_nano": "1716400007500000000",
"status": {"code": "STATUS_CODE_OK"},
"attributes": [
{"key": "recursion.depth", "value": {"intValue": 1}},
{"key": "recursion.input", "value": {"stringValue": "{\"original_query\": \"Is shipping fast?\", \"context\": \"User is asking about order #ORD12345, already confirmed shipped.\"}"}},
{"key": "recursion.output", "value": {"stringValue": "{\"clarified_need\": \"User wants to know the estimated delivery date and carrier for #ORD12345.\", \"next_action\": \"Provide ETA and carrier from tool response.\"}"}}
]
}

4.13 Final Output Span
Represents the phase where the agent generates and delivers its final output or response for the given task or session.
Connections
| Connection type | Span names |
|---|---|
| Potential parent | Agent Session Span, Reasoning Span (that concluded the task). |
| Potential child | Typically none related to further agent logic; possibly spans related to the delivery mechanism if instrumented (e.g., an HTTP client span if sending output via API, a DB span if saving output). |
Attributes
| Attribute | Type | Description |
|---|---|---|
| output.content | String | The final generated output, response, or summary provided by the agent (potentially truncated). |
| output.format | String | The format of the output (e.g., plain_text, json, markdown, html). |
| output.recipient | String | An identifier for the intended recipient or destination of the output (e.g., user_chat_interface, api_caller_service_X, database_table_Y). |
| output.satisfaction_rating_expected | Double | If applicable, an estimated or predicted satisfaction rating (e.g., on a scale of 0–1) for this output. |
JSON example
{
"trace_id": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6",
"span_id": "fout_4e5f6a7b8c9d0e1f",
"parent_span_id": "s1p_4a5b6c7d8e9f0a1b",
"name": "Agent.generate_final_order_status_summary",
"kind": "SPAN_KIND_INTERNAL",
"start_time_unix_nano": "1716400008000000000",
"end_time_unix_nano": "1716400008200000000",
"status": {"code": "STATUS_CODE_OK"},
"attributes": [
{"key": "output.content", "value": {"stringValue": "Okay, I've found your order #ORD12345. It has been shipped via ExampleParcelService (Tracking: EPS9876543210) and is estimated to arrive on 2025-05-28. It includes a Wireless Mouse and a Keyboard, totaling $99.98."}},
{"key": "output.format", "value": {"stringValue": "formatted_plain_text"}},
{"key": "output.recipient", "value": {"stringValue": "user_chat_interface_session_uuid_123e4567-e89b-12d3-a456-426614174000"}},
{"key": "output.satisfaction_rating_expected", "value": {"doubleValue": 0.85}}
]
}

| Source category | Examples |
|---|---|
| Automatically from Framework / OTEL SDK | Core span identifiers and timing, status, HTTP/DB client attributes, many GenAI fields if the framework auto-instruments LLM calls, propagation of session.id or agent.id. |
| User-set (manual instrumentation) | More descriptive span names, custom business attributes, detailed inputs/outputs (gen_ai.planning.output, gen_ai.reasoning.logic, etc.), GenAI fields when using raw SDKs with no auto-instrumentation, all attributes on custom spans like Planning, Reasoning, Error Handling, etc. |
Adhering to these conventions maximizes interoperability with observability platforms.