Semantic Conventions for GenAI events
Status: Experimental
GenAI instrumentations MAY capture user inputs sent to the model and responses received from it as events.
Note: The Event API is experimental and not yet available in some languages. Check the spec-compliance matrix to see the implementation status in the corresponding language.
Instrumentations MAY capture inputs and outputs if and only if the application has enabled the collection of this data. This is for three primary reasons:
- Data privacy concerns. End users of GenAI applications may input sensitive information or personally identifiable information (PII) that they do not wish to be sent to a telemetry backend.
- Data size concerns. Although there is no specified limit to sizes, there are practical limitations in programming languages and telemetry systems. Some GenAI systems allow for extremely large context windows that end users may take full advantage of.
- Performance concerns. Sending large amounts of data to a telemetry backend may cause performance issues for the application.
Body fields that contain user input, model output, or other potentially sensitive and verbose data SHOULD NOT be captured by default.
Semantic conventions for individual systems which extend content events SHOULD document all additional body fields and specify whether they should be captured by default or require the application to opt in to capturing them.
Telemetry consumers SHOULD expect to receive unknown body fields.
Instrumentations SHOULD NOT capture undocumented body fields and MUST follow the documented defaults for known fields. Instrumentations MAY offer configuration options that allow disabling events or capturing all fields.
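For illustration, the sketch below shows one way an instrumentation might gate Opt-In content behind application configuration. The environment variable name and the helper function are hypothetical and not defined by this convention.

```python
import os
from typing import Any, Optional

# Hypothetical opt-in flag; the variable name is illustrative only and is not
# defined by this convention. Real instrumentations expose their own options.
CAPTURE_CONTENT = (
    os.getenv("GENAI_CAPTURE_MESSAGE_CONTENT", "false").strip().lower() == "true"
)

def message_body(role: Optional[str], content: Any) -> dict:
    """Build an event body, dropping sensitive Opt-In fields unless enabled."""
    body = {}
    if role is not None:
        body["role"] = role
    if CAPTURE_CONTENT and content is not None:
        body["content"] = content  # Opt-In: omitted by default
    return body
```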
Common attributes
The following attributes apply to all GenAI events.
Attribute | Type | Description | Examples | Requirement Level | Stability |
---|---|---|---|---|---|
gen_ai.system | string | The Generative AI product as identified by the client or server instrumentation. [1] | openai | Recommended | Experimental |
[1]: The gen_ai.system describes a family of GenAI models with specific model identified by gen_ai.request.model and gen_ai.response.model attributes.
The actual GenAI product may differ from the one identified by the client. For example, when using OpenAI client libraries to communicate with Mistral, the gen_ai.system is set to openai based on the instrumentation's best knowledge.
For custom models, a custom friendly name SHOULD be used. If none of these options apply, the gen_ai.system SHOULD be set to _OTHER.
gen_ai.system has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
Value | Description | Stability |
---|---|---|
anthropic | Anthropic | Experimental |
cohere | Cohere | Experimental |
openai | OpenAI | Experimental |
vertex_ai | Vertex AI | Experimental |
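As an informal sketch of the rules above (a well-known value when it applies, a custom friendly name for custom models, and _OTHER as the last resort), assuming the client reports its system as a plain string:

```python
# Well-known gen_ai.system values from the table above.
WELL_KNOWN_SYSTEMS = {"anthropic", "cohere", "openai", "vertex_ai"}

def gen_ai_system(client_reported: str) -> str:
    """Map a client-reported system name to the gen_ai.system attribute value."""
    normalized = client_reported.strip().lower().replace(" ", "_")
    if normalized in WELL_KNOWN_SYSTEMS:
        return normalized  # a well-known value MUST be used when it applies
    if normalized:
        return normalized  # a custom friendly name for custom models
    return "_OTHER"        # nothing else applies

print(gen_ai_system("Vertex AI"))  # -> "vertex_ai"
```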
System event
This event describes the instructions passed to the GenAI model.
The event name MUST be gen_ai.system.message.
Body Field | Type | Description | Examples | Requirement Level |
---|---|---|---|---|
role | string | The actual role of the message author as passed in the message. | "system", "instructions" | Conditionally Required: if available and not equal to system |
content | AnyValue | The contents of the system message. | "You're a friendly bot that answers questions about OpenTelemetry." | Opt-In |
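The following sketch shows how an instrumentation might emit this event. The emit_event helper is a hypothetical stand-in for whatever event-emission API the SDK provides (for example, the experimental OpenTelemetry Events API); only the event name, the gen_ai.system attribute, and the body shape come from this convention.

```python
def emit_event(name: str, attributes: dict, body: dict) -> None:
    """Hypothetical transport; a real instrumentation would use the SDK's event API."""
    print(name, attributes, body)

def record_system_message(content, capture_content: bool, role: str = "system") -> None:
    body = {}
    if role != "system":
        body["role"] = role        # Conditionally Required: only when it differs from "system"
    if capture_content and content is not None:
        body["content"] = content  # Opt-In: requires the application to opt in
    emit_event("gen_ai.system.message", {"gen_ai.system": "openai"}, body)

record_system_message(
    "You're a friendly bot that answers questions about OpenTelemetry.",
    capture_content=True,
)
```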
User event
This event describes the prompt message specified by the user.
The event name MUST be gen_ai.user.message.
Body Field | Type | Description | Examples | Requirement Level |
---|---|---|---|---|
role | string | The actual role of the message author as passed in the message. | "user", "customer" | Conditionally Required: if available and if not equal to user |
content | AnyValue | The contents of the user message. | What telemetry is reported by OpenAI instrumentations? | Opt-In |
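The user message follows the same pattern; the sketch below (helper name and values are illustrative) only highlights how the Conditionally Required role rule plays out.

```python
def user_message_body(role: str, content, capture_content: bool) -> dict:
    """Build the gen_ai.user.message body per the table above."""
    body = {}
    if role and role != "user":
        body["role"] = role        # recorded only because it differs from "user"
    if capture_content and content is not None:
        body["content"] = content  # Opt-In
    return body

# "customer" differs from the default role, so it is kept even without content:
print(user_message_body("customer", "How to instrument GenAI library with OTel?", capture_content=False))
# -> {'role': 'customer'}
```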
Assistant event
This event describes the assistant message.
The event name MUST be gen_ai.assistant.message.
Body Field | Type | Description | Examples | Requirement Level |
---|---|---|---|---|
role | string | The actual role of the message author as passed in the message. | "assistant", "bot" | Conditionally Required: if available and if not equal to assistant |
content | AnyValue | The contents of the assistant message. | Spans, events, metrics defined by the GenAI semantic conventions. | Opt-In |
tool_calls | ToolCall[] | The tool calls generated by the model, such as function calls. | [{"id":"call_mszuSIzqtI65i1wAUOE8w5H4", "function":{"name":"get_link_to_otel_semconv", "arguments":{"semconv":"gen_ai"}}, "type":"function"}] | Conditionally Required : if available |
ToolCall object
Body Field | Type | Description | Examples | Requirement Level |
---|---|---|---|---|
id | string | The id of the tool call | call_mszuSIzqtI65i1wAUOE8w5H4 | Required |
type | string | The type of the tool | function | Required |
function | Function | The function that the model called | {"name":"get_link_to_otel_semconv", "arguments":{"semconv":"gen_ai"}} | Required |
Function object
Body Field | Type | Description | Examples | Requirement Level |
---|---|---|---|---|
name | string | The name of the function to call | get_link_to_otel_semconv | Required |
arguments | AnyValue | The arguments to pass to the function | {"semconv": "gen_ai"} | Opt-In |
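Putting the assistant message, ToolCall, and Function tables together, the nested body shape can be sketched as typed dictionaries; the type names below are illustrative only.

```python
from typing import Any, List, TypedDict

class Function(TypedDict, total=False):
    name: str           # Required
    arguments: Any      # Opt-In

class ToolCall(TypedDict):
    id: str             # Required
    type: str           # Required, e.g. "function"
    function: Function  # Required

class AssistantMessageBody(TypedDict, total=False):
    role: str                   # Conditionally Required: if not equal to "assistant"
    content: Any                # Opt-In
    tool_calls: List[ToolCall]  # Conditionally Required: if available

body: AssistantMessageBody = {
    "tool_calls": [
        {
            "id": "call_mszuSIzqtI65i1wAUOE8w5H4",
            "type": "function",
            "function": {"name": "get_link_to_otel_semconv",
                         "arguments": {"semconv": "gen_ai"}},
        }
    ]
}
```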
Tool event
This event describes the output of the tool or function submitted back to the model.
The event name MUST be gen_ai.tool.message.
Body Field | Type | Description | Examples | Requirement Level |
---|---|---|---|---|
role | string | The actual role of the message author as passed in the message. | "tool", "function" | Conditionally Required: if available and if not equal to tool |
content | AnyValue | The contents of the tool message. | opentelemetry.io | Opt-In |
id | string | The id of the tool call that this message is responding to. | call_mszuSIzqtI65i1wAUOE8w5H4 | Required |
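A minimal sketch of building the tool-result body, using the values from the table above; the helper name is illustrative.

```python
def tool_message_body(tool_call_id: str, content, capture_content: bool) -> dict:
    """Build the gen_ai.tool.message body per the table above."""
    body = {"id": tool_call_id}    # Required: the tool call this message answers
    # role is omitted here because it equals the default "tool".
    if capture_content and content is not None:
        body["content"] = content  # Opt-In
    return body

print(tool_message_body("call_mszuSIzqtI65i1wAUOE8w5H4", "opentelemetry.io", capture_content=True))
# -> {'id': 'call_mszuSIzqtI65i1wAUOE8w5H4', 'content': 'opentelemetry.io'}
```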
Choice event
This event describes an individual chat response (choice) generated by the model. If the GenAI model returns multiple choices, each choice SHOULD be recorded as an individual event.
When the response is streamed, instrumentations that report response events MUST reconstruct and report the full message and MUST NOT report individual chunks as events.
If the request to the GenAI model fails with an error before content is received, the instrumentation SHOULD report an event with truncated content (if enabled). If finish_reason was not received, it MUST be set to error.
The event name MUST be gen_ai.choice.
Choice event body has the following fields:
Body Field | Type | Description | Examples | Requirement Level |
---|---|---|---|---|
finish_reason | string | The reason the model stopped generating tokens. | stop, tool_calls, content_filter | Required |
index | int | The index of the choice in the list of choices. | 1 | Required |
message | Message | GenAI response message | {"content":"The OpenAI semantic conventions are available at opentelemetry.io"} | Recommended |
Message object
Body Field | Type | Description | Examples | Requirement Level |
---|---|---|---|---|
role | string | The actual role of the message author as passed in the message. | "assistant", "bot" | Conditionally Required: if available and if not equal to assistant |
content | AnyValue | The contents of the assistant message. | Spans, events, metrics defined by the GenAI semantic conventions. | Opt-In |
tool_calls | ToolCall[] | The tool calls generated by the model, such as function calls. | [{"id":"call_mszuSIzqtI65i1wAUOE8w5H4", "function":{"name":"get_link_to_otel_semconv", "arguments":"{\"semconv\":\"gen_ai\"}"}, "type":"function"}] | Conditionally Required : if available |
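The sketch below emits one gen_ai.choice event per choice and applies the finish_reason fallback described above. The emit_event helper and the choice dictionaries are illustrative; for streamed responses the full message is assumed to have already been reconstructed from the chunks.

```python
def emit_event(name: str, attributes: dict, body: dict) -> None:
    """Hypothetical transport; a real instrumentation would use the SDK's event API."""
    print(name, attributes, body)

def record_choices(choices, capture_content: bool) -> None:
    """Emit one gen_ai.choice event per (fully reconstructed) choice."""
    for index, choice in enumerate(choices):
        message = {}
        if capture_content and choice.get("content") is not None:
            message["content"] = choice["content"]  # Opt-In
        # Fall back to "error" when no finish_reason was received,
        # per the rule above for requests that fail before completion.
        finish_reason = choice.get("finish_reason") or "error"
        emit_event(
            "gen_ai.choice",
            {"gen_ai.system": "openai"},
            {"index": index, "finish_reason": finish_reason, "message": message},
        )

record_choices(
    [{"content": "Follow GenAI semantic conventions available at opentelemetry.io.",
      "finish_reason": "stop"}],
    capture_content=True,
)
```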
Custom events
System-specific events that are not covered in this document SHOULD be documented in the corresponding Semantic Conventions extensions and SHOULD follow the gen_ai.{gen_ai.system}.* naming pattern.
Examples
Chat completion
This example covers the following scenario:
- The user requests a chat completion from the OpenAI GPT-4 model with the following prompt:
  - System message: You're a friendly bot that answers questions about OpenTelemetry.
  - User message: How to instrument GenAI library with OTel?
- The model responds with the "Follow GenAI semantic conventions available at opentelemetry.io." message.
Span:
Attribute name | Value |
---|---|
Span name | "chat gpt-4" |
gen_ai.system | "openai" |
gen_ai.request.model | "gpt-4" |
gen_ai.request.max_tokens | 200 |
gen_ai.request.top_p | 1.0 |
gen_ai.response.id | "chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l" |
gen_ai.response.model | "gpt-4-0613" |
gen_ai.usage.output_tokens | 47 |
gen_ai.usage.input_tokens | 52 |
gen_ai.response.finish_reasons | ["stop"] |
Events:

gen_ai.system.message

Property | Value |
---|---|
gen_ai.system | "openai" |
Event body | {"content": "You're a friendly bot that answers questions about OpenTelemetry."} |

gen_ai.user.message

Property | Value |
---|---|
gen_ai.system | "openai" |
Event body | {"content":"How to instrument GenAI library with OTel?"} |

gen_ai.choice

Property | Value |
---|---|
gen_ai.system | "openai" |
Event body (with content enabled) | {"index":0,"finish_reason":"stop","message":{"content":"Follow GenAI semantic conventions available at opentelemetry.io."}} |
Event body (without content) | {"index":0,"finish_reason":"stop","message":{}} |
Tools
This example covers the following scenario:
- The application requests a chat completion from the OpenAI GPT-4 model and provides a function definition:
  - The application provides the following prompt:
    - User message: How to instrument GenAI library with OTel?
  - The application defines a tool (a function) named get_link_to_otel_semconv with a single string argument named semconv.
- The model responds with a tool call request, which the application executes.
- The application requests chat completion again, now with the tool execution result.
Here's the telemetry generated for each step in this scenario:
Chat completion resulting in a tool call:

Attribute name | Value |
---|---|
Span name | "chat gpt-4" |
gen_ai.system | "openai" |
gen_ai.request.model | "gpt-4" |
gen_ai.request.max_tokens | 200 |
gen_ai.request.top_p | 1.0 |
gen_ai.response.id | "chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l" |
gen_ai.response.model | "gpt-4-0613" |
gen_ai.usage.output_tokens | 17 |
gen_ai.usage.input_tokens | 47 |
gen_ai.response.finish_reasons | ["tool_calls"] |
Events parented to this span:

gen_ai.user.message (not reported when capturing content is disabled)

Property | Value |
---|---|
gen_ai.system | "openai" |
Event body | {"content":"How to instrument GenAI library with OTel?"} |

gen_ai.choice

Property | Value |
---|---|
gen_ai.system | "openai" |
Event body (with content) | {"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_link_to_otel_semconv","arguments":"{\"semconv\":\"GenAI\"}"},"type":"function"}]}} |
Event body (without content) | {"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_link_to_otel_semconv"},"type":"function"}]}} |
The application executes the tool call. The application may create a span for this call, which is not covered by this semantic convention.
Final chat completion call
Attribute name | Value |
---|---|
Span name | "chat gpt-4" |
gen_ai.system | "openai" |
gen_ai.request.model | "gpt-4" |
gen_ai.request.max_tokens | 200 |
gen_ai.request.top_p | 1.0 |
gen_ai.response.id | "chatcmpl-call_VSPygqKTWdrhaFErNvMV18Yl" |
gen_ai.response.model | "gpt-4-0613" |
gen_ai.usage.output_tokens | 52 |
gen_ai.usage.input_tokens | 47 |
gen_ai.response.finish_reasons | ["stop"] |
Events parented to this span (in this example, the event content matches the original messages, but applications may also drop messages or change their content):

gen_ai.user.message (not reported when capturing content is disabled)

Property | Value |
---|---|
gen_ai.system | "openai" |
Event body | {"content":"How to instrument GenAI library with OTel?"} |

gen_ai.assistant.message

Property | Value |
---|---|
gen_ai.system | "openai" |
Event body (content enabled) | {"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_link_to_otel_semconv","arguments":"{\"semconv\":\"GenAI\"}"},"type":"function"}]} |
Event body (content not enabled) | {"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_link_to_otel_semconv"},"type":"function"}]} |

gen_ai.tool.message

Property | Value |
---|---|
gen_ai.system | "openai" |
Event body (content enabled) | {"content":"opentelemetry.io/semconv/gen-ai","id":"call_VSPygqKTWdrhaFErNvMV18Yl"} |
Event body (content not enabled) | {"id":"call_VSPygqKTWdrhaFErNvMV18Yl"} |

gen_ai.choice

Property | Value |
---|---|
gen_ai.system | "openai" |
Event body (content enabled) | {"index":0,"finish_reason":"stop","message":{"content":"Follow OTel semconv available at opentelemetry.io/semconv/gen-ai"}} |
Event body (content not enabled) | {"index":0,"finish_reason":"stop","message":{}} |
Chat completion with multiple choices
This example covers the following scenario:
- The user requests 2 chat completion choices from the OpenAI GPT-4 model with the following prompt:
  - System message: You're a friendly bot that answers questions about OpenTelemetry.
  - User message: How to instrument GenAI library with OTel?
- The model responds with two choices:
  - "Follow GenAI semantic conventions available at opentelemetry.io." message
  - "Use OpenAI instrumentation library." message
Span:
Attribute name | Value |
---|---|
Span name | "chat gpt-4" |
gen_ai.system | "openai" |
gen_ai.request.model | "gpt-4" |
gen_ai.request.max_tokens | 200 |
gen_ai.request.top_p | 1.0 |
gen_ai.response.id | "chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l" |
gen_ai.response.model | "gpt-4-0613" |
gen_ai.usage.output_tokens | 77 |
gen_ai.usage.input_tokens | 52 |
gen_ai.response.finish_reasons | ["stop"] |
Events:
gen_ai.system.message: the same as in the Chat Completion example

gen_ai.user.message: the same as in the previous example

gen_ai.choice

Property | Value |
---|---|
gen_ai.system | "openai" |
Event body (content enabled) | {"index":0,"finish_reason":"stop","message":{"content":"Follow GenAI semantic conventions available at opentelemetry.io."}} |

gen_ai.choice

Property | Value |
---|---|
gen_ai.system | "openai" |
Event body (content enabled) | {"index":1,"finish_reason":"stop","message":{"content":"Use OpenAI instrumentation library."}} |