# Semantic Conventions for GenAI events

Status: Experimental

GenAI instrumentations MAY capture user inputs sent to the model and responses received from it as events.

Note: the Event API is experimental and not yet available in some languages. Check the spec-compliance matrix to see the implementation status in the corresponding language.

Instrumentations MAY capture inputs and outputs if and only if the application has enabled the collection of this data. This is for three primary reasons:

  1. Data privacy concerns. End users of GenAI applications may input sensitive information or personally identifiable information (PII) that they do not wish to be sent to a telemetry backend.
  2. Data size concerns. Although there is no specified limit to sizes, there are practical limitations in programming languages and telemetry systems. Some GenAI systems allow for extremely large context windows that end users may take full advantage of.
  3. Performance concerns. Sending large amounts of data to a telemetry backend may cause performance issues for the application.

Body fields that contain user input, model output, or other potentially sensitive and verbose data SHOULD NOT be captured by default.

Semantic conventions for individual systems that extend content events SHOULD document all additional body fields and specify whether they are captured by default or require the application to opt in to capturing them.

Telemetry consumers SHOULD expect to receive unknown body fields.

Instrumentations SHOULD NOT capture undocumented body fields and MUST follow the documented defaults for known fields. Instrumentations MAY offer configuration options that allow disabling events or capturing all fields.
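The opt-in rule above can be sketched as a small redaction helper. This is an illustration only: `build_event_body`, the field names, and the configuration flag are hypothetical, not part of the convention.

```python
def build_event_body(fields, sensitive_keys, capture_content):
    """Return an event body, dropping opt-in (sensitive or verbose) fields
    unless the application has enabled content capture."""
    if capture_content:
        return dict(fields)
    return {k: v for k, v in fields.items() if k not in sensitive_keys}

# With capture disabled (the default), "content" never reaches the backend,
# while non-sensitive fields such as "role" are still emitted.
redacted = build_event_body(
    {"content": "What's the weather in Paris?", "role": "customer"},
    sensitive_keys={"content"},
    capture_content=False,
)
```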

## Event: `gen_ai.system.message`

Status: Experimental

The event name MUST be `gen_ai.system.message`.

This event describes the system instructions passed to the GenAI model.

| Attribute | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| `gen_ai.system` | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | Recommended | Experimental |

[1] `gen_ai.system`: The `gen_ai.system` attribute describes a family of GenAI models, with the specific model identified by the `gen_ai.request.model` and `gen_ai.response.model` attributes.

The actual GenAI product may differ from the one identified by the client. Multiple systems, including Azure OpenAI and Gemini, are accessible through OpenAI client libraries. In such cases, `gen_ai.system` is set to `openai` based on the instrumentation's best knowledge, instead of the actual system. The `server.address` attribute may help identify the actual system in use for `openai`.

For custom models, a custom friendly name SHOULD be used. If none of these options apply, `gen_ai.system` SHOULD be set to `_OTHER`.

`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | Anthropic | Experimental |
| `aws.bedrock` | AWS Bedrock | Experimental |
| `az.ai.inference` | Azure AI Inference | Experimental |
| `az.ai.openai` | Azure OpenAI | Experimental |
| `cohere` | Cohere | Experimental |
| `deepseek` | DeepSeek | Experimental |
| `gemini` | Gemini | Experimental |
| `groq` | Groq | Experimental |
| `ibm.watsonx.ai` | IBM Watsonx AI | Experimental |
| `mistral_ai` | Mistral AI | Experimental |
| `openai` | OpenAI | Experimental |
| `perplexity` | Perplexity | Experimental |
| `vertex_ai` | Vertex AI | Experimental |
| `xai` | xAI | Experimental |

Body fields:

| Body Field | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| `content` | undefined | The contents of the system message. | `You're a helpful bot` | Opt-In | Experimental |
| `role` | string | The actual role of the message author as passed in the message. | `system`; `instruction` | Conditionally Required: if available and not equal to `system`. | Experimental |
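As an illustration of the defaults above, a hypothetical instrumentation might assemble the `gen_ai.system.message` body like this; `system_message_body` is a sketch, not a real API:

```python
def system_message_body(content, role, capture_content):
    """Build a gen_ai.system.message event body per the body-field defaults."""
    body = {}
    if capture_content and content is not None:
        body["content"] = content  # Opt-In: only when content capture is enabled
    if role != "system":
        body["role"] = role  # Conditionally Required: only when it differs from "system"
    return body
```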

## Event: `gen_ai.user.message`

Status: Experimental

The event name MUST be `gen_ai.user.message`.

This event describes the user message passed to the GenAI model.

| Attribute | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| `gen_ai.system` | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | Recommended | Experimental |

[1] `gen_ai.system`: The `gen_ai.system` attribute describes a family of GenAI models, with the specific model identified by the `gen_ai.request.model` and `gen_ai.response.model` attributes.

The actual GenAI product may differ from the one identified by the client. Multiple systems, including Azure OpenAI and Gemini, are accessible through OpenAI client libraries. In such cases, `gen_ai.system` is set to `openai` based on the instrumentation's best knowledge, instead of the actual system. The `server.address` attribute may help identify the actual system in use for `openai`.

For custom models, a custom friendly name SHOULD be used. If none of these options apply, `gen_ai.system` SHOULD be set to `_OTHER`.

`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | Anthropic | Experimental |
| `aws.bedrock` | AWS Bedrock | Experimental |
| `az.ai.inference` | Azure AI Inference | Experimental |
| `az.ai.openai` | Azure OpenAI | Experimental |
| `cohere` | Cohere | Experimental |
| `deepseek` | DeepSeek | Experimental |
| `gemini` | Gemini | Experimental |
| `groq` | Groq | Experimental |
| `ibm.watsonx.ai` | IBM Watsonx AI | Experimental |
| `mistral_ai` | Mistral AI | Experimental |
| `openai` | OpenAI | Experimental |
| `perplexity` | Perplexity | Experimental |
| `vertex_ai` | Vertex AI | Experimental |
| `xai` | xAI | Experimental |

Body fields:

| Body Field | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| `content` | undefined | The contents of the user message. | `What's the weather in Paris?` | Opt-In | Experimental |
| `role` | string | The actual role of the message author as passed in the message. | `user`; `customer` | Conditionally Required: if available and not equal to `user`. | Experimental |

## Event: `gen_ai.assistant.message`

Status: Experimental

The event name MUST be `gen_ai.assistant.message`.

This event describes the assistant message passed to the GenAI system.

| Attribute | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| `gen_ai.system` | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | Recommended | Experimental |

[1] `gen_ai.system`: The `gen_ai.system` attribute describes a family of GenAI models, with the specific model identified by the `gen_ai.request.model` and `gen_ai.response.model` attributes.

The actual GenAI product may differ from the one identified by the client. Multiple systems, including Azure OpenAI and Gemini, are accessible through OpenAI client libraries. In such cases, `gen_ai.system` is set to `openai` based on the instrumentation's best knowledge, instead of the actual system. The `server.address` attribute may help identify the actual system in use for `openai`.

For custom models, a custom friendly name SHOULD be used. If none of these options apply, `gen_ai.system` SHOULD be set to `_OTHER`.

`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | Anthropic | Experimental |
| `aws.bedrock` | AWS Bedrock | Experimental |
| `az.ai.inference` | Azure AI Inference | Experimental |
| `az.ai.openai` | Azure OpenAI | Experimental |
| `cohere` | Cohere | Experimental |
| `deepseek` | DeepSeek | Experimental |
| `gemini` | Gemini | Experimental |
| `groq` | Groq | Experimental |
| `ibm.watsonx.ai` | IBM Watsonx AI | Experimental |
| `mistral_ai` | Mistral AI | Experimental |
| `openai` | OpenAI | Experimental |
| `perplexity` | Perplexity | Experimental |
| `vertex_ai` | Vertex AI | Experimental |
| `xai` | xAI | Experimental |

Body fields:

| Body Field | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| `content` | undefined | The contents of the assistant message. | `The weather in Paris is rainy and overcast, with temperatures around 57°F` | Opt-In | Experimental |
| `role` | string | The actual role of the message author as passed in the message. | `assistant`; `bot` | Conditionally Required: if available and not equal to `assistant`. | Experimental |
| `tool_calls` | map[] | The tool calls generated by the model, such as function calls. | | Conditionally Required: if available | Experimental |
| `tool_calls.function` | map | The function call. | | Required | Experimental |
| `tool_calls.function.arguments` | undefined | The arguments of the function as provided in the LLM response. [1] | `{"location": "Paris"}` | Opt-In | Experimental |
| `tool_calls.function.name` | string | The name of the function. | `get_weather` | Required | Experimental |
| `tool_calls.id` | string | The id of the tool call. | `call_mszuSIzqtI65i1wAUOE8w5H4` | Required | Experimental |
| `tool_calls.type` | enum | The type of the tool. | `function` | Required | Experimental |

[1]: Models usually return arguments as a JSON string. In that case, it's RECOMMENDED to provide arguments as-is, without attempting to deserialize them. Semantic conventions for individual systems MAY specify a different type for the `arguments` field.
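The pass-through behavior can be sketched as follows; `tool_call_body` and the `call_abc123` id are hypothetical, used only to show that the arguments stay an opaque string:

```python
def tool_call_body(call_id, name, raw_arguments, capture_content):
    """Represent one tool call. `raw_arguments` is the JSON string exactly as
    returned by the model and is deliberately NOT deserialized here."""
    call = {"id": call_id, "type": "function", "function": {"name": name}}
    if capture_content:
        call["function"]["arguments"] = raw_arguments  # Opt-In, kept as a string
    return call

call = tool_call_body("call_abc123", "get_weather", '{"location": "Paris"}', True)
```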

`type` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `function` | Function | Experimental |

## Event: `gen_ai.tool.message`

Status: Experimental

The event name MUST be `gen_ai.tool.message`.

This event describes the response from a tool or function call passed to the GenAI model.

| Attribute | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| `gen_ai.system` | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | Recommended | Experimental |

[1] `gen_ai.system`: The `gen_ai.system` attribute describes a family of GenAI models, with the specific model identified by the `gen_ai.request.model` and `gen_ai.response.model` attributes.

The actual GenAI product may differ from the one identified by the client. Multiple systems, including Azure OpenAI and Gemini, are accessible through OpenAI client libraries. In such cases, `gen_ai.system` is set to `openai` based on the instrumentation's best knowledge, instead of the actual system. The `server.address` attribute may help identify the actual system in use for `openai`.

For custom models, a custom friendly name SHOULD be used. If none of these options apply, `gen_ai.system` SHOULD be set to `_OTHER`.

`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | Anthropic | Experimental |
| `aws.bedrock` | AWS Bedrock | Experimental |
| `az.ai.inference` | Azure AI Inference | Experimental |
| `az.ai.openai` | Azure OpenAI | Experimental |
| `cohere` | Cohere | Experimental |
| `deepseek` | DeepSeek | Experimental |
| `gemini` | Gemini | Experimental |
| `groq` | Groq | Experimental |
| `ibm.watsonx.ai` | IBM Watsonx AI | Experimental |
| `mistral_ai` | Mistral AI | Experimental |
| `openai` | OpenAI | Experimental |
| `perplexity` | Perplexity | Experimental |
| `vertex_ai` | Vertex AI | Experimental |
| `xai` | xAI | Experimental |

Body fields:

| Body Field | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| `content` | undefined | The contents of the tool message. | `rainy, 57°F` | Opt-In | Experimental |
| `id` | string | Tool call id that this message is responding to. | `call_mszuSIzqtI65i1wAUOE8w5H4` | Required | Experimental |
| `role` | string | The actual role of the message author as passed in the message. | `tool`; `function` | Conditionally Required: if available and not equal to `tool`. | Experimental |
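A sketch of how the defaults above interact for `gen_ai.tool.message` (hypothetical helper, not a real API): `id` is always emitted, while `content` is opt-in.

```python
def tool_message_body(call_id, content, role, capture_content):
    """Build a gen_ai.tool.message event body per the body-field defaults."""
    body = {"id": call_id}  # Required regardless of content capture
    if capture_content and content is not None:
        body["content"] = content  # Opt-In
    if role != "tool":
        body["role"] = role  # Conditionally Required: only when it differs from "tool"
    return body
```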

## Event: `gen_ai.choice`

Status: Experimental

The event name MUST be `gen_ai.choice`.

This event describes the GenAI response message.

| Attribute | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| `gen_ai.system` | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | Recommended | Experimental |

[1] `gen_ai.system`: The `gen_ai.system` attribute describes a family of GenAI models, with the specific model identified by the `gen_ai.request.model` and `gen_ai.response.model` attributes.

The actual GenAI product may differ from the one identified by the client. Multiple systems, including Azure OpenAI and Gemini, are accessible through OpenAI client libraries. In such cases, `gen_ai.system` is set to `openai` based on the instrumentation's best knowledge, instead of the actual system. The `server.address` attribute may help identify the actual system in use for `openai`.

For custom models, a custom friendly name SHOULD be used. If none of these options apply, `gen_ai.system` SHOULD be set to `_OTHER`.

`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | Anthropic | Experimental |
| `aws.bedrock` | AWS Bedrock | Experimental |
| `az.ai.inference` | Azure AI Inference | Experimental |
| `az.ai.openai` | Azure OpenAI | Experimental |
| `cohere` | Cohere | Experimental |
| `deepseek` | DeepSeek | Experimental |
| `gemini` | Gemini | Experimental |
| `groq` | Groq | Experimental |
| `ibm.watsonx.ai` | IBM Watsonx AI | Experimental |
| `mistral_ai` | Mistral AI | Experimental |
| `openai` | OpenAI | Experimental |
| `perplexity` | Perplexity | Experimental |
| `vertex_ai` | Vertex AI | Experimental |
| `xai` | xAI | Experimental |

Body fields:

| Body Field | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| `finish_reason` | enum | The reason the model stopped generating tokens. | `stop`; `tool_calls`; `content_filter` | Required | Experimental |
| `index` | int | The index of the choice in the list of choices. | `0`; `1` | Required | Experimental |
| `message` | map | GenAI response message. | | Recommended | Experimental |
| `message.content` | undefined | The contents of the assistant message. | `The weather in Paris is rainy and overcast, with temperatures around 57°F` | Opt-In | Experimental |
| `message.role` | string | The actual role of the message author as passed in the message. | `assistant`; `bot` | Conditionally Required: if available and not equal to `assistant`. | Experimental |
| `tool_calls` | map[] | The tool calls generated by the model, such as function calls. | | Conditionally Required: if available | Experimental |
| `tool_calls.function` | map | The function that the model called. | | Required | Experimental |
| `tool_calls.function.arguments` | undefined | The arguments of the function as provided in the LLM response. [1] | `{"location": "Paris"}` | Opt-In | Experimental |
| `tool_calls.function.name` | string | The name of the function. | `get_weather` | Required | Experimental |
| `tool_calls.id` | string | The id of the tool call. | `call_mszuSIzqtI65i1wAUOE8w5H4` | Required | Experimental |
| `tool_calls.type` | enum | The type of the tool. | `function` | Required | Experimental |

[1]: Models usually return arguments as a JSON string. In that case, it's RECOMMENDED to provide arguments as-is, without attempting to deserialize them. Semantic conventions for individual systems MAY specify a different type for the `arguments` field.

`finish_reason` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `content_filter` | Content Filter | Experimental |
| `error` | Error | Experimental |
| `length` | Length | Experimental |
| `stop` | Stop | Experimental |
| `tool_calls` | Tool Calls | Experimental |

`type` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `function` | Function | Experimental |
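Putting the choice-level defaults together, a hypothetical instrumentation could redact the opt-in fields (message content and function arguments) when content capture is disabled. This sketch uses plain dicts; `choice_body` is not part of the convention.

```python
def choice_body(index, finish_reason, message, capture_content):
    """Build a gen_ai.choice event body, stripping opt-in fields when
    content capture is disabled."""
    if not capture_content:
        # Drop the opt-in message content.
        message = {k: v for k, v in message.items() if k != "content"}
        if "tool_calls" in message:
            # Keep the required id, type, and function name;
            # drop the opt-in function arguments.
            message["tool_calls"] = [
                {**call, "function": {"name": call["function"]["name"]}}
                for call in message["tool_calls"]
            ]
    return {"index": index, "finish_reason": finish_reason, "message": message}
```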

## Custom events

System-specific events that are not covered in this document SHOULD be documented in the corresponding Semantic Conventions extensions and SHOULD follow the `gen_ai.{gen_ai.system}.*` naming pattern.
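For instance, a system-specific event name is formed by substituting the `gen_ai.system` value into the pattern; the `content_filter` suffix below is purely illustrative:

```python
def custom_event_name(system, suffix):
    """Apply the gen_ai.{gen_ai.system}.* naming pattern; `suffix` is a
    hypothetical system-specific event name."""
    return "gen_ai.{}.{}".format(system, suffix)
```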

## Examples

### Chat completion

This is an example of telemetry generated for a chat completion call with system and user messages.

```mermaid
%%{init:
{
  "sequence": { "messageAlign": "left", "htmlLabels":true },
  "themeVariables": { "noteBkgColor" : "green", "noteTextColor": "black", "activationBkgColor": "green", "htmlLabels":true }
}
}%%
sequenceDiagram
    participant A as Application
    participant I as Instrumented Client
    participant M as Model
    A->>+I: #U+200D
    I->>M: gen_ai.system.message: You are a helpful bot<br/>gen_ai.user.message: Tell me a joke about OpenTelemetry
    Note left of I: GenAI Client span
    I-->M: gen_ai.choice: Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!
    I-->>-A: #U+200D
```

GenAI Client span:

| Attribute name | Value |
|---|---|
| Span name | `"chat gpt-4"` |
| `gen_ai.system` | `"openai"` |
| `gen_ai.request.model` | `"gpt-4"` |
| `gen_ai.request.max_tokens` | `200` |
| `gen_ai.request.top_p` | `1.0` |
| `gen_ai.response.id` | `"chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l"` |
| `gen_ai.response.model` | `"gpt-4-0613"` |
| `gen_ai.usage.output_tokens` | `47` |
| `gen_ai.usage.input_tokens` | `52` |
| `gen_ai.response.finish_reasons` | `["stop"]` |

Events:

1. `gen_ai.system.message`

   | Property | Value |
   |---|---|
   | `gen_ai.system` | `"openai"` |
   | Event body (with content enabled) | `{"content": "You're a helpful bot"}` |

2. `gen_ai.user.message`

   | Property | Value |
   |---|---|
   | `gen_ai.system` | `"openai"` |
   | Event body (with content enabled) | `{"content":"Tell me a joke about OpenTelemetry"}` |

3. `gen_ai.choice`

   | Property | Value |
   |---|---|
   | `gen_ai.system` | `"openai"` |
   | Event body (with content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!"}}` |
   | Event body (without content) | `{"index":0,"finish_reason":"stop","message":{}}` |

### Tools

This is an example of telemetry generated for a chat completion call with a user message and a function definition, where the model responds by requesting that the application call the provided function. The application executes the function and requests another completion, this time including the tool response.

```mermaid
%%{init:
{
  "sequence": { "messageAlign": "left", "htmlLabels":true },
  "themeVariables": { "noteBkgColor" : "green", "noteTextColor": "black", "activationBkgColor": "green", "htmlLabels":true }
}
}%%
sequenceDiagram
    participant A as Application
    participant I as Instrumented Client
    participant M as Model
    A->>+I: #U+200D
    I->>M: gen_ai.user.message: What's the weather in Paris?
    Note left of I: GenAI Client span 1
    I-->M: gen_ai.choice: Call to the get_weather tool with Paris as the location argument.
    I-->>-A: #U+200D
    A -->> A: parse tool parameters<br/>execute tool<br/>update chat history
    A->>+I: #U+200D
    I->>M: gen_ai.user.message: What's the weather in Paris?<br/>gen_ai.assistant.message: get_weather tool call<br/>gen_ai.tool.message: rainy, 57°F
    Note left of I: GenAI Client span 2
    I-->M: gen_ai.choice: The weather in Paris is rainy and overcast, with temperatures around 57°F
    I-->>-A: #U+200D
```

Here’s the telemetry generated for each step in this scenario:

GenAI Client span 1:

| Attribute name | Value |
|---|---|
| Span name | `"chat gpt-4"` |
| `gen_ai.system` | `"openai"` |
| `gen_ai.request.model` | `"gpt-4"` |
| `gen_ai.request.max_tokens` | `200` |
| `gen_ai.request.top_p` | `1.0` |
| `gen_ai.response.id` | `"chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l"` |
| `gen_ai.response.model` | `"gpt-4-0613"` |
| `gen_ai.usage.output_tokens` | `17` |
| `gen_ai.usage.input_tokens` | `47` |
| `gen_ai.response.finish_reasons` | `["tool_calls"]` |

Events:

All the following events are parented to the GenAI chat span 1.

1. `gen_ai.user.message` (not reported when capturing content is disabled)

   | Property | Value |
   |---|---|
   | `gen_ai.system` | `"openai"` |
   | Event body | `{"content":"What's the weather in Paris?"}` |

2. `gen_ai.choice`

   | Property | Value |
   |---|---|
   | `gen_ai.system` | `"openai"` |
   | Event body (with content) | `{"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather","arguments":"{\"location\":\"Paris\"}"},"type":"function"}]}}` |
   | Event body (without content) | `{"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather"},"type":"function"}]}}` |

GenAI Client span 2:

| Attribute name | Value |
|---|---|
| Span name | `"chat gpt-4"` |
| `gen_ai.system` | `"openai"` |
| `gen_ai.request.model` | `"gpt-4"` |
| `gen_ai.request.max_tokens` | `200` |
| `gen_ai.request.top_p` | `1.0` |
| `gen_ai.response.id` | `"chatcmpl-call_VSPygqKTWdrhaFErNvMV18Yl"` |
| `gen_ai.response.model` | `"gpt-4-0613"` |
| `gen_ai.usage.output_tokens` | `52` |
| `gen_ai.usage.input_tokens` | `47` |
| `gen_ai.response.finish_reasons` | `["stop"]` |

Events:

All the following events are parented to the GenAI chat span 2.

In this example, the event content matches the original messages, but applications may also drop messages or change their content.

1. `gen_ai.user.message`

   | Property | Value |
   |---|---|
   | `gen_ai.system` | `"openai"` |
   | Event body | `{"content":"What's the weather in Paris?"}` |

2. `gen_ai.assistant.message`

   | Property | Value |
   |---|---|
   | `gen_ai.system` | `"openai"` |
   | Event body (content enabled) | `{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather","arguments":"{\"location\": \"Paris\"}"},"type":"function"}]}` |
   | Event body (content not enabled) | `{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather"},"type":"function"}]}` |

3. `gen_ai.tool.message`

   | Property | Value |
   |---|---|
   | `gen_ai.system` | `"openai"` |
   | Event body (content enabled) | `{"content":"rainy, 57°F","id":"call_VSPygqKTWdrhaFErNvMV18Yl"}` |
   | Event body (content not enabled) | `{"id":"call_VSPygqKTWdrhaFErNvMV18Yl"}` |

4. `gen_ai.choice`

   | Property | Value |
   |---|---|
   | `gen_ai.system` | `"openai"` |
   | Event body (content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"The weather in Paris is rainy and overcast, with temperatures around 57°F"}}` |
   | Event body (content not enabled) | `{"index":0,"finish_reason":"stop","message":{}}` |

### Chat completion with multiple choices

This example covers the scenario where the user requests that the model generate two completions for the same prompt:

```mermaid
%%{init:
{
  "sequence": { "messageAlign": "left", "htmlLabels":true },
  "themeVariables": { "noteBkgColor" : "green", "noteTextColor": "black", "activationBkgColor": "green", "htmlLabels":true }
}
}%%
sequenceDiagram
    participant A as Application
    participant I as Instrumented Client
    participant M as Model
    A->>+I: #U+200D
    I->>M: gen_ai.system.message - "You are a helpful bot"<br/>gen_ai.user.message - "Tell me a joke about OpenTelemetry"
    Note left of I: GenAI Client span
    I-->M: gen_ai.choice - Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!<br/>gen_ai.choice - Why did OpenTelemetry get promoted? It had great span of control!
    I-->>-A: #U+200D
```

GenAI Client span:

| Attribute name | Value |
|---|---|
| Span name | `"chat gpt-4"` |
| `gen_ai.system` | `"openai"` |
| `gen_ai.request.model` | `"gpt-4"` |
| `gen_ai.request.max_tokens` | `200` |
| `gen_ai.request.top_p` | `1.0` |
| `gen_ai.response.id` | `"chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l"` |
| `gen_ai.response.model` | `"gpt-4-0613"` |
| `gen_ai.usage.output_tokens` | `77` |
| `gen_ai.usage.input_tokens` | `52` |
| `gen_ai.response.finish_reasons` | `["stop", "stop"]` |

Events:

All events are parented to the GenAI chat span above.

1. `gen_ai.system.message`: the same as in the Chat completion example

2. `gen_ai.user.message`: the same as in the Chat completion example

3. `gen_ai.choice`

   | Property | Value |
   |---|---|
   | `gen_ai.system` | `"openai"` |
   | Event body (content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!"}}` |

4. `gen_ai.choice`

   | Property | Value |
   |---|---|
   | `gen_ai.system` | `"openai"` |
   | Event body (content enabled) | `{"index":1,"finish_reason":"stop","message":{"content":"Why did OpenTelemetry get promoted? It had great span of control!"}}` |