semantic-conventions
LLM: initial semconv definition
Define initial semantic conventions that establish a foundation with essential attributes, including a vendor-specific example, and basic event definitions:
- [x] pick a namespace: `gen_ai` (`genai`?)
- [x] define basic request and response attributes
  - `gen_ai.system` - Name of the LLM foundation model system or vendor
  - `gen_ai.request.max_tokens` - Maximum number of tokens for the LLM to generate per request
  - `gen_ai.request.model` - Name of the LLM model used for the request
  - `gen_ai.request.temperature` - Temperature setting for the LLM request
  - `gen_ai.request.top_p` - Top_p sampling setting for the LLM request
  - `gen_ai.response.model` - Name of the LLM model used for the response
  - `gen_ai.response.finish_reason` - Reason why the LLM stopped generating tokens
  - `gen_ai.response.id` - Unique identifier for the response
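As a sketch of how an instrumentation might populate these attributes, here is a minimal Python helper; the request/response dict shapes and all values are illustrative, not part of the convention:

```python
def genai_span_attributes(request, response):
    """Map a chat request/response pair onto the proposed gen_ai.*
    request and response span attributes (names from the list above)."""
    return {
        "gen_ai.system": "openai",  # foundation model system/vendor
        "gen_ai.request.model": request["model"],
        "gen_ai.request.max_tokens": request["max_tokens"],
        "gen_ai.request.temperature": request["temperature"],
        "gen_ai.request.top_p": request["top_p"],
        "gen_ai.response.model": response["model"],
        "gen_ai.response.finish_reason": response["finish_reason"],
        "gen_ai.response.id": response["id"],
    }

# Illustrative payloads, loosely shaped like an OpenAI chat completion call.
request = {"model": "gpt-4", "max_tokens": 200, "temperature": 0.7, "top_p": 1.0}
response = {"model": "gpt-4-0613", "id": "chatcmpl-abc123", "finish_reason": "stop"}
attrs = genai_span_attributes(request, response)
```

The same attribute names would be set via the tracing API of whatever SDK the instrumentation uses; the dict here just makes the mapping explicit.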
- [x] define usage attributes
  - requirement levels, if they belong on spans or events
  - `gen_ai.usage.completion_tokens` - Number of tokens used in the LLM response
  - `gen_ai.usage.prompt_tokens` - Number of tokens used in the LLM prompt
  - ~~`gen_ai.usage.total_tokens` - Total number of tokens used in both prompt and response~~
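A possible mapping for the usage block (the shape of the provider's `usage` object is assumed here); the struck-through total is omitted since it is derivable as the sum of the other two:

```python
def genai_usage_attributes(usage):
    """Map a provider usage block (shape assumed) onto the gen_ai.usage.*
    attributes; total_tokens is intentionally not recorded because it is
    always prompt_tokens + completion_tokens."""
    return {
        "gen_ai.usage.prompt_tokens": usage["prompt_tokens"],
        "gen_ai.usage.completion_tokens": usage["completion_tokens"],
    }

usage_attrs = genai_usage_attributes({"prompt_tokens": 42, "completion_tokens": 18})
```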
- [ ] include at least one vendor example (`openai`) - @drewby - #1385
  - `gen_ai.openai.request.logit_bias` - The logit_bias used in the request
  - `gen_ai.openai.request.presence_penalty` - The presence_penalty used in the request
  - `gen_ai.openai.request.seed` - Seed used in request to improve determinism
  - `gen_ai.openai.request.response_format` - Format of the LLM's response, e.g., text or JSON
  - `gen_ai.openai.response.created` - UNIX timestamp of when the response was created
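The vendor-prefixed attributes would only be set when the corresponding OpenAI parameter was supplied; a hedged sketch (the `params` dict, its keys, and the string serialization of `logit_bias` are all assumptions for illustration):

```python
def openai_request_attributes(params):
    """Build vendor-prefixed attributes from OpenAI-specific request
    parameters, setting each one only when it was actually provided."""
    attrs = {}
    if "logit_bias" in params:
        # logit_bias is a map, so it would likely need serialization for export
        attrs["gen_ai.openai.request.logit_bias"] = str(params["logit_bias"])
    if "presence_penalty" in params:
        attrs["gen_ai.openai.request.presence_penalty"] = params["presence_penalty"]
    if "seed" in params:
        attrs["gen_ai.openai.request.seed"] = params["seed"]
    if "response_format" in params:
        attrs["gen_ai.openai.request.response_format"] = params["response_format"]
    return attrs

vendor_attrs = openai_request_attributes({"seed": 7, "response_format": "json_object"})
```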
- [ ] event definitions - @lmolkova - #980
  - sensitivity, requirement level, attributes
  - `gen_ai.content.prompt` - Captures the full prompt string sent to an LLM
  - `gen_ai.content.completion` - Captures the full response string from an LLM
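One way the two content events could be emitted; the opt-in flag and the payload attribute names are assumptions reflecting the sensitivity note above, not settled design:

```python
def content_events(prompt_text, completion_text, capture_content=False):
    """Return the two content events as (name, attributes) pairs.
    Because prompt/completion text is sensitive, it is only recorded
    when the user has explicitly opted in (assumed gating mechanism)."""
    if not capture_content:
        return []
    return [
        ("gen_ai.content.prompt", {"gen_ai.prompt": prompt_text}),
        ("gen_ai.content.completion", {"gen_ai.completion": completion_text}),
    ]

events = content_events("Say hi", "Hi!", capture_content=True)
```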
- [x] general attributes - @lmolkova - #1297
  - `server.address` - Address of the server hosting the LLM
  - `server.port` - Port number used by the server
  - `error.type`
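The general `server.*` attributes can typically be derived from the provider's API base URL; a small stdlib-only sketch (the default-port fallback for URLs without an explicit port is an assumption):

```python
from urllib.parse import urlparse

def server_attributes(base_url):
    """Derive server.address and server.port from an API base URL,
    falling back to the scheme's default port when none is given."""
    parsed = urlparse(base_url)
    return {
        "server.address": parsed.hostname,
        "server.port": parsed.port or (443 if parsed.scheme == "https" else 80),
    }

server_attrs = server_attributes("https://api.openai.com/v1")
```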
- [ ] metrics
- [ ] streaming