semantic-conventions
Pick a set of LLM systems to support/prototype
In #825 we only mention OpenAI, but we should pick a set of vendors/systems we want to support, and prototype/validate whether the attributes/events are applicable to them.
When talking about "LLM Systems" we may want to consider:
- Vendor: Google, OpenAI, Cohere, ... (could be used to identify issues with a specific vendor, for cost calculation, ...)
- Model family: Gemini, GPT-4 (may not be necessary, but could be used for aggregation)
- Model: gemini-1-0-pro, gpt-4-0125-preview (for a detailed understanding of the model used to generate the response)
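The three identification levels above could be sketched as span attributes. A minimal sketch, assuming the draft attribute names `gen_ai.system` and `gen_ai.request.model`; the model-family attribute name `gen_ai.model.family` is hypothetical and only illustrates the aggregation idea:

```python
# Sketch: the three identification levels (vendor / family / model) expressed
# as the attributes an instrumentation could set on a request span.
# "gen_ai.model.family" is a hypothetical name, not part of any draft.
def gen_ai_attributes(vendor: str, family: str, model: str) -> dict:
    return {
        "gen_ai.system": vendor,          # vendor, e.g. "google", "openai"
        "gen_ai.model.family": family,    # hypothetical aggregation attribute
        "gen_ai.request.model": model,    # exact model identifier
    }

attrs = gen_ai_attributes("google", "gemini", "gemini-1-0-pro")
```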
AWS Bedrock supports multiple models. I can help create a prototype of Bedrock LLM interactions with the OTel Java SDK auto-instrumentation (and Python later), adhering to the span semantic convention definitions.
- https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html
NVIDIA supports AI Foundation endpoints (https://www.nvidia.com/en-us/ai-data-science/foundation-models/) that we would like to support for generating OpenTelemetry-based traces using these semantic conventions.
Note that the NVIDIA generative AI examples repo showcases adding OpenTelemetry-based observability to Python-based generative AI applications using LangChain and LlamaIndex. Please see: https://github.com/NVIDIA/GenerativeAIExamples/blob/main/docs/observability.md
We will explore modifying these traces to adhere to the proposed semantics.
As a side note, we would like to present the current work on OpenTelemetry-based testing for LLMs, RAG, vector databases, etc., detailed in the documentation above. Is there a periodic sync-up of this community where we can present?
@bhanupisupati please check https://docs.google.com/document/d/1EKIeDgBGXQPGehUigIRLwAUpRGa7-1kXB736EaYuJ2M/edit#heading=h.ylazl6464n0c for meeting details.
Thank you!
I have reviewed the API docs of Anthropic, Cohere, and Google, as well as the new model spec introduced by OpenAI, which includes some changes. Below is a summary of my findings along with proposed recommendations. After discussing this on the WG call, I am happy to open a PR for the same.
- OpenAI introduced a new model spec that renames the 'system' role to the 'developer' role.
Proposal:
Rename gen_ai.system.message -> gen_ai.developer.message
- OpenAI introduced a new role called 'tool', whose content is generated by a tool such as a program.
Proposal: Introduce gen_ai.tool.message
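A minimal sketch of what a `gen_ai.tool.message` event body could look like. The field names (`tool_call_id`, `content`) and the sample id are illustrative assumptions, not part of the proposal:

```python
# Sketch: an event body for the proposed gen_ai.tool.message, carrying the
# output a tool returned to the model. All field names here are illustrative.
import json

tool_event = {
    "name": "gen_ai.tool.message",
    "body": {
        "role": "tool",
        "tool_call_id": "call_123",  # hypothetical id linking back to the tool call
        "content": json.dumps({"temperature_c": 21}),
    },
}
```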
- Clarification required
What is the difference between gen_ai.assistant.message and gen_ai.choice? Should we just stick to gen_ai.assistant.message?
Proposal: stick to gen_ai.assistant.message
- Stop Sequences
We need to instrument stop, which provides up to 4 sequences at which the API will stop generating further tokens. Other LLM vendors such as Anthropic and Cohere provide this as well.
Proposal: Introduce gen_ai.request.stop
- Top K
Anthropic also provides an option to specify top_k. This lets the developer sample only from the top K options for each subsequent token, and is used to remove "long tail" low-probability responses.
Proposal: Introduce gen_ai.request.top_k
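The two proposals above could be sketched as a small attribute-building helper. This assumes the proposed names `gen_ai.request.stop` and `gen_ai.request.top_k`, which are not yet part of the published conventions:

```python
# Sketch: building request attributes for the proposed stop / top_k fields.
# Attribute names follow the proposals above and are not yet normative.
def request_attributes(stop_sequences=None, top_k=None):
    attrs = {}
    if stop_sequences:
        # OpenAI accepts up to 4 stop sequences, so truncate defensively
        attrs["gen_ai.request.stop"] = list(stop_sequences)[:4]
    if top_k is not None:
        attrs["gen_ai.request.top_k"] = top_k
    return attrs

attrs = request_attributes(stop_sequences=["END", "\n\n"], top_k=40)
```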
- Anthropic
Equivalent attributes (mapped from OpenAI -> Anthropic):
- finish_reason -> stop_reason
- stop -> stop_sequences
These attributes need to be mapped accordingly by the instrumentation library.
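A sketch of the normalization an instrumentation library could apply, per the mapping above. The Anthropic field names (`stop_sequences`, `stop_reason`) come from their API; the target attribute names `gen_ai.request.stop` and `gen_ai.response.finish_reasons` follow the draft/proposed conventions and are assumptions here:

```python
# Sketch: normalizing Anthropic request/response fields onto the
# OpenAI-derived attribute names, per the mapping table above.
def normalize_anthropic(request: dict, response: dict) -> dict:
    attrs = {}
    if "stop_sequences" in request:           # Anthropic's name for "stop"
        attrs["gen_ai.request.stop"] = request["stop_sequences"]
    if "stop_reason" in response:             # Anthropic's name for "finish_reason"
        attrs["gen_ai.response.finish_reasons"] = [response["stop_reason"]]
    return attrs

attrs = normalize_anthropic(
    {"stop_sequences": ["\n\nHuman:"]},
    {"stop_reason": "end_turn"},
)
```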
- Cohere
Equivalent attributes (mapped from OpenAI -> Cohere):
- Role: Assistant -> CHATBOT
- Role: System -> Preamble
There is an additional "preamble" field in addition to the system role. The preamble adds content to the top of the messages fed to the LLM and adjusts the model's behavior for the entire conversation, while a system message is part of the message history.
Proposal: introduce a new field gen_ai.request.preamble
- chat_history is a list of message objects.
Proposal: transform and map it to gen_ai.user.message as a list of JSON objects.
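A sketch of that transformation, assuming Cohere's chat_history shape of `{"role", "message"}` objects and applying the role mapping above (CHATBOT -> assistant); routing assistant turns to `gen_ai.assistant.message` rather than `gen_ai.user.message` is my interpretation of combining the two proposals:

```python
# Sketch: flattening Cohere's chat_history into per-role event tuples.
# Event names follow the draft proposals in this thread; the CHATBOT ->
# assistant mapping follows the equivalence table above.
ROLE_EVENT = {
    "USER": "gen_ai.user.message",
    "CHATBOT": "gen_ai.assistant.message",
}

def to_events(chat_history):
    events = []
    for turn in chat_history:
        name = ROLE_EVENT.get(turn["role"], "gen_ai.user.message")
        events.append((name, {"content": turn["message"]}))
    return events

events = to_events([
    {"role": "USER", "message": "Hi"},
    {"role": "CHATBOT", "message": "Hello!"},
])
```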
- Connectors: a list of objects used to connect to different data sources, fetch additional data, and pass it to the LLM. Example: web search.
Proposal: Introduce gen_ai.request.connectors
- Documents: a list of relevant documents that the model can cite to generate a more accurate reply. Each document is a string-to-string dictionary.
Proposal: Introduce gen_ai.request.documents
- frequency_penalty and presence_penalty
Proposal: Introduce gen_ai.request.frequency_penalty and gen_ai.request.presence_penalty.
- Safety Settings
Proposal: Introduce gen_ai.request.safety_settings, which is a list of objects.
Let's close this one. We have OpenAI, Cohere, Vertex AI, and Azure AI Inference.