[Feature]: Support `metadata` on OpenAI
The Feature
We should support the new `store`, `metadata`, and `service_tier` parameters on OpenAI requests.
https://platform.openai.com/docs/api-reference/chat/create#chat-create-store
Motivation, pitch
I want to use store with OpenAI with LiteLLM.
Twitter / LinkedIn details
https://www.linkedin.com/in/davidmanouchehri/
Hey @Manouchehri, `store` and `service_tier` should already work with litellm - I believe they'd just be treated as provider-specific params and passed straight through - https://docs.litellm.ai/docs/completion/provider_specific_params
Oh interesting, you're right, just tested store and it works.
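For intuition, the pass-through behavior described above can be pictured as a simple split: anything not recognized as one of LiteLLM's own params gets forwarded to the provider verbatim. This is an illustrative sketch only, not LiteLLM's actual implementation (the real recognized-param list is much longer):

```python
# Simplified sketch of provider-specific param pass-through.
# KNOWN_LITELLM_PARAMS is a tiny illustrative subset, not the real list.
KNOWN_LITELLM_PARAMS = {"model", "messages", "temperature", "max_tokens"}

def split_params(**kwargs):
    """Separate params LiteLLM translates from extras forwarded as-is."""
    known = {k: v for k, v in kwargs.items() if k in KNOWN_LITELLM_PARAMS}
    passthrough = {k: v for k, v in kwargs.items() if k not in KNOWN_LITELLM_PARAMS}
    return known, passthrough

known, extra = split_params(
    model="gpt-4o-mini", messages=[], store=True, service_tier="auto"
)
print(extra)  # {'store': True, 'service_tier': 'auto'}
```

Under this model, `store` and `service_tier` land in the pass-through bucket and reach OpenAI untouched, which matches the "just tested store and it works" observation.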
Found this closed issue via search, but the metadata field was not addressed and I'm encountering an issue with it.
`store=true` successfully saves messages to OpenAI's Chat Completions; however, metadata values (tag k:v pairs) do not appear to be recorded. I'm using the format indicated in OpenAI's docs for model distillation.
Simple example:
completion_args = {
    "model": OPENAI_GPT_4O_MINI,
    "messages": [
        {"role": "user", "content": system}
    ],
    "metadata": {
        "task": "litellm_completion",
    },
    "store": True,
}
response = litellm.completion(**completion_args)
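As a sanity check while debugging this: OpenAI's docs describe chat-completion `metadata` as at most 16 string key-value pairs, with keys up to 64 characters and values up to 512 characters. A quick validation sketch (`validate_openai_metadata` is a made-up helper name, not a LiteLLM or OpenAI API):

```python
def validate_openai_metadata(metadata: dict) -> None:
    """Check metadata against the limits in OpenAI's docs:
    at most 16 pairs, string keys <= 64 chars, string values <= 512 chars."""
    if len(metadata) > 16:
        raise ValueError("metadata supports at most 16 key-value pairs")
    for k, v in metadata.items():
        if not isinstance(k, str) or len(k) > 64:
            raise ValueError(f"metadata key {k!r} must be a string of <= 64 chars")
        if not isinstance(v, str) or len(v) > 512:
            raise ValueError(f"metadata value for {k!r} must be a string of <= 512 chars")

validate_openai_metadata({"task": "litellm_completion"})  # passes silently
```

The `{"task": "litellm_completion"}` tag above is well within those limits, so the shape of the request shouldn't be the problem here.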
The `metadata` field looks like it is processed by litellm, which adds `hidden_params` to it. It appears many litellm integrations look to `metadata` for specific values. Are the tags set in `metadata` actually passed to OpenAI?
pprint(completion_args)
{'messages': […],
 'metadata': {'hidden_params': {'additional_headers': {'llm_provider-access-control-expose-headers': 'X-Request-ID', …}},
              'task': 'litellm_completion'},
 'model': 'openai/gpt-4o-mini',
 'store': True}
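One way to picture a fix for the output above: strip the keys LiteLLM injects before forwarding, so only the caller's tags reach OpenAI. This is a sketch of the idea, not LiteLLM code; treating `hidden_params` (the injected key visible in the pprint output) as the only internal key is an assumption:

```python
# Assumption: "hidden_params" is the only LiteLLM-injected key to drop.
LITELLM_INJECTED_KEYS = {"hidden_params"}

def user_tags(metadata: dict) -> dict:
    """Return only the caller's tags, suitable for sending to OpenAI."""
    return {k: v for k, v in metadata.items() if k not in LITELLM_INJECTED_KEYS}

md = {"hidden_params": {"additional_headers": {}}, "task": "litellm_completion"}
print(user_tags(md))  # {'task': 'litellm_completion'}
```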
Screenshot of entry in OpenAI Chat Completions.
- migrate internal litellm logic to use `litellm_metadata`, not the `metadata` param
Could we get this metadata pass-through merged for OpenAI? It's a huge blocker for evaluation tagging.
acknowledging this @piersonmarks
had a few attempts but since we use the 'metadata' on litellm, i'm trying to figure out how to parse out all our internal params from the logged openai metadata.
I think we can try to do something like bedrock with a provider-specific dict of params, to prevent us from mutating the initial metadata
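The Bedrock-style idea above could look roughly like this: deep-copy the incoming metadata into a provider-specific dict and strip internal keys there, so the caller's original dict is never mutated. `build_provider_metadata` is a hypothetical helper for illustration, not a LiteLLM API:

```python
import copy

def build_provider_metadata(metadata, internal_keys=("hidden_params",)):
    """Build a provider-specific copy of metadata; the caller's dict
    is never touched. internal_keys defaulting to hidden_params is an
    assumption based on the pprint output earlier in this thread."""
    provider_md = copy.deepcopy(metadata or {})
    for key in internal_keys:
        provider_md.pop(key, None)
    return provider_md

original = {"task": "eval-run", "hidden_params": {"request_id": "abc"}}
forwarded = build_provider_metadata(original)
print(forwarded)                    # {'task': 'eval-run'}
print("hidden_params" in original)  # True
```

The deep copy matters: a shallow copy would still share nested dicts, so later internal mutation could leak back into the caller's metadata.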
Hi. It seems that passing metadata while using the completion function on AWS Bedrock breaks authentication.
Getting this error: "The request signature we calculated does not match the signature you provided. Check your key and signing method."
I just want to pass an additional header as metadata in the completion method. How do I do that?
@viveklistenus this seems like a separate issue?
Can you file a separate ticket with a sample script to repro the issue? Feel free to @ me there.
It seems like passing metadata doesn't work yet. I'm using litellm through smolagents.
modified_query_response = self.query_model(
    query_messages,
    metadata={"eval": {"session_id": session_id, "step": "modify_query"}} if session_id else None,
)
query_model is a LiteLLMModel https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L551
{
"detail": "LiteLLMModel.__call__() got an unexpected keyword argument 'metadata'"
}
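That error is plain Python kwarg rejection: the wrapper's `__call__` has a fixed signature with no `metadata` parameter and no `**kwargs`, so the extra argument never reaches litellm. A minimal illustration (not smolagents' actual code):

```python
# Minimal sketch of a wrapper whose fixed signature rejects extra kwargs.
class FixedSignatureModel:
    def __call__(self, messages):  # no metadata param, no **kwargs
        return messages

model = FixedSignatureModel()
try:
    model([{"role": "user", "content": "hi"}], metadata={"eval": "x"})
except TypeError as e:
    print(e)  # e.g. "...__call__() got an unexpected keyword argument 'metadata'"
```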
Is this a LiteLLM issue or a smolagents issue?
LiteLLM issue
How should we handle the migration? @harlanlewis @Manouchehri @JanWerder
Since `metadata` is originally a LiteLLM param, used for sending information to Langfuse, etc., just forwarding the call to OpenAI might cause unexpected data to be logged there.
Using the original metadata would be the most correct, and a breaking change isn't that bad if communicated in the changelogs. An alternative would be to pass only the `_openai` sub-dict to the OpenAI endpoint.
So i.e. with my example from earlier.
modified_query_response = self.query_model(
    query_messages,
    metadata={"_openai": {"eval": {"session_id": session_id, "step": "modify_query"}}} if session_id else None,
)
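The `_openai` sub-key proposal above could be sketched as a split: forward only the `_openai` sub-dict to OpenAI, and keep everything else for LiteLLM's internal use. `split_metadata` and the `langfuse_tags` key are made up for illustration:

```python
def split_metadata(metadata):
    """Return (openai_metadata, internal_metadata) per the proposal:
    only the "_openai" sub-dict is forwarded to the provider."""
    metadata = metadata or {}
    openai_md = metadata.get("_openai")
    internal_md = {k: v for k, v in metadata.items() if k != "_openai"}
    return openai_md, internal_md

openai_md, internal_md = split_metadata(
    {"_openai": {"eval": "modify_query"}, "langfuse_tags": ["run-1"]}
)
print(openai_md)    # {'eval': 'modify_query'}
print(internal_md)  # {'langfuse_tags': ['run-1']}
```

The upside is that nothing litellm-internal can leak to OpenAI by accident; the downside is that every caller has to opt in by nesting their tags under `_openai`.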
Using the original metadata would be the most correct and a breaking change isn't that bad if communicated in the changelogs
got it. On semver, would you consider this a MAJOR bump (1.x.x -> 2.x.x) or a MINOR bump (x.Y.z -> x.(Y+1).z)?
I would tend toward minor, but since sending data to third parties is involved, others might disagree.
@krrishdholakia How would you judge it? Is this something that can be addressed quickly, or is a major discussion needed? I'm trying to judge whether litellm can fit my requirement or if I would have to switch libraries.
Hi @JanWerder i can do a preview version by EOD. Would appreciate any help qa'ing it
Sounds great, sure thing, I can help with that.
Nice, looks good to me. 👍
@JanWerder Just added support for openai metadata param (in preview)
here's how to use it
import litellm

litellm.enable_preview_features = True

resp = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    metadata={"key": "value"},
)
print(resp)
please let me know if anything unexpected gets logged - I imagine this might require some work as we try to migrate to `litellm_metadata` for any of our internal logic
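One possible shape for that migration, sketched here with a hypothetical resolver (`resolve_metadata` and the `litellm_metadata` kwarg behavior are assumptions about the direction discussed above, not a confirmed LiteLLM API): internal logic prefers `litellm_metadata` when present and falls back to the legacy shared `metadata` for backward compatibility, leaving `metadata` free to go to the provider untouched.

```python
def resolve_metadata(kwargs: dict):
    """Hypothetical resolver: prefer litellm_metadata for internal logic,
    fall back to legacy metadata; metadata itself goes to the provider."""
    internal = kwargs.pop("litellm_metadata", None)
    if internal is None:
        internal = kwargs.get("metadata")  # back-compat fallback
    provider = kwargs.get("metadata")
    return internal, provider

kwargs = {"litellm_metadata": {"langfuse_trace": "t1"}, "metadata": {"key": "value"}}
internal, provider = resolve_metadata(kwargs)
print(internal)  # {'langfuse_trace': 't1'}
print(provider)  # {'key': 'value'}
```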