
[Feature]: Support `metadata` on OpenAI

Manouchehri opened this issue 1 year ago

The Feature

We should support the new `store`, `metadata`, and `service_tier` parameters on OpenAI requests.

https://platform.openai.com/docs/api-reference/chat/create#chat-create-store

Motivation, pitch

I want to use `store` with OpenAI via LiteLLM.

Twitter / LinkedIn details

https://www.linkedin.com/in/davidmanouchehri/

Manouchehri avatar Oct 02 '24 19:10 Manouchehri

Hey @Manouchehri, `store` and `service_tier` should already work with litellm - I believe they'd just be treated as provider-specific params and passed straight through - https://docs.litellm.ai/docs/completion/provider_specific_params
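
e.g. a minimal sketch (assuming a valid OpenAI API key in the environment; the prompt and values are just placeholders):

import litellm

# store and service_tier are not named litellm params here; litellm treats
# them as provider-specific kwargs and forwards them to OpenAI unchanged.
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    store=True,              # ask OpenAI to store the completion
    service_tier="default",  # OpenAI service tier
)
print(response)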

krrishdholakia avatar Oct 02 '24 20:10 krrishdholakia

Oh interesting, you're right, just tested store and it works.

Manouchehri avatar Oct 02 '24 20:10 Manouchehri

Found this closed issue via search, but the metadata field was not addressed and I'm encountering an issue with it.

store=True successfully saves messages to OpenAI's Chat Completions; however, metadata values (k:v tags) do not appear to be recorded. I'm using the format indicated in OpenAI's docs for model distillation.

Simple example:

import litellm

# OPENAI_GPT_4O_MINI and `system` are defined elsewhere in my code;
# OPENAI_GPT_4O_MINI is "openai/gpt-4o-mini".
completion_args = {
    "model": OPENAI_GPT_4O_MINI,
    "messages": [
        {"role": "user", "content": system}
    ],
    "metadata": {
        "task": "litellm_completion",
    },
    "store": True,
}
response = litellm.completion(**completion_args)

The metadata field looks like it is processed by litellm, adding hidden_params. It appears many litellm integrations look to metadata for specific values. Are the tags set in metadata actually passed to OpenAI?

pprint(completion_args)
{'messages': […],
 'metadata': {
                'hidden_params': {'additional_headers': {'llm_provider-access-control-expose-headers': 'X-Request-ID', … },},
                'task': 'litellm_completion'
              },
 'model': 'openai/gpt-4o-mini',
 'store': True}

[Screenshot: the stored entry in OpenAI's Chat Completions dashboard (2024-10-07) - no metadata tags shown]

harlanlewis avatar Oct 07 '24 21:10 harlanlewis

  • migrate internal litellm logic to use `litellm_metadata` instead of the `metadata` param

krrishdholakia avatar Oct 08 '24 19:10 krrishdholakia

Could we get this metadata pass-through merged for OpenAI? Huge blocker for evaluation tagging

piersonmarks avatar Dec 09 '24 17:12 piersonmarks

acknowledging this @piersonmarks

had a few attempts, but since we use 'metadata' internally in litellm, I'm trying to figure out how to parse all of our internal params out of the metadata that gets logged to OpenAI.

I think we can try to do something like bedrock, with a provider-specific dict of params, to prevent us from mutating the initial metadata.
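
A rough sketch of the idea (the internal key names here are hypothetical, not the actual implementation):

# Hypothetical sketch: split litellm-internal keys out of the user's
# metadata before forwarding it to OpenAI. Key names are illustrative.
LITELLM_INTERNAL_KEYS = {"hidden_params", "litellm_call_id"}  # assumption

def split_metadata(metadata: dict) -> tuple[dict, dict]:
    """Return (openai_metadata, internal_metadata) without mutating the input."""
    openai_md = {k: v for k, v in metadata.items() if k not in LITELLM_INTERNAL_KEYS}
    internal_md = {k: v for k, v in metadata.items() if k in LITELLM_INTERNAL_KEYS}
    return openai_md, internal_md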

krrishdholakia avatar Dec 09 '24 18:12 krrishdholakia

Hi. It seems that passing metadata while using the completion function on AWS Bedrock breaks authentication.

Getting this error "The request signature we calculated does not match the signature you provided. Check your key and signing method."

I just want to pass an additional header as metadata in the completion method. How do I do that?

viveklistenus avatar Dec 09 '24 18:12 viveklistenus

@viveklistenus this seems like a separate issue?

Can you file a separate ticket, with a sample script to repro the issue. Feel free to @ me on there.

krrishdholakia avatar Dec 09 '24 19:12 krrishdholakia

It seems like passing metadata doesn't work yet. I'm using litellm via smolagents.

modified_query_response = self.query_model(
    query_messages,
    metadata={"eval": {"session_id": session_id, "step": "modify_query"}} if session_id else None,
)

query_model is a LiteLLMModel https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L551

{
    "detail": "LiteLLMModel.__call__() got an unexpected keyword argument 'metadata'"
}

Is this a LiteLLM issue or a smolagents issue?

JanWerder avatar Jan 24 '25 13:01 JanWerder

LiteLLM issue

How should we handle the migration? @harlanlewis @Manouchehri @JanWerder

Since metadata is originally a LiteLLM param - used for sending information to Langfuse, etc. - just forwarding it on the call to OpenAI might cause unexpected data to be logged there.

krrishdholakia avatar Jan 24 '25 15:01 krrishdholakia

Using the original metadata would be the most correct approach, and a breaking change isn't that bad if it's communicated in the changelogs. An alternative would be to pass only the `_openai` sub-dict to the OpenAI endpoint, i.e. with my example from earlier:

modified_query_response = self.query_model(
    query_messages,
    metadata={"_openai": {"eval": {"session_id": session_id, "step": "modify_query"}}} if session_id else None,
)
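
How litellm might consume that (a hypothetical sketch, not an API proposal):

# Hypothetical sketch of the _openai sub-dict approach: only the contents
# of metadata["_openai"] would be forwarded to OpenAI; everything else
# stays litellm-internal (logging, Langfuse, etc.).
def extract_openai_metadata(metadata: dict | None) -> dict | None:
    if not metadata:
        return None
    return metadata.get("_openai")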

JanWerder avatar Jan 24 '25 16:01 JanWerder

Using the original metadata would be the most correct and a breaking change isn't that bad if communicated in the changelogs

got it. on semver, would you consider this a MAJOR bump (1.x.x -> 2.x.x) or a MINOR bump (x.1.z -> x.2.z)?

krrishdholakia avatar Jan 24 '25 17:01 krrishdholakia

I would tend toward minor, but since sending data to third parties is involved, others might disagree.

JanWerder avatar Jan 24 '25 18:01 JanWerder

@krrishdholakia How would you judge it? Is this something that can be addressed quickly, or is a major discussion needed? I'm trying to judge whether litellm can fit my requirements or if I would have to switch libraries.

JanWerder avatar Jan 27 '25 14:01 JanWerder

Hi @JanWerder, I can do a preview version by EOD. Would appreciate any help QA'ing it.

krrishdholakia avatar Jan 27 '25 15:01 krrishdholakia

Sounds great, sure thing, I can help with that.

JanWerder avatar Jan 27 '25 15:01 JanWerder

Nice, looks good to me. 👍

JanWerder avatar Jan 28 '25 08:01 JanWerder

@JanWerder Just added support for openai metadata param (in preview)

here's how to use it

import litellm

litellm.enable_preview_features = True 

resp = litellm.completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hey, how's it going?"}], metadata={"key": "value"})

print(resp)

please let me know if anything unexpected gets logged - I imagine this might require some work as we migrate our internal logic over to litellm_metadata
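
One way to QA it (a sketch; the tag value is just a placeholder): turn on verbose logging and check which metadata keys show up in the outbound OpenAI request.

import litellm

litellm.enable_preview_features = True
litellm.set_verbose = True  # prints the raw outbound request, so you can
                            # see exactly which metadata keys reach OpenAI

resp = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "ping"}],
    metadata={"task": "qa_preview"},  # placeholder tag for this test
)
print(resp)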

krrishdholakia avatar Jan 29 '25 02:01 krrishdholakia