[FR]: DSPy Integration - Add OpenRouter model selection compatibility.
Proposal summary
I would like support for OpenRouter's internal model selection to be added to the DSPy integration. When I configured the DSPy integration with OpenRouter, I simply changed the endpoint, added the API key, and selected the model. When I executed it, I got this exception:
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=selected_model
Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
Then I checked https://docs.litellm.ai/docs/providers/openrouter and had to change the model's name to f"openrouter/{model}". As a result, Opik registers the provider as openrouter and the model as the one I selected, but the real provider, as reported in the response body, is completely different and depends on OpenRouter's internal routing.
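For reference, this is the shape of the configuration in question; a minimal sketch in which the model id, endpoint, and key are placeholders (dspy.LM forwards extra keyword arguments such as api_base and api_key to litellm):

import dspy

# The model string must carry the litellm provider prefix "openrouter/".
lm = dspy.LM(
    "openrouter/qwen/qwen3-235b-a22b-2507",   # placeholder OpenRouter model id
    api_base="https://openrouter.ai/api/v1",  # OpenRouter endpoint
    api_key="sk-or-...",                      # placeholder API key
)
dspy.configure(lm=lm)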
The trace I get is as follows:
function (span, general, tagged)
|--- Module (span, general, with graph)
|    |--- Predict (span, llm, with graph)
|         |--- LLM (span, llm, with openrouter as provider and selected model)
I am receiving response bodies like this:
{
  "id": "identifier",
  "provider": "provider_different_than_openrouter",  # because of OpenRouter's internal routing
  "model": "selected_model",
  "object": "chat.completion",
  "created": 1234567890,
  "choices": [
    {
      "logprobs": null,
      "finish_reason": "stop",
      "native_finish_reason": "stop",
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "response",
        "refusal": null,
        "reasoning": null
      }
    }
  ],
  "usage": {
    "prompt_tokens": 123,
    "completion_tokens": 11,
    "total_tokens": 134,
    "prompt_tokens_details": null
  }
}
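Since the routed provider is present in the response body, a hypothetical helper (names assumed; this is not Opik code) could prefer it over the static prefix:

# Hypothetical sketch, not Opik's implementation: prefer the provider that
# OpenRouter reports in the response body over the static "openrouter" label.
def resolve_provider(response_body: dict, default: str = "openrouter") -> str:
    routed = response_body.get("provider")
    return routed.lower() if routed else default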
I see that the provider and model are stored in the span as:
provider, model = instance.model.split(r"/", 1)
span_data.update(
    provider=provider,
    model=model,
    name=f"{span_data.name}: {provider} - {model}",
)
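For an OpenRouter model string, this split keeps everything after the first "/" in the model part, for example:

# maxsplit=1 leaves the rest of the path in the model part
model_string = "openrouter/qwen/qwen3-235b-a22b-2507"
provider, model = model_string.split("/", 1)
print(provider)  # openrouter
print(model)     # qwen/qwen3-235b-a22b-2507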
And while that is technically true, the real provider is selected internally by OpenRouter and returned in the response body.
Motivation
I haven't found a workaround yet.
I think this feature is necessary in order to track costs accurately.
Hey @ralsofias, thanks for opening this issue and sharing the details! Could you please let us know which SDK version you were using when you encountered this? It’ll help us reproduce and debug more accurately.
Thanks again for the report!
I am using opik version 1.8.75 and dspy version 3.0.3.
In order to check that body, I went to .venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/llm_http_handler.py and added some print statements.
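The prints were roughly of this form (the exact insertion point inside the handler is approximate), which is what produces the STATUS/BODY lines in the output shown further below:

# Temporary debug prints added by hand inside litellm's llm_http_handler.py;
# variable names and placement are approximate.
print("STATUS:", response.status_code)
print("BODY:", response.json())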
Hey @ralsofias, I couldn't reproduce your issue with a minimal example. As you can see in this screenshot:
The provider and model are correctly classified.
I'm closing the issue. Feel free to reopen if you see fit.
The code I used:
import dspy
from opik.integrations.dspy import OpikCallback

lm = dspy.LM("openrouter/deepseek/deepseek-chat-v3-0324:free")
project_name = "DSPY test"
opik_callback = OpikCallback(project_name=project_name, log_graph=True)
dspy.configure(lm=lm, callbacks=[opik_callback])

cot = dspy.ChainOfThought("question -> answer")
cot(question="What is the meaning of life?")
Code:
from dotenv import load_dotenv
load_dotenv(".env.dev")
import dspy
from opik.integrations.dspy import OpikCallback
from src.agents.llms.dspy import get_dspy_llm  # Just instantiates lm with f"openrouter/{model}" and appropriate config
project_name = "DSPY test"
opik_callback = OpikCallback(project_name=project_name, log_graph=True)
dspy.configure(lm=get_dspy_llm(), callbacks=[opik_callback])
print("before call")
cot = dspy.ChainOfThought("question -> answer")
print(cot(question="What is the meaning of life?"))
Result:
before call
STATUS: 200
BODY: {'id': 'gen-1760936825-2WAfKuXDGFuwwq16CMsC', 'provider': 'Cerebras', 'model': 'qwen/qwen3-235b-a22b-2507', 'object': 'chat.completion', 'created': 1760936825, 'choices': [{'logprobs': None, 'finish_reason': 'stop', 'native_finish_reason': 'stop', 'index': 0, 'message': {'role': 'assistant', 'content': '[[ ## reasoning ## ]]\nThe question "What is the meaning of life?" is one of the most profound and widely debated philosophical inquiries. Different cultures, religions, and philosophical traditions offer varied interpretations. From a scientific perspective, life may not have an inherent meaning, but humans often create personal or collective meaning through relationships, purpose, growth, and contribution. Religious views frequently tie the meaning of life to spiritual fulfillment, service to a higher power, or preparation for an afterlife. Philosophers such as existentialists argue that meaning is not given but must be created by each individual. Given the diversity of perspectives, the answer is inherently subjective and context-dependent.\n\n[[ ## answer ## ]]\nThere is no single definitive meaning of life; it is interpreted differently across philosophical, religious, and personal frameworks. Many believe it is to seek purpose, connection, growth, or happiness, while others find meaning through service, love, or spiritual beliefs.\n\n[[ ## completed ## ]]', 'refusal': None, 'reasoning': None}}], 'system_fingerprint': 'fp_c7f612779ed4ae92496f', 'usage': {'prompt_tokens': 173, 'completion_tokens': 191, 'total_tokens': 364, 'prompt_tokens_details': {'cached_tokens': 0}}}
Prediction(
reasoning='The question "What is the meaning of life?" is one of the most profound and widely debated philosophical inquiries. Different cultures, religions, and philosophical traditions offer varied interpretations. From a scientific perspective, life may not have an inherent meaning, but humans often create personal or collective meaning through relationships, purpose, growth, and contribution. Religious views frequently tie the meaning of life to spiritual fulfillment, service to a higher power, or preparation for an afterlife. Philosophers such as existentialists argue that meaning is not given but must be created by each individual. Given the diversity of perspectives, the answer is inherently subjective and context-dependent.',
answer='There is no single definitive meaning of life; it is interpreted differently across philosophical, religious, and personal frameworks. Many believe it is to seek purpose, connection, growth, or happiness, while others find meaning through service, love, or spiritual beliefs.'
)
The body above is captured by the print statements added inside .venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/llm_http_handler.py.
The trace is captured properly, and the model and provider are obtained from the string provided when initializing the model, as we can see inside Opik's callback:
def on_lm_start(
    self,
    call_id: str,
    instance: Any,
    inputs: Dict[str, Any],
) -> None:
    span_data = self._collect_common_span_data(instance, inputs)
    provider, model = instance.model.split(r"/", 1)
    span_data.update(
        provider=provider,
        model=model,
        name=f"{span_data.name}: {provider} - {model}",
    )
    self._map_call_id_to_span_data[call_id] = span_data
    self._set_current_context_data(span_data)
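Note that the split happens in on_lm_start, before any response exists, so the routed provider cannot be known at that point. A hypothetical adjustment (assumed names and response shape; this is not Opik's actual code) would have to happen once the call finishes:

# Hypothetical sketch, not Opik's implementation: once the LM call ends,
# replace the static "openrouter" provider with the one from the response body.
def on_lm_end(self, call_id, outputs, exception=None):
    span_data = self._map_call_id_to_span_data[call_id]
    body = outputs if isinstance(outputs, dict) else {}  # assumed response shape
    routed = body.get("provider")
    if span_data.provider == "openrouter" and routed:
        span_data.update(provider=routed.lower())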
In OpenRouter, however, each model can be served by many different providers, each with its own pricing:
This time, as seen in the response body, it chose Cerebras as the provider. So even though openrouter is nominally the right provider, there is no way to calculate costs unless the real internal provider is also tracked.
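To illustrate why this matters (the per-token prices below are made-up placeholders, not real OpenRouter rates):

# Illustrative only: with per-provider pricing, identical token counts yield
# different costs depending on where OpenRouter routed the request.
PRICES_PER_MTOK = {  # placeholder USD prices per million tokens
    "cerebras": {"prompt": 0.20, "completion": 0.60},
    "deepinfra": {"prompt": 0.10, "completion": 0.40},
}

def estimate_cost(provider: str, prompt_tokens: int, completion_tokens: int) -> float:
    rates = PRICES_PER_MTOK[provider.lower()]
    return (prompt_tokens * rates["prompt"]
            + completion_tokens * rates["completion"]) / 1_000_000

estimate_cost("Cerebras", 173, 191)   # priced at Cerebras rates
estimate_cost("DeepInfra", 173, 191)  # same tokens, different cost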
Thanks for getting back with more details on this! I will mark it as a feature request.
Jira Ticket Created: OPIK-3398
Hi @ralsofias, I have merged an improvement for our DSPy integration that will automatically track the cost when using OpenRouter models. It will be part of the next update.