
[BUG] RD-Agent/LiteLLM: "LLM Provider NOT provided" even with correct provider/model for Ollama/DeepSeek (env, patch, CLI all fail)



Description

When using the latest RD-Agent (0.6.x) with LiteLLM (1.72.x), it is impossible to use any non-OpenAI provider, whether local (Ollama) or hosted (DeepSeek API).
Even with environment variables, CLI arguments, and code patches all set to provider=ollama (or deepseek), RD-Agent always falls back to deepseek-chat and throws:

litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=deepseek-chat

This makes it impossible to use local LLMs or any non-OpenAI provider, even though the official documentation claims full support.
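For context, my understanding is that LiteLLM 1.x resolves the provider from the model string itself, so the model has to arrive as ollama/qwen2:7b (or deepseek/deepseek-chat) rather than a bare deepseek-chat. A minimal direct-call sketch outside RD-Agent, assuming a local Ollama instance with qwen2:7b pulled:

    import litellm

    # The provider is encoded in the model string ("ollama/..."); LiteLLM derives it from that prefix.
    response = litellm.completion(
        model="ollama/qwen2:7b",
        messages=[{"role": "user", "content": "ping"}],
        api_base="http://localhost:11434",  # only needed if Ollama is not at its default address
    )
    print(response.choices[0].message.content)

Given the error above, it looks like the prefix is lost (or never added) somewhere between RD-Agent's config layer and the LiteLLM call.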


Environment

  • OS: macOS 14.x (Apple Silicon)
  • Python: 3.13.x
  • RD-Agent: 0.6.0 / 0.6.1 (latest)
  • LiteLLM: 1.72.4 (latest)
  • Ollama: Installed and running (qwen2:7b model pulled, ollama serve active)
  • DeepSeek API: Also tested, same error

Reproduction Steps

  1. Set all environment variables for Ollama:
    export LITELLM_PROVIDER=ollama
    export LITELLM_MODEL=qwen2:7b
    export OLLAMA_BASE_URL=http://localhost:11434
    export RDAGENT_LLM_BACKEND=rdagent.oai.backend.litellm.LiteLLMBackend
    
  2. (Also tried DeepSeek, same result)
  3. Run:
    source rdagent_venv/bin/activate
    rdagent fin_factor --provider ollama --model qwen2:7b --max_iterations 3 --fast_mode
    
  4. Result: Always fails with LLM Provider NOT provided, and falls back to deepseek-chat.
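One thing that might help narrow this down: turning on LiteLLM's debug logging before launching should show the exact model string and provider that reach LiteLLM. A sketch, assuming the variable is inherited by the rdagent process:

    export LITELLM_LOG=DEBUG    # LiteLLM debug logging; shows the model string it receives
    rdagent fin_factor --provider ollama --model qwen2:7b --max_iterations 3 --fast_mode

Given the error message, I expect this to show a bare deepseek-chat with no provider prefix.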

Error Log

litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=deepseek-chat
...
RuntimeError: Failed to create chat completion after 10 retries.
  • Even after patching rdagent/oai/backend/litellm.py and base.py to forcibly inject provider="ollama" (or "deepseek") into the call, the error persists (the kind of injection I mean is sketched below).
  • CLI arguments, environment variables, and code patching all fail to pass the provider to LiteLLM.
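To illustrate the kind of injection I attempted (a simplified sketch, not the actual diff to rdagent/oai/backend/litellm.py): wrap the LiteLLM call so the provider is always present, either by prefixing the model string or by passing custom_llm_provider.

    import litellm

    _orig_completion = litellm.completion

    def _patched_completion(*args, **kwargs):
        model = kwargs.get("model", "")
        if model and "/" not in model:
            # Option A: encode the provider in the model string
            kwargs["model"] = f"ollama/{model}"
            # Option B: pass the provider explicitly instead
            # kwargs["custom_llm_provider"] = "ollama"
        return _orig_completion(*args, **kwargs)

    litellm.completion = _patched_completion

Even with this kind of change in place, the error still reports model=deepseek-chat, which is why I suspect the model/provider is being overwritten somewhere else.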

What I Tried

  • Set all relevant environment variables (LITELLM_PROVIDER, LITELLM_MODEL, etc.)
  • Used CLI arguments (--provider, --model)
  • Patched rdagent/oai/backend/litellm.py and base.py to forcibly inject provider into kwargs
  • Cleaned all .env files, restarted shell, reinstalled packages
  • Confirmed Ollama is running and accessible (quick sanity check sketched after this list)
  • Also tested with DeepSeek API (same error)
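For completeness, the kind of sanity check I mean for the Ollama side (sketch; /api/tags simply lists the pulled models):

    # Is the Ollama server answering, and is the model pulled?
    curl http://localhost:11434/api/tags
    # Does the model itself respond?
    ollama run qwen2:7b "hello"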

Expected Behavior

  • RD-Agent should correctly pass the provider/model to LiteLLM and allow local LLM inference via Ollama (or DeepSeek API).
  • The official documentation claims Ollama is supported, but the code does not work as described.

Additional Context

  • This used to work in RD-Agent 0.5.x + LiteLLM 1.6x/1.7x (provider was not strictly required).
  • The bug only appears after upgrading to RD-Agent 0.6.x and LiteLLM 1.72.x.
  • Similar issues have been reported in the LiteLLM repo and in SWE-agent.

Request

  • Please provide a working example or fix for using Ollama (or any non-OpenAI provider) with RD-Agent 0.6.x + LiteLLM 1.72.x.
  • If possible, clarify the correct way to pass provider/model through all layers (env, CLI, code) so that it is not overwritten or lost.
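For example, if I understand the intended configuration correctly, I would expect a .env along these lines to be enough. The variable names below are my assumption, not taken from a working setup, so please correct them if they differ in 0.6.x:

    # Hypothetical .env; variable names assumed, please confirm
    BACKEND=rdagent.oai.backend.LiteLLMAPIBackend    # LiteLLM-based backend (name assumed)
    CHAT_MODEL=ollama/qwen2:7b                       # provider prefix included in the model string
    OLLAMA_API_BASE=http://localhost:11434           # assuming LiteLLM reads this for the ollama provider

    # DeepSeek (hosted API) variant:
    # CHAT_MODEL=deepseek/deepseek-chat
    # DEEPSEEK_API_KEY=<your key>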

Thank you for your help!


If specific patch code, or more detailed environment variables, command lines, or error stack traces would help, I can provide them at any time.

ericforai · Jul 04 '25 01:07

Maybe you can try our new version; as far as I know, quite a lot of people have succeeded in using the DeepSeek model.

Hoder-zyf · Jul 08 '25 13:07