How can I make the .env file for a third-party provider LLM model?
How can I make an .env file using the third-party provider LLM model info below?
- base_url: https://api.cursorai.art/v1
- chat model: gpt-5-2025-08-07
- embedding model: text-embedding-3-small
The result of running `rdagent health_check` is OK:
- 🧪 Testing embedding model: text-embedding-3-small
- ✅ Embedding test passed.
- 🧪 Testing chat model: gpt-5-2025-08-07
- ✅ Chat test passed.
- ✅ All tests completed
But running `rdagent fin_quant` fails with `RuntimeError: Failed to create chat completion after 10 retries.`
When integrating a third-party LLM API, you need to tell RD-Agent where to find the models, which models to use, and how to authenticate.

Here’s an example `.env` file you can adapt. The variable names below follow RD-Agent’s documented OpenAI-compatible configuration; the API key is a placeholder:

```
# ===============================
# RD-Agent Third-Party LLM Config
# ===============================

# Third-party provider's OpenAI-compatible base URL
OPENAI_API_BASE=https://api.cursorai.art/v1

# Chat model name (as provided by the third party)
CHAT_MODEL=gpt-5-2025-08-07

# Embedding model name
EMBEDDING_MODEL=text-embedding-3-small

# Your API key from the third-party provider
OPENAI_API_KEY=your_api_key_here
```

Timeout and retry settings also exist; check RD-Agent’s configuration documentation for the exact variable names rather than guessing them.
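If you prefer to create the file from the shell (the way RD-Agent’s README does), here is a minimal sketch. It assumes the `CHAT_MODEL`/`EMBEDDING_MODEL`/`OPENAI_API_BASE`/`OPENAI_API_KEY` names from RD-Agent’s installation guide; the key is a placeholder:

```shell
# Write the .env in the directory you run rdagent from;
# RD-Agent's settings loader reads it from the current working directory.
cat <<'EOF' > .env
CHAT_MODEL=gpt-5-2025-08-07
EMBEDDING_MODEL=text-embedding-3-small
OPENAI_API_BASE=https://api.cursorai.art/v1
OPENAI_API_KEY=your_api_key_here
EOF

# Each line is KEY=VALUE; confirm all four landed
grep -c '=' .env   # prints 4
```

Re-run `rdagent health_check` from the same directory afterwards to confirm the file is picked up.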
Hi, @OVERSKY2003
Based on the information provided so far, it’s difficult to determine the exact cause of the issue.
If you could share more details — for example, the full error message or traceback — it would help us better analyze and resolve the problem.
@SunsetWolf, would you please give me a sample .env file using the third-party provider LLM model info? Your env configuration guide on GitHub only demonstrates the `CHAT_MODEL` config for OpenAI, Azure, and DeepSeek.
@OVERSKY2003, since RD-Agent is built on LiteLLM, you can refer to the LiteLLM documentation for guidance on configuring other third-party providers. Hope this helps.
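For custom OpenAI-compatible endpoints like the one in this thread, LiteLLM’s convention is to prefix the model name with the provider. A hedged sketch of what that could look like in the `.env` (the `openai/` prefix tells LiteLLM to route the call to an OpenAI-compatible server at `OPENAI_API_BASE`; verify the exact variable names against your RD-Agent version):

```
# "openai/<name>" routes through LiteLLM's OpenAI-compatible provider
CHAT_MODEL=openai/gpt-5-2025-08-07
EMBEDDING_MODEL=openai/text-embedding-3-small
OPENAI_API_BASE=https://api.cursorai.art/v1
OPENAI_API_KEY=your_api_key_here
```

If `health_check` passes but `fin_quant` still fails after 10 retries, the full traceback (as requested above) is the next thing to share, since the retry wrapper can hide the provider’s underlying error.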