How to use the Qwen model in the tool
Feature request
The project uses LiteLLM, but it currently cannot call the Qwen model. I hope there is a good way to use Qwen, and I would appreciate any help.
Motivation
In China, Qwen is a popular model.
💡 Auto-generated documentation-based answer:
Yes, you can use the Qwen model with PR-Agent/Qodo Merge through LiteLLM integration. There are multiple ways to do this:
Using Ollama (locally)
You can run Qwen models locally through Ollama:
[config]
model = "ollama/qwen2.5-coder:32b"
fallback_models=["ollama/qwen2.5-coder:32b"]
custom_model_max_tokens=128000 # set the maximum input tokens for the model
duplicate_examples=true # duplicates the examples in the prompt to help the model generate structured output
[ollama]
api_base = "http://localhost:11434" # or whatever port you're running Ollama on
Using DeepInfra
To use Qwen model with DeepInfra:
[config] # in configuration.toml
model = "deepinfra/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
fallback_models = ["deepinfra/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"]
[deepinfra] # in .secrets.toml
key = ... # your DeepInfra API key
Using Custom Model Configuration
If the specific Qwen model you want to use isn't directly listed, you can use the custom model approach:
- Set the model name in the configuration file:
[config]
model="custom_qwen_model_name"
fallback_models=["custom_qwen_model_name"]
custom_model_max_tokens=128000 # adjust based on the specific Qwen model
- Set the relevant environment variables according to the LiteLLM documentation for the specific Qwen model provider you're using (see the sketch below).
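For instance, if your provider exposes an OpenAI-compatible endpoint, the credentials can typically go in the [openai] section of .secrets.toml. This is a minimal sketch with placeholder values; the exact section and variable names depend on your provider, per the LiteLLM docs:
[openai] # in .secrets.toml
key = "sk-xxx" # placeholder API key for your provider
api_base = "https://your-provider.example/v1" # placeholder OpenAI-compatible endpoint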
Note that while PR-Agent is compatible with many AI models, analyzing complex code repositories requires models optimized for code analysis, so make sure the Qwen model you choose is capable enough for these tasks.
Relevant Sources:
- https://qodo-merge-docs.qodo.ai//usage-guide/changing_a_model#changing-a-model-in-pr-agent
- https://qodo-merge-docs.qodo.ai//usage-guide/changing_a_model#ollama
- https://qodo-merge-docs.qodo.ai//usage-guide/changing_a_model#deepinfra
- https://qodo-merge-docs.qodo.ai//usage-guide/changing_a_model#custom-models
How do I set the API key for a custom model?
@947776795
Qwen's API on the Bailian (DashScope) platform is OpenAI-compatible, so you can simply configure the key and api_base in the [openai] section of .secrets.toml.
P.S. I haven't tried this myself, but in theory it should work; I got an OpenAI-compatible API from another platform running this way.
I also encountered a similar problem:
- I didn't see where to set the base_url for a custom model.
- I'm using a third-party relay platform with an OpenAI-compatible API, and I can't see where to set the base_url.
Here is a successful example that may help you.
For .secrets.toml file:
[openai]
key = "sk-xxx"
api_base = "https://dashscope.aliyuncs.com/compatible-mode/v1"
For configuration.toml file:
[config]
# models
model="dashscope/qwen3-max"
fallback_models=["dashscope/qwen3-max"]
custom_model_max_tokens=32000
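For the third-party relay case mentioned above, a similar sketch should work, assuming LiteLLM's openai/ model prefix, which routes any OpenAI-compatible endpoint through its OpenAI client. The model name and URLs below are placeholders:
[config] # in configuration.toml
model="openai/your-model-name" # openai/ prefix forces the OpenAI-compatible route
fallback_models=["openai/your-model-name"]
custom_model_max_tokens=32000 # adjust to your model's context window
[openai] # in .secrets.toml
key = "sk-xxx" # the relay platform's API key
api_base = "https://your-relay.example/v1" # the relay's OpenAI-compatible base_url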