
[Issue]: LiteLLM enable_thinking Parameter Failing to Propagate with DashScope (Qwen) Model

Open · J-zeze opened this issue 1 month ago · 1 comment

Do you need to file an issue?

  • [x] I have searched the existing issues and this bug is not already filed.
  • [ ] My model is hosted on OpenAI or Azure. If not, please look at the "model providers" issue and don't file a new one here.
  • [ ] I believe this is a legitimate bug, not just a question. If this is a question, please use the Discussions area.

Describe the issue

🐛 Bug: LiteLLM enable_thinking Parameter Failing to Propagate with DashScope (Qwen) Model

📝 Description

When attempting to run the graphrag.index command using a model hosted on Alibaba Cloud DashScope (specifically Qwen3-32B, accessed via LiteLLM), the process fails with a litellm.BadRequestError.

This error indicates that the enable_thinking parameter, which the underlying API apparently requires to be set to false for non-streaming calls, is not being passed through to the LiteLLM client from the settings.yaml file.

Despite explicitly setting this parameter in the configuration, the error persists.

💻 Steps to Reproduce

Environment:

GraphRAG Version: (Please fill in your current GraphRAG version, e.g., pip show graphrag)

LiteLLM Version: (Please fill in your current LiteLLM version, e.g., pip show litellm)

Python Version: (e.g., Python 3.10)

Operating System: (e.g., Windows 11 / Ubuntu 22.04)

Configuration (settings.yaml): Configure the llm (and/or embeddings) section to use the DashScope model and explicitly include the necessary LiteLLM parameter:

```yaml
llm:
  model: "dashscope/qwen3-32b"          # Or "qwen3-32b" if using the shorter format
  api_key: "<YOUR_DASH_SCOPE_API_KEY>"
  type: "openai"                        # Assuming this is the configured model type for LiteLLM routing

  # Explicitly setting the required parameter as per the LiteLLM error message
  litellm_params:
    enable_thinking: false
```

Execution: Run the indexing command:

```bash
python -m graphrag.index --root <project_root>
```

❌ Observed Error

The execution immediately fails with the following traceback snippet:

```
litellm.exceptions.BadRequestError: litellm.BadRequestError: OpenAIException - parameter.enable_thinking must be set to false for non-streaming calls
```
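
To help isolate whether the DashScope endpoint accepts the flag once it actually reaches the provider, a direct LiteLLM call outside GraphRAG can be tried first. Below is a minimal sketch, assuming LiteLLM forwards `extra_body` to the OpenAI-compatible DashScope API; the model name and API-key placeholder are the same ones used in the configuration above.

```python
# Sketch only (not GraphRAG code): check that the flag is accepted when it
# reaches the provider directly, independent of GraphRAG's config handling.
import litellm

response = litellm.completion(
    model="dashscope/qwen3-32b",              # same model as in settings.yaml
    api_key="<YOUR_DASH_SCOPE_API_KEY>",
    messages=[{"role": "user", "content": "ping"}],
    extra_body={"enable_thinking": False},    # the parameter the error demands
)
print(response.choices[0].message.content)
```

If this call succeeds while the GraphRAG run still fails, the problem is in how the configuration is merged into the final LiteLLM request rather than in the provider itself.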

✅ Expected Behavior

The litellm_params specified in settings.yaml should be merged into the final LiteLLM API call, allowing the indexing process to proceed without the BadRequestError.

❓ Workarounds Attempted

  • Explicitly setting litellm_params: {enable_thinking: false} in the llm block of settings.yaml.
  • Explicitly setting litellm_params: {enable_thinking: false} in the embeddings block of settings.yaml (if applicable).

[Note to Reporter: Please ensure you replace the placeholder values (e.g., version numbers, OS) with your actual environment details before submitting the issue.]

Steps to reproduce

No response

GraphRAG Config Used

# Paste your config here

Logs and screenshots

No response

Additional Information

  • GraphRAG Version:
  • Operating System:
  • Python Version:
  • Related Issues:

J-zeze · Nov 12 '25 08:11

Custom parameters aren't currently supported - we just have the list of essentially OpenAI parameters in the docs here. The introduction of LiteLLM opens up a lot more possibilities, but our config model doesn't support them yet. The good news is that we're reworking config for v3, which should be out in the next few weeks and will allow pass-through of any args that any model on LiteLLM supports.
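
For anyone curious what such pass-through might look like, here is a purely hypothetical settings.yaml fragment; the field names are illustrative assumptions, not the actual v3 schema, which had not shipped at the time of this comment.

```yaml
# Hypothetical illustration only, NOT the real GraphRAG v3 schema:
# the idea is that keys the config model doesn't recognize would be
# forwarded to LiteLLM (and thus to the provider) verbatim.
models:
  default_chat_model:
    model: "dashscope/qwen3-32b"
    api_key: "<YOUR_DASH_SCOPE_API_KEY>"
    enable_thinking: false   # assumed pass-through arg handed straight to LiteLLM
```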

natoverse · Nov 18 '25 00:11