Chih-Yu Yeh
I'll label this as a good first issue. Let's see if anyone is interested in contributing; the end result is a config example file for setting up qwen3 models. Examples are here:...
@flyrun9527 We are using litellm under the hood, so I suggest you check if litellm supports qwen3 models first.
@flyrun9527 someone just contributed an open-router config example! qwen3 is available via open-router, so I suggest you try open-router first! https://github.com/Canner/WrenAI/blob/main/wren-ai-service/docs/config_examples/config.open_router.yaml
@flyrun9527 someone contributed a qwen3 config example using open-router! Please take a look at it: https://github.com/Canner/WrenAI/blob/main/wren-ai-service/docs/config_examples/config.qwen3.yaml You can also check the relevant documentation here: https://github.com/Canner/WrenAI/tree/main/wren-ai-service/docs/config_examples#qwen3-think-and-no_think-configuration
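For anyone who wants a feel for the shape of that file before opening the links, here is a minimal sketch of an LLM entry for qwen3 via open-router. The provider name (`litellm_llm`), the model slug, and the field names below are illustrative rather than copied from the example, so treat the linked `config.qwen3.yaml` as the source of truth.

```yaml
# Illustrative sketch only -- see config.qwen3.yaml in the repo for the authoritative version.
type: llm
provider: litellm_llm                  # assumed provider name; WrenAI routes LLM calls through litellm
models:
  - model: openrouter/qwen/qwen3-32b   # hypothetical slug; pick the qwen3 variant you need
    api_base: https://openrouter.ai/api/v1
    kwargs:
      temperature: 0                   # illustrative generation settings
```

The linked documentation also covers how to configure the think and no_think variants of qwen3.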
I'll close the issue now; please reopen it if you find any issues.
@Spirizeon sure, I've assigned you to this issue!
@OmarAhmed-A could you DM me and share your Langfuse trace? I believe we already provide time information to the LLMs.
@OmarAhmed-A this is a known issue as of now. To mitigate it, you can set `SHOULD_FORCE_DEPLOY=` in .env and set `recreate_index: false` in the document store section of config.yaml.
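To make the placement concrete: `SHOULD_FORCE_DEPLOY=` goes into `.env` exactly as written above, and `recreate_index: false` sits in the document store section of config.yaml. A minimal sketch of that section, assuming the default qdrant store (the provider and location values here are illustrative):

```yaml
# config.yaml -- document store section (illustrative sketch)
type: document_store
provider: qdrant                 # assumed default; keep whatever provider you already use
location: http://qdrant:6333     # illustrative endpoint
recreate_index: false            # the workaround: don't recreate the index on every deploy
```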
> Can you please explain how the fix works, does this require the launcher? (my deployment requires docker) @cyyeh

No need for the launcher.
> > Can you please explain how the fix works, does this require the launcher? (my deployment requires docker) @cyyeh
>
> No need for the launcher.

Please check the docker folder in...