Chih-Yu Yeh
@mohamedaliabid10 From the error messages in the AI service log: `Request too large for gpt-4o-mini in organization org-xxxxxx on tokens per min (TPM): Limit 200000, Requested 565735. The input or output...`
> Are there any missing configurations in the .env or config.yaml files? Please help!

Please check this for reference: https://github.com/Canner/WrenAI/blob/main/wren-ai-service/docs/config_examples/config.ollama.yaml
@XiaoWuGg Could you use these service versions and try again? WREN_ENGINE_VERSION=0.14.8 WREN_AI_SERVICE_VERSION=0.19.3 IBIS_SERVER_VERSION=0.14.8 WREN_UI_VERSION=0.24.1 WREN_BOOTSTRAP_VERSION=0.1.5
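For reference, those version pins would go into `~/.wrenai/.env` roughly as below (a sketch only; keep any other variables your existing `.env` already defines):

```shell
# ~/.wrenai/.env — pin the Wren AI service versions suggested above
WREN_ENGINE_VERSION=0.14.8
WREN_AI_SERVICE_VERSION=0.19.3
IBIS_SERVER_VERSION=0.14.8
WREN_UI_VERSION=0.24.1
WREN_BOOTSTRAP_VERSION=0.1.5
```

After editing, restart the containers so the new versions are pulled.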
@haojie-hub Your embedder config is wrong: you filled in an LLM there. Please use an available embedding model instead.
@haojie-hub I mean the embedding model in your config.yaml. You can learn more about the structure of config.yaml here: https://github.com/Canner/WrenAI/blob/main/wren-ai-service/docs/configuration.md
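For anyone hitting the same mistake, here is a rough sketch of the relevant blocks, assuming an Ollama setup like the linked example (model names and `api_base` are placeholders; check the docs above for the exact fields your Wren AI version expects):

```yaml
# config.yaml sketch: the llm and embedder blocks are separate documents.
type: llm
provider: litellm_llm
models:
  - model: ollama_chat/llama3.1:8b        # a chat/completion model goes here
    api_base: http://host.docker.internal:11434
---
type: embedder
provider: litellm_embedder
models:
  - model: ollama/nomic-embed-text        # an embedding model, NOT an LLM
    api_base: http://host.docker.internal:11434
```

The key point is that the `embedder` block must reference a model that actually produces embeddings; putting a chat model there causes the error above.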
> @cyyeh Hello, my model deployment issue has been resolved, but there is a new problem where it cannot connect to remote MySQL and SQL servers. Please help me. Could...
@ersadul Would you like to try the latest Wren AI service version? Please set WREN_AI_SERVICE_VERSION=0.19.2 in ~/.wrenai/.env
@prannerta100 For the instruction token limit, I suggest you split independent instructions into separate instructions. Or is that what you're already doing and the 1000-character limit is still a burden...
> Has the issue been resolved? I'm getting the same error.

Please try the latest config examples and the latest version of Wren AI.
@omarragi9 This is an interesting idea; however, I wonder whether LLMs are smart enough to follow the order of instructions. What are your thoughts here? How about...