WrenAI
Add a Manual Toggle for LLM "Deep Thinking" Mode
I haven't tried other LLM providers, but when working with dozens of tables I find DeepSeek's R1-0528 (reasoning) API too slow — it often takes 3 to 10 minutes per question. For simple questions I would prefer to use the DeepSeek V3-0324 model and get an answer quickly rather than wait that long; for complex questions I would switch to the deep-thinking (reasoning) model. It would be great if the UI offered a manual toggle to choose between the two per question.
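As a workaround today, both models can at least be registered side by side in WrenAI's LiteLLM-based `config.yaml` and swapped by editing the config, though that requires a restart rather than a per-question toggle. This is a rough sketch, not a verified config — the exact keys, model identifiers, and environment variable names below are assumptions based on the documented config layout:

```yaml
# Sketch only: register both a fast model and a reasoning model.
# Model names and api_base are assumptions — check DeepSeek's and
# WrenAI's docs for the exact identifiers your version expects.
type: llm
provider: litellm_llm
models:
  # Fast model for simple questions (V3-0324)
  - model: deepseek/deepseek-chat
    api_base: https://api.deepseek.com/v1
    api_key_name: LLM_DEEPSEEK_API_KEY
    timeout: 120
    kwargs:
      temperature: 0
  # Deep-thinking model for complex questions (R1-0528)
  - model: deepseek/deepseek-reasoner
    api_base: https://api.deepseek.com/v1
    api_key_name: LLM_DEEPSEEK_API_KEY
    timeout: 600        # reasoning answers can take several minutes
    kwargs:
      temperature: 0
```

A UI toggle that picks between two such registered models at question time, instead of requiring a config edit and restart, is exactly what this request is asking for.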