Deploy Failed
**Describe the bug**
When attempting to deploy WrenAI locally with Ollama LLM and embedding services, deployment consistently fails in the modeling stage.
**Desktop (please complete the following information):**
- OS: macOS
- WrenAI Configuration:
- LLM: Ollama (qwen2.5:14b)
- Embedder: Ollama (nomic-embed-text)
**To Reproduce**
- Run `docker compose up`
- Open a browser and navigate to http://localhost:3000
- Select a database in the initial interface
- Navigate to the modeling page and click the "Deploy" button
- Observe the deployment failure
**Wren AI Information**
- Version: 0.14.0
**Relevant log output**
- config.yaml.log
- docker-compose.yaml.log
- wrenai-ibis-server.log
- wrenai-wren-ai-service.log
- wrenai-wren-engine.log
- wrenai-wren-ui.log
I think it's related to the API request timeout. This looks like a good first issue; would you like to contribute?
I'd be glad to contribute. Please let me know the specific steps or areas where I can help.