Setup Trouble - LLocalSearch appears to have some issue fetching Ollama models
Hi all! Hopefully this is the right template to use. As the title says, I am having trouble getting LLocalSearch to load models downloaded through Ollama on a 2023 MacBook Pro (Apple M2 Pro chip). When I look at the Settings screen, the model dropdown under "The agent chain is using the…" stays empty and never shows any of my Ollama models.
I followed issue #117 for setup. OpenWebUI is configured successfully, runs in a container, and is accessible on port 3000. Ollama is installed on my local machine and running on port 11434. My .env file and docker-compose.yaml are attached (as text files, since GitHub only accepts those). Additionally, here is some output from `docker container logs llocalsearch-backend-1`:
```
Mar 21 15:13:10.996 INF app/main.go:33 created example session
Mar 21 15:13:11.001 INF app/main.go:36 Starting the server
Mar 21 15:13:11.003 INF app/apiServer.go:222 Starting server at http://localhost:8080
Mar 21 15:13:14.403 INF app/apiServer.go:213 Chat list sent
Mar 21 15:13:14.411 ERR app/apiServer.go:105 Error getting models
Mar 21 15:13:14.593 INF app/apiServer.go:170 Loaded Chat id=tutorial "message count"=2
```
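For completeness, the Ollama-related part of my .env looks roughly like this (paraphrased; the key name follows the repo's example env file, so treat it as my assumption rather than a guarantee). Since Ollama runs natively on the Mac instead of in a container, I pointed the backend at Docker Desktop's `host.docker.internal` alias:

```env
# Paraphrased from my attached .env (key name taken from the repo's example env,
# so it is an assumption). Ollama runs on the host and listens on 11434;
# host.docker.internal is Docker Desktop's alias for the macOS host.
OLLAMA_HOST=http://host.docker.internal:11434
```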
Would anyone happen to know where I might be going wrong or if I missed a config setting? I have no advanced knowledge of Docker, just a little experience running containers. Please let me know if there is anything else I should provide and thank you for your time!
Same error on Windows.
Apologies for the ping, @nilsherzig, but I was worried this issue might sit here unnoticed, since I wasn't sure whether notifications were sent.
Yes, I have the same issue on Linux. The setup documentation is not very clear and appears to be missing steps. I went through the GitHub issues and the troubleshooting notes and it still won't work: the models dropdown stays blank and I get 404 errors when pointing it at a non-Docker Ollama instance. This is on Fedora Linux 42 with GNOME.
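In case it helps anyone else debugging this on Linux, here are the kinds of checks that should show whether the backend container can reach a non-Docker Ollama at all (commands assume a default systemd Ollama install and a standard compose setup, so adjust names/paths if yours differ):

```sh
# Does Ollama answer on the host at all? /api/tags lists the pulled models.
curl http://127.0.0.1:11434/api/tags

# Ollama's systemd service binds to 127.0.0.1 by default, so containers cannot
# reach it. Making it listen on all interfaces looks like this on a systemd install:
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama.service

# On Linux, host.docker.internal does not resolve inside containers unless the
# compose service maps it explicitly, e.g. in docker-compose.yaml:
#   extra_hosts:
#     - "host.docker.internal:host-gateway"
```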
I have the same problem on Ubuntu 24.04.2 LTS.