papiche
These are my settings. What surprises me is that I only have "open_ai" to choose from as the Chat model Provider.
I am using Perplexica on a LAN GPU-equipped computer. I used

```yaml
perplexica-frontend:
  build:
    context: .
    dockerfile: app.dockerfile
    args:
      - NEXT_PUBLIC_API_URL=http://127.0.0.1:3001/api
      - NEXT_PUBLIC_WS_URL=ws://127.0.0.1:3001
```

as docker-compose parameters, then I...
I succeeded in connecting to Ollama with these parameters (OpenAI custom parameters), but I get no answers in the frontend. Still the same backend errors:

```
error: Error loading Ollama models:...
```
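The likely cause: inside a container, 127.0.0.1 resolves to the container itself, not to the host running Ollama, so a localhost endpoint can never be reached from the backend. A hedged sketch of the endpoint that should work, assuming the Ollama URL is set under [API_ENDPOINTS] in Perplexica's config.toml as in the sample config (11434 is Ollama's default port):

```toml
[API_ENDPOINTS]
# Use the host's LAN IP, not 127.0.0.1 (which would point at the
# backend container itself). 11434 is Ollama's default port.
OLLAMA = "http://192.168.1.27:11434"
```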
Using the LAN address (which is 192.168.1.27, ports 5000/3001) in docker-compose.yaml and accessing without an SSH tunnel: IT WORKS
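Concretely, the working variant of the compose args looks like this (a sketch based on my setup; substitute your own host's LAN IP):

```yaml
perplexica-frontend:
  build:
    context: .
    dockerfile: app.dockerfile
    args:
      # The host's LAN IP instead of 127.0.0.1, so a browser on
      # another machine can reach the backend and its WebSocket.
      - NEXT_PUBLIC_API_URL=http://192.168.1.27:3001/api
      - NEXT_PUBLIC_WS_URL=ws://192.168.1.27:3001
```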
Is there any way to access Perplexica from the WAN?
OK. At the very least, it needs a WebSocket relay for port 3001. Do you plan to add user access control?
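In the meantime, a reverse proxy could cover both points: relay the WebSocket on port 3001 and bolt on basic auth as a stopgap access control. A hedged nginx sketch, not anything Perplexica ships (the htpasswd path and exposed port are assumptions; the IP and port 3001 are from my setup):

```nginx
server {
    listen 3001;

    location / {
        auth_basic           "Perplexica";          # stopgap access control
        auth_basic_user_file /etc/nginx/.htpasswd;  # created with htpasswd

        proxy_pass http://192.168.1.27:3001;

        # The part a plain HTTP relay misses: the WebSocket upgrade.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```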
I'll try this other branch. Thx! And so many thanks to you for the wonderful FOSS you made! You help AI become a "common good".
It also happens to me regularly. Just wait in front of the prompt and after a while "Failed to connect to server" appears. In the console,...
You are right, it could be network quality, and it can happen on a LAN too (Wi-Fi shared with roommates), so maybe having some retries instead of a direct timeout could help....
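A rough TypeScript sketch of what I mean, reconnecting with exponential backoff before giving up. This is not Perplexica's actual frontend code; the URL and handler are placeholders:

```typescript
// Retry-with-backoff instead of a single hard timeout. Placeholder
// sketch, not Perplexica's actual code.
function connectWithRetry(
  url: string,
  onMessage: (ev: MessageEvent) => void,
  maxRetries = 5,
): void {
  let attempt = 0;

  const open = () => {
    const ws = new WebSocket(url);
    ws.onopen = () => {
      attempt = 0; // connection is back: reset the backoff
    };
    ws.onmessage = onMessage;
    ws.onclose = () => {
      if (attempt >= maxRetries) {
        console.error('Failed to connect to server'); // current behaviour
        return;
      }
      // Exponential backoff: 1 s, 2 s, 4 s, ... capped at 30 s.
      const delay = Math.min(1000 * 2 ** attempt, 30_000);
      attempt += 1;
      setTimeout(open, delay);
    };
  };

  open();
}

// Example with the WS URL from my compose args:
connectWithRetry('ws://192.168.1.27:3001', (ev) => console.log(ev.data));
```

On a flaky Wi-Fi this would ride out short drops instead of surfacing the error immediately.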
> fwiw, I ran into a similar error and what fixed it for me was changing the base image of `node` that runs from within `backend.dockerfile`. Essentially changing it to...