Loading Ollama models fails to load a default server
macOS 15.3, Transformer Lab updated, running on the same machine where the models are located.
When we load a local Ollama model on Foundation, no server is loaded by default, resulting in an error when running the model.
The Ollama model works fine in Ollama itself (the macOS app) and with OpenWebUI or Perplexica running in Docker on the same machine.
Manually selecting the Ollama server gives normal behaviour.
Hi @delfireinoso, sorry for completely missing this issue; we will test this out and get back to you on how we can solve this one.
Oh hmmm, I thought this issue was the same as this one: https://github.com/transformerlab/transformerlab-app/issues/329
But that is possibly not the case. Still, I think fixing the other issue will also fix this one.