Michael Yang

Results: 84 comments by Michael Yang

I'm going to hold off on this until there are more breaking changes we can bundle together.

ECONNREFUSED indicates the Ollama server isn't running. Can you check that it is running and accessible on localhost:11434?
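A quick way to confirm the server is reachable (a sketch assuming the default port; `/api/version` is a lightweight endpoint that just returns the server version):

```shell
# Returns a small JSON payload like {"version":"..."} when the server is up;
# fails with "Connection refused" when it is not.
curl -s http://localhost:11434/api/version
```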

Can you clarify where everything is deployed? You mentioned something is deployed on Vercel, but the wording is vague. I assume it's the Next.js app you're calling Ollama from. If this...

The current docker image should work out of the box with CUDA provided the prerequisites (nvidia-container-toolkit and `--gpus=all`) are met. If that's not the case, please describe how you're running...
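For reference, a typical GPU-enabled invocation looks something like this (a sketch; adjust the volume and port mappings to your setup):

```shell
# Requires nvidia-container-toolkit on the Docker host.
# --gpus=all exposes the host GPUs to the container.
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama
```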

The nvidia-container-toolkit must be installed on the Docker host, Windows WSL2 in your case. It's required for Docker to expose the GPU to the container. The Ollama Docker image contains...

Please see the link @wrapss posted.

FWIW the shell expands `~` before the variable is set, so `OLLAMA_MODELS=~/models ollama serve` already works as expected. However, if it's set without expansion, e.g. `OLLAMA_MODELS='~/models'` or not by...
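A quick illustration of the expansion behavior (a sketch; `ollama serve` is omitted so the snippet runs anywhere):

```shell
# An unquoted ~ in an assignment is expanded by the shell before the
# variable is set, so the program sees an absolute path:
OLLAMA_MODELS=~/models
echo "$OLLAMA_MODELS"    # e.g. /home/you/models

# A quoted ~ is passed through literally and will never resolve to $HOME:
OLLAMA_MODELS='~/models'
echo "$OLLAMA_MODELS"    # literally ~/models
```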

There appears to be some instability with the backing file store. We're actively investigating.

How is the repo cloned? It can be a problem if the ollama repo is itself a submodule, which looks to be the case here. You can skip this with...

What version of git are you using?