Michael Yang


This has been fixed and released in 0.1.37

This is waiting on https://github.com/ollama/ollama/pull/3718 which I'll merge after the next release is out so it can bake

Open WebUI and Ollama CLI are two distinct applications implementing frontends for the Ollama API. While you can definitely use `context` with `/api/generate` and `messages` with `/api/chat` to implement chat...
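For reference, a minimal sketch of both styles against the API (the model name and message contents are placeholders, and `context` must be the token array returned by a previous `/api/generate` response):

```
# /api/generate is stateless: pass the prior context back in explicitly
# (the [1, 2, 3] values are placeholders for a real context array)
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "And the second one?",
  "context": [1, 2, 3]
}'

# /api/chat carries the conversation as the full message history instead
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {"role": "user", "content": "Name two planets."},
    {"role": "assistant", "content": "Mercury and Venus."},
    {"role": "user", "content": "And the second one?"}
  ]
}'
```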

Phi3 medium 4k is available [here](https://ollama.com/library/phi3:medium). 128k, small, and vision models are coming soon :tm:

Do you perhaps have a `llama3` directory under `C:\AI-models`? It seems `ollama create` is getting confused between that directory and the upstream `llama3` model
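For anyone hitting the same ambiguity, a sketch of the difference (paths and model names here are illustrative):

```
# a bare name in FROM resolves to the upstream model in the library
echo FROM llama3 > Modelfile
ollama create my-llama3 -f Modelfile

# an explicit path in FROM imports weights from disk instead
echo FROM C:\AI-models\llama3 > Modelfile
ollama create my-llama3 -f Modelfile
```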

Updated the safetensors and pytorch conversion interfaces to take F32, F16, and BF16 inputs. This allows converting llama3 derivatives such as NVIDIA's ChatQA and NousResearch's Hermes 2...

https://github.com/ollama/ollama/pull/4190 broke lint on windows. gofmt is still a problem

rocm libraries are ridiculously large. cuda is much more reasonable. using cuda in docker requires nvidia-container-toolkit and the container must be started with the `--gpus` flag. these two prerequisites with the...
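For reference, the usual setup looks like this (assuming a Debian/Ubuntu host with NVIDIA's apt repository already configured; the `docker run` line mirrors the Ollama README):

```
# install and wire up the NVIDIA container toolkit
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# start the container with GPU access
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```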

I can't reproduce this. Using the example from the link, this is what I get:

```
$ curl http://localhost:11434/api/generate -d '{
  "model": "llava",
  "prompt": "What is in this picture?",
  "stream": false,
  ...
```

As @easp already mentioned, `OLLAMA_MODELS` must be set in the same context as `ollama serve`. Setting it in `.bashrc` is _probably_ not what you want _unless_ you're invoking `ollama serve`...
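A quick sketch of a setting that does take effect (the path is illustrative):

```
# same shell, same context as the server process
OLLAMA_MODELS=/data/ollama/models ollama serve
```

If ollama runs as a systemd service instead, the equivalent is adding `Environment="OLLAMA_MODELS=..."` under `[Service]` via `sudo systemctl edit ollama.service`, then restarting the service.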