Patrick Devine
Sorry for the slow response on this, peeps. @cosmo3769 I think your question got answered before? @epratik the Ollama API is stateless, so you'll have to keep track of the...
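Since the API is stateless, the client has to hold onto the conversation itself and resend it with each request. A minimal sketch, assuming the `/api/chat` endpoint and an illustrative model name:

```python
# Sketch: because the Ollama API is stateless, the client keeps the
# full message history and sends it with every /api/chat request.
# The model name and prompt below are placeholders.
messages = []

def add_turn(role: str, content: str) -> dict:
    """Append one turn and build the next /api/chat request body."""
    messages.append({"role": role, "content": content})
    return {"model": "llama2", "messages": messages, "stream": False}

payload = add_turn("user", "Why is the sky blue?")
# POST payload to http://localhost:11434/api/chat, then append the
# assistant's reply so the next request carries the whole conversation:
# add_turn("assistant", reply["message"]["content"])
```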
I think what's happening here is that the manifest (for whatever reason) didn't get written correctly after the pull. For the rest of the blobs, we do verify each of them,...
You'll need to upgrade to a newer version of Ollama for Gemma to work correctly; Gemma requires at least 0.1.26.
Ack, forgot to close this before. Just restart the ollama server. @0xrsydn check out the [FAQ](https://github.com/ollama/ollama/blob/main/docs/faq.md#where-are-models-stored) to see where the models are stored.
Hey @jbdatascience I think it's OK to close this? I think you could do it through Docker Spaces, but I'm not sure if that's the best way to do it...
@louisabraham you can always use the `/api/generate` endpoint with `raw` mode set to true to specify the full prompt using your own template. I'm going to go ahead...
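A minimal sketch of a raw-mode request body. With `"raw": true` the server skips its built-in template, so you supply the fully formatted prompt yourself; the model name and the `[INST]` tags below are placeholder assumptions about what your model expects:

```python
import json

# Assumed example: a Mistral-style instruction template supplied by the
# caller, with Ollama's own templating bypassed via "raw": true.
payload = {
    "model": "mistral",  # illustrative model name
    "prompt": "[INST] Why is the sky blue? [/INST]",
    "raw": True,
    "stream": False,
}
body = json.dumps(payload)
# POST body to http://localhost:11434/api/generate
```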
As @zimeg mentioned, you're already running an instance of ollama on port 11434. You shouldn't need to run a second copy of it.
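One quick way to check whether a server is already bound to the default port before trying to start another; a small sketch using only the standard library:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

# Ollama listens on 11434 by default; if this is True, an instance
# (e.g. the desktop app) is already running and you don't need a second one.
running = port_in_use(11434)
```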
Are you running on Linux, Mac, or Windows? You'll need to change how `ollama serve` is being called when starting the server. There is a doc [here](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server) which explains how...
Use the docs [here](https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-linux) to set the `OLLAMA_HOST` variable.
Going to close the issue. Feel free to keep commenting or reopen it.