[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) would aid in debugging, but this is probably https://github.com/ollama/ollama/issues/5975.
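For reference, pulling those logs looks like this (commands per the linked troubleshooting doc; assuming a systemd install on Linux or the default macOS app):

```
# Linux (systemd)
journalctl -u ollama --no-pager

# macOS
cat ~/.ollama/logs/server.log
```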
```
PARAMETER num_gpu xxx
```
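If it's useful, the usual way to apply that is in a Modelfile followed by `ollama create` (the base model and layer count below are just placeholders; `num_gpu` is the number of layers to offload to the GPU):

```
FROM llama3
PARAMETER num_gpu 20
```

```
$ ollama create llama3-gpu -f Modelfile
```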
When importing safetensors, you specify the directory, not the full path:

```
FROM ./model
```
Where did you download the safetensors from? That's where config.json should be.
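For example, assuming a directory layout along these lines (filenames hypothetical), `FROM` points at the directory that holds config.json next to the .safetensors shards:

```
$ ls ./model
config.json
model-00001-of-00002.safetensors
model-00002-of-00002.safetensors
tokenizer.json

$ cat Modelfile
FROM ./model

$ ollama create mymodel -f Modelfile
```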
This PR is just for removing a DoS vector for projects using the llama.cpp codebase. Improving the parser is out of scope.
Trying to get a handle on the use case here. Are you looking to do something like:

```
$ ollama run --temperature 0.7 gemma2
>>>
```

instead of:

```
$...
```
Understood. We currently use `expect` for this sort of scripting, and command line args would be generally useful.

```
#!/bin/bash
temperature=0
num_ctx=2048
eval set -- $(getopt --options=t:,n: --longoptions=temperature:,num_ctx: --name "$0"...
```
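For reference, a complete version of that pattern might look like this (a minimal sketch; everything past the truncated `getopt` line is assumed, and the short-options string is written `t:n:` since GNU getopt takes it as a plain string, not comma-separated):

```
#!/bin/bash
temperature=0
num_ctx=2048

# Parse -t/--temperature and -n/--num_ctx with GNU getopt
eval set -- "$(getopt --options=t:n: --longoptions=temperature:,num_ctx: --name "$0" -- "$@")"
while true; do
  case "$1" in
    -t|--temperature) temperature="$2"; shift 2 ;;
    -n|--num_ctx) num_ctx="$2"; shift 2 ;;
    --) shift; break ;;
  esac
done

echo "temperature=$temperature num_ctx=$num_ctx"
```

Invoked as, e.g., `./script.sh --temperature 0.7 --num_ctx 4096`.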
The OpenAI compatibility endpoints don't support setting the context size, as that's not part of the official OpenAI API standard. If you want to use the OpenAI endpoints with a larger context window, create a new model with `num_ctx` set in its Modelfile and point your requests at that model.
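A minimal sketch of that workflow (model name and context size are just examples):

```
# Modelfile
FROM gemma2
PARAMETER num_ctx 8192
```

```
$ ollama create gemma2-8k -f Modelfile
```

OpenAI-compatible requests can then target `gemma2-8k` and get the larger context window.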