```
FROM deepseek-r1:671b
PARAMETER num_ctx 163840
PARAMETER num_predict 8192
```
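Assuming the Modelfile above is saved locally, building and running a model from it might look like this (the tag `deepseek-r1-big-ctx` is just an illustrative name):

```shell
# Build a new model tag from the Modelfile in the current directory.
# Requires a local ollama install; the model name is a placeholder.
ollama create deepseek-r1-big-ctx -f ./Modelfile

# Run it; the larger num_ctx / num_predict now apply by default.
ollama run deepseek-r1-big-ctx
```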
Choose a size that lets the model process the input tokens and generate enough output tokens to satisfy the requirements of the task you want to use the LLM for...
Do you have logs from around the time the request was manually cancelled?
```
time=2025-06-10T22:57:39.869Z level=DEBUG source=cache.go:272 msg="context limit hit - shifting" id=0 limit=4096 input=4096 keep=4 discard=2046
```
Your context window is too small. The runner fills up the available context space and...
`num_predict` can be set in the API or in a [Modelfile](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#parameter). Unfortunately there's no global environment variable that can be set. The default value of -1 is infinite. There is...
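For a per-request override, the option can also be passed in the request body to the generate endpoint (endpoint and option names per the Ollama API; the prompt here is just an example):

```shell
# Requires a running ollama server on the default port.
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:671b",
  "prompt": "Why is the sky blue?",
  "options": {
    "num_predict": 8192
  }
}'
```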
Set `OLLAMA_DEBUG=1`.
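If ollama runs as a systemd service, one way to set this is through a standard systemd drop-in (assumes the service is named `ollama`):

```shell
# Open an override file for the service:
sudo systemctl edit ollama

# In the editor, add:
# [Service]
# Environment="OLLAMA_DEBUG=1"

# Then restart and follow the logs:
sudo systemctl restart ollama
journalctl -u ollama -f
```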
This log shows two apparently successful generations. It's more helpful if you post the full log rather than a handful of lines.
Set `OLLAMA_MODELS=/path/to/a/partition/with/space` in the service override config, then run `chown ollama /path/to/a/partition/with/space`. ollama will still use `/usr/share/ollama` to store the key and history; all of the space-consuming model files will be in `/path/to/a/partition/with/space`.
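Concretely, the steps above might look like this (the path is a placeholder; assumes the service runs as the `ollama` user):

```shell
# Prepare the target directory on the larger partition
# and make it writable by the service user.
sudo mkdir -p /path/to/a/partition/with/space
sudo chown ollama /path/to/a/partition/with/space

# Add the environment variable via a systemd drop-in:
sudo systemctl edit ollama
# In the editor, add:
# [Service]
# Environment="OLLAMA_MODELS=/path/to/a/partition/with/space"

# Restart so the new models directory takes effect.
sudo systemctl restart ollama
```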
https://github.com/ollama/ollama/issues/8535#issuecomment-2613241807
```
lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com on 127.0.0.53:53: server misbehaving
```
Not an issue with ollama. Your DNS server is acting up. What's the result of
```
nslookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com
```