frob

843 comments by frob

Anything that changes how model resources are allocated will cause a model reload: context size (`num_ctx`), number of GPU layers (`num_gpu`), memory mapping (`use_mmap`), memory locking (`use_mlock`), thread count (`num_thread`). If...
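For example, sending a request with a different value for any of these options is enough to trigger a reload. A rough sketch against the `/api/generate` endpoint; the model name and option values are placeholders, not recommendations:

```shell
# Changing any of these "options" values between requests forces Ollama
# to unload and reload the model with the new resource allocation.
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "hello",
  "options": {
    "num_ctx": 8192,
    "num_gpu": 33,
    "use_mmap": true,
    "use_mlock": false,
    "num_thread": 8
  }
}'
```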

It seems like this model is not as ready to do tool calls as some other models. If you run your query enough times, it will eventually return a tool...

Increase [`OLLAMA_LOAD_TIMEOUT`](https://github.com/ollama/ollama/blob/da09488fbfc437c55a94bc5374b0850d935ea09f/envconfig/config.go#L244).
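Something like the following, assuming you start the server from a shell; the `10m` value is just an example, and a systemd install would set this via an override file instead:

```shell
# Give slow-loading models more time before the server gives up on the load.
export OLLAMA_LOAD_TIMEOUT=10m
ollama serve
```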

https://github.com/ggerganov/llama.cpp/issues/8519

Can you post the code you used to get the results? I took the code from [maybe](https://python.useinstructor.com/concepts/maybe/), adjusted it for ollama as per [ollama](https://python.useinstructor.com/hub/ollama/), used the model qwen2.5:7b-instruct-q8_0 since I...
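For reference, the "Maybe" pattern from those docs is roughly the shape below. This is a stdlib-only sketch with illustrative field names, not instructor's exact pydantic schema: the model either extracts a result or sets an error flag with a message.

```python
# Sketch of the "Maybe" extraction pattern: a wrapper that holds either a
# successful result or an error flag plus message. Field names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserDetail:
    name: str
    age: int

@dataclass
class MaybeUser:
    result: Optional[UserDetail] = None
    error: bool = False
    message: Optional[str] = None

# A successful extraction carries a result; a failed one carries an error message.
found = MaybeUser(result=UserDetail(name="Jason", age=25))
missing = MaybeUser(error=True, message="No user found in the text.")
```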

I suspect that if you use the tool in the way suggested by the authors, the results will be more acceptable. If you are using it the way they suggest...

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.

The Ollama server crashed when you ran the `run` command. [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may show why.

You don't have to use a proxy, and in fact you shouldn't use one for 127.0.0.1.
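If a proxy is configured system-wide, the common convention is to exempt loopback addresses so local Ollama traffic bypasses it; support for these variables varies by tool:

```shell
# Exempt loopback traffic from any configured HTTP(S) proxy.
export NO_PROXY=127.0.0.1,localhost
```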