The ollama team has chosen to keep this issue open for tracking.

> and ... a GPU? On a RPi5 ;)

Did you click through the link?
Your problem is different; it is this one: https://github.com/ollama/ollama/issues/7288. The problem is that the context length ollama is using is longer than the context length the model supports....
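As a workaround, you can cap the context window per request. A minimal sketch, assuming a local ollama server on the default port and the Python `requests` package; the model name and `num_ctx` value are placeholders, not specific to the linked issue:

```python
import requests

# Cap the context window via the num_ctx option so the request stays
# within what the model was trained to support. Model name and the
# 2048 value are placeholder examples.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",
        "prompt": "Why is the sky blue?",
        "stream": False,
        "options": {"num_ctx": 2048},
    },
    timeout=300,
)
print(resp.json()["response"])
```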
Follow up in #8431
> As I understand, there is no solution but to rollback to an old version of ollama 0.3.13 or to get a GPU, correct?

Correct. The llama.cpp issue has been...
Any model can be used for function calling, but if it hasn't been trained for it, results can be poor. For models that don't explicitly support tools, you can pass...
llama3:8b-instruct-q4_0, but that was just because the original poster was trying to use that model. Results:

```
{
  "functionName": "get_weather",
  "parameters": [
    {
      "parameterName": "query",
      "parameterValue": "Beijing"
    }
  ]
}
```
...
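For anyone landing here, a minimal sketch of passing a tool schema to `/api/chat`, assuming a local ollama server and the Python `requests` package; the `get_weather` schema is a hypothetical example matching the output above:

```python
import requests

# Hypothetical tool definition mirroring the get_weather output shown
# above; not part of ollama itself.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "City to look up"},
            },
            "required": ["query"],
        },
    },
}]

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1:8b",
        "messages": [{"role": "user", "content": "What's the weather in Beijing?"}],
        "tools": tools,
        "stream": False,
    },
    timeout=300,
)

# Models trained for tool use return structured calls in
# message.tool_calls; models that weren't may emit free-form JSON in
# message.content that you have to parse yourself.
message = resp.json()["message"]
print(message.get("tool_calls") or message.get("content"))
```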
```
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-sandybridge.dll
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-skylakex.dll
```

ollama can't find...
Set `OLLAMA_DEBUG=1` in the [server environment](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server) and post the resulting logs.
`OLLAMA_GPU_LAYERS` is not an ollama environment variable.

```
2月 21 13:33:11 tc ollama[285585]: time=2025-02-21T13:33:11.387+08:00 level=DEBUG source=ggml.go:89 msg="ggml backend load all from path" path=/usr/local/bin
```

ollama couldn't find any backends to...
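If the goal was to control GPU offload, the ollama-side knob is the `num_gpu` request option (the number of layers to offload). A minimal sketch, assuming a local server and the Python `requests` package; the model name and layer count are placeholders:

```python
import requests

# num_gpu sets how many layers to offload to the GPU (0 forces
# CPU-only). The model name and the value 28 are example placeholders.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",
        "prompt": "Hello",
        "stream": False,
        "options": {"num_gpu": 28},
    },
    timeout=300,
)
print(resp.json()["response"])
```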
Mis-spelled: "llama3.1:8b-instruct-fp16", not "ollama3.1:8b-instruct-fp16".