The Vulkan backend is not yet enabled in production releases. To use it, install the [Vulkan SDK](https://vulkan.lunarg.com/) and set `VULKAN_SDK` in your environment, then follow...
The Linux tarball for 0.12.11 was accidentally released without the Vulkan libraries ([#13104](https://github.com/ollama/ollama/issues/13104)). 0.12.12 will fix this. If you want to try Vulkan in the meantime, you can build the...
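If you do build from source, a quick pre-flight check that the SDK is actually visible can save a confusing build failure. This is a minimal sketch; the `include/vulkan` layout is an assumption about how the LunarG SDK unpacks, so adjust for your install:

```python
import os
import pathlib

# Minimal pre-build sanity check: confirm VULKAN_SDK is set and points at
# something that looks like an SDK root. The include/vulkan layout is an
# assumption about how the LunarG SDK unpacks; adjust for your install.
sdk = os.environ.get("VULKAN_SDK")
if not sdk:
    raise SystemExit("VULKAN_SDK is not set; install the Vulkan SDK first")
if not (pathlib.Path(sdk) / "include" / "vulkan").is_dir():
    raise SystemExit(f"VULKAN_SDK={sdk} doesn't look like a Vulkan SDK root")
print(f"Vulkan SDK found at {sdk}")
```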
Latest vendor commit is in the [header](https://github.com/ollama/ollama/blob/main/llama/llama-cpp.h).
> Built latest Ollama from main and still the same no tools error.

Tool calling is a function of the model template, not the ollama binary.

> it sets up...
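One way to check whether a model's template even references tools is to ask the server for the template. A minimal sketch: it assumes a local server on the default port, and the `.Tools` string match is only a heuristic:

```python
import json
import urllib.request

# Fetch a model's template via /api/show and look for the .Tools variable.
# Assumes a local ollama server on the default port; the string match is
# just a heuristic for whether the template wires up tool calling.
def template_mentions_tools(model: str) -> bool:
    req = urllib.request.Request(
        "http://localhost:11434/api/show",
        data=json.dumps({"model": model}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        info = json.load(resp)
    return ".Tools" in info.get("template", "")

print(template_mentions_tools("llama3.1:8b"))
```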
I answered this in a different issue, but it's probably of interest to the folks subscribed to this thread. A tool-enabled deepseek-r1 does "thinking", so in theory is likely to...
I believe that some frameworks hide the fact that some models don't do tools, and use the old insert-tool-in-system-prompt method to implement function calling.
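Roughly, that fallback looks like this. An illustrative sketch only, not any particular framework's implementation; the tool schema and prompt wording are made up:

```python
import json

# Illustrative sketch of the old insert-tools-in-the-system-prompt trick:
# describe the tools as JSON in the system message and ask the model to
# reply with a bare JSON "call" object. The schema and wording are made up.
TOOLS = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {"city": {"type": "string"}},
}]

def build_system_prompt(tools: list[dict]) -> str:
    return (
        "You have access to these tools:\n"
        + json.dumps(tools, indent=2)
        + "\nTo call a tool, reply with only a JSON object like "
        '{"tool": "<name>", "arguments": {...}} and nothing else.'
    )

def parse_tool_call(reply: str) -> dict | None:
    # Models following the instruction reply with bare JSON; anything else
    # is treated as a normal answer.
    try:
        call = json.loads(reply)
    except json.JSONDecodeError:
        return None
    return call if isinstance(call, dict) and "tool" in call else None
```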
You're expending a lot of effort to get the distilled model to use tools. Why not just use the original base model?
Not deepseek-r1:671b, but qwen2.5:7b or llama3.1:8b.
ollama doesn't currently support distributed inference. See #6729 for ongoing work.
This sounds like you've exceeded the context buffer and the value is the number of tokens that were processed in the last slot window. Try adding `"num_ctx":60000` to the `options`...
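For `/api/generate` that looks roughly like this (the model name is a placeholder; assumes a local server on the default port):

```python
import json
import urllib.request

# Raise the context window for one request by passing num_ctx in options.
# Model name is a placeholder; assumes a local server on the default port.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3.1:8b",
        "prompt": "Summarise this long document...",
        "stream": False,
        "options": {"num_ctx": 60000},
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```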