frob

743 comments of frob

Broken image link. Function call results can use `role=tool` if the template supports it, e.g. `{{- else if eq .Role "tool" }}` in the [llama3.2 template](https://ollama.com/library/llama3.2/blobs/966de95ca8a6). The tool id would...
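
For illustration, a minimal sketch of passing a function call result back with `role=tool` via `/api/chat` on a local Ollama server (the weather tool, its arguments, and its result are hypothetical):

```python
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2",
        "stream": False,
        "messages": [
            {"role": "user", "content": "What is the weather in Toronto?"},
            # The assistant's earlier tool call, echoed back as history.
            {
                "role": "assistant",
                "tool_calls": [
                    {"function": {"name": "get_weather",
                                  "arguments": {"city": "Toronto"}}}
                ],
            },
            # The tool's output, sent with role "tool" so the template's
            # `eq .Role "tool"` branch renders it.
            {"role": "tool", "content": "11 degrees and sunny"},
        ],
    },
)
print(resp.json()["message"]["content"])
```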

A quick first pass at generating embeddings with llama3.2:latest failed to show any inconsistencies. Can you give me an idea of the type of input, chunk length, and context length...
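
A minimal sketch of the kind of consistency check described above, assuming a local Ollama server (the sample text is arbitrary):

```python
import requests

def embed(text: str) -> list[float]:
    # POST /api/embed returns {"embeddings": [[...]]} for a single input.
    resp = requests.post(
        "http://localhost:11434/api/embed",
        json={"model": "llama3.2:latest", "input": text},
    )
    return resp.json()["embeddings"][0]

a = embed("The quick brown fox jumps over the lazy dog.")
b = embed("The quick brown fox jumps over the lazy dog.")
# Identical input should produce (near-)identical vectors.
print(max(abs(x - y) for x, y in zip(a, b)))
```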

Maybe page-assist needs to update its library dependencies then? I don't know how bun works or how the langchain dependencies play out. https://github.com/langchain-ai/langchainjs/pull/7337

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.

I thought it might be something strange, like only one thread being assigned to the runner or heavy reliance on swap, but it all looks normal. For comparison, i7-13700: ```...

ollama sums (cores - efficiency cores) over all CPUs; it doesn't treat the Xeons as GPUs. If Windows Server 2016 limits applications to one CPU, then I assume...
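
A toy sketch of that counting, not ollama's actual code (the socket figures are hypothetical): sum the performance cores, i.e. total cores minus efficiency cores, across all sockets.

```python
cpus = [
    {"cores": 24, "efficiency_cores": 8},  # hypothetical socket 0
    {"cores": 24, "efficiency_cores": 8},  # hypothetical socket 1
]
threads = sum(c["cores"] - c["efficiency_cores"] for c in cpus)
print(threads)  # 32 performance cores available to the runner
```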

The error is likely a k-shift failure: https://github.com/ollama/ollama/issues/5975

> Is there a way to check each model's num_ctx default value assigned by Ollama?

Default context is [2048](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-specify-the-context-window-size). It can be [overridden](https://github.com/ollama/ollama/issues/5965#issuecomment-2252354726) in the Modelfile.

> BTW, I have...
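
For example, a minimal Modelfile overriding the default (the base model and value are placeholders):

```
FROM llama3.2
PARAMETER num_ctx 8192
```

Building it with `ollama create mymodel -f Modelfile` gives a model whose default context window is 8192 tokens.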

> while large 671B model will run out of space with 2048 context only at few hundred words generated

The DeepSeek models generate reasoning tokens, which consume context space.