Mark Ward

Results: 11 issues by Mark Ward

### What is the issue? I sometimes find that Ollama runs a model on the CPU when it should be on the GPU. I just upgraded to v0.1.32. I am still...

bug
nvidia

Issue: When the Ollama `Scheduler` requests that a runner stop (kill), the `Scheduler` reads the available VRAM and gets a size that still includes the terminating runner's memory. This results in offloading...
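The race described above can be sketched in Go. This is a minimal illustration with invented names (`gpu`, `runner`, `freeVRAM` are not Ollama's actual types): if free VRAM is measured while the old runner still holds its memory, the scheduler under-counts what will actually be free and may decide to offload layers to the CPU.

```go
package main

import "fmt"

// runner models a loaded-model process. A runner that has been asked to
// stop keeps holding its VRAM until the process actually exits.
type runner struct {
	vramUsed uint64
	alive    bool
}

// gpu models a device with a fixed VRAM pool and the runners on it.
type gpu struct {
	totalVRAM uint64
	runners   []*runner
}

// freeVRAM reports VRAM not held by any still-alive runner. Reading this
// between "stop requested" and "process exited" includes the terminating
// runner's memory as used, so the result is too small.
func (g *gpu) freeVRAM() uint64 {
	var used uint64
	for _, r := range g.runners {
		if r.alive {
			used += r.vramUsed
		}
	}
	return g.totalVRAM - used
}

func main() {
	old := &runner{vramUsed: 8 << 30, alive: true} // 8 GiB still held
	g := &gpu{totalVRAM: 16 << 30, runners: []*runner{old}}

	// Scheduler has requested the old runner to stop, but measures
	// before the process has exited: only 8 GiB appears free.
	premature := g.freeVRAM()

	// Once the runner has fully terminated, all 16 GiB are free.
	old.alive = false
	actual := g.freeVRAM()

	fmt.Printf("premature=%d GiB actual=%d GiB\n", premature>>30, actual>>30)
}
```

The sketch suggests why waiting for the terminating runner to exit (or subtracting its known usage) before reading available VRAM would avoid the miscalculated offload.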

What would be required to build with [Intel oneMKL](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#intel-onemkl)? It seems this is how the `avx_vnni` instruction set is made available for Intel processors that do not support AVX512. The Intel...

feature request
intel

### What is the issue? Ollama v0.1.33, Intel Core i9 14900K, 96GB RAM, Nvidia RTX 4070 TI Super 16GB. Attempts to load `gemma:7b-instruct-v1.1-fp16` are failing. I have tried *...

bug
nvidia
gpu
memory

### What is the issue? `ollama pull orca-mini:13b-v3-fp16` results in `Error: EOF` ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.33

bug
ollama.com

### What is the issue? `ollama pull orca2:13b-fp16` results in `Error: EOF` ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.33

bug
ollama.com

### What is the issue? `ollama pull qwen:14b-chat-v1.5-fp16` results in `Error: EOF` ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.33

bug
ollama.com

### What is the issue? `ollama pull qwen:32b-chat-v1.5-q4_0` results in `Error: unexpected end of JSON input` However, `ollama pull qwen:32b` works (right now they point to the same hash) ###...

bug

### What is the issue? `ollama pull starcoder2:7b-fp16` outputs `Error: EOF` while pulling the manifest. ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.33

bug

### What is the issue? `ollama pull orca2:7b-fp16` errors with `Error: EOF` while pulling the manifest. ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.33

bug