Cybervet

Results: 10 comments by Cybervet

What is your Linux kernel version? I think 6+ kernels don't support a lot of older NVIDIA cards.
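For checking this on the affected machine, here is a minimal Go sketch that prints the running kernel version (the same string `uname -r` reports), using the golang.org/x/sys/unix package; Linux-only, and just an illustration:

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var uts unix.Utsname
	if err := unix.Uname(&uts); err != nil {
		panic(err)
	}
	// Release holds the kernel version string, e.g. "6.5.0-14-generic".
	release := string(bytes.TrimRight(uts.Release[:], "\x00"))
	fmt.Println("kernel:", release)
}
```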

OK, so far I found that in the older version, when a CUDA error occurred, the Ollama server restarted in CPU-only mode. In this version...
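As a rough illustration of that older fallback behaviour, here is a minimal Go sketch; `startWithGPU` and `startCPUOnly` are hypothetical stand-ins for the real server startup path, not Ollama's actual functions:

```go
package main

import (
	"errors"
	"log"
)

var errCUDA = errors.New("CUDA error")

// Hypothetical loaders: pretend GPU init fails, CPU-only succeeds.
func startWithGPU() error { return errCUDA }
func startCPUOnly() error { return nil }

func main() {
	// The pattern described above: if startup hits a CUDA error,
	// restart in CPU-only mode instead of giving up entirely.
	if err := startWithGPU(); err != nil {
		log.Printf("GPU startup failed (%v); falling back to CPU-only mode", err)
		if err := startCPUOnly(); err != nil {
			log.Fatal(err)
		}
	}
	log.Println("server running")
}
```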

Yep, exact same models. I tried different models, and even with small models like deepseek-coder it does the same thing. This problem started when a change happened in...

OK, here are the differences in the errors between the two versions; the first is the one that works OK, the second is the newer:
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors:...

> I commented about it here: [#2560 (comment)](https://github.com/ollama/ollama/issues/2560#issuecomment-1950690705)
>
> maybe that could be it.

Nope, that's not the problem.

I think the problem continues, at least when we compile from source. Here is the error message when trying to run a small model in a 2 g...

So if the CPU has no AVX, it cannot use CUDA and the GPU no matter what, even after compiling from source?
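To confirm whether a given host actually lacks AVX before going down that road, a small Go check using the golang.org/x/sys/cpu package:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

func main() {
	// Runtime query of the host CPU's AVX support, the same
	// capability the non-AVX skip discussed here keys on.
	fmt.Println("AVX: ", cpu.X86.HasAVX)
	fmt.Println("AVX2:", cpu.X86.HasAVX2)
}
```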

> @Cybervet to answer your question about building from source, we don't currently optimize our build configuration for this scenario, but if you do have a situation that calls for...

> @Cybervet the one other change you'll need is to alter the GPU detection logic to bypass the fairly recent check we added to skip GPUs on non-AVX systems -...
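A minimal sketch of the kind of guard being described, assuming an environment-variable escape hatch; the OLLAMA_SKIP_AVX_CHECK name is purely hypothetical, and this is not Ollama's actual detection code:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/cpu"
)

// shouldTryGPU sketches the check discussed above: GPU discovery is
// skipped on non-AVX hosts unless a (hypothetical) override is set.
func shouldTryGPU() bool {
	if cpu.X86.HasAVX {
		return true
	}
	// Hypothetical escape hatch for non-AVX machines that still
	// want to attempt GPU discovery; not a real Ollama flag.
	return os.Getenv("OLLAMA_SKIP_AVX_CHECK") != ""
}

func main() {
	fmt.Println("attempt GPU discovery:", shouldTryGPU())
}
```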

I managed to run Ollama fine in Proxmox on an old workstation without an AVX-capable CPU. It runs slowly in a CT (without a GPU), but on a VM with GPU...