Iman Mirbioki
> You are using an outdated version > > Try a recent build, e.g. https://github.com/gdl-org/builds/releases My antivirus keeps blocking the new version. Even if I disable the...
> Hi my friend, > > your GPU does not support DMLExecutionProvider. Are you on an Azure server? Hello, my friend! I hope you're doing well, and I'm...
> [@catclaw](https://github.com/catclaw) could you quit Ollama in the system tray, and then run the following in a PowerShell terminal and share the logs?
>
> ```powershell
> $env:OLLAMA_DEBUG="2"
> ollama serve 2>&1...
> ```
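For reference, the quoted PowerShell lines set the debug level for the current session and then launch the server while capturing its output. A POSIX-shell sketch of the same mechanics is below; `echo` stands in for `ollama serve` so the redirection can be shown without a running server:

```shell
# PowerShell (as quoted above) would be:
#   $env:OLLAMA_DEBUG="2"
#   ollama serve 2>&1 | Tee-Object server.log
# POSIX-shell sketch; `echo` stands in for `ollama serve`.
OLLAMA_DEBUG=2
export OLLAMA_DEBUG
echo "serving with debug level $OLLAMA_DEBUG" 2>&1 | tee server.log
```

`tee` (or `Tee-Object` in PowerShell) both prints the log to the terminal and writes it to `server.log`, which is the file you would then attach to the issue.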
> My suspicion is it's from `CUDA_VISIBLE_DEVICES=0,1,2`. Try unsetting that first and see if it discovers the GPUs, or use the UUIDs instead which you can gather from `nvidia-smi -L`...
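On that note, `nvidia-smi -L` prints one line per GPU including its UUID. The sketch below extracts UUIDs from a sample of that output (the GPU name and UUID here are made up for illustration) in the comma-separated form `CUDA_VISIBLE_DEVICES` accepts:

```shell
# Fabricated sample of `nvidia-smi -L` output; on a real machine,
# pipe `nvidia-smi -L` in place of the sample string.
sample='GPU 0: NVIDIA GeForce RTX 4090 (UUID: GPU-0aaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee)'
# Pull out everything between "UUID: " and the closing parenthesis,
# then join multiple lines with commas.
uuids=$(printf '%s\n' "$sample" | sed -n 's/.*UUID: \([^)]*\)).*/\1/p' | paste -sd, -)
echo "CUDA_VISIBLE_DEVICES=$uuids"
```

Addressing GPUs by UUID sidesteps the index-renumbering problems that ordinal values like `0,1,2` can run into when the driver enumerates devices in a different order.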
> Could you share the server log with OLLAMA_DEBUG="2" without CUDA_VISIBLE_DEVICES set? I'm hoping that may have a little more information on what's going wrong. Yes, of course! Thank you...
This seems to be a bug in the new 0.12.5 version. I uninstalled Ollama 0.12.5 and installed version 0.11.11, and it's detecting my GPUs just fine. Debug is set...
I can't figure out what the problem is. I've been running 70B models in OpenWebUI on GPUs only (`OLLAMA_SCHED_SPREAD=1`) - but I'm back to square one! Ollama keeps falling back...
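For context, `OLLAMA_SCHED_SPREAD` is an environment variable (note the single underscore and `=` rather than `:`); setting it to `1` asks Ollama's scheduler to spread a model across all available GPUs instead of packing it onto as few as possible. A minimal sketch of setting it before launching the server, with `echo` standing in for the actual `ollama serve` call:

```shell
# OLLAMA_SCHED_SPREAD=1 tells Ollama's scheduler to spread a model
# across all available GPUs instead of filling the fewest GPUs first.
OLLAMA_SCHED_SPREAD=1
export OLLAMA_SCHED_SPREAD
# `echo` stands in for `ollama serve` so the sketch is self-contained.
echo "OLLAMA_SCHED_SPREAD=$OLLAMA_SCHED_SPREAD"
```

When Ollama is run as a Windows service or tray app, the variable has to be set in the system environment (and the app restarted) rather than in a one-off terminal session.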
> I haven't been able to reproduce on a Windows system with NVIDIA GPUs running the same driver version 581.57. The cuda_v13 library should be able to enumerate the GPUs,...
> [@catclaw](https://github.com/catclaw) the server2.log you attached appears to be from version 0.11.11 not 0.12.6
>
> ```
> time=2025-10-17T18:55:33.248+02:00 level=INFO source=routes.go:1385 msg="Listening on 127.0.0.1:11434 (version 0.11.11)"
> ```

Yes, because...
I've used the Nvidia CleanUp Tool to remove everything and reinstalled CUDA 12.8, then 12.9, and 13.0. I tried running Ollama (both version 0.11.11 and 0.12.6) between the installations, but 0.11.11...