TacoCake
Could you use `Vulkan` instead of trying to use `ROCm`? Kinda how `GPT4All` does it: https://github.com/nomic-ai/gpt4all. Genuine question
> do you have rocm installed on your system? I think I can make Ollama use the system installation

I don't have ROCm on my system, since it's kind of...
> As far as I know the Ollama backend uses ROCm instead of Vulkan, so this is not too easy to implement from the front end

> GPT4All uses llama.cpp backend while...
I'm also getting this issue. Very annoying.
What's missing for this PR to be merged?
Ah, I see. Thanks for the clarification!
Still relevant. I'd love to be able to see the HDR image, in true HDR, as I process it. I'm using KDE Plasma 6 on Wayland with the color management...
I'm also having this issue; here is the full crash log: [crash_log.txt](https://github.com/user-attachments/files/17175842/crash_log.txt). I've also tried the Flatseal `GPU Acceleration` toggle, with no change. I've been able to **successfully** run `vkcube` in...
Same issues for me

### Operating System
Version: "openSUSE Tumbleweed" (64 bit)
Kernel Name: Linux
Kernel Version: 6.12.8-2-default
X Server Vendor: SUSE LINUX
X Server Release: 12401004
X Window Manager: ...
Would also love this feature