I got the same error, and the solution provided by @BlackHawk616 fixed it.
@the-crypt-keeper is there any way to make this work with multi-GPU on Vulkan?
No, there isn't; the whole idea of headless mode is that it doesn't show anything on screen and just runs in the background.
Use ollama; you have incorrectly chosen "openai" as the LLM provider.
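For anyone confused by what the provider setting actually changes: here is a minimal sketch, assuming an OpenAI-compatible client (which Ollama also speaks). The host, port, and model name are illustrative, not this project's actual config:

```python
# Sketch: pointing an OpenAI-compatible client at a local Ollama server
# instead of the real OpenAI API. Host/port/model are assumptions --
# adjust to your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # required by the client, ignored by Ollama
)

reply = client.chat.completions.create(
    model="llama3.1:8b",  # must be a model already pulled into Ollama
    messages=[{"role": "user", "content": "Hello"}],
)
print(reply.choices[0].message.content)
```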
It's working, thank you! The latest version is running.
I tested with the latest commits, but it's the same:
ggml_extend.hpp:1587 - vae compute buffer size: 2467.97 MB(RAM)
ggml_extend.hpp:1587 - flux compute buffer size: 7822.92 MB(VRAM)
With commits from 17.10.25: ggml_extend.hpp:1579...
Ok, now it works, but for some reason it refuses to use the specified model (available and already downloaded on the Ollama machine) and triggers Ollama to start downloading it (I have no idea...
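In case it helps debugging: a surprise re-download usually means the configured model name doesn't exactly match what the server has, including the ":tag" part. A quick sketch of a check against Ollama's /api/tags endpoint (the host and port are assumptions; point it at your Ollama machine):

```python
# Sketch: list the models the Ollama server actually has, so the name
# in the config can be matched exactly (the ":tag" suffix counts too).
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=10)
resp.raise_for_status()
for model in resp.json().get("models", []):
    print(model["name"])  # e.g. "llama3.1:8b"
```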
+1 for Vulkan, it will enable everyone with AMD iGPUs to use ComfyUI. Vulkan is currently 2x faster than ROCm (in both token generation and prompt processing) for LLM inference in the...
I am getting the same unsupported op 'IM2COL_3D' error on Linux with Vulkan too.
Ok, with today's version it works, although the output is gibberish; in fact, on Ubuntu with Vulkan, today's version produces gibberish with most models :(