
Never finishes on macOS

Open fuzzy76 opened this issue 3 weeks ago • 1 comment

After launching Gerbil and trying to create a picture, the app just stays at "Generating" and the clock ticks up forever.

Inspecting the terminal, I can see that the underlying process has crashed but the app didn't notice:

Swap to Diffusion Model Path:/Users/fuzzy76/Applications/Gerbil/models/sdmodel/QuantStack/Qwen-Image-Edit-2509-GGUF/Qwen-Image-Edit-2509-Q4_K_S.gguf
Try read vocab from /private/var/folders/h4/6zq4vvdd3k746jyysn2qmdfr0000gn/T/_MEI0UbipS/embd_res/qwen2_merges_utf8_c_str.embd
  |================================>                 | 1933/2984 - 151.41it/s
  |======================================>           | 2271/2984 - 133.78it/s
  |==============================================>   | 2790/2984 - 155.14it/s
  |==================================================| 2984/2984 - 162.23it/s
Load Text Model OK: True
Load Image Model OK: True
Llama.cpp UI loaded.
======
Active Modules: TextGeneration ImageGeneration
Inactive Modules: VoiceRecognition MultimodalVision MultimodalAudio NetworkMultiplayer ApiKeyPassword WebSearchProxy TextToSpeech VectorEmbeddings AdminControl
Enabled APIs: KoboldCppApi OpenAiApi OllamaApi A1111ForgeApi ComfyUiApi
Starting Kobold API on port 5001 at http://localhost:5001/api/
Starting OpenAI Compatible API on port 5001 at http://localhost:5001/v1/
Starting llama.cpp secondary WebUI at http://localhost:5001/lcpp/
StableUI is available at http://localhost:5001/sdui/
======
Please connect to custom endpoint at http://localhost:5001/


[09:27:28] Generating Image (20 steps)
  |==================================================| 20/20 - 8.45s/it
ggml/src/ggml-metal/ggml-metal-ops.cpp:203: unsupported op
(lldb) process attach --pid 49430
Process 49430 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
    frame #0: 0x000000019efde2f4 libsystem_kernel.dylib`__semwait_signal + 8
libsystem_kernel.dylib`__semwait_signal:
->  0x19efde2f4 <+8>:  b.lo   0x19efde314               ; <+40>
    0x19efde2f8 <+12>: pacibsp 
    0x19efde2fc <+16>: stp    x29, x30, [sp, #-0x10]!
    0x19efde300 <+20>: mov    x29, sp
Target 0: (koboldcpp-launcher) stopped.
Executable binary set to "/Users/fuzzy76/Applications/Gerbil/koboldcpp-mac-arm64-1.102.3/koboldcpp-launcher".
Architecture set to: arm64-apple-macosx-.
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
  * frame #0: 0x000000019efde2f4 libsystem_kernel.dylib`__semwait_signal + 8
    frame #1: 0x000000019eeb6d6c libsystem_c.dylib`nanosleep + 220
    frame #2: 0x000000010a5ff254 Python`time_sleep + 172
    frame #3: 0x000000010a46c41c Python`_PyEval_EvalFrameDefault + 106384
    frame #4: 0x000000010a44fde0 Python`_PyEval_Vector + 632
    frame #5: 0x000000010a44fac4 Python`PyEval_EvalCode + 160
    frame #6: 0x0000000102e1a410 koboldcpp-launcher`___lldb_unnamed_symbol97 + 424
    frame #7: 0x0000000102e1ab1c koboldcpp-launcher`___lldb_unnamed_symbol99 + 1584
    frame #8: 0x000000019ec55d54 dyld`start + 7184
(lldb) quit

fuzzy76 avatar Dec 02 '25 08:12 fuzzy76

Uh-oh. This is a llama.cpp error: ggml/src/ggml-metal/ggml-metal-ops.cpp:203: unsupported op. Unfortunately it doesn't say which op is unsupported. This means that Qwen image generation is not actually supported on Macs (Metal) yet. We'll have to wait for another KoboldCpp release that hopefully pulls in a fix from llama.cpp. You'll need to use other models until that happens.

I'll try to detect KoboldCpp crashes and pop up a modal to notify the user, in case they're not looking at the terminal.
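The crash detection described here can be sketched as a watcher on the backend's exit code. This is an illustrative Python sketch only (the function and callback names are hypothetical; Gerbil's actual implementation is not shown in this thread):

```python
import subprocess
import threading

def launch_with_crash_watch(cmd, on_crash):
    """Launch a child process and call on_crash(exit_code) if it exits
    abnormally. Hypothetical sketch; not Gerbil's actual code.
    """
    proc = subprocess.Popen(cmd)

    def watch():
        code = proc.wait()   # block until the child exits
        if code != 0:        # non-zero exit, or negative on POSIX
            on_crash(code)   #   when the child was killed by a signal

    threading.Thread(target=watch, daemon=True).start()
    return proc
```

One caveat this approach shares with the real feature: it only fires when the backend process actually exits. If the crashed code leaves the process alive but hung (as the lldb backtrace above suggests, with the launcher parked in a sleep loop), a plain exit-code watcher never triggers and a health-check ping would be needed instead.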

lone-cloud avatar Dec 02 '25 09:12 lone-cloud

I've added a new "Backend Crashed" modal to notify the user when KoboldCpp crashes at runtime. I tested the new modal by sending various process signals to terminate the koboldcpp process, and it worked for those cases. I'm not 100% sure it will show up in your case, since that depends on how KoboldCpp handles its internal (llama.cpp, here) code crashing.

lone-cloud avatar Dec 04 '25 03:12 lone-cloud

The detection triggered and the modal appeared, though Stable UI still says "generating" and the clock keeps ticking.

fuzzy76 avatar Dec 04 '25 08:12 fuzzy76