Fooocus is not working on the last few versions of Stability Matrix
Package
Fooocus - mashb1t's 1-Up Edition
When did the issue occur?
Running the Package
What GPU / hardware type are you using?
NVIDIA RTX 5090
What happened?
Fooocus can be installed and launched, but generating an image fails with the following error:
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
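For context, this error usually means the installed PyTorch wheel does not ship CUDA kernels for the GPU's architecture (the RTX 5090 is a Blackwell card, which older cu118/cu121 builds were not compiled for). Below is a minimal sketch of how the mismatch can be checked from the package's venv; the script name and the example version string in the comments are illustrative, not taken from this report.

# check_cuda.py - run with the venv's python, e.g.
#   "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\venv\Scripts\python.exe" check_cuda.py
import torch

print("torch:", torch.__version__)                  # build string, e.g. ...+cu121
print("cuda available:", torch.cuda.is_available())

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print("device:", torch.cuda.get_device_name(0))
    print("device arch:", f"sm_{major}{minor}")
    # Architectures this torch build ships kernels for; if the device's sm_XX
    # is not in this list, the first CUDA kernel launch raises
    # "no kernel image is available for execution on the device".
    print("build arch list:", torch.cuda.get_arch_list())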
Console output
Requested to load SDXLClipModel
Loading 1 new model
Traceback (most recent call last):
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\modules\async_worker.py", line 1483, in worker
handler(task)
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\modules\async_worker.py", line 1172, in handler
tasks, use_expansion, loras, current_progress = process_prompt(async_task, async_task.prompt, async_task.negative_prompt,
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\modules\async_worker.py", line 662, in process_prompt
pipeline.refresh_everything(refiner_model_name=async_task.refiner_model_name,
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\modules\default_pipeline.py", line 265, in refresh_everything
prepare_text_encoder(async_call=True)
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\modules\default_pipeline.py", line 228, in prepare_text_encoder
ldm_patched.modules.model_management.load_models_gpu([final_clip.patcher, final_expansion.patcher])
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\modules\patch.py", line 447, in patched_load_models_gpu
y = ldm_patched.modules.model_management.load_models_gpu_origin(*args, **kwargs)
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\ldm_patched\modules\model_management.py", line 437, in load_models_gpu
cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\ldm_patched\modules\model_management.py", line 304, in model_load
raise e
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\ldm_patched\modules\model_management.py", line 300, in model_load
self.real_model = self.model.patch_model(device_to=patch_model_to) #TODO: do something with loras and offloading to CPU
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\ldm_patched\modules\model_patcher.py", line 199, in patch_model
temp_weight = ldm_patched.modules.model_management.cast_to_device(weight, device_to, torch.float32, copy=True)
File "C:\StabilityMatrix\Packages\Fooocus - mashb1t's 1-Up Edition\ldm_patched\modules\model_management.py", line 615, in cast_to_device
return tensor.to(device, copy=copy, non_blocking=non_blocking).to(dtype, non_blocking=non_blocking)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
Total time: 5.38 seconds
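The traceback shows the failure at the first tensor transfer to the GPU (cast_to_device calling tensor.to(device)), so it is likely not specific to Fooocus: any CUDA operation from the same venv should hit the same error if the torch build lacks kernels for this GPU. A minimal reproduction sketch under that assumption, not part of the original report:

# Run with the same venv's python; if the torch build has no kernels for this
# GPU architecture, this should raise the identical RuntimeError.
import torch

x = torch.randn(8)
y = x.to("cuda")         # same call pattern as cast_to_device in the traceback
torch.cuda.synchronize()
print(y * 2)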
Version
v2.15.0
What Operating System are you using?
Windows
This issue is stale because it has been open 60 days with no activity. Remove the stale label or comment, else this will be closed in 7 days.
This issue was closed because it has been stale for 7 days with no activity.