Cannot get Chroma to generate
```
Creating venv in directory E:\AI-Stuff\AI\stable-diffusion-webui-amdgpu/venv using python "C:\Users\archa\AppData\Local\Programs\Python\Python310\python.exe"
Requirement already satisfied: pip in e:\ai-stuff\ai\stable-diffusion-webui-amdgpu\venv\lib\site-packages (25.1.1)
venv "E:\AI-Stuff\AI\stable-diffusion-webui-amdgpu/venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
fatal: No names found, cannot describe anything.
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: f2.0.1v1.10.1-1.10.1
Commit hash: d7f548d54e116d9458c512f157baf170708d84b4
ROCm: agents=['gfx1100']
ROCm: version=6.2, using agent gfx1100
ZLUDA support: experimental
ZLUDA load: path='E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge.zluda' nightly=False
Total VRAM 20464 MB, total RAM 32675 MB
pytorch version: 2.3.1+cu118
Set vram state to: NORMAL_VRAM
Device: cuda:0 AMD Radeon RX 7900 XT [ZLUDA] : native
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
Launching Web UI with arguments: --api --zluda --use-zluda --theme dark --no-download-sd-model --cuda-stream --attention-quad --ckpt-dir 'E:\AI-Stuff\AI\stable-diffusion-webui-amdgpu/models/Stable-diffusion' --hypernetwork-dir 'E:\AI-Stuff\AI\stable-diffusion-webui-amdgpu/models/hypernetworks' --embeddings-dir 'E:\AI-Stuff\AI\stable-diffusion-webui-amdgpu/embeddings' --lora-dir 'E:\AI-Stuff\AI\stable-diffusion-webui-amdgpu/models/Lora'
CUDA Using Stream: True
Using sub quadratic optimization for cross attention
Using split attention for VAE
ONNX: version=1.20.1 provider=CPUExecutionProvider, available=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
ControlNet preprocessor location: E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\models\ControlNetPreprocessor
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.11.1, num models: 10
CivitAI Browser+: Aria2 RPC started
2025-06-26 13:51:27,457 - ControlNet - INFO - ControlNet UI callback registered.
add tab
*** Error executing callback ui_tabs_callback for E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\extensions\a1111-sd-webui-haku-img\scripts\main.py
Traceback (most recent call last):
  File "E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\modules\script_callbacks.py", line 283, in ui_tabs_callback
    res += c.callback() or []
  File "E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\extensions\a1111-sd-webui-haku-img\scripts\main.py", line 456, in add_tab
    _release_if_possible(
  File "E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\extensions\a1111-sd-webui-haku-img\scripts\main.py", line 635, in _release_if_possible
    if isinstance(component, gr.events.Releaseable):
AttributeError: module 'gradio.events' has no attribute 'Releaseable'
[ERROR]: Config states E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\config_states\civitai_subfolders.json, "created_at" does not exist
Model selected: {'checkpoint_info': {'filename': 'E:\AI-Stuff\AI\stable-diffusion-webui-amdgpu\models\Stable-diffusion\Flux\chroma-unlocked-v39-detail-calibrated-Q8_0.gguf', 'hash': '0d39234b'}, 'additional_modules': ['E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\models\VAE\FLUX_VAE_NEW.safetensors', 'E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\models\VAE\t5-v1_1-xxl-encoder-Q8_0.gguf'], 'unet_storage_dtype': None}
Using online LoRAs in FP16: True
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 146.6s (initial startup: 0.4s, prepare environment: 38.5s, launcher: 2.8s, import torch: 50.8s, initialize shared: 8.5s, other imports: 0.3s, setup gfpgan: 0.3s, list SD models: 1.2s, load scripts: 23.7s, initialize extra networks: 12.2s, create ui: 8.0s, gradio launch: 2.5s, add APIs: 14.6s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 95.00% GPU memory (19440.00 MB) to load weights, and use 5.00% GPU memory (1024.00 MB) to do matrix computation.
Model selected: {'checkpoint_info': {'filename': 'E:\AI-Stuff\AI\stable-diffusion-webui-amdgpu\models\Stable-diffusion\Flux\chroma-unlocked-v39-detail-calibrated-Q8_0.gguf', 'hash': '0d39234b'}, 'additional_modules': ['E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\models\VAE\FLUX_VAE_NEW.safetensors', 'E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\models\VAE\t5-v1_1-xxl-encoder-Q8_0.gguf'], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Model selected: {'checkpoint_info': {'filename': 'E:\AI-Stuff\AI\stable-diffusion-webui-amdgpu\models\Stable-diffusion\Flux\chroma-unlocked-v39-detail-calibrated-Q8_0.gguf', 'hash': '0d39234b'}, 'additional_modules': ['E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\models\VAE\FLUX_VAE_NEW.safetensors', 'E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\models\VAE\t5-v1_1-xxl-encoder-Q8_0.gguf'], 'unet_storage_dtype': None}
Using online LoRAs in FP16: True
Model selected: {'checkpoint_info': {'filename': 'E:\AI-Stuff\AI\stable-diffusion-webui-amdgpu\models\Stable-diffusion\Flux\chroma-unlocked-v39-detail-calibrated-Q8_0.gguf', 'hash': '0d39234b'}, 'additional_modules': ['E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\models\VAE\FLUX_VAE_NEW.safetensors', 'E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\models\VAE\t5-v1_1-xxl-encoder-Q8_0.gguf'], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Loading Model: {'checkpoint_info': {'filename': 'E:\AI-Stuff\AI\stable-diffusion-webui-amdgpu\models\Stable-diffusion\Flux\chroma-unlocked-v39-detail-calibrated-Q8_0.gguf', 'hash': '0d39234b'}, 'additional_modules': ['E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\models\VAE\FLUX_VAE_NEW.safetensors', 'E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\models\VAE\t5-v1_1-xxl-encoder-Q8_0.gguf'], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
Traceback (most recent call last):
  File "E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\backend\loader.py", line 500, in forge_loader
    state_dicts, estimated_config = split_state_dict(sd, additional_state_dicts=additional_state_dicts)
  File "E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\backend\loader.py", line 457, in split_state_dict
    sd = replace_state_dict(sd, asd, guess)
  File "E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\backend\loader.py", line 209, in replace_state_dict
    asd_new[k] = asd_new[k].dequantize_as_pytorch_parameter()
  File "E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\backend\operations_gguf.py", line 41, in dequantize_as_pytorch_parameter
    self.gguf_cls.bake(self)
AttributeError: type object 'Q8_0' has no attribute 'bake'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\modules_forge\main_thread.py", line 30, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\modules\txt2img.py", line 131, in txt2img_function
    processed = processing.process_images(p)
  File "E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\modules\processing.py", line 837, in process_images
    manage_model_and_prompt_cache(p)
  File "E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\modules\processing.py", line 805, in manage_model_and_prompt_cache
    p.sd_model, just_reloaded = forge_model_reload()
  File "E:\AI-Stuff\AI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\modules\sd_models.py", line 504, in forge_model_reload
    sd_model = forge_loader(state_dict, additional_state_dicts=additional_state_dicts)
  File "E:\AI-Stuff\AI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\AI-Stuff\AI\Forge\stable-diffusion-webui-amdgpu-forge\backend\loader.py", line 502, in forge_loader
    raise ValueError('Failed to recognize model type!')
ValueError: Failed to recognize model type!
Failed to recognize model type!
```
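For context on what that first traceback is attempting: the failure is inside Forge's own GGUF code (`backend\operations_gguf.py`), where dequantizing the Q8_0 T5 encoder calls a `bake` method that the `Q8_0` class apparently doesn't define, which smells like a version mismatch inside the Forge checkout rather than a corrupt download. Q8_0 itself is a simple format: blocks of 32 int8 values sharing one fp16 scale. A standalone stdlib-only sketch of that dequantization (my own helper names, not Forge's API):

```python
import struct

QK8_0 = 32               # quantized values per Q8_0 block
BLOCK_BYTES = 2 + QK8_0  # one little-endian fp16 scale + 32 int8 values

def dequantize_q8_0(raw: bytes) -> list[float]:
    """Dequantize a GGUF/GGML Q8_0 buffer: weight[i] = scale * qs[i]."""
    out = []
    for off in range(0, len(raw), BLOCK_BYTES):
        (d,) = struct.unpack_from("<e", raw, off)      # fp16 block scale
        qs = struct.unpack_from("<32b", raw, off + 2)  # 32 signed int8 values
        out.extend(d * q for q in qs)
    return out

# Round-trip one block: scale 0.5, quantized values 0..31
demo = struct.pack("<e", 0.5) + struct.pack("<32b", *range(32))
weights = dequantize_q8_0(demo)  # [0.0, 0.5, 1.0, ..., 15.5]
```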
I've tried both the FP8 and the GGUF models: the GGUF produces the errors above, and the FP8 produces a green grid of nonsense instead of an image. If anyone can tell me what I'm doing wrong, or can walk me through getting Chroma to generate, I'd be very grateful.