Flux bnb_nf4 'ForgeParams4bit' object has no attribute 'quant_storage'
Expected Behavior
It loads the model very fast, but then gives this error message.
Actual Behavior
Processing stops in the custom KSampler node.
Steps to Reproduce
Load the Flux model "flux1-dev-bnb-nf4" with the CheckpointLoaderNF4 node.
Debug Logs
AttributeError: 'ForgeParams4bit' object has no attribute 'quant_storage'
Prompt executed in 224.19 seconds
got prompt
Failed to validate prompt for output 188:
* LayerMask: Florence2Ultra 186:
- Required input is missing: image
Output will be ignored
Failed to validate prompt for output 189:
Output will be ignored
Failed to validate prompt for output 298:
* LayerMask: MaskBoxDetect 294:
- Required input is missing: mask
Output will be ignored
[rgthree] Using rgthree's optimized recursive execution.
Requested to load Flux
Loading 1 new model
!!! Exception during processing!!! 'ForgeParams4bit' object has no attribute 'quant_storage'
Traceback (most recent call last):
File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\comfy\model_management.py", line 319, in model_load
self.real_model = self.model.patch_model_lowvram(device_to=patch_model_to, lowvram_model_memory=lowvram_model_memory, force_patch_weights=force_patch_weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\comfy\model_patcher.py", line 422, in patch_model_lowvram
self.lowvram_load(device_to, lowvram_model_memory=lowvram_model_memory, force_patch_weights=force_patch_weights)
File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\comfy\model_patcher.py", line 406, in lowvram_load
m.to(device_to)
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 1152, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 825, in _apply
param_applied = fn(param)
^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 1150, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\custom_nodes\ComfyUI_bitsandbytes_NF4\__init__.py", line 64, in to
quant_storage=self.quant_storage,
^^^^^^^^^^^^^^^^^^
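A minimal sketch of the failure mode, using stand-in names rather than the node's actual classes: the parameter object was created by a version of the code that never set `quant_storage`, so the attribute read in `__init__.py` line 64 raises `AttributeError`. A defensive read with a fallback default (hypothetical here, not the repo's actual fix) would avoid the crash:

```python
# Stand-in class (not the real ForgeParams4bit): simulates a parameter
# object created before the quant_storage attribute was introduced.
class LegacyParams:
    pass

p = LegacyParams()

# Direct access, as in the failing line, raises AttributeError:
try:
    _ = p.quant_storage
except AttributeError as e:
    print(e)  # 'LegacyParams' object has no attribute 'quant_storage'

# A defensive read with a fallback default avoids the crash;
# "uint8" is an assumed placeholder for a default storage dtype.
quant_storage = getattr(p, "quant_storage", "uint8")
print(quant_storage)
```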
Other
No response
This issue is being submitted on the custom node's page: https://github.com/comfyanonymous/ComfyUI_bitsandbytes_NF4
OK, as suggested in another thread, I installed a newer version of bitsandbytes and the model now loads, but this is the result:
- White render in the sampler
- The sampler render takes much longer than before
The result is white and blurry:
Yes, the right place to report issues is the custom node repo: https://github.com/comfyanonymous/ComfyUI_bitsandbytes_NF4
python.exe -s -m pip install -U bitsandbytes
Where should I type it? I ran it in E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI using PowerShell, and I still have the same issue.
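For the portable build, the upgrade has to target the embedded interpreter rather than any system Python, or the newly installed bitsandbytes will not be seen by ComfyUI. A sketch, assuming the standard portable layout where the `python_embeded` folder sits next to the `ComfyUI` folder (the path mirrors the one quoted above):

```shell
REM Run from the portable root, one level above the ComfyUI folder:
cd E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable

REM Upgrade bitsandbytes inside the embedded Python environment:
python_embeded\python.exe -s -m pip install -U bitsandbytes
```

Running the command from inside the `ComfyUI` folder with a bare `python.exe` would upgrade whatever Python is first on PATH, which can leave the embedded environment unchanged.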