Flux bnb_nf4 'ForgeParams4bit' object has no attribute 'quant_storage'

Open kakachiex2 opened this issue 1 year ago • 2 comments

Expected Behavior

It loads the model super fast, but then gives this error message.

Actual Behavior

Execution stops in the custom KSampler node.

Steps to Reproduce

Load the Flux model "flux1-dev-bnb-nf4" with the CheckpointLoaderNF4 node.

Debug Logs

AttributeError: 'ForgeParams4bit' object has no attribute 'quant_storage'

Prompt executed in 224.19 seconds
got prompt
Failed to validate prompt for output 188:
* LayerMask: Florence2Ultra 186:
  - Required input is missing: image
Output will be ignored
Failed to validate prompt for output 189:
Output will be ignored
Failed to validate prompt for output 298:
* LayerMask: MaskBoxDetect 294:
  - Required input is missing: mask
Output will be ignored
[rgthree] Using rgthree's optimized recursive execution.
Requested to load Flux
Loading 1 new model
!!! Exception during processing!!! 'ForgeParams4bit' object has no attribute 'quant_storage'
Traceback (most recent call last):
  File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\comfy\model_management.py", line 319, in model_load
    self.real_model = self.model.patch_model_lowvram(device_to=patch_model_to, lowvram_model_memory=lowvram_model_memory, force_patch_weights=force_patch_weights)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\comfy\model_patcher.py", line 422, in patch_model_lowvram
    self.lowvram_load(device_to, lowvram_model_memory=lowvram_model_memory, force_patch_weights=force_patch_weights)
  File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\comfy\model_patcher.py", line 406, in lowvram_load
    m.to(device_to)
  File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 1152, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^
  File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 825, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 1150, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\custom_nodes\ComfyUI_bitsandbytes_NF4\__init__.py", line 64, in to
    quant_storage=self.quant_storage,
                  ^^^^^^^^^^^^^^^^^^

Other

No response
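The failing line in the traceback reads `quant_storage=self.quant_storage` inside the node's `to()` override, and the attribute is simply absent on the parameter object. A minimal sketch of that failure mode, with a defensive fallback of the kind that an upgraded bitsandbytes effectively provides (class and method names here are illustrative, not the node's actual code):

```python
class OldParams4bit:
    """Stands in for a 4-bit parameter object from a bitsandbytes
    version that predates the quant_storage attribute."""
    def __init__(self):
        self.quant_type = "nf4"  # present in both old and new versions


class ForgeParams4bitSketch(OldParams4bit):
    def to_kwargs_broken(self):
        # Mirrors the failing line in the traceback: assumes
        # quant_storage always exists on the instance.
        return {"quant_type": self.quant_type,
                "quant_storage": self.quant_storage}  # AttributeError here

    def to_kwargs_defensive(self):
        # Hedged alternative: fall back to a default when the
        # attribute is absent instead of raising.
        return {"quant_type": self.quant_type,
                "quant_storage": getattr(self, "quant_storage", None)}


p = ForgeParams4bitSketch()
try:
    p.to_kwargs_broken()
except AttributeError as e:
    print(f"broken path: {e}")
print("defensive path:", p.to_kwargs_defensive())
```

This is only a reproduction of the symptom; the practical fix discussed below is upgrading bitsandbytes so the attribute exists in the first place.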

kakachiex2 avatar Aug 12 '24 12:08 kakachiex2

Issues for this node should be submitted on the custom node's page: https://github.com/comfyanonymous/ComfyUI_bitsandbytes_NF4

JorgeR81 avatar Aug 12 '24 13:08 JorgeR81

OK, as it says in another thread, I upgraded bitsandbytes to a new version, and now it runs, but this is the result:

  • White render in the sampler
  • The sampler render takes much longer than before; the result is white and blurry: Screenshot 2024-08-12 092311

kakachiex2 avatar Aug 12 '24 13:08 kakachiex2

Yes, the right place to report issues is the custom node repo: https://github.com/comfyanonymous/ComfyUI_bitsandbytes_NF4

comfyanonymous avatar Aug 12 '24 16:08 comfyanonymous

python.exe -s -m pip install -U bitsandbytes

hillleaf avatar Aug 13 '24 02:08 hillleaf

python.exe -s -m pip install -U bitsandbytes

Where should I type it? I ran it in E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI using PowerShell, and I still have the same issue.

vsardenberg avatar Nov 11 '24 21:11 vsardenberg
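One common reason the upgrade "doesn't take" is that `pip` was run with a different interpreter than the one that launches ComfyUI; the portable build normally ships its own interpreter (typically under a `python_embeded` folder, which this thread does not confirm for this setup). A quick diagnostic is to run the following with the same `python.exe` that starts ComfyUI and check both the interpreter path and the installed version:

```python
# Prints which interpreter is running and whether bitsandbytes is
# installed in *that* interpreter's environment. If the path shown is
# not the one ComfyUI launches with, the pip upgrade landed elsewhere.
import sys

print("interpreter:", sys.executable)
try:
    import bitsandbytes
    print("bitsandbytes:", bitsandbytes.__version__)
except ImportError:
    print("bitsandbytes is not installed in this environment")
```

If the version printed is still the old one, rerun the upgrade as `<that python.exe> -s -m pip install -U bitsandbytes` so it targets the correct environment.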