
TrainLoraNode: cannot access local variable 'multi_res' where it is not associated with a value

Open rodrigoandrigo opened this issue 3 weeks ago • 5 comments

Custom Node Testing

Expected Behavior

The LoRA trains successfully.

Actual Behavior

(screenshot of the error attached)

Steps to Reproduce

Run the attached workflow.

(screenshot of the workflow attached)

Debug Logs

loaded completely; 95367431640625005117571072.00 MB usable, 5913.57 MB loaded, full load: True
!!! Exception during processing !!! cannot access local variable 'multi_res' where it is not associated with a value
Traceback (most recent call last):
  File "/home/rodrigoandrigo/Notebooks/ComfyUI/execution.py", line 510, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rodrigoandrigo/Notebooks/ComfyUI/execution.py", line 324, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rodrigoandrigo/Notebooks/ComfyUI/execution.py", line 292, in _async_map_node_over_list
    await process_inputs(input_data_all, 0, input_is_list=input_is_list)
  File "/home/rodrigoandrigo/Notebooks/ComfyUI/execution.py", line 286, in process_inputs
    result = f(**inputs)
             ^^^^^^^^^^^
  File "/home/rodrigoandrigo/Notebooks/ComfyUI/comfy_api/internal/__init__.py", line 149, in wrapped_func
    return method(locked_class, **inputs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rodrigoandrigo/Notebooks/ComfyUI/comfy_api/latest/_io.py", line 1275, in EXECUTE_NORMALIZED
    to_return = cls.execute(*args, **kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rodrigoandrigo/Notebooks/ComfyUI/comfy_extras/nodes_train.py", line 614, in execute
    real_dataset=latents if multi_res else None,
                            ^^^^^^^^^
UnboundLocalError: cannot access local variable 'multi_res' where it is not associated with a value

Other

No response
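
For anyone triaging this: the traceback shows a plain Python UnboundLocalError at comfy_extras/nodes_train.py line 614, where multi_res is read even though nothing assigned it on the code path that was taken. A minimal sketch of the failure pattern, with hypothetical names and heavily simplified (not the actual TrainLoraNode code):

    # Sketch only: a variable bound in some branches of an if/elif chain and
    # read unconditionally afterwards raises UnboundLocalError on the other
    # branches. Names here are hypothetical, not ComfyUI's.
    import torch

    def execute_sketch(latents):
        if isinstance(latents, list):            # multi-resolution input
            multi_res = True
        elif isinstance(latents, torch.Tensor):  # plain LATENT tensor
            num_images = latents.shape[0]        # multi_res never assigned here
        return latents if multi_res else None    # fails for a Tensor input

    execute_sketch(torch.zeros(1, 4, 64, 64))    # raises UnboundLocalError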

rodrigoandrigo, Dec 05 '25

Having the same issue.

(screenshot attached)

desgraci, Dec 06 '25

Same here (screenshot attached).

Full logs:

ComfyUI Error Report

Error Details

  • Node ID: 36
  • Node Type: TrainLoraNode
  • Exception Type: UnboundLocalError
  • Exception Message: cannot access local variable 'multi_res' where it is not associated with a value

Stack Trace

  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 515, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 329, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 297, in _async_map_node_over_list
    await process_inputs(input_data_all, 0, input_is_list=input_is_list)

  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 291, in process_inputs
    result = f(**inputs)
             ^^^^^^^^^^^

  File "E:\AI\ComfyUI\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
    return method(locked_class, **inputs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "E:\AI\ComfyUI\ComfyUI\comfy_api\latest\_io.py", line 1520, in EXECUTE_NORMALIZED
    to_return = cls.execute(*args, **kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "E:\AI\ComfyUI\ComfyUI\comfy_extras\nodes_train.py", line 614, in execute
    real_dataset=latents if multi_res else None,
                            ^^^^^^^^^

System Information

  • ComfyUI Version: 0.4.0
  • Arguments: ComfyUI\main.py --windows-standalone-build
  • OS: win32
  • Python Version: 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
  • Embedded Python: true
  • PyTorch Version: 2.9.0+rocmsdk20251116

Devices

  • Name: cuda:0 AMD Radeon RX 9070 : native
    • Type: cuda
    • VRAM Total: 17095983104
    • VRAM Free: 16937648128
    • Torch VRAM Total: 0
    • Torch VRAM Free: 0

Logs

2025-12-14T12:02:32.686512 - Adding extra search path checkpoints E:\AI\Models\Stable-diffusion
2025-12-14T12:02:32.686512 - Adding extra search path controlnet E:\AI\Models\ControlNet
2025-12-14T12:02:32.686512 - Adding extra search path loras E:\AI\Models\Lora
2025-12-14T12:02:32.687514 - Adding extra search path vae E:\AI\Models\VAE
2025-12-14T12:02:32.687514 - Adding extra search path upscale_models E:\AI\Models\ESRGAN
2025-12-14T12:02:32.687514 - Adding extra search path upscale_models E:\AI\Models\GFPGAN
2025-12-14T12:02:32.805557 - [WARNING] failed to run amdgpu-arch: binary not found.2025-12-14T12:02:32.805557 - 
2025-12-14T12:02:34.086615 - Checkpoint files will always be loaded safely.
2025-12-14T12:02:34.611838 - Total VRAM 16304 MB, total RAM 65462 MB
2025-12-14T12:02:34.611838 - pytorch version: 2.9.0+rocmsdk20251116
2025-12-14T12:02:34.612839 - Set: torch.backends.cudnn.enabled = False for better AMD performance.
2025-12-14T12:02:34.612839 - AMD arch: gfx1201
2025-12-14T12:02:34.612839 - ROCm version: (7, 1)
2025-12-14T12:02:34.612839 - Set vram state to: NORMAL_VRAM
2025-12-14T12:02:34.613839 - Device: cuda:0 AMD Radeon RX 9070 : native
2025-12-14T12:02:34.630621 - Enabled pinned memory 29457.0
2025-12-14T12:02:35.565228 - Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention
2025-12-14T12:02:37.668293 - Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr  8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
2025-12-14T12:02:37.668293 - ComfyUI version: 0.4.0
2025-12-14T12:02:37.708065 - ComfyUI frontend version: 1.34.8
2025-12-14T12:02:37.710067 - [Prompt Server] web root: E:\AI\ComfyUI\python_embeded\Lib\site-packages\comfyui_frontend_package\static
2025-12-14T12:02:38.233166 - Total VRAM 16304 MB, total RAM 65462 MB
2025-12-14T12:02:38.233166 - pytorch version: 2.9.0+rocmsdk20251116
2025-12-14T12:02:38.233166 - Set: torch.backends.cudnn.enabled = False for better AMD performance.
2025-12-14T12:02:38.233166 - AMD arch: gfx1201
2025-12-14T12:02:38.233166 - ROCm version: (7, 1)
2025-12-14T12:02:38.234166 - Set vram state to: NORMAL_VRAM
2025-12-14T12:02:38.234166 - Device: cuda:0 AMD Radeon RX 9070 : native
2025-12-14T12:02:38.252184 - Enabled pinned memory 29457.0
2025-12-14T12:02:38.698420 - 
Import times for custom nodes:
2025-12-14T12:02:38.698420 -    0.0 seconds: E:\AI\ComfyUI\ComfyUI\custom_nodes\websocket_image_save.py
2025-12-14T12:02:38.698420 - 
2025-12-14T12:02:39.047459 - Context impl SQLiteImpl.
2025-12-14T12:02:39.047459 - Will assume non-transactional DDL.
2025-12-14T12:02:39.048460 - No target revision found.
2025-12-14T12:02:39.132470 - Starting server

2025-12-14T12:02:39.133438 - To see the GUI go to: http://127.0.0.1:8188
2025-12-14T12:09:40.735400 - got prompt
2025-12-14T12:09:41.052928 - model weight dtype torch.float16, manual cast: None
2025-12-14T12:09:41.053929 - model_type EPS
2025-12-14T12:09:41.998948 - Using split attention in VAE
2025-12-14T12:09:42.000950 - Using split attention in VAE
2025-12-14T12:09:42.109158 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-12-14T12:09:43.175915 - Requested to load SDXLClipModel
2025-12-14T12:09:43.198936 - loaded completely; 95367431640625005117571072.00 MB usable, 1560.80 MB loaded, full load: True
2025-12-14T12:09:43.202940 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
2025-12-14T12:09:44.405770 - Requested to load AutoencoderKL
2025-12-14T12:09:45.059614 - 0 models unloaded.
2025-12-14T12:09:45.113675 - loaded completely; 6409.05 MB usable, 159.56 MB loaded, full load: True
2025-12-14T12:09:46.404533 - Warning: Ran out of memory when regular VAE encoding, retrying with tiled VAE encoding.
2025-12-14T12:09:57.197330 - Total Images: 1, Total Captions: 1
2025-12-14T12:09:57.611584 - Requested to load SDXL
2025-12-14T12:09:57.845361 - 0 models unloaded.
2025-12-14T12:09:59.220290 - loaded completely; 95367431640625005117571072.00 MB usable, 4947.19 MB loaded, full load: True
2025-12-14T12:09:59.236304 - !!! Exception during processing !!! cannot access local variable 'multi_res' where it is not associated with a value
2025-12-14T12:09:59.240307 - Traceback (most recent call last):
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 515, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 329, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 297, in _async_map_node_over_list
    await process_inputs(input_data_all, 0, input_is_list=input_is_list)
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 291, in process_inputs
    result = f(**inputs)
             ^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
    return method(locked_class, **inputs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\comfy_api\latest\_io.py", line 1520, in EXECUTE_NORMALIZED
    to_return = cls.execute(*args, **kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\comfy_extras\nodes_train.py", line 614, in execute
    real_dataset=latents if multi_res else None,
                            ^^^^^^^^^
UnboundLocalError: cannot access local variable 'multi_res' where it is not associated with a value

2025-12-14T12:09:59.241308 - Prompt executed in 18.50 seconds
2025-12-14T12:11:14.546497 - got prompt
2025-12-14T12:11:14.897944 - model weight dtype torch.float16, manual cast: None
2025-12-14T12:11:14.897944 - model_type V_PREDICTION
2025-12-14T12:11:22.479504 - Using split attention in VAE
2025-12-14T12:11:22.480506 - Using split attention in VAE
2025-12-14T12:11:23.399971 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-12-14T12:11:26.894136 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-12-14T12:11:27.138354 - Requested to load SD2ClipModel
2025-12-14T12:11:27.299959 - loaded completely; 9331.14 MB usable, 675.26 MB loaded, full load: True
2025-12-14T12:11:27.339995 - Requested to load AutoencoderKL
2025-12-14T12:11:27.645100 - 0 models unloaded.
2025-12-14T12:11:27.698148 - loaded completely; 4340.14 MB usable, 159.56 MB loaded, full load: True
2025-12-14T12:11:29.338166 - Warning: Ran out of memory when regular VAE encoding, retrying with tiled VAE encoding.
2025-12-14T12:11:39.141094 - Total Images: 1, Total Captions: 1
2025-12-14T12:11:39.292230 - Requested to load BaseModel
2025-12-14T12:11:39.774665 - 0 models unloaded.
2025-12-14T12:11:40.292024 - loaded completely; 95367431640625005117571072.00 MB usable, 1668.71 MB loaded, full load: True
2025-12-14T12:11:40.298029 - !!! Exception during processing !!! cannot access local variable 'multi_res' where it is not associated with a value
2025-12-14T12:11:40.299030 - Traceback (most recent call last):
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 515, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 329, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 297, in _async_map_node_over_list
    await process_inputs(input_data_all, 0, input_is_list=input_is_list)
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 291, in process_inputs
    result = f(**inputs)
             ^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
    return method(locked_class, **inputs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\comfy_api\latest\_io.py", line 1520, in EXECUTE_NORMALIZED
    to_return = cls.execute(*args, **kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\comfy_extras\nodes_train.py", line 614, in execute
    real_dataset=latents if multi_res else None,
                            ^^^^^^^^^
UnboundLocalError: cannot access local variable 'multi_res' where it is not associated with a value

2025-12-14T12:11:40.300031 - Prompt executed in 25.75 seconds
2025-12-14T12:13:19.711530 - got prompt
2025-12-14T12:13:19.718536 - Total Images: 1, Total Captions: 1
2025-12-14T12:13:19.996786 - Requested to load BaseModel
2025-12-14T12:13:19.997787 - 0 models unloaded.
2025-12-14T12:13:21.088484 - loaded completely; 95367431640625005117571072.00 MB usable, 1668.71 MB loaded, full load: True
2025-12-14T12:13:21.095491 - !!! Exception during processing !!! cannot access local variable 'multi_res' where it is not associated with a value
2025-12-14T12:13:21.096492 - Traceback (most recent call last):
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 515, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 329, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 297, in _async_map_node_over_list
    await process_inputs(input_data_all, 0, input_is_list=input_is_list)
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 291, in process_inputs
    result = f(**inputs)
             ^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
    return method(locked_class, **inputs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\comfy_api\latest\_io.py", line 1520, in EXECUTE_NORMALIZED
    to_return = cls.execute(*args, **kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\comfy_extras\nodes_train.py", line 614, in execute
    real_dataset=latents if multi_res else None,
                            ^^^^^^^^^
UnboundLocalError: cannot access local variable 'multi_res' where it is not associated with a value

2025-12-14T12:13:21.097493 - Prompt executed in 1.38 seconds
2025-12-14T12:13:39.706445 - got prompt
2025-12-14T12:13:39.714453 - Total Images: 1, Total Captions: 1
2025-12-14T12:13:39.980230 - Requested to load BaseModel
2025-12-14T12:13:39.981231 - 0 models unloaded.
2025-12-14T12:13:41.054894 - loaded completely; 95367431640625005117571072.00 MB usable, 1668.71 MB loaded, full load: True
2025-12-14T12:13:41.060900 - !!! Exception during processing !!! cannot access local variable 'multi_res' where it is not associated with a value
2025-12-14T12:13:41.061901 - Traceback (most recent call last):
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 515, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 329, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 297, in _async_map_node_over_list
    await process_inputs(input_data_all, 0, input_is_list=input_is_list)
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 291, in process_inputs
    result = f(**inputs)
             ^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
    return method(locked_class, **inputs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\comfy_api\latest\_io.py", line 1520, in EXECUTE_NORMALIZED
    to_return = cls.execute(*args, **kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\comfy_extras\nodes_train.py", line 614, in execute
    real_dataset=latents if multi_res else None,
                            ^^^^^^^^^
UnboundLocalError: cannot access local variable 'multi_res' where it is not associated with a value

2025-12-14T12:13:41.063902 - Prompt executed in 1.35 seconds
2025-12-14T12:14:04.383652 - got prompt
2025-12-14T12:14:04.390658 - Total Images: 1, Total Captions: 1
2025-12-14T12:14:04.638121 - Requested to load BaseModel
2025-12-14T12:14:04.638121 - 0 models unloaded.
2025-12-14T12:14:05.721798 - loaded completely; 95367431640625005117571072.00 MB usable, 1668.71 MB loaded, full load: True
2025-12-14T12:14:05.728805 - !!! Exception during processing !!! cannot access local variable 'multi_res' where it is not associated with a value
2025-12-14T12:14:05.729806 - Traceback (most recent call last):
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 515, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 329, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 297, in _async_map_node_over_list
    await process_inputs(input_data_all, 0, input_is_list=input_is_list)
  File "E:\AI\ComfyUI\ComfyUI\execution.py", line 291, in process_inputs
    result = f(**inputs)
             ^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
    return method(locked_class, **inputs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\comfy_api\latest\_io.py", line 1520, in EXECUTE_NORMALIZED
    to_return = cls.execute(*args, **kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\AI\ComfyUI\ComfyUI\comfy_extras\nodes_train.py", line 614, in execute
    real_dataset=latents if multi_res else None,
                            ^^^^^^^^^
UnboundLocalError: cannot access local variable 'multi_res' where it is not associated with a value

2025-12-14T12:14:05.730807 - Prompt executed in 1.34 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"id":"076156ec-0997-4f02-8256-db09fb5df388","revision":0,"last_node_id":43,"last_link_id":75,"nodes":[{"id":37,"type":"CheckpointLoaderSimple","pos":[412.6626585490824,376.93614484143416],"size":[270,98],"flags":{},"order":0,"mode":0,"inputs":[{"localized_name":"ckpt_name","name":"ckpt_name","type":"COMBO","widget":{"name":"ckpt_name"},"link":null}],"outputs":[{"localized_name":"MODEL","name":"MODEL","type":"MODEL","links":[66]},{"localized_name":"CLIP","name":"CLIP","type":"CLIP","links":[67]},{"localized_name":"VAE","name":"VAE","type":"VAE","links":[69]}],"properties":{"Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["stable-diffusion-v2-1_768-ema-pruned.safetensors"]},{"id":39,"type":"VAEEncode","pos":[1007.839968842095,807.5131546085265],"size":[140,46],"flags":{},"order":3,"mode":0,"inputs":[{"localized_name":"pixels","name":"pixels","type":"IMAGE","link":71},{"localized_name":"vae","name":"vae","type":"VAE","link":69}],"outputs":[{"localized_name":"LATENT","name":"LATENT","type":"LATENT","links":[70]}],"properties":{"Node name for S&R":"VAEEncode"}},{"id":40,"type":"LoadImage","pos":[551.2351604273695,702.667174142711],"size":[282.5166702270508,314],"flags":{},"order":1,"mode":0,"inputs":[{"localized_name":"image","name":"image","type":"COMBO","widget":{"name":"image"},"link":null},{"localized_name":"choose file to upload","name":"upload","type":"IMAGEUPLOAD","widget":{"name":"upload"},"link":null}],"outputs":[{"localized_name":"IMAGE","name":"IMAGE","type":"IMAGE","links":[71]},{"localized_name":"MASK","name":"MASK","type":"MASK","links":null}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["Figure 2_2.png","image"]},{"id":36,"type":"TrainLoraNode","pos":[1378.4336219134696,321.0149037956032],"size":[307.9166717529297,430],"flags":{},"order":4,"mode":0,"inputs":[{"localized_name":"model","name":"model","type":"MODEL","link":66},{"localized_name":"latents","name":"latents","type":"LATENT","link":70},{"localized_name":"positive","name":"positive","type":"CONDITIONING","link":68},{"localized_name":"batch_size","name":"batch_size","type":"INT","widget":{"name":"batch_size"},"link":null},{"localized_name":"grad_accumulation_steps","name":"grad_accumulation_steps","type":"INT","widget":{"name":"grad_accumulation_steps"},"link":null},{"localized_name":"steps","name":"steps","type":"INT","widget":{"name":"steps"},"link":null},{"localized_name":"learning_rate","name":"learning_rate","type":"FLOAT","widget":{"name":"learning_rate"},"link":null},{"localized_name":"rank","name":"rank","type":"INT","widget":{"name":"rank"},"link":null},{"localized_name":"optimizer","name":"optimizer","type":"COMBO","widget":{"name":"optimizer"},"link":null},{"localized_name":"loss_function","name":"loss_function","type":"COMBO","widget":{"name":"loss_function"},"link":null},{"localized_name":"seed","name":"seed","type":"INT","widget":{"name":"seed"},"link":null},{"localized_name":"training_dtype","name":"training_dtype","type":"COMBO","widget":{"name":"training_dtype"},"link":null},{"localized_name":"lora_dtype","name":"lora_dtype","type":"COMBO","widget":{"name":"lora_dtype"},"link":null},{"localized_name":"algorithm","name":"algorithm","type":"COMBO","widget":{"name":"algorithm"},"link":null},{"localized_name":"gradient_checkpointing","name":"gradient_checkpointing","type":"BOOLEAN","widget":{"name":"gradient_checkpointing"},"link":null},{"localized_name":"existing_lora","name":"existing_lora","type":"COMBO","widget":{"name":"existing_lora"},"link":null}],"outputs":[{"localize
d_name":"model","name":"model","type":"MODEL","links":null},{"localized_name":"lora","name":"lora","type":"LORA_MODEL","links":[73]},{"localized_name":"loss_map","name":"loss_map","type":"LOSS_MAP","links":[75]},{"localized_name":"steps","name":"steps","type":"INT","links":[74]}],"properties":{"Node name for S&R":"TrainLoraNode"},"widgets_values":[1,1,10,0.0005,8,"AdamW","MSE",688186843033973,"randomize","bf16","bf16","LoRA",true,"[None]"]},{"id":42,"type":"SaveLoRA","pos":[1740.267038714372,326.1776249315917],"size":[270,82],"flags":{},"order":6,"mode":0,"inputs":[{"localized_name":"lora","name":"lora","type":"LORA_MODEL","link":73},{"localized_name":"prefix","name":"prefix","type":"STRING","widget":{"name":"prefix"},"link":null},{"localized_name":"steps","name":"steps","shape":7,"type":"INT","widget":{"name":"steps"},"link":74}],"outputs":[],"properties":{"Node name for S&R":"SaveLoRA"},"widgets_values":["loras/FARRELL-LORA-TEST-STEPS_10",0]},{"id":43,"type":"LossGraphNode","pos":[1753.7670387143714,471.77762493159156],"size":[270,58],"flags":{},"order":5,"mode":0,"inputs":[{"localized_name":"loss","name":"loss","type":"LOSS_MAP","link":75},{"localized_name":"filename_prefix","name":"filename_prefix","type":"STRING","widget":{"name":"filename_prefix"},"link":null}],"outputs":[],"properties":{"Node name for S&R":"LossGraphNode"},"widgets_values":["loss_graph"]},{"id":38,"type":"CLIPTextEncode","pos":[830.3338080607281,431.9622154650251],"size":[400,200],"flags":{},"order":2,"mode":0,"inputs":[{"localized_name":"clip","name":"clip","type":"CLIP","link":67},{"localized_name":"text","name":"text","type":"STRING","widget":{"name":"text"},"link":null}],"outputs":[{"localized_name":"CONDITIONING","name":"CONDITIONING","type":"CONDITIONING","links":[68]}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["FARRELL-LORA-TEST"]}],"links":[[66,37,0,36,0,"MODEL"],[67,37,1,38,0,"CLIP"],[68,38,0,36,2,"CONDITIONING"],[69,37,2,39,1,"VAE"],[70,39,0,36,1,"LATENT"],[71,40,0,39,0,"IMAGE"],[73,36,1,42,0,"LORA_MODEL"],[74,36,3,42,2,"INT"],[75,36,2,43,0,"LOSS_MAP"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.9090909090909091,"offset":[-319.1670387143719,-235.77762493159173]},"workflowRendererVersion":"LG"},"version":0.4}

Additional Context

(Please add any additional context or steps to reproduce the error here)

chenshaoju, Dec 14 '25

I made a quick change based on https://github.com/comfyanonymous/ComfyUI/tree/5ac3b26a7dedb9b13c681abe8733c54f13353273.

Unfortunately, my graphics card only has 16 GB of VRAM, so I can't verify whether the change works; I hit the following out-of-memory error:

HIP out of memory. Tried to allocate 2.80 GiB. GPU 0 has a total capacity of 15.92 GiB of which 0 bytes is free. Of the allocated memory 26.27 GiB is allocated by PyTorch, and 2.66 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

Here are the files I modified.

Copy this file into the comfy_extras/ folder, overwriting the existing one: nodes_train.py

       485            elif isinstance(latents, torch.Tensor):
       486                latents = latents.to(dtype)
       487                num_images = latents.shape[0]
       488 +              multi_res = False
       489            else:
       490                logging.error(f"Invalid latents type: {type(latents)}")
       491 +              # Set default values to avoid subsequent errors.
       492 +              multi_res = False
       493 +              num_images = 0
       494
       495            logging.info(f"Total Images: {num_images}, Total Captions: {len(positive)}")
       496            if len(positive) == 1 and num_images > 1:
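
An equivalent way to fix nodes_train.py is to bind the defaults once, before the if/elif chain, so no branch can leave them undefined. A sketch of that variant as a standalone helper (hypothetical function; the real TrainLoraNode.execute does more work per branch):

    # Hypothetical helper illustrating the "defaults before branching" variant
    # of the same fix.
    import logging
    import torch

    def normalize_latents(latents, dtype=torch.float32):
        multi_res = False   # defaults cover every path, including the error path
        num_images = 0
        if isinstance(latents, list):             # multi-resolution input
            multi_res = True
            num_images = len(latents)
        elif isinstance(latents, torch.Tensor):   # plain LATENT tensor
            latents = latents.to(dtype)
            num_images = latents.shape[0]
        else:
            logging.error(f"Invalid latents type: {type(latents)}")
        return latents, multi_res, num_images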

Copy this file into the comfy/ldm/modules/ folder, overwriting the existing one: sub_quadratic_attention.py

       171            del attn_scores
       172        except model_management.OOM_EXCEPTION:
       173            logging.warning("ran out of memory while running softmax in  _get_attention_scores_no_kv_chunking, trying slower in place softmax instead")
       174 -          attn_scores -= attn_scores.max(dim=-1, keepdim=True).values  # noqa: F821 attn_scores is not defined
       175 -          torch.exp(attn_scores, out=attn_scores)
       174 +          attn_scores = attn_scores - attn_scores.max(dim=-1, keepdim=True).values
       175 +          attn_scores = torch.exp(attn_scores)
       176            summed = torch.sum(attn_scores, dim=-1, keepdim=True)
       177 -          attn_scores /= summed
       177 +          attn_scores = attn_scores / summed
       178            attn_probs = attn_scores
       179
       180        hidden_states_slice = torch.bmm(attn_probs.to(value.dtype), value)
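
For context on the sub_quadratic_attention.py change: the original OOM fallback updates attn_scores in place (-=, torch.exp(..., out=...), /=). Rewriting those steps as out-of-place assignments is presumably meant to keep the fallback autograd-friendly, since in-place and out= updates can be rejected when the tensor is still needed for gradient computation, as it would be during LoRA training. A standalone sketch of the same numerically stable softmax (illustrative only; the real function also chunks over keys and values):

    # Out-of-place, numerically stable softmax equivalent to the patched
    # fallback above. Not the ComfyUI function itself.
    import torch

    def stable_softmax(attn_scores: torch.Tensor) -> torch.Tensor:
        attn_scores = attn_scores - attn_scores.max(dim=-1, keepdim=True).values
        attn_scores = torch.exp(attn_scores)
        summed = torch.sum(attn_scores, dim=-1, keepdim=True)
        return attn_scores / summed

    x = torch.randn(2, 8, requires_grad=True)   # grad-tracked, as during training
    assert torch.allclose(stable_softmax(x), torch.softmax(x, dim=-1), atol=1e-6)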

If you have time, please give it a try.

chenshaoju, Dec 15 '25