
(IMPORT FAILED) ComfyUI Inpaint Nodes?

Open biu5332 opened this issue 1 year ago • 10 comments

(Image attached.)

biu5332 • Sep 10 '24 08:09

I have the same problem.

s-akamatsu-vb • Sep 11 '24 01:09

I can't install it either because of this problem.

Koxae • Sep 11 '24 23:09

1. Update ComfyUI and run pip install -r requirements.txt. 2. Reinstall this node.

This is how I solved the problem; I hope it helps you guys too!
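For reference, on a git-based ComfyUI install the update step looks roughly like this (paths are illustrative):

cd ComfyUI
git pull
pip install -r requirements.txt

then delete and re-clone the comfyui-inpaint-nodes folder under custom_nodes, or reinstall it through ComfyUI-Manager if you use it.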

youxiaohanpian • Sep 12 '24 09:09

1. Update ComfyUI and run pip install -r requirements.txt. 2. Reinstall this node.

This is how I solved the problem; I hope it helps you guys too!

Thanks! Unfortunately, it didn't work for me. Maybe there is some dependency on the Python version, I'm not sure.

Koxae • Sep 12 '24 10:09

You will find the reason for the failed import in ComfyUI's output (terminal window).

Acly • Sep 12 '24 11:09

You will find the reason for the failed import in ComfyUI's output (terminal window).

Thanks! I see the following in the log:

File "C:\ai\ComfyUI\ComfyUI\custom_nodes\comfyui-inpaint-nodes\nodes.py", line 63, in <module>
    raise RuntimeError(too_old_msg)
RuntimeError: comfyui-inpaint-nodes requires a newer version of ComfyUI (v0.1.1 or later), please update!

Cannot import C:\ai\ComfyUI\ComfyUI\custom_nodes\comfyui-inpaint-nodes module for custom nodes: comfyui-inpaint-nodes requires a newer version of ComfyUI (v0.1.1 or later), please update!

But ComfyUI has already been updated to the latest version (ComfyUI v0.2.2 release). (Screenshot attached.)
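For context, this message is raised by a check inside the node pack's own nodes.py (visible in the traceback above), not by ComfyUI itself. One way such a gate can misfire on an up-to-date ComfyUI is if it probes the host for functions it needs instead of reading a version number, and another extension has removed or overwritten them. A hypothetical sketch of that pattern (not necessarily the pack's actual check):

import importlib

too_old_msg = (
    "comfyui-inpaint-nodes requires a newer version of ComfyUI "
    "(v0.1.1 or later), please update!"
)

def check_required_features(module_name="comfy.lora", required=("calculate_weight",)):
    # Import the host module and verify every required function is present;
    # if anything is missing, report the same "too old" error seen above.
    module = importlib.import_module(module_name)
    if any(not hasattr(module, name) for name in required):
        raise RuntimeError(too_old_msg)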

Koxae • Sep 12 '24 12:09

I was able to fix it. I can't be 100% sure about the exact sequence, because some of my steps simply repeated solutions I had found rather than being deliberate choices. These are the commands I ran:

python -m pip uninstall torch
pip install h5py
pip install typing-extensions
pip install wheel
pip install tensorflow==2.13.0rc2
python -m pip install typing_extensions==4.11
python -m pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --extra-index-url https://download.pytorch.org/whl/cu121
pip install insightface==0.7.3

Koxae • Sep 12 '24 23:09

I have the same problem.

(Two images attached.)

lsky-walt • Sep 16 '24 12:09

I also encountered this problem, and updating ComfyUI did not solve it. I finally found that the root cause was that the code in the comfy/lora.py file had been changed (probably by a custom node). Pasting the following code at the end of lora.py solved the problem (a git-based alternative is sketched after the code).

def weight_decompose(dora_scale, weight, lora_diff, alpha, strength, intermediate_dtype):
   dora_scale = comfy.model_management.cast_to_device(dora_scale, weight.device, intermediate_dtype)
   lora_diff *= alpha
   weight_calc = weight + lora_diff.type(weight.dtype)
   weight_norm = (
       weight_calc.transpose(0, 1)
       .reshape(weight_calc.shape[1], -1)
       .norm(dim=1, keepdim=True)
       .reshape(weight_calc.shape[1], *[1] * (weight_calc.dim() - 1))
       .transpose(0, 1)
   )

   weight_calc *= (dora_scale / weight_norm).type(weight.dtype)
   if strength != 1.0:
       weight_calc -= weight
       weight += strength * (weight_calc)
   else:
       weight[:] = weight_calc
   return weight

def pad_tensor_to_shape(tensor: torch.Tensor, new_shape: list[int]) -> torch.Tensor:
   """
   Pad a tensor to a new shape with zeros.

   Args:
       tensor (torch.Tensor): The original tensor to be padded.
       new_shape (List[int]): The desired shape of the padded tensor.

   Returns:
       torch.Tensor: A new tensor padded with zeros to the specified shape.

   Note:
       If the new shape is smaller than the original tensor in any dimension,
       the original tensor will be truncated in that dimension.
   """
   if any([new_shape[i] < tensor.shape[i] for i in range(len(new_shape))]):
       raise ValueError("The new shape must be larger than the original tensor in all dimensions")

   if len(new_shape) != len(tensor.shape):
       raise ValueError("The new shape must have the same number of dimensions as the original tensor")

   # Create a new tensor filled with zeros
   padded_tensor = torch.zeros(new_shape, dtype=tensor.dtype, device=tensor.device)

   # Create slicing tuples for both tensors
   orig_slices = tuple(slice(0, dim) for dim in tensor.shape)
   new_slices = tuple(slice(0, dim) for dim in tensor.shape)

   # Copy the original tensor into the new tensor
   padded_tensor[new_slices] = tensor[orig_slices]

   return padded_tensor

def calculate_weight(patches, weight, key, intermediate_dtype=torch.float32):
   for p in patches:
       strength = p[0]
       v = p[1]
       strength_model = p[2]
       offset = p[3]
       function = p[4]
       if function is None:
           function = lambda a: a

       old_weight = None
       if offset is not None:
           old_weight = weight
           weight = weight.narrow(offset[0], offset[1], offset[2])

       if strength_model != 1.0:
           weight *= strength_model

       if isinstance(v, list):
           v = (calculate_weight(v[1:], comfy.model_management.cast_to_device(v[0], weight.device, intermediate_dtype, copy=True), key, intermediate_dtype=intermediate_dtype), )

       if len(v) == 1:
           patch_type = "diff"
       elif len(v) == 2:
           patch_type = v[0]
           v = v[1]

       if patch_type == "diff":
           diff: torch.Tensor = v[0]
           # An extra flag to pad the weight if the diff's shape is larger than the weight
           do_pad_weight = len(v) > 1 and v[1]['pad_weight']
           if do_pad_weight and diff.shape != weight.shape:
               logging.info("Pad weight {} from {} to shape: {}".format(key, weight.shape, diff.shape))
               weight = pad_tensor_to_shape(weight, diff.shape)

           if strength != 0.0:
               if diff.shape != weight.shape:
                   logging.warning("WARNING SHAPE MISMATCH {} WEIGHT NOT MERGED {} != {}".format(key, diff.shape, weight.shape))
               else:
                   weight += function(strength * comfy.model_management.cast_to_device(diff, weight.device, weight.dtype))
       elif patch_type == "lora": #lora/locon
           mat1 = comfy.model_management.cast_to_device(v[0], weight.device, intermediate_dtype)
           mat2 = comfy.model_management.cast_to_device(v[1], weight.device, intermediate_dtype)
           dora_scale = v[4]
           if v[2] is not None:
               alpha = v[2] / mat2.shape[0]
           else:
               alpha = 1.0

           if v[3] is not None:
               #locon mid weights, hopefully the math is fine because I didn't properly test it
               mat3 = comfy.model_management.cast_to_device(v[3], weight.device, intermediate_dtype)
               final_shape = [mat2.shape[1], mat2.shape[0], mat3.shape[2], mat3.shape[3]]
               mat2 = torch.mm(mat2.transpose(0, 1).flatten(start_dim=1), mat3.transpose(0, 1).flatten(start_dim=1)).reshape(final_shape).transpose(0, 1)
           try:
               lora_diff = torch.mm(mat1.flatten(start_dim=1), mat2.flatten(start_dim=1)).reshape(weight.shape)
               if dora_scale is not None:
                   weight = function(weight_decompose(dora_scale, weight, lora_diff, alpha, strength, intermediate_dtype))
               else:
                   weight += function(((strength * alpha) * lora_diff).type(weight.dtype))
           except Exception as e:
               logging.error("ERROR {} {} {}".format(patch_type, key, e))
       elif patch_type == "lokr":
           w1 = v[0]
           w2 = v[1]
           w1_a = v[3]
           w1_b = v[4]
           w2_a = v[5]
           w2_b = v[6]
           t2 = v[7]
           dora_scale = v[8]
           dim = None

           if w1 is None:
               dim = w1_b.shape[0]
               w1 = torch.mm(comfy.model_management.cast_to_device(w1_a, weight.device, intermediate_dtype),
                               comfy.model_management.cast_to_device(w1_b, weight.device, intermediate_dtype))
           else:
               w1 = comfy.model_management.cast_to_device(w1, weight.device, intermediate_dtype)

           if w2 is None:
               dim = w2_b.shape[0]
               if t2 is None:
                   w2 = torch.mm(comfy.model_management.cast_to_device(w2_a, weight.device, intermediate_dtype),
                                   comfy.model_management.cast_to_device(w2_b, weight.device, intermediate_dtype))
               else:
                   w2 = torch.einsum('i j k l, j r, i p -> p r k l',
                                       comfy.model_management.cast_to_device(t2, weight.device, intermediate_dtype),
                                       comfy.model_management.cast_to_device(w2_b, weight.device, intermediate_dtype),
                                       comfy.model_management.cast_to_device(w2_a, weight.device, intermediate_dtype))
           else:
               w2 = comfy.model_management.cast_to_device(w2, weight.device, intermediate_dtype)

           if len(w2.shape) == 4:
               w1 = w1.unsqueeze(2).unsqueeze(2)
           if v[2] is not None and dim is not None:
               alpha = v[2] / dim
           else:
               alpha = 1.0

           try:
               lora_diff = torch.kron(w1, w2).reshape(weight.shape)
               if dora_scale is not None:
                   weight = function(weight_decompose(dora_scale, weight, lora_diff, alpha, strength, intermediate_dtype))
               else:
                   weight += function(((strength * alpha) * lora_diff).type(weight.dtype))
           except Exception as e:
               logging.error("ERROR {} {} {}".format(patch_type, key, e))
       elif patch_type == "loha":
           w1a = v[0]
           w1b = v[1]
           if v[2] is not None:
               alpha = v[2] / w1b.shape[0]
           else:
               alpha = 1.0

           w2a = v[3]
           w2b = v[4]
           dora_scale = v[7]
           if v[5] is not None: #cp decomposition
               t1 = v[5]
               t2 = v[6]
               m1 = torch.einsum('i j k l, j r, i p -> p r k l',
                                   comfy.model_management.cast_to_device(t1, weight.device, intermediate_dtype),
                                   comfy.model_management.cast_to_device(w1b, weight.device, intermediate_dtype),
                                   comfy.model_management.cast_to_device(w1a, weight.device, intermediate_dtype))

               m2 = torch.einsum('i j k l, j r, i p -> p r k l',
                                   comfy.model_management.cast_to_device(t2, weight.device, intermediate_dtype),
                                   comfy.model_management.cast_to_device(w2b, weight.device, intermediate_dtype),
                                   comfy.model_management.cast_to_device(w2a, weight.device, intermediate_dtype))
           else:
               m1 = torch.mm(comfy.model_management.cast_to_device(w1a, weight.device, intermediate_dtype),
                               comfy.model_management.cast_to_device(w1b, weight.device, intermediate_dtype))
               m2 = torch.mm(comfy.model_management.cast_to_device(w2a, weight.device, intermediate_dtype),
                               comfy.model_management.cast_to_device(w2b, weight.device, intermediate_dtype))

           try:
               lora_diff = (m1 * m2).reshape(weight.shape)
               if dora_scale is not None:
                   weight = function(weight_decompose(dora_scale, weight, lora_diff, alpha, strength, intermediate_dtype))
               else:
                   weight += function(((strength * alpha) * lora_diff).type(weight.dtype))
           except Exception as e:
               logging.error("ERROR {} {} {}".format(patch_type, key, e))
       elif patch_type == "glora":
           dora_scale = v[5]

           old_glora = False
           if v[3].shape[1] == v[2].shape[0] == v[0].shape[0] == v[1].shape[1]:
               rank = v[0].shape[0]
               old_glora = True

           if v[3].shape[0] == v[2].shape[1] == v[0].shape[1] == v[1].shape[0]:
               if old_glora and v[1].shape[0] == weight.shape[0] and weight.shape[0] == weight.shape[1]:
                   pass
               else:
                   old_glora = False
                   rank = v[1].shape[0]

           a1 = comfy.model_management.cast_to_device(v[0].flatten(start_dim=1), weight.device, intermediate_dtype)
           a2 = comfy.model_management.cast_to_device(v[1].flatten(start_dim=1), weight.device, intermediate_dtype)
           b1 = comfy.model_management.cast_to_device(v[2].flatten(start_dim=1), weight.device, intermediate_dtype)
           b2 = comfy.model_management.cast_to_device(v[3].flatten(start_dim=1), weight.device, intermediate_dtype)

           if v[4] is not None:
               alpha = v[4] / rank
           else:
               alpha = 1.0

           try:
               if old_glora:
                   lora_diff = (torch.mm(b2, b1) + torch.mm(torch.mm(weight.flatten(start_dim=1).to(dtype=intermediate_dtype), a2), a1)).reshape(weight.shape) #old lycoris glora
               else:
                   if weight.dim() > 2:
                       lora_diff = torch.einsum("o i ..., i j -> o j ...", torch.einsum("o i ..., i j -> o j ...", weight.to(dtype=intermediate_dtype), a1), a2).reshape(weight.shape)
                   else:
                       lora_diff = torch.mm(torch.mm(weight.to(dtype=intermediate_dtype), a1), a2).reshape(weight.shape)
                   lora_diff += torch.mm(b1, b2).reshape(weight.shape)

               if dora_scale is not None:
                   weight = function(weight_decompose(dora_scale, weight, lora_diff, alpha, strength, intermediate_dtype))
               else:
                   weight += function(((strength * alpha) * lora_diff).type(weight.dtype))
           except Exception as e:
               logging.error("ERROR {} {} {}".format(patch_type, key, e))
       else:
           logging.warning("patch type not recognized {} {}".format(patch_type, key))

       if old_weight is not None:
           weight = old_weight

   return weight
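If ComfyUI itself was installed from git, another way to get the stock comfy/lora.py back, instead of pasting code, is to let git show and restore the modified file. A minimal sketch, assuming the ComfyUI checkout is the current directory:

git status comfy/lora.py
git diff comfy/lora.py
git checkout -- comfy/lora.py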

balala8 • Sep 26 '24 13:09

(Quoting balala8's comment above, including the full lora.py code.)

This one works.

But you also have to add

import torch

at the beginning of comfy/lora.py
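For completeness, the pasted functions reference torch, logging, and comfy.model_management, so all three must be importable at the top of comfy/lora.py; add whichever of these imports is not already there:

# Imports used by the pasted weight_decompose / pad_tensor_to_shape /
# calculate_weight functions; add only the ones missing from lora.py.
import logging
import torch
import comfy.model_management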

Natriumpikant • Oct 02 '24 09:10