fastcomposer
enable_xformers_memory_efficient_attention is not supported
```
File "fastcomposer/fastcomposer/model.py", line 571, in forward
    localization_loss = get_object_localization_loss(
File "fastcomposer/model.py", line 416, in get_object_localization_loss
    return loss / num_layers
ZeroDivisionError: division by zero
```
Is there a solution to this problem when I am using enable_xformers_memory_efficient_attention? @Guangxuan-Xiao
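The division by zero suggests that, with xformers attention enabled, none of the cross-attention layers use the processor that records attention maps, so the localization loss is averaged over zero layers. A minimal sketch of a guard for that case follows; the helper name `safe_average_localization_loss` is hypothetical and not from the FastComposer repo:

```python
def safe_average_localization_loss(layer_losses):
    """Average per-layer localization losses, guarding the zero-layer case.

    Hypothetical helper: if xformers (or a fused attention processor)
    replaced the processors that capture cross-attention maps, no
    per-layer losses are collected and num_layers is 0.
    """
    num_layers = len(layer_losses)
    if num_layers == 0:
        # No attention maps were recorded; skip the loss term instead
        # of dividing by zero.
        return 0.0
    return sum(layer_losses) / num_layers
```

This only avoids the crash; the localization loss is then effectively disabled, so the underlying fix is to keep an attention processor that exposes the maps.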
In model.py, why is the code written like this?

```python
if isinstance(module.processor, AttnProcessor2_0):
    module.set_processor(AttnProcessor())
```
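My understanding (an assumption, not confirmed by the repo authors): diffusers' `AttnProcessor2_0` calls `torch.nn.functional.scaled_dot_product_attention`, which never materializes the attention-probability matrix, while FastComposer's localization loss needs those cross-attention maps. The snippet therefore falls back to the legacy `AttnProcessor`, which computes the probabilities explicitly. The pattern can be sketched with stand-in classes so it runs without diffusers installed:

```python
# Stand-ins for diffusers.models.attention_processor classes.
class AttnProcessor:
    """Legacy processor: computes attention probabilities explicitly."""

class AttnProcessor2_0:
    """Fused SDPA processor: attention maps are never materialized."""

class Module:
    """Stand-in for an attention module exposing set_processor()."""
    def __init__(self, processor):
        self.processor = processor

    def set_processor(self, processor):
        self.processor = processor

module = Module(AttnProcessor2_0())

# The pattern from model.py: swap the fused processor for the legacy
# one so the cross-attention maps needed by the localization loss exist.
if isinstance(module.processor, AttnProcessor2_0):
    module.set_processor(AttnProcessor())
```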
How can I accelerate the training process with torch.compile() when I am using PyTorch 2.0?
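A hedged sketch of the general torch.compile() usage, assuming PyTorch >= 2.0; the toy `Linear` model is illustrative only, not FastComposer's UNet, and whether compilation interacts cleanly with the attention-processor swap above is untested:

```python
import torch

# Illustrative stand-in; in practice you would wrap the module you
# train (e.g. the UNet) rather than a toy Linear layer.
model = torch.nn.Linear(8, 8)

# torch.compile returns a compiled wrapper around the module. The
# first forward pass triggers compilation (the default backend is
# "inductor"); subsequent passes reuse the compiled graph.
compiled_model = torch.compile(model)

x = torch.randn(2, 8)
y = compiled_model(x)
```

Note that compilation adds warm-up overhead on the first step, and dynamic shapes or frequent graph breaks can erase the speedup.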
Same problem here. Did you solve it? If so, could you share the fix?