Text-To-Video-Finetuning
TypeError: Linear.forward() got an unexpected keyword argument 'scale'
Hi, I have been trying to do fine-tuning with Stable LoRA, following the manual. I can only do the basics, so I haven't modified stable_lora_config.yaml other than the path to the dataset folders and the video specifications. I therefore believe the code isn't contaminated, but this error comes up every time. Does anyone have ideas for solving this? Error message:
/root/venv/work2/lib/python3.11/site-packages/diffusers/configuration_utils.py:134: FutureWarning: Accessing config attribute `num_train_timesteps` directly via 'DDPMScheduler' object attribute is deprecated. Please access 'num_train_timesteps' over 'DDPMScheduler's config object instead, e.g. 'scheduler.config.num_train_timesteps'.
deprecate("direct config name access", "1.0.0", deprecation_message, standard_warn=False)
/root/venv/work2/lib/python3.11/site-packages/diffusers/configuration_utils.py:134: FutureWarning: Accessing config attribute `prediction_type` directly via 'DDPMScheduler' object attribute is deprecated. Please access 'prediction_type' over 'DDPMScheduler's config object instead, e.g. 'scheduler.config.prediction_type'.
deprecate("direct config name access", "1.0.0", deprecation_message, standard_warn=False)
09/21/2023 07:42:50 - INFO - models.unet_3d_condition - Forward upsample size to force interpolation output size.
Traceback (most recent call last):
File "/root/another/Text-To-Video-Finetuning/train.py", line 986, in <module>
main(**OmegaConf.load(args.config))
File "/root/another/Text-To-Video-Finetuning/train.py", line 848, in main
loss, latents = finetune_unet(batch, train_encoder=train_text_encoder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/another/Text-To-Video-Finetuning/train.py", line 821, in finetune_unet
model_pred = unet(noisy_latents, timesteps, encoder_hidden_states=encoder_hidden_states).sample
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/venv/work2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/venv/work2/lib/python3.11/site-packages/accelerate/utils/operations.py", line 636, in forward
return model_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/venv/work2/lib/python3.11/site-packages/accelerate/utils/operations.py", line 624, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/venv/work2/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 14, in decorate_autocast
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/root/another/Text-To-Video-Finetuning/models/unet_3d_condition.py", line 409, in forward
sample = transformer_g_c(self.transformer_in, sample, num_frames)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/another/Text-To-Video-Finetuning/models/unet_3d_blocks.py", line 75, in transformer_g_c
sample = g_c(custom_checkpoint(transformer, mode='temp'),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/venv/work2/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 251, in checkpoint
return _checkpoint_without_reentrant(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/venv/work2/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 432, in _checkpoint_without_reentrant
output = function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/another/Text-To-Video-Finetuning/models/unet_3d_blocks.py", line 63, in custom_forward
inputs = module(
^^^^^^^
File "/root/venv/work2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/venv/work2/lib/python3.11/site-packages/diffusers/models/transformer_temporal.py", line 156, in forward
hidden_states = block(
^^^^^^
File "/root/venv/work2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/venv/work2/lib/python3.11/site-packages/diffusers/models/attention.py", line 197, in forward
attn_output = self.attn1(
^^^^^^^^^^^
File "/root/venv/work2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/venv/work2/lib/python3.11/site-packages/diffusers/models/attention_processor.py", line 426, in forward
return self.processor(
^^^^^^^^^^^^^^^
File "/root/venv/work2/lib/python3.11/site-packages/diffusers/models/attention_processor.py", line 1013, in __call__
query = attn.to_q(hidden_states, scale=scale)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/venv/work2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Linear.forward() got an unexpected keyword argument 'scale'
Attached: my_stable_lora_config.txt, the stable_lora_config.yaml I modified.
Any updates on this? I am facing the same issue.
I am also getting the same error.
Hey, sorry for the late response @kenkenissocool! This error is caused by recent versions of Diffusers implementing their own version of LoRA.
I will look to resolve this very soon.
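For reference, the traceback shows the newer Diffusers attention processor calling the projection layers as attn.to_q(hidden_states, scale=scale), which only works if those layers are Diffusers' own LoRA-aware linears; a plain torch.nn.Linear rejects the extra keyword. Below is a minimal sketch of a compatibility shim that swallows the scale argument, assuming you want to experiment on a newer diffusers release instead of downgrading. ScaleTolerantLinear and patch_attention_projections are hypothetical helper names, not part of this repository or of diffusers.

```python
import torch.nn as nn


class ScaleTolerantLinear(nn.Module):
    """Wraps an existing nn.Linear and silently drops the `scale` kwarg."""

    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.linear = linear

    def forward(self, hidden_states, scale: float = 1.0):
        # `scale` only matters for diffusers' own LoRA-aware linear layers;
        # for a plain nn.Linear it can safely be ignored.
        return self.linear(hidden_states)


def patch_attention_projections(unet: nn.Module) -> None:
    """Wrap to_q/to_k/to_v (and to_out[0]) on every attention block.

    Note: wrapping changes the state_dict keys (adds a `.linear.` prefix),
    so use this only for experimentation, not for checkpoints you plan to
    reload with the unpatched model definition.
    """
    for module in unet.modules():
        for name in ("to_q", "to_k", "to_v"):
            child = getattr(module, name, None)
            if isinstance(child, nn.Linear):
                setattr(module, name, ScaleTolerantLinear(child))
        to_out = getattr(module, "to_out", None)
        if (
            isinstance(to_out, nn.ModuleList)
            and len(to_out) > 0
            and isinstance(to_out[0], nn.Linear)
        ):
            to_out[0] = ScaleTolerantLinear(to_out[0])
```

This is a stopgap sketch only; the pinned-version workaround below is the simpler and safer option.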
For now, this issue can be worked around by downgrading diffusers:
pip uninstall diffusers
pip install diffusers==0.18.1
It works for me, hope it helps
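If you take the downgrade route, a quick sanity check before launching a long training run can confirm the environment actually picked up the pinned version (checking for the 0.18.x line is an assumption based on the pin above):

```python
# Newer diffusers releases pass `scale` into Linear.forward(),
# which is what triggers the TypeError above.
import diffusers

print(diffusers.__version__)
assert diffusers.__version__.startswith("0.18"), (
    "diffusers is too new; expect `Linear.forward() got an unexpected "
    "keyword argument 'scale'` during training"
)
```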
Is there any update on this?