Open-Sora-Plan
AssertionError: OpenSoraT2V over patched input must provide sample_size_t
Hello, it's a great project! I ran into an error when executing the script: bash scripts/text_condition/gpu/sample_t2v.sh. Can anyone help me?
(opensora-plan) root@DESKTOP-KLL8FM3:~/project/Open-Sora-Plan# bash scripts/text_condition/gpu/sample_t2v.sh
/root/anaconda3/envs/opensora-plan/lib/python3.8/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms.functional' module instead.
warnings.warn(
/root/anaconda3/envs/opensora-plan/lib/python3.8/site-packages/torchvision/transforms/_transforms_video.py:22: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms' module instead.
warnings.warn(
[2024-07-26 03:04:51,120] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.1
[WARNING] using untested triton version (2.1.0), only 1.0.0 is known to be compatible
/root/anaconda3/envs/opensora-plan/lib/python3.8/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
/root/anaconda3/envs/opensora-plan/lib/python3.8/site-packages/diffusers/models/transformer_2d.py:20: FutureWarning: `Transformer2DModelOutput` is deprecated and will be removed in version 0.29. Importing `Transformer2DModelOutput` from `diffusers.models.transformer_2d` is deprecated and this will be removed in a future version. Please use `from diffusers.models.transformers.transformer_2d import Transformer2DModelOutput`, instead.
deprecate("Transformer2DModelOutput", "0.29", deprecation_message)
/root/anaconda3/envs/opensora-plan/lib/python3.8/site-packages/diffusers/models/transformer_2d.py:25: FutureWarning: `Transformer2DModel` is deprecated and will be removed in version 0.29. Importing `Transformer2DModel` from `diffusers.models.transformer_2d` is deprecated and this will be removed in a future version. Please use `from diffusers.models.transformers.transformer_2d import Transformer2DModel`, instead.
deprecate("Transformer2DModel", "0.29", deprecation_message)
The npu_config.on_npu is False
pid 3419848's current affinity list: 0-19
pid 3419848's new affinity list: 0,1
The config attributes {'loss_params': {'disc_start': 2001, 'disc_weight': 0.5, 'kl_weight': 1e-06, 'logvar_init': 0.0}, 'loss_type': 'opensora.models.ae.videobase.losses.LPIPSWithDiscriminator', 'lr': 1e-05} were passed to CausalVAEModel, but are not expected and will be ignored. Please verify your config.json configuration file.
Some weights of the model checkpoint were not used when initializing CausalVAEModel:
['loss.discriminator.main.0.bias, loss.discriminator.main.0.weight, loss.discriminator.main.11.bias, loss.discriminator.main.11.weight, loss.discriminator.main.2.weight, loss.discriminator.main.3.bias, loss.discriminator.main.3.num_batches_tracked, loss.discriminator.main.3.running_mean, loss.discriminator.main.3.running_var, loss.discriminator.main.3.weight, loss.discriminator.main.5.weight, loss.discriminator.main.6.bias, loss.discriminator.main.6.num_batches_tracked, loss.discriminator.main.6.running_mean, loss.discriminator.main.6.running_var, loss.discriminator.main.6.weight, loss.discriminator.main.8.weight, loss.discriminator.main.9.bias, loss.discriminator.main.9.num_batches_tracked, loss.discriminator.main.9.running_mean, loss.discriminator.main.9.running_var, loss.discriminator.main.9.weight, loss.logvar, loss.perceptual_loss.lin0.model.1.weight, loss.perceptual_loss.lin1.model.1.weight, loss.perceptual_loss.lin2.model.1.weight, loss.perceptual_loss.lin3.model.1.weight, loss.perceptual_loss.lin4.model.1.weight, loss.perceptual_loss.net.slice1.0.bias, loss.perceptual_loss.net.slice1.0.weight, loss.perceptual_loss.net.slice1.2.bias, loss.perceptual_loss.net.slice1.2.weight, loss.perceptual_loss.net.slice2.5.bias, loss.perceptual_loss.net.slice2.5.weight, loss.perceptual_loss.net.slice2.7.bias, loss.perceptual_loss.net.slice2.7.weight, loss.perceptual_loss.net.slice3.10.bias, loss.perceptual_loss.net.slice3.10.weight, loss.perceptual_loss.net.slice3.12.bias, loss.perceptual_loss.net.slice3.12.weight, loss.perceptual_loss.net.slice3.14.bias, loss.perceptual_loss.net.slice3.14.weight, loss.perceptual_loss.net.slice4.17.bias, loss.perceptual_loss.net.slice4.17.weight, loss.perceptual_loss.net.slice4.19.bias, loss.perceptual_loss.net.slice4.19.weight, loss.perceptual_loss.net.slice4.21.bias, loss.perceptual_loss.net.slice4.21.weight, loss.perceptual_loss.net.slice5.24.bias, loss.perceptual_loss.net.slice5.24.weight, loss.perceptual_loss.net.slice5.26.bias, loss.perceptual_loss.net.slice5.26.weight, loss.perceptual_loss.net.slice5.28.bias, loss.perceptual_loss.net.slice5.28.weight, loss.perceptual_loss.scaling_layer.scale, loss.perceptual_loss.scaling_layer.shift']
The config attributes {'video_length': 17} were passed to OpenSoraT2V, but are not expected and will be ignored. Please verify your config.json configuration file.
Traceback (most recent call last):
File "opensora/sample/sample_t2v.py", line 236, in <module>
main(args)
File "opensora/sample/sample_t2v.py", line 60, in main
transformer_model = OpenSoraT2V.from_pretrained(args.model_path, cache_dir=args.cache_dir,
File "/root/anaconda3/envs/opensora-plan/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/root/anaconda3/envs/opensora-plan/lib/python3.8/site-packages/diffusers/models/modeling_utils.py", line 717, in from_pretrained
model = cls.from_config(config, **unused_kwargs)
File "/root/anaconda3/envs/opensora-plan/lib/python3.8/site-packages/diffusers/configuration_utils.py", line 260, in from_config
model = cls(**init_dict)
File "/root/anaconda3/envs/opensora-plan/lib/python3.8/site-packages/diffusers/configuration_utils.py", line 658, in inner_init
init(self, *args, **init_kwargs)
File "/root/project/Open-Sora-Plan/opensora/models/diffusion/opensora/modeling_opensora.py", line 145, in __init__
self._init_patched_inputs(norm_type=norm_type)
File "/root/project/Open-Sora-Plan/opensora/models/diffusion/opensora/modeling_opensora.py", line 148, in _init_patched_inputs
assert self.config.sample_size_t is not None, "OpenSoraT2V over patched input must provide sample_size_t"
AssertionError: OpenSoraT2V over patched input must provide sample_size_t
System: WSL (Ubuntu); GPU: RTX 4090
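
In case it helps to pinpoint the problem: the assertion in modeling_opensora.py fires because the config.json loaded from my --model_path has no sample_size_t entry. Since the traceback shows diffusers forwarding unused from_pretrained kwargs into cls.from_config(config, **unused_kwargs), I was considering passing sample_size_t explicitly as a config override, roughly like the sketch below. This is only my guess at a workaround, not something from the repo docs; the value 17 is a placeholder I took from the ignored video_length=17 attribute and may well be wrong for this checkpoint.

```python
# Hypothetical workaround sketch (unverified): override the missing config field
# when loading the transformer. Relies on diffusers forwarding unknown
# from_pretrained kwargs to from_config, as seen in the traceback above.
from opensora.models.diffusion.opensora.modeling_opensora import OpenSoraT2V

model_path = "/path/to/checkpoint"  # placeholder; in sample_t2v.py this is args.model_path

transformer_model = OpenSoraT2V.from_pretrained(
    model_path,
    cache_dir=None,
    sample_size_t=17,  # placeholder guess (latent temporal size); adjust for the actual checkpoint
)
```

Alternatively, adding a "sample_size_t" entry directly to the checkpoint's config.json might have the same effect, but I'm not sure which value these weights actually expect.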