LTX-Video
ltxv-13b-0.9.7-distilled.yaml throws an exception on Mac
Command:
python inference.py --prompt "a dog run in street" --height 320 --width 320 --num_frames 3 --seed 1 --pipeline_config configs/ltxv-13b-0.9.7-distilled.yaml
With torch 2.3.0:
Traceback (most recent call last):
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1967, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/opt/homebrew/Cellar/[email protected]/3.10.17/Frameworks/Python.framework/Versions/3.10/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 40, in <module>
from ...modeling_utils import PreTrainedModel
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/transformers/modeling_utils.py", line 69, in <module>
from .loss.loss_utils import LOSS_MAPPING
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/transformers/loss/loss_utils.py", line 21, in <module>
from .loss_deformable_detr import DeformableDetrForObjectDetectionLoss, DeformableDetrForSegmentationLoss
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/transformers/loss/loss_deformable_detr.py", line 4, in <module>
from ..image_transforms import center_to_corners_format
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/transformers/image_transforms.py", line 21, in <module>
from .image_utils import (
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/transformers/image_utils.py", line 64, in <module>
from torchvision import io as torchvision_io
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/torchvision/__init__.py", line 10, in <module>
from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils # usort:skip
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/torchvision/_meta_registrations.py", line 163, in <module>
@torch.library.register_fake("torchvision::nms")
AttributeError: module 'torch.library' has no attribute 'register_fake'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/gujin/workspace/python/LTX-Video/inference.py", line 17, in <module>
from transformers import (
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1956, in __getattr__
value = getattr(module, name)
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1955, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1969, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.t5.modeling_t5 because of the following error (look up to see its traceback):
module 'torch.library' has no attribute 'register_fake'
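For reference, the AttributeError above looks like a torch/torchvision version mismatch: the installed torchvision calls torch.library.register_fake at import time, and as far as I can tell that attribute only exists in newer torch releases. A quick sanity check along those lines, assuming that is the cause:
import importlib.metadata
import torch

# "import torchvision" itself raises the AttributeError here (its
# _meta_registrations module calls torch.library.register_fake at import
# time), so read the installed version from package metadata instead.
print("torch:", torch.__version__)
print("torchvision:", importlib.metadata.version("torchvision"))
print("torch.library.register_fake available:",
      hasattr(torch.library, "register_fake"))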
With torch 2.7.0 (latest):
Running generation with arguments: Namespace(output_path=None, seed=1, num_images_per_prompt=1, image_cond_noise_scale=0.15, height=320, width=320, num_frames=3, frame_rate=30, device=None, pipeline_config='configs/ltxv-13b-0.9.7-distilled.yaml', prompt='a dog run in street', negative_prompt='worst quality, inconsistent motion, blurry, jittery, distorted', offload_to_cpu=False, input_media_path=None, conditioning_media_paths=None, conditioning_strengths=None, conditioning_start_frames=None)
Padded dimensions: 320x320x9
Loading checkpoint shards: 100%|█████████████████████████████████████████████| 2/2 [00:00<00:00, 15.56it/s]
Traceback (most recent call last):
File "/Users/gujin/workspace/python/LTX-Video/inference.py", line 774, in <module>
main()
File "/Users/gujin/workspace/python/LTX-Video/inference.py", line 298, in main
infer(**vars(args))
File "/Users/gujin/workspace/python/LTX-Video/inference.py", line 534, in infer
pipeline = create_ltx_video_pipeline(
File "/Users/gujin/workspace/python/LTX-Video/inference.py", line 343, in create_ltx_video_pipeline
text_encoder = text_encoder.to(device)
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3698, in to
return super().to(*args, **kwargs)
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1355, in to
return self._apply(convert)
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 915, in _apply
module._apply(fn)
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 915, in _apply
module._apply(fn)
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 915, in _apply
module._apply(fn)
[Previous line repeated 4 more times]
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 942, in _apply
param_applied = fn(param)
File "/Users/gujin/workspace/python/LTX-Video/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1341, in convert
return t.to(
RuntimeError: MPS backend out of memory (MPS allocated: 45.81 GB, other allocations: 1.70 MB, max allowed: 45.90 GB). Tried to allocate 160.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
Environment
➜ LTX-Video git:(main) ✗ env/bin/python --version
Python 3.10.17
On Torch 2.7 you get an out-of-memory error:
RuntimeError: MPS backend out of memory (MPS allocated: 45.81 GB, other allocations: 1.70 MB, max allowed: 45.90 GB). Tried to allocate 160.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
Your Mac has 36 GB of memory, while MPS allocated 45 GB.
Try generating fewer frames.
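For example, something along these lines should lower the memory footprint; the smaller --num_frames value is only illustrative, and PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 is the override the error message itself mentions (use it with care, as it can exhaust system memory):
PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 python inference.py --prompt "a dog run in street" --height 320 --width 320 --num_frames 1 --seed 1 --pipeline_config configs/ltxv-13b-0.9.7-distilled.yaml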
@yoavhacohen thank you for your reply. I tried falling back to CPU, and it runs successfully, but the output video is garbled. Maybe I should switch to an RTX 4080 environment.
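For reference, the CPU run was done roughly like this, assuming --device accepts "cpu" (based on the device=None field in the argument dump above):
python inference.py --prompt "a dog run in street" --height 320 --width 320 --num_frames 3 --seed 1 --pipeline_config configs/ltxv-13b-0.9.7-distilled.yaml --device cpu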