LTX-Video
Weird video output with LTX 13B model
Running any of the example workflows in ComfyUI, I only get weird output.
Maybe it is because of the scheduler. I get the same weird output with Karras, but with linear it is OK.
I got the same thing. I think the last time I had this issue, I had to make sure I was running the latest nightly build of PyTorch.
Check out this thread... https://github.com/Lightricks/ComfyUI-LTXVideo/issues/43#issuecomment-2555502825
On macOS, M3 Max w/ 36 GB RAM, PyTorch 2.6, ComfyUI.
I tried torch 2.5.1 with CUDA 12.4 and torch 2.7 with CUDA 12.8; same issue.
Did you try the nightly builds?
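For reference, a minimal sketch for checking which PyTorch build and CUDA runtime ComfyUI is actually using. Run it with the same interpreter ComfyUI launches with (for the portable install, that would be `python_embeded\python.exe`); nightly builds show a `.dev` suffix in the version string:

```python
# Sanity-check the PyTorch build and CUDA runtime visible to this interpreter.
import torch

print("torch version :", torch.__version__)          # nightlies carry a ".dev" suffix
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA runtime  :", torch.version.cuda)
    print("device        :", torch.cuda.get_device_name(0))
```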
Just one more voice to confirm the problem. The example workflows for the ComfyUI LTX Video nodes still use 0.9.6, and that renders fine, but with 0.9.7 I see the result described above.
Toyed around with the sampler and the scheduler, no luck. The same workflow works with version 0.9.6 just fine.
Console output:
```
H:\comfyui\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
[START] Security scan
[DONE] Security scan
ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2025-05-14 12:31:26.609
** Platform: Windows
** Python version: 3.12.9 (tags/v3.12.9:fdb8142, Feb 4 2025, 15:27:58) [MSC v.1942 64 bit (AMD64)]
** Python executable: H:\comfyui\ComfyUI_windows_portable\python_embeded\python.exe
** ComfyUI Path: H:\comfyui\ComfyUI_windows_portable\ComfyUI
** ComfyUI Base Folder Path: H:\comfyui\ComfyUI_windows_portable\ComfyUI
** User directory: H:\comfyui\ComfyUI_windows_portable\ComfyUI\user
** ComfyUI-Manager config path: H:\comfyui\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\config.ini
** Log path: H:\comfyui\ComfyUI_windows_portable\ComfyUI\user\comfyui.log

Prestartup times for custom nodes:
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
   2.3 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Checkpoint files will always be loaded safely.
Total VRAM 16380 MB, total RAM 32687 MB
pytorch version: 2.6.0+cu126
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 Ti : cudaMallocAsync
Using pytorch attention
Python version: 3.12.9 (tags/v3.12.9:fdb8142, Feb 4 2025, 15:27:58) [MSC v.1942 64 bit (AMD64)]
ComfyUI version: 0.3.34
ComfyUI frontend version: 1.19.9
[Prompt Server] web root: H:\comfyui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\comfyui_frontend_package\static
NumExpr defaulting to 16 threads.
[H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfy-mtb] | INFO -> loaded 103 nodes successfuly
[H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfy-mtb] | INFO -> Some nodes (5) could not be loaded. This can be ignored, but go to http://127.0.0.1:8188/mtb if you want more information.
[Crystools INFO] Crystools version: 1.22.1
[Crystools INFO] CPU: AMD Ryzen 7 5800X3D 8-Core Processor - Arch: AMD64 - OS: Windows 10
[Crystools INFO] Pynvml (Nvidia) initialized.
[Crystools INFO] GPU/s:
[Crystools INFO] 0) NVIDIA GeForce RTX 4060 Ti
[Crystools INFO] NVIDIA Driver: 572.60
Total VRAM 16380 MB, total RAM 32687 MB
pytorch version: 2.6.0+cu126
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 Ti : cudaMallocAsync
Loading: ComfyUI-Manager (V3.32.2)
[ComfyUI-Manager] network_mode: public
ComfyUI Revision: 3461 [158419f3] *DETACHED | Released on '2025-05-12'
Python version is above 3.10, patching the collections module.
H:\comfyui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\auto\image_processing_auto.py:604: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use slow_image_processor_class, or fast_image_processor_class instead
warnings.warn(
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux] | INFO -> Using ckpts path: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
[H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux] | INFO -> Using symlinks: False
[H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
DWPose: Onnxruntime with acceleration providers detected
FizzleDorf Custom Nodes: Loaded
Efficiency Nodes: Attempting to add Control Net options to the 'HiRes-Fix Script' Node (comfyui_controlnet_aux add-on)...Success!
2025-05-14 12:31:33.395 | INFO | cozy_comfyui.node:loader:121 - JOV_CAPTURE 4 nodes loaded
2025-05-14 12:31:33.395 | INFO | cozy_comfyui.node:loader:121 - JOV_CAPTURE 4 nodes loaded
2025-05-14 12:31:33.397 | INFO | cozy_comfyui.node:loader:121 - JOV_CAPTURE 4 nodes loaded
2025-05-14 12:31:33.397 | INFO | cozy_comfyui.node:loader:121 - JOV_CAPTURE 4 nodes loaded
[rgthree-comfy] Loaded 42 exciting nodes. 🎉
WAS Node Suite: OpenCV Python FFMPEG support is enabled
WAS Node Suite Warning: ffmpeg_bin_path is not set in H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\was_suite_config.json config file. Will attempt to use system ffmpeg binaries if available.
WAS Node Suite: Finished. Loaded 220 nodes successfully.
"Success is not the key to happiness. Happiness is the key to success." - Albert Schweitzer
Import times for custom nodes:
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\cg-use-everywhere
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\lora-info
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_ipadapter_plus
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_fizznodes
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_essentials
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfy-image-saver
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-custom-scripts
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\mikey_nodes
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-frame-interpolation
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-videodircombiner
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-various
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\derfuu_comfyui_moddednodes
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-dream-video-batches
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-advanced-controlnet
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Florence2
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-kjnodes
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui
   0.0 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux
   0.1 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Crystools
   0.1 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-LTXVideo
   0.1 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-videohelpersuite
   0.1 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-fluxpromptenhancer
   0.1 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved
   0.1 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\jovi_capture
   0.2 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-detail-daemon
   0.3 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
   0.4 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_caption_this
   0.4 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfy-mtb
   0.9 seconds: H:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui
Starting server
To see the GUI go to: http://127.0.0.1:8188
FETCH ComfyRegistry Data: 5/85
```
I have the same issue.
Just to report back: the fp8 version of the weights is affected. The full dev version produces good results.
But when I load the fp8 version ...
I have even redownloaded the fp8 one, because there was a reupload three days ago, but it has the same size and produces the same result.
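Since "same size" alone doesn't prove the redownload is identical, here is a minimal sketch for comparing the local file's SHA-256 against the checksum shown on the Hugging Face file page (the path below is illustrative, not from this thread):

```python
# Compute SHA-256 of the downloaded fp8 checkpoint and compare it by hand
# with the hash listed on the model's download page.
import hashlib

path = r"H:\comfyui\ComfyUI_windows_portable\ComfyUI\models\checkpoints\ltxv-13b-0.9.7-dev-fp8.safetensors"  # illustrative path

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        h.update(chunk)

print(h.hexdigest())
```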
Please download the latest version of the fp8 model; the latest model can now run natively in ComfyUI without using q8_kernels and the Q8 Patch node.
Make sure to update to the latest ComfyUI and ComfyUI-LTXVideo as well.
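If you are unsure which fp8 variant a given file actually is, a minimal sketch for peeking at the safetensors header without loading the model (assumes the standard safetensors layout: an 8-byte little-endian header length followed by a JSON header; the path is illustrative):

```python
# Inspect which dtypes the safetensors file stores (e.g. F8_E4M3 vs BF16)
# and print any embedded metadata, without loading the weights.
import json
import struct
from collections import Counter

path = r"H:\comfyui\ComfyUI_windows_portable\ComfyUI\models\checkpoints\ltxv-13b-0.9.7-dev-fp8.safetensors"  # illustrative path

with open(path, "rb") as f:
    (header_len,) = struct.unpack("<Q", f.read(8))   # header length, little-endian u64
    header = json.loads(f.read(header_len))          # JSON header: tensor name -> info

dtypes = Counter(v["dtype"] for k, v in header.items() if k != "__metadata__")
print(dtypes)                      # dtype histogram, e.g. mostly F8_E4M3 for an fp8 build
print(header.get("__metadata__"))  # any metadata the uploader embedded
```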
@michaellightricks Everything is updated and I still get the same issues, and running the FP8 patched model causes ComfyUI to crash with no errors; it just shows the prompt as paused and closes.
4090/windows/Python version: 3.11.6/pytorch version: 2.7.0+cu128/xformers version: 0.0.30
Latest update just moments ago:
- With the Q8 Patch, regardless of enable/disable state, it crashes.
- Bypassing the Q8 Patch, this is what happens:
The image immediately becomes super cloudy; only the first frame works:
https://github.com/user-attachments/assets/a5a34a86-47d1-425a-8cfc-9ce2e100d9e8
and this is following the workflow: https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/13b-distilled/ltxv-13b-dist-i2v-base-fp8.json
I tried everything in the other threads, including all the currently available variants of the fp8 model.
the e4m3fn: https://github.com/user-attachments/assets/8ff87dbe-aaa5-4c75-a8a6-2123daabcaa4
the regular: https://github.com/user-attachments/assets/78943d17-46a7-41fc-9ad8-d5fd9e1464cf
the dev: https://github.com/user-attachments/assets/9c2a6486-d711-4532-969e-e57b845f310e
I get an error message now after updating everything. Same default workflow as above, and the freshly downloaded ltxv-13b-0.9.7-dev-fp8.safetensors.
As a note, this default workflow still loads with version 0.9.6 first.
@michaellightricks Halp