The LTX 13B 0.9.7 model always returns weird output
Whatever sample workflow I use, it always returns weird output.
Same thing is happening to me. What is going on?
Same, it's broken.
same problem
same here
Nah, this is because you can't use old LTX workflows with this. It's all different.
Missing the q8 stuff will cause this; avoid it by using GGUF. Also, don't use any of the old nodes to insert latents: the new sampler has the inserts built in. Use their samplers, because the default ones don't work.
Overall, don't use any of the old LTX stuff. It's not for the 13B model. It still works for 0.9.5-0.9.6, but 13B needs its own workflow.
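Not an official check, just a quick way to audit a workflow file: a rough Python sketch that scans a UI-exported ComfyUI workflow JSON for node types you've flagged as old. The node class names in the set and the filename are placeholders (I don't know exactly what the old LTX pack registers), so substitute the real ones from your installation.

```python
import json

# Placeholder names -- I don't know the exact class names the old LTX node
# pack registers, so substitute the real ones from your installed nodes.
OLD_LTX_NODE_TYPES = {"OldLTXLatentInsert", "OldLTXSampler"}

def find_old_nodes(workflow_path: str) -> list[str]:
    """List suspected old-LTX node types in a UI-exported ComfyUI workflow."""
    with open(workflow_path, encoding="utf-8") as f:
        workflow = json.load(f)
    # A UI export stores nodes as a list of dicts, each with a "type" field.
    return [n["type"] for n in workflow.get("nodes", [])
            if n.get("type") in OLD_LTX_NODE_TYPES]

if __name__ == "__main__":
    hits = find_old_nodes("ltx_13b_workflow.json")  # hypothetical filename
    print("Old LTX nodes found:", hits or "none")
```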
I'm using the latest workflow and also installed the q8 stuff. I'll try GGUF and see if that works.
I was also getting the same results as you at first. Here is my workflow: https://civitai.com/models/1072696/ltx-image-to-video
Can you point me in the right direction? I really don't know which nodes to use now.
If someone gets GGUF working successfully, please post the workflow; my results are crap as well with image2video.
The workflow I posted works. I have examples.
If you downloaded 0.9.7 before, back when they were asking people to install the q8 stuff, you probably need to re-download. This one works without any hoops or q8 stuff needed, as do the GGUF versions and the rest of the models on that page now.
https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev-fp8.safetensors
I had an old version of this file that caused this same error, because I did not install the q8 stuff.
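If you'd rather script the re-download than grab it in the browser, something like this with huggingface_hub should do it; the repo and filename come straight from the link above, and force_download replaces any stale cached copy.

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Repo and filename taken from the link above; force_download=True makes sure
# an old cached copy from the earlier upload gets replaced.
path = hf_hub_download(
    repo_id="Lightricks/LTX-Video",
    filename="ltxv-13b-0.9.7-dev-fp8.safetensors",
    force_download=True,
)
print("Downloaded to:", path)
```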
This was the first version I tried, and it failed. Probably not enough RAM. It does NOT work on my 3090; it disconnects my Comfy, so I need a GGUF workflow. I use a ~2GB GGUF T5, and the GGUF LTX is about 10GB; if I use the 15GB safetensors, Comfy disconnects. I have no time to play detective on this, I just need a GGUF workflow. I use 0.9.6 with GGUF all the time and it works fine, but it's not 13B... so I just want someone else to confirm that GGUF 13B works. Can you use an LTX GGUF Q6 in your workflow on your machine?
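On the Q6 question: you can at least sanity-check which quant a GGUF file actually contains before wiring it up. A small sketch with the gguf Python package; the filename is made up, and the reader fields are my assumption about that package's API.

```python
from gguf import GGUFReader  # pip install gguf

def summarize_quant(path: str) -> None:
    """Print tensor count and the quantization types found in a GGUF file."""
    reader = GGUFReader(path)
    # tensor_type is assumed to be a quantization-type enum with a .name
    quants = sorted({t.tensor_type.name for t in reader.tensors})
    print(f"{path}: {len(reader.tensors)} tensors, quant types: {quants}")

summarize_quant("ltxv-13b-0.9.7-dev-Q6_K.gguf")  # hypothetical filename
```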
My workflow uses GGUF... I used GGUF.
I can use the normal FP8 models without a problem on a 12GB 3060... if you can't get a 3090 to work, that's not an LTX problem, it's a your-PC problem...
My workflows work with GGUF or with any of the models at the link above.
You're on a 3090; you should have NO issues with this model at FP8.
You mention RAM: 32GB of system RAM is needed to load the model. If you don't have that, make a very large Windows swap file and see if that works.
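If anyone wants to check the 32GB point quickly, here's a tiny psutil sketch; the 32GB threshold is just the figure from the post above, not something I've benchmarked.

```python
import psutil  # pip install psutil

REQUIRED_GB = 32  # the figure quoted above for loading the 13B model

mem = psutil.virtual_memory()
print(f"System RAM: {mem.total / 1024**3:.1f} GB total, "
      f"{mem.available / 1024**3:.1f} GB available")
if mem.total / 1024**3 < REQUIRED_GB:
    print("Under 32 GB: consider enlarging the Windows swap file first.")
```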
If you are already using GGUF, make sure you have the appropriate VAE. I was getting the same kind of results as what you posted; it turned out I was using the VAE that worked with the older LTX model. I replaced it with the BF16 VAE made specifically for LTX 13B-0.9.7, and I no longer have the issue.
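If you're not sure which VAE file a workflow is actually pointing at, you can peek inside the safetensors without fully loading it. This is just a generic inspection sketch (the filename is made up); it won't definitively identify the 13B VAE, but dtype and tensor names can help you tell files apart.

```python
from safetensors import safe_open  # pip install safetensors (needs torch)

def inspect_vae(path: str) -> None:
    """Print tensor count, dtype of the first tensor, and a few key names."""
    with safe_open(path, framework="pt") as f:
        keys = list(f.keys())
        first = f.get_tensor(keys[0])
        print(f"{path}: {len(keys)} tensors, first tensor dtype {first.dtype}")
        for key in keys[:5]:
            print("  ", key)

inspect_vae("ltxv-13b-0.9.7-vae-bf16.safetensors")  # hypothetical filename
```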