siraxe
delete the custom node `ComfyUI-LTXVideo`, clone it again, and use their example from `ComfyUI-LTXVideo\example_workflows\13b-distilled` . Sounds strange, but it worked for me
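The reinstall above can be sketched roughly like this (paths assume a default ComfyUI layout; the repo URL is the usual Lightricks one, check against your install):

```shell
# Rough sketch of the reinstall steps; adjust paths to your ComfyUI install.
cd ComfyUI/custom_nodes
rm -rf ComfyUI-LTXVideo                                    # delete the existing custom node
git clone https://github.com/Lightricks/ComfyUI-LTXVideo.git
# Then restart ComfyUI and load a workflow from:
#   ComfyUI-LTXVideo/example_workflows/13b-distilled
```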
> I'm gonna try Kijai's to see what's up. I managed to get flash working on Windows, but it took 11h to compile; it seems to be working fine on...
actually, following these instructions and building from the x64 Native Tools Command Prompt for VS 2022 helped: https://huggingface.co/lldacing/flash-attention-windows-wheel
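A minimal sketch of the Windows build, assuming the usual flash-attn source-build route (the exact steps are in the linked lldacing repo; the `MAX_JOBS` value here is a guess to keep RAM usage sane):

```shell
:: Run inside "x64 Native Tools Command Prompt for VS 2022" so MSVC is on PATH.
pip install ninja
set MAX_JOBS=4
:: --no-build-isolation lets the build see the torch already in the venv.
pip install flash-attn --no-build-isolation
```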
Made this `Power Lora Loader V2` with a bit more UI and control over which loras are selected/used; you can try it. It saves the last selected folder to...
also, Blackwell has issues with older versions I think. On Windows it needs `pip install poetry-core` first, then the new torch/CUDA, then requirements
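The install order above, sketched as commands (the cu128 index URL is the standard PyTorch CUDA 12.8 wheel channel; your requirements file path may differ):

```shell
# Order matters: poetry-core first, then a CUDA 12.8 torch, then the rest.
pip install poetry-core
pip install torch --index-url https://download.pytorch.org/whl/cu128
pip install -r requirements.txt
```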
Had issues with `xformers`, then `pytorch3d`, then `torch-scatter`. Reinstalled every package I could and still got: `Warn!: xFormers is available (Attention) Warn!: Traceback (most recent call last): File...
the sm120 error is a CUDA version error. Install the CUDA 12.8 build from https://pytorch.org/get-started/locally/ and afterwards, from `ai-toolkit\venv\Scripts>`, run `python.exe -m pip install torchaudio` (it doesn't come with `torchaudio`), then install `requirements.txt`
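Those steps, roughly, as a command sequence (paths assume the ai-toolkit venv layout on Windows; the cu128 index URL is the standard PyTorch CUDA 12.8 channel):

```shell
:: From inside the ai-toolkit venv's Scripts directory on Windows.
cd ai-toolkit\venv\Scripts
python.exe -m pip install torch --index-url https://download.pytorch.org/whl/cu128
python.exe -m pip install torchaudio --index-url https://download.pytorch.org/whl/cu128
:: Then the project requirements (path relative to Scripts\).
python.exe -m pip install -r ..\..\requirements.txt
```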
Same on a 5090 when changing the prompt; updating the lora strength fixes it. Disabling loras entirely does not raise this error (when changing prompts after)
saw this as well if batch size / gradient_accumulation was set > 1
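For reference, the settings in question sit in the trainer config; a sketch of the workaround (key names here are from memory of ai-toolkit example configs, so check them against your own file):

```yaml
# Hypothetical fragment of an ai-toolkit training config.
# Keeping both values at 1 avoided the error described above.
train:
  batch_size: 1
  gradient_accumulation_steps: 1
```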
isaac-mcfadyen was right, it's the `Cache Text Embeddings` option; it works properly without it