kunibald
@Kosinkadink Workflow: 2 x 4090, torch==2.6.0+cu124, SageAttention 2 compiled from source as the README instructs: https://github.com/thu-ml/SageAttention. Started Comfy with `--cuda-device=0,1 --use-sage-attention`. [i2v_wan_multigpu_unit.json](https://github.com/user-attachments/files/19307822/i2v_wan_multigpu_unit.json) (works fine on a single GPU...
@BootsofLagrangian Hi there, hope I can reach you. I also get this dtype error when training a Flux LoRA with DeepSpeed multi-GPU. Do you have any updates, maybe...
i think this might be related only to the TensorBoard logs. if you don't need them, you can comment out the code or skip it: in `infer/lib/train/utils.py`, find the `summarize`...
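a minimal sketch of what "skipping it" could look like, assuming a `summarize` helper that takes a TensorBoard writer (the `ENABLE_TB` flag and this exact signature are my own illustration, not the repo's actual code):

```python
# Hypothetical guard around the TensorBoard helper in infer/lib/train/utils.py:
# when logging is disabled (or the writer is missing), return early so
# training continues without touching TensorBoard at all.
import os

# off by default; set ENABLE_TB=1 in the environment to log (assumed flag)
ENABLE_TB = os.environ.get("ENABLE_TB", "0") == "1"

def summarize(writer, global_step, scalars=None, images=None):
    """Write scalars/images to TensorBoard unless logging is disabled."""
    if not ENABLE_TB or writer is None:
        return  # skip all logging calls entirely
    for name, value in (scalars or {}).items():
        writer.add_scalar(name, value, global_step)
    for name, img in (images or {}).items():
        writer.add_image(name, img, global_step, dataformats="HWC")
```

with the flag unset, every call becomes a no-op, which should sidestep errors raised inside the logging path without editing the training loop itself.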
doing large preps and trains can be soul-crushing. passion and a little bit of compulsion can keep you going, but i hope you don't burn yourself out.
Thank you, as always appreciate your detailed documentation.
check the prebuilt wheels: https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/tag/v0.0.6