celll1
I am also running into the same error. I have found that it occurs depending on the order in which the packages are installed, but I have not been able to...
I have created code in this fork (https://github.com/celll1/OneTrainer/tree/dev) that supports token lengths of up to 3 chunks of 75 tokens (225 tokens in total) for the Text Encoder. It has been confirmed to work...
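For reference, this is the general idea behind that kind of chunked encoding, not the fork's actual implementation: split the prompt into chunks of at most 75 tokens, encode each chunk separately with the CLIP text encoder (re-adding BOS/EOS around every chunk), and concatenate the resulting hidden states along the sequence dimension. All names below are illustrative.

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

CHUNK_LEN = 75   # prompt tokens per chunk, excluding BOS/EOS
MAX_CHUNKS = 3   # 3 chunks -> up to 225 prompt tokens

def encode_long_prompt(prompt: str) -> torch.Tensor:
    # Tokenize without special tokens so the raw ids can be chunked freely.
    ids = tokenizer(prompt, add_special_tokens=False).input_ids[: CHUNK_LEN * MAX_CHUNKS]
    chunks = [ids[i:i + CHUNK_LEN] for i in range(0, max(len(ids), 1), CHUNK_LEN)]

    embeddings = []
    for chunk in chunks:
        # Wrap each chunk with BOS/EOS and pad it to the encoder's expected length (77).
        chunk = [tokenizer.bos_token_id] + chunk + [tokenizer.eos_token_id]
        chunk = chunk + [tokenizer.pad_token_id] * (CHUNK_LEN + 2 - len(chunk))
        with torch.no_grad():
            hidden = text_encoder(torch.tensor([chunk])).last_hidden_state  # (1, 77, dim)
        embeddings.append(hidden)

    # Concatenate along the sequence dimension: (1, 77 * n_chunks, dim).
    return torch.cat(embeddings, dim=1)
```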
Even if the author of SageAttention adds backward-pass support in the future, it appears to be planned only for Hopper-architecture GPUs. I have implemented the same approach using...
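Since the comment is cut off, the exact approach is unclear; as one possible illustration of how a forward-only kernel like SageAttention can still be used during training, here is a sketch that wraps it in a custom `autograd.Function` and recomputes standard scaled-dot-product attention for the backward pass. The layout assumptions and helper names are mine, not from the fork.

```python
import torch
import torch.nn.functional as F
from sageattention import sageattn  # forward-only quantized attention kernel

class SageAttnWithSDPABackward(torch.autograd.Function):
    """Use SageAttention's fast forward kernel, but derive gradients by
    recomputing exact scaled-dot-product attention in the backward pass."""

    @staticmethod
    def forward(ctx, q, k, v):
        ctx.save_for_backward(q, k, v)
        # Assumes (batch, heads, seq, head_dim) tensors; adjust for your model.
        return sageattn(q, k, v)

    @staticmethod
    def backward(ctx, grad_out):
        q, k, v = ctx.saved_tensors
        with torch.enable_grad():
            q_, k_, v_ = (t.detach().requires_grad_(True) for t in (q, k, v))
            # Recompute exact attention so autograd can produce q/k/v gradients.
            out = F.scaled_dot_product_attention(q_, k_, v_)
            grads = torch.autograd.grad(out, (q_, k_, v_), grad_out)
        return grads

def attention(q, k, v):
    return SageAttnWithSDPABackward.apply(q, k, v)
```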