Ulexer

Results: 7 comments by Ulexer

I was getting this issue with LoRA training, not finetuning. The same also happens on the Qwen 16 GB full-finetuning default preset with offload set to 1.0. Torch 2.7.1 -...

Sure: [config.json](https://github.com/user-attachments/files/23560572/config.json). The testing above was done with the "#qwen Finetune 16GB" preset, which defaults to bfloat16 for the prior and float8 for the text encoder, with offloading changed to 1.0. OOM happens with...
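For context on why the prior/text-encoder dtype choices in that preset matter for OOM, here is a rough back-of-the-envelope memory sketch (illustrative only; the parameter count is hypothetical, and real usage adds activations, gradients, optimizer state, and allocator overhead on top of raw weights):

```python
def weight_bytes(num_params: int, bytes_per_param: int) -> int:
    """Bytes needed just to store the weights at a given precision."""
    return num_params * bytes_per_param

# Hypothetical ~8B-parameter transformer, for illustration only.
params = 8_000_000_000
bf16 = weight_bytes(params, 2)  # bfloat16: 2 bytes per parameter
fp8 = weight_bytes(params, 1)   # float8:   1 byte per parameter

print(f"bf16: {bf16 / 2**30:.1f} GiB, fp8: {fp8 / 2**30:.1f} GiB")
# Halving the per-parameter width halves the resident weight memory,
# which is why a float8 text encoder can be the difference between
# fitting in 16 GB and hitting OOM.
```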

> Please try with the actual unmodified preset instead, so that we have a control. It only needs to be for a single epoch.

I did. I got OOM with...

I realized it may not be clear from the picture, but where the graph is flat my PC is frozen for a while, and then Windows kills OneTrainer because it probably requested...

[Charts: model loading and training memory, Torch 2.8 vs 2.7.1] LoRA training with the transformer and text encoder at float8: ~17 GB more used during training on Torch 2.8.

> This looks like a model that people are going to try once, wait more than 10 minutes for a 1MP image and then never use again so it's probably...