Flux 2 Image Edit LoRA training.
This is for bugs only
Did you already ask in the discord?
Yes
You verified that this is a bug and not a feature request or question by asking in the discord?
Yes
Describe the bug
Runpod RTX 6000 used with Ostris Toolkit
Model: Flux 2
Datasets: Control 213 images / Target 213 images (with prompts)
Attached you can find the config file.
Question: after 2,000 steps there is no change in the sample images. Did I do something wrong with the settings?
Is there a simple guide for Flux 2? I have experience with the Ostris Toolkit in Qwen Image Edit and Flux Kontext LoRA training. All worked well.
PS: The goal was to change food pictures (CGI, 3D, 2D, plastic, and so on) into realistic food.
Update: after 3,500 steps still no change in the sample images, so I stopped the Runpod.
It looks like the training just isn't working. I tried training two LoRAs, and both failed. Same as @Astroburner, I used image pairs (control and target with captions and the same resolution).
The first one (260 images, 5,000 iterations) was meant to transform 3D scenes into a realistic style, but it did not work at all. Mine had the opposite effect: instead of learning the style, it "forgot" what the base model knew. It just outputs the original image untouched.
The second LoRA (600 images, 2,250 iterations) was supposed to remove text overlays from anime and manga, but it was either outputting abstract art (not noise) or doing random things, like darkening the image or removing random objects. In some cases, it output the original image untouched.
The datasets are completely fine. I previously trained the same model for Kontext, and it worked perfectly.
My configuration file (runpod): reaslistic_ai_toolkit_setting.txt
I did the same thing with Qwen Image Edit, and it worked perfectly. I've already lost over 30 bucks to Runpod and nothing has worked for Flux 2.
I am doing a test with a "make this person a cyclops" dataset. It seems to be working. Still looking into it though.
I let this run out until convergence. From what I can tell, it is working as expected. I just double checked your config and I think your target and control are backwards.
- folder_path: "/app/ai-toolkit/datasets/control"
control_path_1: "/app/ai-toolkit/datasets/dataset"
Based on this, the model will be learning to turn your dataset folder into your control images, which I assume is backwards from what you are intending. @Astroburner
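For anyone hitting the same issue: swapping the two paths should fix it. A sketch of the corrected dataset section, based only on the two config keys quoted above (the surrounding nesting and other keys in the full config file may differ):

```yaml
datasets:
    # folder_path = TARGET images: what the model should learn to produce
  - folder_path: "/app/ai-toolkit/datasets/dataset"
    # control_path_1 = CONTROL images: the input the edit is conditioned on
    control_path_1: "/app/ai-toolkit/datasets/control"
```

With the paths this way around, the model learns the control-to-target edit rather than the reverse.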
@jaretburkett Could you please share your config file? I failed twice, with different datasets, so maybe I'm doing something really wrong with mine.
@VandersonQk I shut the pod down so I don't have the config anymore, but the only thing I changed from the default was toggling match target res. It looked like you had that, though. There is always the possibility that the model just struggles with the concept. I have had a lot of reports of concepts Kontext couldn't learn that Qwen Image Edit picked up in 500 steps, and vice versa.