sd-scripts
Support FLUX series models
These models have just been released and appear to be amazing. Links below:
Blog from fal.ai: https://blog.fal.ai/flux-the-largest-open-sourced-text2img-model-now-available-on-fal/
Huggingface: https://huggingface.co/black-forest-labs
There are a schnell version and a dev version.
Fully agree!
Is it possible to fine-tune the model on a 3090, or do we have to train a LoRA due to the size?
I'm wondering if image gen models would benefit from the sophisticated quantization methods that are popular in the LLM space, like GGUF. Any ongoing research in this area?
Apparently some folks have trained LoRAs on quantized LLMs to good effect, e.g. https://old.reddit.com/r/LocalLLaMA/comments/13q8zjc/how_much_why_does_quantization_negatively_affect/
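For reference, the QLoRA-style recipe from that LLM work looks roughly like this; a minimal sketch using bitsandbytes and peft, where the model id and target modules are placeholder assumptions, not a verified recipe for any particular checkpoint:

```python
# Minimal QLoRA-style sketch: train LoRA adapters on top of an NF4-quantized
# base model. Model id and target_modules are placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store base weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # dequantize to bf16 for matmuls
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)

lora = LoraConfig(
    r=16, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # which projections get adapters
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)          # only the LoRA params stay trainable
model.print_trainable_parameters()
```

The frozen base stays in 4-bit the whole time; only the small bf16 adapter weights receive gradients, which is why this fits where full fine-tuning doesn't.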
I totally agree. Since SD3 may not be able to fit even a slightly larger dataset due to problems with the model itself (this holds across trainers, including SimpleTuner, sd-scripts, and OneTrainer), I recommend halting development of the SD3 training scripts. I did a simple test on Flux-dev, and its capabilities are completely superior to SD3. Here are some examples:
It's worth pointing out that this is the first model I've seen that can correctly draw the position of the umbrella handle and the umbrella canopy. The text prompt for the road sign was "iiilllllbddbwW"; although the AI didn't draw it correctly, I haven't seen any model that can draw it correctly either.
I strongly disagree. While the SD3 Medium model has certain drawbacks, it possesses a crucial advantage that FLUX lacks: its weights are publicly available. In contrast, FLUX only provides access to the base model's weights through an API, with no indication or information suggesting they plan to make it open-source. The models that are publicly accessible are derived through distillation of the base model; they are truncated, incomplete, and practically unsuitable for further training. It only makes sense to train the model we weren't given, as fine-tuning the distilled models would require roughly the same effort as training from scratch, if not more. Even the SDXL model was superior in this regard.
Calling it open-source is akin to labeling GPT-4o as open-source simply because we were given GPT-3 weights and the ability to fine-tune it. I'm concerned that we'll be wasting time that could be better spent studying SD3, debugging and optimizing its training script. SD3 has more potential, and Stability AI has promised to eventually release all models, including their weights, as open-source. This makes SD3 a more promising avenue for our efforts.
Hello, the weights for the Flux series models have been released, including the dev version and the schnell version. The weights for the Pro version have not been released and can only be accessed via API, but the performance gap between the dev and Pro versions is not significant, and both should have surpassed SD3.
- Weights: flux_dev, flux_schnell
- Diffusers has initial support for LoRA training with Flux: diffusers
- SimpleTuner has initial compatibility with Flux LoRA training in its scripts: SimpleTuner
- ComfyUI now supports Flux and its initial LoRAs: ComfyUI
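For anyone who wants a quick starting point with those released weights, a minimal diffusers sketch for running FLUX.1-dev and attaching a LoRA; the prompt and LoRA path are placeholders, and the guidance/step values are the commonly cited dev-model defaults rather than anything tuned:

```python
# Minimal sketch: run FLUX.1-dev with diffusers and optionally attach a LoRA.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # fit on consumer GPUs by offloading idle parts

# Optional: load a trained LoRA (hypothetical path)
# pipe.load_lora_weights("path/to/flux_lora.safetensors")

image = pipe(
    "a road sign in a rainstorm, umbrella in the foreground",  # placeholder prompt
    height=1024, width=1024,
    guidance_scale=3.5, num_inference_steps=50,
).images[0]
image.save("flux_dev_sample.png")
```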
> Hello, the weights for the Flux series models have been released, including the dev version and the schnell version.
Please read this: https://blog.fal.ai/flux-the-largest-open-sourced-text2img-model-now-available-on-fal/ Dev and schnell were obtained by distillation of the Pro weights. It is possible to create LoRAs for them, and they will work. But full model training is practically impossible because of this.
It should be possible to fine-tune distilled models.
> It should be possible to fine-tune distilled models.
Why should it? I just did a quick search for information about training SDXL Turbo, and it turns out it was also obtained through distillation from the base model. There are tons of such models on Civitai, but they're all created by merging SDXL Turbo with something else. I couldn't find a single one obtained through fine-tuning. The only relevant post I came across was a complaint on Reddit about how training SDXL Turbo produces very poor results. As I expected. https://www.reddit.com/r/StableDiffusion/comments/18l2qp0/sdxl_turbo_fine_tunemerging/
That is because the training code for Turbo was never released and nobody wrote one. It's not fundamentally impossible.
Even training schnell with LoRA or a full tune is fine. They're just big models and require the use of LoRA with quantised base weights, but Kohya should probably wait for the bugs to be worked out in Quanto first before trying to integrate it; it makes a mess of the model state dict keys.
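For context, the Quanto flow in question is roughly the following; a toy sketch with optimum-quanto showing where the state dict keys get reshuffled (the module here is a stand-in, not a diffusion transformer):

```python
# Minimal sketch of the optimum-quanto flow mentioned above. After
# quantize()+freeze(), Linear weights are stored as quantized tensors, so the
# state dict typically gains split entries (e.g. "0.weight._data" plus
# "0.weight._scale") instead of the plain "0.weight" key a trainer expects.
import torch
from torch import nn
from optimum.quanto import quantize, freeze, qfloat8

model = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))

quantize(model, weights=qfloat8)  # swap Linear layers for quantized versions
freeze(model)                     # materialize the quantized weights

for key in model.state_dict():
    print(key)                    # note the extra/renamed keys vs. a vanilla model
```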
@kohya-ss Training scripts released: https://github.com/XLabs-AI/x-flux
Those are pretty minimal; e.g. they don't implement cosmap/logit-norm or any of the SD3 training details, about the same as the cloneofsimo/minRF implementation. In fact it's basically identical. The interesting thing there is probably their ControlNet training implementation details.
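For reference, the logit-normal timestep sampling mentioned there is only a few lines; a rough sketch following the SD3 paper's description, with illustrative mean/std values (cosmap is a separate time-remapping scheme not shown here):

```python
# Sketch of SD3-style logit-normal timestep sampling: draw u ~ N(mean, std),
# then squash through a sigmoid so t lands in (0, 1) with density concentrated
# at mid-range noise levels. mean/std are illustrative; the paper ablates several.
import torch

def logit_normal_timesteps(batch_size: int, mean: float = 0.0, std: float = 1.0) -> torch.Tensor:
    u = torch.randn(batch_size) * std + mean
    return torch.sigmoid(u)

print(logit_normal_timesteps(4))
```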
diffusers scripts arrived
https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md
@FurkanGozukara, you are amazing as usual!
@ddpasa thanks
Pull request arrived already :D
https://github.com/kohya-ss/sd-scripts/pull/1374/files/da4d0fe0165b3e0143c237de8cf307d53a9de45a..36b2e6fc288c57f496a061e4d638f5641c32c9ea
> It should be possible to fine-tune distilled models.
> Why should it? I just did a quick search for information about training SDXL Turbo, and it turns out it was also obtained through distillation from the base model. [...]
Do you guys also love it when someone is so confidently incorrect?
My Flux finetune is coming along very nicely. It's a huge upgrade compared to SDXL and Pony, and way more trainable than SD3 Medium. It's literally impossible to add NSFW to SD3 Medium because of the complete lack of NSFW content in its training data. No finetuner is going to finish SAI's pathetic job. Nobody is ever going to create any kind of content for SD3 when you can create better results for the same money with Flux. So yeah, RIP.
Flux seems to have seen plenty of NSFW images; it's just been filtered and dropped out via captioning. So the context and knowledge already exist in the latent space, and it only needs to... well, get finetuned.
So, yeah f*ck SD3. Pyro's NSFW model goes FLUX.
What are you talking about? I trained 3.0 for 30 minutes and it can generate NSFW just fine. NSFW link: https://imgur.com/a/sd-30-test-G7G7G6u
Someone just reads too much reddit and similar places where everyone is convinced that if a model wasn't trained on NSFW, it will never be able to create such things. How people managed to create anime, furry, and all the other models for SDXL, no one knows. Lost technology.
In all seriousness, there's nothing stopping SD3 from learning to create any NSFW content, and even worse. Due to the more efficient architecture, training does not require as much GPU overhead as SDXL.
I don't understand why everyone is so crazy about this FLUX; downvote my comment all you want, but it has no weights and is accessible only via API.
> Someone just reads too much reddit and similar places where everyone is convinced that if a model wasn't trained on NSFW, it will never be able to create such things.
We (a group of SDXL finetuners) spent like 5k bucks making NSFW in SD3 work, but a model that can't even render women lying in grass is so lobotomized that re-introducing NSFW takes immense resources, as in the ballpark of SAI's training infrastructure. No hobby finetuner is going to pay for that. Nobody is going to pay for that if they can get way better results for a fraction of the cost with FLUX.
It's not hard to understand. It took 20 bucks to teach FLUX NSFW concepts. $5k vs. $20: pretty clear cut.
> How people managed to create anime, furry, and all the other models for SDXL, no one knows. Lost technology.
Well, it seems that you don't know the basics of how training such models works and how self-organisation of embeddings in the latent space works. LAION, the data corpus of SDXL, is full of furry and anime shit. SD3's data corpus has exactly 0 NSFW images in it. And you honestly have difficulty understanding why one is trainable and the other isn't? You're on the wrong board then.
Please stop talking about things you don't have a clue about.
Also, FLUX is runnable locally and the weights are public, so I don't even know what "it has no weights and is accessible only via API" even means.
> We (a group of SDXL finetuners) spent like 5k bucks making NSFW in SD3 work, but a model that can't even render women lying in grass is so lobotomized
Stability AI has promised to release the 3.1 model soon, and they promised to fix this problem in it. You were too hasty.
> Well, it seems that you don't know the basics of how training such models works and how self-organisation of embeddings in the latent space works. LAION, the data corpus of SDXL, is full of furry and anime shit
When SDXL came out, the same things were written about it on reddit that are now being written about SD3: that it didn't use NSFW content in training so NSFW training is impossible, "it's a terrible model, Stability AI killed their reputation by refusing to train on NSFW content, we can't use it, we're staying on SD1.5"... Just like they wrote about SD2... Let's wait a year and find out that there was NSFW in the SD3 dataset after all, but it was removed from the SD4 dataset, so we'll stay on SD3 and boycott the new model...
> Also, FLUX is runnable locally and the weights are public
Please give me a link to download the Flux-pro model.
People are using SimpleTuner for Flux LoRA creation. Unfortunately it has no Windows support. Waiting for kohya-ss :) Flux dev is so much better than SD3 💯
Now sd3 branch supports FLUX.1 dev LoRA training experimentally :) https://github.com/kohya-ss/sd-scripts/tree/sd3
> Stability AI has promised to release the 3.1 model soon, and they promised to fix this problem in it. You were too hasty.
If SD3.1 could achieve the performance of Flux Dev while allowing training and sharing, and if the machine costs required for fine-tuning were lower than those of Flux Dev, I would be very willing to use SD3.1. However, given the performance of SD3 8B and the licensing of the SD3 series, I am pessimistic about this possibility.
> Now sd3 branch supports FLUX.1 dev LoRA training experimentally :) https://github.com/kohya-ss/sd-scripts/tree/sd3
Thank you for your excellent work. Fine-tuning Flux with sd-scripts has completely met my expectations, and its performance is on par with SimpleTuner.
Additionally, is there any plan to support Flux in some of the LoRA processing scripts? These scripts could help the community more quickly develop models like "detail enhancer."
> Now sd3 branch supports FLUX.1 dev LoRA training experimentally :) https://github.com/kohya-ss/sd-scripts/tree/sd3
Will this work for the NF4 model that was released yesterday? Up to 4x speedups, reduced VRAM, increased quality.
https://civitai.com/models/638572/nf4-flux1 https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981
You don't need an A100 for Flux. IMO Kohya should release sooner rather than keep trying to add a million features. You can train on 16 GB of VRAM without any quantisation at all.
> You can train on 16 GB of VRAM without any quantisation at all.
It did require quantisation in other trainers such as your own, but yeah, apparently not anymore.
The NF4 model is far superior though and more accessible for inference. FP8 used to be virtually unusable on my 4080 because it'd take about 5-10 mins for 1 overquantized generation since it overloads my shared memory, and now it's <1 min for outputs that look on par with Pro. Don't really wanna waste a week training an FP8 model that's already obsolete and can't be used by most people.
It's not like that at all though. FP8 is fine, especially in PyTorch 2.4. You can read back through the comments in this issue to see.
Also, NF4 is definitely not "on par with Pro" 🤪
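For what it's worth, the fp8-base idea being debated is roughly the following; a minimal sketch of fp8 weight storage with bf16 compute, assuming PyTorch >= 2.1 float8 dtypes, and not the actual sd-scripts code path:

```python
# Minimal sketch of fp8-base-style weight storage: keep weights in
# torch.float8_e4m3fn to halve memory vs. fp16/bf16, upcast per-layer to bf16
# at compute time. Illustration only, not sd-scripts' --fp8_base implementation.
import torch

w = torch.randn(4096, 4096, dtype=torch.bfloat16)
x = torch.randn(8, 4096, dtype=torch.bfloat16)

w_fp8 = w.to(torch.float8_e4m3fn)           # lossy 8-bit storage copy

def fp8_linear(x: torch.Tensor, w_fp8: torch.Tensor) -> torch.Tensor:
    # Dequantize just-in-time so only one layer is in bf16 at a time.
    return x @ w_fp8.to(torch.bfloat16).T

y = fp8_linear(x, w_fp8)
err = (y - x @ w.T).abs().mean()
print(f"mean abs error from fp8 storage: {err.item():.4f}")
```

The quality debate above is essentially about how much that storage rounding (and NF4's 4-bit equivalent) costs relative to the memory saved.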