InvokeAI
[enhancement]: Support Stable Diffusion 2.0 model
Is there an existing issue for this?
- [X] I have searched the existing issues
Contact Details
No response
What should this feature add?
The 2.0 model has been released, hope to see support for it here soon.
https://stability.ai/blog/stable-diffusion-v2-release
Alternatives
No response
Additional Content
No response
Here is the repo: https://github.com/Stability-AI/StableDiffusion
Can't wait!
Relevant:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/069591b06bbbdb21624d489f3723b5f19468888d
We're looking into it. We've just started putting the final touches on the 2.2 release of InvokeAI, which should be out soon. We'll shift our attention to getting the new models working after that and do a minor release once they're good to go. Things have just been a bit slow because of the holiday weekend.
Basic work has begun here: https://github.com/invoke-ai/InvokeAI/pull/1543
people report that SD 2.0 is worse than 1.5
I've seen that. It seems like a lot of the discontent with the model comes from the fact that the dataset and the training method excluded NSFW content, artist names, and other things that were really popular.
However, the Stability AI folks say this new base model is much friendlier for fine-tuning. So maybe in the near future, as people start training on the v2 base model, opinions will change. Apparently it also converges a lot faster during fine-tuning compared to the older models. I've yet to see good results, though. Nitrosocke has already started making models for it. Interested to see how this new model grows.
P.S. Just noticed it was you, @iperov. Great job on DFL. I've personally used it so much. Very happy to see you here.
> Relevant: https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/069591b06bbbdb21624d489f3723b5f19468888d
Which is great if you're on Windows. Both Mac and Linux users are reporting all kinds of issues, and the fixes are not trivial.
A post about experiments in prompt engineering in SD 2: https://medium.com/@catmus2048/not-only-is-stable-diffusion-2-0-not-bad-but-really-better-my-prompt-engineering-experiments-459fbc5cec2
It seems SD 2 can generate great results, even with improved visual quality, but it requires different prompt tuning.
I would be happy to see SD 2 available in InvokeAI, either replacing SD 1.5 or being available as an option.
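For anyone who wants to experiment with SD 2.x prompting before InvokeAI support lands, here is a minimal sketch using the Hugging Face diffusers library rather than InvokeAI's own pipeline; the prompts, resolution, and sampler settings are purely illustrative assumptions. It shows the negative-prompt-heavy style that SD 2.x reportedly rewards:

```python
# Minimal diffusers sketch (not InvokeAI code) for trying SD 2.x prompting.
# Prompts, resolution, and sampler settings below are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",   # 768x768 v-prediction checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="portrait photo of an astronaut, studio lighting, sharp focus",
    negative_prompt="blurry, low quality, deformed, watermark",  # SD 2.x leans on negatives
    width=768,
    height=768,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("astronaut.png")
```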
I had heard that artist prompt components in version 2 wouldn't work anymore, but the article claims that version 2 "shows high responsiveness to the artist's style". That's a relief :)
FWIW, with regard to Mac at least, Apple has released a CoreML version of Stable Diffusion which seems decent:
https://github.com/apple/ml-stable-diffusion
Perhaps relevant for fast, local Mac-only installs?
@blessedcoolant
> Great job on DFL.
Thanks. By the way, why not pack InvokeAI into a standalone portable folder like DFL/DFLive? I don't like the huge pip download that runs during a user installation of InvokeAI. Pip is a package manager for developers. If the load on the pip servers increases because of solutions like this, the pip maintainers may impose restrictions in the future.
@iperov Any suggestions? We're really open to improving the installer experience. @lstein
I guess you could provide the conda environment as a compressed archive for/with the stable releases, one-click installers, etc.
Also, do take a look at the micromamba binary as an improvement over Miniconda (both practically and in terms of performance/stability). It should be easy to bundle along with a ready-to-go conda environment as a compressed archive. Heck, you could even make it a self-extracting archive with a portable launcher. :D
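For what it's worth, here is a rough sketch of how such an archive could be produced with conda-pack's Python API; the environment name and output path are hypothetical, and this is just one possible approach, not something the InvokeAI team has committed to:

```python
# Hypothetical sketch: bundle an existing "invokeai" conda environment into a
# relocatable archive that could be shipped alongside a release.
import conda_pack

conda_pack.pack(
    name="invokeai",                     # existing conda environment to bundle
    output="invokeai-portable.tar.gz",   # archive to attach to a release / torrent
    n_threads=-1,                        # use all CPU cores for compression
)
# On the user's machine: extract the archive, source the bundled activate
# script, then run `conda-unpack` once to fix up hard-coded path prefixes.
```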
Can we please not derail this thread? It's about SD 2.0, not some kind of installer experience.
If you want portable Stable Diffusion, check out my GUI (Repo/Download).
Can we get back to topic now?
@blessedcoolant
> @iperov Any suggestions?
Pack the whole release as a standalone, zero-dependency portable folder like in DFLive, and share it via file hosting / torrent.
@chervonij already did it for a previous version of InvokeAI. It's 21.5 GB.
https://disk.yandex.ru/d/7Hui0JcrGQfnqA
magnet torrent link:
magnet:?xt=urn:btih:605f9cbea111439891cce12bef133b12faf5a6be&dn=InvokeAI.zip&tr=udp%3a%2f%2ftracker.dler.com%3a6969%2fannounce&tr=http%3a%2f%2ftracker.files.fm%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.zerobytes.xyz%3a1337%2fannounce&tr=https%3a%2f%2ftracker.nitrix.me%3a443%2fannounce&tr=udp%3a%2f%2fmts.tvbit.co%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.altrosky.nl%3a6969%2fannounce&tr=https%3a%2f%2ftracker.lilithraws.cf%3a443%2fannounce&tr=udp%3a%2f%2fopen.stealth.si%3a80%2fannounce&tr=http%3a%2f%2ftracker2.itzmx.com%3a6961%2fannounce&tr=http%3a%2f%2ftracker3.itzmx.com%3a6961%2fannounce&tr=udp%3a%2f%2fmail.realliferpg.de%3a6969%2fannounce&tr=udp%3a%2f%2finferno.demonoid.is%3a3391%2fannounce&tr=udp%3a%2f%2fdiscord.heihachi.pw%3a6969%2fannounce&tr=udp%3a%2f%2fwww.torrent.eu.org%3a451%2fannounce&tr=http%3a%2f%2fvps02.net.orel.ru%3a80%2fannounce&tr=udp%3a%2f%2fbt2.archive.org%3a6969%2fannounce&tr=udp%3a%2f%2fbt1.archive.org%3a6969%2fannounce&tr=udp%3a%2f%2fengplus.ru%3a6969%2fannounce&tr=https%3a%2f%2ftracker.tamersunion.org%3a443%2fannounce&tr=http%3a%2f%2ft.nyaatracker.com%3a80%2fannounce&tr=udp%3a%2f%2fretracker.lanta-net.ru%3a2710%2fannounce&tr=udp%3a%2f%2ftracker.torrent.eu.org%3a451%2fannounce&tr=udp%3a%2f%2ftracker2.dler.org%3a80%2fannounce&tr=udp%3a%2f%2ftracker0.ufibox.com%3a6969%2fannounce&tr=udp%3a%2f%2fmovies.zsw.ca%3a6969%2fannounce&tr=udp%3a%2f%2fcode2chicken.nl%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.0x.tf%3a6969%2fannounce&tr=http%3a%2f%2fretracker.sevstar.net%3a2710%2fannounce&tr=udp%3a%2f%2ftracker.blacksparrowmedia.net%3a6969%2fannounce&tr=udp%3a%2f%2fadmin.videoenpoche.info%3a6969%2fannounce
@n00mkrad @blessedcoolant @lstein Thank you for making InvokeAI, this is an amazing tool!! Do you have any timeline on when you will release support for 2.0?
Also, I noticed that the new Stable Diffusion release on HF includes its own upscaler (link here). Can this be used easily with InvokeAI once you roll out the 2.0 support, or do I need to open another [enhancement] thread?
Thanks so much guys! 😊
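In the meantime, the new x4 upscaler can already be run on its own via diffusers. This is not InvokeAI's API, just a minimal sketch with placeholder file names and settings:

```python
# Standalone diffusers sketch (not InvokeAI) for the SD 2.x x4 upscaler.
# File names, prompt, and step count are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler",
    torch_dtype=torch.float16,
).to("cuda")

low_res = Image.open("low_res.png").convert("RGB")  # e.g. a small 128x128 render
upscaled = pipe(
    prompt="a white cat",        # the upscaler is text-conditioned
    image=low_res,
    num_inference_steps=25,
).images[0]
upscaled.save("upscaled_4x.png")
```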
I notice that #1543 is now closed. I wonder if there is a different associated task to connect this enhancement request to or if it's in limbo now.
@keturn can you add this to the [2.3 🧨] milestone? https://github.com/invoke-ai/InvokeAI/milestone/1
The main branch now has beta support for SD 2.x. There are some known issues such as #2329.
Awesome! Do you mean that if I install 2.2.5 I will get SD 2.x support, or do I need to git clone the repo as it is today? https://github.com/invoke-ai/InvokeAI