psychedelicious
This use case is addressed by the [Launcher](https://github.com/invoke-ai/launcher/).
The VAE works with this diff:

```diff
diff --git a/invokeai/backend/model_manager/load/model_loaders/vae.py b/invokeai/backend/model_manager/load/model_loaders/vae.py
index 122b2f079..34192fc4c 100644
--- a/invokeai/backend/model_manager/load/model_loaders/vae.py
+++ b/invokeai/backend/model_manager/load/model_loaders/vae.py
@@ -24,6 +24,7 @@ from .generic_diffusers import GenericDiffusersLoader
 @ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.VAE, format=ModelFormat.Diffusers)
 @ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion1,...
```
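For context, the fix is just stacking another `ModelLoaderRegistry.register` decorator on the existing VAE loader class. Here is a minimal sketch of the pattern; the registration the diff actually adds is truncated above, so the third decorator below is purely hypothetical, and the import paths are approximate and may differ between Invoke versions:

```python
# Sketch only. The exact registration added by the diff above is truncated,
# so the third decorator here (StableDiffusionXL / Checkpoint) is a
# hypothetical illustration, and import paths are approximate.
from invokeai.backend.model_manager import BaseModelType, ModelFormat, ModelType
from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import GenericDiffusersLoader


@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.VAE, format=ModelFormat.Diffusers)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion1, type=ModelType.VAE, format=ModelFormat.Checkpoint)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusionXL, type=ModelType.VAE, format=ModelFormat.Checkpoint)  # hypothetical
class VAELoader(GenericDiffusersLoader):
    """VAE loader; each register() call above maps one (base, type, format) combo to this loader."""
```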
Yes, that's the same problem this issue is about.
- Addressed feedback, organizing objects.
- Added comments to queries.
- Removed extraneous condition in query.
Marked as draft to prevent premature merge.
Superseded by #6931
If we want to implement this change, once the details are stabilized I can play around with the frontend implementation. I don't anticipate it being difficult.
I've revised this PR a bit, simplifying the API and updating the frontend to use the new progress events. Currently it's only the two spandrel nodes that signal progress with...
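For reference, a rough sketch of what signaling progress from a long-running node could look like, assuming a `signal_progress`-style helper on the invocation context; the helper name, signature, and tile-based loop are illustrative and may not match the final API:

```python
from typing import Any, Callable, Sequence


def process_tiles_with_progress(
    context: Any,  # the node's InvocationContext in Invoke; typed loosely for this sketch
    tiles: Sequence[Any],
    run_model_on_tile: Callable[[Any], Any],  # hypothetical per-tile work
) -> list[Any]:
    """Run work tile-by-tile and emit a progress event after each tile."""
    results: list[Any] = []
    total = len(tiles)
    for i, tile in enumerate(tiles):
        results.append(run_model_on_tile(tile))
        # Assumed helper: emits a progress event the frontend can render.
        context.util.signal_progress(
            message=f"Processed tile {i + 1}/{total}",
            percentage=(i + 1) / total,
        )
    return results
```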
Hi, the linked issue is for a different application, not Invoke. That said, it looks like the root cause is the same - the torch operation required isn't available on...
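The usual workaround for a missing backend kernel is to run just that op on the CPU and move the result back to the original device. A rough sketch of the pattern; the specific op and backend in the truncated sentence above aren't named, so bicubic interpolation stands in as a placeholder:

```python
import torch
import torch.nn.functional as F


def upscale_with_cpu_fallback(x: torch.Tensor, scale: float) -> torch.Tensor:
    """Try the op on x's device; if the backend lacks the kernel, fall back to CPU.

    Illustrative only: the actual failing op from the report is not specified,
    so bicubic interpolation is used as an example.
    """
    try:
        return F.interpolate(x, scale_factor=scale, mode="bicubic")
    except NotImplementedError:
        # Backends such as MPS raise NotImplementedError for unimplemented ops;
        # run on CPU and move the result back to the original device.
        return F.interpolate(x.cpu(), scale_factor=scale, mode="bicubic").to(x.device)
```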
Thank you! The problem this addresses has been a persistent thorn in our sides. Would love to see this released.