InvokeAI
[enhancement]: allow dynamic model configuration
Is there an existing issue for this?
- [X] I have searched the existing issues
Contact Details
No response
What should this feature add?
(This may already be possible with the rewrite, but I want to record it before I forget.)
We have the seamless tiling feature, which is currently a parameter on the inference nodes (e.g. TextToLatents, LatentsToLatents), but it is really a model configuration concern. It would be nice if we could somehow configure the loaded model without needing to handle this manually within the node. The result would be cleaner inference node code.
I think there's a puzzle to solve - doing this outside an inference node in a distributed system is tricky. We cannot configure the model outside the node.
My idea is to provide a model configuration object of sorts to inference nodes - like a model plugin node - that the model loader receives and uses when it loads the model. This way we can keep this model-specific logic in the right place (rough sketch below).
This is all to the best of my understanding; there may be some misunderstandings inherent in the request.
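Very roughly, and with every name here hypothetical (none of this matches the real invocation API), the wiring could look like:

```python
# Loose sketch only - ModelConfigField, UNetField, etc. are made-up names,
# not InvokeAI's actual types.
from typing import Optional

from pydantic import BaseModel


class ModelConfigField(BaseModel):
    """Serializable per-model overrides, emitted by a 'model plugin' node."""

    seamless_axes: Optional[list[str]] = None  # e.g. ["x"], ["y"], or ["x", "y"]


class UNetField(BaseModel):
    """What a model-selection node outputs: an identifier plus optional config."""

    model_id: str
    config: Optional[ModelConfigField] = None
```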
Alternatives
No response
Additional Content
No response
Wouldn't be hard to do this on a single node. In a distributed environment you'd need to propagate a message throughout the whole system. Is there a framework for doing this already? Nodes still seem pretty magic to me.
Well you'd just provide this serializable config to a model node. The model node does not actually load the model, it only allows the user to select it.
I think it's the unet that is padded for seamless tiling? So a unet node identifies the unet and provides its config.
Then the node that actually does the inference gets the unet model identifier and small config object, and when the model is loaded inside that node, it applies the config if it exists.
The config would need to be wiped after the node finishes (i.e. when the model context is exited).
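As a non-authoritative sketch of the apply-then-wipe part - circular conv padding is the usual trick for seamless tiling, though InvokeAI's actual implementation may differ:

```python
# Sketch: apply seamless tiling on enter, wipe it on exit. Per-axis control
# ("x" vs "y") needs asymmetric padding and is omitted here for brevity.
from contextlib import contextmanager

import torch.nn as nn


@contextmanager
def seamless(model: nn.Module, axes: list[str]):
    patched: list[tuple[nn.Conv2d, str]] = []
    try:
        for module in model.modules():
            if isinstance(module, nn.Conv2d):
                patched.append((module, module.padding_mode))
                module.padding_mode = "circular"  # tile instead of zero-pad
        yield model
    finally:
        # The "wipe": restore the original padding modes when the context exits.
        for module, original_mode in patched:
            module.padding_mode = original_mode
```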
So the meat and potatoes of the implementation would be the model loader knowing what to do when we call e.g. `model_manager.load(type=unet, id=sd1.5, config={seamless: [x, y]})`.
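i.e. the loader hands back a context that applies the config on enter and drops it on exit. Hypothetical shape only (reusing the `seamless` helper sketched above; none of these names are the real ModelManager API):

```python
from contextlib import nullcontext
from typing import Optional


class ModelManager:  # hypothetical stand-in, not the actual service
    def __init__(self):
        self._cache: dict = {}  # (type, id) -> loaded model; details elided

    def load(self, model_type: str, model_id: str, config: Optional[dict] = None):
        model = self._cache[(model_type, model_id)]
        if config and config.get("seamless"):
            # Patch applied on enter, wiped on exit - the config never
            # outlives the node that requested it.
            return seamless(model, axes=config["seamless"])
        return nullcontext(model)


# usage inside an inference node:
# with manager.load("unet", "sd1.5", config={"seamless": ["x", "y"]}) as unet:
#     ...  # run inference; padding reverts automatically on exit
```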
Currently the way the latents node seems to work is that it retrieves the model and then configures seamless for the unet and vae just before using it for generation. I'll modify this so that there is an optional model configuration object that is passed around and applied by the model manager service.
Could you suggest the contents of the config object? Aside from seamless, the only other configuration that is applied after retrieving the model is the scheduler.
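If those two are really the whole list, the object could be as small as this (field names are only a suggestion, not an agreed schema):

```python
# Suggested starting point for the config object's contents.
from typing import Optional

from pydantic import BaseModel


class ModelLoadConfig(BaseModel):
    # Axes to tile seamlessly, e.g. ["x"], ["y"], or ["x", "y"].
    seamless_axes: Optional[list[str]] = None
    # Scheduler to swap in after the pipeline is retrieved, e.g. "ddim".
    scheduler: Optional[str] = None
```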
Lincoln