Get results from multiple models using the API.
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What would your feature do?
There should be an option to add or load multiple models into memory, so that when we pass a model name via the API, it returns results from that model.
Proposed workflow
- Go to the default model setting and select multiple models
- Press Save
- Add a model name or model hash property to API image-generation endpoints such as txt2img
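As a rough illustration of the proposal (the inline checkpoint field is hypothetical; this is not the current API), a txt2img request could name the checkpoint directly and the server would pick it from the models already in memory:

```python
# Hypothetical request shape for the proposed feature (NOT implemented today):
proposed_payload = {
    "prompt": "a red fox in the snow",
    "steps": 20,
    # Proposed property: select one of several loaded models per request
    "sd_model_checkpoint": "Anything-V3.0-pruned.ckpt [2700c435]",
}
```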
Additional information
No response
Models can be loaded with the API like so.
```python
import requests

option_payload = {
    # model name as displayed in the webui
    "sd_model_checkpoint": "Anything-V3.0-pruned.ckpt [2700c435]",
}
response = requests.post(url="http://127.0.0.1:7860/sdapi/v1/options", json=option_payload)
```
There should be an option to add or load multiple models in memory.
See the Checkpoints to cache in RAM option in Settings > Stable Diffusion.
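That cache size should also be settable from a script through the same `/sdapi/v1/options` endpoint; a sketch, assuming the setting's internal key is `sd_checkpoint_cache` (verify the exact key by inspecting a GET of `/sdapi/v1/options` first):

```python
import requests  # third-party; `pip install requests`

def checkpoint_cache_payload(n: int) -> dict:
    """Options payload setting how many recently used checkpoints to cache in RAM."""
    # "sd_checkpoint_cache" is an assumed key name; confirm via GET /sdapi/v1/options.
    return {"sd_checkpoint_cache": n}

def set_checkpoint_cache(n: int, base_url: str = "http://127.0.0.1:7860"):
    """Apply the setting to a running webui started with --api."""
    return requests.post(f"{base_url}/sdapi/v1/options", json=checkpoint_cache_payload(n))
```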
But how can I load both Anything-v3 and MidJourney-v4 at the same time, and use either of them with the API? @missionfloyd
Load the model like the above, then send your txt2img/img2img request.
Or, you should be able to do it with overrides.
```python
import requests

payload = {
    "prompt": "cat with a slice of toast on its head",
    "steps": 20,
}
override_settings = {
    "sd_model_checkpoint": "Anything-V3.0-pruned.ckpt [2700c435]",
}
override_payload = {
    "override_settings": override_settings,
}
payload.update(override_payload)
response = requests.post(url="http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```
You can't load them both into VRAM at the same time, but caching them in RAM should make switching between them much quicker.
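Putting the two answers together, switching per request then just means changing the override; a sketch assuming both checkpoints are cached in RAM (the MidJourney checkpoint name below is a placeholder; use the name your webui displays):

```python
import requests  # third-party; `pip install requests`

def txt2img_payload(prompt: str, checkpoint: str, steps: int = 20) -> dict:
    """Build a txt2img request that pins a checkpoint for this call via override_settings."""
    return {
        "prompt": prompt,
        "steps": steps,
        "override_settings": {"sd_model_checkpoint": checkpoint},
    }

checkpoints = [
    "Anything-V3.0-pruned.ckpt [2700c435]",
    "midjourney-v4.ckpt",  # placeholder name
]
for ckpt in checkpoints:
    payload = txt2img_payload("cat with a slice of toast on its head", ckpt)
    # With both checkpoints cached, each switch avoids a disk reload:
    # response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```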
Great, this is so helpful @missionfloyd. How can I load models in RAM? Is there a GUI to do that?
Look in Settings > Stable Diffusion. There's a setting called Checkpoints to cache in RAM. It sets how many of the most recently used checkpoints should be cached in RAM.
I'm curious whether the option here is a global configuration. For example, if I have two concurrent requests, one using option A to generate an image and the other using option B, could the first request's option A be overwritten by the second request before its image is generated?
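The `/sdapi/v1/options` endpoint does mutate shared server state, so one way to sidestep that race (assuming, as the earlier answer suggests, that `override_settings` is applied per request) is to carry the checkpoint inside each request instead of relying on the global option; a sketch with placeholder model names:

```python
import threading
import requests  # third-party; `pip install requests`

BASE = "http://127.0.0.1:7860"

def pinned_payload(prompt: str, checkpoint: str) -> dict:
    """Each request carries its own checkpoint, so callers never race on the global option."""
    return {
        "prompt": prompt,
        "steps": 20,
        "override_settings": {"sd_model_checkpoint": checkpoint},
    }

def generate(prompt: str, checkpoint: str, out: dict, key: str) -> None:
    out[key] = requests.post(f"{BASE}/sdapi/v1/txt2img",
                             json=pinned_payload(prompt, checkpoint))

# Two concurrent requests, each pinned to its own model (names are examples):
# results = {}
# t1 = threading.Thread(target=generate,
#                       args=("a cat", "Anything-V3.0-pruned.ckpt [2700c435]", results, "a"))
# t2 = threading.Thread(target=generate,
#                       args=("a dog", "midjourney-v4.ckpt", results, "b"))
# t1.start(); t2.start(); t1.join(); t2.join()
```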