sdwebuiapi
Python API client for AUTOMATIC1111/stable-diffusion-webui
Instead of just reading the current progress
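For context on reading progress: the stock AUTOMATIC1111 API exposes a /sdapi/v1/progress endpoint that can be polled while a generation runs. A minimal sketch using requests, assuming the webui was started with --api; field names follow the stock webui API and may differ across versions:

import time
import requests

BASE_URL = "http://127.0.0.1:7860"  # assumes the webui was started with --api

def poll_progress(interval: float = 1.0) -> None:
    """Print progress of the currently running job until it finishes."""
    while True:
        data = requests.get(f"{BASE_URL}/sdapi/v1/progress").json()
        # 'progress' is a float in [0, 1]; it is 0.0 when no job is running
        if data["progress"] == 0.0:
            break
        print(f"progress={data['progress']:.0%}  eta={data['eta_relative']:.1f}s")
        time.sleep(interval)

Because api.txt2img() blocks until the result is returned, the polling would typically run in a separate thread while the generation call is in flight.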
How can I train locally via a script rather than through the interface? Is there a training API analogous to txt2img?
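The webuiapi client does not document a training wrapper, but the stock AUTOMATIC1111 API does expose training endpoints such as /sdapi/v1/create/embedding and /sdapi/v1/train/embedding (listed on the webui's /docs page). A rough sketch using requests; the payload field names below are assumptions and should be verified against your webui's /docs schema:

import requests

BASE_URL = "http://127.0.0.1:7860"  # assumes the webui was started with --api

# Create a new textual-inversion embedding.
# NOTE: field names are assumptions; verify them at http://127.0.0.1:7860/docs
create_payload = {
    "name": "my-embedding",
    "init_text": "*",
    "num_vectors_per_token": 4,
}
print(requests.post(f"{BASE_URL}/sdapi/v1/create/embedding", json=create_payload).json())

# Start training it; again, check /docs for the exact argument names.
train_payload = {
    "embedding_name": "my-embedding",
    "learn_rate": "0.005",
    "data_root": "/path/to/preprocessed/images",
    "steps": 1000,
}
print(requests.post(f"{BASE_URL}/sdapi/v1/train/embedding", json=train_payload).json())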
Is it possible to add width and height parameters to txt2img? Or is there some other way to pass arbitrary parameters through to the underlying API?
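In current webuiapi versions, txt2img does accept width and height as regular keyword arguments, so a call like the following should work; a minimal sketch assuming a local webui on the default port:

import webuiapi

api = webuiapi.WebUIApi(host='127.0.0.1', port=7860)

# width/height are ordinary keyword arguments of txt2img
result = api.txt2img(prompt="cute squirrel",
                     negative_prompt="ugly, out of frame",
                     width=768,
                     height=512,
                     seed=1003)
result.image.save("squirrel_768x512.png")

If a parameter is not exposed by the wrapper, the raw /sdapi/v1/txt2img endpoint can always be called directly with requests and a custom payload.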
I noticed that when using ControlNet with the module='reference_only' option, there is nowhere for me to set the Style Fidelity parameter. The returned result seems to be for when...
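A possible workaround, assuming Style Fidelity is sent as the unit's threshold_a field the way other preprocessor sliders are (that mapping is an assumption, not something the library documents):

import webuiapi
from PIL import Image

api = webuiapi.WebUIApi(host='127.0.0.1', port=7860)
ref = Image.open("reference.png")

# ASSUMPTION: the "Style Fidelity" slider maps to threshold_a for reference_only;
# check the ControlNet extension's API docs if the result does not change.
unit = webuiapi.ControlNetUnit(input_image=ref,
                               module='reference_only',
                               weight=1.0,
                               threshold_a=0.8)

result = api.txt2img(prompt="cute squirrel",
                     controlnet_units=[unit])
result.image.save("reference_only.png")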
Hi, can you please guide me on how to import a 3-second video of a pose (OpenPose + hands + face) into SD and get an avatar animation...
# use webuiapi.py from the webuiapi folder
import webuiapi

# api = webuiapi.WebUIApi()
api = webuiapi.WebUIApi(host='127.0.0.1', port=7860, sampler='Euler a', steps=20)

# txt2img
result1 = api.txt2img(prompt="cute squirrel",
                      negative_prompt="ugly, out of frame",
                      seed=1003, ...
I use controlnet_units but they seem to have no effect; the output image is the same as with plain txt2img and img2img. I also checked the response JSON and found that it didn't have...
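As a sanity check, this is roughly how ControlNet units are wired up; a sketch assuming the ControlNet extension is installed and the model name matches one the extension actually reports (a mismatched model name is a common reason the units end up having no effect):

import webuiapi
from PIL import Image

api = webuiapi.WebUIApi(host='127.0.0.1', port=7860)
pose = Image.open("pose.png")

# The model string must match one reported by the ControlNet extension
# (e.g. via its /controlnet/model_list endpoint); replace the name below.
unit = webuiapi.ControlNetUnit(input_image=pose,
                               module='openpose',
                               model='control_sd15_openpose [fef5e48e]',
                               weight=1.0)

result = api.txt2img(prompt="a dancing robot",
                     controlnet_units=[unit])
result.image.save("controlnet_txt2img.png")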
I have read the examples for Scripts support, and through api.get_scripts(), I found that I have a script named "roop". The GitHub repository for the Roop extension is: https://github.com/s0md3v/sd-webui-roop. Here...
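Extensions like Roop are "alwayson" scripts, so rather than script_name/script_args they are driven through the alwayson_scripts field of the raw payload. A rough sketch against the plain HTTP API; the script key should match what api.get_scripts() reported, and the contents and order of args must be taken from the extension's own code (the values below are placeholders, not the real Roop signature):

import base64
import requests

BASE_URL = "http://127.0.0.1:7860"  # assumes the webui was started with --api

with open("face.png", "rb") as f:
    face_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "portrait photo of a person",
    "steps": 20,
    "alwayson_scripts": {
        # Key must match the script name the webui reports (e.g. "roop").
        "roop": {
            # PLACEHOLDER args: the real argument list and order are defined
            # by the extension's ui() function in its scripts/ folder.
            "args": [face_b64, True],
        }
    },
}

r = requests.post(f"{BASE_URL}/sdapi/v1/txt2img", json=payload)
print(r.json().keys())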