David Martín Rius

67 comments by David Martín Rius

Ah, I see what happens... The endpoints of that branch are not defined in stable-diffusion-webui/modules/api/api.py, so they cannot be called as http://localhost:7860/sdapi/v1/promptgen/list_models but as http://localhost:7860/promptgen/list_models. Solved!
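Just in case it helps, here is a minimal sketch of calling that endpoint with `requests` (assuming a GET request and that the webui was started with `--api`; the exact response shape may vary):

```
import requests

# promptgen extension endpoints live under /promptgen, not /sdapi/v1
base_url = "http://localhost:7860"
resp = requests.get(f"{base_url}/promptgen/list_models")
resp.raise_for_status()
print(resp.json())
```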

Same error here... but I have not found a solution yet.

The only thing that worked for me was going back to an older CN branch. It is explained here: https://github.com/continue-revolution/sd-webui-animatediff/issues/412#issuecomment-1913429785

You need to do it like this. The input image must be a PIL object:

```
import webuiapi
from PIL import Image

# Load the input image as a PIL Image object (raw string avoids backslash escapes on Windows)
image0 = Image.open(r"C:\city.png")

unit1 = webuiapi.ControlNetUnit(
    input_image=image0,
    module='canny',
    model='control_canny-fp16 [e3fe7712]',
)
```
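Then you pass the unit to the generation call. A sketch, assuming the `controlnet_units` parameter of this webuiapi client (names may differ by version; prompt and sizes below are placeholders):

```
api = webuiapi.WebUIApi()  # defaults to 127.0.0.1:7860

result = api.txt2img(
    prompt="a city street, detailed",   # placeholder prompt
    controlnet_units=[unit1],           # pass the ControlNet unit(s) here
    width=512,
    height=512,
)
result.image.save("output.png")
```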

Well, it depends on the input parameters. What parameters are you using in the automatic1111 frontend, and what are you passing as parameters in the API call? It should take the...
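As an illustration, here is a sketch of an API call that mirrors typical frontend settings; the values are placeholders, and parameter names may vary slightly between client versions, so match them to whatever you set in the UI:

```
import webuiapi

api = webuiapi.WebUIApi(host="127.0.0.1", port=7860)

# Use the same values you set in the automatic1111 UI (placeholders below)
r = api.txt2img(
    prompt="photo of a beautiful girl with blonde hair",
    negative_prompt="",
    steps=20,
    cfg_scale=7,
    sampler_name="Euler a",
    width=512,
    height=512,
    seed=-1,
)
r.image.save("txt2img.png")
```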

Let's see, guys... the speed issue is not because of this repo, it is because of how you use it. If you do not plan to share the previously requested info, better to close...

> ```
> api = webuiapi.WebUIApi()
> api = webuiapi.WebUIApi(host="127.0.0.1", port=7860)
>
> r = api.txt2img(
>     prompt="photo of a beautiful girl with blonde hair", height=512, enable_hr=True,
>     hr_scale=2,...
> ```

Maybe a reload endpoint is needed. In the automatic1111 frontend there is no "unload model" either, only "Reload UI". So... I suppose the way is to reload the program...
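For what it's worth, a sketch of what I mean over the HTTP API; the endpoint names are assumed from recent webui API versions and may not exist on older builds, in which case the only option is to restart the process itself:

```
import requests

base_url = "http://localhost:7860"

# Assumed endpoints on recent webui builds:
requests.post(f"{base_url}/sdapi/v1/unload-checkpoint")   # free the model from VRAM
requests.post(f"{base_url}/sdapi/v1/reload-checkpoint")   # load it back when needed
```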

Actually, you could use serverless instances on runpod.io for each inference/user execution, so you do not need to maintain any infrastructure or queues. Or if you wanted to use your...
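For the RunPod route, a minimal sketch of a serverless handler based on the runpod Python SDK's handler pattern; the inference call inside is a placeholder you would replace with your own pipeline:

```
import runpod

def handler(event):
    # event["input"] carries the payload sent to the serverless endpoint
    prompt = event["input"].get("prompt", "")
    # Placeholder: run your own inference here (e.g. call the webui API or a diffusers pipeline)
    result = {"echo": prompt}
    return result

# RunPod handles the queuing and worker scaling for each request
runpod.serverless.start({"handler": handler})
```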

You could use your own LLM, such as Llama 2, but you need a GPU with at least 24 GB of VRAM to work comfortably.
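For example, a sketch of loading a Llama 2 chat model with transformers in fp16 (the model name, prompt, and generation settings are illustrative; the 7B variant in fp16 fits comfortably in 24 GB, while larger variants need quantization):

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Llama 2 requires accepting Meta's license on Hugging Face first
model_name = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # fp16 7B fits in 24 GB; 13B and up need 8-bit/4-bit quantization
    device_map="auto",
)

inputs = tokenizer("Write a short prompt for a city street scene:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```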