ddPn08
> > When running the DeepDanbooru model, TensorFlow tries to initialize the same primary GPU that the WebUI is using, which causes the crash. I was able to resolve this...
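
The quoted comment is truncated, so the exact fix isn't shown here. As a rough illustration only (not the actual resolution), one way to keep TensorFlow off the WebUI's primary GPU is to restrict the devices it can see. This sketch assumes a second GPU at index 1 and must run before TensorFlow initializes the GPUs:

```python
import tensorflow as tf

# Sketch only: hide the primary GPU (index 0, used by the WebUI / PyTorch)
# from TensorFlow so DeepDanbooru initializes on a secondary GPU instead.
gpus = tf.config.list_physical_devices("GPU")
if len(gpus) > 1:
    tf.config.set_visible_devices(gpus[1], "GPU")             # expose only GPU 1 to TF
    tf.config.experimental.set_memory_growth(gpus[1], True)   # don't grab all VRAM up front
```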
Same here. I want to use xformers because I want to run DeepFloyd on anything less than torch v2; without it I get an OOM error.
> pytorch 2.0 automatically applies the same memory efficient attention that xformers offers (see [here](https://pytorch.org/blog/accelerated-diffusers-pt-20/#accelerating-transformer-blocks)). How much VRAM do you have? You can run the diffusers code with very little...
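
For reference, a minimal sketch of what the quoted suggestion looks like in practice: on torch 2.x, diffusers picks up memory-efficient (scaled-dot-product) attention automatically, and CPU offload keeps VRAM use low. The checkpoint name and prompt below are just placeholders, and the DeepFloyd IF weights require accepting their license on the Hub:

```python
import torch
from diffusers import DiffusionPipeline

# Low-VRAM sketch on torch >= 2.0: memory-efficient attention is applied
# automatically, so xformers is not required.
pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",      # stage I of DeepFloyd IF (placeholder choice)
    variant="fp16",
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()    # keep only the active sub-module on the GPU
image = pipe("a photo of a red panda", num_inference_steps=25).images[0]
```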
Sorry, I don't quite understand what you did. Could you give me the details of what you executed?
Extensions can be installed by entering comma-separated URLs in the extensions field.
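
For example, to install the patch extension together with the ControlNet extension, the field would contain something like this (the second URL is only an illustration of the format):

```
https://github.com/ddPn08/sd-webui-controlnet-batch-patch,https://github.com/Mikubill/sd-webui-controlnet
```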
I created a temporary extension to fix this issue. With this extension enabled, img2img processing is not performed. https://github.com/ddPn08/sd-webui-controlnet-batch-patch Once #98 is merged, it will no longer be needed.
@Creative-Ataraxia You can use it like this: for the Patch extension, just enable it. Batch processing works similarly.
This was a problem with my notebook; I just fixed it. https://github.com/ddPn08/automatic1111-colab/issues/6
I wanted to get a MineRL agent to join my server, which led me to this issue. Has this feature been considered for implementation yet?
I see. If possible, I would like to implement it myself, but would the implementation cost be quite high?