Astropulse

Results: 19 comments by Astropulse

I've fixed the issue by replacing the VAE contained in the model, but a way to do this on the fly would be quite helpful.
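
For reference, this is roughly what I mean by doing it on the fly; a minimal sketch assuming a diffusers-style pipeline (the model IDs are just examples, not the ones from this issue):

```python
# Minimal sketch, assuming a diffusers-style pipeline: load a standalone VAE
# and hand it to the pipeline instead of relying on the one baked into the checkpoint.
# The model IDs are placeholders, not the ones discussed here.
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    vae=vae,                    # swap the VAE at load time...
    torch_dtype=torch.float16,
)
pipe.vae = vae                  # ...or reassign it on an already-loaded pipeline
```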

Installed in an Anaconda environment (Python 3.8.5, PyTorch 1.12.1, torchvision 0.13.1, NumPy 1.20.3) on an M2 Mac mini with 8 GB of RAM, no issues. About 8 s/it.

I believe Diffusers utilizes Core ML, which may explain the performance difference.

This looks like a question I also had, so I'll reply here instead of creating a new issue. I believe what xzitlou was asking is whether there is a way...

Ah! Brilliant, thank you. I'm specifically wondering if it could be used across multiple terminals; I assume not at the moment.

I'm not super experienced in terms of memory management. I figure that since the model is loaded into VRAM, calling inference on it from another process should just be a matter...
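
To make that concrete, here is a minimal sketch of the kind of setup I'm imagining, with the caveat that I haven't tested it against this project: one long-lived process keeps the model in VRAM and other terminals send prompts to it over a local HTTP endpoint. `load_model` and `run_inference` are hypothetical stand-ins for the real pipeline.

```python
# Minimal sketch: a long-lived process holds the model so its weights stay in
# VRAM, and any other terminal/process can request inference over localhost.
# load_model() and run_inference() are hypothetical placeholders.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def load_model():
    # Hypothetical: load the Stable Diffusion pipeline here, once.
    return object()

def run_inference(model, prompt):
    # Hypothetical: run the loaded pipeline and return something useful.
    return f"generated image for: {prompt}"

MODEL = load_model()  # loaded once, stays resident for the life of the process

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        result = run_inference(MODEL, payload.get("prompt", ""))
        body = json.dumps({"result": result}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```

Another terminal could then call it with something like `curl -X POST -d '{"prompt": "a cat"}' http://127.0.0.1:8080`.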

I'll keep looking into it and see if I find anything. I have a very strange set of restrictions for my Stable Diffusion environment. On another note, is there any...

In my experience [Buy Me a Coffee](https://www.buymeacoffee.com/) is quite nice for handling donations. My environment is based on using Lua to call CLI commands; it can either create a terminal and ignore it...
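
To illustrate the pattern (my setup does this from Lua; the sketch below is Python purely for readability, and the command is a placeholder): the call can either fire off the command and ignore it, or run it and read the output back.

```python
# Illustrative only: the two ways my environment invokes a CLI tool.
# "echo" stands in for the actual command line here.
import subprocess

# Fire-and-forget: spawn the command and don't wait for it to finish.
subprocess.Popen(["echo", "generating..."])

# Or run it to completion and capture the output for further processing.
result = subprocess.run(["echo", "generating..."], capture_output=True, text=True)
print(result.stdout)
```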

Ah, as expected, it was a simple solution that just didn't occur to me. Thank you.