ml-stable-diffusion
Stable Diffusion with Core ML on Apple Silicon
Adds a negativePrompt parameter to generateImages and produces a prompt embedding by concatenating the negative and positive prompt encodings.
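The mechanics behind that PR summary follow standard classifier-free guidance. A minimal sketch, assuming a hypothetical `unet` callable (the real pipeline runs a Core ML UNet): the negative and positive prompt encodings are concatenated along the batch axis, the UNet runs once on the doubled batch, and the two noise predictions are blended.

```python
import numpy as np

def guided_noise(unet, latents, t, positive_emb, negative_emb, guidance_scale=7.5):
    """Classifier-free guidance sketch: run the UNet on the negative and
    positive embeddings concatenated along the batch axis, then blend."""
    # Batch the two conditionings so the UNet runs once per denoising step.
    embeddings = np.concatenate([negative_emb, positive_emb], axis=0)
    latent_batch = np.concatenate([latents, latents], axis=0)
    noise = unet(latent_batch, t, embeddings)  # hypothetical UNet call
    noise_uncond, noise_text = np.split(noise, 2, axis=0)
    # Steer away from the negative prompt, toward the positive one.
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)
```

With an empty negative prompt this reduces to the usual unconditional/conditional guidance pair.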
I've been experimenting with image generation in Swift with the converted CoreML models. It seems to produce images in a different style (and noticeably worse?) than other Stable Diffusion tools for a...
Lots of models use a custom fine-tuned VAE to get color and texture right, for example (https://huggingface.co/Linaqruf/anything-v3.0/blob/main/Anything-V3.0.vae.pt). Is there a way to use these fine-tuned VAEs on Apple's...
It would be great if we could get the data of each intermediate step as an image. This way we could build a preview in our UIs like this: ...
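The preview idea boils down to exposing a per-step callback that receives a decoded latent. A minimal sketch, where `denoise_step` and `decode_latents` are hypothetical stand-ins for the scheduler/UNet update and the VAE decoder:

```python
def run_diffusion(steps, denoise_step, decode_latents, latents, progress_handler=None):
    """Denoising loop sketch that surfaces a decoded preview after each step.
    `denoise_step` and `decode_latents` are hypothetical stand-ins for the
    scheduler/UNet update and the VAE decoder."""
    for step in range(steps):
        latents = denoise_step(latents, step)
        if progress_handler is not None:
            # Decoding every step is expensive; a real UI might only decode
            # every Nth step, or decode on a background queue.
            progress_handler(step, decode_latents(latents))
    return latents
```

The handler can push each preview image straight into the UI while generation continues.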
hi! is it possible to use embeddings .bin files? Thanks!
The current default samplers produce weird images with existing popular models on Hugging Face. Testing with normal Python, the Euler sampler produces better images most of the time. Wondering if it's possible...
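For reference, the Euler sampler the report compares against is simple at its core. A sketch of one k-diffusion-style Euler step (sigma values and the model's denoised prediction are assumed inputs):

```python
def euler_step(x, sigma, sigma_next, denoised):
    """One Euler sampler step, sketched: estimate the derivative toward the
    model's denoised prediction and move along it to the next noise level."""
    d = (x - denoised) / sigma          # derivative estimate at this sigma
    return x + (sigma_next - sigma) * d  # explicit Euler update
```

On the final step (`sigma_next == 0`) this returns exactly the model's denoised prediction.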
The `--image-count` option in the swift example generates images with the initial seed, which the user gives. But for the rest of the images, it generates an `MLShapedArray` with the...
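One reproducible fix for that is to seed a single generator with the user's seed and draw fresh noise per image, so every image gets a distinct but deterministic starting latent. A sketch (function name and default shape are hypothetical):

```python
import numpy as np

def per_image_latents(seed, image_count, shape=(4, 64, 64)):
    """Sketch: derive a distinct, reproducible starting latent for each image
    by seeding one generator with the user's seed and drawing fresh noise for
    every image, instead of reusing one random tensor for the whole batch."""
    rng = np.random.default_rng(seed)
    return [rng.standard_normal(shape) for _ in range(image_count)]
```

Re-running with the same seed and count reproduces the full batch, which is what users typically expect from `--seed`.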
Image2image
Adds image2image functionality. In Python, a new CoreML model can be generated to encode the latent space for image2image. The model bakes in some of the operations typically performed in...
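The usual image2image setup that PR summary alludes to: encode the input image with the VAE encoder, then noise the latent to a point partway through the schedule so only a `strength` fraction of the denoising trajectory remains. A sketch, where `encode_image` is a hypothetical stand-in for the Core ML encoder model and a linear blend stands in for the scheduler's add-noise rule:

```python
def prepare_img2img_latents(encode_image, image, noise, strength, num_steps):
    """Image2image setup sketch: encode the image, then mix in noise so that
    denoising resumes partway through the schedule. `encode_image` is a
    hypothetical stand-in for the VAE/Core ML encoder."""
    start_step = int(num_steps * (1.0 - strength))
    latents = encode_image(image)
    # Placeholder linear blend for the scheduler's add-noise rule:
    # strength 0 keeps the image latent, strength 1 is pure noise.
    t = 1.0 - start_step / num_steps
    return (1.0 - t) * latents + t * noise, start_step
```

Denoising then starts from `start_step` rather than step 0, which is why low strengths stay close to the input image.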
It would be useful if this package came with a `.upscale()` method as it's quite a common need to upscale the image after generating it. For example, using https://github.com/Stability-AI/stablediffusion#image-upscaling-with-stable-diffusion
Generating images can take quite a while, and users may want to cancel the process. Adding a way to abort `.generateImages()` would be a great benefit.
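A common shape for such an abort API is a cancellation flag checked between denoising steps. A sketch using a `threading.Event` (the exception type and `denoise_step` stand-in are hypothetical):

```python
import threading

class GenerationCancelled(Exception):
    """Raised when the caller aborts image generation mid-loop."""

def generate_images(steps, denoise_step, latents, cancel_event):
    """Cancellable generation loop sketch: check a `threading.Event` before
    every denoising step and bail out early when it is set. `denoise_step`
    is a hypothetical stand-in for the scheduler/UNet update."""
    for step in range(steps):
        if cancel_event.is_set():
            raise GenerationCancelled(f"aborted before step {step}")
        latents = denoise_step(latents, step)
    return latents
```

Because the flag is only polled between steps, cancellation latency is at most one denoising step, which is usually acceptable in a UI.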