TimYao18
I found that constructing `StableDiffusionPipeline(..., reduceMemory: true)` is what makes this happen. With `reduceMemory` set to **false** it does not happen, but the app might crash due to running out of memory.
In [Image Generation with Swift](https://github.com/apple/ml-stable-diffusion#-image-generation-with-swift) it says: "On iOS, the reduceMemory option should be set to false when constructing StableDiffusionPipeline". I think there might be something wrong?
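In case it helps anyone compare the two modes, here is a minimal sketch of constructing the pipeline with `reduceMemory` turned off. The resource path is a placeholder, and the exact initializer signature may differ between package versions:

```swift
import CoreML
import StableDiffusion

// Placeholder path to the compiled Core ML resources (not from this thread).
let resourceURL = URL(fileURLWithPath: "/path/to/Resources")

let mlConfig = MLModelConfiguration()
mlConfig.computeUnits = .cpuAndNeuralEngine

// reduceMemory: true loads and unloads each model stage on demand
// (smaller peak footprint, slower); false keeps all stages resident
// (faster, but iOS may kill the app when memory runs out).
let pipeline = try StableDiffusionPipeline(
    resourcesAt: resourceURL,
    configuration: mlConfig,
    reduceMemory: false
)
try pipeline.loadResources()
```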
Closing this issue since it is not the same question as the title.
Changed the title and reopened this issue.
Hi, I tried to add a starting image in Inpaint with SD1.5_cn, but it seems to have no effect and does not influence the resulting output image. I'm not sure...
I use both Swift Diffusers and MochiDiffusion. I just tried the Swift CLI, and the starting image has no effect on the result there either. Perhaps I didn't make myself clear....
I set the 2 images as in the [MochiDiffusion screenshot here](https://drive.google.com/file/d/1imYHIKX_fUUV3TdVzV9ZSPYsHlncsgKe/view?usp=drive_link). The starting image is defined in the `PipelineConfiguration`:

```swift
/// Starting image for image2image or in-painting
public var startingImage: CGImage? = nil
```
...
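One thing worth checking (an assumption on my part, not confirmed in this thread): in the image2image path, `strength` controls how much of the starting image survives, and at or near `1.0` the starting latents are almost fully re-noised, so the starting image has little visible influence. A minimal sketch, assuming the pipeline from earlier and a `CGImage` you have already loaded:

```swift
import CoreGraphics
import StableDiffusion

// `inputImage` is assumed to be a CGImage loaded elsewhere.
var config = StableDiffusionPipeline.Configuration(prompt: "a watercolor house")
config.startingImage = inputImage
config.strength = 0.7   // keep below 1.0 or the starting image is drowned in noise
config.stepCount = 25
config.seed = 42

// Return true from the progress handler to keep generating.
let images = try pipeline.generateImages(configuration: config) { _ in true }
```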
For your reference: I tested with DreamShaper XL1.0 on a MacBook Air M2; 25 steps took about 290 seconds => ~0.086 steps/sec.
Excuse me, I ran the following command:

```bash
python -m python_coreml_stable_diffusion.torch2coreml \
    --convert-vae-decoder --convert-vae-encoder --convert-unet \
    --unet-support-controlnet --convert-text-encoder \
    --model-version runwayml/stable-diffusion-v1-5 \
    --bundle-resources-for-swift-cli \
    --quantize-nbits 6 \
    --attention-implementation SPLIT_EINSUM_V2 \
    ...
```
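Since the command above converts the UNet with `--unet-support-controlnet`, the resulting bundle is meant to be loaded together with a converted ControlNet model on the Swift side. A minimal sketch; the model name string is a placeholder for whichever ControlNet you converted:

```swift
import CoreML
import StableDiffusion

// "MyControlNet" is a placeholder for the converted ControlNet's name;
// resourceURL points at the --bundle-resources-for-swift-cli output.
let pipeline = try StableDiffusionPipeline(
    resourcesAt: resourceURL,
    controlNet: ["MyControlNet"],
    configuration: MLModelConfiguration(),
    reduceMemory: true
)
try pipeline.loadResources()
```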
Hi @SaladDays831, it sounds like you're using the `reduceMemory = true` setting and it works fine. Could you let me know where you've placed your model files? Because I currently...
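For context on the question above, these are the two placements I would normally compare (my assumption, not something stated in this thread): resources bundled inside the app versus downloaded into the app's Documents directory:

```swift
import Foundation

// Option 1: compiled .mlmodelc resources shipped inside the app bundle.
let bundledURL = Bundle.main.url(forResource: "Resources", withExtension: nil)

// Option 2: resources downloaded at runtime into the Documents directory.
let documentsURL = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("Resources")
```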