ml-stable-diffusion
result not as expected
Swift CLI:
swift run StableDiffusionSample "a photo of an astronaut riding a horse on mars" --resource-path <output-mlpackages-directory>/Resources/ --seed 93 --output-path </path/to/output/image>
model: https://huggingface.co/apple/coreml-stable-diffusion-1-4-palettized/blob/main/coreml-stable-diffusion-1-4-palettized_original_compiled.zip
The result is a picture made up of many small faces (roughly a 10x12 grid of small squares), not a man riding a horse. Why?
I cloned this repo and used conda to set up the environment. What am I missing? Please help, thanks!
macOS Sonoma 14.0
I think it's possible that you are using an Intel CPU.
Yes. Is it possible to run on an Intel CPU?
Adding this parameter should generate a normal picture, but it is quite slow. You can try it.
--compute-units cpuOnly
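Putting the suggestion together with the original command, the full invocation would look like the sketch below (the resource and output paths are the same placeholders used earlier in this thread and must be replaced with your actual paths):

```shell
# Same command as above, with --compute-units cpuOnly appended to force
# CPU-only inference (slow, but avoids GPU/ANE paths that misbehave on Intel)
swift run StableDiffusionSample "a photo of an astronaut riding a horse on mars" \
  --resource-path <output-mlpackages-directory>/Resources/ \
  --seed 93 \
  --output-path </path/to/output/image> \
  --compute-units cpuOnly
```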
Thank you so much @czkoko
It takes around 60x as long to complete the image.
On an Intel CPU you can use the DiffusionBee app.
Thanks, I will try it!