Timothy Kautz
Well, the group norm fix seemed to work for me, but then a different layer was the problem next. I couldn't find any reference to a work...
Oh, I see the comment you referenced.
You just need to pass in a local directory, like `./`
I finally got image2image working! It took a good deal of time, and I'll need to clean it up and test on other devices before submitting a PR or forking. Essentially,...
Image2Image PR: https://github.com/apple/ml-stable-diffusion/pull/73
Curious what your setting is for the compute units. Try setting it to `.all`: `@Option(help: "Compute units to load model with {all,cpuOnly,cpuAndGPU,cpuAndNeuralEngine}") var computeUnits: ComputeUnits = .all` I've not noticed...
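For anyone driving the conversion or loading models from Python rather than the Swift CLI, coremltools exposes the same four choices through its `ComputeUnit` enum. This is only an illustrative sketch of how the CLI flag values line up with the coremltools member names (the mapping table and helper are mine, not part of the repo):

```python
# Sketch: the compute-unit values the Swift CLI accepts, mapped to the
# corresponding coremltools.ComputeUnit member names. Illustrative only --
# the Swift CLI and coremltools are separate tools with their own parsers.
CLI_TO_COREMLTOOLS = {
    "all": "ALL",
    "cpuOnly": "CPU_ONLY",
    "cpuAndGPU": "CPU_AND_GPU",
    "cpuAndNeuralEngine": "CPU_AND_NE",
}


def compute_units_name(cli_value: str) -> str:
    """Translate a Swift-CLI compute-units flag value to the
    coremltools ComputeUnit member name; raises on unknown values."""
    try:
        return CLI_TO_COREMLTOOLS[cli_value]
    except KeyError:
        raise ValueError(f"unknown compute units: {cli_value!r}")
```

With coremltools installed you would then pass e.g. `compute_units=ct.ComputeUnit.CPU_AND_NE` when loading an `.mlpackage`, which is the Python-side analogue of `--compute-units cpuAndNeuralEngine`.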
The current Python code to convert the UNET does not reference or use the `--latent-w 64 --latent-h 96` parameters. But you can hard code them in increments of `64 +...
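For context on where those latent numbers come from: the Stable Diffusion VAE downsamples by a factor of 8 in each dimension, so a 512×768 output image corresponds to a 64×96 latent. A small sanity-check helper (a sketch; the function and variable names are mine):

```python
# Sketch: relation between the output image size and the UNET latent size
# in Stable Diffusion. The VAE downsamples by 8x in each dimension, so a
# 512x768 image corresponds to a 64x96 latent (hence latent-w 64, latent-h 96).
VAE_FACTOR = 8


def latent_size(width_px, height_px):
    """Return (latent_w, latent_h) for a given pixel size.
    Pixel dimensions must be multiples of the VAE downsampling factor."""
    if width_px % VAE_FACTOR or height_px % VAE_FACTOR:
        raise ValueError("image dimensions must be multiples of 8")
    return width_px // VAE_FACTOR, height_px // VAE_FACTOR
```

So hard-coding a different latent shape in the conversion script implies committing to a fixed output resolution that is 8× the latent dimensions.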
@godly-devotion I created two other related issues to this. [Cannot create CoreML model with Flexible input shapes.](https://github.com/apple/ml-stable-diffusion/issues/70) and [SPLIT_EINSUM - Kernel Panic when testing UNET created with height 96 and...
You would need to run the Python script to generate the encoder model. For instance: `python -m python_coreml_stable_diffusion.torch2coreml --model-version ../stable-diffusion-2-base --convert-vae-encoder --bundle-resources-for-swift-cli --check-output-correctness --attention-implementation ORIGINAL -o ../Generated/CoreML/StableDiffusion2-base/ORIGINAL` I haven't published...