Yasuhito Nagatomo
That's strange. In my experiment, it worked fine.
At least, the masking feature for inpainting added by the PR is working. We may need to adjust the parameters and models. :)
I had the same experience. The compiled models of v1.5 and v2 from Hugging Face didn't work, but the v2 model I converted myself, following Apple's instructions, works well.
I don't know the details, but comparing it with the procedure I tried might give you something. - GitHub: https://github.com/ynagatomo/ImgGenSD2
If you don't have any devices, please use the `My Mac (Designed for iPad)` target. :)
As an example, my sample iOS app displays the intermediate images step by step. - https://github.com/ynagatomo/ARDiffMuseum
Hi. In ImageGenerator.swift, 1. set the progress handler when calling the generateImages() method: `let cgImages = try sdPipeline.generateImages(prompt: param.prompt, negativePrompt: param.negativePrompt, imageCount: param.imageCount, stepCount: param.stepCount, seed: UInt32(param.seed), guidanceScale: param.guidanceScale, disableSafety:...`
Getting the intermediate image takes time, so the code above, as the simplest case, skips it on iPhone and iPad and only does it on macOS. Ex. using an iPad...
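As a rough sketch of that pattern (assuming the apple/ml-stable-diffusion Swift package, module `StableDiffusion`; the exact shape of the `Progress` value, e.g. `currentImages`, may differ by package version, and `param` / `intermediateImages` are just illustrative names):

```swift
import Foundation
import CoreGraphics
import StableDiffusion

// Sketch: collect intermediate images only when the iOS app runs on a Mac,
// because decoding the latents at every step is slow on iPhone/iPad.
var intermediateImages: [CGImage] = []

let cgImages = try sdPipeline.generateImages(
    prompt: param.prompt,
    negativePrompt: param.negativePrompt,
    imageCount: param.imageCount,
    stepCount: param.stepCount,
    seed: UInt32(param.seed),
    guidanceScale: param.guidanceScale,
    disableSafety: false
) { progress in
    // Skip the (slow) latent decoding on iPhone/iPad.
    if ProcessInfo.processInfo.isiOSAppOnMac {
        // `currentImages` decodes the current latents into images (assumed API).
        if let image = progress.currentImages.compactMap({ $0 }).first {
            intermediateImages.append(image)   // e.g. publish to the UI here
        }
    }
    return true  // returning false would cancel the generation
}
```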
I can use the converted Stable Diffusion v2.1 models in Swift (a minimal loading sketch follows the list below).
- MBA/M1/8GB memory, macOS 13.2, Xcode 14.2
- Xcode project: https://github.com/ynagatomo/ImgGenSD2

I converted the models with this instruction: %...
- base: 512x512 image generation
- normal: 768x768 image generation (needs more working memory)
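For the Swift side, here is a minimal sketch of loading the converted resources and generating an image with the `StableDiffusion` package from apple/ml-stable-diffusion. The resource folder name `sd21_resources` and the generation parameters are just examples; initializer options (such as memory-reduction flags in newer package versions) may differ by version.

```swift
import Foundation
import CoreML
import StableDiffusion

// Sketch: point at the folder that contains the converted .mlmodelc files
// (hypothetical location; adjust to wherever you placed the resources).
let resourceURL = Bundle.main.url(forResource: "sd21_resources", withExtension: nil)!

let config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU   // .cpuAndNeuralEngine is another option

// resourcesAt/configuration are the core parameters; newer package versions
// add further options to this initializer.
let pipeline = try StableDiffusionPipeline(resourcesAt: resourceURL,
                                           configuration: config)

// Generate a single 512x512 image with the "base" variant.
let images = try pipeline.generateImages(prompt: "a photo of a cat reading a book",
                                         imageCount: 1,
                                         stepCount: 25,
                                         seed: 100,
                                         guidanceScale: 7.5,
                                         disableSafety: false)
let cgImage = images.compactMap { $0 }.first
```

The base variant's 512x512 output keeps peak memory lower, which matters on an 8 GB machine; the normal 768x768 variant needs more working memory, as noted above.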