
Python/Swift CLI image generation errors with CPU_AND_NE + ControlNet

TimYao18 opened this issue 10 months ago · 7 comments

Hi, I hit an error when using the Python CLI to generate images on my M1 MacBook Air. With computeUnit=CPU_AND_GPU + ControlNet, it works fine. With computeUnit=CPU_AND_NE + ControlNet, it fails with the log below. With computeUnit=CPU_AND_NE and no ControlNet, it works fine.

The same command and model files work fine on other MacBooks (M2, M2 Pro). I understand this is a specific scenario, but I still feel it's necessary to report it.

Traceback (most recent call last):
    ....
  ml-stable-diffusion/python_coreml_stable_diffusion/coreml_model.py", line 79, in __call__
    return self.model.predict(kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/coremltools/models/model.py", line 573, in predict
    return self.__proxy__.predict(data)
RuntimeError: {
    NSLocalizedDescription = "Unable to compute the prediction using ML Program. It can be an invalid input data or broken/unsupported model.";
    NSUnderlyingError = "Error Domain=com.apple.CoreML Code=0 \"E5RT: ANE inference operation failed with error message = Error Domain=com.apple.appleneuralengine Code=8 \"processRequest:model:qos:qIndex:modelStringID:options:error:: ANEProgramProcessRequestDirect() Failed with status=0x1 : statusType=0x9: Program Inference error\" UserInfo={NSLocalizedDescription=processRequest:model:qos:qIndex:modelStringID:options:error:: ANEProgramProcessRequestDirect() Failed with status=0x1 : statusType=0x9: Program Inference error} (11)\" UserInfo={NSLocalizedDescription=E5RT: ANE inference operation failed with error message = Error Domain=com.apple.appleneuralengine Code=8 \"processRequest:model:qos:qIndex:modelStringID:options:error:: ANEProgramProcessRequestDirect() Failed with status=0x1 : statusType=0x9: Program Inference error\" UserInfo={NSLocalizedDescription=processRequest:model:qos:qIndex:modelStringID:options:error:: ANEProgramProcessRequestDirect() Failed with status=0x1 : statusType=0x9: Program Inference error} (11)}";
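
For reference, the failing call is the predict() in coreml_model.py. A minimal sketch of the same load path through coremltools, with the compute unit made explicit (the model path here is hypothetical):

import coremltools as ct

# Hypothetical path; the real pipeline loads this inside
# python_coreml_stable_diffusion/coreml_model.py before calling predict().
unet = ct.models.MLModel(
    "DreamShaper_v5_split-einsum_cn/Unet.mlpackage",
    compute_units=ct.ComputeUnit.CPU_AND_NE,  # the unit that fails on M1
)

# predict() then receives the full input dict (sample, timestep, text
# embeddings, ControlNet residuals); that call is what raises the ANE error.
# outputs = unet.predict(inputs)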

Also, I tried the Swift CLI using OpenPose-SE with the DreamShaper_v5_split-einsum_cn Core ML model converted by jrrjrr. On a MacBook Air M1 it still errors.

swift run StableDiffusionSample "photorealistic, 8k, a japanese girl" --negative-prompt "BadDream, UnrealisticDream" --resource-path DreamShaper_v5_split-einsum_cn --compute-units cpuAndNeuralEngine --seed 1820115949 --step-count 35 --guidance-scale 7 --controlnet OpenPose-SE --controlnet-inputs openpose.png

Building for debugging...
Build complete! (0.20s)
Loading resources and creating pipeline
(Note: This can take a while the first time using these resources)
Sampling ...
Error: Unable to compute the asynchronous prediction using ML Program. It can be an invalid input data or broken/unsupported model.

I tried Mochi Diffusion and it gave the same error message as above. I will add a screen capture later.

TimYao18 avatar Sep 01 '23 21:09 TimYao18

[Screenshot 2023-09-03 at 10:28 PM]

Here Mochi Diffusion hits the same issue.

TimYao18 avatar Sep 03 '23 22:09 TimYao18

From your Mochi screen cap, it looks like you are not using a Scribble ControlNet model that was converted for use with Split-Einsum (unless you renamed it).

With Split-Einsum and CPU and GPU, you don't really need a ControlNet model converted specifically for Split-Einsum. An Original that is 512x512 (5x5) will work.

But with CPU and Neural Engine, at least in Mochi, you must use a ControlNet model converted for Split-Einsum. The list of available models for download uses -SE for Split-Einsum versions.

[Screencap of the model download list]
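
If you wanted to encode that rule of thumb in a Python pipeline, a minimal sketch might look like this (a naming heuristic and an assumption on my part, not Mochi's actual logic):

import coremltools as ct

def compute_unit_for(model_name: str) -> ct.ComputeUnit:
    # Heuristic sketch: only Split-Einsum ("-SE") conversions are safe
    # to route to the Neural Engine; anything else stays on CPU + GPU.
    name = model_name.lower()
    if "split-einsum" in name or name.endswith("-se"):
        return ct.ComputeUnit.CPU_AND_NE
    return ct.ComputeUnit.CPU_AND_GPU

# e.g. compute_unit_for("DreamShaper_v5_split-einsum_cn") -> CPU_AND_NE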

jrittvo avatar Sep 03 '23 22:09 jrittvo

YES, I tried with the SE model in Mochi Diffusion and it works fine. So this might still be a Python CLI or Swift CLI issue? I will continue to look into this.

TimYao18 avatar Sep 03 '23 22:09 TimYao18

It is very possible that the behavior differs between Mochi and the Swift / Python Core ML CLIs. Mochi was built from an ml-stable-diffusion commit that is older than what you would get from cloning the ml-stable-diffusion git repo today. There also seemed to be a divergence between the Swift CLI and the Diffusers app mentioned in the pending SDXL refiner PR, in some instance, I think, which would imply a bug somewhere, since the two should behave the same.

(Mochi 4.2 was built with this commit: https://github.com/apple/ml-stable-diffusion/commit/ce8ee78e28613d8a2e4c8b56932b236cb57e7e20)

jrittvo avatar Sep 03 '23 23:09 jrittvo

From the Refiner PR #227:

"I noticed that trying to run inference on the CLI wasn't working quite right, and I figured out that it needed the Unet to be float32 precision to work. I'm not sure why this happens, and the refiner works perfectly fine when running through the Diffusers app."

jrittvo avatar Sep 03 '23 23:09 jrittvo

Thank you for the heads-up. I rolled my diffusers project back to the older version and the ControlNet works great with the Neural Engine.

TimYao18 avatar Sep 03 '23 23:09 TimYao18

Yes, something (or several things) that changed in the past month or so in ml-stable-diffusion, coremltools, and/or diffusers is causing model mismatches all over the place. With all the changes, I have been unable to isolate the source of the issues. Different pipelines throw different errors at different points in different packages. It is nearly impossible to tell now whether an error comes from the conversion pipeline or the inference pipeline.

So I gave up trying to make sense of things. I am just sticking with the older packages and conda environments that work consistently for 1.5-type models. I will likely stay there until Sonoma is released and stable, and the SDXL changes are debugged.

jrittvo avatar Sep 03 '23 23:09 jrittvo