stable-diffusion-webui
[Bug]: ANEProgramProcessRequestDirect() Failed on macOS Sonoma
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
I got this error when I clicked Generate:
/AppleInternal/Library/BuildRoots/1a7a4148-f669-11ed-9d56-f6357a1003e8/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Runtimes/MPSRuntime/Operations/GPUANERegionOps.mm:332: failed assertion `ANE Evaluation Error = Error Domain=com.apple.appleneuralengine Code=3 "processRequest:model:qos:qIndex:modelStringID:options:error:: ANEProgramProcessRequestDirect() Failed with status=0xf : statusType=0x11: Program Inference timeout: timed out" UserInfo={NSLocalizedDescription=processRequest:model:qos:qIndex:modelStringID:options:error:: ANEProgramProcessRequestDirect() Failed with status=0xf : statusType=0x11: Program Inference timeout: timed out}'
Steps to reproduce the problem
After I installed macOS Sonoma, I ran Stable Diffusion Web UI in Terminal (it launched fine). When I tried to generate something, I got the message above.
What should have happened?
I have no idea; I'm not really good at this, so I'm trying to get some help.
Commit where the problem happens
when I try to generate something
What Python version are you running on ?
Python 3.10.x
What platforms do you use to access the UI ?
MacOS
What device are you running WebUI on?
Other GPUs
What browsers do you use to access the UI ?
Apple Safari
Command Line Arguments
cd ~/stable-diffusion-webui
~/stable-diffusion-webui/webui.sh
List of extensions
controlnet, openpose-editor, posex, sd-webui-additional-networks, sd-webui-depth-lib, sd-webui-photopea-embed
Console logs
(base) rei@MR-MacBook-Air ~ % cd ~/stable-diffusion-webui
~/stable-diffusion-webui/webui.sh
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################
################################################################
Running on rei user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
Python 3.10.9 (main, Jan 11 2023, 09:18:18) [Clang 14.0.6 ]
Version: v1.3.2
Commit hash: baf6946e06249c5af9851c60171692c44ef633e0
Installing requirements
Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --use-cpu interrogate
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading booru2prompt settings
[AddNet] Updating model hashes...
100%|███████████████████████████████████████████| 3/3 [00:00<00:00, 2889.97it/s]
[AddNet] Updating model hashes...
100%|██████████████████████████████████████████| 3/3 [00:00<00:00, 10894.30it/s]
ControlNet v1.1.195
ControlNet v1.1.195
Loading weights [4199bcdd14] from /Users/rei/stable-diffusion-webui/models/Stable-diffusion/revAnimated_v122.safetensors
Create LRU cache (max_size=16) for preprocessor results.
Create LRU cache (max_size=16) for preprocessor results.
Creating model from config: /Users/rei/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Create LRU cache (max_size=16) for preprocessor results.
Startup time: 6.5s (import torch: 3.6s, import gradio: 0.3s, import ldm: 0.3s, other imports: 0.7s, load scripts: 0.8s, create ui: 0.5s, gradio launch: 0.1s).
Loading VAE weights specified in settings: /Users/rei/stable-diffusion-webui/models/VAE/vae-ft-mse-840000-ema-pruned.ckpt
Applying optimization: InvokeAI... done.
Textual inversion embeddings loaded(14): bad-artist, bad-artist-anime, bad-hands-5, bad-image-9600, bad-image-v2-11000, bad-image-v2-27000, bad-image-v2-39000, bad_prompt, bad_prompt_version2, EasyNegative, eonn, neg_grapefruit, ng_deepnegative_v1_75t, slpashter_2
Model loaded in 9.1s (load weights from disk: 0.4s, create model: 0.8s, apply weights to model: 5.9s, apply half(): 1.3s, load VAE: 0.2s, move model to device: 0.5s).
Saving backup of webui/extension state to /Users/rei/stable-diffusion-webui/config_states/2023_06_09-17_49_24_Backup (pre-update).json.
Restarting UI...
Closing server running on port: 7860
Loading booru2prompt settings
[AddNet] Updating model hashes...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 5893.64it/s]
ControlNet preprocessor location: /Users/rei/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads
2023-06-09 17:49:27,400 - ControlNet - INFO - ControlNet v1.1.222
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 1.0s (load scripts: 0.4s, create ui: 0.2s, gradio launch: 0.4s).
0%| | 0/20 [00:00<?, ?it/s]/AppleInternal/Library/BuildRoots/1a7a4148-f669-11ed-9d56-f6357a1003e8/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Runtimes/MPSRuntime/Operations/GPUANERegionOps.mm:332: failed assertion `ANE Evaluation Error = Error Domain=com.apple.appleneuralengine Code=3 "processRequest:model:qos:qIndex:modelStringID:options:error:: ANEProgramProcessRequestDirect() Failed with status=0xf : statusType=0x11: Program Inference timeout: timed out" UserInfo={NSLocalizedDescription=processRequest:model:qos:qIndex:modelStringID:options:error:: ANEProgramProcessRequestDirect() Failed with status=0xf : statusType=0x11: Program Inference timeout: timed out}'
zsh: abort ~/stable-diffusion-webui/webui.sh
(base) rei@MR-MacBook-Air stable-diffusion-webui % /Users/rei/miniconda3/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
Additional information
No response
I'm sorry to say, but I think you're on your own using a beta/preview version of macOS – let us know if you do manage to figure out what's happening here, but I'm afraid this will take at least a new, compatible version of PyTorch to fix.
Ohh I see 😕 I got baited into installing the new OS because of the Game Porting Toolkit API they released, and now I'm stuck 😢
Seems like you got the same error I got after upgrading to macOS 14: https://github.com/apple/ml-stable-diffusion/issues/192. Try running with --no-half; it seems to work, but it's slow.
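For anyone else hitting this, the workaround is just adding the flag at launch; a minimal sketch, assuming the install lives at ~/stable-diffusion-webui as in the logs above:
# Workaround: run in full precision (fp32); slower, but avoids the ANE/Metal assertion
cd ~/stable-diffusion-webui
./webui.sh --no-half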
omg, that works like a charm, thanks! Yeah, it's much slower now than before; I wish someone would patch and fix the problem.
(This also appears in DiffusionBee.)
UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
Same problem here. Use the steps below to stop using the Metal API and use the CPU instead!
The --no-half suggestion above only took me from 2s/it to 100s/it... With the CPU, at least, it's between 10~12s/it.
- Open webui-user.sh in a text editor (e.g. Xcode).
- Change #export COMMANDLINE_ARGS="" to export COMMANDLINE_ARGS="--skip-torch-cuda-test --no-half --use-cpu all", as shown below.
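For reference, a sketch of what the edited line in webui-user.sh looks like (the surrounding lines may differ between versions):
# webui-user.sh (excerpt) – the change described above
# Before: the line ships commented out
#export COMMANDLINE_ARGS=""
# After: skip the CUDA check, disable half precision, and run everything on the CPU
export COMMANDLINE_ARGS="--skip-torch-cuda-test --no-half --use-cpu all"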
That also works. I did try my old settings, "--skip-torch-cuda-test --upcast-sampling", and got the error a lot of times, maybe 3-8 times, before it successfully worked just like in the old days, but relaunching everything 3-8 times is kind of painful. Lol
Same issue here, also using Sonoma; I upgraded to try the Game Porting Toolkit too. Dammit. :( Can't render anything now.
I upgraded torch to 2.1.0 (nightly) and that somehow fixed the issue.
I did pip3 install numpy --pre torch torchvision torchaudio --force-reinstall --index-url https://download.pytorch.org/whl/nightly/cpu
as it's advised here: https://pytorch.org/get-started/pytorch-2.0/
it installed pytorch 2.1.0, but I still got the same error :(
Try renaming your venv to something else and let webui detect that you don't have a venv; it will reinstall everything. Give it a shot and try generating something.
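In practice that looks something like this (a sketch; paths assume the default ~/stable-diffusion-webui install location):
# Move the old venv aside so webui.sh rebuilds it from scratch on the next launch
cd ~/stable-diffusion-webui
mv venv venv.bak          # keep a backup; delete it once the rebuilt venv works
./webui.sh                # first launch recreates the venv and reinstalls dependencies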
Hi @pandora523, I have done that and a new venv was rebuilt, but the same error occurs when I try to generate an image: Python quits with a macOS error window popping up, and I get this error in the terminal:
2023-07-19 12:32:35.494 Python[10592:103178] Error = Error Domain=com.apple.appleneuralengine Code=6 "createProgramInstanceForModel:modelToken:qos:isPreCompiled:enablePowerSaving:skipPreparePhase:statsMask:memoryPoolID:enableLateLatch:modelIdentityStr:owningPid:cacheUrlIdentifier:aotCacheUrlIdentifier:error:: Program load failure (0xF0004)" UserInfo={NSLocalizedDescription=createProgramInstanceForModel:modelToken:qos:isPreCompiled:enablePowerSaving:skipPreparePhase:statsMask:memoryPoolID:enableLateLatch:modelIdentityStr:owningPid:cacheUrlIdentifier:aotCacheUrlIdentifier:error:: Program load failure (0xF0004)}
/AppleInternal/Library/BuildRoots/d8ee83b8-11b4-11ee-a66d-46d450270006/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Runtimes/MPSRuntime/Operations/GPURegionOps.mm:572: failed assertion `ANE load failed!'
/opt/homebrew/Cellar/[email protected]/3.10.12_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
./webui.sh: line 241: 10592 Abort trap: 6 "${python_cmd}" "${LAUNCH_SCRIPT}" "$@"
I have attached this macOS bug report in case it can help someone resolve this bug: mac os sonoma bug report.txt
What command args are you running?
I just read the bug report; it's the same bug that I got when I installed Sonoma + Command Line Tools. Reinstalling my venv fixed my problem.
My temporary Fix #1 is "--no-half"; it will generate images, but slowly, really slowly.
My temporary Fix #2 sounds ridiculous, but it fixed my problem and is the only other thing that works for me:
Step 1. Launch it with "--no-half" and generate some images. It will fix the problem, but it will give you a headache because of the speed 😒
Step 2. Relaunch it without "--no-half" and try generating something again. If it works, proceed to Step 3; if not, do Step 1 again 🙃
Step 3. It's OK to quit Terminal, but shutting down is not allowed; just put your Mac to sleep when you're done generating things for the day 😴 or else you're going to do your relaunch marathon again 😖 (Steps 1 and 2)
My venv folder looks like this
Hi, I am on Sonoma. I have rebuilt the venv folder by renaming the old one, and before that I installed the latest PyTorch. I'm launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate. Should I change something?
Only if I launch ./webui.sh with --no-half can I generate images. Thanks
@vicento I'm glad that --no-half works, but you can get better performance than that if the graphical acceleration is working and not crashing 🥲.
Update ※ I just recreated the problem and messed up my webui again for a day or half, and --reinstall-torch with a little twist fixed it (see the sketch below).
In webui-user.sh, at line 29 (the TORCH_COMMAND line), I pasted this: "pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", then launched with the --reinstall-torch arg and waited for it to download all the 2.1 files.
100%|█████████████████████████████████████████████████████████████████████████████████████| 34/34 [00:36<00:00, 1.09s/it]
Without --no-half it generates much faster now.
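As a rough sketch of that change, assuming the default webui-user.sh layout (the exact line number may differ in your copy):
# webui-user.sh (excerpt) – override the install command so --reinstall-torch pulls the nightly build
# The stock file ships this variable commented out; uncomment it and point it at the nightly index:
export TORCH_COMMAND="pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu"

# Then relaunch once with --reinstall-torch so the override is actually applied
./webui.sh --reinstall-torch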
It works now on macOS 15 beta 4.
I have already updated my torch version to 2.1.0, but my SD webui page still tells me it's 2.0.1. Why is that?
Hey, @Smilecat1202 – did you update PyTorch in the Python virtual environment (venv) A1111 creates, or outside of it? If you updated PyTorch outside of your virtual environment, A1111 won't see that update.
Try deleting the venv folder in the stable-diffusion-webui folder, in addition to modifying webui-user.sh as recommended by @pandora523.
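One way to check which PyTorch version A1111 actually sees, assuming the default venv location (a sketch, not an official diagnostic):
# Activate A1111's own virtual environment so python/pip point at the right install
cd ~/stable-diffusion-webui
source venv/bin/activate
python -c "import torch; print(torch.__version__)"   # this is the version the web UI reports
# If it still says 2.0.1 here, upgrade inside this venv (or delete venv and relaunch)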
I tried as mentioned above; however, the torch version is still unchanged. How do I update PyTorch in Stable Diffusion?
I really want to solve this issue. Please help me.
Having the same issue. Installed automatic1111 on Sonoma today and it worked for a few hours until it didn't. I'm now only able to run it with the ./webui.sh --no-half command... torch is showing 2.0.1
I solved the torch version issue; however, there are some other issues. When I run Stable Diffusion without '--no-half', hires.fix and Detect Detailer don't work anymore.
Superman, please solve this problem... Please refer to my error code.
Changing torch to 2.1.0 worked for me:
pip3 install numpy --pre torch==2.1.0
./webui.sh --reinstall-torch
./webui.sh --no-half --skip-torch-cuda-test
The TORCH_COMMAND fix above worked for me. For those not used to working with shell scripts, make sure to uncomment the line by removing the #.
Worked for me too. Thanks!
Still having the problem now.
MacBook with M2 chip, macOS 14.3 (23D56)