
8007000E Not enough memory resources are available to complete this operation.

Open muhademan opened this issue 2 years ago • 7 comments

When I run the dml_onnx.py file from `(amd_venv) C:\amd-stable-diffusion\difusers-dml\examples\inference>`, I get an error like this:

```
(amd_venv) C:\amd-stable-diffusion\difusers-dml\examples\inference>dml_onnx.py
Fetching 19 files: 100%|███████████████████████████████████████████| 19/19 [00:00<00:00, 1966.19it/s]
2022-10-10 10:05:28.0893026 [E:onnxruntime:, inference_session.cc:1484 onnxruntime::InferenceSession::Initialize::<lambda_70debc81dc7538bfc077b449cf61fe32>::operator()] Exception during initialization: D:\a_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\BucketizedBufferAllocator.cpp(122)\onnxruntime_pybind11_state.pyd!00007FFA82D145C8: (caller: 00007FFA8336D326) Exception(1) tid(4790) 8007000E Not enough memory resources are available to complete this operation.

Traceback (most recent call last):
  File "C:\amd-stable-diffusion\difusers-dml\examples\inference\dml_onnx.py", line 220, in <module>
    image = pipe(prompt, height=512, width=768, num_inference_steps=50, guidance_scale=7.5, eta=0.0, execution_provider="DmlExecutionProvider")["sample"][0]
  File "C:\amd-stable-diffusion\amd_venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\amd-stable-diffusion\difusers-dml\examples\inference\dml_onnx.py", line 73, in __call__
    unet_sess = ort.InferenceSession("onnx/unet.onnx", so, providers=[ep])
  File "C:\amd-stable-diffusion\amd_venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 347, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\amd-stable-diffusion\amd_venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 395, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: D:\a_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\BucketizedBufferAllocator.cpp(122)
onnxruntime_pybind11_state.pyd!00007FFA82D145C8: (caller: 00007FFA8336D326) Exception(1) tid(4790) 8007000E Not enough memory resources are available to complete this operation.
```

How do I fix it?

muhademan avatar Oct 10 '22 03:10 muhademan

@muhademan,

Could you please copy-paste a reproducible code snippet here? Sadly, I currently have no idea what code you ran to produce this error, so I can't really help :-/

patrickvonplaten avatar Oct 10 '22 13:10 patrickvonplaten

The same thing happens to me often. I have an RX560 4G and 16G of DDR3 RAM. I can use the ONNX pipeline, but every so often I get that error when initializing the script, which I set up by following this guide: https://www.travelneil.com/stable-diffusion-windows-amd.html This is the code:

```python
from diffusers import StableDiffusionOnnxPipeline

pipe = StableDiffusionOnnxPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")

prompt = "A happy celebrating robot on a mountaintop, happy, landscape, dramatic lighting, art by artgerm greg rutkowski alphonse mucha, 4k uhd"

image = pipe(prompt).images[0]
image.save("output.png")
```

It might be worth mentioning that if I place the last two lines inside a while loop and the first image successfully starts to generate, then no matter how many times it loops I will not get that error. It only happens sometimes when first running the script, so my guess is that it occurs while loading the pipe with `from_pretrained`. It's just weird to get a 'not enough memory' error that is solved by re-running the script, without even closing any programs.
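Since the failure only happens at initialization and a plain re-run succeeds, one possible workaround is to retry the load a few times before giving up. This is a minimal sketch of my own; the `load_with_retry` helper, the attempt count, and the delay are assumptions, not part of diffusers or ONNX Runtime:

```python
import time

def load_with_retry(load_fn, attempts=3, delay_s=5.0):
    """Call load_fn(), retrying on failure, since the DML
    'not enough memory' error appears to be transient at init time."""
    last_exc = None
    for attempt in range(1, attempts + 1):
        try:
            return load_fn()
        except Exception as exc:  # ONNX Runtime raises RuntimeException here
            last_exc = exc
            print(f"init attempt {attempt}/{attempts} failed: {exc}")
            time.sleep(delay_s)
    raise last_exc

# Usage, assuming the pipeline setup from the snippet above:
# pipe = load_with_retry(lambda: StableDiffusionOnnxPipeline.from_pretrained(
#     "./stable_diffusion_onnx", provider="DmlExecutionProvider"))
```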

GreenLandisaLie avatar Oct 10 '22 22:10 GreenLandisaLie

cc @anton-l here in case you have a hunch of what might be going on

patrickvonplaten avatar Oct 11 '22 18:10 patrickvonplaten

Windows+AMD GPUs is a very unfamiliar territory for me, but maybe @pingzing or @harishanand95 could check it out :)

anton-l avatar Oct 12 '22 10:10 anton-l

Hi, I'm @harishanand95's manager. There are currently critical limitations in ONNX Runtime that make this inference path very sub-optimal. We're working hard to find much faster and leaner alternative solutions, but it's complicated and it takes time and effort. Thank you for your patience, and I'm sorry you're running into these issues.

claforte avatar Oct 12 '22 15:10 claforte

@muhademan how much VRAM and system RAM do you have? Generating a 512x768 image takes more than 8 GB of VRAM and 16 GB of system RAM using ONNX and DmlExecutionProvider.

@GreenLandisaLie when you encounter the reported issue and open Task Manager, what is the GPU VRAM and system RAM usage? I believe a 4 GB card with 16 GB of system RAM would be near the bare minimum, and it would at times run into an out-of-memory error when initializing the pipe, depending on what else is running on your Windows system.

@anton-l I've heard that the diffusers process can be broken into separate fragments, loading only the parts of the model that are needed at each step, which reduces the memory requirements. Example: https://github.com/neonsecret/stable-diffusion This might help with the memory issues that users report.
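The fragmented-loading idea can be sketched generically. This is only an illustration of the technique, not diffusers code: the `loader` callable and the stage functions are hypothetical stand-ins for loading one sub-model (e.g. one ONNX session) from disk and running it:

```python
import gc

class LazyComponent:
    """Load a model component on first use and free it on demand."""
    def __init__(self, loader):
        self._loader = loader  # hypothetical hook that loads one sub-model
        self._model = None

    def get(self):
        if self._model is None:
            self._model = self._loader()
        return self._model

    def release(self):
        self._model = None
        gc.collect()  # encourage the allocator to return memory

def run_stage(component, run_fn):
    """Run one pipeline stage, holding its component in memory
    only for the duration of that stage."""
    try:
        return run_fn(component.get())
    finally:
        component.release()
```

With the text encoder, UNet, and VAE wrapped like this, only one component occupies memory at a time, at the cost of reloading each one per stage.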

averad avatar Nov 02 '22 23:11 averad

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar Nov 30 '22 15:11 github-actions[bot]