
8007000E Not enough memory resources are available to complete this operation.

Open muhademan opened this issue 2 years ago • 3 comments

Describe the bug

When I run the dml_onnx.py file with (amd_venv) C:\amd-stable-diffusion\difusers-dml\examples\inference>dml_onnx.py

I get an error like this:

(amd_venv) C:\amd-stable-diffusion\difusers-dml\examples\inference>dml_onnx.py
Fetching 19 files: 100%|███████████████████████████████████████████| 19/19 [00:00<00:00, 1966.19it/s]
2022-10-10 10:05:28.0893026 [E:onnxruntime:, inference_session.cc:1484 onnxruntime::InferenceSession::Initialize::<lambda_70debc81dc7538bfc077b449cf61fe32>::operator()] Exception during initialization: D:\a_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\BucketizedBufferAllocator.cpp(122)\onnxruntime_pybind11_state.pyd!00007FFA82D145C8: (caller: 00007FFA8336D326) Exception(1) tid(4790) 8007000E Not enough memory resources are available to complete this operation.

Traceback (most recent call last):
  File "C:\amd-stable-diffusion\difusers-dml\examples\inference\dml_onnx.py", line 220, in <module>
    image = pipe(prompt, height=512, width=768, num_inference_steps=50, guidance_scale=7.5, eta=0.0, execution_provider="DmlExecutionProvider")["sample"][0]
  File "C:\amd-stable-diffusion\amd_venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\amd-stable-diffusion\difusers-dml\examples\inference\dml_onnx.py", line 73, in __call__
    unet_sess = ort.InferenceSession("onnx/unet.onnx", so, providers=[ep])
  File "C:\amd-stable-diffusion\amd_venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 347, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\amd-stable-diffusion\amd_venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 395, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: D:\a_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\BucketizedBufferAllocator.cpp(122)\onnxruntime_pybind11_state.pyd!00007FFA82D145C8: (caller: 00007FFA8336D326) Exception(1) tid(4790) 8007000E Not enough memory resources are available to complete this operation.
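
For reference, the failure happens at the step in dml_onnx.py that creates the ONNX Runtime session for the UNet on the DirectML provider. A minimal sketch of that step, using the names (so, ep, unet_sess) from the traceback and assuming onnxruntime-directml is installed and the exported onnx/unet.onnx file exists:

```python
import onnxruntime as ort

# Session options and execution provider as they appear in the traceback above.
so = ort.SessionOptions()
ep = "DmlExecutionProvider"

# This is the call that fails: while initializing the session, the DirectML provider
# allocates buffers for the exported UNet and raises 8007000E when it cannot get
# enough memory resources.
unet_sess = ort.InferenceSession("onnx/unet.onnx", so, providers=[ep])
```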

How can I fix it?

Reproduction

No response

Logs

No response

System Info

Python 3.7.0, Windows 10 Pro 21H2

muhademan avatar Oct 10 '22 03:10 muhademan

This works for me:

Set the Windows page file to 32 GB.

Then edit the file examples\inference\dml_onnx.py and change the line

image = pipe(prompt, height=512, width=512, num_inference_steps=50, guidance_scale=7.5, eta=0.0, execution_provider="DmlExecutionProvider")["sample"][0]

to

image = pipe(prompt, height=256, width=256, num_inference_steps=45, guidance_scale=8, eta=0.0, execution_provider="DmlExecutionProvider")["sample"][0]

Also edit the file examples\inference\save_onnx.py and change the line

convert_to_onnx(pipe.unet, pipe.vae.post_quant_conv, pipe.vae.decoder, text_encoder, height=512, width=512)

to

convert_to_onnx(pipe.unet, pipe.vae.post_quant_conv, pipe.vae.decoder, text_encoder, height=256, width=256)

(both edited lines are shown together in the sketch below).

Then run python save_onnx.py to rebuild the exported data.

And try again: python dml_onnx.py

Don't forget to stay logged in with huggingface-cli.
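
Putting the steps together, a rough sketch of what the two edited lines look like after the changes (pipe, prompt, text_encoder and convert_to_onnx are the objects already defined in those example scripts; since save_onnx.py is re-run after changing the resolution, the generation call in dml_onnx.py should request the same height/width that was used for the export):

```python
# examples/inference/save_onnx.py — re-export the ONNX models at the reduced resolution,
# so the graphs on disk match what dml_onnx.py will ask for.
convert_to_onnx(pipe.unet, pipe.vae.post_quant_conv, pipe.vae.decoder, text_encoder,
                height=256, width=256)

# examples/inference/dml_onnx.py — generate with the same height/width the models were
# exported with, plus the reduced step count and guidance scale suggested above.
image = pipe(prompt, height=256, width=256, num_inference_steps=45, guidance_scale=8,
             eta=0.0, execution_provider="DmlExecutionProvider")["sample"][0]
```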

In my case I run it on an old A4-5050 quad-core with an RX 560 4 GB VRAM (OC), but it is not fully compatible. (screenshot attached: Dml_0001)

Patometro06 avatar Oct 13 '22 05:10 Patometro06

@Patometro06 is there a way to do it without sacrificing image dimensions?

brentleywilson avatar Oct 22 '22 04:10 brentleywilson

@Patometro06 is there a way to do it without sacrificing image dimensions?

I don't know; maybe with another version, but it would be better to use more than one GPU so you have more VRAM.

Patometro06 avatar Oct 22 '22 04:10 Patometro06