[Bug]: I cannot create multiple batches [Failed to fetch ID: 2 per 1 second]

Open Dr3aDL0cK opened this issue 1 year ago • 4 comments

Checklist

  • [ ] The issue exists after disabling all extensions
  • [ ] The issue exists on a clean installation of webui
  • [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • [ ] The issue exists in the current version of the webui
  • [X] The issue has not been reported before recently
  • [ ] The issue has been reported before but has not been fixed yet

What happened?

When I try to generate multiple images, I get the error "Failed to fetch ID: 2 per 1 second". I have tried re-registering and getting a new API; this does not fix the issue.

Steps to reproduce the problem

Create prompt, select more than 3 generations in batch, click generate, get error "Failed to fetch ID: 2 per 1 second".

What should have happened?

It should create more than 2 images.

What browsers do you use to access the UI?

No response

Sysinfo

cannot find

Console logs

cannot find

Additional information

No response

Dr3aDL0cK · Jan 08 '24 11:01

Sorry, I'm not sure if I'm doing this right, but hopefully my issue is clear enough.

Dr3aDL0cK · Jan 08 '24 11:01

No, your issue is not clear at all; I have no idea what you're talking about.

I have tried re-registering and getting a new API; this

What??? Registering??? Get API???


create prompt

OK, you input some prompt.

select more than 3 generations in batch

I assume you set either batch count or batch size to three in the UI.
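(For reference, a minimal sketch of what those two settings map to when driving webui programmatically. This assumes the built-in API at /sdapi/v1/txt2img on the default local port, which is only available when webui is launched with --api; the prompt below is just a placeholder.)

import requests

# Hypothetical local endpoint; assumes webui was started with the --api flag.
URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

payload = {
    "prompt": "a test prompt",  # placeholder prompt
    "steps": 20,
    "batch_size": 1,            # "Batch size": images generated in parallel per iteration
    "n_iter": 3,                # "Batch count": how many iterations (batches) to run
}

r = requests.post(URL, json=payload, timeout=600)
r.raise_for_status()
# On success the response typically carries batch_size * n_iter base64 images.
print(len(r.json()["images"]))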

click generate, get error Failed to fetch ID: 2 per 1 second

Care to say where you are getting this error: in the console, or in the terminal?


Settings > Sysinfo > Download system info. Is this that hard to follow? [image]

w-e-w · Jan 08 '24 13:01

I think he's saying that when he tries to use a batch count higher than 1, he gets CUDA errors. I'm getting them too: I can generate one image, but if the batch count is higher than 1 I get a CUDA error: an illegal memory access was encountered.

Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "D:\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "D:\stable-diffusion-webui\modules\processing.py", line 734, in process_images
    res = process_images_inner(p)
  File "D:\stable-diffusion-webui\modules\processing.py", line 877, in process_images_inner
    x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
  File "D:\stable-diffusion-webui\modules\processing.py", line 600, in decode_latent_batch
    devices.test_for_nans(sample, "vae")
  File "D:\stable-diffusion-webui\modules\devices.py", line 118, in test_for_nans
    if not torch.all(torch.isnan(x)).item():
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

---

Traceback (most recent call last):
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 77, in f
    devices.torch_gc()
  File "D:\stable-diffusion-webui\modules\devices.py", line 51, in torch_gc
    torch.cuda.empty_cache()
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 133, in empty_cache
    torch._C._cuda_emptyCache()
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
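(As the error text itself suggests, setting CUDA_LAUNCH_BLOCKING=1 makes CUDA report failures synchronously at the call that triggered them, which usually gives a more useful stack trace. A minimal sketch of that idea follows; in practice the variable is normally exported in the shell or in webui-user.bat before launching, since it must be set before any CUDA work happens. The tensor code below is only an illustration, not webui code.)

import os

# Must be set before the first CUDA call, so set it before importing torch.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch

# With blocking launches, any kernel error surfaces at the offending call
# instead of at a later, unrelated API call such as torch.cuda.empty_cache().
x = torch.randn(4, 4, device="cuda")
print(torch.isnan(x).any().item())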

silverhammer751 · Jan 09 '24 02:01

I think he's saying that when he tries to use a batch count higher than 1, he gets CUDA errors. I'm getting them too: I can generate one image, but if the batch count is higher than 1 I get a CUDA error: an illegal memory access was encountered.

Yup, but for me I can generate 2 images max.

Dr3aDL0cK · Jan 19 '24 00:01