bfloat16 error
Hi, I'm testing the local install & interface Dr. Furkan Gözükara made for SUPIR, and it's working really well on a 4090, but I get the following error when I try to use it on an RTX 8000:
RuntimeError: Current CUDA Device does not support bfloat16. Please switch dtype to float16.

Traceback (most recent call last):
  File "E:\AI\Supir\SUPIR\venv\lib\site-packages\gradio\queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
  File "E:\AI\Supir\SUPIR\venv\lib\site-packages\gradio\route_utils.py", line 233, in call_process_api
    output = await app.get_blocks().process_api(
  File "E:\AI\Supir\SUPIR\venv\lib\site-packages\gradio\blocks.py", line 1608, in process_api
    result = await self.call_function(
  File "E:\AI\Supir\SUPIR\venv\lib\site-packages\gradio\blocks.py", line 1176, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "E:\AI\Supir\SUPIR\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "E:\AI\Supir\SUPIR\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "E:\AI\Supir\SUPIR\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "E:\AI\Supir\SUPIR\venv\lib\site-packages\gradio\utils.py", line 689, in wrapper
    response = f(*args, **kwargs)
  File "E:\AI\Supir\SUPIR\gradio_demo.py", line 69, in stage1_process
    LQ = model.batchify_denoise(LQ, is_stage1=True)
  File "E:\AI\Supir\SUPIR\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\AI\Supir\SUPIR\SUPIR\models\SUPIR_model.py", line 76, in batchify_denoise
    x = self.encode_first_stage_with_denoise(x, use_sample=False, is_stage1=is_stage1)
  File "E:\AI\Supir\SUPIR\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\AI\Supir\SUPIR\SUPIR\models\SUPIR_model.py", line 50, in encode_first_stage_with_denoise
    with torch.autocast("cuda", dtype=self.ae_dtype):
  File "E:\AI\Supir\SUPIR\venv\lib\site-packages\torch\amp\autocast_mode.py", line 306, in __init__
    raise RuntimeError(
RuntimeError: Current CUDA Device does not support bfloat16. Please switch dtype to float16.
In the interface I have the diffusion type set to fp16, to no avail.
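For reference, here's a quick way to confirm whether the GPU supports bfloat16 at all, using PyTorch's built-in query (a minimal check, nothing SUPIR-specific):

    import torch

    # Turing cards like the RTX 8000 (compute capability 7.5) have no
    # bfloat16 support; Ampere (8.0) and newer report True here.
    print(torch.cuda.get_device_name(0))
    print(torch.cuda.get_device_capability(0))
    print("bf16 supported:", torch.cuda.is_bf16_supported())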
Absolutely amazing upscaling model btw, it's the best I've ever tested, by far!
Thanks for your help, FG
Yes, I think there are still some dtype mismatches. I hope the authors can fix this.
I have this error too
Have this problem as well
@JasonGUTU @Fanghua-Yu I hope you can fix this issue. It is also preventing SUPIR from running on Kaggle.
Facing the same issue, any advice?
I had the same error on an RTX Titan.
I followed these setup steps to run it locally with less VRAM: https://www.reddit.com/r/StableDiffusion/comments/1b37h5z/supir_super_resolution_tutorial_to_run_it_locally/
For it to work and get rid of this error, I had to change:

ae_dtype: bf16

to

ae_dtype: fp32

in SUPIR_v0.yaml, in the first few lines.
Once the Gradio interface was loaded, I also had to change the "Auto-Encoder Data Type" from bf16 to fp32. After that, everything worked perfectly. I'm not using any of the LLaVA stuff, so I can't speak to that, but stages 1 and 2 of the upscaling work.
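If you'd rather not edit the YAML, the same change can presumably be made at runtime, since the traceback shows the autocast context reads its dtype from model.ae_dtype. A sketch, assuming model is the SUPIR model instance built in gradio_demo.py (untested):

    import torch

    # Override the auto-encoder dtype after the model is created, mirroring
    # the ae_dtype: fp32 edit in SUPIR_v0.yaml. fp32 avoids the bf16 check
    # at the cost of some extra VRAM in the encode/decode passes.
    model.ae_dtype = torch.float32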
This solved it for me, thanks!